1,906,600
47,740,693
Why can't I decode this UTF-8 page?
<p> Howdy folks,</p> <p>I'm new to getting data from the web using python. I'd like to have the source code of this page in a string: <a href="https://projects.fivethirtyeight.com/2018-nba-predictions/" rel="nofollow noreferrer">https://projects.fivethirtyeight.com/2018-nba-predictions/</a></p> <p>The following code has worked for other pages (such as <a href="https://www.basketball-reference.com/boxscores/201712090ATL.html" rel="nofollow noreferrer">https://www.basketball-reference.com/boxscores/201712090ATL.html</a>): </p> <pre><code>import urllib.request file = urllib.request.urlopen(webAddress) data = file.read() file.close() dataString = data.decode(encoding='UTF-8') </code></pre> <p> And I'd expect dataString to be a string of HTML (see below for my expectations in this specific case)</p> <pre><code>&lt;!DOCTYPE html&gt;&lt;html lang="en"&gt;&lt;head&gt;&lt;meta property="article:modified_time" etc etc </code></pre> <p>Instead, for the 538 website, I get this error:</p> <pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte </code></pre> <p>My research has suggested that the problem is that my file isn't actually encoded using UTF-8, but both the page's charset and beautiful-soup's UnicodeDammit() claims it's UTF-8 (the second might be because of the first). chardet.detect() doesn't suggest any encoding. I've tried substituting the following for 'UTF-8' in the encoding parameter of decode() to no avail: </p> <p>ISO-8859-1</p> <p>latin-1</p> <p>Windows-1252</p> <p>Perhaps worth mentioning is that the byte array data doesn't look like I'd expect it to. Here's data[:10] from a working URL:</p> <pre><code>b'\n&lt;!DOCTYPE' </code></pre> <p>Here's data[:10] from the 538 site:</p> <pre><code>b'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03' </code></pre> <p>What's up?</p>
<p>The server provided you with gzip-compressed data; this is not very common, as <code>urllib</code> by default doesn't set any <code>accept-encoding</code> value, so servers are generally conservative and don't compress the data.</p> <p>Still, the <code>content-encoding</code> field of the response <em>is</em> set, so you can tell that your page is indeed gzip-compressed, and you can decompress it with Python's <code>gzip</code> module before further processing.</p> <pre><code>import urllib.request import gzip file = urllib.request.urlopen(webAddress) data = file.read() if file.headers['content-encoding'].lower() == 'gzip': data = gzip.decompress(data) file.close() dataString = data.decode(encoding='UTF-8') </code></pre> <p>On the other hand, if you can use the <a href="http://docs.python-requests.org/en/master/" rel="noreferrer"><code>requests</code></a> module, it will handle all this mess by itself, including compression (did I mention that you may also get <code>deflate</code> besides <code>gzip</code>, which <a href="https://stackoverflow.com/questions/388595/why-use-deflate-instead-of-gzip-for-text-files-served-by-apache">is the same but with different headers</a>?) and (at least partially) encoding.</p> <pre><code>import requests webAddress = "https://projects.fivethirtyeight.com/2018-nba-predictions/" r = requests.get(webAddress) print(repr(r.text)) </code></pre> <p>This will perform your request and correctly print out the already-decoded Unicode string.</p>
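The tell-tale sign is right there in the question's byte dump: `\x1f\x8b` is the gzip magic number. A quick local round-trip (a sketch with made-up HTML, no network needed) shows the sniff-and-decompress idea:

```python
import gzip

# Simulate a response body that the server compressed.
html = '<!DOCTYPE html><html lang="en"></html>'
payload = gzip.compress(html.encode('utf-8'))

# gzip streams always start with the magic bytes 0x1f 0x8b,
# exactly what showed up in data[:10] for the 538 site.
assert payload[:2] == b'\x1f\x8b'

data_string = gzip.decompress(payload).decode('utf-8')
print(data_string)  # <!DOCTYPE html><html lang="en"></html>
```

Checking the first two bytes is a pragmatic fallback when the `content-encoding` header is missing or wrong.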
python|encoding|utf-8|character-encoding
6
1,906,601
47,819,937
Comparing sys.argv to a list
<p>So what I am trying to do is compare a terminal argument against a list in my code.</p> <p>so for example, I will put a command in like "python ./mycode.py -name.</p> <p>So I want to compare the argument -name to a list I have in my code. </p> <p>At the moment it looks something like this:</p> <pre><code>reqArgs = ["-name", "-age", "-date"] for arg in sys.argv: for req in arg: if req in reqArgs: print "Sucess" else: print "not working" </code></pre> <p>I know I am not that far off. What am I missing here?</p>
<p>You only need a single loop and reqArgs should be strings</p> <pre><code>reqArgs = ['-name', '-age', '-date'] for arg in reqArgs: if arg in sys.argv: print "Success" else: print "not working" </code></pre> <p>A better solution:</p> <pre><code>req_args = ['-name', '-age', '-date'] found_all_req_args = all(arg in sys.argv for arg in req_args) </code></pre> <p>Even better:</p> <p><a href="https://docs.python.org/2/howto/argparse.html" rel="nofollow noreferrer">https://docs.python.org/2/howto/argparse.html</a></p>
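The `all()` version can be checked with a hand-built stand-in for `sys.argv` (hypothetical values, just for illustration):

```python
# Stand-in for sys.argv; argv[0] is always the script name.
argv = ['./mycode.py', '-name', '-date']

req_args = ['-name', '-age', '-date']

# True only when every required flag is present on the command line.
print(all(arg in argv for arg in req_args))          # False

# Listing which flags are missing is often more useful than a boolean.
print([arg for arg in req_args if arg not in argv])  # ['-age']
```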
python|linux
1
1,906,602
66,210,726
Testing Two Classes With Each Other in Pytest
<p>I just started learning about writing unit tests with <code>pytest</code> but I cannot find anything about testing to check if 2 classes work properly together where one class takes the output of the second class, such as</p> <pre><code>assert bar.bar(foo.foo('hello')) == 'hellofoobar' </code></pre> <p>Should pytest be used for running such tests that involve 2 classes? If so, should this test be written in <code>test_foo.py</code> or <code>test_bar.py</code>?</p> <p>Will such a test be known as integration tests?</p> <p><strong>Foo.py</strong></p> <pre><code>class Foo: def foo(self, x): return f'{x}foo' </code></pre> <p><strong>Bar.py</strong></p> <pre><code>class Bar: def bar(self, x): return f'{x}bar' </code></pre> <p><strong>conftest.py</strong></p> <pre><code>import pytest from .Foo import Foo from .Bar import Bar @pytest.fixture def foo(): return Foo() @pytest.fixture def bar(): return Bar() </code></pre> <p><strong>test_foo.py</strong></p> <pre><code>def test_foo(foo): assert foo.foo('hello') == 'hellofoo' </code></pre> <p><strong>test_bar.py</strong></p> <pre><code>def test_bar(bar): assert bar.bar('world') == 'worldbar' </code></pre> <p><strong>__init__.py</strong></p> <pre><code> </code></pre>
<p>I think this is perfectly fine. It expects that both <code>Foo</code> and <code>Bar</code> are working properly. In that case, I think you could call it an integration test.</p> <p><strong>test_bar.py</strong></p> <pre class="lang-py prettyprint-override"><code>def test_bar_with_foo(bar, foo): assert bar.bar(foo.foo('hello')) == 'hellofoobar' </code></pre> <p>The need for this type of test might depend on how complex each class is. In the stated case (which is super simple) I think it would be ok to get away with breaking up the tests; meaning if <code>test_bar.test_bar</code> passes and <code>test_foo.test_foo</code> passes then <code>test_bar.test_bar_with_foo</code> becomes redundant.</p> <p>If we break down the test and assume <code>Foo</code> returns the correct thing the test starts to look something like the below. At which point the question arises; does this test provide further value beyond <code>test_bar.test_bar</code>?</p> <p><strong>test_bar.py</strong></p> <pre class="lang-py prettyprint-override"><code>def test_bar_2(bar): assert bar.bar('hellofoo') == 'hellofoobar' </code></pre>
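Stripped of the fixture machinery, the three tests in play reduce to a few plain assertions, which makes it easy to see what the combined test adds (a self-contained sketch of the classes from the question):

```python
class Foo:
    def foo(self, x):
        return f'{x}foo'

class Bar:
    def bar(self, x):
        return f'{x}bar'

foo, bar = Foo(), Bar()

# Unit-level checks: one class at a time.
assert foo.foo('hello') == 'hellofoo'
assert bar.bar('world') == 'worldbar'

# Integration-level check: Bar consuming Foo's output.
assert bar.bar(foo.foo('hello')) == 'hellofoobar'
print('all checks passed')
```

With classes this simple the integration assertion is implied by the two unit assertions, which is exactly the redundancy argument made above.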
python|unit-testing|integration-testing|pytest|tdd
1
1,906,603
31,980,440
How to make a handy decorator in Python?
<p>assume this decorator code :</p> <h3>first code:</h3> <pre><code>def makeitalic(fn): def wrapped(): return &quot;&lt;i&gt;&quot; + fn() + &quot;&lt;/i&gt;&quot; return wrapped @makeitalic def hello(): return &quot;hello world&quot; print (hello()) &lt;i&gt;hello world&lt;/i&gt; </code></pre> <p>I want to make this output handy by this code:</p> <pre><code>def makeitalic(fn): def wrapped(): return &quot;&lt;i&gt;&quot; + fn() + &quot;&lt;/i&gt;&quot; return wrapped def hello(): return &quot;hello world&quot; hello() 'hello world' makeitalic(hello) &lt;function makeitalic.&lt;locals&gt;.wrapped at 0x02C25AE0&gt; makeitalic(hello()) &lt;function makeitalic.&lt;locals&gt;.wrapped at 0x02E902B8&gt; print(makeitalic(hello)) &lt;function makeitalic.&lt;locals&gt;.wrapped at 0x02C25AE0&gt; </code></pre> <p>but it just return obj. is there any way to reach the first code output by this method ?</p>
<p>Yes, there is: <code>makeitalic(hello)</code> returns the <code>wrapped</code> function without calling it, so call the result:</p> <pre><code>makeitalic(hello)() </code></pre>
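It can help to write out what `@makeitalic` actually does: it rebinds the name `hello` to the wrapped function once, so every later plain call goes through the wrapper (a sketch of the manual equivalent):

```python
def makeitalic(fn):
    def wrapped():
        return "<i>" + fn() + "</i>"
    return wrapped

def hello():
    return "hello world"

# makeitalic(hello) only *builds* the wrapper function...
italic_hello = makeitalic(hello)

# ...calling it runs fn() inside the tags.
print(italic_hello())  # <i>hello world</i>

# @makeitalic is shorthand for exactly this rebinding:
hello = makeitalic(hello)
print(hello())         # <i>hello world</i>
```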
python
3
1,906,604
32,061,658
How to view a saved matplotlib plot
<p>I have successfully saved my graphs using the <code>plt.savefig()</code> function in the <code>matplotlib</code> library. </p> <p>When I try to open my graph using vi, the file is there but there are a lot of strange characters. I guess I'm viewing the code and other info rather than the visualization of the graph. How do I see the graph in its pictoral form?</p>
<p>Vi is a text editor, and can't view images as images. The Windows Paint program should be able to view them, however, or on a Mac, Preview should work.</p>
python|matplotlib|graph
3
1,906,605
40,704,528
Using re.split() to work correctly with utf-8 text
<p>I'm trying to split a large Russian text into words, dropping the separators '\s.,?!'.</p> <p>Actually, I don't understand how to use re.split() and re.findall() correctly, because after I use them, all the words come out as escaped UTF-8 characters.</p> <p>Thank you.</p> <pre><code> file_read = None file_name = 'untitled.txt' with codecs.open(file_name, 'r+', encoding='utf-8') as fin: file_read = fin.read() words = re.split('u[\s.,?!]', file_read) words = re.findall('\w+', file_read) </code></pre>
<p>This will retrieve all Russian words from the untitled.txt file:</p> <pre><code>import re import codecs file_read = None file_name = 'untitled.txt' with codecs.open(file_name, 'r+', encoding='utf-8') as fin: file_read = fin.read() words = re.findall(ur'[а-яёА-ЯЁ]+', file_read) </code></pre>
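For reference, in Python 3 (where `str` is Unicode throughout) the `u`/`ur` prefixes are unnecessary, and `\w` matches Cyrillic letters by default, so either pattern works (a sketch on an inline string instead of a file):

```python
import re

text = 'Привет, мир! Это тестовый текст.'

# Explicit Cyrillic character class, as in the answer above.
words = re.findall(r'[а-яёА-ЯЁ]+', text)
print(words)  # ['Привет', 'мир', 'Это', 'тестовый', 'текст']

# \w+ is Unicode-aware in Python 3 and finds the same words here.
assert re.findall(r'\w+', text) == words
```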
python-2.7
0
1,906,606
9,860,988
Can't remove .text. using a regular expression
<p>Say I have a string:</p> <pre><code>"the quick brown fox jumped over the moon.this text needs to be removed." </code></pre> <p>I am trying to remove <code>".this text needs to be removed."</code> using Python. </p> <p>I have tried multiple ways to do this, mainly consisting of <code>\w+(\..*\.)</code>, but it does not work. I need a general way to remove this last part, since the text is different per file, so something like <code>re.sub('\.this text needs to be removed\.', '', string)</code> will not work for me.</p>
<p>I agree with abhijit, why not just use the string functions? For example:</p> <pre><code>s1="the quick brown fox jumped over the moon.this text needs to be removed." s2=s1.replace(".this text needs to be removed.","") </code></pre> <p>While regular expressions are very powerful, the methods on string objects are often optimized for performance.</p>
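If a regex is still wanted because the trailing text varies per file, anchoring at the end of the string keeps it general: match the last '.'-delimited chunk and put the sentence-ending period back (a sketch, assuming the unwanted tail contains no internal periods):

```python
import re

s = "the quick brown fox jumped over the moon.this text needs to be removed."

# A '.', a run of non-period characters, the final '.', anchored at the end;
# the replacement restores the period that ends the kept sentence.
cleaned = re.sub(r'\.[^.]*\.$', '.', s)
print(cleaned)  # the quick brown fox jumped over the moon.
```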
python|regex
1
1,906,607
68,343,067
Can I make my class iterable only to use one of its iterable attributes?
<p>As I am making a genetic algorithm, I have a <code>Individu</code> class:</p> <pre><code>class Individu: genotype: List[int] code: List[int] phenotype: List[List[Client]] fitness: float </code></pre> <p>Most of the time only the attribute <code>genotype</code> is used in the program (for crossing over parents or computing the fitness).</p> <p>Now, instead of always writing <code>p1.genotype[]</code> when I need to use it, could I make <code>Individu</code> iterable so that I can write <code>p1[]</code> instead or is it a bad idea?</p> <p>I feel like it would make my program cleaner for me but at the same time could be confusing for others or 'break' some kind of programming best practice.</p>
<p>You seem to be talking about indexing, not iteration per se. Indexing is handled by the <code>__getitem__</code> method while iteration is handled by <code>__iter__</code>/<code>iter</code>. You could just define these methods for your class to forward the work to the <code>genotype</code> attribute:</p> <pre><code>def __getitem__(self, key): return self.genotype[key] def __iter__(self): return iter(self.genotype) </code></pre> <p>Personally I wouldn't do it because this indirection is extra work, probably isn't everything you want to forward to the <code>genotype</code> attribute, and obscures where the iterable really is. But if it fits your use case, then go for it.</p>
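Put together, the forwarding looks like this (a trimmed sketch of `Individu` keeping only the `genotype` attribute; `__len__` is added here so that `len(p1)` forwards too):

```python
from typing import List

class Individu:
    def __init__(self, genotype: List[int]):
        self.genotype = genotype

    # p1[i] and p1[i:j] read from the genotype list.
    def __getitem__(self, key):
        return self.genotype[key]

    # "for gene in p1" iterates over the genotype list.
    def __iter__(self):
        return iter(self.genotype)

    def __len__(self):
        return len(self.genotype)

p1 = Individu([1, 0, 1, 1])
print(p1[0], p1[1:3], list(p1), len(p1))  # 1 [0, 1] [1, 0, 1, 1] 4
```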
python|iterable
3
1,906,608
68,323,793
Keras LSTM input ValueError: Shapes are incompatible
<p>Not sure about why I'm getting an error with my LSTM neural network. It seems to be related with the input shape.</p> <p>This is my neural network architecture:</p> <pre><code>from keras.models import Sequential from keras.layers import LSTM, Dense, Dropout model = Sequential() # Recurrent layer model.add(LSTM(64, return_sequences=False, dropout=0.1, recurrent_dropout=0.1)) # Fully connected layer model.add(Dense(64, activation='relu')) # Dropout for regularization model.add(Dropout(0.5)) # Output layer model.add(Dense(y_train.nunique(), activation='softmax')) # Compile the model model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) </code></pre> <p>This is how I train it:</p> <pre><code>history = model.fit(X_train_padded, y_train_padded, batch_size=2048, epochs=150, validation_data=(X_test_padded, y_test_padded)) </code></pre> <p>This is the shape of my input data:</p> <pre><code>print(X_train_padded.shape, X_test_padded.shape, y_train_padded.shape, y_test_padded.shape) (98, 20196, 30) (98, 4935, 30) (98, 20196, 1) (98, 4935, 1) </code></pre> <p>This is part of my X_train_padded:</p> <pre><code>X_train_padded array([[[ 2.60352379e-01, -1.66420518e-01, -3.12893162e-01, ..., -1.51210476e-01, -3.56188897e-01, -1.02761131e-01], [ 1.26103191e+00, -1.66989382e-01, -3.13025807e-01, ..., 6.61329839e+00, -3.56188897e-01, -1.02761131e-01], [ 1.04418243e+00, -1.66840157e-01, -3.12994596e-01, ..., -1.51210476e-01, -3.56188897e-01, -1.02761131e-01], ..., [ 1.27399408e+00, -1.66998426e-01, -3.13025807e-01, ..., 6.61329839e+00, -3.56188897e-01, -1.02761131e-01], </code></pre> <p>This is the error that I'm getting:</p> <pre><code>Epoch 1/150 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-52-52422b54faa4&gt; in &lt;module&gt; ----&gt; 1 history = model.fit(X_train_padded, y_train_padded, 2 batch_size=2048, epochs=150, 3 validation_data=(X_test_padded, 
y_test_padded)) ... ValueError: Shapes (None, 20196) and (None, 12) are incompatible </code></pre> <p>As I'm using an <em>LSTM</em> layer, I have a 3D input shape. My output layer has 12 nodes (y_train.nunique()) because I have 12 different classes in my input. Given that I have 12 classes, I'm using <em>softmax</em> as the activation function in my output layer and <em>categorical_crossentropy</em> as my loss function.</p> <p><strong>EDIT:</strong></p> <p>Let me try to explain my <a href="https://xeek.ai/challenges/force-well-logs/data" rel="nofollow noreferrer">dataset</a> better:</p> <p>I'm dealing with geological wells. My samples are different types of sedimentary rock layers, where the features are the rocks' properties (such as gamma ray emission) and the label is the rock type (such as limestone). One of my features is the depth of the layer.</p> <p>The idea behind using an LSTM in this case is to consider the depth of a well as a sequence, so that the previous sedimentary layer (rock) helps to predict the next sedimentary layer (rock).</p> <p>How did I get to my input shape:</p> <p>I have a total of <strong>98</strong> wells in my dataset. I split the dataset: <code>X_train_init, X_test_init, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)</code>. The well with the most samples (layers) has, in the training set, <strong>20196</strong> samples. The wells that didn't have this many samples I padded with zeros so that they had <strong>20196</strong> samples. The well with the most samples (layers) has, in the test set, <strong>4935</strong> samples. The wells that didn't have this many samples I padded with zeros so that they had <strong>4935</strong> samples. Removing the <em>well</em> feature and the <em>depth</em> feature (among other features), I ended up with <strong>30</strong> features total. 
My <code>y_train</code> and <code>y_test</code> have only <strong>1</strong> column, which represents the label.</p> <p>I guess that my problem is actually getting this dataset to work in an LSTM. Most of the examples that I see don't have 98 different time series; they just have one. I'm not really sure how to deal with 98 different time series (wells).</p>
<p>It won't work as is. Except for the batch size, every other input dimension should be the same. Also, your input dimensions are inconsistent. For example:</p> <pre><code>print(X_train_padded.shape, # (98, 20196, 30) X_test_padded.shape, # (98, 4935, 30) y_train_padded.shape, # (98, 20196, 1) y_test_padded.shape) # (98, 4935, 1) </code></pre> <p>From what I see, the first dimension is supposed to represent the total number of samples (in X_train/y_train and X_test/y_test), but in your case the total samples are represented by the second dimension. The first dimension should be in second place. That is to say, the dimensions should be:</p> <pre><code>print(X_train_padded.shape, # (20196, 98, 30) X_test_padded.shape, # (4935, 98, 30) y_train_padded.shape, # (20196, 98, 1) y_test_padded.shape) # (4935, 98, 1) </code></pre> <p>This will put everything in the right place. You just need to look at how you arrived at the wrong dimensions and change that part.</p>
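The axis swap described in the answer is a single `np.transpose` call; only the first two axes trade places and the feature axis stays last (a sketch with small stand-in shapes; whether wells or samples should really be the batch dimension is a modelling question this doesn't settle):

```python
import numpy as np

# Small stand-ins shaped like the question's data: (wells, samples, features).
X = np.zeros((98, 5, 30))
y = np.zeros((98, 5, 1))

# Move axis 1 in front of axis 0; keep the feature axis last.
X_t = np.transpose(X, (1, 0, 2))
y_t = np.transpose(y, (1, 0, 2))

print(X_t.shape)  # (5, 98, 30)
print(y_t.shape)  # (5, 98, 1)
```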
tensorflow|machine-learning|neural-network|lstm|recurrent-neural-network
1
1,906,609
26,265,136
Open text file either locally or remotely via url with python
<p>I am working on a project where I will need to potentially open and read a textfile either on a local server or remotely (via url). Is there a python function that works like php's :</p> <pre><code>file_get_contents() </code></pre> <p>that can do this? right now I have:</p> <pre><code>def get_data_from_file(path): for i, line in enumerate(open(path)): ..... </code></pre> <p>I would like to pass in a path either locally or remotely.</p>
<p>You could try dispatching on the path itself: read remote URLs with <code>urllib</code> and local paths with <code>open()</code>:</p> <pre><code>import os import urllib def file_get_contents(path): if path.startswith(('http://', 'https://')): return urllib.urlopen(path).read() if os.path.exists(path): with open(path, 'r') as f: return f.read() print 'no such file' </code></pre>
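In Python 3 the same idea reads much the same: check the URL scheme and hit the network only for http(s) paths (a sketch; the URL branch is shown but the demo sticks to a local temp file):

```python
import tempfile
from urllib.parse import urlparse
from urllib.request import urlopen

def file_get_contents(path):
    """Read text from a local path or an http(s) URL, like PHP's file_get_contents()."""
    if urlparse(path).scheme in ('http', 'https'):
        with urlopen(path) as resp:
            return resp.read().decode('utf-8')
    with open(path, 'r', encoding='utf-8') as f:
        return f.read()

# Demo with a local file; a URL argument would take the urlopen branch.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write('hello from disk')
    name = tmp.name

print(file_get_contents(name))  # hello from disk
```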
python
1
1,906,610
60,068,096
PubSub returns 503 - Service Unavailable all the time
<p>I created a small program in Python for reading messages from a Pub/Sub subscription. I am using Python 3.7 and google-cloud-pubsub 1.1.0.</p> <p>My code is very simple:</p> <pre><code>from google.cloud import pubsub_v1 from google.auth import jwt import json service_account_info = json.load(open("service-account-info.json")) audience_sub = "https://pubsub.googleapis.com/google.pubsub.v1.Subscriber" credentials_sub = jwt.Credentials.from_service_account_info( service_account_info, audience=audience_sub ) subscriber_ring = pubsub_v1.SubscriberClient(credentials=credentials_sub) def callback1(message): print("In callback!!") print(message.data) message.ack() sub_path = "projects/my-project/subscriptions/my-sub" future = subscriber_ring.subscribe(sub_path, callback=callback1) future.result() </code></pre> <p>When the code reaches "future.result()", it hangs there forever and times out 10 minutes later with the error </p> <p>pubsub 503 failed to connect to all addresses</p> <p>I already verified that:</p> <ul> <li>Pub/Sub is up and running</li> <li>My service account has all the needed permissions. I even tried with my personal Google Cloud account (I am the project owner) with the same results.</li> <li>There are unacked messages in the Topic</li> <li>My network connection is OK</li> </ul> <p>but I cannot make it work. 
Any ideas?</p> <p>EDIT: I got some more info from the exception:</p> <pre><code>om_grpc_error(exc), exc) File "&lt;string&gt;", line 3, in raise_from google.api_core.exceptions.ServiceUnavailable: 503 failed to connect to all addresses The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/anaconda3/envs/loadtest/lib/python3.7/site-packages/google/cloud/pubsub_v1/publisher/_batch/thread.py", line 219, in _commit response = self._client.api.publish(self._topic, self._messages) File "/usr/local/anaconda3/envs/loadtest/lib/python3.7/site-packages/google/cloud/pubsub_v1/gapic/publisher_client.py", line 498, in publish request, retry=retry, timeout=timeout, metadata=metadata File "/usr/local/anaconda3/envs/loadtest/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__ return wrapped_func(*args, **kwargs) File "/usr/local/anaconda3/envs/loadtest/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func on_error=on_error, File "/usr/local/anaconda3/envs/loadtest/lib/python3.7/site-packages/google/api_core/retry.py", line 206, in retry_target last_exc, File "&lt;string&gt;", line 3, in raise_from google.api_core.exceptions.RetryError: Deadline of 60.0s exceeded while calling functools.partial(&lt;function _wrap_unary_errors.&lt;locals&gt;.error_remapped_callable at 0x7fa030891a70&gt;, , metadata=[('x-goog-request-params', 'topic=projects/my-project/subscriptions/my-sub'), ('x-goog-api-client', 'gl-python/3.7.6 grpc/1.26.0 gax/1.16.0 gapic/1.2.0')]), last exception: 503 failed to connect to all addresses </code></pre>
<p>It is likely that there is a firewall rule in place or some network configuration that is disallowing/dropping connections to *.googleapis.com (or specifically pubsub.googleapis.com). You can see an example of this with <a href="https://github.com/googleapis/google-cloud-dotnet/issues/3112" rel="nofollow noreferrer">another Google product</a>.</p>
google-cloud-pubsub|google-cloud-python
0
1,906,611
32,370,722
How do I get a dictionary's information when passed as a variable in a function?
<p>I'm looking to get information from a dictionary/list while it's being passed as a variable into a function in Python.</p> <p>I will explain what I'm trying to do, so it's easier for you to offer suggestions.</p> <p>I have three dictionaries with information:</p> <pre><code>lloyd = { "name": "Lloyd", "homework": [90.0, 97.0, 75.0, 92.0], "quizzes": [88.0, 40.0, 94.0], "tests": [75.0, 90.0] } alice = { "name": "Alice", "homework": [100.0, 92.0, 98.0, 100.0], "quizzes": [82.0, 83.0, 91.0], "tests": [89.0, 97.0] } tyler = { "name": "Tyler", "homework": [0.0, 87.0, 75.0, 22.0], "quizzes": [0.0, 75.0, 78.0], "tests": [100.0, 100.0] } </code></pre> <p>I also have a working function to calculate the average value:</p> <pre><code>def average(numbers): total = sum(numbers) total = float(total) / len(numbers) return total </code></pre> <p>I'm creating a new function that takes one of the student dictionaries above (e.g. alice) as its argument, so I can calculate the average of the 'homework' list.</p> <p>At the moment I have the function below but it's simply outputting the word <code>alice</code> and not the values from <code>'homework'</code>.</p> <pre><code>def get_average(student): homework = average(student['homework'][0]) </code></pre> <p>I'm unsure how this needs to be written, but I'm sure it's easy once the syntax/method is correct.</p>
<p>You could rewrite your function like this:</p> <pre><code>&gt;&gt;&gt; def get_average(student, typeOfWork): ... return average(student[typeOfWork]) ... &gt;&gt;&gt; get_average(lloyd,'homework') 88.5 &gt;&gt;&gt; get_average(lloyd,'tests') 82.5 &gt;&gt;&gt; </code></pre>
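The key point is that the function receives the dictionary object itself, not the name `alice`, so indexing works identically for every student (the data from the question, trimmed to one student):

```python
lloyd = {
    "name": "Lloyd",
    "homework": [90.0, 97.0, 75.0, 92.0],
    "quizzes": [88.0, 40.0, 94.0],
    "tests": [75.0, 90.0],
}

def average(numbers):
    return float(sum(numbers)) / len(numbers)

def get_average(student, type_of_work):
    # student[...] indexes whichever dict was passed in.
    return average(student[type_of_work])

print(get_average(lloyd, 'homework'))  # 88.5
print(get_average(lloyd, 'tests'))     # 82.5
```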
python
3
1,906,612
32,672,906
how to mock dependency in python
<p>I am new to python unit testing framework and lot of confusion in mocking dependency.</p> <p>I am trying to write unit tests for below member function of a class, (<code>check_something()</code>):</p> <pre><code>class Validations: def check_something(self): abc = os.environ['PLATFORM'] xyz = Node() no_of_nodes = len(xyz.some_type_var) if abc != "X_PLATFORM" or no_of_nodes != 1: raise someException() </code></pre> <p>How do we eliminate dependency ?</p> <ol> <li>Need to mock <code>Node()</code> ?</li> <li>How do we make sure <code>abc</code> is assigned with <code>X_PLATFORM</code> ?</li> <li><p>How to assign value <code>1</code> to variable <code>no_of_nodes</code>? which is in turn derived from <code>Node()</code> object.</p> <pre><code>class Node(object): def __init__(self): self.nodes = DEF() self.some_type_var = someclass().getType() self.localnode = os.environ['HOSTNAME'] self.peertype = self.get_peer_type() def get_peer_type(self): return node </code></pre></li> </ol> <p>I tried writing below unit test. I am unable to check for fail and pass condition. I am not sure whether it is correct or not. </p> <pre><code>class TestValidation(unittest.TestCase): @mock.patch.object(Node, "get_peer_type") @mock.patch('somefile.Node', spec=True) def test_1(self, mock_object1, mock_object2): os.environ['PLATFORM'] = 'X_PLATFORM' obj = Validations() self.assertRaises(someException, obj.check_something) </code></pre> <p>Validation class uses <code>Node()</code> Class object and Node class uses some other class.</p> <ol> <li>How to make sure exception is raised or not depending on the condition?</li> </ol>
<p>Yes, you'd mock anything external to the unit of code under test. Here that means the <code>os.environ</code> dictionary and the <code>Node()</code> class.</p> <p>The patch needs to be applied to the module your code is in; <code>@mock.patch('somefile.Node', spec=True)</code> is correct if <code>somefile</code> is the same module as <code>Validations</code>; see the <a href="https://docs.python.org/3/library/unittest.mock.html#where-to-patch" rel="noreferrer"><em>Where to patch</em> section</a> as to why that is.</p> <p>I'm not sure that using <code>spec=True</code> is all that helpful here; your <code>Node</code> attributes are all instance attributes created in <code>Node.__init__</code>, so they are not available on the <em>class</em>, which is what informs the spec. See the section on <a href="https://docs.python.org/3/library/unittest.mock.html#auto-speccing" rel="noreferrer">autospeccing</a> on how to overcome that if you really want to set a spec.</p> <p>Since <code>abc</code> is set from <code>os.environ</code>, you can use the <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.patch.dict" rel="noreferrer"><code>patch.dict()</code> object</a> to patch that dictionary for your needs.</p> <p>The <code>no_of_nodes = len(xyz.some_type_var)</code> is simply handled by either setting the <code>some_type_var</code> attribute to an object with the right length, <em>or</em> by setting <code>xyz.some_type_var.__len__.return_value</code>, since it is the <code>xyz.some_type_var.__len__()</code> method that is called for the <code>len()</code> function.</p> <p>So, to test, you'd do:</p> <pre><code>class TestValidation(unittest.TestCase): @mock.patch('somefile.Node') def test_1(self, mock_node): # set up the Node() instance, with the correct length node_instance = mock_node.return_value node_instance.some_type_var.__len__.return_value = 2 # or, alternatively, node_instance.some_type_var = (1, 2) # set up os.environ['PLATFORM'] with mock.patch.dict('os.environ', 
PLATFORM='X_PLATFORM'): obj = Validations() with self.assertRaises(someException): obj.check_something() </code></pre>
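The two fiddly pieces (faking `len()` and patching `os.environ`) can each be checked in isolation with the standard library's `unittest.mock`, independent of the `Validations` class:

```python
import os
from unittest import mock

node_instance = mock.MagicMock()

# len() calls type(obj).__len__(), which MagicMock pre-wires;
# configure the return value rather than assigning __len__ = 2 directly.
node_instance.some_type_var.__len__.return_value = 2
assert len(node_instance.some_type_var) == 2

# patch.dict swaps the value in and restores os.environ when the block exits.
with mock.patch.dict('os.environ', PLATFORM='X_PLATFORM'):
    assert os.environ['PLATFORM'] == 'X_PLATFORM'
print('ok')
```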
python|unit-testing|mocking
6
1,906,613
32,749,130
QuickSort in Python: getting "list index out of range" error
<pre><code>a = [10,100,45,60,90] def quickSort(a,first, last): if last-first&lt;1: return pivot = a[first] forward = first+1 backward = last while forward &lt; backward: if a[forward] &lt; pivot: forward=forward+1 if a[backward] &gt; pivot: backward = backward -1 if a[forward] &gt;= pivot and a[backward] &lt; pivot: temp = a[forward] a[forward]=a[backward] a[backward]=temp forward=forward+1 backward = backward -1 if a[backward] &lt; pivot: temp = a[backward] a[backward]= pivot a[first] =temp quickSort(a,first,backward-1) quickSort(a,backward+1,last) return a b=quickSort(a,0,len(a)-1) print b </code></pre>
<p>You need to loop while <code>forward &lt;= backward</code>:</p> <pre><code>def quickSort(a, first, last): if last - first &lt; 1: return pivot = a[first] forward = first + 1 backward = last # &lt;= while forward &lt;= backward: if a[forward] &lt; pivot: forward += 1 elif a[backward] &gt; pivot: backward -= 1 elif a[forward] &gt;= pivot &gt; a[backward]: temp = a[forward] a[forward] = a[backward] a[backward] = temp forward += 1 backward -= 1 if a[backward] &lt; pivot: temp = a[backward] a[backward] = pivot a[first] = temp quickSort(a, first, backward - 1) quickSort(a, backward + 1, last) return a </code></pre> <p>Output:</p> <pre><code>In [10]: a = [1,33,23,12,55,24] In [11]: b = quickSort(a, 0, len(a) - 1) In [12]: b Out[12]: [1, 12, 23, 24, 33, 55] </code></pre>
python|algorithm
0
1,906,614
32,861,213
Change indescriptive axis names of Pandas Panel and Panel4D
<p>The standard axis names of a Panel are Items, Major_axis and Minor_axis</p> <pre><code>In [2]: pd.Panel() Out[2]: &lt;class 'pandas.core.panel.Panel'&gt; Dimensions: 0 (items) x 0 (major_axis) x 0 (minor_axis) Items axis: None Major_axis axis: None Minor_axis axis: None </code></pre> <p>That is indescriptive as hell, and it gets worse for Panel4D, where Labels is added as fourth axis. Is there a way to change them during initialization? Or can I use <code>pd.core.create_nd_panel_factory</code> to create a new Panel4D factory with different axis names?</p> <p>EDIT: So what I finally would like to have is</p> <pre><code>Out[3]: &lt;class 'pandas.core.panel.Panel'&gt; Dimensions: 0 (items) x 0 (major_axis) x 0 (minor_axis) X axis: None Y axis: None Z axis: None </code></pre>
<p>Since the answer given in <a href="https://stackoverflow.com/questions/15533093/pandas-access-axis-by-user-defined-name">pandas access axis by user-defined name</a> is for an old pandas version and does not provide full functionality, this is how it works:</p> <pre><code>from pandas.core.panelnd import create_nd_panel_factory from pandas.core.panel import Panel Panel4D = create_nd_panel_factory( klass_name='Panel4D', orders=['axis1', 'axis2', 'axis3', 'axis4'], slices={'labels': 'labels', 'axis2': 'items', 'axis3': 'major_axis', 'axis4': 'minor_axis'}, slicer=Panel, stat_axis=2, ) def panel4d_init(self, data=None, axis1=None, axis2=None, axis3=None, axis4=None, copy=False, dtype=None): self._init_data(data=data, axis1=axis1, axis2=axis2, axis3=axis3, axis4=axis4, copy=copy, dtype=dtype) Panel4D.__init__ = panel4d_init </code></pre> <p>It is just <a href="https://github.com/pydata/pandas/blob/master/pandas/core/panel4d.py" rel="nofollow noreferrer">part of the Pandas source code</a>, slightly reworked.</p> <p>Then you get:</p> <pre><code>&gt;&gt;&gt; Panel4D(np.random.rand(4,4,4,4)) Out[1]: &lt;class 'pandas.core.panelnd.Panel4D'&gt; Dimensions: 4 (axis1) x 4 (axis2) x 4 (axis3) x 4 (axis4) Axis1 axis: 0 to 3 Axis2 axis: 0 to 3 Axis3 axis: 0 to 3 Axis4 axis: 0 to 3 </code></pre> <p>and, contrary to the answer given in <a href="https://stackoverflow.com/questions/15533093/pandas-access-axis-by-user-defined-name">pandas access axis by user-defined name</a>, an instance of <code>Panel4D</code> is then fully functional and behaves just like an instance of <code>pandas.Panel4D</code>. For example, you can now do <code>Panel4D(np.empty((1,1,1,1)))[0]</code> without having an exception thrown.</p>
python|pandas|panel
1
1,906,615
32,934,362
Model value won't change in admin area in Django
<p>At the moment I'm going through <strong><a href="https://docs.djangoproject.com/en/1.8/intro/tutorial02/" rel="nofollow">Writing your first Django app, part 2</a></strong> of Django themselves.</p> <p>Everything has gone fine, only one problem though. When creating the models I can't update the value of the Question text inside the admin area when edit and saving. It just shows the default value (What's new?) that I setup from the shell with the following:</p> <pre><code>q = Question(question_text="What's new?", pub_date=timezone.now()) </code></pre> <p><strong>models.py:</strong></p> <pre><code>from django.db import models class Question(models.Model): question_text = models.CharField(max_length = 200) pub_date = models.DateTimeField('date published') class Choice(models.Model): question = models.ForeignKey(Question) choice_text = models.CharField(max_length = 200) votes = models.IntegerField(default = 0) </code></pre> <p><strong>admin.py:</strong></p> <pre><code>from django.contrib import admin from .models import Choice, Question class ChoiceInline(admin.TabularInline): model = Choice extra = 3 class QuestionAdmin(admin.ModelAdmin): fieldsets = [ (None, {'fields': ['question_text']}), ('Date information', {'fields': ['pub_date'], 'classes': ['collapse']}), ] inlines = [ChoiceInline] admin.site.register(Question, QuestionAdmin) </code></pre> <p>Is there something I didn't do correctly?</p>
<p>I figured it out now. I missed adding a <code>__str__()</code> method to both Question and Choice. </p> <p>So it didn't return the <strong>choice_text</strong> or <strong>question_text</strong>.</p> <p>The models will look like this now:</p> <pre><code>from django.db import models

class Question(models.Model):
    question_text = models.CharField(max_length = 200)
    pub_date = models.DateTimeField('date published')

    def __str__(self):
        return self.question_text

class Choice(models.Model):
    question = models.ForeignKey(Question)
    choice_text = models.CharField(max_length = 200)
    votes = models.IntegerField(default = 0)

    def __str__(self):
        return self.choice_text
</code></pre>
python|django
0
1,906,616
27,363,268
How can I avoid TypeError: MouseSwitch() missing 8 required positional arguments: 'msg', 'x', 'y', 'data', 'time', 'hwnd', and 'window_name'
<p>Trying to hook into mouse events but in my early tests the program stops responding after about 30 seconds[EDIT: See bottom of post] and gives this error</p> <blockquote> <p>TypeError: MouseSwitch() missing 8 required positional arguments: 'msg', 'x', 'y', 'data', 'time', 'hwnd', and 'window_name'</p> </blockquote> <p>Here's the code. It's supposed to just print all the event info, which it does until it crashes.</p> <pre><code>import pythoncom import pyHook def OnMouseEvent(event): print ('MessageName:',event.MessageName) print ('Message:',event.Message) print ('Time:',event.Time) print ('Window:',event.Window) print ('WindowName:',event.WindowName) print ('Position:',event.Position) print ('Wheel:',event.Wheel) print ('Injected:',event.Injected) print ('---') return True hm = pyHook.HookManager() hm.MouseAll = OnMouseEvent hm.HookMouse() pythoncom.PumpMessages() </code></pre> <p>Any help would be appreciated.</p> <p><strong>UPDATE!</strong> Having done some further testing, the crash only seems to happen when mousing over certain windows (such as the skype contact list). I also get the same error message (but with no crash) if I mouse over the header of a google chrome window.</p>
<p>pyHook is oriented more toward Python 2. There are repositories on GitHub that adapt it for Python 3, along with various modifications and extensions, but it is better to use pynput in Python 3, as follows:</p> <pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
from pynput.keyboard import Listener

def key_recorder(key):
    f=open('keylogger.txt','a')
    keyo=str(key)
    if keyo==&quot;Key.enter&quot;:
        f.write('\n')
    elif keyo==&quot;Key.space&quot;:
        f.write(&quot; &quot;)
    elif keyo ==&quot;Key.backspace&quot;:
        #f.write(keyo.replace(keyo,&quot;&quot;))
        size=f.tell()  # the size...
        f.truncate(size-1)
    elif keyo==&quot;Key.alt_l&quot; or keyo==&quot;Key.tab&quot;:
        f.write('')
    elif keyo==&quot;Key.ctrl_l&quot;:
        f.write('')
    elif keyo==&quot;Key.alt_gr&quot;:
        f.write('')
    else:
        print(keyo)
        f.write(keyo.replace(&quot;'&quot;,&quot;&quot;))

with Listener(on_press=key_recorder) as l:
    l.join()
</code></pre>
python|typeerror|pyhook
1
1,906,617
12,380,645
Django Form validation error
<p>Trying my first app in Django, and have some problem on form validation (following documentation) This is part of file upload code. The form creates fine. But, the validation fails. Even though, I provide <code>Title</code>, it gives <code>This field is required.</code></p> <h2>views.py</h2> <pre><code>def upload(request): if request.method == 'POST': form = UploadFileForm(request.POST, request.FILES) if form.is_valid(): #TODO handle(request.FILES['file']) return HttpResponseRedirect('/') else: form = UploadFileForm() return render_to_response('setup.html', {'form': form},context_instance=RequestContext(request)) </code></pre> <h2>forms.py</h2> <pre><code>from django import forms class UploadFileForm(forms.Form): title = forms.CharField() file = forms.FileField() </code></pre> <h2>setup.html</h2> <pre><code>&lt;form action="/setup/" method="post"&gt;{% csrf_token %} {% for field in form %} &lt;div class="fieldWrapper"&gt; {{ field.errors }} {{ field.label_tag }}: {{ field }} &lt;/div&gt; {% endfor %} &lt;input type="submit" value="Submit" /&gt; &lt;/form&gt; </code></pre>
<p>Add the <code>required</code> parameter to your form field. <a href="https://docs.djangoproject.com/en/dev/ref/forms/fields/#required" rel="nofollow">If you don't provide it, Django will assume the field is required</a>.</p> <p><code>title = forms.CharField(required=False)</code></p>
python|django
0
1,906,618
23,309,230
How to detect license plate in an image with Python and without cvblob?
<p>I want to detect the presence of a license plate in an image, using Python code.</p> <p>I got multiple suggestions to use the package cvblob, but I am unable to install cvblob on my Ubuntu system.</p> <p>What is the best approach to this problem that does not require cvblob?</p>
<p>You may use <a href="http://www.simplecv.org" rel="nofollow">SimpleCV</a>, a wrapper around opencv-python along with OCR support, which may help you read the license plate text if it's clearly visible. In my experience, it is the most beginner-friendly.</p>
python|cvblobslib
1
1,906,619
7,763,597
Best style for maintaining long equation
<p>What do you think is the best way of writing this method for calculating an <a href="http://en.wikipedia.org/wiki/Ackermann_function" rel="nofollow">ackermann function</a> value? This function incorporates several 'short cuts' to the simplest method of calculating the value that speeds the calculation up considerably by reducing the amount of recursive calls, but you end up with a long expression.</p> <p>The versions use:</p> <ul> <li>The line continuation character \</li> <li>Bracketed nested functions</li> <li>A single outer set of braces</li> </ul> <p>Does any version seem better to you? why? I'm curious.</p> <pre><code>&gt;&gt;&gt; def ack4(M, N): return (N + 1) if M == 0 else \ (N + 2) if M == 1 else \ (2*N + 3) if M == 2 else \ (8*(2**N - 1) + 5) if M == 3 else \ ack4(M-1, 1) if N == 0 else \ ack4(M-1, ack4(M, N-1)) &gt;&gt;&gt; def ack2(M, N): return (N + 1) if M == 0 else ( (N + 2) if M == 1 else ( (2*N + 3) if M == 2 else ( (8*(2**N - 1) + 5) if M == 3 else ( ack2(M-1, 1) if N == 0 else ack2(M-1, ack2(M, N-1)))))) &gt;&gt;&gt; def ack3(M, N): return ((N + 1) if M == 0 else (N + 2) if M == 1 else (2*N + 3) if M == 2 else (8*(2**N - 1) + 5) if M == 3 else ack3(M-1, 1) if N == 0 else ack3(M-1, ack3(M, N-1))) &gt;&gt;&gt; ack2(4, 2) == ack3(4, 2) == ack4(4, 2) True &gt;&gt;&gt; </code></pre>
<p>What's wrong with just nesting in a simple elif chain?</p> <pre><code>def ack5(m, n):
    if m == 0:
        return (n + 1)
    elif m == 1:
        return (n + 2)
    elif m == 2:
        return (2 * n + 3)
    elif m == 3:
        return (8 * (2 ** n - 1) + 5)
    elif n == 0:
        return ack5(m - 1, 1)
    else:
        return ack5(m - 1, ack5(m, n - 1))
</code></pre> <p>Python code should be readable for the programmer, so it's more of a personal choice question. If I had to pick one of your 3 examples I'd go with ack4 since those backslashes indicate that everything is one big statement without bloating the expression like a bracket does in my opinion.</p>
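For what it's worth, the shortcut formulas can be sanity-checked against the plain recursive definition for small inputs (this comparison is illustrative, not part of the original answer):

```python
def ack_naive(m, n):
    # Straight from the mathematical definition -- no shortcuts.
    if m == 0:
        return n + 1
    if n == 0:
        return ack_naive(m - 1, 1)
    return ack_naive(m - 1, ack_naive(m, n - 1))

def ack5(m, n):
    # The shortcut version from the elif-chain answer above.
    if m == 0:
        return n + 1
    elif m == 1:
        return n + 2
    elif m == 2:
        return 2 * n + 3
    elif m == 3:
        return 8 * (2 ** n - 1) + 5
    elif n == 0:
        return ack5(m - 1, 1)
    else:
        return ack5(m - 1, ack5(m, n - 1))

# The two agree everywhere the naive version is feasible to run.
for m in range(4):
    for n in range(5):
        assert ack5(m, n) == ack_naive(m, n)
print(ack5(3, 4))  # 125
```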
python|coding-style
4
1,906,620
7,823,878
Searching and deleting the Data in python
<p>I have two files which have a list of sites in them. The master file is called A.txt and the processed file is called B.txt</p> <p>Contents of A.txt</p> <pre><code>www.cnn.com www.google.com www.gmail.com www.iamvishal.com </code></pre> <p>Contents of B.txt</p> <pre><code>www.cnn.com www.google.com </code></pre> <p>I want to make a python script which compares or searches between the files and creates a new.txt which only has sites from A.txt which have not yet appeared in B.txt</p> <p>As I am new I did a lot of reading on this site and found some great examples. I have managed to get the work done but I am afraid my logic is wrong. Pls see the below code:</p> <pre><code>processedfile = open("b.txt") masterfile = open("a.txt") f=open("new.txt","w") for line in processedfile.readlines(): line = line.strip() print line; print "We are printing the processed part" for linetwo in masterfile.readlines(): linetwo= linetwo.strip() print linetwo print "we are printing the master part" if linetwo != line: f.write(linetwo+"\n") </code></pre> <p>So the new file new.txt contains all the sites which were in A.txt but which were not in B.txt expect the first entry. I am also afraid that there are other logical buts as I have kept B.txt in the same sequence as A.txt so the the code can easily break if the sites are not in sequence. </p> <p>Contents of new.txt</p> <pre><code>www.google.com www.gmail.com www.iamvishal.com </code></pre> <p>Please advice how should I go about it now as I am confused.</p>
<p>Read the files into two <code>set</code>s and use set difference, e.g.:</p> <pre><code>a = set(line.strip() for line in open('a.txt', 'r'))
b = set(line.strip() for line in open('b.txt', 'r'))

new = open('new.txt', 'w')
new.write('\n'.join(a - b))
</code></pre>
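For illustration (using the site lists from the question), the set difference keeps exactly the entries of A that are missing from B:

```python
# Hypothetical contents of a.txt and b.txt from the question.
a = {"www.cnn.com", "www.google.com", "www.gmail.com", "www.iamvishal.com"}
b = {"www.cnn.com", "www.google.com"}

new_sites = a - b  # set difference: in a, but not in b
print(sorted(new_sites))  # ['www.gmail.com', 'www.iamvishal.com']
```

Note that a set has no defined order, which is why the demo sorts before printing; the order of lines in new.txt may differ between runs.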
python
3
1,906,621
759,771
How to retrieve google appengine entities using their numerical id?
<p>Is it possible to retrieve an entity from google appengine using their numerical IDs and if so how? I tried using:</p> <p>key = Key.from_path("ModelName", numericalId) m = ModelName.get(key)</p> <p>but the key generated wasnt correct.</p>
<p>You are looking for this: <a href="http://code.google.com/appengine/docs/python/datastore/modelclass.html#Model_get_by_id" rel="nofollow noreferrer">http://code.google.com/appengine/docs/python/datastore/modelclass.html#Model_get_by_id</a></p>
python|google-app-engine
2
1,906,622
46,876,614
tflearn loss is always 0.0 while training reinforcement learning agent
<p>I tried to train a reinforcement learning agent with gym and tflearn using this code:</p> <pre><code>from tflearn import * import gym import numpy as np env = gym.make('CartPole-v0') x = [] y = [] max_reward = 0 for i in range(1000): env.reset() while True: action = env.action_space.sample() observation, reward, done, info = env.step(action) if done: break if reward &gt;= max_reward: x.append(observation) y.append(np.array([action])) x = np.asarray(x) y = np.asarray(y) net = input_data((None,4)) net = fully_connected(net,8,'softmax') net = fully_connected(net,16,'softmax') net = fully_connected(net,32,'softmax') net = fully_connected(net,64,'softmax') net = fully_connected(net,128,'softmax') net = fully_connected(net,64,'softmax') net = fully_connected(net,32,'softmax') net = fully_connected(net,16,'softmax') net = fully_connected(net,8,'softmax') net = fully_connected(net,4,'softmax') net = fully_connected(net,2,'softmax') net = fully_connected(net,1) net = regression(net,optimizer='adam',learning_rate=0.01,loss='categorical_crossentropy',batch_size=1) model = DNN(net) model.fit(x,y,10) model.save('saved/model.tflearn') </code></pre> <p>The Problem is, when the model is training the loss is always <code>0.0</code>. Can someone help me with this Issue?</p>
<p>Not sure what your objective is, but <code>categorical_crossentropy</code> is a loss function used for multiclass classification, while the output of your network is just one unit, <code>fully_connected(net,1)</code>, with a linear activation. That is why you are getting a loss of 0.</p> <p>Try with <code>mean_square</code> or even <code>binary_crossentropy</code> and you will see different values of loss.</p> <p>I would use a <code>sigmoid</code> activation on the last layer, and relus on the rest.</p>
python|artificial-intelligence|reinforcement-learning|tflearn|openai-gym
0
1,906,623
37,615,544
F1-score per class for multi-class classification
<p>I'm working on a multiclass classification problem using python and scikit-learn. Currently, I'm using the <code>classification_report</code> function to evaluate the performance of my classifier, obtaining reports like the following:</p> <pre><code>&gt;&gt;&gt; print(classification_report(y_true, y_pred, target_names=target_names)) precision recall f1-score support class 0 0.50 1.00 0.67 1 class 1 0.00 0.00 0.00 1 class 2 1.00 0.67 0.80 3 avg / total 0.70 0.60 0.61 5 </code></pre> <p>To do further analysis, I'm interesting in obtaining the per-class f1 score of each of the classes available. Maybe something like this:</p> <pre><code>&gt;&gt;&gt; print(calculate_f1_score(y_true, y_pred, target_class='class 0')) 0.67 </code></pre> <p>Is there something like that available on scikit-learn?</p>
<p>Taken from the <code>f1_score</code> <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html" rel="noreferrer">docs</a>.</p> <pre><code>from sklearn.metrics import f1_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]
f1_score(y_true, y_pred, average=None)
</code></pre> <p>Outputs:</p> <pre><code>array([ 0.8, 0. , 0. ])
</code></pre> <p>Which is the scores for each class.</p>
python|machine-learning|scikit-learn
27
1,906,624
38,013,747
Python - Data type handling for data fetched by connecting to the MSSQL DB
<p>Using the below Python Code connected to the mssql db and tried fetching the data from the DB.</p> <pre><code>import pymssql conn=pymssql.connect(host='localhost',user='sa',password='Password',database='HB') mycursor=conn.cursor() mycursor.execute("Select * from HB.dbo.TRANS") results=mycursor.fetchall() with open('Output.csv','w') as f: for row in results: print str(row) s = ("%s\n" % str(row)) f.write(s) f.close() </code></pre> <p>Getting the output as :</p> <pre><code>(101, datetime.datetime(2016, 2, 1, 0, 0), 129.0, 0.0, 0.0, datetime.datetime(2016, 6, 22, 5, 50, 42, 83)) </code></pre> <p>Expected Output: </p> <pre><code>(101, 2016:02:01 00:00:00.000, 129, 0, 0,2016:06:22 00:00:00.000, 5, 50, 42, 83) </code></pre> <p>How do i handle the datatype coming in the data fetched? (i.e Don't want the datatype (datetime.datetime) to appear in the data)</p>
<p>If you want to format a <code>datetime.datetime()</code> object, you can use the <strong>.strftime() method</strong>.</p> <p>Example usage:</p> <pre><code>import datetime

some_datetime = datetime.datetime.now()

print(repr(some_datetime))
# will output something like: datetime.datetime(2016, 6, 24, 14, 57, 54, 190307)

print(some_datetime.strftime("%Y-%m-%d %H:%M:%S"))
# will output something like: 2016-06-24 14:58:53
</code></pre> <p>You may consider learning more about the datetime Python module <strong><a href="https://docs.python.org/2/library/datetime.html" rel="nofollow">here</a></strong></p>
python|sql-server|datetime
0
1,906,625
29,949,957
Scraping data and inputting it into two different tables
<p>I have scraped the squad data from the following website: <a href="http://www.espnfc.us/club/real-madrid/86/squad" rel="nofollow">http://www.espnfc.us/club/real-madrid/86/squad</a></p> <p>I created a dictionary for each player and i was wondering if I can save the goalkeeper data in a different file than the outfield players data</p> <p>for now I'm using the following code to input all the data into one output file</p>
<p>Without knowing how your data is structured, it's hard to help.</p> <p>If <code>data</code> is a list of dicts, one for each player, with elements describing each column in the web table, you could use <a href="https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehensions</a> to filter by position:</p> <pre><code>with open('goalkeepers.json','wb') as goalkeeper_file:
    json.dump([player for player in data if player['POS'] == "G"],
              goalkeeper_file)

with open('outfielders.json','wb') as outfield_file:
    json.dump([player for player in data if player['POS'] != "G"],
              outfield_file)
</code></pre>
python|json|web-scraping
0
1,906,626
27,450,876
Python quiz not writing to txt file
<p>I am trying to write a simple arithmetic quiz. Once the user has completed the quiz, I want to write their name and score to a text file. However, if they have already completed the quiz, then their new score should be appended on the same line as their previous score is on. Currently the text file contains:<br/> <code>Raju,Joyal : 10</code><br/> However, when completing the test under the same surname, the new score is not appended to this line, and when completing the test under a different surname no new line is written to the text file at all. This is my code:</p> <pre><code>rewrite = False flag = True while flag == True: try: # opening src in a+ mode will allow me to read and append to file with open("Class {0} data.txt".format(classNo),"a+") as src: # list containing all data from file, one line is one item in list data = src.readlines() for ind,line in enumerate(data): if surname.lower() in line.lower(): # overwrite the relevant item in data with the updated score data[ind] = "{0} {1}\n".format(line.rstrip(), ", ",score) rewrite = True else: src.write("{0},{1} : {2}{3} ".format(surname, firstName, score,"\n")) if rewrite == True: # reopen src in write mode and overwrite all the records with the items in data with open("Class {} data.txt".format(classNo),"w") as src2: src2.writelines(data) flag = False except IOError: errorHandle("Data file not found. Please ensure data files are the in same folder as the program") </code></pre>
<p>You're opening the file but, because you're in "append" mode (<code>a+</code>) your read/write pointer is positioned at the <em>end</em> of the file. So when you say <code>readlines()</code> you get nothing: even if the file is not empty, there are no <em>more</em> lines past where you currently are. As a result, your <code>for</code> loop is iterating over a list of length 0, so the code never runs.</p> <p>You should read up on working with files (look for the keywords <code>seek</code> and <code>tell</code>).</p> <p>Note that even if you're positioned in the right place in the middle of the file, overwriting what's already there in an existing file will not be a good way to go: if the data you want to write are a different number of bytes from what you want to overwrite, you'll get problems. Instead you'll probably want to open one copy of the file for reading and create a new one to write to. When they're both finished and closed, move the newer file to replace the older one.</p> <p>Finally note that <code>if surname.lower() in line.lower()</code> is not watertight logic. What happens if your file has the entry <code>Raju,Joyal: 10</code> and someone else has the surname "Joy" ?</p>
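A small self-contained sketch of the a+ positioning behaviour described above (the file name and record are made up for the demo):

```python
import os
import tempfile

# Create a file with one record, mimicking "Class N data.txt".
path = os.path.join(tempfile.gettempdir(), "demo_a_plus.txt")
with open(path, "w") as f:
    f.write("Raju,Joyal : 10\n")

with open(path, "a+") as f:
    at_end = f.readlines()   # opened at EOF, so nothing to read
    f.seek(0)                # rewind to the beginning...
    rewound = f.readlines()  # ...and now the record is visible

print(at_end)    # []
print(rewound)   # ['Raju,Joyal : 10\n']
os.remove(path)
```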
python
2
1,906,627
65,797,510
how to extract the date of a netcdf file?
<p>I have a large collection of netcdf files which I need to crop with specific latitudes and longitudes and rewrite it as a new file.</p> <p>What I'm having trouble to do is: when writing this new netcdf file I want to name it with its respective date and time, something like &quot;yyyymmddhhmm&quot;.nc, but I don't know how to extract the file's date. Below are some of the file's info:</p> <pre><code>processing_level: National Aeronautics and Space Administration (NASA) L2 date_created: 2020-01-01T14:10:01.2Z cdm_data_type: Image time_coverage_start: 2020-01-01T14:00:21.7Z time_coverage_end: 2020-01-01T14:09:52.5Z timeline_id: ABI Mode 6 production_data_source: Realtime id: e9ac2711-c550-4b8a-9c27-babcd1fc49f6 dimensions(sizes): lon(5777), lat(5777) variables(dimensions): |S1 crs(), float64 lat(lat), float64 lon(lon), int16 CMI(lat, lon) </code></pre>
<p>You should be able to solve this using xarray, assuming the file times are suitably formatted. Try the following:</p> <pre><code>import xarray as xr

ds = xr.open_dataset(&quot;infile.nc&quot;)
ds.time.values
</code></pre>
python-3.x|netcdf|netcdf4
1
1,906,628
72,267,999
This was likely an oversight when migrating to django.urls.path()
<p>So hello guys, I am new to this django and i have encountered this type of error</p> <p><a href="https://i.stack.imgur.com/TERUs.png" rel="nofollow noreferrer">enter image description here</a></p> <p>while this if my url.py</p> <pre><code>from unicodedata import name from django.urls import path from Project.views import viewAppointment, addDiagnosis from . import views app_name = &quot;Project&quot; urlpatterns = [ path('', views.index , name='index'), path('counter', views.counter, name='counter'), path('Register', views.Register, name= 'Register'), path('login', views.login, name='login'), path('logout', views.logout, name = 'logout'), path('post/&lt;str:pk&gt;', views.post, name = 'post'), path('profile', views.profile, name='profile'), path(r'^appointment/appointment=(?P&lt;appointment_id&gt;[0- 100]+)', viewAppointment, name='appointment'), path(r'^appointment/appointment=(?P&lt;appointment_id&gt;[0- 100]+)/AddDiagnonsis', addDiagnosis, name='AddDiagnosis') ] </code></pre> <p>meanwhile this is my views.py</p> <pre><code>def viewAppointment(request, appointment_id): appointment = Appointment.objects.filter(id=appointment_id) return render(request, 'appointment_form.html', {'Appointment': appointment}) def addDiagnosis(request): return True </code></pre>
<p><strong>You are getting a system error, so do this:</strong></p> <p>Change this only and try:</p> <pre><code>from django.urls import re_path

urlpatterns = [
    re_path(r'^appointment/appointment=(?P&lt;appointment_id&gt;[0-100]+)',
            viewAppointment, name='appointment'),
    re_path(r'^appointment/appointment=(?P&lt;appointment_id&gt;[0-100]+)/AddDiagnonsis',
            addDiagnosis, name='AddDiagnosis'),
]
</code></pre>
python|django|database
0
1,906,629
43,195,202
pandas get business days data from datetime index
<p>I have pandas dataframe as:</p> <pre><code>df.ix[1:4] Data DateTime 2015-05-24 02:00:00 4368.02 2015-05-24 03:00:00 4254.63 2015-05-24 04:00:00 4167.88 </code></pre> <p>I have created a calendar as:</p> <pre><code>us_bd = CustomBusinessDay(calendar=myCalendar()) </code></pre> <p>How do I extract the business days data and non business days data from <code>df</code>?</p> <p>Right now I am extracting the dates from <code>df</code> and then checking their presence in <code>us_bd</code> using <code>numpy.in1d</code> which appears very clumsy.</p>
<p>I'd simply say a business day is such that adding and subtracting one business day returns to the same day.</p> <pre><code>df['is_biz'] = ((df.DateTime + us_bd) - us_bd ) == df.DateTime </code></pre>
python|pandas
0
1,906,630
36,886,650
How to add a new entry into a dictionary object while using jinja2?
<p>I am not able to append add a new entry into a dictionary object while using jinja2 template.</p> <p>For example, here I am using jinja2 template and I have created a <strong>data</strong> variable which is a dictionary. And after checking some <em>if</em> condition I <strong><em>WANT</em></strong> to append location attribute to the data object e.g.</p> <pre><code>{%- set data = { 'name' : node.Name, 'id' : node.id, } -%} {% if node.location !="" %} data.append({'location': node.location}) {% endif %} </code></pre> <p>However I could not find a way to achieve this and am getting the UndefinedError:</p> <pre><code>jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'append' </code></pre> <p><em>Has anyone faced this issue or could provide a reference to solve this?</em></p> <p>I searched the web but could not find a solution i.e. how to achieve adding an entry to the dict object in the Jinja.</p> <p>I have referred following and other web resources:</p> <ol> <li><a href="http://cewing.github.io/training.codefellows/assignments/day22/jinja2_walkthrough.html" rel="noreferrer">http://cewing.github.io/training.codefellows/assignments/day22/jinja2_walkthrough.html</a></li> <li><a href="https://stackoverflow.com/questions/3352724/in-jinja2-whats-the-easiest-way-to-set-all-the-keys-to-be-the-values-of-a-dictio">In Jinja2 whats the easiest way to set all the keys to be the values of a dictionary?</a></li> <li><a href="https://github.com/saltstack/salt/issues/27494" rel="noreferrer">https://github.com/saltstack/salt/issues/27494</a></li> </ol>
<p>Without the <code>jinja2.ext.do</code> extension, you can do this:</p> <pre><code>{% set x=my_dict.__setitem__("key", "value") %} </code></pre> <p>Disregard the <code>x</code> variable and use the dictionary which is now updated.</p> <p>UPD: Also, this works for <code>len()</code> (<code>__len__()</code>), <code>str()</code> (<code>__str__()</code>), <code>repr()</code> (<code>__repr__()</code>) and many similar things.</p>
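The reason the throwaway assignment works can be seen with a plain Python dict (the key/value names here are invented for illustration):

```python
# __setitem__ mutates the dict in place and returns None.
# Jinja2's {% set %} needs an expression on the right-hand side,
# so the None result is bound to a dummy variable and simply ignored.
my_dict = {"name": "node1", "id": 7}
result = my_dict.__setitem__("location", "rack4")

print(result)               # None
print(my_dict["location"])  # rack4
```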
python|dictionary|jinja2
27
1,906,631
48,645,217
Python 2.7 - Convert datetime with +01:00
<p>So I have this string: '2018-02-06T12:12:29.98+01:00' which is a date. Ultimately I want to convert it back to a string containing this date: '2018-02-06 13:12:29'. So basically I just want to remove +01:00. To do this I guess I first need to convert the string to a date object like this: </p> <pre><code>import pytz import dateutil.parser tempdate = '2018-02-06T12:12:29.98+01:00' test = dateutil.parser.parse(tempdate) print(test) # --&gt; 2018-02-06 12:12:29.980000+01:00 </code></pre> <p>When I later try to convert this to the desired format and remove the +01:00 like this I get one hour back instead of one hour forward:</p> <pre><code>date = test.astimezone(pytz.utc) print(date) # --&gt; 2018-02-06 11:12:29.980000+00:00 </code></pre> <p>Does anyone know how I can solve this? I am using python 2.7</p>
<pre><code>from datetime import datetime
from dateutil.parser import parse
import pytz

date_str = '2018-02-06T12:12:29.98+01:00'
date = parse(date_str)
date = date.replace(tzinfo=pytz.utc)
print(date)
</code></pre> <blockquote> <p>2018-02-06 12:12:29.980000+00:00</p> </blockquote> <p>pytz to the rescue :-)</p>
python|python-2.7|date|datetime
1
1,906,632
48,531,623
I have data x,y and can make a scatterplot. How do I change the markers for frequency using matplot in python?
<p>I've been googling for an hour or so and haven't found what I am looking for. Here is where I am at in my code. </p> <p>I used BS to pull the information down and save it to a CSV file. The CSV has x,y coordinates which I can make into a scatterplot.</p> <p>similar to this (there are about 1,500 datapoints and obviously 100 combinations)</p> <p>x,y</p> <p>0,6</p> <p>1,2</p> <p>0,7</p> <p>4,6</p> <p>9,9</p> <p>0,0</p> <p>4,4</p> <p>1,2</p> <p>etc.</p> <p>What I would like to do is make the size of the points on the scatterplot scale with the frequency of how often they appear. </p> <pre><code>df = pd.read_csv("book8.csv") df.plot(kind = 'scatter',x='x',y='y') plt.show() </code></pre> <p>The arrays are just numbers between 0 and 9. I'd like to make the size scale to how often combinations of 0-9 show up. </p> <p>I currently just have this, it's not really useful obviously. </p> <p><a href="https://imgur.com/a/25PEC" rel="nofollow noreferrer">https://imgur.com/a/25PEC</a></p> <p>Do I need to set x and y into their own arrays to accomplish this instead of using the dataframe(df)? </p>
<p>I'm not sure how I could push this into numpy just yet (I'll keep thinking). In the meantime, a Python solution:</p> <pre><code>import matplotlib.pyplot as plt
import random
from collections import Counter

x_vals = [random.randint(0, 10) for x in range(1000)]
y_vals = [random.randint(0, 10) for x in range(1000)]

combos = list(zip(x_vals, y_vals))
weight_counter = Counter(combos)
weights = [weight_counter[(x_vals[i], y_vals[i])] for i, _ in enumerate(x_vals)]

plt.scatter(x_vals, y_vals, s=weights)
plt.show()
</code></pre>
python|matplotlib
5
1,906,633
48,867,969
Open Weather Map API Current Temperature
<p>I am using <a href="https://github.com/csparpa/pyowm" rel="nofollow noreferrer">pyowm</a> on a Raspberry Pi ZeroW to retrieve the most recently recorded temperature for my location. I want to refresh this periodically to update the reading. At the top of my script I open a connection to OWM using the API key I obtained when I registered for a free account.</p> <p>My question is whether I can put the statement to retrieve an "observation" location (e.g., weather_at_place, weather_at_zip_code, weather_at_coords) and the "weather" to execute once for the script (i.e., at the top) or whether I need to execute them every time I want to grab the temperature. Basically, do I have to invoke weather_at_...(), get_weather(), and get_temperature() every time OR just get_temperature().</p> <pre><code>owm = pyowm.OWM('OWM_API_KEY') observation = owm.weather_at_zip_code('POSTAL_CODE', 'COUNTRY_CODE') weather = observation.get_weather() while True: temp = weather.get_temperature('fahrenheit')["temp"] print(temp) sleep 300 </code></pre> <p>OR</p> <pre><code>owm = pyowm.OWM('OWM_API_KEY') while True: observation = owm.weather_at_zip_code('POSTAL_CODE', 'COUNTRY_CODE') weather = observation.get_weather() temp = weather.get_temperature('fahrenheit')["temp"] print(temp) sleep 300 </code></pre> <p>I can't determine this from either the <a href="https://github.com/csparpa/pyowm/blob/master/pyowm/docs/usage-examples.md" rel="nofollow noreferrer">usage examples</a> nor the <a href="https://pyowm.readthedocs.io/en/latest/" rel="nofollow noreferrer">documentation</a>. I'm sure it's probably in the documentation somewhere, but I have not been able to find it. I find information on how often weather stations are polled, forecasts are refreshed, etc. I just have not been able to find information on using the API in a looping scenario for temperature. Just as a matter of understanding. 
Yes, I could just put everything in the loop, but I also don't want to go over the API calls threshold and get throttled. Basically, I want to make sure I'm a good OWM citizen. Plus, no need to have the execution overhead if it can be avoided.</p> <p>Thank you!</p>
<p>If you have Geo location you can use the following endpoint</p> <p><a href="http://api.openweathermap.org/data/2.5/weather?lat=35&amp;lon=139" rel="nofollow noreferrer">http://api.openweathermap.org/data/2.5/weather?lat=35&amp;lon=139</a></p>
python|raspberry-pi|weather-api
0
1,906,634
20,130,227
Matplotlib connect scatterplot points with line - Python
<p>I have two lists, dates and values. I want to plot them using matplotlib. The following creates a scatter plot of my data.</p> <pre><code>import matplotlib.pyplot as plt plt.scatter(dates,values) plt.show() </code></pre> <p><code>plt.plot(dates, values)</code> creates a line graph.</p> <p>But what I really want is a scatterplot where the points are connected by a line.</p> <p>Similar to in R:</p> <pre><code>plot(dates, values) lines(dates, value, type="l") </code></pre> <p>, which gives me a scatterplot of points overlaid with a line connecting the points.</p> <p>How do I do this in python?</p>
<p>I think @Evert has the right answer:</p> <pre><code>plt.scatter(dates,values)
plt.plot(dates, values)
plt.show()
</code></pre> <p>Which is pretty much the same as</p> <pre><code>plt.plot(dates, values, '-o')
plt.show()
</code></pre> <p>You can replace <code>-o</code> with another suitable <em>format string</em> as described in the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.plot.html" rel="noreferrer">documentation</a>. You can also split the choices of line and marker styles using the <code>linestyle=</code> and <code>marker=</code> keyword arguments.</p>
python|matplotlib
188
1,906,635
20,249,008
Sending Ajax Request to Django
<p>I'm pretty new to Ajax and Django and I'm trying to send a simple ajax request to a function called 'update'. But I also don't want the actual url to change in the browser when the request is sent (example www.website.com/page/ will stay the same even with an ajax request). Basically, when I try to submit the ajax request I get a 403 error from the server. I believe part of my problem could be the url mapping in urls.py...</p> <p>This is my ajax request:</p> <pre><code>$.ajax({ type : "POST", url : "/page/update/", data : { data : somedata, }, }).done(function(data){ alert(data); }); </code></pre> <p>This is the view it should get:</p> <pre><code>def update(request): if request.is_ajax(): message = "Yes, AJAX!" else: message = "Not Ajax" return HttpResponse(message) </code></pre> <p>This is my urls.py</p> <pre><code>urlpatterns = patterns('', url(r'^$', views.index, name='index'), url(r'^update/$', views.update, name='update'), ) </code></pre> <p>Thank you in advance for the help.</p> <p>I looked a little more deeper into the error and the error states that I need to include {% csrf_token %} when sending a post back. This example <a href="https://stackoverflow.com/questions/6020928/how-to-get-post-data-in-django-1-3">how to get POST data in django 1.3</a> shows that its placed into a form however my request is only called on an on click function</p>
<p>The URL "/page/update/" in your ajax request doesn't match any pattern in your urls.py, and the missing CSRF token is why you are getting the 403 error. Here is code you can try:</p> <pre><code> $.ajax({ type: "POST", url: "/update/", data: { csrfmiddlewaretoken: '{{ csrf_token }}', data : somedata, }, success: function(data) { alert(data); }, error: function(xhr, textStatus, errorThrown) { alert("Please report this error: "+errorThrown+xhr.status+xhr.responseText); } }); </code></pre> <p>/* '{{ csrf_token }}' is specific to Django templates */</p> <p>This is the view:</p> <pre><code>from django.views.decorators.csrf import csrf_exempt @csrf_exempt # optional: disables the CSRF check for this view; the choice is yours def update(request): if request.is_ajax(): message = "Yes, AJAX!" else: message = "Not Ajax" return HttpResponse(message) </code></pre> <p>This is urls.py</p> <pre><code>urlpatterns = patterns('', url(r'^$', views.index, name='index'), url(r'^update/$', views.update, name='update'), ) </code></pre>
python|ajax|django
2
1,906,636
67,163,915
How to split a python program?
<p>First, I'm sorry for these stupid questions. I'm trying to figure out how to split my Python program into different parts. I have one .py file that contains my whole program, and I would like to split it into 3 different files: main, constants and functions. (Do these files need a particular extension?) I organized the functions file like this:</p> <pre><code>import ..... def function0: .... def function1: .... etc. </code></pre> <p>Is that a wrong approach? When I try to import this <code>.py</code> file in the main file, the IDE gives me a yellow line and <code>Import &quot;functions&quot; could not be resolved</code>, but everything works fine.</p> <p><img src="https://i.stack.imgur.com/Z3SjV.png" alt="" /></p> <p>Does it make sense to do it this way?</p> <p>All these files are located in the same folder.</p>
<p>Make sure that, when you are importing the file, the files are in the same folder, and you are only using <code>import functionfile</code>, NOT <code>import functionfile.py</code></p> <p><s><strong>EDIT:</strong> Are you sure you are using a lowercase &quot;i&quot; in import? I notice you've referred to it as &quot;Import&quot; twice now</s></p> <p><strong>EDIT 2</strong> Now I'm thinking it's just Visual Studio Code not recognizing the valid import. This has happened to me before, and restarting VSCode always seems to fix the issue.</p>
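To make the layout concrete, here is a runnable sketch of the three-file split. The file names and the greeting are placeholders, and a temporary folder stands in for your project folder:

```python
import importlib
import sys
import tempfile
from pathlib import Path

# A throwaway folder that mimics the layout: constants.py, functions.py,
# and a "main" part (this script) that imports them.
folder = Path(tempfile.mkdtemp())
(folder / "constants.py").write_text("GREETING = 'hello'\n")
(folder / "functions.py").write_text(
    "import constants\n"
    "def greet(name):\n"
    "    return f'{constants.GREETING}, {name}'\n"
)

# Same effect as keeping all three files side by side in one folder:
sys.path.insert(0, str(folder))
functions = importlib.import_module("functions")
print(functions.greet("world"))  # hello, world
```

The plain `.py` extension is all you need; the yellow underline is just the editor's import resolver complaining, not Python itself.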
python|python-3.x|visual-studio-code|pylance
0
1,906,637
69,624,939
Buildozer process failed executing the last command error while debugging for android
<p>I have made a simple kivy app using also the socket module. But when I try to convert it to an android app using google collab and buildozer, I am getting this kind of an error.</p> <pre><code>{ ERROR: Could not find a version that satisfies the requirement socket (from versions: none) ERROR: No matching distribution found for socket STDERR: # Command failed: /usr/bin/python3 -m pythonforandroid.toolchain create --dist_name=starstocksapp --bootstrap=sdl2 --requirements=python3,socket,kivy --arch armeabi-v7a --copy-libs --color=always --storage-dir=&quot;/content/.buildozer/android/platform/build-armeabi-v7a&quot; --ndk-api=21 # ENVIRONMENT: # CUDNN_VERSION = '8.0.5.39' # PYDEVD_USE_FRAME_EVAL = 'NO' # LD_LIBRARY_PATH = '/usr/local/nvidia/lib:/usr/local/nvidia/lib64' # CLOUDSDK_PYTHON = 'python3' # LANG = 'en_US.UTF-8' # HOSTNAME = '047d4a118941' # OLDPWD = '/' # CLOUDSDK_CONFIG = '/content/.config' # NVIDIA_VISIBLE_DEVICES = 'all' # DATALAB_SETTINGS_OVERRIDES = '{&quot;kernelManagerProxyPort&quot;:6000,&quot;kernelManagerProxyHost&quot;:&quot;172.28.0.3&quot;,&quot;jupyterArgs&quot;:[&quot;--ip=\\&quot;172.28.0.2\\&quot;&quot;],&quot;debugAdapterMultiplexerPath&quot;:&quot;/usr/local/bin/dap_multiplexer&quot;,&quot;enableLsp&quot;:true}' # ENV = '/root/.bashrc' # PAGER = 'cat' # NCCL_VERSION = '2.7.8' # TF_FORCE_GPU_ALLOW_GROWTH = 'true' # JPY_PARENT_PID = '53' # NO_GCE_CHECK = 'True' # PWD = '/content' # HOME = '/root' # LAST_FORCED_REBUILD = '20211007' # CLICOLOR = '1' # DEBIAN_FRONTEND = 'noninteractive' # LIBRARY_PATH = '/usr/local/cuda/lib64/stubs' # GCE_METADATA_TIMEOUT = '0' # GLIBCPP_FORCE_NEW = '1' # TBE_CREDS_ADDR = '172.28.0.1:8008' # TERM = 'xterm-color' # SHELL = '/bin/bash' # GCS_READ_CACHE_BLOCK_SIZE_MB = '16' # PYTHONWARNINGS = 'ignore:::pip._internal.cli.base_command' # MPLBACKEND = 'module://ipykernel.pylab.backend_inline' # CUDA_VERSION = '11.1.1' # NVIDIA_DRIVER_CAPABILITIES = 'compute,utility' # SHLVL = '1' # PYTHONPATH = '/env/python' # 
NVIDIA_REQUIRE_CUDA = ('cuda&gt;=11.1 brand=tesla,driver&gt;=418,driver&lt;419 ' 'brand=tesla,driver&gt;=440,driver&lt;441 brand=tesla,driver&gt;=450,driver&lt;451') # COLAB_GPU = '0' # GLIBCXX_FORCE_NEW = '1' # PATH = '/root/.buildozer/android/platform/apache-ant-1.9.4/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin' # LD_PRELOAD = '/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4' # GIT_PAGER = 'cat' # _ = '/usr/local/bin/buildozer' # PACKAGES_PATH = '/root/.buildozer/android/packages' # ANDROIDSDK = '/root/.buildozer/android/platform/android-sdk' # ANDROIDNDK = '/root/.buildozer/android/platform/android-ndk-r19c' # ANDROIDAPI = '27' # ANDROIDMINAPI = '21' # # Buildozer failed to execute the last command # The error might be hidden in the log above this error # Please read the full log, and search for it before # raising an issue with buildozer itself. # In case of a bug report, please add a full log with log_level = 2 } </code></pre> <p>I don't know why this error keeps on coming. I have my main.py in the directory and I have checked to install all the modules. But still nothing happens.</p> <p>I think socket module is not included or some problem is due to that I am including socket module. I have written the socket module's name in the spec file also in the modules to be included.</p> <p>Thanks in Advance.</p>
<p>You don't have to add socket to the requirements in your buildozer.spec file, because it is a built-in module in Python. It is automatically available once you add python3 to your requirements. <br><br> See this post for more info: <a href="https://stackoverflow.com/questions/65524613/buildozer-not-using-correct-kivy-version-when-packaging-for-android">Buildozer not using correct kivy version when packaging for android</a></p>
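If you're unsure whether a given module ships with Python (and therefore must stay out of the requirements line), you can check from Python itself. The helper below is my own sketch, not a buildozer API:

```python
import importlib.util
import sys

def is_stdlib(name: str) -> bool:
    # Python 3.10+ ships an authoritative list of stdlib module names.
    if hasattr(sys, "stdlib_module_names"):
        return name in sys.stdlib_module_names
    # Rough fallback for older versions: importable without pip-installing.
    return importlib.util.find_spec(name) is not None

print(is_stdlib("socket"))  # True -> leave it out of the buildozer requirements
```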
python-3.x|sockets|kivy|kivy-language|buildozer
0
1,906,638
69,497,446
String interning in dictionary keys
<p>Does string interning work on dict keys in Python? Suppose I have a dictionary of dictionaries and each of the dictionaries in the dictionary has the same keys.</p> <p>e.g.:</p> <p>dict1 -&gt; keys 'a','b','c' dict2 -&gt; keys 'a','b','c'</p> <p>Do the keys of those dictionaries reference the same memory location, or does string interning not happen implicitly for those strings?</p>
<p>Yes, strings and integers are interned, but only small strings and integers.</p> <pre><code>&gt;&gt;&gt; list1 = ['a', 'b', 'c', 'longer and more complicated string'] &gt;&gt;&gt; list2 = ['a', 'b', 'c', 'longer and more complicated string'] &gt;&gt;&gt; list1[0] is list2[0] True &gt;&gt;&gt; list1[1] is list2[1] True &gt;&gt;&gt; list1[2] is list2[2] True &gt;&gt;&gt; list1[3] is list2[3] False </code></pre> <p>Two dicts with the same keys are allowed to have completely different values, however - the key-value mapping is tied to the dict's instance (and also to the hashes of the keys, more so than the keys themselves), not to the key's instance, and dicts are <em>not</em> interned at all.</p> <pre><code>&gt;&gt;&gt; dict1 = {'a': 1, 'b': 2} &gt;&gt;&gt; dict2 = {'a': 3, 'b': 4} &gt;&gt;&gt; for (key1, key2) in zip(dict1.keys(), dict2.keys()): ... print(key1 is key2, end=&quot;; &quot;) ... print(dict1[key1] is dict2[key2]) ... True; False True; False </code></pre> <p>If you wish to save memory by having only one key-value mapping, have you considered making the dictionary values be tuples? e.g.</p> <pre><code># instead of dict1[key] -&gt; value1 dict2[key] -&gt; value2 # do dictx[key] -&gt; (value1, value2) </code></pre>
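If you do want equal keys to share one object regardless of how they were built, `sys.intern` (a real stdlib function) does exactly that — it returns the canonical object for a given string, which also speeds up dict-key comparison:

```python
import sys

# Strings assembled at runtime are not interned automatically:
a = "".join(["shared", "-", "key"])
b = "".join(["shared", "-", "key"])
print(a == b)   # True: equal values, but two distinct objects

# sys.intern() maps equal strings to one canonical object, so identity
# checks (and dict key lookups) can use pointer comparison:
a = sys.intern(a)
b = sys.intern(b)
print(a is b)   # True
```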
python|python-3.x
1
1,906,639
69,582,229
How do you add a link in an f string?
<p>I am trying to add a link in the f string below:</p> <pre><code>d += f'&lt;li&gt; {event.time} {event.teacher} {event.student} {event.status} &lt;/li&gt;' </code></pre> <p>Basically, I want it to look something like below:</p> <pre><code>f'&lt;li&gt; &lt;a href=&quot;{% url 'somewhere' event.pk %}&quot;&gt; {event.time} {event.teacher} {event.student} {event.status} &lt;/a&gt; &lt;/li&gt;' </code></pre> <p>However, I get the following error when I do this:</p> <pre><code>SyntaxError: f-string: expecting '}' </code></pre> <p>Do you guys know how to input a link in an f string? Please ask me any questions you have.</p> <p>Here is the context of the code where I have the f-string as some of you asked:</p> <pre><code>class Calendar(HTMLCalendar): def __init__(self, year=None, month=None): self.year = year self.month = month super(Calendar, self).__init__() # formats a day as a td # filter events by day def formatday(self, day, events): events_per_day = events.filter(date__day=day) d = '' if True: for event in events_per_day: d += f'&lt;li&gt; {event.time} {event.teacher} {event.student} {event.status} &lt;/li&gt;' if day != 0: return f&quot;&lt;td&gt;&lt;span class='date'&gt;{day}&lt;/span&gt;&lt;ul&gt; {d} &lt;/ul&gt;&lt;/td&gt;&quot; return '&lt;td&gt;&lt;/td&gt;' </code></pre> <p>By the way, this is all in my utils.py folder.</p>
<p>An f-string will not evaluate Django template tags; it just sees the curly brackets as the start of an expression, and the template-tag content happens to be a syntactically invalid one.</p> <p>You can make use of <a href="https://docs.djangoproject.com/en/3.2/ref/urlresolvers/#reverse" rel="nofollow noreferrer"><strong><code>reverse(…)</code></strong> [Django-doc]</a> to perform URL pattern resolution:</p> <pre><code>from django.urls import <strong>reverse</strong> f'&lt;li&gt; &lt;a href=&quot;{ <strong>reverse(&quot;somewhere&quot;, args=(event.pk,))</strong> }&quot;&gt; {event.time} {event.teacher} {event.student} {event.status} &lt;/a&gt; &lt;/li&gt;'</code></pre>
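The underlying point is that an f-string interpolates ordinary Python expressions at the moment the line executes — so the URL has to come from a callable, never from template-tag syntax. A Django-free sketch of the same idea (`reverse_url` is a made-up stand-in for `django.urls.reverse`):

```python
def reverse_url(name, pk):
    # Stand-in for django.urls.reverse(), just to keep the example runnable.
    return f"/somewhere/{pk}/"

event_pk = 7
html = f'<li><a href="{reverse_url("somewhere", event_pk)}">event</a></li>'
print(html)  # <li><a href="/somewhere/7/">event</a></li>
```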
python|django|django-models|django-views|django-templates
1
1,906,640
51,407,175
Pandas DataFrame - modify in place based on criteria
<p>I have this dataframe</p> <pre><code>import pandas as pd x = pd.DataFrame.from_dict({'A':[1,2,0,4,0,6], 'B':[0, 0, 0, 44, 48, 81], 'C':[1,0,1,0,1,0]}) </code></pre> <p>I try to change it like this, but it doesn't work</p> <pre><code>x[x['A']&gt;3]['A'] = 33 # has no effect </code></pre> <p>I also tried</p> <pre><code>x.loc(x['A']&gt;3)['A'] = 33 # getting an error </code></pre> <p>So what's the right way to do this?</p>
<p>You should use <code>.loc</code>:</p> <pre><code>x.loc[x['A']&gt;3,'A'] = 33 x Out[480]: A B C 0 1 0 1 1 2 0 0 2 0 0 1 3 33 44 0 4 0 48 1 5 33 81 0 </code></pre> <p>Or <code>mask</code> (note it is called on the column, not the whole frame):</p> <pre><code>x.A=x.A.mask(x.A&gt;3,33) x Out[483]: A B C 0 1 0 1 1 2 0 0 2 0 0 1 3 33 44 0 4 0 48 1 5 33 81 0 </code></pre>
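A self-contained version of the `.loc` fix on the question's data — the comment explains why the original chained version silently did nothing:

```python
import pandas as pd

x = pd.DataFrame({'A': [1, 2, 0, 4, 0, 6],
                  'B': [0, 0, 0, 44, 48, 81],
                  'C': [1, 0, 1, 0, 1, 0]})

# x[x['A'] > 3]['A'] = 33 is "chained indexing": the first [] returns a
# temporary copy, so the assignment lands on the copy and is thrown away.
# .loc does the row selection and column selection in a single operation:
x.loc[x['A'] > 3, 'A'] = 33
print(x['A'].tolist())  # [1, 2, 0, 33, 0, 33]
```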
python|pandas|dataframe
0
1,906,641
51,333,185
Is there a way to login with a password to an ssh server using nothing but python sockets?
<p>I want to know if there is a way to login to ssh with sockets like so:</p> <pre><code>import socket sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM); sock.connect(("127.0.0.1",22)); sock.send("username"); sock.send("password"); </code></pre>
<p>No, you cannot send the username and password directly to the socket.</p> <p>What goes over the wire is encrypted, and on top of the encryption there is a protocol layer that handles key exchange and authentication.</p> <p>So you need a layer which speaks the right protocol, and underneath it something which encrypts your communication.</p> <p>Both things can in theory be implemented in Python, but I don't know if such things really exist.</p>
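To make the protocol point concrete: the only plain-text part of an SSH session is the identification string each side sends right after the TCP connect (RFC 4253, section 4.2); everything afterwards — including the username and password — travels inside the negotiated encrypted channel, which is why raw `send()` calls can never log you in. (In fact such a layer does exist: the paramiko library implements the SSH protocol in Python.) A small parser for that one plain-text banner, with an example value rather than data from a live server:

```python
def parse_ssh_banner(line: bytes):
    """Split an identification string into (protocol_version,
    software_version) per RFC 4253 section 4.2."""
    text = line.rstrip(b"\r\n").decode("ascii")
    if not text.startswith("SSH-"):
        raise ValueError("not an SSH identification string")
    _, proto, software = text.split("-", 2)
    return proto, software

print(parse_ssh_banner(b"SSH-2.0-OpenSSH_8.9\r\n"))  # ('2.0', 'OpenSSH_8.9')
```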
python|sockets|ssh|python-sockets
1
1,906,642
17,615,206
Python Deeply Nested Dictionary of a Specific Type
<p>I would like to have a <em>deeply</em> nested dictionary. Let's make "deeply" concrete: to show what I mean, I'll use a 5-level dictionary, e.g., <code>foo[1][2][3][4][5]</code>, that would have a <code>set</code> or <code>list</code> as its item.</p> <p>As I saw <a href="http://ohuiginn.net/mt/2010/07/nested_dictionaries_in_python.html" rel="nofollow noreferrer">here</a> I could accomplish that in, at least, two ways:</p> <pre><code>from collections import defaultdict foo = defaultdict(lambda: defaultdict(lambda:defaultdict(lambda: defaultdict(lambda: defaultdict(set))))) </code></pre> <p>or</p> <pre><code>from functools import partial foo = defaultdict(partial(defaultdict, partial(defaultdict, partial(defaultdict, partial(defaultdict, set))))) </code></pre> <p>and then in both cases I could, for example, <code>foo[1][2][3][4][5].add(1)</code></p> <p>But I was looking for a less cumbersome way to accomplish this and found two approaches. The first one was also provided in the same place as the aforementioned solutions:</p> <pre><code>class NestedDict(dict): def __getitem__(self, key): if key in self: return self.get(key) return self.setdefault(key, NestedDict()) </code></pre> <p>and the second, equivalent one was found here at SO as an <a href="https://stackoverflow.com/a/652284/914874">answer</a> to an Autovivification question.</p> <pre><code>class NestedDict(dict): """Implementation of perl's autovivification feature.""" def __getitem__(self, item): try: print "__getitem__: %s" % item return dict.__getitem__(self, item) except KeyError: value = self[item] = type(self)() print "value: %s" % value return value </code></pre> <p>I liked those last two approaches, but I do not know how to change them in order to generate a nested dictionary of a specific type that is not dict, e.g., <code>set</code> or <code>list</code> as accomplished with <code>defaultdict</code>.</p> <p>Thanks in advance for any suggestion, comments, or corrections.</p>
<p>Here is an autovivifier which does not require you to set the level at which you want the default factory. When you get an attribute that does not exist on the DefaultHasher, it changes itself into an instance of the default factory:</p> <pre><code>class DefaultHasher(dict): def __init__(self, default_factory, change_self=None): self.default_factory = default_factory self.change_self = change_self def change(self, key): def _change(): x = self.default_factory() self[key] = x return x return _change def __missing__(self, key): self[key] = DefaultHasher(self.default_factory, self.change(key)) return self[key] def __getattr__(self, name): result = self.change_self() return getattr(result, name) foo = DefaultHasher(set) foo[1][2][3][4][5].add(1) print(foo) # {1: {2: {3: {4: {5: set([1])}}}}} foo[1][2][3].add(20) print(foo) # {1: {2: {3: set([20])}}} foo[1][3] = foo[1][2] print(foo) # {1: {2: {3: set([20])}, 3: {3: set([20])}}} foo[1][2].add(30) print(foo) # {1: {2: set([30]), 3: {3: set([20])}}} </code></pre>
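A shorter route to arbitrary depth, for comparison, is the classic one-line recursive defaultdict. The trade-off versus the class above: every level (including the last) is a dict, so you attach the set yourself at the leaf:

```python
from collections import defaultdict

def tree():
    # Every missing key materialises another tree(), to any depth.
    return defaultdict(tree)

foo = tree()
foo[1][2][3][4].setdefault(5, set()).add(1)
print(foo[1][2][3][4][5])  # {1}
```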
python|dictionary|functools
2
1,906,643
17,556,374
Adding mpl_toolkits.basemap canvas to Pyside
<p>I've generated a map using mpl_toolkits.basemap and it works. </p> <p>However, after trying to integrate it into Pyside, I'm having trouble displaying it as a QWidget. I'm not getting any errors, the program just hangs while I wait for it to launch. I've looked online, and there isn't much documentation on this subject</p> <pre><code>from PySide.QtGui import (QWidget, QVBoxLayout, QFormLayout, QLineEdit, QPushButton, QFileDialog, QGroupBox, QApplication) import sys import matplotlib matplotlib.use('Qt4Agg') matplotlib.rcParams['backend.qt4']='PySide' from matplotlib.figure import Figure from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas from matplotlib.backends.backend_qt4agg import NavigationToolbar2QTAgg as NavigationToolbar from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as plt import numpy as np class Map(QWidget): def __init__(self, parent=None): super(Map, self).__init__(parent) self.setupUI() def setupUI(self): self.fig = Figure() self.canvas = FigureCanvas(self.fig) self.layout = QVBoxLayout(self) self.mpl_toolbar = NavigationToolbar(self.canvas, self, coordinates = False) self.layout.addWidget(self.canvas) self.layout.addWidget(self.mpl_toolbar) self.axes = self.fig.add_subplot(111) self.setLayout(self.layout) # make sure the value of resolution is a lowercase L, # for 'low', not a numeral 1 map = Basemap(projection='robin', lat_0=0, lon_0=-100, resolution='l', area_thresh=1000.0, ax=self.axes) map.drawcoastlines() map.drawcountries() map.fillcontinents(color='green') map.drawmapboundary() # lat/lon coordinates of five cities. lats = [40.02, 32.73, 38.55, 48.25, 17.29] lons = [-105.16, -117.16, -77.00, -114.21, -88.10] cities=['Boulder, CO','San Diego, CA', 'Washington, DC','Whitefish, MT','Belize City, Belize'] # compute the native map projection coordinates for cities. x,y = map(lons,lats) # plot filled circles at the locations of the cities. map.plot(x,y,'bo') # plot the names of those five cities. 
for name,xpt,ypt in zip(cities,x,y): plt.text(xpt+50000,ypt+50000,name) self.canvas.draw() def main(): app = QApplication(sys.argv) map = Map() app.exec_() main() </code></pre>
<p>You forgot to show your widget. Add <code>self.show()</code> to the end of <code>setupUI</code>.</p>
python|qt|qt4|pyside|matplotlib-basemap
2
1,906,644
17,385,406
running nosetests on module locally installed with easy_install
<p>I can't get nosetests to test a newly installed Python pandas library. I don't have root access to this machine, so I installed pandas locally with easy_install:</p> <pre><code>$ easy_install --prefix=$HOME/.local pandas ... (Success) ... $ python &gt;&gt;&gt; import pandas &gt;&gt;&gt; </code></pre> <p>But several attempts to run nosetests on pandas have failed:</p> <pre><code>$ nosetests pandas Ran 0 tests in 0.001s OK $ nosetests ~/.local/lib/python2.7/site-packages/pandas-0.11.0-py2.7-linux-x86_64.egg/pandas/tests/ Ran 0 tests in 0.000s OK $ nosetests ~/.local/lib/python2.7/site-packages/pandas-0.11.0-py2.7-linux-x86_64.egg/pandas/tests/* ... Ran 3344 tests in 79.525s FAILED (SKIP=52, errors=101, failures=10) </code></pre> <p>I'm assuming the last failure is because some of the source files can't be found by nosetests. On a different machine with a different installation (Canopy Python), I get the desired output:</p> <pre><code>$ nosetests pandas ... Ran 3131 tests in 253.226s OK (SKIP=116) </code></pre> <p>Is there a way to tell nosetests where both the source and test directories of a locally-installed module are?</p>
<p>I had the same problem; I had to run this:</p> <pre><code>sudo nosetests /usr/lib64/python2.7/site-packages/pandas-0.14.0-py2.7-linux-x86_64.egg/pandas/tests/*.py </code></pre> <p>I do not have an explanation for it, but the result is:</p> <p><em>Ran 4261 tests in 166.166s OK (SKIP=42)</em></p>
python|pandas|nosetests
0
1,906,645
55,931,322
Take totals of sales of distributors, and get the percentage of each distributor on the total of sales overall for all distributors
<p>I am trying to take totals of each movie from different distributors, turn those totals into percentages of the entirety of totals combined for all distributors. Then I need to take every distributor thats under 1% and combine all of those into a different distributor called other.</p> <p>There are 100+ distributors, take a total of sales overall and create percentages for each distribution instead of the number of sales. This is the output for the following code below.</p> <pre><code>print(df.groupby(df['Distributor'])['Tickets Sold'].sum()) Distributor 20th Century Fox 141367982 25th Frame 2989 26 Aries 867 A24 6494901 Abramorama Films 367311 Anchor Bay Entertainment 12710 Archstone Entertainment 1299 Area 23a 4615 ArtAffects 48549 ArtMattan Productions 319 </code></pre>
<p>Create a boolean mask by comparing each value's share of the <code>sum</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.lt.html" rel="nofollow noreferrer"><code>Series.lt</code></a> (<code>&lt;</code>), filter by the inverted mask with <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>, and add a new row via <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#setting-with-enlargement" rel="nofollow noreferrer">setting with enlargement</a> holding the <code>sum</code> of the filtered rows under <code>1%</code>:</p> <pre><code>mask = df.div(df.sum()).lt(0.01) out = df[~mask] out.loc['others'] = df[mask].sum() print (out) 20th Century Fox 141367982 A24 6494901 others 438659 dtype: int64 </code></pre>
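Put together on a few rows of the question's data (ticket counts copied from the posted groupby output), the whole pipeline looks like this:

```python
import pandas as pd

sales = pd.Series({'20th Century Fox': 141367982,
                   'A24': 6494901,
                   'Abramorama Films': 367311,
                   'ArtAffects': 48549})

pct = sales / sales.sum()     # each distributor's share of all tickets
small = pct < 0.01            # everyone under 1% of the total
out = sales[~small]
out.loc['Other'] = sales[small].sum()   # lump the small ones together
print(out.to_dict())
# {'20th Century Fox': 141367982, 'A24': 6494901, 'Other': 415860}
```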
python|pandas|csv
1
1,906,646
73,452,767
Sending suppressions from SendGrid
<p>How can I make a script to automatically get suppressions (Bounce, Spam, Invalid and Block) from SendGrid and send them by email to anyone I want? I was considering using the SendGrid API, but I am not sure if it is possible.</p>
<p>You can retrieve a list of <a href="https://docs.sendgrid.com/api-reference/bounces-api/retrieve-all-bounces" rel="nofollow noreferrer">bounces</a>, <a href="https://docs.sendgrid.com/api-reference/blocks-api/retrieve-all-blocks" rel="nofollow noreferrer">blocks</a>, <a href="https://docs.sendgrid.com/api-reference/spam-reports-api/retrieve-all-spam-reports" rel="nofollow noreferrer">spam reports</a> and <a href="https://docs.sendgrid.com/api-reference/invalid-e-mails-api/retrieve-all-invalid-emails" rel="nofollow noreferrer">invalid emails</a> via the respective APIs.</p> <p>You can also use the <a href="https://docs.sendgrid.com/for-developers/tracking-events/event" rel="nofollow noreferrer">Event Webhook</a> to receive events about your emails including if an email was dropped, bounced, blocked or reported as spam.</p>
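As a sketch of how the retrieval side could be wired up against the documented v3 suppression endpoints — the helper name and the api key value are placeholders, and actually issuing the GET (e.g. with `requests`) plus emailing the result is left out:

```python
BASE = "https://api.sendgrid.com/v3/suppression"

# The four suppression lists from the question, as v3 API paths:
ENDPOINTS = {
    "bounces": f"{BASE}/bounces",
    "blocks": f"{BASE}/blocks",
    "spam_reports": f"{BASE}/spam_reports",
    "invalid_emails": f"{BASE}/invalid_emails",
}

def build_request(kind: str, api_key: str):
    # Returns (url, headers) for a GET on one suppression list.
    return ENDPOINTS[kind], {"Authorization": f"Bearer {api_key}"}

url, headers = build_request("bounces", "SG.placeholder-key")
print(url)  # https://api.sendgrid.com/v3/suppression/bounces
```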
python|php|sendgrid
0
1,906,647
66,410,945
Find all cycles with at least 3 nodes in a directed graph using dictionary data structure
<p><a href="https://i.stack.imgur.com/DrzZk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DrzZk.png" alt="Directed Graph" /></a></p> <p>The above graph was drawn using LaTeX: <a href="https://www.overleaf.com/read/rxhpghzbkhby" rel="nofollow noreferrer">https://www.overleaf.com/read/rxhpghzbkhby</a></p> <p>The above graph is represented as a dictionary in Python.</p> <pre class="lang-py prettyprint-override"><code>graph = { 'A' : ['B','D', 'C'], 'B' : ['C'], 'C' : [], 'D' : ['E'], 'E' : ['G'], 'F' : ['A', 'I'], 'G' : ['A', 'K'], 'H' : ['F', 'G'], 'I' : ['H'], 'J' : ['A'], 'K' : [] } </code></pre> <p>I have a large graph of about 3,378,546 nodes.</p> <p>Given the above-directed graph, I am trying to find circles with at least 3 and less than 5 different nodes, and output the first 3 circles.</p> <p>I spent 1 day and a half on this problem. I looked in Stackoverflow and even tried to follow this <a href="https://www.geeksforgeeks.org/detect-cycle-in-a-graph/#:%7E:text=Approach%3A%20Depth%20First%20Traversal%20can,the%20tree%20produced%20by%20DFS." rel="nofollow noreferrer">Detect Cycle in a Directed Graph</a> tutorial but couldn't come up with a solution.</p> <p>In this example, the output is a tab-delimited text file where each line has a cycle.</p> <pre class="lang-py prettyprint-override"><code>0 A, D, E, G 1 F, I, H </code></pre> <p><code>0</code> and <code>1</code> are indexes. Also, there is no order in the alphabet of the graph nodes.</p> <p>I tried this form <a href="https://www.educative.io/edpresso/how-to-implement-depth-first-search-in-python" rel="nofollow noreferrer">How to implement depth-first search in Python</a> tutorial:</p> <pre class="lang-py prettyprint-override"><code>visited = set() def dfs(visited, graph, node): if node not in visited: print (node) visited.add(node) for neighbour in graph[node]: dfs(visited, graph, neighbour) dfs(visited, graph, 'A') </code></pre> <p>But this doesn't help. 
I also tried this <a href="https://stackoverflow.com/a/40834276/10543310">Post</a></p>
<p><strong>NOTE</strong>: This solution is an extended version of the described one. I extended it to the original graph with ~3 million nodes, look for all cycles with between 3 and 40 nodes, and store the first 3 cycles into a file.</p> <hr /> <p>I came up with the following solution.</p> <pre class="lang-py prettyprint-override"><code># implementation of Johnson's cycle finding algorithm # Original paper: Donald B Johnson. &quot;Finding all the elementary circuits of a directed graph.&quot; SIAM Journal on Computing. 1975. from collections import defaultdict import networkx as nx from networkx.utils import not_implemented_for, pairwise @not_implemented_for(&quot;undirected&quot;) def findCycles(G): &quot;&quot;&quot;Find simple cycles of a directed graph. A `simple cycle` is a closed path where no node appears twice. Two elementary circuits are distinct if they are not cyclic permutations of each other. This is iterator/generator version of Johnson's algorithm [1]_. There may be better algorithms for some cases [2]_ [3]_. Parameters ---------- G : NetworkX DiGraph A directed graph Returns ------- cycle_generator: generator A generator that produces elementary cycles of the graph. Each cycle is represented by a list of nodes along the cycle. Examples -------- &gt;&gt;&gt; graph = {'A' : ['B','D', 'C'], 'B' : ['C'], 'C' : [], 'D' : ['E'], 'E' : ['G'], 'F' : ['A', 'I'], 'G' : ['A', 'K'], 'H' : ['F', 'G'], 'I' : ['H'], 'J' : ['A'], 'K' : [] } &gt;&gt;&gt; G = nx.DiGraph() &gt;&gt;&gt; G.add_nodes_from(graph.keys()) &gt;&gt;&gt; for keys, values in graph.items(): G.add_edges_from(([(keys, node) for node in values])) &gt;&gt;&gt; list(nx.findCycles(G)) [['F', 'I', 'H'], ['G', 'A', 'D', 'E']] Notes ----- The implementation follows pp. 79-80 in [1]_. The time complexity is $O((n+e)(c+1))$ for $n$ nodes, $e$ edges and $c$ elementary circuits. References ---------- .. [1] Finding all the elementary circuits of a directed graph. D. B. 
Johnson, SIAM Journal on Computing 4, no. 1, 77-84, 1975. https://doi.org/10.1137/0204007 .. [2] Enumerating the cycles of a digraph: a new preprocessing strategy. G. Loizou and P. Thanish, Information Sciences, v. 27, 163-182, 1982. .. [3] A search strategy for the elementary cycles of a directed graph. J.L. Szwarcfiter and P.E. Lauer, BIT NUMERICAL MATHEMATICS, v. 16, no. 2, 192-204, 1976. -------- &quot;&quot;&quot; def _unblock(thisnode, blocked, B): stack = {thisnode} while stack: node = stack.pop() if node in blocked: blocked.remove(node) stack.update(B[node]) B[node].clear() # Johnson's algorithm requires some ordering of the nodes. # We assign the arbitrary ordering given by the strongly connected comps # There is no need to track the ordering as each node removed as processed. # Also we save the actual graph so we can mutate it. We only take the # edges because we do not want to copy edge and node attributes here. subG = type(G)(G.edges()) sccs = [scc for scc in nx.strongly_connected_components(subG) if len(scc) in list(range(3, 41))] # Johnson's algorithm exclude self cycle edges like (v, v) # To be backward compatible, we record those cycles in advance # and then remove from subG for v in subG: if subG.has_edge(v, v): yield [v] subG.remove_edge(v, v) while sccs: scc = sccs.pop() sccG = subG.subgraph(scc) # order of scc determines ordering of nodes startnode = scc.pop() # Processing node runs &quot;circuit&quot; routine from recursive version path = [startnode] blocked = set() # vertex: blocked from search? 
closed = set() # nodes involved in a cycle blocked.add(startnode) B = defaultdict(set) # graph portions that yield no elementary circuit stack = [(startnode, list(sccG[startnode]))] # sccG gives comp nbrs while stack: thisnode, nbrs = stack[-1] if nbrs: nextnode = nbrs.pop() if nextnode == startnode: yield path[:] closed.update(path) # print &quot;Found a cycle&quot;, path, closed elif nextnode not in blocked: path.append(nextnode) stack.append((nextnode, list(sccG[nextnode]))) closed.discard(nextnode) blocked.add(nextnode) continue # done with nextnode... look for more neighbors if not nbrs: # no more nbrs if thisnode in closed: _unblock(thisnode, blocked, B) else: for nbr in sccG[thisnode]: if thisnode not in B[nbr]: B[nbr].add(thisnode) stack.pop() path.pop() # done processing this node H = subG.subgraph(scc) # make smaller to avoid work in SCC routine sccs.extend(scc for scc in nx.strongly_connected_components(H) if len(scc) in list(range(3, 41))) </code></pre> <pre class="lang-py prettyprint-override"><code>import sys, csv, json def findAllCycles(jsonInputFile, textOutFile): &quot;&quot;&quot;Find simple cycles of a directed graph (jsonInputFile). 
Parameters: ---------- jsonInputFile: a json file that has all concepts textOutFile: give a desired name of output file Returns: ---------- a .text file (named: {textOutFile}.txt) has the first 3 cycles found in jsonInputFile Each cycle is represented by a list of nodes along the cycle &quot;&quot;&quot; with open(jsonInputFile) as infile: graph = json.load(infile) # Convert the json file to a NetworkX directed graph G = nx.DiGraph() G.add_nodes_from(graph.keys()) for keys, values in graph.items(): G.add_edges_from(([(keys, node) for node in values])) # Search for all simple cycles existing in the graph _cycles = list(findCycles(G)) # Start with an empty list and populate it by looping over all cycles # in _cycles that have at least 3 and at most 40 different concepts (nodes) cycles = [] for cycle in _cycles: if len(cycle) in list(range(3, 41)): cycles.append(cycle) # Store the cycles under the constraint in {textOutFile}.txt with open(textOutFile, 'w') as outfile: for cycle in cycles[:3]: outfile.write(','.join(n for n in cycle)+'\n') outfile.close() # When the process finishes, print Done!! return 'Done!!' infile = sys.argv[1] outfile = sys.argv[2] first_cycles = findAllCycles(infile, outfile) </code></pre> <p>To run this program, you simply use a command line as follows:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt; python3 {program file name}.py graph.json {desired output file name}[.txt][.csv] </code></pre> <p>Let, for example, {desired output file name}[.txt][.csv] be <code>first_3_cycles_found.txt</code></p> <p>In my case, the graph has 3,378,546 nodes, which took me ~40 min to find all cycles using the above code. Thus, the output file will be:</p> <p><a href="https://i.stack.imgur.com/V6tni.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V6tni.png" alt="enter image description here" /></a></p> <p><strong>Please contribute to this if you see it needs any improvement or something else to be added.</strong></p>
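For the small example graph in the question, networkx isn't strictly needed — a plain DFS over the adjacency dict already enumerates the qualifying cycles. A sketch (the function is mine; each cycle is reported once, in discovery order, which for this graph reproduces the expected output):

```python
graph = {'A': ['B', 'D', 'C'], 'B': ['C'], 'C': [],
         'D': ['E'], 'E': ['G'], 'F': ['A', 'I'], 'G': ['A', 'K'],
         'H': ['F', 'G'], 'I': ['H'], 'J': ['A'], 'K': []}

def cycles_between(graph, lo=3, hi=5):
    """Enumerate simple cycles with lo <= length <= hi, each reported once."""
    found = set()   # node-sets of cycles seen, to drop rotated duplicates
    out = []
    def dfs(start, node, path):
        for nxt in graph[node]:
            if nxt == start and lo <= len(path) <= hi:
                key = frozenset(path)
                if key not in found:
                    found.add(key)
                    out.append(path[:])
            elif nxt not in path and len(path) < hi:
                dfs(start, nxt, path + [nxt])
    for start in graph:
        dfs(start, start, [start])
    return out

print(cycles_between(graph))  # [['A', 'D', 'E', 'G'], ['F', 'I', 'H']]
```

This brute force is fine for a toy graph; on the 3-million-node graph the Johnson's-algorithm version above is the right tool.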
python-3.x|dictionary|graph|depth-first-search|directed-graph
2
1,906,648
64,702,974
early_stopping set to False, but iteration stops before max_iter in Sklearn MLPClassifier
<p>I'm working with sklearn's MLPClassifier to build a neural network model.</p> <pre><code>from sklearn.neural_network import MLPClassifier from sklearn.metrics import accuracy_score clf = MLPClassifier(activation='logistic', learning_rate_init=0.5, early_stopping=False, max_iter=500, random_state=42, hidden_layer_sizes=(10,1)).fit(X_train, y_train) </code></pre> <p>I set early_stopping to False and max_iter to 500, but it stops at the 41st iteration with loss=0.0989939. Why didn't it reach the maximum number of iterations?</p>
<p>See the descriptions of the parameters <code>tol</code> and <code>n_iter_no_change</code>: if the weights converge sufficiently, then learning will stop early.</p> <p>That's distinct from the use of <code>early_stopping</code>, which cuts short the learning when a <em>validation</em> score stops improving (generally, worsens because of overfitting). In your case, the model's weights just aren't moving enough to justify further calculations. If you really want to reach 500 iterations, you can set <code>tol=0</code> or <code>n_iter_no_change=500</code>.</p>
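The stopping rule itself can be sketched without sklearn: training halts once the loss fails to improve by more than `tol` for `n_iter_no_change` consecutive iterations. The function below mirrors sklearn's parameter names but is an illustration, not sklearn's internal code, and the loss sequence is invented:

```python
def stop_iteration(losses, tol=1e-4, n_iter_no_change=10, max_iter=500):
    # Returns the iteration at which training stops under the tol rule.
    best = float("inf")
    stale = 0
    for i, loss in enumerate(losses[:max_iter], start=1):
        if loss < best - tol:      # a "real" improvement
            best = loss
            stale = 0
        else:
            stale += 1
            if stale >= n_iter_no_change:
                return i           # converged: weights barely moving
    return min(len(losses), max_iter)

# 30 shrinking losses, then a flat tail -> stops 10 iterations into the tail
losses = [1.0 / (i + 1) for i in range(30)] + [1.0 / 30] * 50
print(stop_iteration(losses))  # 40
```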
python|scikit-learn|neural-network
1
1,906,649
71,703,065
Why is my django-crontab cronjob not executing?
<p>I have a <code>django-project</code> with an app called <code>app</code> that has a file called <code>cron.py</code> with a function called <code>main_routine()</code>.</p> <p>I want the <code>main_routine()</code> function to be called every minute.</p> <p>In my <code>django-project/django-project/settings.py</code> I have this:</p> <pre><code>INSTALLED_APPS = [ 'django_crontab', ... ] ... CRONJOBS = [ ('*/1 * * * *', 'app.cron.main_routine') ] </code></pre> <p>My <code>django-project/app/cron.py</code> looks like this:</p> <pre><code>from app.models import SomeModel from django.utils import timezone def main_routine(): object = SomeModel.objects.get(name='TestObject1') object.updated = timezone.now() object.save() </code></pre> <p>Of course I ran: <code>python3 manage.py crontab add</code> And the terminal printed:</p> <pre><code>adding cronjob: (someHash) -&gt; ('*/1 * * * *', 'app.cron.main_routine') </code></pre> <p>To be safe I run: <code>python3 manage.py crontab show</code> And the terminal prints:</p> <pre><code>Currently active jobs in crontab: someHash -&gt; ('*/1 * * * *', 'app.cron.main_routine') </code></pre> <p>To check if everything works I run: <code>python3 manage.py crontab run someHash</code></p> <p>Then I take a look at the admin page and see that <code>TestObject1</code> has an <code>updated</code> datetime of just now. (so far everything seems to be going smoothly)</p> <p><strong>The main issue: No matter how long I wait, the job will not be executed automatically.</strong></p> <p>What am I doing wrong?</p> <p>Some background info:</p> <ul> <li>I am running this inside an Ubuntu Docker Container on a VPS with nothing else on it.</li> </ul>
<p>First: I still do not know why <code>crontab</code> is not working.</p> <p>However, I found a way around this issue.</p> <p>You can use the <em>Python Advanced Scheduler</em>, aka <code>apscheduler</code>, as a substitute for <code>crontab</code>.</p> <p>The idea is to write a module that has your desired functionality in it, and wire it into one of your apps' <code>AppConfig</code> classes in its <code>apps.py</code> file.</p> <p>There is a great walkthrough in <a href="https://medium.com/@kevin.michael.horan/scheduling-tasks-in-django-with-the-advanced-python-scheduler-663f17e868e6" rel="nofollow noreferrer">this article</a>.</p>
python|django|django-crontab
0
1,906,650
62,786,636
Keras LearningRateScheduler isn't executing due to an output type error
<p>I am working on a neural network that is having issues in that it will run as expected for maybe 20 epochs and then the accuracy suddenly plummets. I've read that it could be an issue with the learning rate and decreasing the learning rate value via a learning rate schedule might be the solution. I'm using the <code>Keras LearningRateScheduler</code> to try this. I am having an issue with the model accepting the new learning rates. It will run for the first ten epochs. When the rate is changed for the first time, it gives this error: <code>ValueError: The output of the &quot;schedule&quot; function should be float.</code> I have tried casting the return values using <code>float()</code> with no luck. I can't seem to find any explanation that makes sense. I'm hoping someone here can help me.</p> <p>I am using Python on Google Colab GPU to do this. The code for the network is below. Please let me know if more info is needed.</p> <pre><code>X_train, X_validate, Y_train, Y_validate=train_test_split(X,Y,test_size=0.2) from keras.backend import sigmoid def swish(x): return (x*sigmoid(x)) from keras.utils.generic_utils import get_custom_objects from keras.layers import Activation get_custom_objects().update({'swish': Activation(swish)}) model=Sequential() model.add(Dense(1024, activation='swish',input_shape=(6,))) model.add(Dense(512, activation='swish')) model.add(Dense(256, activation='swish')) model.add(Dense(128, activation='swish')) model.add(Dense(64, activation='swish')) model.add(Dense(32, activation='swish')) model.add(Dense(16, activation='swish')) model.add(Dense(10, activation='softmax')) model.compile(loss='categorical_crossentropy',optimizer='adam', metrics=['accuracy']) from keras.callbacks import LearningRateScheduler def scheduler(epochs, lr): if epochs &lt; 10: return 0.001 elif 10 &lt; epochs &lt; 20: return 0.0005 elif 20 &lt; epochs &lt; 30: return 0.00025 elif 30 &lt; epochs &lt; 50: return 0.000125 elif 50 &lt; epochs &lt; 75: return 0.0000625 
elif 75 &lt; epochs: return 0.0000313 callback=LearningRateScheduler(scheduler, verbose=1) model.fit(X_train, Y_train, batch_size=75, epochs=50, callbacks=[callback], verbose=1) #Line referenced in error model.summary() score=model.evaluate(X_validate, Y_validate, verbose=1) print(&quot;The loss and accuracy of the validation set are: &quot;+str(score)) x=X_validate[52] y=np.argmax(Y_validate[52]) y_pred=model.predict(np.array([x])) y_pred=np.argmax(y_pred) print(&quot;For the input data, the known mode is: &quot;+str(y)) print(&quot;For the input data, the predicted mode is: &quot;+str(y_pred)) </code></pre>
<p>I figured it out. The return value can't simply be a value for the learning rate. Instead, it requires that a function be passed. For example: <code>return lr * math.exp(-0.1)</code>.</p> <p>Also, the schedule has seemed to solve my previous issue of the accuracy dropping suddenly.</p>
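For illustration, a hedged sketch of both fixes in plain Python. Note that the original schedule's strict `<` comparisons leave gaps at the boundary epochs 10, 20, 30, 50 and 75, where the function falls through and returns `None`; that is another way to trigger the same `ValueError`, since the callback requires a float return value.

```python
import math

def lr_schedule(epoch, lr):
    """Gap-free piecewise schedule. The original code used strict '<'
    on both sides, so the boundary epochs (10, 20, 30, 50, 75) fell
    through every branch and returned None, which is exactly what
    LearningRateScheduler rejects with "should be float"."""
    boundaries = [(10, 0.001), (20, 0.0005), (30, 0.00025),
                  (50, 0.000125), (75, 0.0000625)]
    for limit, rate in boundaries:
        if epoch < limit:
            return float(rate)
    return 0.0000313

def lr_decay(epoch, lr):
    """The accepted fix: smoothly decay the incoming rate instead."""
    return float(lr * math.exp(-0.1))
```

Either function can be passed to `LearningRateScheduler`; both always return a float for every epoch.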
python|machine-learning|keras|deep-learning
0
1,906,651
56,768,300
Custom function for merge pandas dataframe
<p>I have the code below for merging:</p> <pre><code>df_merge_1 = pd.merge(df_order_products_prior, df_products, on="product_id", how="left") df_merge_2 = pd.merge(df_order_products_prior, df_products, on=["product_id","user_id"], how="inner") </code></pre> <p>Is there a generic function that can be written and reused for different merges?</p> <p>my function:</p> <pre><code>def merge_df(df1, df2): return pd.merge( df1, df2, how='inner', on=["product_id", "user_id"], suffixes=('', '_y')) </code></pre> <p>But I wanted it to be more dynamic, where I can pass the below values to the function:</p> <ol> <li><p>Column names by which it will merge (it can be a single column or multiple columns; it varies from case to case)</p></li> <li><p>How - can vary (inner, left, right)</p></li> </ol>
<p>Do you want something like this:</p> <pre><code>def merge_df(df1, df2, on, how='inner', suffixes=('', '_y')): return pd.merge(df1, df2, how=how, on=on.split(','), suffixes=suffixes) </code></pre>
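To make the parameterized-join idea concrete without pandas, here is a hedged pure-Python sketch of the same interface, keyed by a list of join columns and a join mode. The row format and names are illustrative, not from the original post:

```python
def merge_rows(left, right, on, how="inner"):
    """Join two lists of dicts on the columns in `on`.
    Supports how="inner" (only matching keys) and how="left"
    (every left row, with None-filled right columns when unmatched)."""
    def key(row):
        return tuple(row[c] for c in on)

    # Index the right side by join key for O(1) lookups
    index = {}
    for row in right:
        index.setdefault(key(row), []).append(row)

    right_cols = {c for row in right for c in row} - set(on)
    out = []
    for row in left:
        matches = index.get(key(row), [])
        if matches:
            for match in matches:
                merged = dict(row)
                merged.update({c: match[c] for c in match if c not in on})
                out.append(merged)
        elif how == "left":
            merged = dict(row)
            merged.update({c: None for c in right_cols})
            out.append(merged)
    return out
```

The pandas wrapper in the answer exposes exactly the same two knobs (`on` and `how`), just forwarded to `pd.merge`.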
python-3.x|function
3
1,906,652
68,933,637
using input is there a way to input a range
<p>I have some code which takes a series of numbers separated by commas using input(). While that works fine I would like the user to be able to put in these numbers but also include a range if they wanted to.</p> <p>input = 1,2,3 # works fine</p> <p>input = 1,2,3,4-9,15 # no idea if this can work</p> <p>I have no idea if this is best pursued with a regex as they totally mystify me tbh.</p> <p><strong>Code</strong></p> <pre><code>def convert_to_int(user_input_str): input_list_str = list(user_input_str.split(&quot;,&quot;)) number_list = [] for n in input_list_str: number_list.append(int(n)) return number_list if __name__ == &quot;__main__&quot;: #manual_testing() print(&quot;enter list, each integer separated by a comma&quot;) user_input_str = input() print(&quot;From user : &quot;, convert_to_int(user_input_str)) </code></pre> <p><strong>Input</strong></p> <pre><code>1,2,3 </code></pre> <p><strong>Returns</strong></p> <pre><code>From user : [1, 2, 3] </code></pre> <p><strong>Desired functionality</strong></p> <p><strong>input</strong></p> <p>1,2,3,9-14,21</p> <p><strong>Desired output</strong></p> <pre><code>From user : [1, 2, 3, 9, 10, 11, 12, 13, 14, 21] </code></pre>
<p>Just add another checker for <code>-</code> and add the range to the list:</p> <pre><code>def convert_to_int(user_input_str): input_list_str = list(user_input_str.split(&quot;,&quot;)) number_list = [] for n in input_list_str: if '-' not in n: number_list.append(int(n)) else: number_list.extend(range(int(n.split('-')[0]), int(n.split('-')[1]) + 1)) return number_list if __name__ == &quot;__main__&quot;: #manual_testing() print(&quot;enter list, each integer separated by a comma&quot;) user_input_str = input() print(&quot;From user : &quot;, convert_to_int(user_input_str)) </code></pre>
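The same idea as the answer, collected into a small reusable function. The name is my own, and tokens are assumed to be nonnegative integers:

```python
def expand_ranges(text):
    """Expand a comma-separated spec like "1,2,4-6" into a list of ints.
    A token containing a dash becomes an inclusive range; any other
    token is a single integer. Assumes nonnegative values."""
    result = []
    for token in text.split(","):
        token = token.strip()
        if "-" in token:
            lo, hi = (int(part) for part in token.split("-", 1))
            result.extend(range(lo, hi + 1))
        else:
            result.append(int(token))
    return result
```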
python|string|list
2
1,906,653
72,726,233
DataFrame Groupby gives all the values even after query
<p>Here is my code.</p> <pre><code>df1=pd.DataFrame(df_raw.query('COND1==&quot;A&quot; and COND2!=&quot;B&quot;')) df2=df1.groupby(['CAT1','CAT2']).size() </code></pre> <p>I tried to get the row counts after querying the data, but after the groupby, rows with a count of 0 show up as well.</p> <p>What I expected was</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>CAT1</th> <th>CAT2</th> <th>COUNT</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>B</td> <td>2</td> </tr> <tr> <td>A</td> <td>C</td> <td>5</td> </tr> <tr> <td>B</td> <td>A</td> <td>7</td> </tr> <tr> <td>B</td> <td>D</td> <td>3</td> </tr> </tbody> </table> </div> <p>but what I got is</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>CAT1</th> <th>CAT2</th> <th>COUNT</th> <th>#COMMENT</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>A</td> <td>0</td> <td>&lt;-</td> </tr> <tr> <td>A</td> <td>B</td> <td>2</td> <td></td> </tr> <tr> <td>A</td> <td>C</td> <td>5</td> <td></td> </tr> <tr> <td>A</td> <td>D</td> <td>0</td> <td>&lt;-</td> </tr> <tr> <td>B</td> <td>A</td> <td>7</td> <td></td> </tr> <tr> <td>B</td> <td>B</td> <td>0</td> <td>&lt;-</td> </tr> <tr> <td>B</td> <td>C</td> <td>0</td> <td>&lt;-</td> </tr> <tr> <td>B</td> <td>D</td> <td>3</td> <td></td> </tr> </tbody> </table> </div> <p>The rows marked with an arrow (&lt;-) still appear, even though they were supposed to be excluded by the query.</p> <p>Please help me get rid of these rows.</p>
<p>Try this after the groupby. Is that what you're looking for?</p> <pre><code>df2.drop(df2.query('COUNT == 0').index) </code></pre> <p>RESULT:</p> <pre><code> CAT1 CAT2 COUNT 1 A B 2 2 A C 5 4 B A 7 7 B D 3 </code></pre>
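The zero rows most likely appear because the grouping columns are categorical, and a categorical groupby emits every category combination; in pandas, `groupby(..., observed=True)` keeps only the observed combinations. The effect is easy to see with a plain `collections.Counter`, which by construction can only count pairs that actually occur. The sample data below is invented:

```python
from collections import Counter

def pair_counts(rows, c1="CAT1", c2="CAT2"):
    """Count (CAT1, CAT2) pairs over rows that passed the filter.
    Unlike a categorical groupby, a Counter can only report pairs
    that actually occur, so no zero rows can appear."""
    counts = Counter((row[c1], row[c2]) for row in rows)
    return {pair: n for pair, n in sorted(counts.items())}
```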
python|pandas
0
1,906,654
68,371,321
How to Count Instances of Words per Line in a txt file
<p>I am having an issue counting the number of lines with 'union' in a txt file. I am able to count how many times 'union' appears in the txt file but the number of lines that have 'union' in it are incorrect.</p> <pre><code># Iterate through file_data and # compute your counts in this cell # --------------------------------- file_data = [] with open('/dsa/data/all_datasets/hamilton-federalist-548.txt', 'r') as file: # Hint: for line in file_data: line_count = 0 word_count = 0 for line in file_data: this_line_count = 0 # ------------ Add your code below -------------- #Loop through the array of words 'line' for line in file: line = line.strip() split_line = line.split(' ') file_data.append(split_line) #For each word in the array, test it to 'union' for line in file_data: if line == line.count('union'): line_count += 1 # Returns 'Lines: 0' - this is wrong. for word in file_data: word_count += word.count('union') # Returns 'Words: 35' #if it's a match increment this_line_count #at the end of the line loop add this_line_count to word_count #if this_line_count isn't 0, line_count would increment by one # ------------ =================== -------------- print('Lines: {}; Words: {}'.format(line_count, word_count)) </code></pre>
<p>I think the problem is here:</p> <pre class="lang-py prettyprint-override"><code>#For each word in the array, test it to 'union' for line in file_data: if line == line.count('union'): line_count += 1 # Returns 'Lines: 0' - this is wrong. </code></pre> <p>Instead of the above code, try</p> <pre class="lang-py prettyprint-override"><code> for line in file_data: if 'union' in line: # Check if 'union' is present in line, which is now a list of strings line_count += 1 </code></pre>
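Putting both counts together, a hedged sketch of the whole task; the function name is my own:

```python
def count_word(lines, word):
    """Return (line_count, word_count): how many lines contain `word`,
    and how many occurrences there are in total. Note that str.count
    matches substrings, so "reunion" would also count as "union";
    split each line into words first if that matters."""
    line_count = 0
    word_count = 0
    for line in lines:
        hits = line.count(word)
        if hits:
            line_count += 1
        word_count += hits
    return line_count, word_count
```

Reading the file once into a list of lines and passing it to this function avoids the original code's mixed-up loops.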
python-3.x
1
1,906,655
59,166,522
Custom TensorFlow loss function with batch size > 1?
<p>I have some neural network with following code snippets, note that batch_size == 1 and input_dim == output_dim:</p> <pre><code>net_in = tf.Variable(tf.zeros(shape = [batch_size, input_dim]), dtype=tf.float32) input_placeholder = tf.compat.v1.placeholder(shape = [batch_size, input_dim], dtype=tf.float32) assign_input = net_in.assign(input_placeholder) # Some matmuls, activations, dropouts, normalizations... net_out = tf.tanh(output_before_activation) def loss_fn(output, input): #input.shape = output.shape = (batch_size, input_dim) output = tf.reshape(output, [input_dim,]) # shape them into 1d vectors input = tf.reshape(input, [input_dim,]) return my_fn_that_only_takes_in_vectors(output, input) # Create session, preprocess data ... for epoch in epoch_num: for batch in range(total_example_num // batch_size): sess.run(assign_input, feed_dict = {input_placeholder : some_appropriate_numpy_array}) sess.run(optimizer.minimize(loss_fn(net_out, net_in))) </code></pre> <p>Currently the neural network above works fine, but it is very slow because it updates gradient every sample (batch size = 1). I would like to set batch size > 1, but my_fn_that_only_takes_in_vectors cannot accommodate matrices whose first dimension is not 1. Due to the nature of my custom loss, flattening the batch input into a vector of length (batch_size * input_dim) seems to not work. </p> <p>How would I write my new custom loss_fn now that the input and output are N x input_dim where N > 1? In Keras this would not have been an issue because keras somehow takes the average of the gradients of each example in the batch. For my TensorFlow function, should I take each row as a vector individually, pass them to my_fn_that_only_takes_in_vectors, then take the average of the results?</p>
<p>You can use a function that computes the loss on the whole batch, and works independently on the batch size. Basically the operations are applied to the whole first dimension of the input (the first dimension represents the element number in the batch). Here is an example, I hope this helps to see how the operations are carried out:</p> <pre><code> def my_loss(y_true, y_pred): dx2 = tf.math.squared_difference(y_true[:, 0], y_true[:, 2]) # shape (BatchSize, ) dy2 = tf.math.squared_difference(y_true[:, 1], y_true[:, 3]) # shape: (BatchSize, ) denominator = dx2 + dy2 # shape: (BatchSize, ) dst_vec = tf.math.squared_difference(y_true, y_pred) # shape: (Batch, n_labels) numerator = tf.reduce_sum(dst_vec, axis=-1) # shape: (BatchSize,) loss_vector = tf.cast(numerator / denominator, dtype=&quot;float32&quot;) # shape: (BatchSize,) this is a vector containing the loss of each element of the batch loss = tf.reduce_sum(loss_vector ) #if you want to sum the losses return loss </code></pre> <p>I am not sure whether you need to return the sum or the avg of the losses for the batch. If you sum, make sure to use a validation dataset with same batch size, otherwise the loss is not comparable.</p>
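The reduce-per-sample-then-combine pattern is independent of TensorFlow. Here is a plain-Python sketch using ordinary mean squared error rather than the answer's custom formula: each row is one batch element, the per-sample loss reduces over features, and the batch loss averages over samples.

```python
def batch_mse(y_true, y_pred):
    """Mean squared error over a batch of vectors.
    Each row is one sample; the per-sample loss is the sum of squared
    feature differences, and the batch loss is the mean of those,
    mirroring the reduce-per-sample-then-average pattern."""
    per_sample = [
        sum((t - p) ** 2 for t, p in zip(row_t, row_p))
        for row_t, row_p in zip(y_true, y_pred)
    ]
    return sum(per_sample) / len(per_sample)
```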
tensorflow|neural-network
1
1,906,656
35,417,111
Python: How to evaluate the residuals in StatsModels?
<p>I want to evaluate the residuals (y minus y-hat).</p> <p>I know how to do that:</p> <pre><code>df = pd.read_csv('myFile', delim_whitespace = True, header = None) df.columns = ['column1', 'column2'] y, X = ps.dmatrices('column1 ~ column2',data = df, return_type = 'dataframe') model = sm.OLS(y,X) results = model.fit() predictedValues = results.predict() #print predictedValues yData = df.as_matrix(columns = ['column1']) res = yData - predictedValues </code></pre> <p>I wonder if there is a built-in method to do this.</p>
<p>That's stored in the <code>resid</code> attribute of the <a href="http://statsmodels.sourceforge.net/stable/generated/statsmodels.regression.linear_model.RegressionResults.resid.html?highlight=resid#statsmodels.regression.linear_model.RegressionResults.resid" rel="noreferrer">Results class</a>.</p> <p>Likewise there's a <code>results.fittedvalues</code> attribute, so you don't need the <code>results.predict()</code> call.</p>
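As a sanity check that residuals are just observed minus fitted values, here is a statsmodels-free sketch using the closed-form simple-regression fit:

```python
def ols_residuals(x, y):
    """Residuals of a simple least-squares line fit y ~ a + b*x,
    computed from the closed-form slope and intercept."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]
```

A classic property worth checking: for an OLS fit with an intercept, the residuals sum to zero.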
python|pandas|statsmodels|patsy
32
1,906,657
59,515,380
Broken Pipe During BLE Scan (RPi, Python 3.7)
<p>I've got some Python 3.7 BLE scanning code which typically runs fantastic on a RPi3 device in production. However, recently I've seen devices* introduced into the environment that will crash the BLE Scanner in ways that I don't know how to prevent/detect.</p> <p>*Note: A Windows 10 Lenovo laptop with Bluetooth chip <code>Qualcomm Atheros QCA61x4 Bluetooth 4.1</code> can bring this code to its knees. I've also heard that folks running the above next to a BLE Beacon site also crash frequently.</p> <p>The crash occurs during the <code>pkt = my_sock.recv(255)</code> command and the exception is a <code>_bluetooth.error (32, 'Broken Pipe')</code></p> <p>Here is a minimal code example below that demonstrates the problem:</p> <pre><code>import logging import select import struct import sys import time from subprocess import check_output, STDOUT, CalledProcessError import bluetooth._bluetooth as bluez ROOT_LOGGER = logging.getLogger() ROOT_LOGGER.setLevel(logging.DEBUG) ROOT_LOGGER.addHandler(logging.NullHandler()) SYS_CTL_LOG_FMT = '%(filename)-28s [%(lineno)-4d] %(levelname)-7s %(message)s' SYSCTL_HANDLER = logging.StreamHandler(sys.stdout) SYSCTL_HANDLER.setLevel(logging.DEBUG) SYSCTL_HANDLER.setFormatter(logging.Formatter(SYS_CTL_LOG_FMT)) ROOT_LOGGER.addHandler(SYSCTL_HANDLER) logger = logging.getLogger(__name__) DEV_ID = 0 OGF_LE_CTL = 0x08 OCF_LE_SET_SCAN_PARAMETERS = 0x000B OCF_LE_SET_SCAN_ENABLE = 0x000C SCAN_RANDOM = 0x01 OWN_TYPE = SCAN_RANDOM SCAN_TYPE = 0x01 INTERVAL = 0x10 WINDOW = 0x10 FILTER = 0x00 # all advertisements, not just whitelisted devices ENABLE = 0x01 if __name__ == '__main__': while True: my_sock = bluez.hci_open_dev(DEV_ID) my_sock.settimeout(30.0) cmd_pkt = struct.pack("&lt;BBBBBBB", SCAN_TYPE, 0x0, INTERVAL, 0x0, WINDOW, OWN_TYPE, FILTER) bluez.hci_send_cmd(my_sock, OGF_LE_CTL, OCF_LE_SET_SCAN_PARAMETERS, cmd_pkt) cmd_pkt = struct.pack("&lt;BB", ENABLE, 0x00) bluez.hci_send_cmd(my_sock, OGF_LE_CTL, OCF_LE_SET_SCAN_ENABLE, cmd_pkt) flt = 
bluez.hci_filter_new() bluez.hci_filter_all_events(flt) bluez.hci_filter_set_ptype(flt, bluez.HCI_EVENT_PKT) my_sock.setsockopt(bluez.SOL_HCI, bluez.HCI_FILTER, flt) try: packets_received = 0 while True: ready_to_read, ready_to_write, in_error = select.select([my_sock, ], [my_sock, ], [my_sock, ], 5) if len(ready_to_read) == 0: time.sleep(0.001) continue try: pkt = my_sock.recv(255) except bluez.error as exc_data: logger.error(f'Received a _bluetooth.error while trying to read. Aborting: {exc_data}') raise packets_received += 1 ptype, event, plen = struct.unpack("BBB", pkt[:3]) logger.info(f'{packets_received} {ptype}, {event}, {plen}') except bluez.error: my_sock.close() while True: # this loops until hciconfig is able to successfully restart try: check_output('sudo hciconfig hci0 up', shell=True, stderr=STDOUT) except CalledProcessError as exc_data: logger.warning(f'{type(exc_data)}: {exc_data}') continue time.sleep(1) break </code></pre> <p>Output looks like this:</p> <pre><code>pi@raspberrypi:~/my_test $ sudo python3 distilled_test.py distilled_test.py [63 ] INFO 1 4, 14, 4 distilled_test.py [63 ] INFO 2 4, 14, 4 distilled_test.py [63 ] INFO 3 4, 62, 27 distilled_test.py [63 ] INFO 4 4, 62, 26 distilled_test.py [63 ] INFO 5 4, 62, 12 distilled_test.py [63 ] INFO 6 4, 62, 31 distilled_test.py [63 ] INFO 7 4, 62, 31 distilled_test.py [63 ] INFO 8 4, 62, 31 distilled_test.py [63 ] INFO 9 4, 62, 31 distilled_test.py [63 ] INFO 10 4, 62, 31 distilled_test.py [63 ] INFO 11 4, 62, 31 distilled_test.py [63 ] INFO 12 4, 62, 31 distilled_test.py [63 ] INFO 13 4, 62, 31 distilled_test.py [63 ] INFO 14 4, 62, 31 distilled_test.py [63 ] INFO 15 4, 62, 31 distilled_test.py [63 ] INFO 16 4, 62, 31 distilled_test.py [63 ] INFO 17 4, 62, 31 distilled_test.py [63 ] INFO 18 4, 62, 31 distilled_test.py [63 ] INFO 19 4, 62, 31 distilled_test.py [63 ] INFO 20 4, 62, 31 distilled_test.py [63 ] INFO 21 4, 62, 31 distilled_test.py [63 ] INFO 22 4, 62, 31 distilled_test.py [59 ] ERROR 
Received a _bluetooth.error while trying to read. Aborting: (32, 'Broken pipe') distilled_test.py [72 ] WARNING &lt;class 'subprocess.CalledProcessError'&gt;: Command 'sudo hciconfig hci0 up' returned non-zero exit status 1. distilled_test.py [72 ] WARNING &lt;class 'subprocess.CalledProcessError'&gt;: Command 'sudo hciconfig hci0 up' returned non-zero exit status 1. distilled_test.py [72 ] WARNING &lt;class 'subprocess.CalledProcessError'&gt;: Command 'sudo hciconfig hci0 up' returned non-zero exit status 1. distilled_test.py [72 ] WARNING &lt;class 'subprocess.CalledProcessError'&gt;: Command 'sudo hciconfig hci0 up' returned non-zero exit status 1. distilled_test.py [72 ] WARNING &lt;class 'subprocess.CalledProcessError'&gt;: Command 'sudo hciconfig hci0 up' returned non-zero exit status 1. distilled_test.py [63 ] INFO 1 4, 14, 4 distilled_test.py [63 ] INFO 2 4, 14, 4 distilled_test.py [63 ] INFO 3 4, 62, 27 distilled_test.py [63 ] INFO 4 4, 62, 26 distilled_test.py [63 ] INFO 5 4, 62, 40 distilled_test.py [63 ] INFO 6 4, 62, 39 </code></pre> <p>My theory is that the new BLE Broadcasting device is flooding the RPi with Bluetooth traffic to the point where I'm not ingesting it fast enough and the Bluetooth service closes the socket. Advice?</p> <p>Raspbian Buster Lite bluez-5.52.tar.xz gattlib-0.20150805 pybluez-0.23 Python 3.7.3</p> <p>I should also note that this Lenovo/Qualcomm laptop Bluetooth advertising is enough to cause my go-to Android app <code>nRF Connect</code> to repeatedly cycle Bluetooth. Though I realize I can't prevent the Lenovo/Qualcomm from being naughty, I still feel like I need to protect my app from crashing due to the Bluetooth noise.</p>
<p>So, as it turns out, the <code>broken pipe</code> really does in fact mean <code>broken pipe</code>... imagine that.</p> <p>I wired up the project to an RPi4 and was able to see the code processing Bluetooth messages fast enough to keep up. As I had supposed in the original question, the RPi3 code was not keeping up with the rate at which the Bluetooth chip was receiving messages, and at some point some sort of buffer/pipe/queue filled up and the Bluetooth stack (BlueZ, probably) broke the pipe.</p>
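The production-hardening takeaway (treat the read loop as fallible, and reset the interface when it breaks) can be sketched abstractly. `recv` and `reset` below are stand-ins for the socket read and the `hciconfig hci0 up` recovery, not real BlueZ calls:

```python
def read_loop(recv, reset, max_restarts=3):
    """Keep reading packets from `recv()` until it returns None,
    calling `reset()` and continuing whenever a read raises OSError
    (e.g. errno 32, broken pipe). Returns the packets read."""
    packets = []
    restarts = 0
    while True:
        try:
            pkt = recv()
        except OSError:
            restarts += 1
            if restarts > max_restarts:
                raise  # give up after repeated failures
            reset()    # e.g. close the socket and bring hci0 back up
            continue
        if pkt is None:
            return packets
        packets.append(pkt)
```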
bluetooth-lowenergy|raspberry-pi3|python-3.7|pybluez
0
1,906,658
67,120,611
python celery monitoring events not being triggered
<p>I have the following project directory:</p> <pre><code>azima: __init.py main.py tasks.py monitor.py </code></pre> <p>tasks.py</p> <pre><code>from .main import app @app.task def add(x, y): return x + y @app.task def mul(x, y): return x * y @app.task def xsum(numbers): return sum(numbers) </code></pre> <p>main.py</p> <pre><code>from celery import Celery app = Celery('azima', backend='redis://localhost:6379/0', broker='redis://localhost:6379/0', include=['azima.tasks']) # Optional configuration, see the application user guide. app.conf.update( result_expires=3600, ) if __name__ == '__main__': app.start() </code></pre> <p>monitor.py</p> <pre><code>from .main import app def my_monitor(app): state = app.events.State() def announce_failed_tasks(event): state.event(event) task = state.tasks.get(event['uuid']) print(f'TASK FAILED: {task.name}[{task.uuid}]') def announce_succeeded_tasks(event): print('task succeeded') state.event(event) task = state.tasks.get(event['uuid']) print(f'TASK SUCCEEDED: {task.name}[{task.uuid}]') def worker_online_handler(event): state.event(event) print(&quot;New worker gets online&quot;) print(event['hostname'], event['timestamp'], event['freq'], event['sw_ver']) with app.connection() as connection: recv = app.events.Receiver(connection, handlers={ 'task-failed': announce_failed_tasks, 'task-succeeded': announce_succeeded_tasks, 'worker-online': worker_online_handler, '*': state.event, }) recv.capture(limit=None, timeout=None, wakeup=True) if __name__ == '__main__': # app = Celery('azima') my_monitor(app) </code></pre> <p>I started the celery worker with</p> <pre><code>celery -A azima.main worker -l INFO </code></pre> <p>and started <code>monitor.py</code> with</p> <pre><code>python -m azima.monitor </code></pre> <p>But only the <code>worker-online</code> event is being triggered, while other events like <code>task-succeeded</code> are not triggered or handled.</p> <p><a href="https://i.stack.imgur.com/YI9bQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YI9bQ.png" alt="enter image description here" /></a></p> <p>What am I missing here?</p>
<p>Enable worker <code>task-</code> group events with <a href="https://docs.celeryproject.org/en/stable/reference/cli.html#celery-worker" rel="nofollow noreferrer">cli option</a> <code>-E</code> or <code>--task-events</code> and try to capture all events:</p> <pre class="lang-py prettyprint-override"><code>def my_monitor(app): def on_event(event): print(&quot;Event.type&quot;, event.get('type')) with app.connection() as connection: recv = app.events.Receiver(connection, handlers={'*': on_event}) recv.capture(limit=None, timeout=None, wakeup=True) </code></pre>
python|celery|celery-task
3
1,906,659
65,593,231
Could not establish an API connection in Google Cloud Platform
<p>For retrieving monitoring metrics from my project, I used the Python code below:</p> <pre class="lang-py prettyprint-override"><code>from google.cloud import monitoring_v3 from google.oauth2 import service_account from googleapiclient import discovery credentials = service_account.Credentials.from_service_account_file( r'D:\GCP\credentials\blahblah-04e8fd0245b8.json') service = discovery.build('compute', 'v1', credentials=credentials) client = monitoring_v3.MetricServiceClient() project_name = f&quot;projects/{blahblah-300807}&quot; resource_descriptors = client.list_monitored_resource_descriptors( name=project_name) for descriptor in resource_descriptors: print(descriptor.type) </code></pre> <p>I believe I set everything up correctly, and I gave the file path for the credentials correctly, but I received this error message:</p> <pre class="lang-sh prettyprint-override"><code>raise exceptions.DefaultCredentialsError(_HELP_MESSAGE) google.auth.exceptions.DefaultCredentialsError: \ Could not automatically determine credentials. \ Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create \ credentials and re-run the application. \ For more information, please see \ https://cloud.google.com/docs/authentication/getting-started </code></pre> <p>I even checked that link and tried the alternative method, but still, it didn't work. How can I rectify this? Am I making a mistake?</p>
<p>You don't use the credential when you create the client:</p> <pre><code>client = monitoring_v3.MetricServiceClient() </code></pre> <p>You can change it like this:</p> <pre><code>client = monitoring_v3.MetricServiceClient(credentials=credentials) </code></pre> <hr /> <p>Personally, I prefer not to explicitly provide the credential in the code, and I prefer to use the environment variable <code>GOOGLE_APPLICATION_CREDENTIALS</code> for this.</p> <p>Create an environment variable in your OS with the name <code>GOOGLE_APPLICATION_CREDENTIALS</code> and a value that points to the service account key file <code>D:\GCP\credentials\blahblah-04e8fd0245b8.json</code>.</p> <p>But if it's on your own computer, you don't even need a service account key file (which is not really secure; I explain why in <a href="https://medium.com/google-cloud/the-2-limits-of-iam-service-on-google-cloud-7db213277d9c" rel="nofollow noreferrer">this article</a>): you can use your own credential instead. For this, simply create application default credentials (ADC) like this: <code>gcloud auth application-default login</code></p>
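The lookup order the error message hints at can be sketched like this; it is a simplified illustration of explicit-credential-first with an environment-variable fallback, not the library's actual code:

```python
import os

def resolve_credentials_path(explicit_path=None,
                             env_var="GOOGLE_APPLICATION_CREDENTIALS"):
    """Pick a service-account key path the way the client libraries do:
    an explicitly supplied path wins, otherwise fall back to the
    environment variable; return None when neither is set (the case
    where DefaultCredentialsError would be raised)."""
    if explicit_path:
        return explicit_path
    return os.environ.get(env_var)
```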
python|google-app-engine|google-cloud-platform|credentials
1
1,906,660
26,822,085
how to change this decorator into Python?
<p>I have gotten a copy of the book "Head First Design Patterns", and started to code up some of its examples in Python. I came to the example about the Decorator pattern, which I know should use composition, but the author uses a sort of "inheritance". The excuse was that the decorator was made to "achieve the type matching, but not to use it to inherit behaviour". I have a question here: I know that Python is not a statically typed language, so when I translated the example from the book (which is in Java), I got this:</p> <pre><code>from abc import ABCMeta, abstractmethod class Beverage: __metaClass__=ABCMeta def __init__(self): self.Description="Unknown beverage" def getDescription(self): return self.Description @abstractmethod def calc(self): pass class CondimentDecorator(Beverage): __metaClass__=ABCMeta @abstractmethod def getDescription(self): pass class Espresso(Beverage): def __init__(self): self.Description="Espresso" def calc(self): return 2.5 class Pasado(Beverage): def __init__(self): self.Description="Roasted coffee" def calc(self): return 1.5 class Mocha(CondimentDecorator): def __init__(self,Beverage): self.Beverage=Beverage def getDescription(self): return self.Beverage.getDescription()+" Mocha " def calc(self): return 0.5+self.Beverage.calc() def main(): Beverage=Espresso() print Beverage.getDescription()," cost: ",Beverage.calc() Beverage2=Pasado() Beverage2=Mocha(Beverage2) print Beverage2.getDescription()," cost: ",Beverage2.calc() Beverage3=Espresso() Beverage3=Mocha(Mocha(Beverage3)) print Beverage3.getDescription()," cost: ",Beverage3.calc() </code></pre> <p>I wonder if the class:</p> <p>CondimentDecorator(Beverage)</p> <p>is well designed, because since Python has no static types, I would not need to inherit from Beverage; am I right?</p> <p>If I change it to:</p> <p>CondimentDecorator(), my code still works, but in that case I wonder if it's necessary to have it, because it has only one abstract method and that's it.</p> <p>Do I need to change my code to be more consistent with Python OOP programming?</p> <p>Thanks</p>
<p>Python has no strict typing, and often there is no need for inheritance or abstract classes. A decorator class is needed for the decorator pattern because it should define every method of the decorated class so that it calls the corresponding method of the embedded instance. If you override all methods anyway, you strictly don't need this decorator class. Here is a version without inheritance:</p> <pre><code>class Espresso(object): description="Espresso" def calc(self): return 2.5 class Pasado(object): description="Roasted coffee" def calc(self): return 1.5 class Mocha(object): def __init__(self, beverage): self.beverage = beverage @property def description(self): return self.beverage.description+" Mocha" def calc(self): return 0.5+self.beverage.calc() def main(): beverage = Espresso() print beverage.description, " cost: ", beverage.calc() beverage2 = Pasado() beverage2 = Mocha(beverage2) print beverage2.description, " cost: ", beverage2.calc() beverage3 = Espresso() beverage3 = Mocha(Mocha(beverage3)) print beverage3.description, " cost: ", beverage3.calc() if __name__ == '__main__': main() </code></pre> <p>On the other hand, Python is a dynamic language, and you could write a dynamic decorator:</p> <pre><code>class GenericDecorator(object): def __init__(self, obj): self.obj = obj def __getattr__(self, name): return getattr(self.obj, name) class Flavored(GenericDecorator): """Flavor is for free""" def __init__(self, beverage, flavor): GenericDecorator.__init__(self, beverage) self.flavor = flavor @property def description(self): return self.flavor + '-' + self.obj.description def main(): beverage = Espresso() beverage = Flavored(beverage, 'Vanilla') print beverage.description, " cost: ", beverage.calc() if __name__ == '__main__': main() </code></pre>
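A Python 3 version of the delegating decorator from the answer, trimmed to the essentials for quick experimentation; any attribute the wrapper does not define is forwarded to the wrapped object via `__getattr__`:

```python
class GenericDecorator:
    """Forward any attribute it doesn't define to the wrapped object."""
    def __init__(self, obj):
        self.obj = obj

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails,
        # so overridden attributes like `description` take priority.
        return getattr(self.obj, name)


class Flavored(GenericDecorator):
    """Add a flavor prefix to the description; cost is unchanged."""
    def __init__(self, beverage, flavor):
        super().__init__(beverage)
        self.flavor = flavor

    @property
    def description(self):
        return self.flavor + "-" + self.obj.description


class Espresso:
    description = "Espresso"

    def calc(self):
        return 2.5
```

Note that `calc` is never defined on the decorator, yet it works on the wrapped object; that is the delegation doing the type-matching job inheritance did in the Java version.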
java|python|decorator
0
1,906,661
45,256,544
Reversing equal sized chunks of a list
<p>If I have a list:</p> <pre><code>[1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3] </code></pre> <p>My goal is to split it into equal sized chunks of <code>n</code>, reverse each chunk, and then put the chunks back in order. So, for the example above, for chunk size 4, I'd get: </p> <pre><code>[1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3] [_________] [_________] [________] [______] | | | | 1 2 3 4 (this is smaller than 4 but receives the same treatment) || [4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 3, 2, 1] </code></pre> <p>This is what I have:</p> <pre><code>n = 4 l = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3] chunks = [l[i : i + n] for i in range(0, len(l), n)] print(chunks) # [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3]] for i in range(len(chunks)): chunks[i] = list(reversed(chunks[i])) # or chunks[i] = chunks[i][::-1] from functools import reduce out = list(reduce(lambda x, y: x + y, chunks)) print(out) # [4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 3, 2, 1] </code></pre> <p>I don't think this is very good though. Is there another way that better utilises python's libraries than this?</p>
<p>What about using the following list comprehension:</p> <pre><code>[x for i in range(0,len(l),4) for x in reversed(l[i:i+4])] </code></pre> <p>or with parameterized chunk size:</p> <pre><code>chunk = 4 [x for i in range(0,len(l),chunk) for x in reversed(l[i:i+chunk])] </code></pre> <p>This generates:</p> <pre><code>&gt;&gt;&gt; [x for i in range(0,len(l),4) for x in reversed(l[i:i+4])] [4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 3, 2, 1] </code></pre> <p>for your given list. Furthermore I guess it is quite declarative (the <code>reversed(..)</code> indicates that you reverse, etc.)</p>
python
5
1,906,662
42,370,732
HEROKU Error opening data file /app/vendor/tesseract-ocr/tessdata/eng.traineddata
<p>I have a Django app which is deployed on Heroku. I'm trying to read text from an image using <a href="https://pypi.python.org/pypi/pytesseract" rel="nofollow noreferrer">pytesseract</a>. I can run this app on localhost without problems, but on Heroku it shows the error <code>Error opening data file /app/vendor/tesseract-ocr/tessdata/eng.traineddata</code> even after I added the <strong>pytesseract buildpacks</strong> as mentioned <a href="https://elements.heroku.com/buildpacks/matteotiziano/heroku-buildpack-tesseract" rel="nofollow noreferrer">here</a>.<br></p> <pre><code>def ocr(serializer): imgObject = ImageModel.objects.get(id=serializer.data['id']) imgPath = (os.path.join(settings.MEDIA_ROOT, imgObject.image.name)) InputFile = str(imgPath).replace("\\", "/") pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files (x86)/Tesseract-OCR/tesseract' return pytesseract.image_to_string(Image.open(InputFile)) </code></pre>
<p>It looks like this line:</p> <pre><code>pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files (x86)/Tesseract-OCR/tesseract' </code></pre> <p>is expecting to find a binary to perform the image manipulation. That binary won't exist at a Windows path on Heroku. Maybe the buildpack already handles this part of the configuration. Have you tried commenting out this line to see if it will work?</p>
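As a sketch of that idea (assuming the rest of the view stays the same), you could look the binary up per environment instead of hardcoding the Windows location — on Heroku the buildpack normally puts `tesseract` on the `PATH`:

```python
import shutil

# Look up the tesseract binary on the PATH (works where a buildpack or
# package manager installed it); fall back to the local Windows install
# path from the question. The fallback path is just illustrative.
tesseract_path = shutil.which('tesseract') or r'C:/Program Files (x86)/Tesseract-OCR/tesseract'

# then, inside the view:
# pytesseract.pytesseract.tesseract_cmd = tesseract_path
```

This way the same code runs locally and on Heroku without editing the path by hand.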
python|django|heroku|ocr|pytesser
0
1,906,663
58,334,575
how to pick the first row in pandas dataframe?
<p>I'm trying to sort (ascending) based on a date column, and want to check whether the first row is within a date range, so I can make sure a particular file doesn't suit the process.</p> <pre><code>eg: file A : contains July+August records file B : contains September+October records </code></pre> <p>I want to pick <code>file B</code> only. If sorted based on date, file A's first record will be a July record/August record.</p> <p>After sorting, how should I pick the first record?</p> <pre><code>start, end = get_previous_month_start_end() df.sort_values('Document Date') &lt;--pick first record from ascending order if not df[df['Document Date'].between(start, end)] print ('This is not in the date range') </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.sort_values.html" rel="nofollow noreferrer"><code>Series.sort_values</code></a> and select the first value with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.iat.html" rel="nofollow noreferrer"><code>Series.iat</code></a>, or use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.nsmallest.html" rel="nofollow noreferrer"><code>Series.nsmallest</code></a> - it returns a one-element Series, so selecting with <code>iat</code> is still necessary:</p> <pre><code>np.random.seed(2019) rng = pd.date_range('2017-04-03', periods=10) df = pd.DataFrame({'Document Date': rng, 'a':np.random.randint(10, size=10)}).sort_values('a') print (df) Document Date a 6 2017-04-09 0 7 2017-04-10 0 1 2017-04-04 2 2 2017-04-05 5 4 2017-04-07 6 8 2017-04-11 7 0 2017-04-03 8 3 2017-04-06 8 5 2017-04-08 8 9 2017-04-12 8 a = df['Document Date'].sort_values().iat[0] print(a) 2017-04-03 00:00:00 a = df['Document Date'].nsmallest(1).iat[0] print (a) 2017-04-03 00:00:00 </code></pre>
python|pandas
3
1,906,664
41,512,842
How to save <ipython.core.display.image object>
<p>I have png data that I can display via IPython.core.display.Image</p> <p>Code example:</p> <pre><code>class GoogleMap(object): &quot;&quot;&quot;Class that stores a PNG image&quot;&quot;&quot; def __init__(self, lat, long, satellite=True, zoom=10, size=(400,400), sensor=False): &quot;&quot;&quot;Define the map parameters&quot;&quot;&quot; base=&quot;http://maps.googleapis.com/maps/api/staticmap?&quot; params=dict( sensor= str(sensor).lower(), zoom= zoom, size= &quot;x&quot;.join(map(str, size)), center= &quot;,&quot;.join(map(str, (lat, long) )), style=&quot;feature:all|element:labels|visibility:off&quot; ) if satellite: params[&quot;maptype&quot;]=&quot;satellite&quot; # Fetch our PNG image data self.image = requests.get(base, params=params).content import IPython IPython.core.display.Image(GoogleMap(51.0, 0.0).image) </code></pre> <p>Result:</p> <p><a href="https://i.stack.imgur.com/gdbf8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gdbf8.png" alt="result" /></a></p> <p>How can I save this picture into a png file.</p> <p>Im actually interested in putting this into a loop, so 1 png file has like 3 pictures continuously.</p> <p>Thanks.</p>
<p>All you need to do is use Python's standard file-writing behavior:</p> <pre><code>img = GoogleMap(51.0, 0.0) with open("GoogleMap.png", "wb") as png: png.write(img.image) </code></pre> <p>Here's a very simple way of accessing the three lat/long pairs you want:</p> <pre><code>places = [GoogleMap(51.0, 0.0), GoogleMap(60.2, 5.2), GoogleMap(71.9, 8.9)] for position, place in enumerate(places): with open("place_{}.png".format(position), "wb") as png: png.write(place.image) </code></pre> <p>I'll leave it up to you to write a function that takes arbitrary latitude/longitude pairs and saves images of them.</p>
python|ipython|display
2
1,906,665
21,402,384
How to split a pandas time-series by NAN values
<p>I have a pandas TimeSeries which looks like this: </p> <pre><code>2007-02-06 15:00:00 0.780 2007-02-06 16:00:00 0.125 2007-02-06 17:00:00 0.875 2007-02-06 18:00:00 NaN 2007-02-06 19:00:00 0.565 2007-02-06 20:00:00 0.875 2007-02-06 21:00:00 0.910 2007-02-06 22:00:00 0.780 2007-02-06 23:00:00 NaN 2007-02-07 00:00:00 NaN 2007-02-07 01:00:00 0.780 2007-02-07 02:00:00 0.580 2007-02-07 03:00:00 0.880 2007-02-07 04:00:00 0.791 2007-02-07 05:00:00 NaN </code></pre> <p>I would like split the pandas TimeSeries everytime there occurs one or more NaN values in a row. The goal is that I have separated events.</p> <pre><code>Event1: 2007-02-06 15:00:00 0.780 2007-02-06 16:00:00 0.125 2007-02-06 17:00:00 0.875 Event2: 2007-02-06 19:00:00 0.565 2007-02-06 20:00:00 0.875 2007-02-06 21:00:00 0.910 2007-02-06 22:00:00 0.780 </code></pre> <p>I could loop through every row but is there also a smart way of doing that??? </p>
<p>You can use <code>numpy.split</code> and then filter the resulting list. Here is one example assuming that the column with the values is labeled <code>"value"</code>:</p> <pre><code>events = np.split(df, np.where(np.isnan(df.value))[0]) # removing NaN entries events = [ev[~np.isnan(ev.value)] for ev in events if not isinstance(ev, np.ndarray)] # removing empty DataFrames events = [ev for ev in events if not ev.empty] </code></pre> <p>You will have a list with all the events separated by the <code>NaN</code> values.</p>
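A pure-pandas alternative (a sketch, not part of the answer above) labels each run of consecutive non-NaN values with the cumulative NaN count and groups on that label:

```python
import numpy as np
import pandas as pd

# values from the question's series (shortened)
s = pd.Series([0.780, 0.125, 0.875, np.nan, 0.565, 0.875, 0.910, 0.780,
               np.nan, np.nan, 0.780])

# isna().cumsum() increases at every NaN, so it is constant within each
# run of consecutive non-NaN values -- use it as a group label
labels = s.isna().cumsum()
events = [grp.dropna() for _, grp in s.groupby(labels) if grp.notna().any()]
```

Each element of `events` is a Series holding one gap-free event; groups consisting only of NaNs drop out via the `notna().any()` filter.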
python|numpy|split|pandas|time-series
16
1,906,666
45,885,261
How to run code with Hydrogen
<p>I am completely new to Atom.</p> <p>I installed it and it felt quite easy to use and set up. I read that the Hydrogen package enables functionality similar to the Jupyter Notebook, so I installed the package. Unfortunately, I have no idea how to use it. I read the entire documentation (which isn't too extensive) and searched for everything I could.</p> <p>So here is my problem: I created a file called testfile1.py and in that file I put the very simple line</p> <pre><code>print('Hello') </code></pre> <p>just to see how it works. I marked the line and pressed Ctrl+Enter. At the top right, a window pops up saying "Hydrogen Kernels updated: Python 3". But then nothing happens. I don't see the result of the code that I tried to run anywhere. I tried different lines of code and different run combinations; nothing gives me any results. I am using Arch Linux and installed Anaconda through the AUR to /opt/anaconda. Using the terminal and running</p> <pre><code>jupyter notebook </code></pre> <p>for example works just fine and opens a Notebook in Firefox (as it should), and running code that imports modules that came along with Anaconda also works fine once I run it with the script package in Atom (things like</p> <pre><code>import numpy as np </code></pre> <p>let me work with all the numpy functions as expected, so I don't think there are any issues related to the Anaconda packages themselves). I tried to look everywhere I could, but I did not find any solution on why Hydrogen would not give me any results. Is there anything I am missing or did wrong? I hope someone might be able to help me, thanks in advance.</p>
<p>I'm using Windows 10, Atom version 1.30.0 with Hydrogen 2.6.0 (both with default settings).</p> <p>If you navigate to Packages > Hydrogen you can see the default key bindings.</p> <p><a href="https://i.stack.imgur.com/Zh54k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zh54k.png" alt="Packages &gt; Hydrogen"></a></p> <p>To make a cell there are many options noted in the <a href="https://nteract.gitbooks.io/hydrogen/content/docs/Usage/GettingStarted.html#hydrogen-run-cell" rel="nofollow noreferrer">manual</a> as shown below.</p> <p><a href="https://i.stack.imgur.com/4Lpno.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Lpno.png" alt="enter image description here"></a></p>
python|atom-editor|jupyter|hydrogen
3
1,906,667
21,464,951
How to populate a 2-d array in python?
<p>What's the pythonic way to rewrite the following C code?</p> <pre><code>int a[16][4]; int s[16] = {1,0,2,3,0,1,1,3,3,2,0,2,0,3,2,1}; for (int i = 0; i &lt; 16; ++i) { for (int j = 0; j &lt; 16; ++j) { int diff = i ^ j; int val = s[i] ^ s[j]; ++a[diff][val]; } } </code></pre>
<p>Here is some equivalent Python code:</p> <pre><code>a = [[0]*4 for i in range(16)] s = [1,0,2,3,0,1,1,3,3,2,0,2,0,3,2,1] for i in range(16): for j in range(16): diff = i ^ j val = s[i] ^ s[j] a[diff][val] += 1 </code></pre>
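For larger tables, the same count (this is essentially a difference distribution table of the S-box) can be vectorized with NumPy; this is a sketch added here, not part of the original answer:

```python
import numpy as np

s = np.array([1, 0, 2, 3, 0, 1, 1, 3, 3, 2, 0, 2, 0, 3, 2, 1])

# build all (i, j) pairs at once instead of two nested loops
i, j = np.meshgrid(np.arange(16), np.arange(16), indexing='ij')
diff = i ^ j          # input difference
val = s[i] ^ s[j]     # output difference
a = np.zeros((16, 4), dtype=int)
# np.add.at handles repeated (diff, val) index pairs correctly,
# unlike a plain fancy-indexed `a[diff, val] += 1`
np.add.at(a, (diff, val), 1)
```

The result matches the nested-loop version element for element; the `i == j` diagonal contributes all 16 counts in `a[0][0]`.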
python|multidimensional-array
1
1,906,668
39,912,495
ubuntu python anaconda2: cannot import matplotlib after activate tensorflow
<p><a href="https://stackoverflow.com/questions/35526867/import-of-packages-does-not-work-probably-version-conflict">Just as in this question</a>, I use Ubuntu and Anaconda for Python 2.7 to install TensorFlow and then activate the environment with <code>source activate tensorflow</code>, which is exactly what is shown on the official website. After activation, I use the <code>python</code> command to enter the Python environment; now I can <code>import tensorflow as tf</code> but I cannot <code>import matplotlib</code>.</p> <p>Without activating tensorflow, <code>import matplotlib</code> works, but in that case I cannot <code>import tensorflow</code>. So is it a conflict? Can someone tell me how to solve it? Is there any way to keep tensorflow always activated so that I don't need to activate it every time (my previous Ubuntu install did have this feature, but I forgot how I did it)?</p>
<p>Try installing <code>matplotlib</code> using anaconda directly with <code>conda install matplotlib</code> from your <code>tensorflow</code> environment. One of the ideas of using anaconda is to keep environments self-contained and avoid dependency conflicts (i.e. there is little point in activating the <code>tensorflow</code> environment for every new shell if you don't intend to use anaconda). You could either avoid the use of anaconda entirely and install tensorflow locally, or add <code>source activate tensorflow</code> to your <code>~/.bashrc</code>.</p>
python-2.7|matplotlib|tensorflow|anaconda
2
1,906,669
52,453,777
Scrapy: Simple Project
<p>I want to start a simple Scrapy project. It is a Python project from Visual Studio, and VS is running in administrator mode. Unfortunately, parse(...) is never called, but it should be.</p> <pre><code>import scrapy from scrapy.crawler import CrawlerProcess import logging class BlogSpider(scrapy.Spider): name = 'blogspider' start_urls = ['https://blog.scrapinghub.com'] def parse(self, response): for title in response.css('.post-header&gt;h2'): yield {'title': title.css('a ::text').extract_first()} for next_page in response.css('div.prev-post &gt; a'): yield response.follow(next_page, self.parse) logging.error("this should be printed") process = CrawlerProcess({ 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)' }) process.crawl(BlogSpider) process.start() print("ready") </code></pre> <p>EDIT: my output:</p> <pre><code>2018-09-22 08:23:02 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot) 2018-09-22 08:23:02 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0i 14 Aug 2018), cryptography 2.3.1, Platform Windows-10-10.0.17134-SP0 2018-09-22 08:23:02 [scrapy.crawler] INFO: Overridden settings: {'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'} 2018-09-22 08:23:02 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats'] ready </code></pre> <p>As a note: Twisted is installed from <a href="https://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow noreferrer">https://www.lfd.uci.edu/~gohlke/pythonlibs/</a>.</p>
<p>I installed Anaconda, and then executed conda <code>install -c conda-forge scrapy</code> (got some errors).</p> <p>Now everything works fine.</p> <p><a href="https://doc.scrapy.org/en/latest/intro/install.html" rel="nofollow noreferrer">Installation guide</a></p>
python|scrapy
0
1,906,670
26,331,244
Python repeat function in flask until desired result with 40 second delay
<p>I have code to close a gate with Flask on a Raspberry Pi. The gate moves slowly, and when I trigger the motor it may open, close or stop depending on where it is in the controller cycle.</p> <p>I want the close() function to check if the close limit switch is tripped and, if not, trigger the motor. After a 40 second delay I want to repeat this until the close limit switch is tripped.</p> <p>Here is my close function:</p> <pre><code>def Close(): # Check Close Sensor if CloseCheck() == False: return 'Gate is Closed' trigger() sleep(40) Close() return null </code></pre> <p>Flask does not seem to be waiting for the sleep call. Is this normal behavior? How can I re-run the Close function every 40 seconds until the close sensor is triggered?</p>
<p>Just loop until <code>CloseCheck()</code> is False</p> <pre><code>def Close(): # Check Close Sensor while CloseCheck(): trigger() # run trigger sleep(40) # sleep and repeat return 'Gate is Closed' # CloseCheck is False, return </code></pre>
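If the switch never trips, the loop above runs forever while the request blocks; a bounded sketch (the `close_check`/`trigger` callables here stand in for the hardware helpers from the question — they are assumptions, not the original code) could look like:

```python
import time

def close_gate(close_check, trigger, max_attempts=5, delay=40):
    """Trigger the motor until close_check() reports closed (False),
    giving up after max_attempts cycles instead of looping forever."""
    for _ in range(max_attempts):
        if not close_check():      # False means the gate is closed
            return 'Gate is Closed'
        trigger()                  # pulse the motor controller
        time.sleep(delay)
    return 'Gate failed to close'
```

Capping the attempts also caps how long the Flask request can block, which matters because `sleep()` holds the worker for the whole wait.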
python|flask|raspberry-pi|sleep
1
1,906,671
27,685,610
How to write django templates for this
<p>I have a method in Python that returns a dict of values as follows:</p> <pre><code>{u'link': u'dns-prefetch'} {u'link': u'prefetch'} </code></pre> <p>Now my logic is as follows: if link <code>{u'link': u'dns-prefetch'}</code> is present then say <code>"DNS Pre-resolution is enabled"</code> and if not say <code>"DNS Pre-resolution is not enabled"</code>. If <code>{u'link': u'prefetch'}</code> is present then say <code>"Page prefetch is enabled"</code> otherwise say <code>"Page prefetch is not enabled"</code>.</p> <p>How can I write this in Django templates?</p>
<pre><code>{% for d in values %} &lt;div&gt; {% if d.link == 'dns-prefetch' %} DNS Pre-resolution is enabled {% else %} DNS Pre-resolution is not enabled {% endif %} &lt;/div&gt; &lt;div&gt; {% if d.link == 'prefetch' %} Page prefetch is enabled {% else %} Page prefetch is not enabled {% endif %} &lt;/div&gt; {% endfor %} </code></pre>
python|django|django-templates
0
1,906,672
70,416,097
Adding data labels ontop of my histogram Python/Matplotlib
<p>i am trying to add data labels values on top of my histogram to try to show the frequency visibly.</p> <p>This is my code now but unsure how to code up to put the value ontop:</p> <pre><code>plt.figure(figsize=(15,10)) plt.hist(df['Age'], edgecolor='white', label='d') plt.xlabel(&quot;Age&quot;) plt.ylabel(&quot;Number of Patients&quot;) plt.title = ('Age Distrubtion') </code></pre> <p>I was wondering if anyone knows the code to do this:</p> <p><img src="https://i.stack.imgur.com/l3v6v.png" alt="enter image description here" /></p>
<p>You can use the new <code>bar_label()</code> function with the bars returned by <code>plt.hist()</code>.</p> <p>Here is an example (note that <code>plt.title</code> is a function; assigning to it, as in the question's <code>plt.title = ('Age Distrubtion')</code>, overwrites it):</p> <pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt import pandas as pd import numpy as np df = pd.DataFrame({'Age': np.random.randint(20, 60, 200)}) plt.figure(figsize=(15, 10)) values, bins, bars = plt.hist(df['Age'], edgecolor='white') plt.xlabel(&quot;Age&quot;) plt.ylabel(&quot;Number of Patients&quot;) plt.title('Age Distribution') plt.bar_label(bars, fontsize=20, color='navy') plt.margins(x=0.01, y=0.1) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/ERkNR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ERkNR.png" alt="plt.hist() with plt.bar_label()" /></a></p> <p>PS: As age is a discrete distribution, it is recommended to explicitly set the bin boundaries, e.g. <code>plt.hist(df['Age'], bins=np.arange(19.999, 60, 5))</code>.</p>
python|matplotlib|annotations|bar-chart|histogram
3
1,906,673
50,133,385
preprocessing images generated using keras function ImageDataGenerator() to train resnet50 model
<p>I am trying to train resnet50 model for image classification problem.I have loaded the 'imagenet' pretrained weights before training the model on the image dataset I have. I am using keras function flow_from_directory() to load images from directory. </p> <pre><code>train_datagen = ImageDataGenerator() train_generator = train_datagen.flow_from_directory( './train_qcut_2_classes', batch_size=batch_size, shuffle=True, target_size=input_size[1:], class_mode='categorical') test_datagen = ImageDataGenerator() validation_generator = test_datagen.flow_from_directory( './validate_qcut_2_classes', batch_size=batch_size, target_size=input_size[1:], shuffle=True, class_mode='categorical') </code></pre> <p>And I pass the generators as parameters in the fit_generator function. </p> <pre><code>hist2=model.fit_generator(train_generator, samples_per_epoch=102204, validation_data=validation_generator, nb_val_samples=25547, nb_epoch=80, callbacks=callbacks, verbose=1) </code></pre> <p><strong>Question:</strong></p> <p>With this setup how do I use preprocess_input() function to preprocess the input images before passing them to the model?</p> <pre><code>from keras.applications.resnet50 import preprocess_input </code></pre> <p><strong>I tried using preprocessing_function parameter as below</strong> </p> <pre><code>train_datagen=ImageDataGenerator(preprocessing_function=preprocess_input) train_generator = train_datagen.flow_from_directory( './train_qcut_2_classes', batch_size=batch_size, shuffle=True, target_size=input_size[1:], class_mode='categorical') test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input) validation_generator = test_datagen.flow_from_directory( './validate_qcut_2_classes', batch_size=batch_size, target_size=input_size[1:], shuffle=True, class_mode='categorical') </code></pre> <p>When i tried to extract the preprocessed output, I got this below result.</p> <pre><code>train_generator.next()[0][0] array([[[ 91.06099701, 80.06099701, 96.06099701, 
..., 86.06099701, 52.06099701, 12.06099701], [ 101.06099701, 104.06099701, 118.06099701, ..., 101.06099701, 63.06099701, 19.06099701], [ 117.06099701, 103.06099701, 88.06099701, ..., 88.06099701, 74.06099701, 18.06099701], ..., [-103.93900299, -103.93900299, -103.93900299, ..., -24.93900299, -38.93900299, -24.93900299], [-103.93900299, -103.93900299, -103.93900299, ..., -52.93900299, -27.93900299, -39.93900299], [-103.93900299, -103.93900299, -103.93900299, ..., -45.93900299, -29.93900299, -28.93900299]], [[ 81.22100067, 70.22100067, 86.22100067, ..., 69.22100067, 37.22100067, -0.77899933], [ 91.22100067, 94.22100067, 108.22100067, ..., 86.22100067, 50.22100067, 6.22100067], [ 107.22100067, 93.22100067, 78.22100067, ..., 73.22100067, 62.22100067, 6.22100067], ..., [-116.77899933, -116.77899933, -116.77899933, ..., -36.77899933, -50.77899933, -36.77899933], [-116.77899933, -116.77899933, -116.77899933, ..., -64.77899933, -39.77899933, -51.77899933], [-116.77899933, -116.77899933, -116.77899933, ..., -57.77899933, -41.77899933, -40.77899933]], [[ 78.31999969, 67.31999969, 83.31999969, ..., 61.31999969, 29.31999969, -7.68000031], [ 88.31999969, 91.31999969, 105.31999969, ..., 79.31999969, 43.31999969, -0.68000031], [ 104.31999969, 90.31999969, 75.31999969, ..., 66.31999969, 53.31999969, -2.68000031], ..., [-123.68000031, -123.68000031, -123.68000031, ..., -39.68000031, -53.68000031, -39.68000031], [-123.68000031, -123.68000031, -123.68000031, ..., -67.68000031, -42.68000031, -54.68000031], [-123.68000031, -123.68000031, -123.68000031, ..., -60.68000031, -44.68000031, -43.68000031]]], dtype=float32) </code></pre> <p>To confirm this, I directly used the preprocessing function on a particular image,</p> <pre><code>import cv2 img = cv2.imread('./images.jpg') img = img_to_array(img) x = np.expand_dims(img, axis=0) x = x.astype(np.float64) x = preprocess_input(x) </code></pre> <p>which gives the below output,</p> <pre><code>array([[[[ 118.061, 125.061, 134.061, ..., 97.061, 
99.061, 102.061], [ 118.061, 125.061, 133.061, ..., 98.061, 100.061, 102.061], [ 113.061, 119.061, 126.061, ..., 100.061, 101.061, 102.061], ..., [ 65.061, 64.061, 64.061, ..., 60.061, 61.061, 57.061], [ 64.061, 64.061, 63.061, ..., 66.061, 67.061, 59.061], [ 56.061, 59.061, 62.061, ..., 61.061, 60.061, 59.061]], [[ 113.221, 120.221, 129.221, ..., 112.221, 114.221, 113.221], [ 116.221, 123.221, 131.221, ..., 113.221, 115.221, 113.221], [ 118.221, 124.221, 131.221, ..., 115.221, 116.221, 113.221], ..., [ 56.221, 55.221, 55.221, ..., 51.221, 52.221, 51.221], [ 55.221, 55.221, 54.221, ..., 57.221, 58.221, 53.221], [ 47.221, 50.221, 53.221, ..., 52.221, 51.221, 50.221]], [[ 109.32 , 116.32 , 125.32 , ..., 106.32 , 108.32 , 108.32 ], [ 111.32 , 118.32 , 126.32 , ..., 107.32 , 109.32 , 108.32 ], [ 111.32 , 117.32 , 124.32 , ..., 109.32 , 110.32 , 108.32 ], ..., [ 34.32 , 33.32 , 33.32 , ..., 30.32 , 31.32 , 26.32 ], [ 33.32 , 33.32 , 32.32 , ..., 36.32 , 37.32 , 28.32 ], [ 25.32 , 28.32 , 31.32 , ..., 30.32 , 29.32 , 28.32 ]]]]) </code></pre> <p>Any ideas on why this happens?</p>
<p>As an argument when creating <code>ImageDataGenerator</code>:</p> <pre><code>train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input) </code></pre>
python|keras|generator|resnet|image-preprocessing
7
1,906,674
64,151,470
AttributeError at / 'QuerySet' object has no attribute 'users'
<p>I don't know why I am getting this error. I am new to Django; I researched a lot but didn't find an answer. Please explain as much as you can when answering - I want to learn. This is my views.py:</p> <pre><code>from django.views.generic import TemplateView from django.shortcuts import render, redirect from django.contrib.auth.models import User from home.forms import HomeForm from home.models import Post, Friend class HomeView(TemplateView): template_name = 'home/home.html' def get(self, request): form = HomeForm() posts = Post.objects.all().order_by('-created') users = User.objects.exclude(id=request.user.id) friend = Friend.objects.filter(current_user=request.user) friends = friend.users.all() args = { 'form': form, 'posts': posts, 'users': users, 'friends': friends } return render(request, self.template_name, args) def post(self, request): form = HomeForm(request.POST) if form.is_valid(): post = form.save(commit=False) post.user = request.user post.save() text = form.cleaned_data['post'] form = HomeForm() return redirect('home:home') args = {'form': form, 'text': text} return render(request, self.template_name, args) def change_friends(request, operation, pk): friend = User.objects.get(pk=pk) if operation == 'add': Friend.make_friend(request.user, friend) elif operation == 'remove': Friend.lose_friend(request.user, friend) return redirect('home:home') </code></pre> <p>This is my models.py</p> <pre><code>from django.db import models from django.contrib.auth.models import User class Post(models.Model): post = models.CharField(max_length=500) user = models.ForeignKey(User, on_delete=models.CASCADE) created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) class Friend(models.Model): users = models.ManyToManyField(User) current_user = models.ForeignKey(User, related_name='owner', null=True, on_delete=models.CASCADE) @classmethod def make_friend(cls, current_user,
new_friend): friend, created = cls.objects.get_or_create( current_user=current_user ) friend.users.add(new_friend) @classmethod def lose_friend(cls, current_user, new_friend): friend, created = cls.objects.get_or_create( current_user=current_user ) friend.users.remove(new_friend) </code></pre> <p>Traceback 1</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\core\handlers\exception.py&quot;, line 47, in inner response = get_response(request) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\core\handlers\base.py&quot;, line 179, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\views\generic\base.py&quot;, line 70, in v iew return self.dispatch(request, *args, **kwargs) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\views\generic\base.py&quot;, line 98, in d ispatch return handler(request, *args, **kwargs) File &quot;C:\Users\daghe\Desktop\Alone-Osama\home\views.py&quot;, line 17, in get friends = friend.users.all() AttributeError: 'QuerySet' object has no attribute 'users' </code></pre> <p>this is my code if you see some cheap code excuse me :D</p> <p>Traceback 2:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\core\handlers\exception.py&quot;, line 47, in inner response = get_response(request) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\core\handlers\base.py&quot;, line 179, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\views\generic\base.py&quot;, line 70, in view return self.dispatch(request, *args, **kwargs) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\views\generic\base.py&quot;, line 98, in dispatch return 
handler(request, *args, **kwargs) File &quot;C:\Users\daghe\Desktop\Alone-Osama\home\views.py&quot;, line 25, in get return render(request, self.template_name, args) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\shortcuts.py&quot;, line 19, in render content = loader.render_to_string(template_name, context, request, using=using) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\loader.py&quot;, line 62, in render_to_string return template.render(context, request) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\backends\django.py&quot;, line 61, in render return self.template.render(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\base.py&quot;, line 170, in render return self._render(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\base.py&quot;, line 162, in _render return self.nodelist.render(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\base.py&quot;, line 938, in render bit = node.render_annotated(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\base.py&quot;, line 905, in render_annotated return self.render(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\loader_tags.py&quot;, line 150, in render return compiled_parent._render(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\base.py&quot;, line 162, in _render return self.nodelist.render(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\base.py&quot;, line 938, in render bit = node.render_annotated(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\base.py&quot;, line 905, in render_annotated return self.render(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\defaulttags.py&quot;, 
line 312, in render return nodelist.render(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\base.py&quot;, line 938, in render bit = node.render_annotated(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\base.py&quot;, line 905, in render_annotated return self.render(context) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\template\defaulttags.py&quot;, line 446, in render url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\urls\base.py&quot;, line 87, in reverse return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)) File &quot;C:\Users\daghe\anaconda3\envs\env\lib\site-packages\django\urls\resolvers.py&quot;, line 685, in _reverse_with_prefix raise NoReverseMatch(msg) Exception Type: NoReverseMatch at / Exception Value: Reverse for 'create' not found. 'create' is not a valid view function or pattern name. </code></pre>
<p>The problem is this line in your <code>get</code> method:</p> <pre><code>friend = Friend.objects.filter(current_user=request.user) </code></pre> <p><code>filter()</code> returns a <code>QuerySet</code> (a collection of <code>Friend</code> objects), not a single instance, and a <code>QuerySet</code> has no <code>users</code> attribute - which is exactly what the traceback says. Fetch a single object instead:</p> <pre><code>friend = Friend.objects.get(current_user=request.user) friends = friend.users.all() </code></pre> <p>Note that <code>get()</code> raises <code>Friend.DoesNotExist</code> if no row matches, so you may prefer <code>Friend.objects.filter(current_user=request.user).first()</code> together with a <code>None</code> check.</p>
python|django|django-queryset|attributeerror
0
1,906,675
52,944,614
Regex with logical OR generates tuple with None
<p>I have been trying to use the regex pattern <code>&gt;(\S.*?)&lt;|#{1}\s+?(\w.*)</code> with the method <code>re.findall</code> over the strings</p> <pre><code>&lt;h1 id="section"&gt;First Section&lt;/h1&gt;&lt;a name="first section"&gt; # Section_2 </code></pre> <p>My expected result is two lists</p> <pre><code>["First Section"] ["Section_2"] </code></pre> <p>However, I get</p> <pre><code>["First Section",""] ["","Section_2"] </code></pre> <p>Does someone know what I am doing wrong?</p> <p>Thanks,</p>
<p>This works for your particular case. I tried to keep more or less the same structure as your regular expression, with some minor changes.</p> <pre><code>import re a = '&lt;h1 id="section"&gt;First Section&lt;/h1&gt;&lt;a name="first section"&gt;' b = '# Section_2' r = re.compile(r'((?&lt;=&gt;)\S.*?(?=&lt;)|(?&lt;=#{1}\s)\w.*)') print(r.findall(a)) print(r.findall(b)) </code></pre> <p>The reason you get two-element tuples is that you have two capturing groups - <code>(\S.*?)</code> and <code>(\w.*)</code>. An empty string means that that group did not participate in the match.</p> <p>In the regular expression in this answer I use only one capturing group with an OR condition (and lookbehind/lookahead assertions instead of consuming the delimiters).</p>
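Sticking with the original alternation, you can also simply keep whichever group participated in each match — a sketch using the exact pattern and test strings from the question:

```python
import re

# the exact pattern and test strings from the question
pattern = r'>(\S.*?)<|#{1}\s+?(\w.*)'
a = '<h1 id="section">First Section</h1><a name="first section">'
b = '# Section_2'

# findall returns one tuple per match, with one slot per group;
# the group that did not participate is an empty string, so keep
# whichever slot is non-empty
res_a = [g1 or g2 for g1, g2 in re.findall(pattern, a)]
res_b = [g1 or g2 for g1, g2 in re.findall(pattern, b)]
```

This gives `['First Section']` and `['Section_2']` without rewriting the pattern at all.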
python|regex
1
1,906,676
62,805,002
How to make a discord bot accept multiple prefixes
<p>I am making a bot with multiple commands and for the sake of relevancy I want one command to be used with a <code>+</code> prefix and another one to be used with a <code>!</code> prefix.</p> <p>I have a config file with a dictionary that I imported so I could use those to define my prefix.</p> <p>Here is what my prefix bot thing is:</p> <pre><code>bot = commands.Bot(command_prefix=BOT['DEFAULT_PREFIX']) </code></pre> <p>I tried making another prefix in the config file so it has two of them:</p> <pre><code>'DEFAULT_PREFIX': '+', 'SPECIAL_PREFIX': '!', </code></pre> <p>I could add a second variable such as client = command.Bot... but I already tried that and the default prefix (<code>+</code>) worked fine being used in my cogs.py but the special prefix (<code>!</code>) didn't work with my <code>report</code> command.</p> <p>Is it possible to somehow have two available prefixes for commands to use? <strong>Or even better</strong>, to assign a custom prefix to one decorator? (I have tried doing <code>bot.command(command_prefix='!')</code> but had no luck).</p> <p>Thanks!</p>
<pre><code>bot = commands.Bot(command_prefix=['first prefix','second prefix']) </code></pre>
python|python-3.x|discord|discord.py
2
1,906,677
61,750,223
Iterating in a Pandas dataframe returning a blank result, unsure why?
<p>Instead of performing the calculation and returning a result, the following leaves the Balance column empty. Can anyone understand why?</p> <pre><code>df = pd.read_sql_query("SELECT * FROM "+n+" ORDER BY Date", conn) df = df.replace('None', '') df["DR"] = pd.to_numeric(df["DR"]) df["CR"] = pd.to_numeric(df["CR"]) df["Balance"] = '' i = 0 df.iloc[i,6] = df.iloc[i,3]-df.iloc[i,4] for i in range(1,df.shape[0]): df.iloc[i,6] = df.iloc[i-1,6]+df.iloc[i,3]-df.iloc[i,4] df = df.fillna('') html = df.to_html(header=True,classes="table table-sm table-hover",justify="center",index=False) </code></pre> <p><a href="https://i.stack.imgur.com/vAiKZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vAiKZ.jpg" alt="python"></a></p>
<p>This is because df.iloc[0,4] is NaN (if you see your table it is empty) and hence df.iloc[0,6] will also be NaN.</p> <p>Since the following rows of column-6(Balance) depend on the previous row, all the values end up being NaN.</p> <p>In the end you are performing <strong>df = df.fillna('')</strong> , replacing NaN's with blank space, hence the blank space throughout the column.</p>
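A minimal sketch of the failure mode with made-up numbers (the column names follow the screenshot; the values are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical ledger in the spirit of the question: the first CR is missing.
df = pd.DataFrame({'DR': [100.0, 50.0, 30.0], 'CR': [np.nan, 20.0, 10.0]})

# Row 0 of the balance is NaN because 100 - NaN is NaN, and every later
# row that depends on a NaN predecessor stays NaN in the question's loop.
broken_row0 = df.loc[0, 'DR'] - df.loc[0, 'CR']

# Filling the missing amounts with 0 *before* computing keeps it numeric.
balance = (df['DR'] - df['CR'].fillna(0)).cumsum()
```

So the fix is to fill NaN with 0 (or drop the rows) before the running-balance calculation, not after it.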
python|pandas
2
1,906,678
61,916,996
Pandas insert rows based on value and fill with 0
<p>I have the following dataframe, with the following values. I want to insert rows in order to have a row for every person (Toby, Jane, David), and for every month in 2020. If there is no value for x or y, then fill with 0.</p> <pre><code> ID Name Date x y 0 001 Toby 2020-01-01 15 NaN 1 001 Toby 2020-02-01 12 7 2 001 Toby 2020-05-01 7 1 3 001 Toby 2020-07-01 NaN 1 4 002 Jane 2020-11-01 20 1 5 002 Jane 2020-12-01 21 10 6 003 David 2020-07-01 -3 2 </code></pre> <p>The resulting dataframe should have 36 rows, 12, for each person.</p> <pre><code>ID Name Date x y 0 001 Toby 2020-01-01 15 0 1 001 Toby 2020-02-01 12 7 2 001 Toby 2020-03-01 0 0 3 001 Toby 2020-04-01 0 0 4 001 Toby 2020-05-01 7 1 5 001 Toby 2020-06-01 0 0 6 001 Toby 2020-07-01 0 1 7 001 Toby 2020-08-01 0 0 8 001 Toby 2020-09-01 0 0 9 001 Toby 2020-10-01 0 0 10 001 Toby 2020-11-01 0 0 11 001 Toby 2020-12-01 0 0 12 002 Jane 2020-01-01 0 0 13 002 Jane 2020-02-01 0 0 14 002 Jane 2020-03-01 0 0 15 002 Jane 2020-04-01 0 0 16 002 Jane 2020-05-01 0 0 17 002 Jane 2020-06-01 0 0 18 002 Jane 2020-07-01 0 0 19 002 Jane 2020-08-01 0 0 20 002 Jane 2020-09-01 0 0 21 002 Jane 2020-10-01 0 0 22 002 Jane 2020-11-01 20 1 23 002 Jane 2020-12-01 21 10 24 003 David 2020-01-01 0 0 25 003 David 2020-02-01 0 0 26 003 David 2020-03-01 0 0 27 003 David 2020-04-01 0 0 28 003 David 2020-05-01 0 0 29 003 David 2020-06-01 0 0 30 003 David 2020-07-01 -3 2 31 003 David 2020-08-01 0 0 32 003 David 2020-09-01 0 0 33 003 David 2020-10-01 0 0 34 003 David 2020-11-01 0 0 35 003 David 2020-12-01 0 0 </code></pre> <p>I looked into <code>reindex</code>, and managed to make it work on a single series. But I haven't found a way to generate rows dynamically on a dataframe to then fill the missing values.</p> <p>Any help would be appreciated.</p>
<p>You can use <code>reindex</code> for the purpose:</p> <pre><code># list of the desired dates # make sure that it has the same type as `Date` in your data # here I assume zero-padded date strings dates = pd.Series([f'2020-{x:02d}-01' for x in range(1,13)], name='Date') (df.set_index(['Date']).groupby(['ID','Name']) .apply(lambda x: x.drop(['ID', 'Name'],axis=1).reindex(dates).fillna(0)) .reset_index() ) </code></pre>
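For reference, a self-contained sketch of the same per-group reindex idea on made-up data shaped like the question's (string dates and zero-padded months are assumptions here, and selecting `[['x', 'y']]` stands in for the column drop):

```python
import pandas as pd

# Synthetic data shaped like the question's table.
df = pd.DataFrame({
    'ID':   ['001', '001', '002'],
    'Name': ['Toby', 'Toby', 'Jane'],
    'Date': ['2020-01-01', '2020-05-01', '2020-11-01'],
    'x':    [15, 7, 20],
    'y':    [0, 1, 1],
})

# All twelve first-of-month dates, zero-padded to match the data.
dates = pd.Index([f'2020-{m:02d}-01' for m in range(1, 13)], name='Date')

# Reindex each (ID, Name) group onto the full year, filling gaps with 0.
out = (df.set_index('Date')
         .groupby(['ID', 'Name'])
         .apply(lambda g: g[['x', 'y']].reindex(dates).fillna(0))
         .reset_index())
```

With two people this yields 24 rows (12 per person); the question's three people would give 36.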
python|pandas|nan|fill|reindex
4
1,906,679
67,232,404
Matplotib Finance (mplfinance) formatting axes of chart unsing mpf.plot()
<p>The MPL finance is great, however I cant seem to tweak the formatting of the axes. In the image I would like to show only the date, without the 00:00 time. Also the price, I would like to add a $ currency and decimal places (variable).</p> <pre><code>import pandas as pd import mplfinance as mpf df = pd.read_csv(csv) df.date = pd.to_datetime(df.date) cols = ['date', 'open', 'high', 'low', 'close', 'volume'] df = df[cols] df = df.sort_values(by=['date'], ascending=False) df = df.set_index('date') </code></pre> <p>And then calling mplfinance with (inserting style):</p> <pre><code>mpf.plot(df, type='candle', volume=True style= *style*) </code></pre> <p>Generates the below charts, I have highlight the parts I would like to change if possible.</p> <p><a href="https://i.stack.imgur.com/4JWBM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4JWBM.png" alt="enter image description here" /></a></p>
<p>For the date format, you can add kwarg <code>datetime_format</code>, for example:</p> <pre class="lang-py prettyprint-override"><code>mpf.plot(df, type='candle', volume=True, style=s, datetime_format='%b %d') </code></pre> <p>For the y-axis tick labels, I would suggest that you simply adjust the y-axis label (not the tick labels) using kwarg <code>ylabel='Price ($)'</code> or something like that.</p> <hr /> <p>Alternatively if you really want a $ sign next to each tick label, you can do this:</p> <ul> <li><a href="https://github.com/matplotlib/mplfinance/wiki/Acessing-mplfinance-Figure-and-Axes-objects" rel="nofollow noreferrer">first gain access to the mplfinance axes objects</a>.</li> <li>Then set the formatter for that axes, as follows:</li> </ul> <pre class="lang-py prettyprint-override"><code>from matplotlib.ticker import FormatStrFormatter fig, axlist = mpf.plot(df,type='candle',volume=True,style=s, datetime_format='%b %d',returnfig=True) axlist[0].yaxis.set_major_formatter(FormatStrFormatter('$%.2f')) mpf.show() </code></pre> <h2>Result with Default Style</h2> <p><a href="https://i.stack.imgur.com/GMJ49.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GMJ49.png" alt="enter image description here" /></a></p>
python|matplotlib|mplfinance
4
1,906,680
60,594,242
How to create Training Sets for K-Fold Cross Validation without ski-kit learn?
<p>I have a data set that has 95 rows and 9 columns and want to do a 5-fold cross-validation. In the training, the first 8 columns (features) are used to predict the ninth column. My test sets are correct, but my x training set is of size (4,19,9) when it should have only 8 columns and my y training set is (4,9) when it should have 19 rows. Am I indexing the subarrays incorrectly?</p> <pre><code>kdata = data[0:95,:] # Need total rows to be divisible by 5, so ignore last 2 rows np.random.shuffle(kdata) # Shuffle all rows folds = np.array_split(kdata, k) # each fold is 19 rows x 9 columns for i in range (k-1): xtest = folds[i][:,0:7] # Set ith fold to be test ytest = folds[i][:,8] new_folds = np.delete(folds,i,0) xtrain = new_folds[:][:][0:7] # training set is all folds, all rows x 8 cols ytrain = new_folds[:][:][8] # training y is all folds, all rows x 1 col </code></pre>
<p>Welcome to Stack Overflow.</p> <p>Once you have created a new fold, you need to stack the remaining folds row-wise using <code>np.row_stack()</code>.</p> <p>Also, I think you are slicing the array incorrectly: in Python and NumPy, slicing is <code>[inclusive:exclusive]</code>, so the slice <code>[0:7]</code> takes only 7 columns instead of the 8 feature columns you intended.</p> <p>Similarly, for 5 folds your loop should use <code>range(k)</code>, which gives you <code>[0,1,2,3,4]</code>, instead of <code>range(k-1)</code>, which only gives you <code>[0,1,2,3]</code>.</p> <p>Modified code as such (note the shuffle happens before the split):</p> <pre><code>np.random.shuffle(kdata) # Shuffle all rows folds = np.array_split(kdata, k) # each fold is 19 rows x 9 columns for i in range(k): xtest = folds[i][:,:8] # Set ith fold to be test ytest = folds[i][:,8] new_folds = np.row_stack(np.delete(folds,i,0)) xtrain = new_folds[:, :8] ytrain = new_folds[:,8] # some print functions to help you debug print(f'Fold {i}') print(f'xtest shape : {xtest.shape}') print(f'ytest shape : {ytest.shape}') print(f'xtrain shape : {xtrain.shape}') print(f'ytrain shape : {ytrain.shape}\n') </code></pre> <p>which will print out the fold and the desired shapes of the training and testing sets for you:</p> <pre><code>Fold 0 xtest shape : (19, 8) ytest shape : (19,) xtrain shape : (76, 8) ytrain shape : (76,) Fold 1 xtest shape : (19, 8) ytest shape : (19,) xtrain shape : (76, 8) ytrain shape : (76,) Fold 2 xtest shape : (19, 8) ytest shape : (19,) xtrain shape : (76, 8) ytrain shape : (76,) Fold 3 xtest shape : (19, 8) ytest shape : (19,) xtrain shape : (76, 8) ytrain shape : (76,) Fold 4 xtest shape : (19, 8) ytest shape : (19,) xtrain shape : (76, 8) ytrain shape : (76,) </code></pre>
python|numpy|machine-learning|cross-validation|k-fold
1
1,906,681
60,545,752
pandas groupby by customized year, e.g. a school year
<p>In a pandas data frame I would like to find the mean values of a column, grouped by a 'customized' year. </p> <p>An example would be to compute the mean values of school marks for a school year (e.g. Sep/YYYY to Aug/YYYY+1). The pandas docs gives some information on offsets and business year etc., but I can't really make any sense out of that to get a working example.</p> <p>Here is a minimal example where mean values of school marks are computed per year (Jan-Dec), which is what I <strong>do not want</strong>. </p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame(data=np.random.randint(low=1, high=5, size=36), index=pd.date_range('2001-09-01', freq='M', periods=36), columns=['marks']) df_yearly = df.groupby(pd.Grouper(freq="A")).mean() </code></pre> <p>This could yield e.g.:</p> <pre><code>print(df): marks 2001-09-30 1 2001-10-31 4 2001-11-30 2 2001-12-31 1 2002-01-31 4 2002-02-28 1 2002-03-31 2 2002-04-30 1 2002-05-31 3 2002-06-30 3 2002-07-31 3 2002-08-31 3 2002-09-30 4 2002-10-31 1 ... 2003-11-30 4 2003-12-31 2 2004-01-31 1 2004-02-29 2 2004-03-31 1 2004-04-30 3 2004-05-31 4 2004-06-30 2 2004-07-31 2 2004-08-31 4 print(df_yearly): marks 2001-12-31 2.000000 2002-12-31 2.583333 2003-12-31 2.666667 2004-12-31 2.375000 </code></pre> <p>My desired output would correspond to something like:</p> <pre><code>2001-09/2002-08 mean_value 2002-09/2003-08 mean_value 2003-09/2004-08 mean_value </code></pre> <p>Many thanks!</p>
<p>We can manually compute the school years:</p> <pre><code># if month&gt;=9 we move it to the next year school_years = df.index.year + (df.index.month&gt;8).astype(int) </code></pre> <p>Another option is to use fiscal year starting from September:</p> <pre><code>school_years = df.index.to_period('Q-AUG').qyear </code></pre> <p>And we can groupby:</p> <pre><code>df.groupby(school_years).mean() </code></pre> <p>Output:</p> <pre><code> marks 2002 2.333333 2003 2.500000 2004 2.500000 </code></pre>
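A runnable sketch of both options on a synthetic monthly index matching the question's span (built with `period_range` purely for version-stable month handling):

```python
import pandas as pd

# Thirty-six monthly stamps starting September 2001, as in the question.
idx = pd.period_range('2001-09', periods=36, freq='M').to_timestamp()

# Option 1: months September-December roll forward to the next year.
manual = idx.year + (idx.month > 8).astype(int)

# Option 2: fiscal year ending in August ("Q-AUG" quarters).
fiscal = idx.to_period('Q-AUG').qyear
```

Either array can then be passed straight to `df.groupby(...)`, since pandas accepts an arbitrary array of labels as the grouping key.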
python|pandas|grouping|offset
2
1,906,682
71,302,058
Create multiple box plot at once in Python
<p>How can we create mutiple boxplot at once using matplotlib or seaborn? For example, in a data frame I have numerical variable 'y' and 4 catergorical variables. So, I want 4 box plot for each of the categorial variable with 'y' at once. I can do one by one which is the forth line of the code for one categorical variable. I am attaching my code.</p> <pre><code># Create boxplot and add palette # with predefined values like Paired, Set1, etc #x=merged_df[[&quot;MinWarrantyInMonths&quot;,&quot;MaxWarrantyInMonths&quot;]] sns.boxplot(x='MinWarrantyInMonths', y=&quot;CountSevereAlarm&quot;, data=merged_df, palette=&quot;Set1&quot;) import matplotlib.pyplot as plt plt.style.use('ggplot') from ggplot import ggplot, aes, geom_boxplot import pandas as pd import numpy as np data = merged_df #labels = np.repeat(['A','B'],20) merged_df[[&quot;MinWarrantyInMonths&quot;,&quot;MaxWarrantyInMonths&quot;]]=labels data.columns = ['vals','labels'] ggplot(data, aes(x='vals', y='labels')) + geom_boxplot() </code></pre>
<p>I hope I understood correctly what you're asking. If so, I suggest you try a for loop plus <code>plt.subplot</code> to create them together (side by side, for example). See this:</p> <pre><code>columns = ['col1', 'col2', 'col3', 'col4'] for n, column in enumerate(columns): ax = plt.subplot(1, 4, n + 1) sns.boxplot(x=column, y=&quot;CountSevereAlarm&quot;, data=merged_df, palette=&quot;Set1&quot;, ax=ax) </code></pre> <p>Within <code>plt.subplot</code> you need to specify the number of rows and columns you want. In your situation that is 1 row and 4 columns (because you're interested in 4 box plots). The <code>n + 1</code> is the index of the subplot. Alternatively, <code>(4, 1, n + 1)</code> means you'll have 4 rows and 1 column, and the box plots will appear one below another (not side by side).</p> <p>I hope this helps. You can also read online about Matplotlib and subplots, as there are other options to get the same result.</p>
python|matplotlib|seaborn|boxplot
0
1,906,683
70,686,948
How to break a loop depending on answer from input, then move on with the rest of the program
<p>Hi so I'm very new to coding and I'm making a blackjack game for my high school project, I'm on the part where it asks the user to hit or stand and when they hit it asks them again until they bust or stand, I want it so once they stand the loop for asking them to hit or stand breaks but moves on with the rest of the code which is the logic for the dealer / computer to hit or stand. But when they do stand it just stops the entire program. How do I fix this? The code is not fully completed and I apologize for what probably is a big mess.</p> <pre><code># Import the required modules import random # Print welcome message print('Welcome To Blackjack!') # State the rules for the game print('This is blackjack. Rules are simple whoever has the biggest sum of the cards at the end wins. Unless someone gets 21 / blackjack or goes over 21 (bust). Bust is when a player goes over 21 and the lose. Once it is your turn you can choose to hit or stand, if you hit you get another card, if you stand it becomes the other persons turn. Your goal is to get the closest number to 21 without going over it. The game uses no suit cards. The values of the face cards are a Jack = 10, Queen = 10, King = 10, Ace = 11 or 1 whatever works the best. ') # Global Variables deck = [2, 3, 4, 5, 6, 7, 8, 9, 10, 2, 3, 4, 5, 6, 7, 8, 9, 10, 2, 3, 4, 5, 6, 7, 8, 9, 10, 2, 3, 4, 5, 6, 7, 8, 9, 10, 'J', 'Q', 'K', 'A'] dealer_cards = [] player_cards = [] print('Dealing...') # Create the dealcard function to deal a card when needed def dealcard(turn): card = random.choice(deck) turn.append(card) deck.remove(card) # Removes the card out of the deck so it cant repeat. 
# Create the function to calculate the total cards that each player has def totalvalue(turn): totalvalue = 0 facecards = ['J', 'Q', 'K'] for card in turn: if card in range(1, 11): totalvalue += card # This adds the cards together elif card in facecards: # Checks if a card is a face card (J, Q, K,) totalvalue += 10 # This gives value to face cards else: # This checks if they get an ace and what works the best in case when they get an ace if totalvalue &gt; 11: # If total is over 11 Ace counts as 1 totalvalue += 1 else: # If total is under 11 Ace counts as 11 totalvalue += 11 return totalvalue for dealing in range(2): dealcard(dealer_cards) dealcard(player_cards) print(f&quot;The dealer's cards are {dealer_cards} and the total amount is {totalvalue(dealer_cards)}&quot;) print(f&quot;Your cards are {player_cards} and your total amount is {totalvalue(player_cards)}&quot;) while True: # Take input from user playerchoice = (input('Would you like to \n1.Hit \nor \n2.Stay\n')) # Check what choice user chose and execute it if playerchoice == '1': dealcard(player_cards) print(f&quot;You now have a total of {totalvalue(player_cards)} with these cards {player_cards}&quot;) continue # If they chose to stand move on to dealers / computers turn if playerchoice == '2': print(f&quot;Your cards stayed the same {player_cards} with a total of {totalvalue(player_cards)}&quot;) print(&quot;What will the dealer do?...&quot;) break # Create dealer logic if totalvalue(dealer_cards) &gt;= 18: print(f&quot;The dealer chose to stand their cards are {dealer_cards} with a total of {totalvalue(dealer_cards)}&quot;) if totalvalue(dealer_cards) &lt; 16: dealcard(dealer_cards) print(f&quot;The dealer chose to hit their new cards are {dealer_cards} and their total is {totalvalue(dealer_cards)}&quot;) if totalvalue(dealer_cards) == totalvalue(player_cards): print(f&quot;Its a tie you both have {totalvalue(player_cards)}&quot;) if totalvalue(dealer_cards) == 21: print(f&quot;The dealer got blackjack! 
You lose...&quot;) break </code></pre>
<blockquote> <p>I want it so once they stand the loop for asking them to hit or stand breaks but moves on with the rest of the code&quot;</p> </blockquote> <p>Your current while loop <code>while True:</code> just repeats infinitely. Instead, you need a condition to specify when you want to continue. For example, you could do</p> <pre><code>while playerchoice == '1': </code></pre> <p>Note that this change will also require more changes to your code. In particular, you won't need the <code>if</code> statement any more because the body of the while loop will only repeat if the user chose &quot;hit me&quot;. The logic for &quot;stand&quot; will go after the while loop. I leave the details for you to figure out.</p>
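Stripped of the card logic, the suggested control flow looks like this (`input()` replaced by a scripted list of choices so the sketch runs non-interactively):

```python
# Scripted stand-in for input(): the player hits twice, then stands.
choices = iter(['1', '1', '2'])

player_total = 10
events = []

playerchoice = '1'
while playerchoice == '1':
    playerchoice = next(choices)
    if playerchoice == '1':
        player_total += 5       # stand-in for dealing a card
        events.append('hit')
# On "stand" the condition fails, the loop exits, and execution
# falls through here -- which is where the dealer logic belongs.
events.append('dealer turn')
```

The key point: breaking out of the loop does not end the program; whatever code comes after the `while` block (the dealer's turn) still runs.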
python
0
1,906,684
17,687,551
python django project and folder structure (differing from WAMP)
<p>I have my development environment setup on Win 7 like this:</p> <p><strong>Django development structure</strong></p> <pre><code>Apache -server- C:\Program Files (x86)\Apache Software Foundation\Apache2.4 PostgreSQL -database- C:\Program Files\PostgreSQL\9.2 Django -framework- C:\Python27\Lib\site-packages\django Python -code- C:\Python27 Project -root- C:\mysite |----------apps |----------HTML |----------CSS |----------JavaScript |----------assets </code></pre> <p>I am attempting to keep this extremely simple to start out. There are 5 main directories each with a distinct purpose. All the code resides in the project folder.</p> <p><strong>compared to WAMP structure:</strong></p> <pre><code>C:\WAMP |----------C:\Apache |----------C:\MySQL |----------C:\PHP |----------C:\www </code></pre> <p>I like how Apache, MySQL, and PHP all reside in a neat directory. I know to keep the root project OUTSIDE in another directory in Django for security reasons.</p> <ul> <li>Is it fine that Apache, PostgreSQL, and Python are installed all over the place in the Django environment?</li> <li>Did I miss a core Django component and/or directory?</li> <li>Will deploying and scaling be a problem?</li> </ul> <p>I want this to be a guideline for beginning Django web programmers.</p>
<p>Apache is just a web server: it is used to serve files, but you do not necessarily need it to make a website while developing. Django comes with its own development server. See:</p> <pre><code>python manage.py runserver </code></pre> <p>Apache is required when you are developing PHP websites because your computer does not know how to compile and interpret PHP on its own. For Django you use the Python language, which you have already installed if you are using Django.</p> <p>Read <a href="https://docs.djangoproject.com/en/1.5/intro/tutorial01/" rel="nofollow">https://docs.djangoproject.com/en/1.5/intro/tutorial01/</a></p> <p>And when it is time to set up your own server using Apache, look at: <a href="https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/modwsgi/" rel="nofollow">https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/modwsgi/</a>.</p>
python|django|directory-structure|project-structure
1
1,906,685
60,787,392
I am returning a list from my function, but when I print that returned list, it prints NONE
<pre><code>def f(N, end): if end==-1: N=[1]+N print (N) return (N) if N[end]!=9: N[end]+=1 return (N) if N[end]==9: N[end]=0 end-=1 print (N) f(N,end) L=[9,9,9,9,9] print(f(L, len(L)-1)) </code></pre>
<p>You are missing the <code>return</code> on the recursive call in the second if statement:</p> <pre><code> if N[end]!=9: N[end]+=1 return (N) if N[end]==9: N[end]=0 end-=1 print (N) return f(N,end) </code></pre> <p>Without that <code>return</code>, the recursive result is discarded and the outer call falls off the end of the function, which is why Python returns <code>None</code>. With the <code>return</code> added, your code should work.</p>
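For completeness, here is the repaired function run on the question's input (same logic, with the missing return added and the structure tidied):

```python
def f(N, end):
    # Carried past the most significant digit: prepend a new leading 1.
    if end == -1:
        return [1] + N
    if N[end] != 9:
        N[end] += 1
        return N
    # N[end] == 9: set it to 0 and carry into the next digit to the left.
    N[end] = 0
    return f(N, end - 1)   # the return that was missing in the question

L = [9, 9, 9, 9, 9]
result = f(L, len(L) - 1)   # [1, 0, 0, 0, 0, 0]
```

Note the function still mutates its argument in place, as the original did; only the propagation of the return value changed.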
python|arrays|function|printing
0
1,906,686
66,018,691
How do I select one value from a table which was imported from Excel into Python with Pandas?
<p>I want to select just one value from a table which was imported into Python using Pandas.</p> <p>For example:</p> <pre><code>import pandas as pd data = pd.read_excel (r'x,y points.xlsx') print (data) </code></pre> <p>Output:</p> <pre><code> x y 0 8 12 1 9 10 2 11 11 3 11 12 4 13 14 5 14 16 6 18 21 7 15 17 </code></pre> <p>How do I select just one of the values.. For example the '18' in the 'x' field?</p>
<p>You should use <code>loc</code> (<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html</a>) It takes the index of the selection and then the column, for example the value 18 is in the index <code>6</code> and column <code>x</code> so it will be</p> <pre><code>value = data.loc[6, &quot;x&quot;] </code></pre>
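A self-contained sketch, rebuilding the question's table inline instead of reading it from Excel:

```python
import pandas as pd

# The question's table, reconstructed inline.
data = pd.DataFrame({'x': [8, 9, 11, 11, 13, 14, 18, 15],
                     'y': [12, 10, 11, 12, 14, 16, 21, 17]})

value = data.loc[6, 'x']   # row label 6, column 'x'
also_value = data.at[6, 'x']   # .at is a faster accessor for one scalar
```

`loc` uses the index *labels* (here the default 0..7 range), not positions; for purely positional access there is `iloc` instead.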
python|pandas
0
1,906,687
69,045,234
Calculation of the mean_pressure_weighted function
<p>The <a href="https://unidata.github.io/MetPy/latest/api/generated/metpy.calc.mean_pressure_weighted.html#metpy.calc.mean_pressure_weighted" rel="nofollow noreferrer">mean pressure weighted</a> function defined <a href="https://github.com/Unidata/MetPy/blob/3dc05c8ae40a784956327907955fc09a3e46e0e1/src/metpy/calc/indices.py#L99" rel="nofollow noreferrer">here</a> seems to be based on an odd formulation(see code below). Holton(fifth edition ,page 20), and many otheres calculate the sum the of the desired variable multiplied by dp and not by pdp as shown in the code below. Also most authors normalize the result by summation of dp which is sufrace pressure - top pressure. Yet, the code below use sufrace pressure^2 - top pressure^2. Is there is any reference for the formula used below. Thanks</p> <pre><code># Taking the integral of the weights (pressure) to feed into the weighting # function. Said integral works out to this function: pres_int = 0.5 * (pres_prof[-1] ** 2 - pres_prof[0] ** 2) # Perform integration on the profile for each variable return [np.trapz(var_prof * pres_prof, x=pres_prof) / pres_int for var_prof in others] </code></pre>
<p>Unfortunately I don't have access to my copy of Holton right now, so I can't look at what's done there. I can say that if you weight by <code>dp</code> rather than <code>p * dp</code>, you're not calculating the <em>pressure weighted</em> mean, you're only calculating the mean.</p> <p>The formula used falls out directly from the definition of a <a href="https://mathinsight.org/averages_weighted_averages_refresher" rel="nofollow noreferrer">weighted average using an integral</a>, most importantly:</p> <p><a href="https://i.stack.imgur.com/ol2JH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ol2JH.png" alt="enter image description here" /></a></p> <p>When you substitute in <code>w(x)</code> as <code>p</code> and <code>dx</code> as <code>p</code> you get the integral of <code>p * dp</code>, which has an antiderivative of <code>p**2</code>.</p> <p>It would probably be useful to add to MetPy a function that does the same set of integrals without any weighting, since that is different than simply using <code>numpy.mean</code>.</p>
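A plain-NumPy sketch of that formulation (outside MetPy, on a hypothetical profile) showing it behaves like a proper weighted average: a constant profile averages to itself, and the pressure weighting pulls a varying profile toward the high-pressure end:

```python
import numpy as np

# np.trapz was renamed np.trapezoid in NumPy 2.0; support both.
trapz = getattr(np, 'trapezoid', None) or np.trapz

# Hypothetical pressure profile in hPa, surface first.
pres = np.linspace(1000.0, 500.0, 51)

# Integral of the weight w(p) = p over the layer, as in the MetPy source.
pres_int = 0.5 * (pres[-1] ** 2 - pres[0] ** 2)

# Sanity check: a constant profile must average to itself.
const = np.full_like(pres, 7.0)
const_mean = trapz(const * pres, x=pres) / pres_int

# A profile that increases with pressure: the p-weighting pulls the
# result above the plain arithmetic mean.
wind = pres / 100.0
weighted_mean = trapz(wind * pres, x=pres) / pres_int
```

Both signs in `pres_int` and the integral flip together for a decreasing profile, so the ratio is direction-independent, matching the MetPy code.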
python|metpy
1
1,906,688
72,598,717
How to read all spss files in a folder in pandas? and concatenate them
<p>I would like to read several SAV files (SPSS) from a directory into pandas and concatenate them into one big DataFrame. I have not been able to figure it out though. Here is what I have so far:</p> <pre><code> path = r'\C:\abc\path' all_files = glob.glob(path + &quot;\*.sav&quot;) df_list = [] for filename in all_files: df = pd.read_spss(filename,convert_categoricals=False) df_list.append(filename) pd.concat(df_list) </code></pre> <p><strong>I am getting the error below.</strong></p> <pre><code>OverflowError: date value out of range </code></pre> <p><strong>The below code is running fine but I am getting error when I am looping through files and reading them.</strong></p> <pre><code>df = pd.read_spss(all_files[0]) </code></pre>
<p>When you append an element to the list, you should add the DataFrame <code>df</code> instead of the <code>filename</code>, like this:</p> <pre><code>df_list.append(df) </code></pre> <p>Otherwise <code>pd.concat</code> receives a list of file-path strings rather than DataFrames.</p>
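A minimal sketch of the corrected loop, with stand-in DataFrames in place of `pd.read_spss` (the file names here are hypothetical):

```python
import pandas as pd

# Stand-ins for the frames that pd.read_spss would return per .sav file.
frames = {'a.sav': pd.DataFrame({'v': [1, 2]}),
          'b.sav': pd.DataFrame({'v': [3]})}

df_list = []
for filename, df in frames.items():
    df_list.append(df)          # the DataFrame, not the filename

combined = pd.concat(df_list, ignore_index=True)
```

`ignore_index=True` gives the combined frame a fresh 0..n-1 index instead of repeating each file's own row numbers.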
python|python-3.x|spss
0
1,906,689
59,128,468
How to get a list of lists from a dictionary of dictionaries in python?
<p>I want to convert a dictionary of dictionaries to a list of lists. The format of the dictionary is:</p> <pre><code>{0: {’apple’: 2, ’orange’: 5, ’banana’: 4}, 1: {'apple’: 2, ’orange’: 1, ’banana’: 7}} </code></pre> <p>Where the keys go from 0,1,2,3 etc... and the values of the keys inside the dictionary is the number of that fruit. I'm trying to make a list of lists that looks like:</p> <pre><code> [[2, 5, 4], [2, 1, 7]] </code></pre> <p>Where each sublist is the original key (0,1,2,3 etc...). So if there are 4 dictionaries, then there are 4 sublists. </p> <p>Id prefer with no fancy code and no imports. How would I go about doing this? Any help would be greatly appreciated!</p>
<pre><code>x = {0: {'apple': 2, 'orange': 5, 'banana': 4}, 1: {'apple': 2, 'orange': 1, 'banana': 7}} print([list(z.values()) for y,z in x.items()]) </code></pre>
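One caveat worth noting: the comprehension above relies on the dict's insertion order. Sorting the outer keys makes the sublist order explicit:

```python
x = {0: {'apple': 2, 'orange': 5, 'banana': 4},
     1: {'apple': 2, 'orange': 1, 'banana': 7}}

# Sorting by the outer key guarantees sublist i corresponds to key i,
# rather than relying on insertion order.
result = [list(inner.values()) for _, inner in sorted(x.items())]
```

The inner `values()` still follow each inner dict's own insertion order, which here is apple, orange, banana.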
python|python-3.x|list|dictionary|converters
0
1,906,690
63,023,447
How to download a list of csv using python?
<p>I'm running a python program to download a selected list of CSV files from canada.ca. I have all the urls I need but I don't know how to download them to my local directory. I believe that I have to use a request, and write the files in a loop. But i'm kind lost on how to do it, thanks in advance.</p> <pre><code>en_urls = [] for link in soup.find_all('a'): if 'EN.csv' in link.get('href', []): en_urls.append(link.get('href')) Output ['http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/Positive_Employers_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2015_Positive_Employers_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2016_Positive_Employer_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2017Q1Q2_Positive_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2017Q3_Positive_Employer_Stream_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2018Q1_Positive_Employer_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2018Q2_Positive_Employer_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2017Q4_Positive_Employer_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2018Q3_Positive_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2018Q4_Positive_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/imt-lmi/TFWP_2019Q1_employer_positive_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/imt-lmi/TFWP_2019Q2_employer_positive_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/imt-lmi/TFWP_2019Q3_Positive_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/imt-lmi/TFWP_2019Q4_Positive_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/imt-lmi/TFWP_2020Q1_Positive_EN.csv'] </code></pre>
<p>You can use <a href="https://docs.python.org/3.8/library/urllib.request.html#urllib.request.urlretrieve" rel="nofollow noreferrer"><code>urllib.request.urlretrieve()</code></a> in a loop.</p> <p>For example:</p> <pre><code>import urllib.request lst = ['http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/Positive_Employers_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2015_Positive_Employers_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2016_Positive_Employer_EN.csv', 'http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2017Q1Q2_Positive_EN.csv'] for i in lst: print('Downloading {}..'.format(i)) local_filename, _ = urllib.request.urlretrieve(i, filename=i.split('/')[-1]) print('File saved as {}'.format(local_filename)) </code></pre> <p>Prints:</p> <pre><code>Downloading http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/Positive_Employers_EN.csv.. File saved as Positive_Employers_EN.csv Downloading http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2015_Positive_Employers_EN.csv.. File saved as 2015_Positive_Employers_EN.csv Downloading http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2016_Positive_Employer_EN.csv.. File saved as 2016_Positive_Employer_EN.csv Downloading http://www.edsc-esdc.gc.ca/ouvert-open/bca-seb/ae-ei/2017Q1Q2_Positive_EN.csv.. File saved as 2017Q1Q2_Positive_EN.csv </code></pre>
python|csv|beautifulsoup|python-requests|data-analysis
1
1,906,691
62,310,937
What is the use of 'as_supervised' boolean expression while loading a Tensorflow Dataset?
<p>Let's say that I'm trying to import the <em>imdb_reviews</em> dataset from <strong>tensorflow_datasets</strong> using the following: </p> <pre><code>imdb, info = tfds.load("imdb_reviews", with_info=True, as_supervised=True) </code></pre> <p>Now in this, I tried to find in the documentation about the <em>as_supervised</em> boolean but didn't understand. Can anyone help me here?</p>
<p>If <code>as_supervised=True</code>, the resulting Dataset will be a collection of <code>(input, label)</code> tuples, i.e. each example paired with its label:</p> <p>("a horrible thing", "bad"), ("a wonderful thing", "good"), ("woe be me", "bad")…</p> <p>With the default <code>as_supervised=False</code>, each example is instead a dictionary containing all of its features.</p>
tensorflow|keras|deep-learning|dataset
1
1,906,692
62,065,617
How to read & write Local MySQL Server 8 from Google Colab with Pyspark?
<p>I have been trying but failing to write/read tables from MySQL Server 8.0.19 on localhost on Windows 10 with pyspark from Google colab. There's also a lot of similar questions and with some suggested answers but none of the solutions seem to work here. Here is my code:</p> <pre><code> &lt;...installations ...&gt; from pyspark.sql import SparkSession spark = SparkSession\ .builder\ .appName("Word Count")\ .config("spark.driver.extraClassPath", "/content/spark-2.4.5-bin-hadoop2.7/jars/mysql-connector-java-8.0.19.jar")\ .getOrCreate() </code></pre> <p>An here is the connection string:</p> <pre><code>MyjdbcDF = spark.read.format("jdbc")\ .option("url", "jdbc:mysql://127.0.0.1:3306/mydb?user=testuser&amp;password=pwtest")\ .option("dbtable", "collisions")\ .option("driver","com.mysql.cj.jdbc.Driver")\ .load() </code></pre> <p>I have as well used the <code>.option("driver","com.mysql.jdbc.Driver")</code> but still keep getting this error:</p> <pre><code>Py4JJavaError: An error occurred while calling o154.load. com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. ... ... ... Caused by: java.net.ConnectException: Connection refused (Connection refused) </code></pre> <p>From this, I guess that MySQL Sever is not reachable. I have Telnetted to port 3306 &amp; it confirmed that MySQL Server is accepting connections from client machine. I have read that running: <code>netsh advfirewall firewall add rule name="MySQL Server" action=allow protocol=TCP dir=in localport=3306</code> will permitting firewall rule for MySQL Server incase it was being blocked, yet no change.</p> <p>Can somebody help outpy?</p>
<p>Here's how I install and set up MySQL on Colab</p> <pre class="lang-py prettyprint-override"><code># install, set connection !apt-get install mysql-server &gt; /dev/null !service mysql start !mysql -e "ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'root'" !pip -q install PyMySQL %load_ext sql %config SqlMagic.feedback=False %config SqlMagic.autopandas=True %sql mysql+pymysql://root:root@/ # query using %sql or %%sql df = %sql SELECT Host, User, authentication_string FROM mysql.user df </code></pre>
python|mysql|pyspark|google-colaboratory
5
1,906,693
62,318,418
Pandas: Merge two DataFrames (same columns) with condition... How can I improve this code?
<p>(Sorry, my English is bad...)</p> <p>I'm studying with public data. I'm trying to merge two Excel files with some conditions. I tried nested-loop code, but it's too slow... How can I improve my code?</p> <p>Please help me TvT</p> <h1>Data structure example</h1> <p>old data(entire_file.xlsx)</p> <pre><code> KeyCode Date Something 0 aaa 2020-01-01 00:00:00 adaf 1 bbb 2020-02-01 00:00:00 awd 2 ccc 2020-03-01 00:00:00 feq ... 6000 aewi 2020-03-03 00:00:00 awefeaw </code></pre> <p>new data(file2.xlsx)</p> <pre><code> KeyCode Date Something 1 bbb 2020-06-01 20:00:00 aafewfaewfaw 2 ccc 2020-06-01 20:00:00 dfqefqe 3 new 2020-06-01 20:00:00 newrow </code></pre> <p>hope(file3.xlsx)</p> <pre><code> KeyCode Date Something 0 aaa 2020-01-01 00:00:00 adaf 1 bbb 2020-06-01 20:00:00 aafewfaewfaw 2 ccc 2020-06-01 20:00:00 dfqefqe ... 6000 aewi 2020-03-03 00:00:00 awefeaw 6001 new 2020-06-01 20:00:00 newrow </code></pre> <p><strong>Code:</strong></p> <pre class="lang-py prettyprint-override"><code> import numpy as np import pandas as pd %matplotlib notebook import matplotlib.pyplot as plt data = pd.read_excel('fulldata_01_01_01_P_병원.xlsx', index_col='번호') tmp = pd.read_excel('(20200601~20200607)_01_01_01_P_병원.xlsx', index_col='번호') print('{} is tmp rows count'.format(len(tmp.index))) print('{} is data rows count'.format(len(data.index))) new_data = pd.DataFrame([]) for j in range(len(tmp.index)): ischange = False; isexist = False; for i in range(len(data.index)): if (data.iloc[i].loc['KeyCode'] == tmp.iloc[j].loc['KeyCode']) and (data.iloc[i].loc['Date'] &lt; tmp.iloc[j].loc['Date']) : ischange = True data.iloc[i] = tmp.iloc[j] break elif (data.iloc[i].loc['KeyCode'] == tmp.iloc[j].loc['KeyCode']) : isexist = True break if ischange : print('{} is change'.format(j)) elif isexist : print('{} is exist'.format(j)) elif not ischange and not isexist : print('{} is append'.format(j)) new_data.append(tmp.iloc[j], ignore_index=True) data.append(new_data, ignore_index=True) print('{} is tmp rows count'.format(len(tmp.index))) print('{} is data rows count'.format(len(data.index))) </code></pre> <p>But... it is not working...</p>
<p>If you just want to keep the new data, or the updated rows where they exist:</p> <pre><code>result = pd.concat([data, tmp], ignore_index=True, sort=False) result = result.sort_values(['KeyCode', 'Date'], ascending=[True, True]) # order each KeyCode oldest-first to find duplicates result = result.drop_duplicates('KeyCode', keep='last') # keep the newest row per KeyCode </code></pre>
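A runnable sketch of that concat-and-dedupe approach on toy versions of the question's frames (values hypothetical):

```python
import pandas as pd

# Toy versions of the question's frames (data values are hypothetical).
data = pd.DataFrame({
    "KeyCode": ["aaa", "bbb", "ccc"],
    "Date": pd.to_datetime(["2020-01-01", "2020-02-01", "2020-03-01"]),
    "Something": ["adaf", "awd", "feq"],
})
tmp = pd.DataFrame({
    "KeyCode": ["bbb", "ccc", "new"],
    "Date": pd.to_datetime(["2020-06-01 20:00", "2020-06-01 20:00", "2020-06-01 20:00"]),
    "Something": ["aafewfaewfaw", "dfqefqe", "newrow"],
})

# Stack both frames, order each KeyCode oldest-first,
# then keep only the last (newest) row per KeyCode.
result = pd.concat([data, tmp], ignore_index=True, sort=False)
result = result.sort_values(["KeyCode", "Date"], ascending=[True, True])
result = result.drop_duplicates("KeyCode", keep="last").reset_index(drop=True)
print(result)
```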
python|pandas
0
1,906,694
31,560,065
Handling Parquet Files in Python
<p>I am trying to handle Parquet tables from Hive in Python and am facing some data type issues. For example, if I have a field in my Hive Parquet table declared as </p> <p><code>decimal (10,2)</code>, it's giving a junk value when I try to read the file in Python. Please give some inputs on this.</p>
<p>I thought this might help a bit, although it isn't a proper answer. I have this method in my PySpark job before I store to Parquet, for example, to convert decimals to floats so they read OK in pandas DataFrames. In this case I am shrinking the types, but you get the idea:</p> <pre><code>from pyspark.sql import functions as F def shrink_types(df): &quot;&quot;&quot;Reduce data size by shrinking the types&quot;&quot;&quot; # Loop through the data type tuples and downcast the column for t in df.dtypes: column_name = t[0] column_type = t[1] if column_type == 'double' or 'decimal' in column_type: df = df.withColumn( column_name, F.col(column_name).cast('float') ) return df </code></pre> <p>Then I call it via:</p> <pre><code>equities_df = shrink_types(equities_df) # Save and restore so it actually runs equities_df.write.mode('overwrite').parquet( path='s3://bucket/path/dataset.parquet', ) </code></pre>
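The same idea can be sketched without Spark: if a reader hands you the `decimal(10,2)` values as Python `decimal.Decimal` objects, you can cast those columns to float in pandas. Column names here are made up for the example:

```python
import decimal
import pandas as pd

# Hypothetical frame: this is how decimal(10,2) values often surface
# when read back into Python, as object columns of decimal.Decimal.
df = pd.DataFrame({
    "price": [decimal.Decimal("10.25"), decimal.Decimal("3.50")],
    "name": ["a", "b"],
})

def decimals_to_float(df):
    """Cast any column whose values are all decimal.Decimal to float64."""
    for col in df.columns:
        if df[col].map(lambda v: isinstance(v, decimal.Decimal)).all():
            df[col] = df[col].astype(float)
    return df

df = decimals_to_float(df)
print(df.dtypes)
```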
python|hive|parquet
0
1,906,695
15,728,054
Python data transfer via bluetooth
<p>I need to send data computed by a <strong>Python</strong> program to my <strong>Lego Mindstorms NXT 2.0</strong> robot via <strong>Bluetooth</strong>. How is this possible? What tools do I require?</p>
<p>I have been trying to do the same thing. There is an information file about the Bluetooth communication at <a href="http://us.mindstorms.lego.com/en-us/support/files/default.aspx" rel="nofollow">the Mindstorms web site.</a> If you are using Python 3.3 or newer, you can use <a href="http://docs.python.org/3/howto/sockets.html" rel="nofollow">sockets</a> to communicate. I still have not gotten this to work. If you are not using Python 3.3, you could use PyBluez. </p>
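If you go the socket route, the NXT expects each Bluetooth telegram to be prefixed with its length as a 2-byte little-endian integer. Below is a sketch that builds a "MessageWrite" direct-command packet; the opcode (0x09) and payload layout are from my recollection of the NXT Bluetooth Developer Kit, so verify them against the official documentation before relying on this:

```python
import struct

def nxt_message_write(text, mailbox=0, require_response=False):
    """Build an NXT 'MessageWrite' direct-command telegram (sketch).

    Layout assumed here: [command_type, opcode=0x09, mailbox, size,
    message bytes..., 0x00], all prefixed over Bluetooth with a
    2-byte little-endian length header.
    """
    payload = text.encode("ascii") + b"\x00"           # null-terminated message
    command_type = 0x00 if require_response else 0x80  # 0x80 = no reply wanted
    body = bytes([command_type, 0x09, mailbox, len(payload)]) + payload
    return struct.pack("<H", len(body)) + body         # Bluetooth length header

pkt = nxt_message_write("hi")
print(pkt.hex())
```

You would write `pkt` to the socket (or serial port) connected to the NXT's Bluetooth channel.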
python|bluetooth|lego-mindstorms
0
1,906,696
59,823,164
Is there a way to enter a string starting with '-' sign as a command line argument using the argparse module? (without using flags)
<p>So I have this code:</p> <pre><code># driver code if __name__ == "__main__": # parse command line arguments parser = argparse.ArgumentParser() parser.add_argument("InputDataFile", help="Enter the name of CSV file with .csv extension",type=str) parser.add_argument("Weights", nargs=1, help="Enter the weight vector comma separated" ,type=str) parser.add_argument("Impacts", nargs=1, help="Enter the impact vector comma separated",type=str) args = parser.parse_args() main(vars(args)) </code></pre> <p>and I want to enter a string like</p> <pre><code>python top2.py data.csv "0,1,1,1" "-,+,+,+" </code></pre> <p>but I get an input error:</p> <pre><code>usage: top2.py [-h] InputDataFile Weights Impacts top2.py: error: the following arguments are required: Impacts </code></pre> <p>The code works properly if the first character of the input string is a '+' sign, with a '-' anywhere in between (as in "+,-,+"). But if the first char is '-', I get the above error. What I am guessing is that the parser takes the '-' hyphen as the beginning of another flag and its arguments.</p> <p>I couldn't find any relevant material online, please help.</p> <p>And it is important to input the string in the manner given above, so I cannot change the input format.</p> <p>Edit: if I enter the string as "- ,+,+,+" or add spaces anywhere in the string, the code runs fine. </p>
<p>You can simply add a lone <code>--</code> to your command line to indicate "this is the end of options", like this:</p> <pre><code>python top2.py -- data.csv "0,1,1,1" "-,+,+,+" </code></pre> <p>Everything after the <code>--</code> is parsed as a positional argument rather than an option.</p>
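As a runnable sketch mirroring the question's parser, showing that the lone `--` makes the leading-hyphen value parse as a positional:

```python
import argparse

parser = argparse.ArgumentParser(prog="top2.py")
parser.add_argument("InputDataFile", type=str)
parser.add_argument("Weights", nargs=1, type=str)
parser.add_argument("Impacts", nargs=1, type=str)

# Without the "--", argparse would treat "-,+,+,+" as an unknown option;
# the lone "--" marks the end of options, so everything after it is
# consumed as positional arguments.
args = parser.parse_args(["--", "data.csv", "0,1,1,1", "-,+,+,+"])
print(args)
```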
python|command-line|command-line-interface|argparse
4
1,906,697
59,629,609
Seaborn heatmap top and bottom row are partially truncated
<p>I am trying to make a heatmap out of a dataframe, but the size of the blocks in the first and last row are not as equal as the blocks in the other rows. How can I fix this problem?</p> <p>P.S. I am using python3 and seaborn library to produce the heatmap.</p> <p><a href="https://i.stack.imgur.com/hDRRj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hDRRj.png" alt="enter image description here"></a></p>
<p>This <a href="https://github.com/mwaskom/seaborn/issues/1773" rel="nofollow noreferrer">issue</a> has been raised and closed on the Seaborn GitHub. The solution found there by <a href="https://github.com/ResidentMario" rel="nofollow noreferrer">ResidentMario</a> &amp; <a href="https://github.com/MaozGelbart" rel="nofollow noreferrer">MaozGelbart</a> was:</p> <blockquote> <p>This was a matplotlib regression introduced in 3.1.1 which has been fixed in 3.1.2 (still forthcoming). For now the fix is to downgrade matplotlib to a prior version.</p> </blockquote> <p>And later,</p> <blockquote> <p>Matplotlib 3.1.2 has been released (also available for conda users through conda-forge using conda install -c conda-forge matplotlib=3.1.2). This fixes the issue.</p> </blockquote>
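As a small sketch, you can check whether an installed version string falls on the affected release (per the issue above, the regression is specific to matplotlib 3.1.1); pass in `matplotlib.__version__`:

```python
def is_affected(version_string):
    """True if this matplotlib version is the 3.1.1 release with the regression."""
    parts = []
    for p in version_string.split("."):
        digits = "".join(ch for ch in p if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts[:3]) == (3, 1, 1)

print(is_affected("3.1.1"), is_affected("3.1.2"))
```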
python|python-3.x|pandas|matplotlib|seaborn
3
1,906,698
49,312,476
How to open and iterate through a list of CSV files - Python
<p>I have a list of pathnames to CSV files. I need to open each CSV file, take the data without the header and then merge it all together into a new CSV file.</p> <p>I have this code which gets me the list of CSV file pathnames:</p> <pre><code>file_list = [] for folder_name, sub_folders, file_names in os.walk(wd): for file_name in file_names: file_extention = folder_name + '\\' + file_name if file_name.endswith('csv'): file_list.append(file_extention) </code></pre> <p>An example of my list is:</p> <pre><code>['C:\\Users\\Documents\\GPS_data\\West_coast\\Westland\\GPS_data1.csv', 'C:\\Users\\Documents\\GPS_data\\West_coast\\Westland\\GPS_data2.csv', 'C:\\Users\\Documents\\GPS_data\\West_coast\\Westland\\GPS_data3.csv'] </code></pre> <p>I am struggling to figure out what to do, any help would be greatly appreciated. Thanks.</p>
<p>The main idea is to read in each line of a file and write it to the new file, but remember to skip the first line, which holds the column headers. I previously recommended the <code>csv</code> module; however, it doesn't seem necessary, since this task does not require parsing the data.</p> <pre><code>file_list = ['data1.csv','data2.csv'] with open('new.csv', 'w') as newfile: # create a new file for filename in file_list: with open(filename) as csvfile: next(csvfile) # skip the header row for row in csvfile: newfile.write(row) # write to the new csv file </code></pre> <p>Edit: clarified my answer.</p>
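A self-contained, runnable version of the same approach, using throwaway files so you can test it end to end (filenames and data are hypothetical):

```python
import csv
import os
import tempfile

# Build two throwaway CSV files that share the same header row.
tmpdir = tempfile.mkdtemp()
file_list = []
for i, rows in enumerate([[("1", "a")], [("2", "b"), ("3", "c")]]):
    path = os.path.join(tmpdir, "data{}.csv".format(i))
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "value"])  # header in every file
        writer.writerows(rows)
    file_list.append(path)

# Merge: concatenate every file's data rows, skipping each header.
merged = os.path.join(tmpdir, "new.csv")
with open(merged, "w", newline="") as newfile:
    for filename in file_list:
        with open(filename, newline="") as csvfile:
            next(csvfile)          # skip this file's header row
            for row in csvfile:
                newfile.write(row)

with open(merged) as f:
    print(f.read())
```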
python|csv
2
1,906,699
60,016,679
Send multiple files with Flask?
<p>I have this code for sending a single file to the user:</p> <p>Server side:</p> <pre><code>@app.route('/image', methods=['GET', 'POST']) def image(): # CU.close() SqlPicPlace = "SELECT ImgData FROM tablename WHERE ImgSaveID=2" CU.execute(SqlPicPlace) ans = CU.fetchone() imgBinary = ans[0] return send_file( io.BytesIO(imgBinary), attachment_filename='a.jpg', mimetype='image/jpg', as_attachment=True ) </code></pre> <p>But I want to send more than one file to the user. How can I do that? </p>
<p>A single HTTP response cannot carry multiple file attachments, but you can zip your files into one archive and send that to the user.</p>
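A sketch of that approach: build the zip archive in memory with the standard library, then hand the buffer to `send_file`. The Flask call is shown as a comment here (untested, and `attachment_filename` matches the older Flask API used in the question):

```python
import io
import zipfile

def zip_files(named_blobs):
    """Pack a {filename: bytes} mapping into an in-memory zip archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, blob in named_blobs.items():
            zf.writestr(name, blob)
    buf.seek(0)  # rewind so the buffer can be read from the start
    return buf

# Hypothetical image bytes standing in for the database blobs.
archive = zip_files({"a.jpg": b"first image bytes", "b.jpg": b"second image bytes"})

# In the Flask view you would then return something like:
#   return send_file(archive, attachment_filename='images.zip',
#                    mimetype='application/zip', as_attachment=True)
print(zipfile.ZipFile(archive).namelist())
```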
python|flask|sendfile
1