Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, 10 to 150 chars) | question (string, 21 to 64.2k chars) | answer (string, 19 to 59.4k chars) | tags (string, 5 to 112 chars) | score (int64, -10 to 17.3k)
---|---|---|---|---|---|---|
1,906,900 | 23,317,781 |
ValueError while using Scikit learn. Number of features of model don't match that of input
|
<p>I am working on a classification problem using RandomForestClassifier. In the code I'm splitting the dataset into train and test data to make predictions.</p>
<p>Here's the code:</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import train_test_split
import numpy as np
from numpy import genfromtxt, savetxt

a = (np.genfromtxt(open('filepath.csv','r'), delimiter=',', dtype='int')[1:])
a_train, a_test = train_test_split(a, test_size=0.33, random_state=0)

def main():
    target = [x[0] for x in a_train]
    train = [x[1:] for x in a_train]
    rf = RandomForestClassifier(n_estimators=100)
    rf.fit(train, target)
    predicted_probs = [[index + 1, x[1]] for index, x in enumerate(rf.predict_proba(a_test))]
    savetxt('filepath.csv', predicted_probs, delimiter=',', fmt='%d,%f',
            header='Id,PredictedProbability', comments='')

if __name__=="__main__":
    main()
</code></pre>
<p>On execution, however, I'm getting the following error:</p>
<blockquote>
<p>ValueError: Number of features of the model must match the input.
Model n_features is 1434 and input n_features is 1435</p>
</blockquote>
<p>Any suggestions as to how I should proceed? Thanks.</p>
|
<p>It looks like you are using <code>a_test</code> directly, without stripping out the output feature.</p>
<p>The model is confused because it expects only 1434 input features, but you are feeding it the 1434 features plus the output column, i.e. 1435 columns.</p>
<p>You can fix this by doing the same thing with <code>test</code> that you did with <code>train</code>.</p>
<pre><code>test = [x[1:] for x in a_test]
</code></pre>
<p>Then use <code>test</code> on the following line:</p>
<pre><code>predicted_probs = [[index + 1, x[1]] for index, x in enumerate(rf.predict_proba(test))]
</code></pre>
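<p>For completeness, a minimal sketch of the corrected <code>main()</code> under the same assumptions as the question (the first column is the target, the rest are features):</p>
<pre><code>def main():
    target = [x[0] for x in a_train]
    train = [x[1:] for x in a_train]
    test = [x[1:] for x in a_test]   # strip the target column from the test rows as well
    rf = RandomForestClassifier(n_estimators=100)
    rf.fit(train, target)
    predicted_probs = [[index + 1, x[1]] for index, x in enumerate(rf.predict_proba(test))]
    savetxt('filepath.csv', predicted_probs, delimiter=',', fmt='%d,%f',
            header='Id,PredictedProbability', comments='')
</code></pre>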
|
python|python-2.7|numpy|scikit-learn
| 4 |
1,906,901 | 8,169,561 |
How to speed up debugging of python programs in PyDev in Eclipse (especially Google App Engine)
|
<p>How is it possible to speed up debugging in PyDev in Eclipse for Google App Engine programs?</p>
<ol>
<li>How to speed up code execution?</li>
<li>How to speed up application reloading?</li>
</ol>
<p>Please share your experience or suggestions.</p>
|
<p>How often do you need to reload the application? The dev server will pick up all your code and configuration changes without needing a reload.</p>
|
python|debugging|google-app-engine|optimization|pydev
| 0 |
1,906,902 | 41,781,427 |
Replace number in a string by bracketed number Python
|
<p>I have a string like this:</p>
<pre><code>s = k0+k1+k1k2+k2k3+1+12
</code></pre>
<p>I want to convert this so that every number that follows a letter (<code>k</code> here) is surrounded by square brackets:</p>
<pre><code>k[0]+k[1]+k[1]k[2]+k[2]k[3]+1+12
</code></pre>
<p>What is a good way to do that?</p>
<p>What I tried: using the <code>replace()</code> function 4 times (but that cannot handle the numbers that do not follow a letter).</p>
|
<p>Here is one option using the <code>re</code> module with the regex <code>([a-zA-Z])(\d+)</code>, which matches a single letter followed by digits; with <code>sub</code> you can enclose the matched digits in a pair of brackets in the replacement:</p>
<pre><code>import re
s = "k0+k1+k1k2+k2k3+1+12"
re.sub(r"([a-zA-Z])(\d+)", r"\1[\2]", s)
# 'k[0]+k[1]+k[1]k[2]+k[2]k[3]+1+12'
</code></pre>
<hr>
<p>To replace the matched letters with upper case, you can use a lambda in the replacement positions to convert them to upper case:</p>
<pre><code>re.sub(r"([a-zA-Z])(\d+)", lambda p: "%s[%s]" % (p.groups(0)[0].upper(), p.groups(0)[1]), s)
# 'K[0]+K[1]+K[1]K[2]+K[2]K[3]+1+12'
</code></pre>
|
python|string|replace
| 6 |
1,906,903 | 47,132,665 |
Cartesian Product in Tensorflow
|
<p>Is there any easy way to do a cartesian product in Tensorflow, like itertools.product? I want to get the combinations of the elements of two tensors (<code>a</code> and <code>b</code>); in Python it is possible via itertools as <code>list(product(a, b))</code>. I am looking for an alternative in Tensorflow.</p>
|
<p>I'm going to assume here that both <code>a</code> and <code>b</code> are 1-D tensors.</p>
<p>To get the cartesian product of the two, I would use a combination of <code>tf.expand_dims</code> and <code>tf.tile</code>:</p>
<pre><code>a = tf.constant([1,2,3])
b = tf.constant([4,5,6,7])
tile_a = tf.tile(tf.expand_dims(a, 1), [1, tf.shape(b)[0]])
tile_a = tf.expand_dims(tile_a, 2)
tile_b = tf.tile(tf.expand_dims(b, 0), [tf.shape(a)[0], 1])
tile_b = tf.expand_dims(tile_b, 2)
cartesian_product = tf.concat([tile_a, tile_b], axis=2)
cart = tf.Session().run(cartesian_product)
print(cart.shape)
print(cart)
</code></pre>
<p>You end up with a len(a) * len(b) * 2 tensor where each combination of the elements of <code>a</code> and <code>b</code> is represented in the last dimension.</p>
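<p>For reference, with the example tensors above the printed result should look roughly like this (a worked example derived from the tensors defined above, not actual program output): the shape is <code>(3, 4, 2)</code> and each <code>[i, j]</code> entry is the pair <code>[a[i], b[j]]</code>.</p>
<pre><code>(3, 4, 2)
[[[1 4] [1 5] [1 6] [1 7]]
 [[2 4] [2 5] [2 6] [2 7]]
 [[3 4] [3 5] [3 6] [3 7]]]
</code></pre>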
|
python|tensorflow
| 11 |
1,906,904 | 47,144,092 |
Django BooleanField doesn't work on edit form
|
<p>My situation:</p>
<p><strong>Models.py</strong></p>
<pre><code>class Box(models.Model):
    is_empty = models.BooleanField(default=False)
    upload_date = models.DateTimeField(auto_add_now=True)
</code></pre>
<p><strong>Forms.py</strong></p>
<pre><code>class BoxForm(forms.ModelForm):
    class Meta:
        model = Box
        fields = (is_empty,)
</code></pre>
<p><strong>Views.py</strong></p>
<pre><code>def edit_box(request, pk):
    box = get_object_or_404(Box, pk=pk)
    if request.method == 'POST':
        form = BoxForm(request.POST, instance=box)
        if form.is_valid():
            form.save()
            return redirect('home')
    else:
        form = BoxForm(instance=box)
    return render(request, 'form.html', {'form': form})
</code></pre>
<p>The problem is that when I open the template form and try to change the checkbox value, I can't do it; it seems disabled and I don't know why.</p>
<p><strong>EDIT</strong>
<strong>Template.html</strong></p>
<pre><code>...
<form class="row gap-y" method="POST">{% csrf_token %}
<div class="col-2">
<div class="checkbox">
{% render_field form.is_empty class="form-control" %}
<label for="id_description-is_empty">Is empty?</label>
</div>
</div>
<div class="col-2">
<button class="btn btn-bold btn-primary" type="submit">Save</button>
</div>
</form>
...
</code></pre>
|
<p>Finally I found the solution:</p>
<p>I had to change <strong>Template.html</strong> like this:</p>
<pre><code>...
<form class="row gap-y" method="POST">{% csrf_token %}
<div class="col-2">
<div class="checkbox">
{{ form.is_empty }}
<label for="id_is_empty">Is empty?</label>
</div>
</div>
<div class="col-2">
<button class="btn btn-bold btn-primary" type="submit">Save</button>
</div>
</form>
...
</code></pre>
<p>For checkbox fields it is not necessary to render <code>class="form-control"</code>. Then, using the browser console, look up the field's <strong>id</strong> and copy it into the label's <strong>for</strong> attribute.</p>
|
python|django
| 0 |
1,906,905 | 58,475,716 |
Scikit-learn with Spacy parallelization error with RandomizedSearchCV
|
<p>I use Sklearn and Spacy to make an NLP machine learning model, but I get a parallelization error when I train my model with the class <code>RandomizedSearchCV()</code>.</p>
<p>My class <code>TextProcessor</code> allows me to do text processing with the Spacy library.</p>
<pre><code>class TextProcessor(BaseEstimator, TransformerMixin):
    def __init__(self, remove_stop_word=False):
        self.remove_stop_word = remove_stop_word
        self.nlp = spacy.load('en')
        self.punctuations = string.punctuation

    def spacy_text_processing(self, sentence):
        '''
        This function allow to process the text with spacy
        '''
        final_sentence = []
        for word in self.nlp(sentence):
            if self.remove_stop_word:
                if word.is_stop:
                    continue
            if word.text not in self.punctuations:
                final_sentence.append(word.lemma_)
        return final_sentence

    def transform(self, X, y=None):
        X_transformed = []
        for sentence in X:
            X_transformed.append(' '.join(self.spacy_text_processing(sentence)))
        return X_transformed

    def fit(self, X, y=None):
        return self
</code></pre>
<p>After that I use a sklearn pipeline to perform different processing steps on the text, and finally I add an SVR model (the error occurs with any type of model). But when I use the parameter <code>n_jobs</code> with a value other than 1, I get a parallelization error.</p>
<pre><code>param_grid = {...}

svr_model = Pipeline([('text_processing', TextProcessor()),
                      ('vectorizer', CountVectorizer()),
                      ('tfidf', TfidfTransformer()),
                      ('svr', SVR())])

random_search_svr = RandomizedSearchCV(svr_model, param_grid, scoring='neg_mean_absolute_error', n_jobs=-1)
random_search_svr.fit(X_train, y_train)
</code></pre>
<p>This problem is very annoying because training models with classes like <code>GridSearchCV()</code> and <code>RandomizedSearchCV()</code> take a lot of time. Would there be any way to solve the problem or get around it?</p>
<p>The variables X_train and y_train contain the following sample values:</p>
<pre><code>X_train = ["Morrisons book second consecutive quarter of sales growth", "Glencore to refinance its short-term debt early, shares rise", ...] #List of sentences
y_train = [0.43, 0.34, ...] #Sentiment between -1 and 1 associate to the sentence
</code></pre>
<p>The error is : </p>
<pre><code>Exception in thread QueueFeederThread:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\backend\queues.py", line 150, in _feed
obj_ = dumps(obj, reducers=reducers)
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\backend\reduction.py", line 243, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\backend\reduction.py", line 236, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\cloudpickle\cloudpickle.py", line 284, in dump
return Pickler.dump(self, obj)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 437, in dump
self.save(obj)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 887, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 816, in save_list
self._batch_appends(obj)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 843, in _batch_appends
save(tmp[0])
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 771, in save_tuple
save(element)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 771, in save_tuple
save(element)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 816, in save_list
self._batch_appends(obj)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 840, in _batch_appends
save(x)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 771, in save_tuple
save(element)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 786, in save_tuple
save(element)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 524, in save
rv = reduce(self.proto)
File "stringsource", line 2, in preshed.maps.PreshMap.__reduce_cython__
TypeError: self.c_map cannot be converted to a Python object for pickling
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\threading.py", line 917, in _bootstrap_inner
self.run()
File "C:\ProgramData\Anaconda3\lib\threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\backend\queues.py", line 175, in _feed
onerror(e, obj)
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\process_executor.py", line 310, in _on_queue_feeder_error
self.thread_wakeup.wakeup()
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\process_executor.py", line 155, in wakeup
self._writer.send_bytes(b"")
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 183, in send_bytes
self._check_closed()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 136, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
---------------------------------------------------------------------------
_RemoteTraceback Traceback (most recent call last)
_RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\backend\queues.py", line 150, in _feed
obj_ = dumps(obj, reducers=reducers)
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\backend\reduction.py", line 243, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\backend\reduction.py", line 236, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\cloudpickle\cloudpickle.py", line 284, in dump
return Pickler.dump(self, obj)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 437, in dump
self.save(obj)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 887, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 816, in save_list
self._batch_appends(obj)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 843, in _batch_appends
save(tmp[0])
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 771, in save_tuple
save(element)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 771, in save_tuple
save(element)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 816, in save_list
self._batch_appends(obj)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 840, in _batch_appends
save(x)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 771, in save_tuple
save(element)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 786, in save_tuple
save(element)
File "C:\ProgramData\Anaconda3\lib\pickle.py", line 524, in save
rv = reduce(self.proto)
File "stringsource", line 2, in preshed.maps.PreshMap.__reduce_cython__
TypeError: self.c_map cannot be converted to a Python object for pickling
"""
The above exception was the direct cause of the following exception:
PicklingError Traceback (most recent call last)
<ipython-input-12-8979d799633f> in <module>
15
16 random_search_svr = RandomizedSearchCV(svr_grid_model, param_grid_svr,scoring='neg_mean_absolute_error',n_jobs=-1)
---> 17 random_search_svr.fit(X_train, y_train)
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py in fit(self, X, y, groups, **fit_params)
720 return results_container[0]
721
--> 722 self._run_search(evaluate_candidates)
723
724 results = results_container[0]
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py in _run_search(self, evaluate_candidates)
1513 evaluate_candidates(ParameterSampler(
1514 self.param_distributions, self.n_iter,
-> 1515 random_state=self.random_state))
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py in evaluate_candidates(candidate_params)
709 for parameters, (train, test)
710 in product(candidate_params,
--> 711 cv.split(X, y, groups)))
712
713 all_candidate_params.extend(candidate_params)
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py in __call__(self, iterable)
928
929 with self._backend.retrieval_context():
--> 930 self.retrieve()
931 # Make sure that we get a last message telling us we are done
932 elapsed_time = time.time() - self._start_time
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py in retrieve(self)
831 try:
832 if getattr(self._backend, 'supports_timeout', False):
--> 833 self._output.extend(job.get(timeout=self.timeout))
834 else:
835 self._output.extend(job.get())
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py in wrap_future_result(future, timeout)
519 AsyncResults.get from multiprocessing."""
520 try:
--> 521 return future.result(timeout=timeout)
522 except LokyTimeoutError:
523 raise TimeoutError()
C:\ProgramData\Anaconda3\lib\concurrent\futures\_base.py in result(self, timeout)
423 raise CancelledError()
424 elif self._state == FINISHED:
--> 425 return self.__get_result()
426
427 self._condition.wait(timeout)
C:\ProgramData\Anaconda3\lib\concurrent\futures\_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
PicklingError: Could not pickle the task to send it to the workers.
</code></pre>
<p>Version:</p>
<ul>
<li>Python: 3.7.1</li>
<li>Spacy: 2.2.1</li>
<li>Sklearn: 0.20.1</li>
</ul>
|
<p>Apparently, the issue lies with the parallel backend of sklearn, which uses <code>'loky'</code> by default. Changing the backend to <code>'multiprocessing'</code> solves this problem, as mentioned <a href="https://github.com/explosion/spaCy/issues/3193" rel="nofollow noreferrer">here</a>.</p>
<p>More info on sklearn parallel backend can be found <a href="https://scikit-learn.org/stable/modules/generated/sklearn.utils.parallel_backend.html" rel="nofollow noreferrer">here</a>.</p>
<p>First, import this: </p>
<pre><code>from sklearn.externals.joblib import parallel_backend
</code></pre>
<p>When running the fit, do this to overwrite the parallel backend:</p>
<pre><code>with parallel_backend('multiprocessing'):
random_search.fit(X_train, y_train)
</code></pre>
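<p>Note that in more recent scikit-learn releases joblib is no longer vendored under <code>sklearn.externals</code>, so the equivalent (an assumption about your installed versions) would be:</p>
<pre><code># If sklearn.externals.joblib is unavailable in your scikit-learn version, use joblib directly
from joblib import parallel_backend

with parallel_backend('multiprocessing'):
    random_search_svr.fit(X_train, y_train)
</code></pre>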
|
python|python-3.x|scikit-learn|parallel-processing|spacy
| 1 |
1,906,906 | 33,658,729 |
Python: create frequency table from 2D list
|
<p>Starting with data formatted like this:</p>
<pre><code>data = [[0,1],[2,3],[0,1],[0,2]]
</code></pre>
<p>I would like to represent every value once with its frequency:</p>
<pre><code>output = [[[0,1],2],[[2,3],1],[[0,2],1]]
</code></pre>
<p>I've found many solutions to this problem for 1D lists, but they don't seem to work for 2D.</p>
|
<p>That's what <code>collections.Counter()</code> is for:</p>
<pre><code>>>> from collections import Counter
>>>
>>> Counter(map(tuple,data))
Counter({(0, 1): 2, (2, 3): 1, (0, 2): 1})
>>> Counter(map(tuple,data)).items()
[((0, 1), 2), ((2, 3), 1), ((0, 2), 1)]
</code></pre>
<p>Note that since list objects are not hashable, you cannot use them as dictionary keys. So you need to convert them to tuples, which are hashable.</p>
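<p>If you need exactly the nested-list format shown in the question, a small follow-up sketch (the ordering matches insertion order on Python 3.7+; it may differ on older versions):</p>
<pre><code>from collections import Counter

data = [[0, 1], [2, 3], [0, 1], [0, 2]]
counts = Counter(map(tuple, data))
output = [[list(pair), freq] for pair, freq in counts.items()]
# [[[0, 1], 2], [[2, 3], 1], [[0, 2], 1]]
</code></pre>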
|
python|list|multidimensional-array|compression|data-science
| 1 |
1,906,907 | 33,748,340 |
How does Python's OrderedDict remember elements inserted?
|
<p>How does <code>OrderedDict</code> in Python remember the order of the elements? What is the performance overhead? For problems like implementing an <code>LRU</code> cache, I found this really powerful and very simple to implement, but what is the performance gain here? How does it remember the order of the keys that were first inserted?</p>
<p>Does it use a <code>Dict()</code> and a <code>Doubly Linked List</code> for remembering the keys, as shown in the picture below? I will really appreciate it if you could convey your message in simple language rather than sharing some kind of research paper.</p>
<p><a href="https://i.stack.imgur.com/BvuSK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BvuSK.png" alt="enter image description here"></a></p>
|
<p>The best thing about this is that you can look at the source for <a href="https://hg.python.org/cpython/file/2.7/Lib/collections.py" rel="noreferrer"><strong><code>OrderedDict</code></strong></a> to get a good understanding of it. </p>
<p>It is actually implemented in pure python too! This means that you can copy it, redefine it and generally freakishly mutate it as much as you want until you understand it.</p>
<p><em>It does use a Doubly Linked List</em>, this is specified in the docstring of the source file. Getting a grip of <a href="https://en.wikipedia.org/wiki/Doubly_linked_list" rel="noreferrer"><em>how doubly</em></a> <a href="http://www.cs.cmu.edu/~guna/15-123S11/Lectures/Lecture11.pdf" rel="noreferrer"><strong>linked lists work</strong></a> and by browsing the source, you'll get a good hang of how exactly it works:</p>
<blockquote>
<pre><code># An inherited dict maps keys to values.
# The inherited dict provides __getitem__, __len__, __contains__, and get.
# The remaining methods are order-aware.
# Big-O running times for all methods are the same as regular dictionaries.
# The internal self.__map dict maps keys to links in a doubly linked list.
# The circular doubly linked list starts and ends with a sentinel element.
# The sentinel element never gets deleted (this simplifies the algorithm).
# Each link is stored as a list of length three: [PREV, NEXT, KEY].
</code></pre>
</blockquote>
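<p>To make the docstring above concrete, here is a heavily simplified, illustrative sketch (not the stdlib implementation) of how insertion can keep a circular doubly linked list next to a regular dict; deletion, iteration details and error handling are omitted:</p>
<pre><code>class SimpleOrderedDict(dict):
    # Simplified illustration only; the real OrderedDict also handles
    # deletion, reordering, pickling, etc.
    def __init__(self):
        super().__init__()
        self._root = root = []          # sentinel node of the circular list
        root[:] = [root, root, None]    # each link is [PREV, NEXT, KEY]
        self._map = {}                  # key -> link in the linked list

    def __setitem__(self, key, value):
        if key not in self:
            root = self._root
            last = root[0]
            # the new link goes just before the sentinel, i.e. at the end
            last[1] = root[0] = self._map[key] = [last, root, key]
        super().__setitem__(key, value)

    def ordered_keys(self):
        node = self._root[1]            # first inserted link
        while node is not self._root:
            yield node[2]
            node = node[1]
</code></pre>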
|
python|dictionary|collections|ordereddictionary|lru
| 9 |
1,906,908 | 33,828,822 |
Keyboard not working in PyCharm IDE, cursor doesn't appear anywhere
|
<p>I recently downloaded PyCharm Professional edition as a 30-day trial, but the IDE doesn't seem to be working properly. The mouse works fine inside, but the text cursor doesn't appear anywhere and the keyboard doesn't work anywhere inside PyCharm. I have tried disabling Tip of The Day as suggested by another question on StackOverflow itself: <a href="https://stackoverflow.com/questions/29991007/stop-keyboard-becoming-unresponsive-on-pycharm-startup">Stop keyboard becoming unresponsive on Pycharm startup</a>.<br>
But the above question doesn't answer my problem properly. I need a solution to this problem as I have lots of editing to do immediately.</p>
|
<p>You can disable the Vim plugin from Tools -> Preferences -> Plugins. That worked for me.</p>
|
python|python-3.x|keyboard|pycharm
| 7 |
1,906,909 | 37,956,484 |
Rsync filters in a python loop
|
<p>After reading the man page on filtering rules and looking here: <a href="https://stackoverflow.com/questions/35364075/using-rsync-filter-to-include-exclude-files">Using Rsync filter to include/exclude files</a></p>
<p>I don't understand why the code below doesn't work.</p>
<pre><code>import subprocess, os
from ftplib import FTP

ftp_site = 'ftp.ncbi.nlm.nih.gov'
ftp = FTP(ftp_site)
ftp.login()
ftp.cwd('genomes/genbank/bacteria')
dirs = ftp.nlst()

for organism in dirs:
    latest = os.path.join(organism, "latest_assembly_versions")
    for path in ftp.nlst(latest):
        accession = path.split("/")[-1]
        fasta = accession+"_genomic.fna.gz"
        subprocess.call(['rsync',
                         '--recursive',
                         '--copy-links',
                         #'--dry-run',
                         '-vv',
                         '-f=+ '+accession+'/*',
                         '-f=+ '+fasta,
                         '-f=- *',
                         'ftp.ncbi.nlm.nih.gov::genomes/genbank/bacteria/'+latest,
                         '--log-file=scratch/test_dir/log.txt',
                         'scratch/' + organism])
</code></pre>
<p>I also tried <code>'--exclude=*[^'+fasta+']'</code> to try to exclude files that don't match <code>fasta</code> instead of <code>-f=- *</code></p>
<p>For each directory <code>path</code> within <code>latest/*</code>, I want the file that matches <code>fasta</code> exactly. There will always be exactly one file <code>fasta</code> in the directory <code>latest/path</code>.</p>
<p><strong>EDIT:</strong> I am testing this with <strong>rsync version 3.1.0</strong> and have seen incompatibility issues with earlier versions.</p>
<p>Here is a link to working code that you should be able to paste into a python interpreter to get the results of a "dry run," which won't download anything onto your machine: <a href="http://pastebin.com/0reVKMCg" rel="nofollow noreferrer">http://pastebin.com/0reVKMCg</a> it gets EVERYTHING under <code>ftp.ncbi.nlm.nih.gov::genomes/genbank/bacteria/'+latest</code>, which is not what I want. and if I run that script with <code>'-f=- *'</code> uncommented, it doesn't get anything, which seems to contradict the answer here <a href="https://stackoverflow.com/questions/35364075/using-rsync-filter-to-include-exclude-files">Using Rsync filter to include/exclude files</a></p>
|
<p>This part of the rsync man page contained the info I needed to solve my problem:</p>
<blockquote>
<p>Note that, when using the --recursive (-r) option (which is implied by -a), every subcomponent of every path is visited from the top down, so include/exclude patterns get applied recursively to each subcomponent's full name (e.g. to include "/foo/bar/baz" the subcomponents "/foo" and "/foo/bar" must not be excluded). The exclude patterns actually short-circuit the directory traversal stage when rsync finds the files to send. If a pattern excludes a particular parent directory, it can render a deeper include pattern ineffectual because rsync did not descend through that excluded section of the hierarchy. This is particularly important when using a trailing '*' rule. For instance, this won't work:</p>
<p>+ /some/path/this-file-will-not-be-found</p>
<p>+ /file-is-included</p>
<p>- *</p>
<p>This fails because the parent directory "some" is excluded by the '*' rule, so rsync never visits any of the files in the "some" or "some/path" directories. One solution is to ask for all directories in the hierarchy to be included by using a single rule: "+ */" (put it somewhere before the "- *" rule), and perhaps use the --prune-empty-dirs option. Another solution is to add specific include rules for all the parent dirs that need to be visited. For instance, this set of rules works fine:</p>
<p>+ /some/</p>
<p>+ /some/path/</p>
<p>+ /some/path/this-file-is-found</p>
<p>+ /file-also-included</p>
<p>- *</p>
</blockquote>
<p>This helped me write the following code:</p>
<pre><code>def get_fastas(local_mirror="scratch/ncbi", bacteria="Escherichia_coli"):
    ftp_site = 'ftp.ncbi.nlm.nih.gov'
    ftp = FTP(ftp_site)
    ftp.login()
    ftp.cwd('genomes/genbank/bacteria')
    rsync_log = os.path.join(local_mirror, "rsync_log.txt")
    latest = os.path.join(bacteria, 'latest_assembly_versions')
    for parent in ftp.nlst(latest)[0:2]:
        accession = parent.split("/")[-1]
        fasta = accession+"_genomic.fna.gz"
        organism_dir = os.path.join(local_mirror, bacteria)
        subprocess.call(['rsync',
                         '--copy-links',
                         '--recursive',
                         '--itemize-changes',
                         '--prune-empty-dirs',
                         '-f=+ '+accession,
                         '-f=+ '+fasta,
                         '--exclude=*',
                         'ftp.ncbi.nlm.nih.gov::genomes/genbank/bacteria/'+parent,
                         organism_dir])
</code></pre>
<p>It turns out <code>'-f=+ '+accession,</code> doesn't work with a <code>*</code> after the trailing <code>/</code>, although it does work with just a trailing <code>/</code> without the <code>*</code>.</p>
|
python|ftp|subprocess|rsync
| 0 |
1,906,910 | 29,831,454 |
rewind iter(f.splitlines) one line back
|
<p>I have the following script:</p>
<pre><code>iterator = iter(f.splitlines())
for line in iterator:
    if "some_string" in line:
        next_line = next(iterator)  # next line has some float/integer values
        if next_line == "some_value":
            do-something
</code></pre>
<p>I am opening a file and looking for keywords in it. If I find them, I append the integer/float value that comes in the next line to a list.</p>
<p>The problem is that the "for" loop will take the line that comes after "next_line". For example:</p>
<pre><code>line1: "keyword"
line2: 3
line3: something-else
line4: keyword
line5: 4
</code></pre>
<p>If I find what I want in line1 and take the integer from line2, the for-loop will continue from line3. How can I make it continue from the last place that line1 was (i.e. continue from line2, regardless of the fact that next_line consumed it)? I want to go one line back.</p>
|
<p>It sounds like you want to use <code>itertools.tee</code> to get two iterators that can be set to be one item off from each other:</p>
<pre><code>import itertools

it0, it1 = itertools.tee(f)  # no need for splitlines, just use the file object as an iterator
next(it1)  # throw away the first value from one of the iterators

for val0, val1 in zip(it0, it1):  # iterate in parallel
    if "some_string" in val0 and val1 == "some_value":
        do_stuff()
</code></pre>
|
python-2.7
| 1 |
1,906,911 | 61,319,263 |
Splitting a number into a string of separate ints. 'int' object is not iterable?
|
<p>Simple problem. </p>
<p>I have a number (integer):</p>
<pre><code>mynumber = 1239
</code></pre>
<p>I want to convert it into a list of the separate integers like <code>[1,2,3,9]</code>:</p>
<pre><code>numbersplit = [int(x) for x in mynumber]
</code></pre>
<p>But I get the error:</p>
<pre><code>TypeError: 'int' object is not iterable
</code></pre>
<p>Why doesn't this work? I'm just making sure the <code>ints</code> are actually <code>ints</code>. </p>
<p>However it does work when I wrap it in <code>str</code>..?</p>
<pre><code>[int(x) for x in str(mynumber)]
[1, 2, 3, 9]
</code></pre>
<p>Do lists need to be converted into a string before I can do this?</p>
|
<p>You cannot iterate on an int, but you can do it on a string.</p>
<p>One approach is to convert it to a string:</p>
<pre><code>mynumber = 1239
number_str = str(mynumber)
</code></pre>
<p>Then you can create your list, converting each character back to an int:</p>
<pre><code>[int(x) for x in number_str]
</code></pre>
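<p>To answer the last part of the question: no, the string round trip is not strictly required. A purely arithmetic sketch (assuming a positive integer):</p>
<pre><code>mynumber = 1239

digits = []
while mynumber:
    mynumber, last = divmod(mynumber, 10)  # peel off the last digit
    digits.append(last)
digits.reverse()

# digits == [1, 2, 3, 9]
</code></pre>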
|
python|list
| 1 |
1,906,912 | 61,588,984 |
KeyError: "['something' 'something'] not in index"
|
<p>I'm currently encountering this error: </p>
<blockquote>
<p>KeyError: "['Malaysia' 'Singapore'] not in index"</p>
</blockquote>
<p>with the error pointing at :</p>
<blockquote>
<p>---> 37 wide_data = wide_data[['Malaysia','Singapore']]</p>
</blockquote>
<p>Upon checking <strong>wide_data</strong> with <code>print(wide_data.columns)</code> it returns :</p>
<pre><code>MultiIndex([( 'total_cases', 'Malaysia'),
( 'total_cases', 'Singapore'),
( 'new_cases', 'Malaysia'),
( 'new_cases', 'Singapore'),
('total_deaths', 'Malaysia'),
('total_deaths', 'Singapore'),
( 'new_deaths', 'Malaysia'),
( 'new_deaths', 'Singapore')],
names=[None, 'location'])
</code></pre>
<p>Both do exist. I'm not sure where my code went wrong.</p>
<p>Below are my code snippet and <a href="https://ourworldindata.org/coronavirus-source-data" rel="nofollow noreferrer">Dataset</a> used:</p>
<pre><code>import pandas as pd
import plotly.express as px

df = pd.read_csv('covid-data-2020.csv', index_col='date', parse_dates=True)
data = df[df.location.isin(['Malaysia', 'Singapore'])]
wide_data = data.pivot(columns='location', values=list(data.columns[2:6]))
wide_data = wide_data[['Malaysia','Singapore']]
wide_data.reset_index(level=0, inplace=True)
fig = px.line(wide_data.melt(id_vars='date'), x='date', y='value', color='location')
fig.update_yaxes(title='Malaysia vs Singapore')
fig.show()
</code></pre>
|
<p>Are you trying to compare the growth of the total cases/deaths in Malaysia and Singapore?
If so, maybe instead of:</p>
<pre><code>wide_data = wide_data[['Malaysia','Singapore']]
</code></pre>
<p>You use:</p>
<pre><code>wide_data = wide_data[['total_cases']]
</code></pre>
<p>Which will produce the graph you're expecting.</p>
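<p>The underlying reason for the <code>KeyError</code> is that after the pivot the columns are a <code>MultiIndex</code> of <code>(metric, location)</code> pairs, so <code>'Malaysia'</code> on its own is not a top-level column key. If you do want both countries for a single metric, a sketch of the selection (assuming the column structure printed in the question):</p>
<pre><code># pick one top-level metric, then the two countries from the second level
wide_data = wide_data['total_cases'][['Malaysia', 'Singapore']]

# or, keeping the MultiIndex, select with .loc
wide_data = wide_data.loc[:, ('total_cases', ['Malaysia', 'Singapore'])]
</code></pre>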
|
python|pandas|jupyter-notebook
| 2 |
1,906,913 | 73,364,523 |
pandas series pct_change with initial expanding window
|
<p>I need to calculate the pct change for a pandas Series with an increasing window size up to N, followed by a fixed window size of N.</p>
<p>If I use pct_change with periods of N, I get initial NaNs.</p>
<pre><code>s = pd.Series([1, 2, 4, 5, 6], name='Numbers')
s.pct_change(periods=3)
</code></pre>
<p>output</p>
<pre><code>0 NaN
1 NaN
2 NaN
3 4.0
4 2.0
Name: Numbers, dtype: float64
</code></pre>
<p>I want to replace initial NaN's with</p>
<pre><code>0    0.0      (1 - 1) / 1
1    1.0      (2 - 1) / 1
2    3.0      (4 - 1) / 1
3    4.0
4    2.0
</code></pre>
<p>So I want to have some kind of expanding window and replace initial NaNs with the new values.
Any pointers would be appreciated.</p>
<hr />
<p>EDIT..</p>
<p>adding one more example for clarity</p>
<pre><code>s = pd.Series([4, 5, 6, 7, 8, 9], name='Numbers')
s.pct_change(periods=3)
</code></pre>
<p>output</p>
<pre><code>0 NaN
1 NaN
2 NaN
3 0.75
4 0.60
5 0.50
Name: Numbers, dtype: float64
</code></pre>
<p>The output I expect</p>
<pre><code>0    0.00
1    0.25     (5 - 4) / 4
2    0.50     (6 - 4) / 4
3    0.75
4    0.60
5    0.50
</code></pre>
|
<p>Is this what you want?</p>
<pre><code>s = pd.Series([4, 5, 6, 7, 8, 9], name='Numbers')
s.pct_change(periods=3).fillna((s-s.iat[0])/s.iat[0])
</code></pre>
<p><strong>Result</strong></p>
<pre><code>0 0.00
1 0.25
2 0.50
3 0.75
4 0.60
5 0.50
</code></pre>
|
python|pandas|numpy
| 1 |
1,906,914 | 49,874,904 |
Illegal instruction when running python on raspberry pi
|
<p>I was doing a Pi project where the Pi communicates with a UHF ID card reader via hardware serial and reads cards. The Pi gets the card information through serial and uploads it to a remote database. Some other common peripherals like an LCD and an RTC are also connected to the Pi. I programmed the project with <code>python2</code>.</p>
<p>The project works OK, but after 15 to 30 days the program crashes with the error</p>
<blockquote>
<p>Illegal instruction</p>
</blockquote>
<p>And when this happens, <code>python2</code> no longer runs at all. If I run <code>python2</code> from a terminal it throws the same error and exits, with just the single line shown above.</p>
<p>I can't understand why this is happening. I searched the internet and found that in some cases certain modules related to CPU instructions cause this problem (though those reports are for PCs). But in this case a module cannot be the source of the problem, because if it were, the bare Python interpreter should still work properly.</p>
<p>What additional tests can I do to trace the problem?</p>
<p>Thanks!</p>
|
<p>You should upgrade to a newer Python 3 release, such as Python 3.7, 3.8 or 3.9, because Python 2.7 is end-of-life and users are expected to move to Python 3.</p>
|
python|raspberry-pi
| 0 |
1,906,915 | 50,108,986 |
How to incorporate a class into this code for Mastermind?
|
<p>I am stuck on how to implement a class into this code. I am pretty new to coding and this is my first project. The player is supposed to try to guess a computer-generated sequence of colors, and I am having trouble with implementing a class. </p>
<p>I am also somewhat puzzled about my try and except statement. I am trying to keep the code going even if the player input is wrong, but not count that as a guess and I can't figure that out.</p>
<p>The basic program works, but I am trying to make it more streamlined and less confusing, possibly making big chunks of it a function or class.</p>
<pre><code>'''
-------- MASTERMIND --------
'''
from random import choice, sample  # importing these for list generation and list randomization

game_active = True
color_choices = ["r","g","b","y","o","p"]  # choices for the player and computer
player_color_guess = ["","","",""]  # player inputed colors
computer_color_guess = [choice(color_choices),choice(color_choices),choice(color_choices),choice(color_choices)]  # generates random 4 choices for the computer
guesses = 0  # keeps track of number of guesses

def you_win():  # function for winning the game
    return("You beat the codemaker in " + str(guesses) + " guesses! Great job!")
    return("Press any key to exit")

def clean_list(ugly_list):  # function that cleans the brackets and other symbols from a list
    return(str(ugly_list).replace(',','').replace('[','').replace(']','').replace('\'',''))

print("Welcome to Mastermind!\n\n----------------------\n\nI will make a code of four colors: \nRed(r)\nGreen(g)\nBlue(b)\nYellow(y)\nOrange(o)\nPurple(p)\nGuess the combination of colors. If you get the correct color and the correct place you will get a '+', if you guess \nthe correct color and the wrong place you will get a '~', otherwise you will get a '-'. \nThe position of the symbols do not correlate to the positions of the colors.")

while game_active:
    # making main loop for the game
    player_guess_symbols = []
    # this is to store the first output of symbols, not mixed
    num_correct = 0  # to store number of correct guesses
    '''try:'''
    player_color_guess = str(input("\nInput four colors, separated by a comma.\n>>> ").split(','))  # actual player input
    guesses += 1  # every time the player does this it adds one to guess variable
    '''except ValueError:
    print("{} is not a valid input.".format(player_color_guess))'''
    for clr in range(0,4):  # looping through first four numbers: 0,1,2,3
        if computer_color_guess[clr] == player_color_guess[clr]:  # takes the index clr and compares it to the player guess
            player_guess_symbols.append("+")
            num_correct += 1
        elif computer_color_guess[clr] in player_color_guess:  # checks if computer guess is in player guess
            player_guess_symbols.append("~")
        else:
            player_guess_symbols.append("-")
    if guesses > 10:  # this is to check the number of guesses, and see if the player wins
        print("You failed to guess the code in 8 guesses or under! The correct code was " + clean_list(computer_color_guess) + ". Press any key to exit")
        break  # not sure if this is the right place for a break statement
    player_guess_symbols_mixed = sample(player_guess_symbols, len(player_guess_symbols))
    # this mixes the symbol output each time so the placement of the symbols is not a pattern
    if num_correct > 3:  # checks of player wins
        print(you_win())
    print(clean_list(player_guess_symbols_mixed))
</code></pre>
|
<p>Your code is actually pretty well formatted for your first project, so kudos for that! You've done a great job naming your variables, having some meaningful comments, and using whitespace correctly (although Python forces you to do that last thing, it's still pretty key). Keep up the good work!</p>
<p>Usually this sort of thing isn't exactly the right fit for Stack Overflow, but I'll try to give you a few tips.</p>
<p>With regards to your try and except, you could wrap it in another loop, or perhaps use a <code>continue</code> statement:</p>
<pre><code>try:
    player_color_guess = str(input("\nInput four colors, separated by a comma.\n>>> ").split(','))  # actual player input
    guesses += 1  # every time the player does this it adds one to guess variable
except ValueError:
    print("{} is not a valid input.".format(player_color_guess))
    continue  # go back to while
</code></pre>
<p>Sometimes using <code>continue</code> can lead to spaghetti, but in this case I think avoiding extra indentation is simpler. It works similar to the <code>break</code> statement, only instead of breaking out of the loop altogether, it just goes back to the <code>while</code> or <code>for</code> or what-have-you. </p>
<hr>
<p>Putting it into a class is a bit less straightforward. To be honest I'm not sure you need it all that much as the code is fairly readable as is! But a place I might start, is looking at encapsulating the current game state:</p>
<pre><code>class GuessState:  # could easily call this MastermindState or what-have-you
    guesses = 0
    computer_guess = []
    player_guess = []  # could be a good place to store a history of player guesses
</code></pre>
<p>This gives you a really good place to make a function for setting up <code>computer_guess</code>, say in an <code>__init__</code> function:</p>
<pre><code>    def __init__(self):
        self.computer_guess = [ ... etc ... ]
</code></pre>
<p>And you can also put some of the game logic there to help make things more readable:</p>
<pre><code>    def check_player_guess(self, guess):
        # check self.computer_guess against guess
        # maybe increment self.guesses here too
        # maybe return True if the guess was correct?

    def scramble_player_guess(self):
        return sample(self.player_guess, len(self.player_guess))
</code></pre>
<p>Now the setup and the game loop can be much simpler:</p>
<pre><code>state = GuessState()

while game_active:
    player_guess = str(input... etc ...)  # might also be a good thing to put in a function
    if state.check_player_guess(player_guess):
        print("win in {}".format(state.guesses))
        break
    if state.out_of_guesses():
        print("lose")
        break  # is actually pretty reasonable
    print(state.scramble_player_guess())
</code></pre>
<p>You'd need to implement <code>check_player_guess</code> and <code>out_of_guesses</code> of course :) This helps separate interpreting user input from the game logic, as well as from what you print out. It's not <em>all</em> the way there -- e.g. it'd take a bit more work to make it easy to drop in a graphical interface vs a text-based one -- but it's a little closer.</p>
<p>You could argue about the right way to do this quite a bit also, and you can look at the edit history to see some of the changes I went through in the process of writing this post even -- sometimes you don't see things until after you've stepped away a little. Does that help?</p>
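<p>For illustration only, one possible way to fill in the two placeholder methods (assuming, as in the original code, that ten guesses are allowed and that <code>guess</code> arrives as a list of four color letters):</p>
<pre><code>    def check_player_guess(self, guess):
        # record the guess, count it, and report whether it matches the code
        self.player_guess = guess
        self.guesses += 1
        return list(guess) == list(self.computer_guess)

    def out_of_guesses(self, max_guesses=10):
        return self.guesses >= max_guesses
</code></pre>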
|
python|class
| 0 |
1,906,916 | 50,040,496 |
Include unmatched rows in Python script that merges two files based on one column
|
<p>I have two csv files shown below.</p>
<p>First file:</p>
<blockquote>
<p>abTestGroup,platform,countryCode,userId</p>
<p>group_control,ios,GB,aaaaaaaaaaa</p>
<p>group_control,ios,GB,aaaaaaaaaaa</p>
<p>group_control,ios,GB,aaaaaaaaaaa</p>
<p>group_control,ios,GB,aaaaaaaaaaa</p>
<p>group_test,android,GB,ccccccccccc</p>
</blockquote>
<p>Second file:</p>
<blockquote>
<p>dateActivity,productId,cost,userId</p>
<p>2018-03-02,specialpack,0.198,aaaaaaaaaaa</p>
<p>2018-03-03,specialpack,0.498,aaaaaaaaaaa</p>
<p>2018-03-02,specialpack,0.398,bbbbbbbbbbb</p>
<p>2018-03-02,specialpack,0.998,ccccccccccc</p>
</blockquote>
<p>and they have one common thing in this case which is the <code>userId</code>.</p>
<p>I want to merge those files and create a parent-child relationship using Python (Pandas).</p>
<p>I have used the script below:</p>
<pre><code>import pandas as pd
a = pd.read_csv('PARENT.csv', encoding = "UTF-8", mangle_dupe_cols=True, usecols=['abTestGroup','platform','countryCode','userId'])
b = pd.read_csv("CHILD.csv")
merged = b.merge(a, on='userId', how='inner')
merged = merged.drop_duplicates()
merged.to_csv("final_output.csv", index=False)
</code></pre>
<p>in order to get the following output:</p>
<blockquote>
<p>dateActivity,productId,cost,userId,abTestGroup,platform,countryCode</p>
<p>2018-03-02,specialpack,0.198,aaaaaaaaaaa,group_control,ios,GB</p>
<p>2018-03-03,specialpack,0.498,aaaaaaaaaaa,group_control,ios,GB</p>
<p>2018-03-02,specialpack,0.998,ccccccccccc,group_test,android,GB</p>
</blockquote>
<p>The <code>userId</code> <em>'bbbbbbbbbbb'</em> does not appear in the final output because it doesn't exist in both files. How can I include the unmatched rows (unmatched <code>userIds</code>) and assign the 'Other' value to the cells?</p>
|
<p>The <code>join</code> method will work for your case:</p>
<pre><code>a.join(b)
</code></pre>
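<p>Alternatively, if the goal is specifically to keep the unmatched <code>userId</code> rows and fill the missing cells with <code>'Other'</code>, a sketch using an outer merge (assuming the column layout from the question):</p>
<pre><code>merged = b.merge(a, on='userId', how='outer')   # keep rows that appear in only one file
merged = merged.fillna('Other').drop_duplicates()
merged.to_csv("final_output.csv", index=False)
</code></pre>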
|
python|pandas|merge
| 0 |
1,906,917 | 66,592,855 |
Class "Room" has no "objects" members
|
<p>I'm doing a web app with Django, but I keep getting this error all the time.</p>
<p>This is the code from my models.py, where I create the class Room.</p>
<pre><code>from django.db import models
import string
import random


def generate_unique_code():
    length = 6
    while True:
        code = ''.join(random.choices(string.ascii_uppercase, k=length))
        if Room.objects.filter(code=code).count() == 0:
            break
    return code


class Room(models.Model):
    code = models.CharField(max_length=8, default="", unique=True)
    host = models.CharField(max_length=50, unique=True)
    guest_can_pause = models.BooleanField(null=False, default=False)
    votes_to_skip = models.IntegerField(null=False, default=1)
    created_at = models.DateTimeField(auto_now_add=True)
</code></pre>
<p>And here is where I import Room in views.py; I'm also getting the same error there.</p>
<pre><code>from django.shortcuts import render
from rest_framework import generics
from .serializer import RoomSerializer
from .models import Room


class RoomView(generics.CreateAPIView):
    queryset = Room.objects.all()
    serializer_class = RoomSerializer
</code></pre>
<p>What's wrong with my code?</p>
|
<p>You need to install pylint-django</p>
<pre><code>pip install pylint-django
</code></pre>
<p>And assuming you use Visual Studio Code you need to add this setting by pressing <code>ctrl+shift+p</code> then going to <code>Preferences: Configure Language Specific Settings</code> and then selecting <code>Python</code>
It will open a json file. You will need to add this code to that file and save:</p>
<pre><code>{
    "python.linting.pylintArgs": [
        "--load-plugins=pylint_django"
    ],
    "[python]": {
    }
}
</code></pre>
<p>Restart VS Code and it should be OK.</p>
<p>Based on the answers to this <a href="https://stackoverflow.com/questions/45135263/class-has-no-objects-member">question</a></p>
|
python|django|django-rest-framework|backend
| 0 |
1,906,918 | 53,106,850 |
Edited joystick code not receiving inputs
|
<p>I am trying to use a PS4 controller with a Raspberry Pi to generate inputs. I have successfully connected the controller, and when I try to run the following code found on the pygame website, everything works fine.</p>
<pre><code>import pygame

BLACK = (0, 0, 0)
WHITE = (255, 255, 255)


class TextPrint:
    def __init__(self):
        self.reset()
        self.font = pygame.font.Font(None, 20)

    def print(self, screen, textString):
        textBitmap = self.font.render(textString, True, BLACK)
        screen.blit(textBitmap, [self.x, self.y])
        self.y += self.line_height

    def reset(self):
        self.x = 10
        self.y = 10
        self.line_height = 15

    def indent(self):
        self.x += 10

    def unindent(self):
        self.x -= 10


pygame.init()
size = [500, 700]
screen = pygame.display.set_mode(size)
pygame.display.set_caption("My Game")
done = False
clock = pygame.time.Clock()
pygame.joystick.init()
textPrint = TextPrint()

while done == False:
    for event in pygame.event.get():  # User did something
        if event.type == pygame.QUIT:  # If user clicked close
            done = True  # Flag that we are done so we exit this loop
        # Possible joystick actions: JOYAXISMOTION JOYBALLMOTION JOYBUTTONDOWN JOYBUTTONUP JOYHATMOTION
        if event.type == pygame.JOYBUTTONDOWN:
            print("Joystick button pressed.")
        if event.type == pygame.JOYBUTTONUP:
            print("Joystick button released.")

    screen.fill(WHITE)
    textPrint.reset()

    joystick_count = pygame.joystick.get_count()
    textPrint.print(screen, "Number of joysticks: {}".format(joystick_count))
    textPrint.indent()

    for i in range(joystick_count):
        joystick = pygame.joystick.Joystick(i)
        joystick.init()

        textPrint.print(screen, "Joystick {}".format(i))
        textPrint.indent()

        name = joystick.get_name()
        textPrint.print(screen, "Joystick name: {}".format(name))

        axes = joystick.get_numaxes()
        textPrint.print(screen, "Number of axes: {}".format(axes))
        textPrint.indent()
        for i in range(axes):
            axis = joystick.get_axis(i)
            textPrint.print(screen, "Axis {} value: {:>6.3f}".format(i, axis))
        textPrint.unindent()

        buttons = joystick.get_numbuttons()
        textPrint.print(screen, "Number of buttons: {}".format(buttons))
        textPrint.indent()
        for i in range(buttons):
            button = joystick.get_button(i)
            textPrint.print(screen, "Button {:>2} value: {}".format(i, button))
        textPrint.unindent()

        hats = joystick.get_numhats()
        textPrint.print(screen, "Number of hats: {}".format(hats))
        textPrint.indent()
        for i in range(hats):
            hat = joystick.get_hat(i)
            textPrint.print(screen, "Hat {} value: {}".format(i, str(hat)))
        textPrint.unindent()

        textPrint.unindent()

    pygame.display.flip()
    clock.tick(20)

pygame.quit()
</code></pre>
<p>So when I run the above code everything works fine. However, I didn't want to generate a game at all, and I'm only interested in getting the inputs from the d-pad, i.e. the hats. In order to do this I wrote my own code based off the code above:</p>
<pre><code>import pygame

pygame.joystick.init()

while True:
    joystick = pygame.joystick.Joystick(0)
    joystick.init()
    hat = joystick.get_hat(0)
    print(str(hat))
</code></pre>
<p>When I run the above code it runs and there are no syntax errors; however, it continuously prints (0, 0) regardless of which buttons of the d-pad I have pressed. The controller is still connected, which suggests to me that for some reason my new code isn't receiving inputs.</p>
<p>Thank you for taking the time to read this long post, any help would be much appreciated.</p>
<p>Kind regards</p>
|
<p>You have to call one of the <a href="https://www.pygame.org/docs/ref/event.html" rel="nofollow noreferrer"><code>pygame.event</code></a> functions regularly (for example <code>pygame.event.pump</code> or <code>for event in pygame.event.get():</code>), otherwise functions like <code>Joystick.get_hat</code> and <code>pygame.mouse.get_pressed</code> won't work and the pygame window will become unresponsive after a while.</p>
<p>You also don't need to create and initialize a new joystick object each frame. Do that once before the <code>while</code> loop. Here's a minimal, complete example:</p>
<pre><code>import pygame as pg

pg.init()
screen = pg.display.set_mode((100, 100))
clock = pg.time.Clock()

# Create a list of available joysticks and initialize them.
joysticks = [pg.joystick.Joystick(x) for x in range(pg.joystick.get_count())]
for joystick in joysticks:
    joystick.init()

running = True
while running:
    for event in pg.event.get():
        if event.type == pg.QUIT:
            running = False
        elif event.type == pg.JOYBUTTONDOWN:
            if event.button == 6:  # Back button.
                running = False

    for joystick in joysticks:  # Iterate over the available joysticks.
        if joystick.get_id() == 0:  # The first joystick.
            numhats = joystick.get_numhats()
            for hat in range(numhats):  # Check all hats of the joystick.
                print(joystick.get_hat(hat))

    screen.fill((30, 30, 50))
    pg.display.flip()
    clock.tick(60)

pg.quit()
</code></pre>
</code></pre>
|
python|input|pygame|raspberry-pi3|joystick
| 0 |
1,906,919 | 53,305,044 |
Python - Define variables before try/catch or just let them bubble out?
|
<p>Coming from Java and C based languages, this looks odd in Python. The <code>x</code> variable is defined in the try block but used outside of it.</p>
<p>I understand that python does not scope the try block though.</p>
<pre><code>try:
x = 5
except Exception as e:
print(str(e))
print(f"x = {x}")
</code></pre>
<p>Is this considered to be good form in Python, or is it preferred to set, say, <code>x = None</code> beforehand? or some third option? Why?</p>
|
<p>There are very few situations where a <code>try: / except:</code> is really the appropriate thing to do. Obviously the example you gave was abstracted, but in my opinion the answer is a hard "no," it's not good form to reference a potentially undeclared variable - if for some reason an error is encountered in the <code>try:</code> before <code>x = 5</code>, then you are going to get an error when you try to <code>print(f"x = {x}")</code>.</p>
<p>More to the point, <em>why oh why</em> would the variable be assigned in the try block? I would say a good rule of thumb is to <strong>only include in <code>try</code> that portion of the code you are actually testing for exceptions</strong>.</p>
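<p>A minimal sketch of that rule of thumb applied to the question's snippet (assigning a fallback first, and keeping only the risky operation inside the <code>try</code>):</p>
<pre><code>x = None  # fallback so the name always exists
try:
    x = 5  # only the code actually being tested for exceptions
except Exception as e:
    print(str(e))
print(f"x = {x}")
</code></pre>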
<p>Side-notes:</p>
<ul>
<li>I have been previously advised on SO that it's bad form to use a broad <code>except Exception</code>, because what you really should be doing is handling a certain <code>type</code> of error, or better yet a <code>particular</code> error (e.g. <code>except IndexError</code>, which leaves all other types of errors unhandled)... <code>try / except</code> is something that can easily introduce difficult-to-diagnose bugs if it's used non-specifically.</li>
<li>Note that a bare <code>except:</code> is even broader than <code>except Exception</code>: it also catches things like <code>KeyboardInterrupt</code> and <code>SystemExit</code>, so the two are not quite equivalent.</li>
</ul>
|
python|exception|exception-handling
| 1 |
1,906,920 | 53,335,622 |
Getting "unknown command" Exception calling the "minimize_window" method
|
<p>Example code:</p>
<pre><code>from selenium import webdriver
browser = webdriver.Chrome()
browser.minimize_window()
</code></pre>
<p>Returns the following exception:</p>
<pre><code> File "myScript.py", line 4, in <module>
browser.minimize_window()
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 738, in minimize_window
self.execute(Command.MINIMIZE_WINDOW)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 312, in execute
self.error_handler.check_response(response)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 208, in check_response
raise exception_class(value)
selenium.common.exceptions.WebDriverException: Message: unknown command: session/8252be05ea571a2c623450db8ba097c0/window/minimize
</code></pre>
<p>Adding the line</p>
<pre><code>print dir(browser)
</code></pre>
<p>Shows that <code>minimize_window()</code> is a listed function of browser. So what gives? Is this functionality just not compatible with Chrome?</p>
<p>Python 2.7</p>
|
<p>This error message...</p>
<pre><code>selenium.common.exceptions.WebDriverException: Message: unknown command: session/8252be05ea571a2c623450db8ba097c0/window/minimize
</code></pre>
<p>...implies that the call to the <strong>minimize_window()</strong> function wasn't recognized.</p>
<p>You spotted it correctly. Now that the <em>WebDriver Specification</em> is a <em>W3C Recommendation</em>, the function definition for <em>maximizing</em> the window was adjusted to follow the <em>W3C Recommended Specification</em> as follows:</p>
<pre><code>def maximize_window(self):
"""
Maximizes the current window that webdriver is using
"""
params = None
command = Command.W3C_MAXIMIZE_WINDOW
if not self.w3c:
command = Command.MAXIMIZE_WINDOW
params = {'windowHandle': 'current'}
self.execute(command, params)
</code></pre>
<p>However, the function definition for <em>minimizing</em> the window has not yet been made <em>W3C</em> compliant within the <em>Python Client</em> and is still defined as:</p>
<pre><code>def minimize_window(self):
"""
Invokes the window manager-specific 'minimize' operation
"""
self.execute(Command.MINIMIZE_WINDOW)
</code></pre>
<p>Hence you see the error:</p>
<pre><code>unknown command: session/8252be05ea571a2c623450db8ba097c0/window/minimize
</code></pre>
<hr />
<h2>Solution</h2>
<p>The default/common practice is to open the browser in <strong>maximized</strong> mode, and <strong>minimizing</strong> the browser while <em>Test Execution</em> is <em>In Progress</em> would go against best practices, as <a href="https://stackoverflow.com/questions/54459701/what-is-selenium-and-what-is-webdriver/54482491#54482491">Selenium</a> may lose focus over the <em>Browsing Context</em> and an exception may be raised during the <em>Test Execution</em>. However, <em>Selenium's</em> <a href="/questions/tagged/python" class="post-tag" title="show questions tagged 'python'" rel="tag">python</a> client does have a <a href="https://www.selenium.dev/selenium/docs/api/py/webdriver_remote/selenium.webdriver.remote.webdriver.html#selenium.webdriver.remote.webdriver.WebDriver.minimize_window" rel="nofollow noreferrer">minimize_window()</a> method, which effectively pushes the <em>Chrome Browsing Context</em> to the background.</p>
<p>Python:</p>
<pre><code>from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument("--start-maximized")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(chrome_options=options, executable_path=r'C:\Utility\BrowserDrivers\chromedriver.exe')
driver.get('https://www.google.co.in')
driver.minimize_window()
</code></pre>
<p>Java Solution:</p>
<pre><code>driver.navigate().to("https://www.google.com/");
Point p = driver.manage().window().getPosition();
Dimension d = driver.manage().window().getSize();
driver.manage().window().setPosition(new Point((d.getHeight()-p.getX()), (d.getWidth()-p.getY())));
</code></pre>
|
python|python-2.7|selenium|selenium-webdriver
| 1 |
1,906,921 | 71,590,089 |
Displaying an image with pygame
|
<p>I am having a problem with displaying an image in pygame. The image is in the same directory/folder but it still has an error. This error popped out of the blue and I'm not sure how to fix it... My code below</p>
<pre><code>import pygame, sys
from pygame.constants import QUIT
pygame.init()
GameSprite = pygame.image.load('GameSprite.png')
GameSprite = pygame.transform.scale(GameSprite,(200, 200))
def main():
x = 100
y = 100
velocity = 0
velocity = pygame.Vector2()
velocity.xy = 3, 0
acceleration = 0.1
DISPLAY = pygame.display.set_mode((570, 570))
pygame.display.set_caption("Flappy Bird Related Game")
WHITE = (255,255,255)
BLUE = (0, 0, 255)
while True:
DISPLAY.fill(WHITE)
#pygame.draw.rect(DISPLAY,BLUE,(x,y,50,50))
DISPLAY.blit(GameSprite,(x,y))
y += velocity.y
x += velocity.x
velocity.y += acceleration
if x + 50 > DISPLAY.get_width():
velocity.x = -3
if x < 0:
velocity.x = 3
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
elif event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
velocity.y = -4.2
pygame.display.update()
pygame.time.delay(10)
main()
</code></pre>
|
<p>It is not sufficient to copy the image files into the project directory. You also need to set the working directory.
The image file path has to be relative to the current working directory, and the working directory can be different from the directory of the Python script.<br />
The name and path of the script file can be retrieved with <a href="https://docs.python.org/3/reference/import.html#import-related-module-attributes" rel="nofollow noreferrer"><code>__file__</code></a>. The current working directory can be set with <a href="https://docs.python.org/3/library/os.html" rel="nofollow noreferrer"><code>os.chdir(path)</code></a>.<br />
Add the following lines of code at the beginning of your script to set the working directory to the same as the script's directory:</p>
<pre class="lang-py prettyprint-override"><code>import os
import pygame, sys
from pygame.constants import QUIT
os.chdir(os.path.dirname(os.path.abspath(__file__)))
pygame.init()
GameSprite = pygame.image.load('GameSprite.png')
</code></pre>
|
python|image|pygame|blit
| 0 |
1,906,922 | 71,547,749 |
thinking about writing a spell bee solver program
|
<p>Alright, I have been thinking of creating a spelling bee solver program in Python which takes the letters as input and gives all the possible words as output.
So, I have been thinking of using this English dictionary library for referencing words (<a href="https://pypi.org/project/english-dictionary/" rel="nofollow noreferrer">https://pypi.org/project/english-dictionary/</a>).
But the thing that worries me is that for a spelling bee the words can be both smaller and larger than the provided set of letters, and recurrence of letters is also a possibility.
So, what kind of loop should I run in order to include all the possible scenarios?</p>
|
<p>You would need to find words for all possible subsets of the set of letters. You could use itertools to generate these subsets.</p>
<pre><code>import itertools
letters = ['a', 'c', 'n']
for i in range(1, len(letters) + 1):
combinations = list(itertools.combinations(letters, i))
generate_words(combinations) # implement this how you were planning to
</code></pre>
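<p>Since the question also allows letters to repeat, a variant using <code>itertools.combinations_with_replacement</code> may be worth considering (a sketch building on the snippet above, with <code>generate_words</code> still left for you to implement and <code>max_word_length</code> as an assumed upper bound, since repeats make the space unbounded):</p>
<pre><code>import itertools

letters = ['a', 'c', 'n']
max_word_length = 7  # assumed upper bound on word length

for i in range(1, max_word_length + 1):
    # each combination may use the same letter more than once
    combinations = list(itertools.combinations_with_replacement(letters, i))
    generate_words(combinations)
</code></pre>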
|
python
| 1 |
1,906,923 | 5,227,292 |
conditional evaluation of source file in python
|
<p>Say I have a file that's only used in pre-production code</p>
<p>I want to ensure it gets not run in production code- any calls out to it have to fail.</p>
<p>This snippet at the top of the file doesn't work - it breaks the Python grammar, which specifies that <code>return</code> must take place in a function.</p>
<pre><code>if not __debug__:
return None
</code></pre>
<p>What is the best solution here - the one that doesn't involve making a gigantic else, that is. :-)</p>
|
<pre><code>if not __debug__:
raise RuntimeError('This module must not be run in production code.')
</code></pre>
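<p>For context: <code>__debug__</code> is <code>True</code> by default and only becomes <code>False</code> when the interpreter is started with optimizations enabled, e.g.:</p>
<pre><code>python -O your_module.py
</code></pre>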
|
python
| 7 |
1,906,924 | 61,980,647 |
Printing a file path to a label
|
<p>I am writing a windows application using python where I am trying to get the path to a specific folder in the <strong>C:\Users\\AppData\Roaming</strong> folder and print it to a label. I have tried the filedialog and it only lets me select a file rather than a folder to print to the label. </p>
<pre class="lang-py prettyprint-override"><code>def getcache():
root.filename = filedialog.askopenfilename(initialdir="%appdata%", title="Select Cache Folder")
</code></pre>
<p>This is what I normally use to get a specific file. But I am looking for a folder in this case to print to a label. </p>
|
<p>If you are using <a href="https://docs.python.org/3.9/library/dialog.html" rel="nofollow noreferrer">Tkinter</a> you can try with askdirectory:</p>
<pre><code>from tkinter import filedialog
cache_folder = filedialog.askdirectory(initialdir="%appdata%", title="Select Cache Folder")
</code></pre>
|
python|python-3.x|tkinter|python-3.8
| 1 |
1,906,925 | 67,294,218 |
How can I filter for rows one hour before and after a set timestamp in Python?
|
<p>I am trying to filter a DataFrame to only show values 1-hour before and 1-hour after a specified time/date, but am having trouble finding the right function for this. I am working in Python with Pandas.</p>
<p>The posts I see regarding masking by date mostly cover the case of masking rows between a specified start and end date, but I am having trouble finding help on how to mask rows based around a single date.</p>
<p>I have time series data as a DataFrame that spans about a year, so thousands of rows. This data is at 1-minute intervals, and so each row corresponds to a row ID, a timestamp, and a value.</p>
<p>Example of DataFrame:</p>
<pre><code>ID timestamp value
0 2011-01-15 03:25:00 34
1 2011-01-15 03:26:00 36
2 2011-01-15 03:27:00 37
3 2011-01-15 03:28:00 37
4 2011-01-15 03:29:00 39
5 2011-01-15 03:30:00 29
6 2011-01-15 03:31:00 28
...
</code></pre>
<p>I am trying to create a function that outputs a DataFrame that is the initial DataFrame, but only rows for 1-hour before and 1-hour after a specified timestamp, and so only rows within this specified 2-hour window.</p>
<p>To be more clear:</p>
<p>I have a DataFrame that has 1-minute interval data throughout a year (as exemplified above).</p>
<p>I now identify a specific timestamp: 2011-07-14 06:15:00</p>
<p>I now want to output a DataFrame that is the initial input DataFrame, but now only contains rows that are within 1-hour before 2011-07-14 06:15:00, and 1-hour after 2011-07-14 06:15:00.</p>
<p>Do you know how I can do this? I understand that I could just create a filter where I get rid of all values before 2011-07-14 05:15:00 and 2011-07-14 07:15:00, but my goal is to have the user simply enter a single date/time (e.g. 2011-07-14 06:15:00) to produce the output DataFrame.</p>
<p>This is what I have tried so far:</p>
<pre><code>hour = pd.DateOffset(hours=1)
date = pd.Timestamp("2011-07-14 06:15:00")
df = df.set_index("timestamp")
df([date - hour: date + hour])
</code></pre>
<p>which returns:</p>
<pre><code>File "<ipython-input-49-d42254baba8f>", line 4
df([date - hour: date + hour])
^
SyntaxError: invalid syntax
</code></pre>
<p>I am not sure if this is really only a syntax error, or something deeper and more complex. How can I fix this?</p>
<p>Thanks!</p>
|
<p>You can do it with:</p>
<pre><code>import pandas as pd
import datetime as dt
data = {"date": ["2011-01-15 03:10:00","2011-01-15 03:40:00","2011-01-15 04:10:00","2011-01-15 04:40:00","2011-01-15 05:10:00","2011-01-15 07:10:00"],
"value":[1,2,3,4,5,6]}
df=pd.DataFrame(data)
df['date']=pd.to_datetime(df['date'], format='%Y-%m-%d %H:%M:%S', errors='ignore')
date_search= dt.datetime.strptime("2011-01-15 05:20:00",'%Y-%m-%d %H:%M:%S')
mask = (df['date'] > date_search-dt.timedelta(hours = 1)) & (df['date'] <= date_search+dt.timedelta(hours = 1))
print(df.loc[mask])
</code></pre>
<p>result:</p>
<pre><code> date value
3 2011-01-15 04:40:00 4
4 2011-01-15 05:10:00 5
</code></pre>
|
python|dataframe|filter|timestamp|offset
| 0 |
1,906,926 | 67,201,072 |
Coding in jupyter notebooks
|
<p>I am working on jupyter notebooks first time and created code like this :</p>
<p><a href="https://i.stack.imgur.com/7yzZA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7yzZA.png" alt="Jupyter" /></a></p>
<p>Although it ran fine and produced the output, I am not sure if this is the ideal way to use Jupyter notebooks. One issue I see with this is that I pasted everything into one block.</p>
<p>I am not sure what are the pros n cons of putting code in different blocks.<br>
Any suggestions how this could be improved ?</p>
|
<p>Jupyter notebooks, Google Colab notebooks, etc. are supposed to be used to run code by sections (like MATLAB live scripts). The main difference between this and a regular ".py" script is that the memory space where your variables are allocated is not emptied each time you run a section of your code.</p>
<p>So, the main advantage of these notebooks is that you can divide your code into sections and debug just one section at a time. Another way to debug is using the debug mode in, for instance, Visual Studio Code.</p>
<p>On the other hand, you can write very nice documentation of your code, including outputs, images, etc.</p>
<p>Check <a href="https://realpython.com/lessons/why-use-jupyter-notebooks/" rel="nofollow noreferrer">this</a> video.</p>
|
python|jupyter-notebook
| 2 |
1,906,927 | 60,490,882 |
Cut mask out of image with certain pixel margin numpy opencv
|
<p>I am playing around with the Mask RCNN (<a href="https://github.com/matterport/Mask_RCNN" rel="nofollow noreferrer">https://github.com/matterport/Mask_RCNN</a>) segmentation program that is trained on the COCO data set. It detects persons (along with many other objects that I further neglect) in an image and returns one or multiple Person masks, i.e. Boolean Numpy arrays containing True values for all pixels that are classified as a 'Person' and False values for all other pixels:</p>
<p><a href="https://i.stack.imgur.com/XsQzh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XsQzh.png" alt="Overlay Image" /></a></p>
<p>So an inputted image (uint8 array of shape (3900,2922,3)) becomes a mask (Boolean array of shape (3900,2922)) or multiple masks when multiple persons are detected in the picture.</p>
<p>Now I can use this mask to cut the person out of the image with some simply Numpy array indexing:</p>
<pre><code>mask3d = np.dstack([mask]*3)
cut_out_mask = np.invert(mask3d)
res = np.where(cut_out_mask, 0, image)
</code></pre>
<p>This returns the following image:
<a href="https://i.stack.imgur.com/Wk24l.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wk24l.jpg" alt="Cut_out Image" /></a></p>
<p>Since the masks that are returned by the Mask_RCNN program are quite tight, I would like to add a margin of a few pixels (let's say 15px), so that I get something like this:</p>
<p><a href="https://i.stack.imgur.com/rUujB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rUujB.jpg" alt="manual" /></a></p>
<p>Which Numpy/ OpenCV functions can I leverage to cut out the mask from the original image (similar to <code>np.where</code>), adding a margin of 15 pixels around the mask?</p>
|
<p>One method to do this is to use <a href="https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html#dilation" rel="nofollow noreferrer"><code>cv2.dilate</code></a> to increase the surface area of the mask. Depending on your mask shape, you can create varying structuring element shapes and sizes with <a href="https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html#structuring-element" rel="nofollow noreferrer"><code>cv2.getStructuringElement</code></a>.</p>
<p>For instance, if your mask shape is rectangular you might want to use <code>cv2.MORPH_RECT</code> or if your mask shape is circular you can use <code>cv2.MORPH_ELLIPSE</code>. In addition, you can change the kernel size and the number of iterations to dilate. After dilating the mask, you can use <a href="https://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#bitwise-and" rel="nofollow noreferrer"><code>cv2.bitwise_and</code></a> to get your result. Here's a minimum reproducible example:</p>
<p>Original image</p>
<img src="https://i.stack.imgur.com/ndcsV.png" width="275">
<p>Mask</p>
<img src="https://i.stack.imgur.com/dtJan.png" width="275">
<p>Dilate</p>
<img src="https://i.stack.imgur.com/kpoYN.png" width="275">
<p>Bitwise-and for result</p>
<img src="https://i.stack.imgur.com/vxoa8.png" width="275">
<pre><code>import cv2
# Load image and mask
image = cv2.imread('1.png')
mask = cv2.imread('mask.png')
# Create structuring element, dilate and bitwise-and
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25,25))
dilate = cv2.dilate(mask, kernel, iterations=3)
result = cv2.bitwise_and(image, dilate)
cv2.imshow('dilate', dilate)
cv2.imshow('result', result)
cv2.waitKey()
</code></pre>
|
python|image|numpy|opencv|image-processing
| 4 |
1,906,928 | 64,190,206 |
Send text to clipboard in selenium
|
<p>I am working with a form that does not allow typing accented characters, but it does allow pasting text with accents.</p>
<p>How can I send text to the clipboard, then paste the text containing accents into the form?</p>
<pre><code>from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium import webdriver
options = Options()
options.headless = True
driver = webdriver.Chrome('chromedriver.exe',options=options)
driver.get('https://www.website.com')
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, openform))).click()
# send accented text to clipboard
driver.find_element(By.XPATH, formfield).send_keys(Keys.CONTROL, 'v')
</code></pre>
|
<p>You could try this in Python to copy the desired text to the clipboard and then paste it. It works with Python 3.8. If you face any issues, let me know.</p>
<pre><code>import pyperclip
pyperclip.copy('Text to be copied to the clipboard.')
clipboard_text= pyperclip.paste()
print(clipboard_text)
</code></pre>
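<p>Combining this with the paste step from the question might look like the sketch below (the element locator <code>formfield</code> is taken from the question's code and the sample text is a placeholder):</p>
<pre><code>import pyperclip
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

# put the accented text on the clipboard, then paste it into the field
pyperclip.copy('texto con acentos: áéíóú')
driver.find_element(By.XPATH, formfield).send_keys(Keys.CONTROL, 'v')
</code></pre>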
|
python-3.x|selenium|clipboard|accent-sensitive
| 1 |
1,906,929 | 70,236,110 |
How to find the line number in which something occurs?
|
<p>I was trying to create a program which tells me whether a given text has 2 adjacent words which are the same, and where in the text file this happens (line and word number). So far I have been able to determine the word number but can't seem to figure out which line it happens in. Would anyone be able to give me a hand please? Next to the Error/No Error output I am able to get the word number, but I would like the line number as well.</p>
<pre><code>
for line in (textfile):
for x, y in enumerate(zip(line.split(), line.split()[1:])):
</code></pre>
|
<p>Why don't you simply use <code>enumerate</code> again in the outer loop?</p>
<pre><code>for line_number, line in enumerate(textfile):
for x, y in enumerate(zip(line.split(), line.split()[1:])):
if(x,y)==(y,x):
print(line_number,x,y,"Error")
else:
print(line_number,x,y,"No error")
</code></pre>
|
python|zip|text-files|enumerate
| 1 |
1,906,930 | 10,920,488 |
Search multiple directories, delete duplicate files
|
<p>I have a directory of files that contains files of records. I just got access to a new directory that has the same records but additional files as well; however, the additional files are buried deep inside other folders and I can't find them.
So my solution would be to have a Python program run and delete all files that are duplicates across the two different directories (and subdirectories), and leave the others intact, which will give me the "new files" I'm looking for.</p>
<p>I have seen a couple of programs that find duplicates, but I'm unsure how they really work, and they haven't been helpful.</p>
<p>Is there any way I can accomplish what I'm looking for?
Thanks!</p>
|
<p>Possible approach:</p>
<ol>
<li>Create a set of MD5 hashes from your original folder.</li>
<li>Recursively MD5 hash the files in your new folder, deleting any files that generate hashes already present in your set.</li>
</ol>
<p>Caveat to the above is that there is a chance two different files can generate the same hash. How different are the files?</p>
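<p>A rough sketch of those two steps (the paths are placeholders; swap the <code>print</code> for <code>os.remove</code> once you trust the output):</p>
<pre><code>import hashlib
import os

def file_md5(path):
    """Return the MD5 hex digest of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

def hashes_in(root):
    """Collect the MD5 of every file under root."""
    return {file_md5(os.path.join(dirpath, name))
            for dirpath, _, names in os.walk(root)
            for name in names}

original_hashes = hashes_in('/path/to/original')

for dirpath, _, names in os.walk('/path/to/new'):
    for name in names:
        path = os.path.join(dirpath, name)
        if file_md5(path) in original_hashes:
            print('duplicate:', path)  # replace with os.remove(path) when ready
</code></pre>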
|
python|duplicate-removal
| 1 |
1,906,931 | 63,738,089 |
return the value of a a variable from a class to a function
|
<p>I am looking to take the value <code>self.timestr</code> from the class <code>StopWatch</code> and then take that value into the function <code>lap_done</code></p>
<pre><code>class StopWatch(Frame):
def __init__(self, parent=None, **kw):
Frame.__init__(self, parent, kw)
self._start = 0.0
self._elapsedtime = 0.0
self._running = 0
self.timestr = StringVar()
self.makeWidgets()
def makeWidgets(self):
""" Make the time label. """
l = Label(self, textvariable=self.timestr, font=("Arial", 25, "bold"))
self._setTime(self._elapsedtime)
l.pack(fill=X, expand=NO, pady=2, padx=2)
def _update(self):
""" Update the label with elapsed time. """
self._elapsedtime = time.time() - self._start
self._setTime(self._elapsedtime)
self._timer = self.after(50, self._update)
return timestr
def _setTime(self, elap):
""" Set the time string to Minutes:Seconds:Hundreths """
minutes = int(elap/60)
seconds = int(elap - minutes*60.0)
hseconds = int((elap - minutes*60.0 - seconds)*100)
self.timestr.set('%02d:%02d:%02d' % (minutes, seconds, hseconds))
def Start(self):
""" Start the stopwatch, ignore if running. """
if not self._running:
self._start = time.time() - self._elapsedtime
self._update()
self._running = 1
def Stop(self):
""" Stop the stopwatch, ignore if stopped. """
if self._running:
self.after_cancel(self._timer)
self._elapsedtime = time.time() - self._start
self._setTime(self._elapsedtime)
self._running = 0
def Reset(self):
""" Reset the stopwatch. """
self._start = time.time()
self._elapsedtime = 0.0
self._setTime(self._elapsedtime)
def lap_done():
timestr = StopWatch()
query = ("""UPDATE places SET lapnum = %s and %s = %s WHERE userID = %s""")
print("1")
recordTuple = (lap2,race,timestr,id)
print("2")
cursor.execute(query,recordTuple)
print("3")
mydb.commit()
</code></pre>
<p>When I run this code I get the value of <code>pyvar</code> for the value of <code>timestr</code></p>
|
<p>You got your syntax wrong.
<code>timestr = StopWatch()</code> creates an object of the class StopWatch; it doesn't extract the value from it.</p>
<p>What you should be doing instead is create an object for StopWatch as <code>obj = StopWatch()</code>, and then access the <code>timestr</code> property as <code>obj.timestr</code></p>
<p>So, your <code>recordTuple</code> will be:
<code>recordTuple = (lap2,race,obj.timestr,id)</code></p>
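<p>Note that <code>timestr</code> is a tkinter <code>StringVar</code>, so if you want the displayed text itself rather than the variable object, call its <code>get()</code> method:</p>
<pre><code>obj = StopWatch()
recordTuple = (lap2, race, obj.timestr.get(), id)
</code></pre>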
|
python|class|stopwatch
| 0 |
1,906,932 | 56,799,620 |
order types in numpy.zeros_like
|
<p>I am looking at <a href="https://people.duke.edu/~ccc14/sta-663/CUDAPython.html#matrix-multiplication-wiht-cublas" rel="nofollow noreferrer">Numba Cuda library</a>.</p>
<pre><code>import numbapro.cudalib.cublas as cublas
blas = cublas.Blas()
n =100
A = np.random.random((n, n)).astype(np.float32)
B = np.random.random((n, n)).astype(np.float32)
C = np.zeros_like(A, order='F')
blas.gemm('T', 'T', n, n, n, 1.0, A, B, 1.0, C)
assert(np.allclose(np.dot(A, B), C))
</code></pre>
<p>After checking <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros_like.html" rel="nofollow noreferrer">numpy.zeros_like</a>, I am curious about the optional parameter <strong>order</strong> which has 4 different types: 'C', 'F', 'A', and 'K'.</p>
<blockquote>
<p>order : {‘C’, ‘F’, ‘A’, or ‘K’}, optional
Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of a as closely as possible.</p>
</blockquote>
<p>There are descriptions in the documentation. But I am still confused.
What is the difference between different order types?</p>
|
<p>The clearest example I can imagine of these orders is with a simple 2d array:</p>
<p>The default ordering, 'C':</p>
<pre><code>In [5]: x = np.arange(12).reshape(3,4)
In [6]: x
Out[6]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
</code></pre>
<p>'F' - note how values count down the columns:</p>
<pre><code>In [7]: x = np.arange(12).reshape(3,4, order='F')
In [8]: x
Out[8]:
array([[ 0, 3, 6, 9],
[ 1, 4, 7, 10],
[ 2, 5, 8, 11]])
</code></pre>
<p>now take that last 'F' order, and ravel the values</p>
<pre><code>In [9]: x.ravel(order='C')
Out[9]: array([ 0, 3, 6, 9, 1, 4, 7, 10, 2, 5, 8, 11])
In [10]: x.ravel(order='F')
Out[10]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
In [11]: x.ravel(order='K')
Out[11]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
</code></pre>
<p>And so on; we can play with other combinations.</p>
|
python|numpy
| 2 |
1,906,933 | 56,526,418 |
Basic question regarding Python 2.7 classes, instances & variable scope
|
<p>When I execute the following code:</p>
<pre><code>class myClass:
_a = 'a'
def __init__(self, a):
self._a = a;
def set_a(self, new_a):
self._a = new_a
def print_a(self):
print self._a
c = myClass('b')
print c._a
c.print_a
c.set_a('c')
print c._a
c.print_a
</code></pre>
<p>I expect the output to be:</p>
<pre><code>b
b
c
c
</code></pre>
<p>But instead the result is:</p>
<pre><code>b
c
</code></pre>
<p>Why is that so? The method print_a doesn't even seem to be able to find self._a.</p>
|
<p><code>c.print_a</code> is not executing anything. To run the function <code>c.print_a</code>, you should <em>call</em> it: <code>c.print_a()</code>.</p>
<p>Why is it even legal to use a function like that tho?! Yes, you can 'mention' functions, other attributes and pretty much any other name, and this <em>normally</em> won't do anything (for example, it won't actually run the function):</p>
<pre><code>class T:
thing = 4
T # does nothing
T.thing # does nothing
import math
math # does nothing
</code></pre>
<p>On the other hand, if the attribute is special, 'mentioning' it may invoke some behaviour:</p>
<pre><code>>>> class T:
... @property
... def thing(self):
... print("Hello!")
...
>>> T()
<__main__.T object at 0x10c2cec50>
>>> _.thing
Hello! # It doesn't look like something is being called, but it actually is!
</code></pre>
|
python|python-2.7
| 2 |
1,906,934 | 18,121,173 |
TypeError, When calculating variables from a dictionary
|
<p>What type is <code>val_dict.get(filename)[1]</code>? I keep getting this TypeError, telling me it's a NoneType: </p>
<pre><code>TypeError: unsupported operand type(s) for *: 'float' and 'NoneType'
</code></pre>
<p>When I try to run a script with this line :</p>
<pre><code>water = float(x1[count]) * (val_dict.get(filename[1]))
</code></pre>
<p>I keep getting the errors. I've tried calling a <code>float</code>, <code>int</code> and <code>str</code>, but it obviously didn't help. </p>
<p>I've written a script to import certain values from an Excel-spreadsheet and scale them. After that my script is supposed to extract these values to a new excel spreadsheet, and what I thought was the easiest way was storing all my data in a dictionary that looks like this:</p>
<pre><code> val_dict[filename] = [model, lenghtvalue, dispvalue, Froudemax, Froudemin]
return val_dict
val_dict = locate_vals()
print val_dict
#output:
{'C:/eclipse/TST-folder\\Test spreadsheet model 151B WL3.xls': [u'M151B', 14.89128, 316.29, 1.0, 0.4041900066850382], 'C:/eclipse/TST-folder\\FolderA\\Test spreadsheet model 151 WL1 Lastet.xls': [u'M151', 14.672460000000001, 316.28887, 1.0, 0.4041900066850382]}
</code></pre>
<p>In case it is interesting, the x1-list's output looks like this:</p>
<pre><code>[0.35714285714285715, 0.35714285714285715] [1.5845602477694463e-07, 1.5845602477694463e-07]
</code></pre>
|
<p><code>val_dict[filename]</code> is a list which can be indexed; <code>filename[1]</code>, on the other hand, is just a single character of the filename string, so <code>val_dict.get(filename[1])</code> looks up a key that doesn't exist and returns <code>None</code>.</p>
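<p>A minimal sketch of the corrected expression, using the names from the question:</p>
<pre><code>water = float(x1[count]) * val_dict.get(filename)[1]
</code></pre>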
<p>P.S.
I think the following picture might help (from the blog post <a href="http://pythonforbiologists.com/index.php/29-common-beginner-python-errors-on-one-page/" rel="nofollow noreferrer"><em>29 common beginner Python errors on one page</em></a>). <br></p>
<p><img src="https://i.stack.imgur.com/7v6h6.png" alt="29 common beginner Python errors on one page"></p>
|
python|types|dictionary
| 1 |
1,906,935 | 61,027,610 |
How to store python output recieved in node js?
|
<p>I'm invoking a Python script from Node.js. The Python script retrieves data from a REST API, stores it in a dataframe, and then there's a search function based on user input. I'm confused as to what variable type Python sends the data to Node.js in. I've tried converting it into a string, but in Node.js it says it is an unresolved variable type. Here's the code:</p>
<pre><code>r = requests.get(url)
data = r.json()
nested = json.loads(r.text)
nested_full = json_normalize(nested)
req_data= json_normalize(nested,record_path ='items')
search = req_data.get(["name", "id"," ])
#search.head(10)
filter = sys.argv[1:]
print(filter)
input = filter[0]
print(input)
result = search[search["requestor_name"].str.contains(input)]
result = result.to_string(index=False)
response = '```' + str(result) + '```'
print(response)
sys.stdout.flush()
</code></pre>
<p>Here's the Node.js program that invokes the above Python script. How do I store the output in a format which I can pass to another function in Node?</p>
<pre><code>var input = 'robert';
var childProcess = require("child_process").spawn('python', ['./search.py', input], {stdio: 'inherit'})
const stream = require('stream');
const format = require('string-format')
childProcess.on('data', function(data){
process.stdout.write("python script output",data)
result += String(data);
console.log("Here it is", data);
});
childProcess.on('close', function(code) {
if ( code === 1 ){
process.stderr.write("error occured",code);
process.exit(1);
}
else{
process.stdout.write('done');
}
});
</code></pre>
|
<p>According to <a href="https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options" rel="nofollow noreferrer">the docs</a>:</p>
<pre><code>childProcess.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
</code></pre>
|
python|node.js|type-conversion|child-process
| 1 |
1,906,936 | 60,844,932 |
Create new column based on data in existing column
|
<p>I have a three way hierarchy: property -> prov -> co. Each property has a segment, i.e. hotel / home. I have written a query to get the count of each below:</p>
<pre><code>properties = spark.sql("""
SELECT
COUNT(ps.property_id) as property_count,
ps.prov_id,
c.id as co_id,
ps.segment
FROM
schema.t1 ps
INNER JOIN
schema.t2 c
ON c.id = p.co_id
GROUP BY
2,3,4
""")
properties = properties.toPandas()
</code></pre>
<p>This gives me the total number of properties per segment, per prov, per co. From the above df <code>properties</code>, I want to create a new df which looks like this:</p>
<pre><code>- prov_id,
- prov_segment,
- co_id,
- co_segment
</code></pre>
<p>The <code>prov_segment</code> should be 'Home' if >50% of the properties in that <code>prov_id</code> fall into the <code>Home</code> segment, otherwise it should be <code>Core</code>.
Likewise, <code>co_segment</code> should be <code>Home</code> if >50% of the <code>prov_id</code>s fall into the <code>Home</code> prov_segment, otherwise it should be core.</p>
<p>I know, I can get the total number of properties by grouping the data:</p>
<pre><code>prop_total_count = properties.groupby('prov_name')['property_count'].sum()
</code></pre>
<p>However, I'm unsure how to use this to create the new dataframe.</p>
<p>Example data:</p>
<p><code>properties.show(6)</code>:</p>
<pre><code>| property_count | prov_id | co_id | segment |
|----------------|---------|-------|---------|
| 10 | 1 | ABC | Core |
| 200 | 1 | ABC | Home |
| 300 | 9 | ABC | Core |
| 10 | 9 | ABC | Home |
| 100 | 131 | MNM | Home |
| 200 | 199 | KJK | Home |
</code></pre>
<p>Based on the above, I would want the below output:</p>
<pre><code>| prov_id | prov_segment | co_id | co_segment |
|---------|--------------|-------|------------|
| 1 | Home | ABC | Core |
| 9 | Core | ABC | Core |
| 131 | Home | MNM | Home |
| 199 | Home | KJK | Home |
</code></pre>
<p>prov_id 1 gets a Home segment as it has 200 home properties compared to 10 core properties. prov_id 9 gets a Core segment as it has 300 core properties to 10 Home properties. </p>
<p>co_id ABC gets a Core segment due to the portfolio having a total 310 Core properties compared to 210 Home properties.</p>
<p>prov_id 131 and 199 only are in a single segment so that segment remains.</p>
|
<p>Ok, it is maybe possible to tackle this problem in a "shorter" way, but this should work. It relies on creating two other DataFrames with the segments per group (<code>co_id</code> or <code>prov_id</code>) and then merging the DataFrames at the end.</p>
<p>Merging a Series like <code>co_id['co_segment']</code> to a DataFrame is not possible with older <code>pandas</code> versions so I added the <code>.to_frame()</code> function for compatibility purposes. With <code>pandas</code> version <code>>= 0.25.1</code> this operation is allowed and that function call is superfluous.</p>
<p><strong>NB</strong>: This code assumes the only segments are <code>Home</code>, <code>Core</code> and <code>Managed</code>.</p>
<pre><code>import pandas as pd
properties = pd.DataFrame(data={'property_count': [10, 200, 300, 10, 100, 200],
'prov_id': [1, 1, 9, 9, 131, 199],
'co_id': ['ABC', 'ABC', 'ABC', 'ABC', 'MNM', 'KJK'],
'segment': ['Core', 'Home', 'Core', 'Home', 'Home', 'Home']})
def get_segment(row):
if row['home_perc'] > 0.5:
return 'Home'
elif row['core_perc'] > 0.5:
return 'Core'
else:
return 'Managed'
def get_grouped_dataframe(properties_df, grouping_col):
id = pd.DataFrame()
id['total'] = properties.groupby(grouping_col)['property_count'].sum()
id['home'] = properties[properties.segment == 'Home'].groupby(grouping_col)['property_count'].sum()
id['core'] = properties[properties.segment == 'Core'].groupby(grouping_col)['property_count'].sum()
id['managed'] = properties[properties.segment == 'Managed'].groupby(grouping_col)['property_count'].sum()
id['home_perc'] = id['home'] / id['total']
id['home_perc'] = id['home_perc'].fillna(0)
id['core_perc'] = id['core'] / id['total']
id['core_perc'] = id['core_perc'].fillna(0)
id['managed_perc'] = id['core'] / id['total']
id['managed_perc'] = id['core_perc'].fillna(0)
id['segment'] = id.apply(get_segment, axis=1)
return id
prov_id = get_grouped_dataframe(properties, 'prov_id')
prov_id.rename(columns={'segment': 'prov_segment'}, inplace=True)
# total home core home_perc core_perc prov_segment
# prov_id
# 1 210 200 10.0 0.952381 0.047619 Home
# 9 310 10 300.0 0.032258 0.967742 Core
# 131 100 100 NaN 1.000000 0.000000 Home
# 199 200 200 NaN 1.000000 0.000000 Home
co_id = get_grouped_dataframe(properties, 'co_id')
co_id.rename(columns={'segment': 'co_segment'}, inplace=True)
# total home core home_perc core_perc co_segment
# co_id
# ABC 520 210 310.0 0.403846 0.596154 Core
# KJK 200 200 NaN 1.000000 0.000000 Home
# MNM 100 100 NaN 1.000000 0.000000 Home
property_segments = properties.drop(columns=['property_count', 'segment']).drop_duplicates()
property_segments = pd.merge(property_segments, prov_id['prov_segment'].to_frame(), on='prov_id')
property_segments = pd.merge(property_segments, co_id['co_segment'].to_frame(), on='co_id')
# prov_id co_id co_segment prov_segment
# 0 1 ABC Core Home
# 1 9 ABC Core Core
# 2 131 MNM Home Home
# 3 199 KJK Home Home
</code></pre>
<p><strong>EDIT</strong>: Put repeated code in function, added <code>Managed</code> segment as per comment. Add extra <code>to_frame()</code> for compatibility purposes.</p>
|
python-3.x|pandas|apache-spark-sql|pandas-groupby
| 1 |
1,906,937 | 60,898,702 |
How to ETL very large csv from AWS S3 to Dynamo
|
<p>Looking for some tips here. I did quite a bit of coding and research using Python 3 and Lambda. However, timeout is the biggest issue I am struggling with at the moment. I am trying to read a very large csv file (3GB) from S3 and push the rows into DynamoDB. I am currently reading about 1024 * 32 bytes at a time, then pushing the rows into DynamoDB (batch write with asyncio) using a pub/sub pattern, and it works great for small files, i.e. ~500K rows. It times out when I have millions of rows. I'm trying NOT to use AWS Glue and/or EMR. I have some constraints/limitations with those.</p>
<p>Does anyone know if this can be done using Lambda or step functions? If so, could you please share your ideas? Thanks!!</p>
|
<p>Besides the Lambda time constraint, you might run into the Lambda memory constraint while you are reading the file in AWS Lambda, as Lambda has just 512 MB of <code>/tmp</code> directory storage, and that again depends on how you are reading the file in Lambda.</p>
<p>If you don't want to go via AWS Glue or EMR, another thing you can do is provision an EC2 instance and run the same code you are running in Lambda from there. To make it cost effective, you can make the EC2 instance transient, i.e. provision it when you need to run the S3-to-DynamoDB job and shut it down once the job is completed. This transient nature can be achieved with a Lambda function, or you can orchestrate the same with Step Functions. Another option you can look into is AWS Data Pipeline.</p>
|
python-3.x|aws-lambda|aws-step-functions
| 2 |
1,906,938 | 66,313,271 |
In python, how to decode strings whose literal content is in utf-8?
|
<p>I was trying to make an add-on for <a href="https://github.com/ankitects/anki" rel="nofollow noreferrer">Anki</a> which imports the OPML notes from Mubu, and the contents that I needed were stored in a <code>str</code> object like the one below; I was not able to decode them or convert them into byte objects.</p>
<pre><code>"\x3Cspan\x3E\xE6\x88\x91\xE5\x8F\x91\xE7\x8E\xB0\xE6\x88\x91\xE5\xB1\x85\xE7\x84\xB6\xE6\xB2\xA1\xE6\x9C\x89\xE6\xB5\x8B\xE8\xAF\x95\xE8\xBF\x87\xE4\xB8\xAD\xE6\x96\x87\xEF\xBC\x8C\xE8\xBF\x99\xE4\xB8\xAA\xE5\xB0\xB1\xE5\xA4\xAA\xE7\xA6\xBB\xE8\xB0\xB1\xE4\xBA\x86\xE3\x80\x82\x3C/span\x3E"
</code></pre>
<p>Previously, I tried to decode this string using the following method, but it does not support UTF-8:</p>
<pre class="lang-py prettyprint-override"><code>text = text.encode().decode("unicode_escape")
</code></pre>
<p>I wonder if there is a way to turn str objects whose literal content is in utf-8 into byte objects.</p>
|
<p>This can also be solved by importing the <code>unquote</code> function from <code>urllib.parse</code>, which converts <code>%XX</code> escapes into text.</p>
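<p>If the string itself contains the raw UTF-8 byte values as characters (as in the example above), another option is a Latin-1 round trip (a sketch, assuming every character in the string has a code point below 256):</p>
<pre><code>s = "\x3Cspan\x3E\xE6\x88\x91\xE5\x8F\x91\xE7\x8E\xB0\x3C/span\x3E"
decoded = s.encode("latin-1").decode("utf-8")
print(decoded)  # <span>我发现</span>
</code></pre>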
|
python
| 0 |
1,906,939 | 72,611,147 |
How to get more enemies to pop up after the snake has ate a block
|
<p>So I made a snake game, but in order to make it harder I added an enemy (a red square) that the user has to avoid, and I want it so that when the snake eats one of its food items, another enemy will randomly spawn. An example would be: if it has eaten 5 pieces of food, then there will be 6 enemies in the game. So I was wondering how to make another enemy spawn in a different random location while still keeping the original enemies from before. This is the code I have right now. Any help is appreciated.</p>
<pre><code>import sys,pygame
import random
pygame.display.set_caption('Snake Game')
screen_width, screen_height = 600,470
screen = pygame.display.set_mode((screen_width, screen_height))
fps = pygame.time.Clock()
snake_speed = 22
white = pygame.Color(250, 250, 250)
green = pygame.Color(0, 250, 0)
blue = pygame.Color(250, 0, 0)
black = pygame.Color(0, 0, 0)
red = pygame.Color(250, 0, 0)
pygame.init()
snake_position = [100, 50]
snake_body = [ [150, 80],
[100, 50],
[100, 50],
[80, 50]
]
food_position = [random.randrange(1, (screen_width//10)) * 10,
random.randrange(1, (screen_height//10)) * 10]
food_spawn = True
enemy_position = [random.randrange(1, (screen_width//10)) * 10,
random.randrange(1, (screen_height//10)) * 10]
enemy_spawn = True
direction = 'RIGHT'
score = 0
def show_score(choice, color, font, size):
score_font = pygame.font.SysFont(font, size)
score_surface = score_font.render('Score : ' + str(score), True, color)
score_rect = score_surface.get_rect()
screen.blit(score_surface, score_rect)
blue = pygame.Color(0, 0, 255)
def game_over():
my_font = pygame.font.SysFont('comic sans', 50)
game_over_surface = my_font.render('Your Score is : ' + str(score), True, red)
game_over_rect = game_over_surface.get_rect()
game_over_rect.midtop = (screen_width/2, screen_height/4)
screen.blit(game_over_surface, game_over_rect)
pygame.display.flip()
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
while True:
change_to = direction
for event in pygame.event.get():
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_UP:
change_to = 'UP'
if event.key == pygame.K_DOWN:
change_to = 'DOWN'
if event.key == pygame.K_LEFT:
change_to = 'LEFT'
if event.key == pygame.K_RIGHT:
change_to = 'RIGHT'
if change_to == 'UP' and direction != 'DOWN':
direction = 'UP'
if change_to == 'DOWN' and direction != 'UP':
direction = 'DOWN'
if change_to == 'LEFT' and direction != 'RIGHT':
direction = 'LEFT'
if change_to == 'RIGHT' and direction != 'LEFT':
direction = 'RIGHT'
if direction == 'UP':
snake_position[1] -= 10
if direction == 'DOWN':
snake_position[1] += 10
if direction == 'LEFT':
snake_position[0] -= 10
if direction == 'RIGHT':
snake_position[0] += 10
snake_body.insert(0, list(snake_position))
if snake_position[0] == food_position[0] and snake_position[1] == food_position[1]:
score += 1
food_spawn = False
else:
snake_body.pop()
if not food_spawn:
food_position = [random.randrange(1, (screen_width//10)) * 10,
random.randrange(1, (screen_height//10)) * 10]
food_spawn = True
screen.fill(black)
for pos in snake_body:
pygame.draw.rect(screen, blue, pygame.Rect(
pos[0], pos[1], 10, 10))
pygame.draw.rect(screen, green, pygame.Rect(
food_position[0], food_position[1], 10, 10))
pygame.draw.rect(screen, red, pygame.Rect(
enemy_position[0], enemy_position[1], 10, 10))
if snake_position[0] == enemy_position[0] and snake_position[1] == enemy_position[1]:
game_over()
if snake_position[0] < 0 or snake_position[0] > screen_width-10:
game_over()
if snake_position[1] < 0 or snake_position[1] > screen_height-10:
game_over()
for block in snake_body[1:]:
if snake_position[0] == block[0] and snake_position[1] == block[1]:
game_over()
show_score(1, white, 'comic sans', 20)
show_score(1, white, 'comic sans', 20)
pygame.display.update()
fps.tick(snake_speed)
</code></pre>
|
<p>Create a list of enemies:</p>
<pre class="lang-py prettyprint-override"><code>enemy_list = [
[random.randrange(1, (screen_width//10)) * 10,
random.randrange(1, (screen_height//10)) * 10]
]
</code></pre>
<p>Draw the enemies in a for loop and run the collision test with the enemy in a for loop and add a new enemy to the list as the snake eats and grows:</p>
<pre class="lang-py prettyprint-override"><code>while True:
# [...]
snake_body.insert(0, list(snake_position))
if snake_position[0] == food_position[0] and snake_position[1] == food_position[1]:
score += 1
food_spawn = False
# INSERT
enemy_list.append(
[random.randrange(1, (screen_width//10)) * 10,
random.randrange(1, (screen_height//10)) * 10]
)
else:
snake_body.pop()
# [...]
# INSERT
for enemy_position in enemy_list:
pygame.draw.rect(screen, red, pygame.Rect(
enemy_position[0], enemy_position[1], 10, 10))
# DELETE
# pygame.draw.rect(screen, red, pygame.Rect(
# enemy_position[0], enemy_position[1], 10, 10))
# INSERT
for enemy_position in enemy_list:
if snake_position[0] == enemy_position[0] and snake_position[1] == enemy_position[1]:
game_over()
# DELETE
# if snake_position[0] == enemy_position[0] and snake_position[1] == enemy_position[1]:
# game_over()
# [...]
</code></pre>
|
python|pygame
| 1 |
1,906,940 | 72,595,575 |
Is there a Python code to pull information from Bitbucket?
|
<p>I would need information such as drawing number, all commit requests, all pull requests, who created it, etc.</p>
|
<p>Use <code>atlassian-python-api</code> package, BitBucket module: <a href="https://atlassian-python-api.readthedocs.io/bitbucket.html" rel="nofollow noreferrer">https://atlassian-python-api.readthedocs.io/bitbucket.html</a></p>
<p>for example you can extract commits by this line of code:</p>
<pre><code>bitbucket.get_commits(project, repository, hash_oldest, hash_newest, limit=99999)
</code></pre>
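<p>A rough sketch of creating the client before calling such methods (the URL and credentials are placeholders; method names other than <code>get_commits</code> are assumptions and should be checked against the linked docs for your library version):</p>
<pre><code>from atlassian import Bitbucket

bitbucket = Bitbucket(
    url='https://bitbucket.example.com',  # placeholder
    username='user',                      # placeholder
    password='token')                     # placeholder

commits = bitbucket.get_commits(project, repository, hash_oldest, hash_newest, limit=99999)
pull_requests = bitbucket.get_pull_requests(project, repository)  # assumed method name
</code></pre>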
|
python|bitbucket
| 1 |
1,906,941 | 68,281,776 |
Compare two dataframes and output new column
|
<p>I'm a beginner in Python. I have two dataframes, each with 5 columns but only the first two columns from each dataframe have matching data. Each dataframe have different number of records. I would like to compare column <strong>A</strong> from <strong>df1</strong> against column <strong>A</strong> from <strong>df2</strong> and if they match, then output column <strong>D</strong> (ownerEmail) from df2. If columns A don't match, column D should be null.</p>
<p><strong>df1</strong></p>
<pre><code>subscriptionId | displayName | state | authorization | tenantId
12345 | DEV_SPS | Enabled | RoleBased | 938c49a8
67890 | PROD_LINUX | Enabled | RoleBased | 0a9cb9ee
11900 | TST_WIN | Enabled | RoleBased | e1513511
</code></pre>
<p><strong>df2</strong></p>
<pre><code>subscriptionId | SubName | Connected | ownerEmail | organization
12345 | DEV_SPS | Enabled | john.doe@gmail.com | Marketing
67890 | PROD_LINUX | Enabled | alex.bre@gmail.com | Sales
</code></pre>
<p><strong>Desired output</strong></p>
<pre><code>subscriptionId | displayName | state | authorization | tenantId | ownerEmail
123456 | DEV_SPS | Enabled | RoleBased | 938c49a8 | john.doe@gmail.com
67890 | PROD_LINUX | Enabled | RoleBased | 0a9cb9ee | alex.bre@gmail.com
11900 | TST_WIN | Enabled | RoleBased | e1513511 | null
</code></pre>
<p>I have tried something like this but it didn't work.</p>
<pre><code>df1['ownerEmail'] = np.where(df1['subscriptionId'] == df2['subscriptionId'], ['ownerEmail'], "")
print(df1)
</code></pre>
<p>Any help would be much appreciated.</p>
<p>Thank you.</p>
|
<p>Merge your dataframes on <code>subscriptionId</code> column and keep all records from <code>df1</code> (<code>how='left'</code>):</p>
<pre><code>>>> pd.merge(df1.astype({'subscriptionId': str}),
df2[['subscriptionId', 'ownerEmail']].astype({'subscriptionId': str}),
on='subscriptionId', how='left')
subscriptionId displayName state authorization tenantId ownerEmail
0 12345 DEV_SPS Enabled RoleBased 938c49a8 john.doe@gmail.com
1 67890 PROD_LINUX Enabled RoleBased 0a9cb9ee alex.bre@gmail.com
2 11900 TST_WIN Enabled RoleBased e1513511 NaN
</code></pre>
|
python|pandas|dataframe|merge|compare
| 1 |
1,906,942 | 59,404,050 |
% symbol and regular expressions
|
<p>How does this line of code work? Google searches on individual characters don't work well. </p>
<pre><code>re.sub(r'(.*>.*/.*)%s(_R[12].*)' % sample.group(1), r'\1%s\2' % sample_name[1], line)
</code></pre>
<p>What I don't understand:</p>
<ul>
<li><code>"% sample.group(1)"</code> .... what is % doing?</li>
<li><code>'\1%s\2' %</code></li>
<li><code>%s</code></li>
</ul>
<p>What I understand:</p>
<ul>
<li>re.sub(x,y,z) will replace matches of pattern x with y in string z</li>
<li>r is for raw (don't mess with /)</li>
<li>arrays & indexes</li>
<li><code>_R[12].*</code> matches "_R" and a 1 or 2 followed by random characters.</li>
<li>line (it's a string)</li>
</ul>
<p>Thanks!</p>
|
<p>The <a href="https://docs.python.org/2/library/stdtypes.html#string-formatting" rel="nofollow noreferrer"><code>%</code> string operator</a> is used for string interpolation/formatting. Think <code>sprintf</code> or <code>String.format</code>:</p>
<pre><code>r'(.*>.*/.*)%s(_R[12].*)' % sample.group(1)
</code></pre>
<p>Equals</p>
<pre><code>r'(.*>.*/.*)' + sample.group(1) + r'(_R[12].*)'
</code></pre>
<p>Specifically, the <code>s</code> operator (i.e., <code>%s</code>) is defined as:</p>
<blockquote>
<p>String (converts any Python object using <a href="https://docs.python.org/2/library/functions.html#str" rel="nofollow noreferrer"><code>str()</code></a>).</p>
</blockquote>
<p><a href="https://docs.python.org/3/library/string.html#string-formatting" rel="nofollow noreferrer"><code>.format</code> is the modern way</a> to go, though.</p>
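<p>For illustration, the same pattern built with <code>str.format</code> (or an f-string on Python 3.6+) rather than <code>%</code>:</p>
<pre><code>pattern = r'(.*>.*/.*){}(_R[12].*)'.format(sample.group(1))
# or: pattern = rf'(.*>.*/.*){sample.group(1)}(_R[12].*)'
</code></pre>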
|
python|regex|python-3.x
| 2 |
1,906,943 | 59,058,588 |
Pyspark got TypeError: can’t pickle _abc_data objects
|
<p>I'm trying to generate predictions from a pickled model with PySpark. I get the model with the following command:</p>
<p><code>model = deserialize_python_object(filename)</code></p>
<p>with <code>deserialize_python_object(filename)</code> defined as:</p>
<pre><code>import pickle
def deserialize_python_object(filename):
try:
        with open(filename, 'rb') as f:
obj = pickle.load(f)
except:
obj = None
return obj
</code></pre>
<p>the error log looks like:</p>
<pre><code>File “/Users/gmg/anaconda3/envs/env/lib**strong text**/python3.7/site-packages/pyspark/sql/udf.py”, line 189, in wrapper
return self(*args)
File “/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/sql/udf.py”, line 167, in __call__
judf = self._judf
File “/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/sql/udf.py”, line 151, in _judf
self._judf_placeholder = self._create_judf()
File “/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/sql/udf.py”, line 160, in _create_judf
wrapped_func = _wrap_function(sc, self.func, self.returnType)
File “/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/sql/udf.py”, line 35, in _wrap_function
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
File “/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/rdd.py”, line 2420, in _prepare_for_python_RDD
pickled_command = ser.dumps(command)
File “/Users/gmg/anaconda3/envs/env/lib/python3.7/site-packages/pyspark/serializers.py”, line 600, in dumps
raise pickle.PicklingError(msg)
_pickle.PicklingError: Could not serialize object: TypeError: can’t pickle _abc_data objects
</code></pre>
|
<p>It seems that you are having the same problem as in this issue:
<a href="https://github.com/cloudpipe/cloudpickle/issues/180" rel="noreferrer">https://github.com/cloudpipe/cloudpickle/issues/180</a></p>
<p>What is happening is that pyspark's bundled cloudpickle library is outdated for Python 3.7; you can work around the problem with the patch below <a href="https://github.com/apache/spark/pull/26009" rel="noreferrer">until pyspark gets that module updated</a>.</p>
<p>Try using this workaround:</p>
<ol>
<li><p>Install cloudpickle <code>pip install cloudpickle</code></p>
</li>
<li><p>Add this to your code:</p>
</li>
</ol>
<pre><code>import cloudpickle
import pyspark.serializers
pyspark.serializers.cloudpickle = cloudpickle
</code></pre>
<p>monkeypatch credit <a href="https://github.com/cloudpipe/cloudpickle/issues/305" rel="noreferrer">https://github.com/cloudpipe/cloudpickle/issues/305</a></p>
|
python|pyspark
| 8 |
1,906,944 | 73,101,098 |
Cumulative result with specific number in pandas
|
<p>This is my DataFrame:</p>
<pre><code>index, value
10, 109
11, 110
12, 111
13, 110
14, 108
15, 106
16, 100
</code></pre>
<p>I want to build another column based on multiplying by 0,05 with a cumulative result.</p>
<pre><code>index, value, result
10, 109, 109
11, 110, 109 + (0,05 * 1) = 109,05
12, 111, 109 + (0,05 * 2) = 109,1
13, 110, 109 + (0,05 * 3) = 109,15
14, 108, 109 + (0,05 * 4) = 109,2
15, 106, 109 + (0,05 * 5) = 109,25
16, 100, 109 + (0,05 * 6) = 109,3
</code></pre>
<p>I tried to experiment with shift and cumsum, but nothing works. Can you give me advice on how to do it?</p>
<p>Now I do something like:</p>
<pre><code>counter = 1
result = {}
speed = 0,05
for item in range (index + 1, last_row_index + 1):
result[item] = result[first_index] + speed * counter
counter += 1
</code></pre>
<p>P.S. While you were answering I edited the result column. Please don't blame me. I am a really silly person, but I am trying to grow.</p>
<p>Thank you all for your answers!</p>
|
<p>Use <code>numpy</code>:</p>
<pre><code>df['result'] = df['value'].iloc[0]*1.05**np.arange(len(df))
</code></pre>
<p>Output:</p>
<pre><code> index value result
0 10 109 109.000000
1 11 110 114.450000
2 12 111 120.172500
3 13 110 126.181125
4 14 108 132.490181
5 15 106 139.114690
6 16 100 146.070425
</code></pre>
<p>After you edited the question:</p>
<pre><code>df['result'] = df['value'].iloc[0]+0.05*np.arange(len(df))
</code></pre>
<p>output:</p>
<pre><code> index value result
0 10 109 109.00
1 11 110 109.05
2 12 111 109.10
3 13 110 109.15
4 14 108 109.20
5 15 106 109.25
6 16 100 109.30
</code></pre>
|
python|pandas
| 2 |
1,906,945 | 62,346,531 |
transform a tsv file into a pandas dataframe
|
<p>I've been facing a problem since last week with my TSV file that I would like to modify and transform into a pandas dataframe.</p>
<p>My file looks like this : </p>
<pre><code>'NC_011745.1_islands.csv': [['PAI 1 EaaA, EibA : 3.1'],
['PAI 2 EaaA : 7.75'],
['PAI 3 Capsule : 4.428571428571429'],
['PAI 4 EaaA : 7.75'],
['PAI 5 ipaH : 7.75'],
['PAI 6 IreA, IrgA homolog adhesin (Iha) : '
'0.96875'],
['PAI 7 IrgA homolog adhesin (Iha), Aerobactin : '
'0.8157894736842105'],
['PAI 8 MsbB2, VirK : 2.8181818181818183'],
['PAI 9 Antigen 43, AIDA-I type : '
'1.3478260869565217']],
'NC_017632_islands.csv': [['PAI 1 Capsule : 15.857142857142858'],
['PAI 2 AAI/SCI-II, direct heme uptake system, '
'Colibactin, Colibactin : 1.819672131147541'],
['PAI 3 F9-like fimbriae, Type 1 fimbriae : '
'3.3636363636363638'],
['PAI 4 Ferrous iron transport : 5.045454545454546'],
['PAI 5 Cah, AIDA-I type, Salmochelin, S fimbriae : '
'2.707317073170732'],
['PAI 6 ECP, Tsh : 13.875'],
['PAI 7 ACE/AEC T6SS : 9.25'],
['PAI 8 Tia/Hek, P fimbriae, F17-like fimbriae, '
'AAI/SCI-II, CNF-1, Alpha-hemolysin, '
'hemagglutinin-like adhesin : 1.088235294117647']],
'NC_017646_islands.csv': [['PAI 1 Allantion utilization : 5.285714285714286'],
['PAI 2 direct heme uptake system : 4.44'],
['PAI 3 ipaH : 27.75'],
['PAI 4 P fimbriae, Aerobactin, Sat, IrgA homolog '
'adhesin (Iha), K1 capsule, K1 capsule, T2SS : '
'1.3058823529411765'],
['PAI 5 P fimbriae, Tia/Hek : 5.842105263157895'],
['PAI 6 VirK, MsbB2 : 10.090909090909092']]}
</code></pre>
<p>And I would like to modify and export it as a pandas dataframe like this :</p>
<pre><code>\ EaaA, EibA EaaA Capsule ipaH IreA, IrgA homolog adhesin (Iha) ...
NC_011745.1 3.1 7.75 4.4285.. 7.75 0.96875
NC_017632 NA NA 15.8574 NA NA
</code></pre>
<p>The main problem for me is turning it into a dataframe. I tried:</p>
<pre><code>df = pd.DataFrame([dict]).T
df.to_tsv()
</code></pre>
<p>but it says that this function does not work with tsv, only with csv.</p>
|
<p>You can't do this out of the box with pandas - pandas is good, but it isn't magic. You're going to need to do a lot of manipulation before your data is ready for a dataframe in the format that you want. Try something like this:</p>
<pre><code>_dict={'NC_011745.1_islands.csv': [['PAI 1 EaaA, EibA : 3.1'],
['PAI 2 EaaA : 7.75'],
['PAI 3 Capsule : 4.428571428571429'],
['PAI 4 EaaA : 7.75'],
['PAI 5 ipaH : 7.75'],
['PAI 6 IreA, IrgA homolog adhesin (Iha) : '
'0.96875'],
['PAI 7 IrgA homolog adhesin (Iha), Aerobactin : '
'0.8157894736842105'],
['PAI 8 MsbB2, VirK : 2.8181818181818183'],
['PAI 9 Antigen 43, AIDA-I type : '
'1.3478260869565217']],
'NC_017632_islands.csv': [['PAI 1 Capsule : 15.857142857142858'],
['PAI 2 AAI/SCI-II, direct heme uptake system, '
'Colibactin, Colibactin : 1.819672131147541'],
['PAI 3 F9-like fimbriae, Type 1 fimbriae : '
'3.3636363636363638'],
['PAI 4 Ferrous iron transport : 5.045454545454546'],
['PAI 5 Cah, AIDA-I type, Salmochelin, S fimbriae : '
'2.707317073170732'],
['PAI 6 ECP, Tsh : 13.875'],
['PAI 7 ACE/AEC T6SS : 9.25'],
['PAI 8 Tia/Hek, P fimbriae, F17-like fimbriae, '
'AAI/SCI-II, CNF-1, Alpha-hemolysin, '
'hemagglutinin-like adhesin : 1.088235294117647']],
'NC_017646_islands.csv': [['PAI 1 Allantion utilization : 5.285714285714286'],
['PAI 2 direct heme uptake system : 4.44'],
['PAI 3 ipaH : 27.75'],
['PAI 4 P fimbriae, Aerobactin, Sat, IrgA homolog '
'adhesin (Iha), K1 capsule, K1 capsule, T2SS : '
'1.3058823529411765'],
['PAI 5 P fimbriae, Tia/Hek : 5.842105263157895'],
['PAI 6 VirK, MsbB2 : 10.090909090909092']]}
f = {}
for key, a in _dict.items():
e = {}
for b in a:
for c in b:
d = c.split(" : ")
d[0] = d[0].replace("PAI ", "")[2:]
d = {d[0]:d[1]}
e = {**e, **d}
f[key] = e
df = pd.DataFrame.from_dict(f, 'index')
</code></pre>
<p>You'll need to work out a robust method for parsing your strings - probably regex - but this should get you started.</p>
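<p>As an illustration of that last point, a sketch of a regex-based parse (the pattern is an assumption about how your strings are formatted, i.e. "PAI &lt;number&gt; &lt;name&gt; : &lt;value&gt;"):</p>
<pre><code>import re

# Assumed format: "PAI <number> <name> : <value>"
pattern = re.compile(r"PAI\s+\d+\s+(?P<name>.+?)\s*:\s*(?P<value>[\d.]+)")
m = pattern.match('PAI 3 Capsule : 4.428571428571429')
print(m.group('name'), m.group('value'))  # Capsule 4.428571428571429
</code></pre>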
|
python|pandas|dataframe
| 1 |
1,906,946 | 62,453,430 |
TypeError: forward() missing 1 required positional argument with Tensorboard PyTorch
|
<p>I am trying to write my model to <code>tensorboard</code> with the following code:</p>
<pre><code>model = SimpleLSTM(4, HIDDEN_DIM, HIDDEN_LAYERS, 1, BATCH_SIZE, device)
writer = tb.SummaryWriter(log_dir=tb_path)
sample_data = iter(trainloader).next()[0]
writer.add_graph(model, sample_data.to(device))
</code></pre>
<p>I get the error: <code>TypeError: forward() missing 1 required positional argument: 'batch_size'</code></p>
<p>My model looks like this:</p>
<pre><code>class SimpleLSTM(nn.Module):
def __init__(self, input_dims, hidden_units, hidden_layers, out, batch_size, device):
super(SimpleLSTM, self).__init__()
self.input_dims = input_dims
self.hidden_units = hidden_units
self.hidden_layers = hidden_layers
self.batch_size = batch_size
self.device = device
self.lstm = nn.LSTM(self.input_dims, self.hidden_units, self.hidden_layers,
batch_first=True, bidirectional=False)
self.output_layer = nn.Linear(self.hidden_units, out)
def init_hidden(self, batch_size):
hidden = torch.rand(self.hidden_layers, batch_size, self.hidden_units, device=self.device, dtype=torch.float32)
cell = torch.rand(self.hidden_layers, batch_size, self.hidden_units, device=self.device, dtype=torch.float32)
hidden = nn.init.xavier_normal_(hidden)
cell = nn.init.xavier_normal_(cell)
return (hidden, cell)
def forward(self, input, batch_size):
        hidden = self.init_hidden(batch_size)  # batch_size may differ for an incomplete final batch
lstm_out, (h_n, c_n) = self.lstm(input, hidden)
raw_out = self.output_layer(h_n[-1])
return raw_out
</code></pre>
<p>How can I write this model to TensorBoard?</p>
|
<p>Your model takes two arguments <code>input</code> and <code>batch_size</code>, but you only provide one argument for <code>add_graph</code> to call your model with.</p>
<p>The inputs (second argument to <code>add_graph</code>) should be a tuple with the <code>input</code> and the <code>batch_size</code>:</p>
<pre class="lang-py prettyprint-override"><code>writer.add_graph(model, (sample_data.to(device), BATCH_SIZE))
</code></pre>
<p>You don't really need to provide the batch size to the forward method, because you can infer it from the input. As your LSTM uses <code>batch_first=True</code>, it means that the input is required to have size <em>[batch_size, seq_len, num_features]</em>, therefore the size of the first dimension is the current batch size.</p>
<pre class="lang-py prettyprint-override"><code>def forward(self, input):
batch_size = input.size(0)
# ...
</code></pre>
|
python|pytorch|lstm|tensorboard
| 1 |
1,906,947 | 62,119,496 |
Pyautogui image recognition returns None no matter what I do
|
<p>I have been trying to get pyautogui to locate graphically a folder on my desktop but it fails no matter what I do:</p>
<p>This is my code (int_auto.py)</p>
<pre><code>import pyautogui
button = pyautogui.locateCenterOnScreen('/Users/cadellteng/Desktop/Test-ground/randompy/pyxell2.png')
print(button)
</code></pre>
<p>I literally read through every SO thread I could find on this topic and some suggested that the photo needs to be lossless, so I used the cmd+shift+4 command on Mac to achieve this. A screenshot of my desktop and the reference image I used is attached here.</p>
<p><a href="https://i.stack.imgur.com/NwVev.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NwVev.jpg" alt="desktop_image"></a>
Note: This photo was converted to jpg because SO only allows photos up to 2MiB to be uploaded</p>
<p><a href="https://i.stack.imgur.com/AR5o6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AR5o6.png" alt="pyautogui reference image"></a></p>
<p>Other things I tried are:
1. using locateAllOnScreen
2. grayscale = True
3. uninstalling and reinstalling pyautogui</p>
<p>But no matter what I did, I couldn't seem to get the outcome I want. If it helps, I use a dual-screen setup and my code editor (VS Code) is on the secondary screen. What you see here as my desktop is my primary screen.</p>
<p>When the program is running nothing is blocking the folder.</p>
<p>There is also a warning on my terminal which I'm not sure if it's going to be useful:</p>
<pre><code>/Users/cadellteng/Desktop/Test-ground/randompy/env/lib/python3.8/site-packages/rubicon/objc/ctypes_patch.py:21:
UserWarning: rubicon.objc.ctypes_patch has only been tested with Python 3.4 through 3.7.
You are using Python 3.8.2. Most likely things will work properly, but you may experience crashes if Python's internals have changed significantly. warnings.warn(
</code></pre>
<p>Do let me know if you require additional information.</p>
<p>EDIT June 01, 2020: I thought that maybe pyautogui was not able to detect the folder because there are multiple folders that all look the same. So I got this image off the web and tried to search for it while it was open on my desktop with nothing else blocking it. But even then it was not able to find it.
<a href="https://i.stack.imgur.com/7pQnP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7pQnP.png" alt="robot.png"></a></p>
<p>I'm not sure if I am doing something wrong here but with each try, I am becoming more convinced that this API is broken.</p>
|
<p>You have a MacBook, so I think it's because of the pixel density of the (Retina) screen and the resolution pyautogui detects. You can check by calling <code>image = pyautogui.locateCenterOnScreen('image.png')</code> and then <code>print(image)</code>.
If you see coordinates above 2000, it is probably because of the resolution. I suggest a workaround if you want to click the image, halving the coordinates: <code>pyautogui.click(image.x/2, image.y/2, clicks=1, interval=10, button='left')</code>. Also try cropping the reference images as small as possible!</p>
|
python-3.x|pyautogui
| 0 |
1,906,948 | 62,419,253 |
generate random integer with numpy WITH include constraints
|
<p>I am working with numpy and simpy on a simulation. I simulate over a 12 months periods.</p>
<pre><code>env.run(until=12.0)
</code></pre>
<p>I need to generate random demand values that are between 2 and 50, occuring at random moments within the 12 periods length of the env.</p>
<pre><code>d = np.random.randint(2,50) #generate random demand values
</code></pre>
<p>Now the values are passed at random intervals into the 12-month simpy environment:</p>
<pre><code>0.2 40
0.65 21
0.67 03
1.01 4
1.1 19
...
11.4 49
11.9. 21
</code></pre>
<p>What I am trying to achieve is to constrain the numpy generation to make sure that the sum of the values generated in each period (0, 1, 2, ...) does not exceed 100.</p>
<p>To put it in different words, I am trying to generate random quantities at random intervals along a 12-period axis, while making sure that the sum of these quantities within any one period does not exceed a given value.</p>
<p>I cannot find anything online about tweaking numpy's randint function to do that; would someone have a hint?</p>
|
<p>I do not understand your question completely. If you are looking for your simulation to give you an average of 100 per month, then the demand values should not be restricted to [2, 50], as the maximum possible average would be 50. I think you might be looking for this: <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.normal.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/random/generated/numpy.random.normal.html</a> </p>
<p>I won't go into the math, but drawing random numbers from a normal distribution and taking their mean will give you (approximately) the mean of the normal distribution, which is a parameter you can choose.</p>
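<p>As a rough sketch of that idea (the number of demand events per month and the spread below are assumptions, not part of your setup), you could size the per-event mean so that the expected monthly total is 100:</p>
<pre><code>import numpy as np

rng = np.random.default_rng()
events_per_month = 5                     # assumption: ~5 demand events per month
mean_per_event = 100 / events_per_month  # expected monthly sum is then 100
demands = rng.normal(loc=mean_per_event, scale=5, size=events_per_month)
demands = np.clip(demands, 2, 50)        # keep each value inside [2, 50]
print(demands, demands.sum())
</code></pre>
<p>Note that clipping slightly biases the sum, so this only keeps the monthly total close to 100 on average rather than strictly below it.</p>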
|
python|numpy|simpy
| 0 |
1,906,949 | 35,682,457 |
Display textfield with line breaks in Flask
|
<p>I have a form with textfields in Flask, and users enter text line by line. When I look at the DB the entries are stored with line breaks, but when I display them on my website, in Flask templates, the sentences are just joined together and I can't display the text line by line anymore.</p>
<p>To illustrate, user enters text as</p>
<p>1</p>
<p>2</p>
<p>3</p>
<p>However, it looks like in my page as:</p>
<p>1 2 3</p>
<p>This is my code in the template:</p>
<pre><code>{{ vroute.comments }}
</code></pre>
|
<p>Try</p>
<pre><code><pre>{{ vroute.comments }}</pre>
</code></pre>
<p>HTML collapses whitespace unless you indicate otherwise.</p>
|
python|flask|jinja2
| 6 |
1,906,950 | 35,750,540 |
Why Does Hex() Function returns a string instead an int hex?
|
<p>I don't know why the hex function returns a string like '0x41' instead of 0x41</p>
<p><a href="https://i.stack.imgur.com/0u2la.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0u2la.png" alt="enter image description here"></a></p>
<p>I need to convert an ASCII value into hex. But I want it in 0x int format, not as a '0x' string.</p>
<pre><code>ascii = 360
hexstring = hex(ascii)
hexstring += 0x41 # i cant do this because hexstring is a string not a int hex
</code></pre>
<p>How can I get an int hex? Thanks</p>
|
<p>There is no int hex object. There is only an <em>alternative syntax</em> to create integers:</p>
<pre><code>>>> 0x41
65
</code></pre>
<p>You could have used <code>0o1010</code> too, to get the same value. Or use <code>0b1000001</code> to specify it in binary; they are all <em>the exact same numeric value</em> to Python; they are all just different forms to specify an integer value in your code.</p>
<p>Simply keep <code>ascii</code> as an integer and sum your hex notation values with that:</p>
<pre><code>>>> ascii = 360
>>> ascii += 0x41
>>> ascii
425
</code></pre>
<p><code>hex()</code> produces a string that can be interpreted by a Python program in the same manner, and is usually used when debugging code or quick presentation output (but you should use <code>format(number, 'x')</code> if you want to produce end-user output without the <code>0x</code> prefix). It is not needed to work with integers.</p>
|
python|string|int|hex|ascii
| 9 |
1,906,951 | 58,809,810 |
Python fails to run from shell despite being installed and py command running
|
<p>I am trying to follow the tutorial of setting up an app for Django, but cannot progress past this part;<br>
"python manage.py runserver" from <a href="https://docs.djangoproject.com/en/2.2/intro/tutorial01/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/2.2/intro/tutorial01/</a> </p>
<pre><code>PS C:\Users\Elias\Documents\WebApp\mysite> ls
Directory: C:\Users\Elias\Documents\WebApp\mysite
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2019-11-11 4:32 PM mysite
-a---- 2019-11-11 4:32 PM 647 manage.py
PS C:\Users\Elias\Documents\WebApp\mysite> py manage.py runserver
PS C:\Users\Elias\Documents\WebApp\mysite> python manage.py runserver
Program 'python' failed to run: No application is associated with the specified file for this operationAt line:1 char:1
+ python manage.py runserver
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~.
At line:1 char:1
+ python manage.py runserver
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceUnavailable: (:) [], ApplicationFailedException
+ FullyQualifiedErrorId : NativeCommandFailed
PS C:\Users\Elias\Documents\WebApp\mysite> py -m manage.py runserver
PS C:\Users\Elias\Documents\WebApp\mysite> python ./manage.py runserver
Program 'python' failed to run: No application is associated with the specified file for this operationAt line:1 char:1
+ python ./manage.py runserver
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~.
At line:1 char:1
+ python ./manage.py runserver
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceUnavailable: (:) [], ApplicationFailedException
+ FullyQualifiedErrorId : NativeCommandFailed
PS C:\Users\Elias\Documents\WebApp\mysite> ./manage.py runserver
PS C:\Users\Elias\Documents\WebApp\mysite> python -V
Program 'python' failed to run: No application is associated with the specified file for this operationAt line:1 char:1
+ python -V
+ ~~~~~~~~~.
At line:1 char:1
+ python -V
+ ~~~~~~~~~
+ CategoryInfo : ResourceUnavailable: (:) [], ApplicationFailedException
+ FullyQualifiedErrorId : NativeCommandFailed
PS C:\Users\Elias\Documents\WebApp\mysite> py -V
Python 3.8.0
PS C:\Users\Elias\Documents\WebApp\mysite> python -m django --version
Program 'python' failed to run: No application is associated with the specified file for this operationAt line:1 char:1
+ python -m django --version
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~.
At line:1 char:1
+ python -m django --version
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceUnavailable: (:) [], ApplicationFailedException
+ FullyQualifiedErrorId : NativeCommandFailed
PS C:\Users\Elias\Documents\WebApp\mysite> py -m django --version
2.2.7
PS C:\Users\Elias\Documents\WebApp\mysite> py ./manage.py runserver
PS C:\Users\Elias\Documents\WebApp\mysite> py ./manage.py runserver 8080
</code></pre>
<p>As you can see, Django is installed (but you can only see the version using 'py' and not 'python', for whatever bizarre reason). Even so, 'py ./manage.py runserver' was unable to run a server on 127.0.0.1 on either port 8000 or 8080. All environment variables are set up in<br>
%LOCALAPPDATA%\Programs\Python\Python38-32;%LOCALAPPDATA%\Programs\Python\Python38-32\Lib\site-packages;%LOCALAPPDATA%\Programs\Python\Python38-32\Scripts. </p>
|
<p>In your environment variables, you must set the path to the <code>\Python38-32\</code> folder itself rather than to <code>site-packages</code>; the reason is that there is a <code>python.exe</code> file there which lets you run the <code>python manage.py runserver</code> command.</p>
<p>Also make sure you have Django installed; the recommended way is to use a <code>virtualenv</code>, but if you are a beginner a global install is fine.</p>
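<p>As a sketch (assuming the default per-user install location shown in your question), the Path entries would look like:</p>
<pre><code>%LOCALAPPDATA%\Programs\Python\Python38-32
%LOCALAPPDATA%\Programs\Python\Python38-32\Scripts
</code></pre>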
|
python|django|python-3.x
| 1 |
1,906,952 | 59,781,061 |
How to create new column from groupby column and conditional check on another column in pandas?
|
<p>I have a pandas dataframe, </p>
<pre><code>data = pd.DataFrame([['TRAN','2019-01-06T21:44:09Z','T'],
['LMI','2019-01-06T19:44:09Z','U'],
['ARN','2019-01-02T19:44:09Z','V'],
['TRAN','2019-01-08T06:44:09Z','T'],
['TRAN','2019-01-06T18:44:09Z','U'],
['ARN','2019-01-04T19:44:09Z','V'],
['LMI','2019-01-05T16:34:09Z','U'],
['ARN','2019-01-08T19:44:09Z','V'],
['TRAN','2019-01-07T14:44:09Z','T'],
['TRAN','2019-01-06T11:44:09Z','U'],
['ARN','2019-01-10T19:44:09Z','V'],
],
columns=['Type', 'Date', 'Decision'])
</code></pre>
<p>I need to group by the Type column, find the min Date of each type, and create a new column that marks the row with the min date as "First" and the other rows as "Later".</p>
<p>I can do <code>data.groupby('Type')</code> based on the Type, but I don't know how to find <code>min(data['Date'])</code> in the grouped DataFrame and create a new column.</p>
<p>My final data should look like</p>
<pre><code>['TRAN','2019-01-06T21:44:09Z','T','Later'],
['LMI','2019-01-06T19:44:09Z','U','Later'],
['ARN','2019-01-02T19:44:09Z','V','First'],
['TRAN','2019-01-08T06:44:09Z','T','Later'],
['TRAN','2019-01-06T18:44:09Z','U','Later'],
['ARN','2019-01-04T19:44:09Z','V','Later'],
['LMI','2019-01-05T16:34:09Z','U','First'],
['ARN','2019-01-08T19:44:09Z','V','Later'],
['TRAN','2019-01-07T14:44:09Z','T','Later'],
['TRAN','2019-01-06T11:44:09Z','U','First'],
['ARN','2019-01-10T19:44:09Z','V','Later'],
],
columns=['Type', 'Date', 'Decision']
</code></pre>
|
<p>IIUC you can use this:</p>
<pre><code>df.groupby('Type').agg(First=('Date','first'), Later=('Date','last')).reset_index()
</code></pre>
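<p>If what you actually need is a row-level flag marking the earliest <code>Date</code> within each <code>Type</code>, a sketch (assuming <code>Date</code> is parsed as a datetime; the column name <code>New</code> is just a placeholder) could compare each row against the group minimum:</p>
<pre><code>import numpy as np

df['Date'] = pd.to_datetime(df['Date'])
df['New'] = np.where(df['Date'].eq(df.groupby('Type')['Date'].transform('min')),
                     'First', 'Later')
</code></pre>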
|
python|pandas
| 3 |
1,906,953 | 59,606,877 |
how to find the confusion matrix of 5 different classification?
|
<p>I used this to get the confusion matrix for 5 different classifications:</p>
<pre><code>y_test_non_category = [ np.argmax(t) for t in y_test ]
y_predict_non_category = [ np.argmax(t) for t in y_pred ]
from sklearn.metrics import confusion_matrix
conf_mat = confusion_matrix(y_test_non_category, y_predict_non_category)
</code></pre>
<p>and got</p>
<pre><code>array([[ 76, 152, 4, 130, 56, 224],
[ 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0]])
</code></pre>
<p>How do I calculate precision and F1 score? <a href="https://i.stack.imgur.com/N6je2.png" rel="nofollow noreferrer">confusion matrix method</a></p>
|
<p>You can use <code>sklearn.metrics.classification_report</code> to get the respective metrics you need.</p>
<pre><code>>>> y_true = [0, 1, 2, 2, 2]
>>> y_pred = [0, 0, 2, 2, 1]
>>> target_names = ['class 0', 'class 1', 'class 2']
>>> print(classification_report(y_true, y_pred, target_names=target_names))
precision recall f1-score support
<BLANKLINE>
class 0 0.50 1.00 0.67 1
class 1 0.00 0.00 0.00 1
class 2 1.00 0.67 0.80 3
<BLANKLINE>
accuracy 0.60 5
macro avg 0.50 0.56 0.49 5
weighted avg 0.70 0.60 0.61 5
<BLANKLINE>
>>> y_pred = [1, 1, 0]
>>> y_true = [1, 1, 1]
>>> print(classification_report(y_true, y_pred, labels=[1, 2, 3]))
precision recall f1-score support
<BLANKLINE>
1 1.00 0.67 0.80 3
2 0.00 0.00 0.00 0
3 0.00 0.00 0.00 0
<BLANKLINE>
micro avg 1.00 0.67 0.80 3
macro avg 0.33 0.22 0.27 3
weighted avg 1.00 0.67 0.80 3
</code></pre>
|
python|machine-learning
| 1 |
1,906,954 | 59,836,084 |
Why lambda function in list comprehension is slower than in map?
|
<p>I made a test</p>
<pre><code>%timeit list(map(lambda x:x,range(100000)))
%timeit [(lambda x:x)(i) for i in range(100000)]
</code></pre>
<p>gives</p>
<pre><code>29 ms ± 4.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
51.2 ms ± 3.76 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>Why is the lambda function slower in a list comprehension than in map in Python?</p>
|
<p>Because in your first code snippet the <code>lambda</code> is created only once and executed 100000 times while on the second it is created and executed in every iteration.</p>
<p>To be honest I am surprised that the difference is not even bigger, but your two timings should grow further apart the longer the iterable is.</p>
<hr>
<p>As a sidenote, notice that even if you change the <code>lambda</code> to a built-in function that does not have to be created first but rather only looked-up, you still get the same trend:</p>
<pre><code>> py -m timeit "list(map(float, range(10000)))"
200 loops, best of 5: 1.78 msec per loop
> py -m timeit "[float(x) for x in range(10000)]"
100 loops, best of 5: 2.35 msec per loop
</code></pre>
<p>An improvement can be achieved by binding the function to a variable, but the <code>list(map())</code> scheme is still faster.</p>
<pre><code>> py -m timeit "r=float;[r(x) for x in range(10000)]"
100 loops, best of 5: 1.93 msec per loop
</code></pre>
<p>As to <em>why</em> the binding to a local variable is faster, you can find more information <a href="http://stupidpythonideas.blogspot.com/2015/12/how-lookup-works.html" rel="nofollow noreferrer">here</a>.</p>
|
python|lambda
| 6 |
1,906,955 | 60,327,288 |
How do I pass a script to Sage before running it in interactive mode?
|
<p>In my Python workflow, I commonly use the <code>-i</code> flag to open a Python interpreter which first executes the script I am working on, then allows me to interact with it. For example, in <code>test.py</code>:</p>
<pre><code>#!/usr/bin/env python
print("Hello World")
x=2
</code></pre>
<p>When I run <code>python -i test.py</code> from the command line, I receive the following output:</p>
<pre><code>Hello World!
>>>
</code></pre>
<p>Interactive mode is enabled, yet all definitions made in the script are available to me. Typing <code>x</code> will yield <code>2</code>.</p>
<p>Is there an analogous process for Sagemath? I have tried the <code>-c</code> flag, but the command <code>sage -c "attach('test.sage')"</code> fails to enter interactive mode after loading the module I am working on.</p>
<p>Ideally there would be a solution simpler than the one outlined <a href="https://stackoverflow.com/a/14397442/1438723">which uses <code>expect</code></a>, but if that is indeed the best solution, how would one go about using <code>expect</code> to make Sagemath start an interactive session after loading a specific file?</p>
|
<p>There is a startup file you can make called <code>init.sage</code> for every interactive Sage session. See <a href="http://doc.sagemath.org/html/en/faq/faq-usage.html#can-i-make-sage-automatically-execute-commands-on-startup" rel="nofollow noreferrer">this FAQ</a> and <a href="http://doc.sagemath.org/html/en/reference/repl/startup.html#the-init-sage-script" rel="nofollow noreferrer">this documentation</a>. Is that what you are looking for? You are right that <code>sage -c</code> only computes.</p>
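<p>For example, a minimal <code>init.sage</code> (the file path below is just an assumption) that attaches your work-in-progress file before every interactive session could look like:</p>
<pre><code># ~/.sage/init.sage
attach('/path/to/test.sage')
</code></pre>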
|
python|interactive|sage
| 1 |
1,906,956 | 59,928,101 |
Fetch text around the parentheses from a pandas dataframe column and copy the output to the same column
|
<p>I want to fetch only the text inside the parentheses and keep this text in the same column. </p>
<p>I have the following dataframe df:</p>
<pre><code>id feature
1 mutation(MI:0118)
2 mutation(MI:0119)
3 mutation(MI:01120)
</code></pre>
<p>The expected output is:</p>
<pre><code>id feature
1 MI:0118
2 MI:0119
3 MI:01120
</code></pre>
<p>I tried the following regex but it is not allowing me to copy it to the same column.</p>
<pre><code>df['feature'] = df['feature'].str.extract(r"\((.*?)\)", expand=False)
</code></pre>
<p>I am getting the following warning, and the above code is converting all the values in the feature column to NaN:</p>
<pre><code>/home/lib/python2.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
</code></pre>
<p>Thanks</p>
|
<p>Try using the below code with a different pattern:</p>
<pre><code>df['feature'] = df['feature'].str.extract('.*\((.*)\).*', expand=False)
print(df)
</code></pre>
<p>Output:</p>
<pre><code> id feature
0 1 MI:0118
1 2 MI:0119
2 3 MI:01120
</code></pre>
<p><a href="https://regex101.com/r/ANq7Cm/1" rel="nofollow noreferrer">Regex101</a></p>
|
python|regex
| 1 |
1,906,957 | 60,112,796 |
Convert list comprehension python to javascript
|
<p>I am trying to convert some python code into JavaScript code.</p>
<p>I have a python list comprehension that takes the parameter <code>a</code> as input. <code>a</code> is a string such as "bac".</p>
<pre><code>asubstring = [a[i:i + j] for j in range(1, len(a) + 1) for i in range(len(a) - j + 1)]
</code></pre>
<p>The output is: <code>['b', 'a', 'c', 'ba', 'ac', 'bac']</code></p>
<p>I converted it into JavaScript by doing:</p>
<pre><code>let j = 1
let i = 0
while(j < aTrimmed.length+1) {
while(i < aTrimmed.length - j + 1) {
aSubstring.push(aTrimmed.slice(i, i+j))
i++
}
j++
}
</code></pre>
<p><strong>However</strong>, my output is: <code>[ 'b', 'a', 'c' ]</code></p>
<p>I am not sure what I am missing in the two while loops.</p>
|
<p>Use <code>for</code> loop instead of <code>while</code> because you are forgetting reset you <code>i</code> index to 0 at the end of the second loop.</p>
<pre><code>let aTrimmed = "bac";
let aSubstring = [];
for(let j = 1; j < aTrimmed.length+1; j++) {
for(let i = 0; i < aTrimmed.length - j + 1; i++) {
aSubstring.push(aTrimmed.slice(i, i + j));
}
}
alert(aSubstring);
</code></pre>
|
javascript|python
| 1 |
1,906,958 | 60,087,760 |
Run multiple web browsers simultaneously Selenium Python 3
|
<p>I'm attempting to open multiple chrome drivers at once and have it run as fast as possible.
It opens the first page, and it has to load completely before the second function executes with "browser_2".<br>
Is there a way to make these functions load at the same time? </p>
<p>Note: I'm hiding "Proxy_list" from my post to protect those IPs.</p>
<pre><code>browser_1 = 0
browser_2 = 1
browser_3 = 2
browser_4 = 3
browser_5 = 4
browser_6 = 5
browser_7 = 6
browser_8 = 7
browser_9 = 8
browser_10 = 9
Link_1 = "https://www.google.com"
session_list = [browser_1, browser_2, browser_3, browser_4, browser_5, browser_6, browser_7, browser_8, browser_9, browser_10]
def create_browser(browser):
options = webdriver.ChromeOptions()
options.add_argument('--proxy-server=%s' % (Proxy_list[browser]))
options.add_argument("start-maximized")
options.add_argument("disable-infobars")
options.add_argument("--disable-extensions")
print("opening session #" + str(browser+ 1))
# Defines Browser
browser = webdriver.Chrome(options=options)
browser.get(Link_1)
return browser
create_browser(browser_1)
create_browser(browser_2)
create_browser(browser_3)
create_browser(browser_4)
create_browser(browser_5)
</code></pre>
|
<p>Here is the sample code.</p>
<p>I have tried and it would open multiple browsers at same time.</p>
<pre><code>from selenium import webdriver
from multiprocessing import Process, Pipe, Pool
def create_browser(num):
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
options.add_argument("disable-infobars")
options.add_argument("--disable-extensions")
# Defines Browser
browser = webdriver.Chrome(options=options)
browser.get('https://mail.google.com/')
return browser
pool = Pool(processes=10) # Maximum number of browsers opened at same time
for i in range(0, 5): # Five browsers will be created
    async_result = pool.apply_async(create_browser, args=(i,))  # note the one-element tuple
pool.close()
pool.join()
</code></pre>
<hr>
<p>Updated:</p>
<p>You can pass parameter like below.</p>
<pre><code>def test_function(x, y, z=0):
# do something
...
async_result = pool.apply_async(test_function, args=(1, 2), kwds={'z':3}) # x=1, y=2, z=3
</code></pre>
<hr>
<p>In your case:</p>
<pre><code>session_list = [browser_1, browser_2, browser_3, browser_4, browser_5, browser_6, browser_7, browser_8, browser_9, browser_10]
pool = Pool(processes=10) # Maximum number of browsers opened at same time
for i in range(0, len(session_list)): # Ten browsers will be created
    async_result = pool.apply_async(create_browser, args=(session_list[i],))
pool.close()
pool.join()
</code></pre>
|
python-3.x|selenium|selenium-webdriver|selenium-chromedriver
| 1 |
1,906,959 | 5,634,291 |
Django code only works in debug
|
<p>Very confused about this one. This code in views.py works, but only when I'm debugging using Pycharm. If I just do <code>runserver</code> I get a 500 error. </p>
<p>views.py:</p>
<pre><code>def add_post(request):
if request.method == 'POST':
form = PostForm(request.POST)
cd = form.cleaned_data
if form.is_valid():
print "valid"
post = Post(nickname=cd['nickname'], body=cd['body'], category=cd['category'])
post.save()
return HttpResponse("success")
return HttpResponseServerError("fail")
</code></pre>
<p>Error as seen in Chrome Inspector</p>
<pre><code> <th>Exception Value:</th>
<td><pre>&#39;PostForm&#39; object has no attribute &#39;cleaned_data&#39;</pre></td>
</code></pre>
<p>No attribute cleaned_data? But why...?</p>
|
<p>The <code>cleaned_data</code> attribute becomes available after calling <code>is_valid()</code> on a form. You should move <code>cd = form.cleaned_data</code> to below the <code>if</code>.</p>
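<p>A minimal sketch of the reordered view (same logic as in your question, just with the line moved):</p>
<pre><code>def add_post(request):
    if request.method == 'POST':
        form = PostForm(request.POST)
        if form.is_valid():
            cd = form.cleaned_data  # now safe: is_valid() has populated it
            post = Post(nickname=cd['nickname'], body=cd['body'], category=cd['category'])
            post.save()
            return HttpResponse("success")
    return HttpResponseServerError("fail")
</code></pre>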
|
python|django|debugging
| 4 |
1,906,960 | 6,282,654 |
wxPython - Getting attribute from another class?
|
<p>I want to update the self.CreateStatusBar() in MainWindow from MainPanel. And update the self.textOutput in MainPanel from MainWindow.</p>
<p>Been reading a lot, but still can't grasp it. Please help me. =)</p>
<pre><code>import wx
ID_EXIT = 110
class MainPanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
self.buttonRun = wx.Button(self, label="Run")
self.buttonRun.Bind(wx.EVT_BUTTON, self.OnRun )
self.buttonExit = wx.Button(self, label="Exit")
self.buttonExit.Bind(wx.EVT_BUTTON, self.OnExit)
self.labelChooseRoot = wx.StaticText(self, label ="Root catalog: ")
self.labelScratchWrk = wx.StaticText(self, label ="Scratch workspace: ")
self.labelMergeFile = wx.StaticText(self, label ="Merge file: ")
self.textChooseRoot = wx.TextCtrl(self, size=(210, -1))
self.textChooseRoot.Bind(wx.EVT_LEFT_UP, self.OnChooseRoot)
self.textScratchWrk = wx.TextCtrl(self, size=(210, -1))
self.textMergeFile = wx.TextCtrl(self, size=(210, -1))
self.textOutput = wx.TextCtrl(self, style=wx.TE_MULTILINE | wx.TE_READONLY)
self.sizerF = wx.FlexGridSizer(3, 2, 5, 5)
self.sizerF.Add(self.labelChooseRoot) #row 1, col 1
self.sizerF.Add(self.textChooseRoot) #row 1, col 2
self.sizerF.Add(self.labelScratchWrk) #row 2, col 1
self.sizerF.Add(self.textScratchWrk) #row 2, col 2
self.sizerF.Add(self.labelMergeFile) #row 3, col 1
self.sizerF.Add(self.textMergeFile) #row 3, col 2
self.sizerB = wx.BoxSizer(wx.VERTICAL)
self.sizerB.Add(self.buttonRun, 1, wx.ALIGN_RIGHT|wx.ALL, 5)
self.sizerB.Add(self.buttonExit, 0, wx.ALIGN_RIGHT|wx.ALL, 5)
self.sizer1 = wx.BoxSizer()
self.sizer1.Add(self.sizerF, 0, wx.ALIGN_RIGHT | wx.EXPAND | wx.ALL, 10)
self.sizer1.Add(self.sizerB, 0, wx.ALIGN_RIGHT | wx.EXPAND | wx.ALL)
self.sizer2 = wx.BoxSizer()
self.sizer2.Add(self.textOutput, 1, wx.EXPAND | wx.ALL, 5)
self.sizerFinal = wx.BoxSizer(wx.VERTICAL)
self.sizerFinal.Add(self.sizer1, 0, wx.ALIGN_RIGHT | wx.EXPAND | wx.ALL)
self.sizerFinal.Add(self.sizer2, 1, wx.ALIGN_RIGHT | wx.EXPAND | wx.ALL)
self.SetSizerAndFit(self.sizerFinal)
def OnChooseRoot(self, event):
dlg = wx.DirDialog(self, "Choose a directory:", style=wx.DD_DEFAULT_STYLE)
if dlg.ShowModal() == wx.ID_OK:
root_path = dlg.GetPath()
self.textChooseRoot.SetValue(root_path)
dlg.Destroy()
def OnRun(self, event):
#First check if any of the boxes is empty
pass
def OnExit(self, event):
self.GetParent().Close()
class MainWindow(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, title="IndexGenerator", size=(430, 330),
style=((wx.DEFAULT_FRAME_STYLE | wx.NO_FULL_REPAINT_ON_RESIZE |
wx.STAY_ON_TOP) ^ wx.RESIZE_BORDER))
self.CreateStatusBar()
self.fileMenu = wx.Menu()
self.fileMenu.Append(ID_EXIT, "E&xit", "Exit the program")
self.menuBar = wx.MenuBar()
self.menuBar.Append(self.fileMenu, "&File")
self.SetMenuBar(self.menuBar)
wx.EVT_MENU(self, ID_EXIT, self.OnExit)
self.Panel = MainPanel(self)
self.CentreOnScreen()
self.Show()
def OnExit(self, event):
self.Close()
if __name__ == "__main__":
app = wx.App(False)
frame = MainWindow()
app.MainLoop()
</code></pre>
|
<p>Generally speaking, <strong>UI elements shouldn't directly modify one another</strong>. In an event-driven application, UI elements can listen for events, and then perform some action following the event. Example: if the user does something to element B, element A can be notified of the event and then take an action.</p>
<p>Read more about events in wxPython here:</p>
<p><a href="http://wiki.wxpython.org/AnotherTutorial#Events" rel="nofollow">http://wiki.wxpython.org/AnotherTutorial#Events</a></p>
|
python|class|attributes|get|wxpython
| 2 |
1,906,961 | 67,774,350 |
webscraping the physiotherapie praxis list and expand all items list
|
<p>Here I am trying to create a list of physiotherapists from the German yellow pages. The actual number is 90+, but here I am getting 52 results, of which 50 belong to the list and 2 are unwanted items. The yellow markings are the unwanted items. How can I remove those from the list and expand it all so that I get the full list from that page?</p>
<p>web_address ='https://www.gelbeseiten.de/Suche/Physiotherapie%20praxis/Rostock'</p>
<pre><code>business_name = soup.find_all('articles', h2 ='data-wipe-name="Title"')
business_name = soup.find_all('h2')
for name in business_name:
print(name.get_text())
print(business_name)
</code></pre>
<p><a href="https://i.stack.imgur.com/RDVrt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RDVrt.jpg" alt="Web-scraping List of Physiotherapist practitioner " /></a></p>
|
<p>Probably it is picking up other h2 tags, since your method is <code>find_all</code> on that tag. You can specify <code>attrs</code> to exclude those 2 unwanted items:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
res=requests.get("https://www.gelbeseiten.de/Suche/Physiotherapie%20praxis/Rostock")
soup=BeautifulSoup(res.text,"html.parser")
business_name = soup.find_all('h2',attrs={"data-wipe-name":"Titel"})
for name in business_name:
print(name.get_text())
print(len(business_name))
</code></pre>
<p>Output:</p>
<pre><code>Göllner Sabine Krankengymnastik & Physiotherapie
Friemel Physiotherapie Inh. B. Neumann Krankengymnastik & Physiotherapie
Nehrenberg Dorothee Physiotherapie
...
50
</code></pre>
|
python|beautifulsoup
| 3 |
1,906,962 | 67,769,037 |
How to make the Logistic Regression model work for other files to do prediction in Python?
|
<p>I have created a Logistic Regression model for train.csv which uses its data to do the prediction. How can I use the same model to do the prediction for test.csv? Sorry I am very new to Python.</p>
<p>Here is the screen capture of the last few commands and their result for train.csv. When I try to test test.csv I get the following error on the last statement:
<a href="https://pasteboard.co/K4p4aZA.jpg" rel="nofollow noreferrer">https://pasteboard.co/K4p4aZA.jpg</a></p>
<p>For the test.csv, train.csv and anaconda notebook you can visit: <a href="https://drive.google.com/drive/u/1/folders/1U6TcJz8fp7FqbxpUcqRmAU-HSL42VN-S" rel="nofollow noreferrer">https://drive.google.com/drive/u/1/folders/1U6TcJz8fp7FqbxpUcqRmAU-HSL42VN-S</a></p>
<pre><code>import pandas as pd
import numpy as np
df_test=pd.read_csv("test.csv")
df_train=pd.read_csv("train.csv")
# many lines in between for details please read the notebook in google drive
# below is the last few sentence
from sklearn.linear_model import LogisticRegression
lr=LogisticRegression()
lr.fit(X_train,y_train)
#prediction
df_result=pd.DataFrame(y_train)
df_result['predicted']=lr.predict_proba(X_train)[:,1]
</code></pre>
|
<p>I didn't read your actual code in google drive, but generally you can do something like this.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
df_test=pd.read_csv("test.csv")
df_train=pd.read_csv("train.csv")
# I would assume df_test and df_train have exactly the same structure, as it should be.
# You would have some process to clean up your input data
# Abstracted into a function below
def fun(df: pd.DataFrame):
# Do your cleaning and stuff
return x, y
x_train, y_train = fun(df_train)
from sklearn.linear_model import LogisticRegression
lr=LogisticRegression()
lr.fit(x_train,y_train )
#prediction
df_result=pd.DataFrame(y_train)
df_result['predicted']=lr.predict_proba(x_train)[:,1]
# Now you can do exactly the same thing on your `df_test`
x_test, y_test = fun(df_test)
test_result = lr.predict_proba(x_test)[:,1]
</code></pre>
<p>I would recommend taking a look at <code>sklearn.pipeline.Pipeline</code>, which builds one single <code>Pipeline</code> to handle both the preprocessing and the actual modeling.</p>
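<p>A minimal sketch of that (the scaler is only an illustrative preprocessing step, not something your data necessarily needs):</p>
<pre><code>from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipe = Pipeline([
    ("scale", StandardScaler()),   # illustrative preprocessing step
    ("clf", LogisticRegression()),
])
pipe.fit(x_train, y_train)
test_proba = pipe.predict_proba(x_test)[:, 1]
</code></pre>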
|
python|pandas|logistic-regression
| 0 |
1,906,963 | 30,381,581 |
SSLError: Can't connect to HTTPS URL because the SSL module is not available on google app engine
|
<p>I want to use the <a href="http://wechat-python-sdk.readthedocs.org/zh_CN/master/basic.html" rel="noreferrer">wechat sdk</a> to create a menu:</p>
<pre><code>WeChat.create_menu({
"button":[
{
"type":"click",
"name":"Daily Song",
"key":"V1001_TODAY_MUSIC"
},
{
"type":"click",
"name":" Artist Profile",
"key":"V1001_TODAY_SINGER"
},
{
"name":"Menu",
"sub_button":[
{
"type":"view",
"name":"Search",
"url":"http://www.soso.com/"
},
{
"type":"view",
"name":"Video",
"url":"http://v.qq.com/"
},
{
"type":"click",
"name":"Like us",
"key":"V1001_GOOD"
}]
}]
})
</code></pre>
<p>Currently it does not work because of this error:</p>
<pre><code>Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 267, in Handle
result = handler(dict(self._environ), self._StartResponse)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1519, in __call__
response = self._internal_error(e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1511, in __call__
rv = self.handle_exception(request, response, e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 547, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechatAPIHandler.py", line 72, in post
"key":"V1001_GOOD"
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 355, in create_menu
data=menu_data
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 949, in _post
**kwargs
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 907, in _request
"access_token": self.access_token,
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 849, in access_token
self.grant_token()
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 273, in grant_token
"secret": self.__appsecret,
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 935, in _get
**kwargs
File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 917, in _request
**kwargs
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/adapters.py", line 431, in send
raise SSLError(e, request=request)
SSLError: Can't connect to HTTPS URL because the SSL module is not available.
</code></pre>
<p>The python requests module is included in the app engine project, using Python 2.7. I have been looking for ways to solve this problem but have not found a clear way to solve it yet.</p>
|
<p>If you're using GAE's Sockets, you can get SSL support without any hacks by simply loading the SSL library.</p>
<p>Simply add this to your app.yaml file:</p>
<pre><code>libraries:
- name: ssl
version: latest
</code></pre>
<p>This is documented on <a href="https://cloud.google.com/appengine/docs/python/sockets/ssl_support">Google Cloud's OpenSSL Support documentation.</a> </p>
|
python|google-app-engine|ssl|python-requests|wechat
| 42 |
1,906,964 | 67,156,033 |
where should I found etag of rss feed
|
<p>Now I want to get only the changes of an RSS feed and not fetch repeated content. I found the doc at <a href="https://pythonhosted.org/feedparser/http-etag.html" rel="nofollow noreferrer">https://pythonhosted.org/feedparser/http-etag.html</a>, which tells me to send an etag to the server, but I cannot find any etag in the RSS feed response, and Last-Modified is also missing. Where do I find the etag and Last-Modified? This is the RSS address: 'https://blog.izgq.net/feed/'</p>
<p><a href="https://i.stack.imgur.com/x2bRa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x2bRa.png" alt="enter image description here" /></a></p>
|
<p>You are already making an HTTP request with a timeout, so read the etag and Last-Modified from the HTTP response headers:</p>
<pre><code>resp = requests.get(source.sub_url, headers=headers, timeout=15.0)
if resp.status_code == 304:
logger.info("RSS source not changed")
return
# Put it to memory stream object universal feedparser
content = BytesIO(resp.content)
if resp.headers.keys().__contains__("Etag"):
etag = resp.headers['Etag']
if resp.headers.keys().__contains__("Last-Modified"):
last_modified = resp.headers['Last-Modified']
</code></pre>
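<p>On the next fetch you can send those saved values back as validators (these are standard HTTP request headers), so the server can reply with 304 when nothing has changed; a small sketch:</p>
<pre><code>headers = {}
if etag:
    headers['If-None-Match'] = etag
if last_modified:
    headers['If-Modified-Since'] = last_modified
resp = requests.get(source.sub_url, headers=headers, timeout=15.0)
if resp.status_code == 304:
    logger.info("RSS source not changed")
</code></pre>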
|
python-3.x
| 0 |
1,906,965 | 66,356,016 |
Python @property vs @property.getter
|
<p>I'm writing a Python class, and it I'm using the <code>@property</code> decorator to create properties for the class.</p>
<p>I haven't found much in the documentation about this decorator, but from what I can gather from Stack Overflow and the instructions in my Python linter: in its entirety, a property created with the property decorator can take on the form <em>definition, getter, setter, deleter</em>, as shown here:</p>
<pre><code>@property
def name(self):
return self.__name
@name.getter
def name(self):
return self.__name
@name.setter
def name(self, value):
self.__name=value
@name.deleter
def name(self):
del self.__name
</code></pre>
<p>I'm not entirely sure what the very first block is for. The code inside it is the exact same as the <code>getter</code> function.</p>
<p>What is the first block for; how is it different from the <code>getter</code> block, and if it is not, can I remove either one of them?</p>
|
<p>Your code works the same, because the code for <code>@name.getter</code> is the same as the code for <code>@property</code>.</p>
<p><code>@property</code> is necessary, because it defines the property.</p>
<p>If you try:</p>
<pre><code>class MyClass:
@name.getter
def name(self):
return self.__name
</code></pre>
<p>You will get error message:</p>
<pre><code>Traceback (most recent call last):
File "/path/to/my/code/prop.py", line 1, in <module>
class MyClass:
File "/path/to/my/code/prop.py", line 3, in MyClass
@name.getter
NameError: name 'name' is not defined
</code></pre>
<p>So, when creating a property, you always start with:</p>
<pre><code>@property
def name(self):
return self.__name
</code></pre>
<p>This creates the property <code>name</code> and also the <code>getter</code> for this property, which you can see here:</p>
<pre><code>class MyClass:
@property
def name(self):
return self.__name
print(MyClass.name) # Note: we didn't create any objects
print(MyClass.name.getter)
</code></pre>
<p>The output will be:</p>
<pre><code><property object at 0x10beee050>
<built-in method getter of property object at 0x10beee050>
</code></pre>
<p>If you add <code>getter</code>, this will overwrite the original <code>getter</code>.</p>
<p>In your case, both getters were the same, so there was no changes. But try changing the code, making the new getter different:</p>
<pre><code>class MyClass:
@property
def name(self): # property and original getter
print('This one will never get called')
return self.__name
@name.getter
def name(self): # redefined getter
return 'hello '+self.__name
</code></pre>
<p>Now the class has a new getter, and if you create an object <code>obj</code> and then use <code>obj.name</code>, the new <code>getter</code> will be called, and not the original one.</p>
|
python|properties|attributes
| 8 |
1,906,966 | 72,307,428 |
How to get only the date from datetimefield grouped by date
|
<p>Struggling for many hours now and don't know what to do.</p>
<p>I have a database table with some stuff like below:</p>
<pre><code>id date_time uuid_id status_id room_label_id
1 2022-06-06 11:15:00 228451edc3fa499bb30919bf57b4cc32 0 1
2 2022-06-06 12:00:00 50e587d65f8449f88b49129143378922 0 1
3 2022-06-06 12:45:00 d1323b0ebd65425380a79c359a190ec6 0 1
4 2022-06-06 13:30:00 219af9da06ac4f459df2b0dc026fd269 0 1
</code></pre>
<p>With many more entries. date_time is a DateTimeField.
I want to display several columns from this table in an HTML page, grouped by date_time. But first I want to display the date out of the DateTimeField, grouped by date.</p>
<p>With:</p>
<pre><code><table>
<tr>
<th>Field 1</th>
</tr>
{% for terms in term %}
<tr>
<td>{{ terms.date_time.date }}</td>
</tr>
{% endfor %}
</table>
</code></pre>
<p>and my view:</p>
<pre><code>def schedule(request):
term = appointment.objects.all()
return render(request, 'scheduler/schedule.html', {'term' : term})
</code></pre>
<p>I get as result:</p>
<pre><code>Field 1
6. Juni 2022
6. Juni 2022
6. Juni 2022
6. Juni 2022
6. Juni 2022
6. Juni 2022
6. Juni 2022
7. Juni 2022
7. Juni 2022
7. Juni 2022
7. Juni 2022
7. Juni 2022
7. Juni 2022
7. Juni 2022
8. Juni 2022
8. Juni 2022
8. Juni 2022
8. Juni 2022
</code></pre>
<p>So far, so good. But i need only the date from the datetimefield like the following:</p>
<pre><code>Field 1
6. Juni 2022
7. Juni 2022
8. Juni 2022
</code></pre>
<p>and so on.</p>
<p>Any hints/tips or solutions for that kind of problem?</p>
<p>I also tried to set this up with Django queries and template language, but without any luck.</p>
<p>EDIT:</p>
<p>Found a solution on my own. Not the prettiest way, but it works:
in short, it displays the date of date_time only if there is new content to display.</p>
<pre><code><table>
<tr>
<th>Field 1</th>
</tr>
{% for terms in term %}
<tr>
<td>{% ifchanged terms.date_time.date %}
{{ terms.date_time.date }} {% endifchanged %}</td>
</tr>
{% endfor %}
</table>
</code></pre>
|
<p>Depending on your needs you could use one of two possible alternatives.
If you just want the distinct dates from your model instances you could use the following</p>
<pre><code>from django.db.models.functions.datetime import TruncDay
days = (appointment
.objects
        .annotate(day=TruncDay('date_time'))
.order_by('day')
.values_list('day')
.distinct())
</code></pre>
<p>This will give you the list of the distinct days only.</p>
<hr />
<p>As and alternative if you would like to group your data by this day within a template you could use the following</p>
<p><strong>in your view</strong></p>
<pre><code>from django.db.models.functions.datetime import TruncDay
data_with_days = (appointment
.objects
        .annotate(day=TruncDay('date_time')))
</code></pre>
<p>and in your template</p>
<pre><code>{% regroup data_with_days by day as grouped %}
<table>
<thead>
<tr>
<th>Day</th>
<th>Data</th>
</tr>
</thead>
<tbody>
{% for g in grouped %}
<tr>
<td>{{ g.grouper }}</td>
<td>
{% for item in g.list %}
{{ item }} <!-- Display what you want from your appointment -->
{% if not forloop.last %}<br/>{% endif %}
{% endfor %}
</td>
</tr>
{% endfor %}
</tbody>
</table>
</code></pre>
|
python|django|django-queryset|python-datetime
| 0 |
1,906,967 | 65,634,684 |
Make a list of python pairs of two columns from dataframe in python
|
<p>I have pandas dataframe</p>
<pre><code>df_1 = pd.DataFrame({'x' : ['a', 'b', 'c', 'd'], 'y' : ['e', 'f', 'g', 'h']})
</code></pre>
<p>I need to get a string like this from it:</p>
<pre><code>(first_element_from_first_row,first_element_from_second_row),
(second_element_from_first_row,second_element_from_second_row),
................................................................
(last_element_from_first_row,last_element_from_second_row);
</code></pre>
<p>at the end should be semicolon.</p>
<p>in my case the answer should be:</p>
<pre><code>(a,e),(b,f),(c,g),(d,h);
</code></pre>
<p>How should I solve my problem?</p>
|
<p>If I understand the question correctly- you want to apply the following transformation:</p>
<p>you can use zip to iterate over each element of column "x" and column "y" at the same time as a tuple of elements. You can join those elements so that they are a string and wrap that in parentheses to get the desired row-wise output. Then you store all of those in a larger list and turn that larger list into a string separated by commas, and add a semicolon at the end.</p>
<pre><code>all_pairs = []
for pair in zip(df_1["x"], df_1["y"]):
pair_str = "({})".format(",".join(pair))
all_pairs.append(pair_str)
final_str = ",".join(all_pairs) + ";"
print(final_str)
'(a,e),(b,f),(c,g),(d,h);'
</code></pre>
|
python|pandas|list|dataframe|ggpairs
| 2 |
1,906,968 | 65,696,568 |
view didn't return a response in flask
|
<p>I'm new to Flask. I'm trying to handle two upload requests: 1. the first file should be saved no matter what; 2. if I don't upload a second file, it should generate output.json for me. Can anyone suggest why I'm not returning a valid response? Thanks</p>
<p><strong>TypeError: The view function did not return a valid response. The function either returned None or ended without a return statement.</strong></p>
<p>route:</p>
<pre><code>@app.route('/upload', methods=['POST','GET'])
def upload():
if request.method == 'POST' and 'txt_data' in request.files:
#Files upload and initiation
num_sentences = int(request.form['num_sentences'])
session["num_sentences"] = num_sentences
uploaded_file = request.files['txt_data']
filename = secure_filename(uploaded_file.filename)
session["filename"] = filename
# save uploaded file
if filename != '':
uploaded_file.save(os.path.join(app.config['UPLOAD_PATH'], filename))
text_file = open(os.path.join('uploads',filename), "r").read()
nlp_file_doc = nlp(text_file)
all_sentences = list(nlp_file_doc.sents)
ongoing_sentences = all_sentences[num_sentences:]
first_sentence = ongoing_sentences[0].text
# Save output file
if 'output_data' in request.files:
output_file = request.files['output_data']
output_filename = secure_filename(output_file.filename)
uploaded_file.save(os.path.join(app.config['OUTPUT_PATH'], output_filename))
else:
data = {first_sentence:[]}
with open("output.json", "w") as write_file:
json.dump(data, write_file)
#Test out the first sentence
extraction = apply_extraction(first_sentence, nlp)
return render_template('annotation.html',
all_sentences = all_sentences,
extraction = extraction, num_sentences = num_sentences)
</code></pre>
<p>html:</p>
<pre><code><form method=POST enctype=multipart/form-data action="{{ url_for('upload') }}" class="form-group">
<div class="form-group">
<input type="file" name="txt_data">
<form method=POST enctype=multipart/form-data action="{{ url_for('upload') }}" class="form-group">
<div class="form-group">
<input type="file" name="output_data">
</form>
</code></pre>
|
<p>Your route accepts both <strong>GET & POST</strong> methods but returns only in the case you have a POST request.</p>
<pre><code>@app.route('/upload', methods=['POST','GET'])
def upload():
if request.method == 'POST':
...
return something
# What happens here if the request.method is 'GET'?
</code></pre>
<p>If you make a GET request on the /upload, then the function will return nothing, throwing error.</p>
<p>You can either remove the GET method or return something for the GET case.</p>
<p><strong>Solution 1:</strong></p>
<pre><code>@app.route('/upload', methods=['POST'])
</code></pre>
<p><strong>Solution 2:</strong></p>
<pre><code>@app.route('/upload', methods=['POST','GET'])
def upload():
if request.method == 'POST':
return something
return something_else # if request.method == 'POST' returns false.
</code></pre>
|
python|html|flask
| 0 |
1,906,969 | 65,902,428 |
numpy find last less than value index in sorted array?
|
<p>I have an exactly sorted numpy array like this:</p>
<pre><code>arr = np.asarray([1351.1, 1351.11, 1351.14, 1351.16])
</code></pre>
<p>and I have a value array, also exactly sorted, like this:</p>
<pre><code>vs = np.asarray([1351.10, 1351.13, 1351.17])
</code></pre>
<p>For each value in <code>vs</code>, I want to find the last index in <code>arr</code> whose value is less than or equal to it, for example:</p>
<ol>
<li><code>vs[0]=1351.10</code> => <code>arr[0] == v[0]</code> => output <code>0</code></li>
<li><code>vs[1]=1351.13</code> => <code>arr[1] < v[1]</code> and <code>arr[2] > v[1]</code> => output <code>1</code></li>
<li><code>vs[2]=1351.17</code> => <code>arr[3] < v[2]</code> => output <code>3</code></li>
</ol>
<p>so at last output <code>[0, 1, 3]</code></p>
<p>Of course I can loop over <code>arr</code> and compare it with <code>vs</code>, but if <code>arr</code> is very big, that may not be a good option.</p>
<p>I also found <code>np.searchsorted</code>, but it is not what I want; for example,
<code>np.searchsorted(arr, 1351.13)</code> returns <code>2</code>, but I want <code>1</code>. Another issue is that it cannot take advantage of the fact that <code>vs</code> is also sorted.</p>
|
<p>Just use <code>np.searchsorted</code> with <code>side = 'right'</code> and then subtract 1:</p>
<pre><code>np.searchsorted(arr, vs, side = 'right') - 1
array([0, 1, 3], dtype=int64)
</code></pre>
|
python|arrays|numpy
| 1 |
1,906,970 | 50,745,293 |
Run python script after pushing ladda button
|
<p>I've made a few python scripts that I want to execute when I push a ladda button.
<a href="https://lab.hakim.se/ladda/" rel="nofollow noreferrer">https://lab.hakim.se/ladda/</a></p>
<p>First I've created a php file to run the python script</p>
<pre><code><?php
$command = escapeshellcmd('/www/html/enable_s1.py');
$output = shell_exec($command);
echo $output;
header( "refresh:3;url=index.php" );
?>
</code></pre>
<p>This works without any problems</p>
<p>Next I've created a page to test the ladda button. This also works.
But now I want to link an action to the button. I want to run a python script after a button is pressed.</p>
<p>When I also include the PHP code to execute Python, it runs every time the page loads.
How can I stop this behavior?
Is PHP the best option for executing the Python script?</p>
|
<p>I hope this code is useful for you:</p>
<p><code>index.html</code> file : </p>
<pre><code><form action="test.php">
<button type="submit" class="ladda-button" data-color="green" data-style="expand-left"><span class="ladda-label">Submit</span><span class="ladda-spinner"></span></button>
</form>
</code></pre>
<p><code>test.php</code> file : </p>
<pre><code><?php
exec('python3.6 enable_s1.py', $output);
?>
</code></pre>
<p><code>enable_s1.py</code> file : </p>
<pre><code>print(' hi everybody')
# do somthing
</code></pre>
|
php|python
| -1 |
1,906,971 | 51,064,030 |
Python pandas - resample every 2nd row rather than every 2nd business day
|
<p>I am working with stock price data and would like to get <code>resample()</code> to return every 2nd row rather than every 2nd business day (<code>resample('2B')</code>). The obstacle is any holiday that lands on a weekday. See below, MLK Day is Monday, Jan 15, 2018:</p>
<pre><code>import pandas as pd
data = '''\
date,price
2018-01-08,88.28
2018-01-09,88.22
2018-01-10,87.82
2018-01-11,88.08
2018-01-12,89.6
2018-01-16,88.35
2018-01-17,90.14
2018-01-18,90.1
2018-01-19,90.0
2018-01-22,91.61
2018-01-23,91.9
2018-01-24,91.82
2018-01-25,92.33
2018-01-26,94.06'''
fileobj = pd.compat.StringIO(data)
df = pd.read_csv(fileobj, parse_dates=['date'], index_col=[0])
df_resample = df.resample('2B').min()
print(df_resample)
</code></pre>
<p>Output:</p>
<pre><code> price
2018-01-08 88.22
2018-01-10 87.82
2018-01-12 89.60
2018-01-16 88.35
2018-01-18 90.00
2018-01-22 91.61
2018-01-24 91.82
2018-01-26 94.06
</code></pre>
<p>I'd like the resample to jump from 1/12 to 1/17. I know that I can use <code>df['price'].loc[::2]</code> to deliver <code>df.resample('2B').last()</code> but I need to use <code>min()</code>, <code>max()</code> and <code>sum()</code> as well.</p>
<p>Thanks. </p>
<p>Expected Output:</p>
<p><a href="https://i.stack.imgur.com/NAKBL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NAKBL.png" alt="enter image description here"></a></p>
|
<p>For a stable solution I'd have a look at redefining the B-days somehow.</p>
<p>But if you reset index you could use the index numbers and groupby:</p>
<pre><code>df = df.reset_index()
df_resample = df.groupby(df.index // 2).min()
print(df_resample)
</code></pre>
<p>Returns:</p>
<pre><code> date price
0 2018-01-08 88.22
1 2018-01-10 87.82
2 2018-01-12 88.35
3 2018-01-17 90.10
4 2018-01-19 90.00
5 2018-01-23 91.82
6 2018-01-25 92.33
</code></pre>
<hr>
<p><strong>Or</strong> you could do something like this:</p>
<pre><code>g = np.arange(len(df))// 2
df_resample = df.groupby(g).agg(['last','min','max','sum'])
df_resample.insert(0, 'Date', df.index[1::2])
print(df_resample)
</code></pre>
<p>Returns:</p>
<pre><code> Date price
last min max sum
0 2018-01-09 88.22 88.22 88.28 176.50
1 2018-01-11 88.08 87.82 88.08 175.90
2 2018-01-16 88.35 88.35 89.60 177.95
3 2018-01-18 90.10 90.10 90.14 180.24
4 2018-01-22 91.61 90.00 91.61 181.61
5 2018-01-24 91.82 91.82 91.90 183.72
6 2018-01-26 94.06 92.33 94.06 186.39
</code></pre>
|
python|pandas|resampling|datetimeindex
| 2 |
1,906,972 | 50,245,840 |
Saving image for Looping
|
<p>I'm trying to use a loop to save a figure for each row of my data. For example:</p>
<pre><code>In [1]: data.shape
Out[1]: (5, 784)
</code></pre>
<p>Then i'm using this:</p>
<pre><code>import matplotlib.pyplot as plt
for i in range(len(data)):
x=plt.imshow(data[i].reshape(28,28), cmap="gray_r")
plt.show()
name='%s%s.png' % str(x[i])
plt.imsave(name, x)
</code></pre>
<p>and i got error here:</p>
<pre><code>TypeError: 'AxesImage' object does not support indexing
</code></pre>
<p>My goal is to save the image from each loop iteration without overwriting it. I don't understand what I should do about this error, because I'm new to Python.</p>
|
<p>I think there's some confusion here between</p>
<ul>
<li>plotting an image and saving the plot and</li>
<li>just saving an image given as a numpy array</li>
</ul>
<p>If the latter is what you want, which I guess, try this:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

data = np.arange(5*784).reshape((5, 784))  # test data
for i in range(data.shape[0]):
    # for every row in data, reshape to 28x28 and save as an image
    img = data[i].reshape(28, 28)
    plt.imsave("{}.png".format(i), img, cmap="gray_r")
</code></pre>
<p>And if it's the former, try this. Note the axes and coordinate labels in the resulting images.</p>
<pre><code>data = np.arange(5*784).reshape((5, 784))  # test data
for i in range(data.shape[0]):
    # for every row in data, reshape to 28x28, plot and save the figure
    img = data[i].reshape(28, 28)
    plt.imshow(img, cmap="gray_r")
    plt.savefig("{}.png".format(i))
</code></pre>
<p><code>plt.savefig</code> will contextually know that it's supposed to save the image that you plotted with <code>imshow</code> beforehand, so there's no need to pass any arguments other than the filename.</p>
<p>It might become necessary to clear the figure with <code>plt.clf()</code> in between, since <code>imshow</code> just "paints over" the current figure with the previous images from the loop on it. It worked without for this example when I tested it, though.</p>
|
python-3.x
| 2 |
1,906,973 | 50,295,136 |
Message: 'chromedriver' executable needs to be in PATH while executing python selenium on web server
|
<p>I have a Python script for scraping with Selenium. Everything works well on my local laptop.
But when I put this Python file on the web server, it always fails with Selenium errors, and now I can't execute it successfully owing to</p>
<pre><code>Traceback (most recent call last):
File "test_availability.py", line 32, in <module>
driver = webdriver.Chrome(executable_path=CHROMEDRIVER_PATH, chrome_options=chrome_options)
File "/usr/local/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py", line 68, in __init__
self.service.start()
File "/usr/local/lib/python3.6/site-packages/selenium/webdriver/common/service.py", line 83, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException:
Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
</code></pre>
<p>But I put the <code>chromedriver</code> on the web server in the same location as where <code>chromedriver</code> is on my local laptop, and the error still appears.
I tried many methods but this error is still there.</p>
<p>I put <code>chromedriver</code> into <code>/usr/local/bin</code> on the web server
My question is different from the
<a href="https://stackoverflow.com/questions/46085270/selenium-common-exceptions-webdriverexception-message-chromedriver-executabl">selenium.common.exceptions.WebDriverException: Message: 'chromedriver' executable needs to be in PATH error with Headless Chrome</a>
since I already used the method from the accepted answer, but the error still shows up.</p>
<p>I need to run my python file on the web server. Below is my codes:</p>
<pre><code>CHROMEDRIVER_PATH = "/home/animalsp/public_html/maps/maps2/chromedriver"
WINDOW_SIZE ="1920,1080"
chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--window-size=%s" % WINDOW_SIZE)
driver = webdriver.Chrome(executable_path=CHROMEDRIVER_PATH, chrome_options=chrome_options)
driver.get("https://na.chargepoint.com/charge_point")
</code></pre>
<p>And I even tried it with Firefox. Below is my codes with Firefox:</p>
<pre><code>FIREFOXDRIVER_PATH ="/home/animalsp/public_html/maps/maps2/geckodriver"
WINDOW_SIZE ="1920,1080"
firefox_options = Options()
firefox_options.add_argument("--headless")
firefox_options.add_argument("--window-size=%s" % WINDOW_SIZE)
driver = webdriver.Firefox(executable_path=FIREFOXDRIVER_PATH, firefox_options=firefox_options)
driver.get("https://na.chargepoint.com/charge_point")
</code></pre>
<p>Could someone help me with this?
Any response will be appreciated!</p>
<blockquote>
<p>Selenium 3.12.0 </p>
<p>python 3.6.5 </p>
<p>Chrome 66.0 </p>
<p>Chromedriver 2.3.8 </p>
<p>Firefox 60</p>
<p>geckodriver v0.20.1</p>
</blockquote>
|
<p>You need to put your <strong>chromedriver</strong> executable file in the same directory where you run your script, and change your chrome_path to this:</p>
<pre><code>import os
chrome_path = os.path.realpath('chromedriver')
</code></pre>
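<p>Then, as a sketch based on the code from the question (the <code>chrome_options</code> object is assumed to be set up exactly as before), you would pass that path to the driver:</p>
<pre><code>from selenium import webdriver

driver = webdriver.Chrome(executable_path=chrome_path, chrome_options=chrome_options)
driver.get("https://na.chargepoint.com/charge_point")
</code></pre>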
|
php|python|web-services|selenium|webserver
| 2 |
1,906,974 | 26,544,478 |
Add counting numbers to words in a list
|
<p>I have a list like:</p>
<pre><code><w>Asf</w>
<k>BOO</k>
<l>leg</l>
<w>kum</w>
...
</code></pre>
<p>Now i want something like:</p>
<pre><code><w id='1'>Asf</w>
<k>BOO</k>
<l>leg</l>
<w id='2'>kum</w>
...
<w id='250'>mau</w>
...
</code></pre>
<p>So I want to add this <code>id='n'</code>, but only for the <code><w></code> tags. I can add stuff to that list, but I don't know how to do that with a counting number. I do not even know what to try. I can address the <code><w></code> with regex and put something in it, but how can I put the counting id in?</p>
<p>What i tried:</p>
<pre><code>chars = ('w', 'k','l')
tags = itertools.cycle(chars)
for word, tag in zip(my_list, tags):
names1.append("<{0} id='1'>{1}</{0}>".format(tag, word))
print("<{0} id='1'>{1}</{0}>".format(tag, word))
</code></pre>
<p>But that's totally wrong. I get that id for the start and ending tag, and it's obviously not counting.</p>
|
<p>Not entirely sure what you are asking. As I understand your question, you want to add an increasing <code>id</code> attribute to each of those <code><w></code> tags. You could try something like this:</p>
<pre><code>data = ['<w>Asf</w>', '<k>BOO</k>', '<l>leg</l>', '<w>kum</w>']
counter = 0
for i, line in enumerate(data):
if "<w>" in line:
data[i] = line.replace("<w>", "<w id='{}'>".format(counter))
counter += 1
print data
</code></pre>
<p>This uses a <code>counter</code> variable that is increased each time the lines contains a <code><w></code> tag. If you want to add ids to other tags, you can easily extend this to a function, taking <code>w</code> or any other tag name as a parameter.</p>
<p>Output:</p>
<pre><code>["<w id='0'>Asf</w>", '<k>BOO</k>', '<l>leg</l>', "<w id='1'>kum</w>"]
</code></pre>
|
python-2.7
| 1 |
1,906,975 | 56,691,196 |
Python IDE recommendation on Ubuntu
|
<p>I am new to Python and Ubuntu, and I am using them to learn machine learning, so I am learning 3 things at the same time.</p>
<p>As of now I am trying out code in the terminal. Any recommendation on which IDE to use?</p>
|
<p>Try using Jupyter notebook. You could use it by installing Anaconda. For developing your applications further, you may still want to try PyCharm later.</p>
<p>But I would really recommend you to first go with Jupyter notebook, because it allows you to run a part of your code and check the results below it. It will help you to understand your code well.</p>
|
python|ubuntu
| 1 |
1,906,976 | 58,120,448 |
Pytorch: Why does my dataset variance not give the correct result?
|
<p>Here is the function I wrote:</p>
<pre><code>def channel_var(image_dataset):
res = image_dataset[0]
for image in image_dataset[1:]:
res += image
return tuple(map(lambda x: x/len(image_dataset),
(torch.var(res[0]),
torch.var(res[1]),
torch.var(res[2]))))
</code></pre>
<p>then I tested it with a Normal distribution : </p>
<pre><code>m = normal.Normal(0, 3)
m.sample((1, 3, 32, 32))
</code></pre>
<p>And I get this wrong result : </p>
<pre><code>channel_var(list_test)
>>(tensor(0.0338), tensor(0.0352), tensor(0.0365))
</code></pre>
<p>Thank you</p>
|
<p>Your function is wrong. And that's because you are computing the average image and then computing the channel variance in the average image. I don't think you want that. You can just find the variance in each channel by using</p>
<p><code>torch.var(img, dim=[0,2,3])</code></p>
<p>assuming <code>dim=1</code> is the channel dimension and <code>img</code> is a torch tensor. If <code>img</code> is not a torch tensor, you can concatenate the list of images to make one.</p>
<p>You can do this as <code>torch.var(torch.cat(img, dim=0), dim=[0,2,3])</code>; the <code>cat</code> operation concatenates the list into a single tensor.</p>
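<p>A minimal sketch (assuming a list of image tensors shaped <code>(1, 3, H, W)</code>, as produced by the sampling code in the question, and a PyTorch version whose <code>torch.var</code> accepts several dims):</p>
<pre><code>import torch

imgs = [torch.randn(1, 3, 32, 32) for _ in range(10)]  # list of images
batch = torch.cat(imgs, dim=0)                 # shape (10, 3, 32, 32)
channel_var = torch.var(batch, dim=[0, 2, 3])  # one variance per channel
print(channel_var)                             # tensor of shape (3,)
</code></pre>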
|
python|machine-learning|pytorch
| 1 |
1,906,977 | 57,743,955 |
Setting the weights on the Knn classifier
|
<p>I am new to python (I am using python 3) and would like to set my weights in the knn classifier. Actually, I want to use as weight the inverse square of the distance. How can I set this index in the training matrix if this distance depends on the input you want to predict?</p>
<p>From what I read, I tried to create a function as a parameter, but I have some doubts when determining the inputs. My gut feeling tells me that the input of my function should be an array, more specifically, a row of my test matrix.</p>
<p>My <code>X_train</code> has 4 columns (so my <code>X_test</code> too) </p>
<pre class="lang-py prettyprint-override"><code>def my_weight_function(row_vector):
w=[]
for i in range(len(X_train)):
dif = np.asarray(X_train[i:i+1]) - np.asarray(row_vector)
dist = np.linalg.norm(dif)
w.append([1/dist])
return w
</code></pre>
<p>This function is working:</p>
<pre class="lang-py prettyprint-override"><code>my_weight_function(X_test[1:2])[0:6]
</code></pre>
<p>Now I train</p>
<pre class="lang-py prettyprint-override"><code>knn2 = KNeighborsClassifier(n_neighbors = 5, weights=my_weight_function)
knn2.fit(X_train, y_train)
</code></pre>
<p>So far there are no errors, but when I do this</p>
<pre class="lang-py prettyprint-override"><code>fruit_prediction = knn2.predict([[9.2,9.6,362,0.74]])
print(fruit_prediction)
</code></pre>
<p>I have the following error:</p>
<blockquote>
<p>ValueError: operands could not be broadcast together with shapes (1,4)
(1,5)</p>
</blockquote>
|
<p>Your requirement is already implemented in Sklearn. All you have to do is pass <code>weights='distance'</code>.</p>
<p><strong>From <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier" rel="nofollow noreferrer">Documentation</a>:</strong></p>
<blockquote>
<p>‘distance’ : weight points by the inverse of their distance. in this
case, closer neighbors of a query point will have a greater influence
than neighbors which are further away.</p>
</blockquote>
<p>That option weights each neighbour by the inverse of its distance, which is essentially what your hand-written function computes.</p>
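<p>A minimal sketch of the built-in option, reusing <code>X_train</code>, <code>y_train</code> and the test point from the question:</p>
<pre><code>from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=5, weights='distance')
knn.fit(X_train, y_train)
print(knn.predict([[9.2, 9.6, 362, 0.74]]))
</code></pre>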
<p>If you want to customize the weighting function with some other strategy, then your <code>udf</code> has to take the array of distances and compute the weights from it.</p>
<blockquote>
<p>[callable] : a user-defined function which accepts an array of
distances, and returns an array of the same shape containing the
weights.</p>
</blockquote>
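<p>If you really want the inverse <em>square</em> of the distance, a sketch of such a callable could look like this (note that it receives the array of neighbour distances that sklearn has already computed, not a row of <code>X_test</code>; the small epsilon is an assumption added here to avoid division by zero):</p>
<pre><code>import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def inverse_square_weights(distances):
    # distances has shape (n_queries, n_neighbors); return the same shape
    return 1.0 / (distances ** 2 + 1e-12)

knn = KNeighborsClassifier(n_neighbors=5, weights=inverse_square_weights)
</code></pre>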
|
python|scikit-learn
| 0 |
1,906,978 | 58,038,175 |
How to update Config File on while loop and get value from different script at same time?
|
<p>Here is the scenario: I have two scripts, let's say abc.py and xyz.py.</p>
<p>Using abc.py I want to update the config file every other second. Here is the sample code.</p>
<p><strong>ABC.PY</strong></p>
<pre class="lang-py prettyprint-override"><code>while True:
cfgfile=config.read("config.ini")
config.set('section','option',Value)
with open('config.ini', 'w') as configfile:
config.write(configfile)
time.sleep(1)
</code></pre>
<p>On Xyz.py I want to fetch the values from config.ini.
my code on <strong>XYZ.PY</strong></p>
<pre class="lang-py prettyprint-override"><code>import configparser
file = input("Enter the file name: ")
config = configparser.ConfigParser()
cfgfile = config.read("config.ini")
values = config.get(file, 'option')
print(values)
</code></pre>
<p>But the problem is that <strong>ABC.py</strong> is only updating the config file once! That means it is updating the file only on the first pass of the while loop. It isn't updating the config file every second, as I thought it would.</p>
|
<p>In the <code>ABC.py</code> script, <code>Value</code> is not generated inside the loop; thus, every second you write the same value to your config file. So it is normal that your second script reads the same value.</p>
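<p>A minimal sketch (assuming <code>config.ini</code> already contains a <code>[section]</code> section) where the value actually changes on every pass; a timestamp is used here only as a stand-in for whatever value you really want to write:</p>
<pre><code>import time
import configparser

config = configparser.ConfigParser()

while True:
    config.read("config.ini")
    value = str(time.time())           # generate a fresh value inside the loop
    config.set('section', 'option', value)
    with open('config.ini', 'w') as configfile:
        config.write(configfile)
    time.sleep(1)
</code></pre>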
|
python|python-3.x|while-loop|config|configparser
| 0 |
1,906,979 | 57,845,655 |
Using itertools groupby to find the group of all even numbers in a list
|
<p>I am trying to understand the usefulness of the <code>itertools.groupby()</code> function and have created a rather naive use case. I have a list of numbers, and want to group them by oddness or evenness. Below is my code using <code>itertools.groupby()</code>:</p>
<pre><code>from itertools import groupby

for decision, group in groupby(range(2, 11), key=lambda x: x % 2 == 0):
    print(decision, list(group))
</code></pre>
<p>And below is the output I get:</p>
<pre><code>True [2]
False [3]
True [4]
False [5]
True [6]
False [7]
True [8]
False [9]
True [10]
</code></pre>
<p>Basically what I was expecting is something like where all the "True's" are grouped together and all the "False's" are grouped together.</p>
<p>Is this even possible with <code>groupby()</code>?</p>
|
<p><code>groupby()</code> combines <strong>consecutive values</strong> where their key output is equal, giving you a shared iterator for each such group. It will not process the whole input in one go, it gives you the groups <em>as you iterate</em>.</p>
<p>For your example, the key changes for each value, and so the groups consist of a single value each; it starts with <code>2</code> and the key output is <code>True</code>, then next comes <code>3</code> and the key produces <code>False</code>. Because <code>True != False</code>, that's a new group. The next value <code>4</code> changes the key again, to <code>True</code>, so it's another group, etc.</p>
<p>What you want to do can't be done with <code>groupby()</code>; to sort values into buckets <em>across</em> the whole iterable, just use a dictionary:</p>
<pre><code>grouped = {}
for value in range(2, 11):
key = value % 2 == 0
grouped.setdefault(key, []).append(value)
</code></pre>
<p>You could also sort your input first, with <code>sorted(range(2, 11), key=lambda x: x % 2 == 0)</code>, but <em>that's a waste of time</em>. Sorting is a more complex algorithm, taking O(n log n) (<a href="https://en.wikipedia.org/wiki/Time_complexity#Quasilinear_time" rel="nofollow noreferrer"><em>quasilinear</em></a>) time, whereas using a dictionary to group the values takes O(n) (<a href="https://en.wikipedia.org/wiki/Time_complexity#Linear_time" rel="nofollow noreferrer"><em>linear</em></a>) time. That might not matter when you have just 9 elements, but when you have to process 1000 elements, sorting would increase the time taken by a factor of nearly 10, for 1 million elements, a factor of nearly 20, etc.</p>
|
python|python-3.x|itertools
| 3 |
1,906,980 | 18,735,495 |
SQLAlchemy primary key assignement
|
<p>I'm wondering where in the process of creating objects and storing them in the database the primary key gets assigned by SQLAlchemy. In my app, when something happens I create an Event for that 'happening' and then create a notification for each user that needs to know about that Event. This all happens in the same method. </p>
<p>The problem now is that the Notification references the Event. Should I connect twice to the database to achieve this? First to store the Event so it gets assigned a primary key and secondly to store the notification?
Is it possible to only connect once to the database?</p>
<p>So these steps should happen:</p>
<ol>
<li>User does something</li>
<li>Create an Event</li>
<li><strong>Necessary?</strong> Store the Event in the database so I get a primary key to reference to</li>
<li>Create a Notification that references the Event</li>
<li>Store the Notification</li>
</ol>
|
<p>You don't need to worry about the primary key to create the <code>Notification</code>; just pass the <code>Event</code> object to the <code>Notification</code> and <code>commit</code>. You're good to go.</p>
<p>SQLAlchemy doesn't assign the primary-key, it is the database that usually and implicitly <a href="https://stackoverflow.com/questions/17698521/set-sqlalchemy-to-use-postgresql-serial-for-identity-generation">does it</a> for you, provided you have declared the table with something like this: <code>id = Column(Integer, primary_key = True)</code>.</p>
<pre><code>class Event(Base):
__tablename__ = "events"
id = Column(Integer, primary_key = True)
...
class Notification(Base):
__tablename__ = "notifications"
id = Column(Integer, primary_key = True)
event_id = Column(Integer, ForeignKey("events.id"))
event = relationship("Event")
...
def __init__(self, event):
self.event = event
notification = Notification(Event())
session.add(notification)
session.commit()
</code></pre>
|
python|sqlalchemy
| 2 |
1,906,981 | 71,566,029 |
Python - Fastest way to strip the trailing zeros from the bit representation of a number
|
<p>This is the python version of <a href="https://stackoverflow.com/questions/70879695/fastest-way-to-strip-trailing-zeroes-from-an-unsigned-int">the same C++ question</a>.</p>
<p>Given a number, <code>num</code>, what is the fastest way to strip off the trailing zeros from its binary representation?</p>
<p>For example, let <code>num = 232</code>. We have <code>bin(num)</code> equal to <code>0b11101000</code> and we would like to strip the trailing zeros, which would produce <code>0b11101</code>. This can be done via string manipulation, but it'd probably be faster via bit manipulation. So far, I have thought of something using <code>num & -num</code></p>
<p>Assuming <code>num != 0</code>, <code>num & -num</code> produces the binary <code>0b1<trailing zeros></code>. For example,</p>
<pre><code>num 0b11101000
-num 0b00011000
& 0b1000
</code></pre>
<p>If we have a <code>dict</code> having powers of two as keys and the powers as values, we could use that to know by how much to right bit shift <code>num</code> in order to strip just the trailing zeros:</p>
<pre><code># 0b1 0b10 0b100 0b1000
POW2s = { 1: 0, 2: 1, 4: 2, 8: 3, ... }
def stripTrailingZeros(num):
pow2 = num & -num
pow_ = POW2s[pow2] # equivalent to math.log2(pow2), but hopefully faster
return num >> pow_
</code></pre>
<p>The use of dictionary <code>POW2s</code> trades space for speed - the alternative is to use <code>math.log2(pow2)</code>.</p>
<hr />
<p>Is there a faster way?</p>
<hr />
<p>Perhaps another useful tidbit is <code>num ^ (num - 1)</code> which produces <code>0b1!<trailing zeros></code> where <code>!<trailing zeros></code> means take the trailing zeros and flip them into ones. For example,</p>
<pre><code>num 0b11101000
num-1 0b11100111
^ 0b1111
</code></pre>
<hr />
<p>Yet another alternative is to use a while loop</p>
<pre><code>def stripTrailingZeros_iterative(num):
while num & 0b1 == 0: # equivalent to `num % 2 == 0`
num >>= 1
return num
</code></pre>
<hr />
<p>Ultimately, I need to execute this function on a big list of numbers. Once I do that, I want the maximum. So if I have <code>[64, 38, 22, 20]</code> to begin with, I would have <code>[1, 19, 11, 5]</code> after performing the stripping. Then I would want the maximum of that, which is <code>19</code>.</p>
|
<p>There's really no answer to questions like this in the absence of specifying the expected distribution of inputs. For example, if all inputs are in <code>range(256)</code>, you can't beat a single indexed lookup into a precomputed list of the 256 possible cases.</p>
<p>If inputs can be two bytes, but you don't want to burn the space for 2**16 precomputed results, it's hard to beat (assuming <code>that_table[i]</code> gives the count of trailing zeroes in <code>i</code>):</p>
<pre><code>low = i & 0xff
result = that_table[low] if low else 8 + that_table[i >> 8]
</code></pre>
<p>And so on.</p>
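<p>Building such a table is cheap; one possible sketch (not from the original answer), using the same trailing-zero-count convention assumed above:</p>
<pre><code># trailing-zero count for every value in range(256); entry 0 is unused here
that_table = [0] * 256
for i in range(1, 256):
    that_table[i] = (i & -i).bit_length() - 1
</code></pre>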
<p>You do <em>not</em> want to rely on <code>log2()</code>. The accuracy of that is entirely up to the C library on the platform CPython is compiled for.</p>
<p>What I actually use, in a context where ints can be up to hundreds of millions of bits:</p>
<pre><code> assert d
if d & 1 == 0:
ntz = (d & -d).bit_length() - 1
d >>= ntz
</code></pre>
<p>A <code>while</code> loop would be a disaster in this context, taking time quadratic in the number of bits shifted off. Even one needless shift in that context would be a significant expense, which is why the code above first checks to see that at least one bit needs to shifted off. But if ints "are much smaller", that check would probably cost more than it saves. "No answer in the absence of specifying the expected distribution of inputs."</p>
|
python|performance|bit-manipulation|bit-representation
| 4 |
1,906,982 | 71,513,873 |
PyQt5 keyPressEvent blocks reading keys by QWebView page
|
<p>I want to set <code>Escape</code> key as one that exits application</p>
<pre><code> def keyPressEvent(self, event):
if event.key() == QtCore.Qt.Key_Escape:
self.close()
</code></pre>
<p>but when I do that QWebView loses keyboard events.</p>
<pre><code>class PlayFlash(QWebView):
def __init__(self):
# QWebView
self.view = QWebView.__init__(self)
self.setWindowFlag(Qt.FramelessWindowHint)
self.resize(1024, 768)
self.move(0, 0)
# enable flashplayer plugin
self._settings = QWebSettings.globalSettings()
self._settings.setAttribute(QWebSettings.PluginsEnabled, True)
self.setFocusPolicy(Qt.StrongFocus)
self.load("file:///home/kamil/gitlab/PlayFlash/PlayFlash.html")
</code></pre>
<p>If I don't use <code>keyPressEvent</code> at all, keyboard events are read by QWebView page.</p>
|
<p>Solution:</p>
<pre><code> def keyPressEvent(self, event):
if event.key() == QtCore.Qt.Key_Escape:
self.restoreMainMenu()
else:
super().keyPressEvent(event)
</code></pre>
|
python-3.x|pyqt5
| 0 |
1,906,983 | 71,775,002 |
Why opencv 'cv2.morphologyEx' operations shift images in one direction during iterations?
|
<p>I am currently using <code>"morphologyEx"</code> operations in OpenCV to eliminate some noise in the image. It's working successfully but for some strange reason, roi keeps moving south during iterations.
The original image is : <a href="https://i.stack.imgur.com/fL79e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fL79e.png" alt="enter image description here" /></a></p>
<p>The image wth scale bars : <a href="https://i.stack.imgur.com/ZiKQP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZiKQP.png" alt="enter image description here" /></a></p>
<p>The python script that I am running is</p>
<pre><code>test_image = r"C:/test/test.bmp"
image = cv2.imread(test_image,cv2.COLOR_BAYER_BG2RGB)
blurred = cv2.medianBlur(image, 3)
ret,binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY)
img_adj = cv2.morphologyEx(blurred, cv2.MORPH_OPEN,(3,11),iterations=25)
#imshow(binary)
imshow(img_adj)
</code></pre>
<p>But after iterations it is as follows :
<a href="https://i.stack.imgur.com/36x82.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/36x82.png" alt="enter image description here" /></a></p>
<p>The image ROI has shifted south by an amount proportional to the number of iterations.
How can I prevent the shifting?</p>
|
<p>The main issue is the <code>(3,11)</code> argument passed to <code>cv2.morphologyEx</code>.</p>
<p>According to the documentation of <a href="https://docs.opencv.org/3.4/d4/d86/group__imgproc__filter.html#ga67493776e3ad1a3df63883829375201f" rel="nofollow noreferrer">morphologyEx</a>, kernel is a <strong>Structuring element</strong>, and not the size of the kernel.</p>
<p>Passing <code>(3,11)</code> is probably like passing <code>np.array([1, 1])</code> (or just undefined behavior).</p>
<p>The correct syntax is passing 3x11 NumPy a array of ones (and <code>uint8</code> type):</p>
<pre><code>img_adj = cv2.morphologyEx(blurred, cv2.MORPH_OPEN, np.ones((3, 11), np.uint8), iterations=25)
</code></pre>
<hr />
<p>Using large kernel with 25 iterations is too much, so I reduced it to 3x5 and 5 iterations.</p>
<p>The following code sample shows that the image is not shifted:</p>
<pre><code>import cv2
import numpy as np
test_image = "test.bmp"
#image = cv2.imread(test_image, cv2.COLOR_BAYER_BG2RGB) # cv2.COLOR_BAYER_BG2RGB is not in place
image = cv2.imread(test_image, cv2.IMREAD_GRAYSCALE) # Read image as grayscale
blurred = cv2.medianBlur(image, 3)
ret, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY)
#img_adj = cv2.morphologyEx(blurred, cv2.MORPH_OPEN, (3, 11), iterations=25)
img_adj = cv2.morphologyEx(blurred, cv2.MORPH_OPEN, np.ones((3, 5), np.uint8), iterations=5)
montage_img = np.dstack((255-image, 0*image, 255-img_adj)) # Place image in the blue channel and img_adj in the red channel
# Show original and output images using OpenCV imshow method (instead of using matplotlib)
cv2.imshow('image', image)
cv2.imshow('img_adj', img_adj)
cv2.imshow('montage_img', montage_img)
cv2.waitKey()
cv2.destroyAllWindows()
</code></pre>
<hr />
<p><code>image</code>:<br />
<a href="https://i.stack.imgur.com/B7oBn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B7oBn.png" alt="enter image description here" /></a></p>
<p><code>img_adj</code>:<br />
<a href="https://i.stack.imgur.com/EmaFl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EmaFl.png" alt="enter image description here" /></a></p>
<p><code>montage_img</code>:<br />
<a href="https://i.stack.imgur.com/zZBCI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zZBCI.png" alt="enter image description here" /></a></p>
<hr />
<p>A better solution would be finding the largest connected component (that is not the background):</p>
<pre><code>import cv2
import numpy as np
test_image = "test.bmp"
image = cv2.imread(test_image, cv2.IMREAD_GRAYSCALE) # Read image as grayscale
ret, binary = cv2.threshold(image, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(binary, 8) # Finding connected components
# Find the largest non background component.
# Note: range() starts from 1 since 0 is the background label.
max_label, max_size = max([(i, stats[i, cv2.CC_STAT_AREA]) for i in range(1, nb_components)], key=lambda x: x[1])
res = np.zeros_like(binary) + 255
res[output == max_label] = 0
cv2.imshow('res', res)
cv2.waitKey()
cv2.destroyAllWindows()
</code></pre>
<hr />
<p>Result:<br />
<a href="https://i.stack.imgur.com/Gwqab.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gwqab.png" alt="enter image description here" /></a></p>
|
python|opencv
| 2 |
1,906,984 | 71,697,625 |
Import "pygame" could not be resolved
|
<p>I installed pygame using pip in the command prompt, but it still shows an error. I have already attempted changing the path so pip could properly be accessed, and using the line "pygame.init()", but the error still appears.</p>
<p>Here's the code:</p>
<hr />
<pre><code>import pygame, sys
pygame.init()
w, h = 600, 600
r = (255, 0, 0)
s = pygame.display.set_mode( (w, h) )
s.fill(r)
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
pygame.display.update()
</code></pre>
<hr />
|
<p>By the way, if it still isn't working you could try using PyCharm. If you can't do that, you could try using Replit and downloading the pygame package, then exporting it with GitHub.</p>
|
python|installation|pygame
| 0 |
1,906,985 | 55,492,903 |
Error in creating Choropleth map, Json decode error
|
<p>I was trying to create a choropleth map using Folium in Python, but I am getting an error. The data can be found <a href="https://drive.google.com/open?id=1o85ah3U8cMDSusZFjDZRRS0AEUjjYBo3" rel="nofollow noreferrer">here</a>.</p>
<pre><code>---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-42-ab43cadb1d56> in <module>()
8 fill_opacity=0.7,
9 line_opacity=0.2,
---> 10 legend_name='Immigration to Canada'
11 )
12
c:\users\himanshu poddar\appdata\local\programs\python\python36-32\lib\site-packages\folium\folium.py in choropleth(self, geo_data, data, columns, key_on, threshold_scale, fill_color, fill_opacity, line_color, line_weight, line_opacity, name, legend_name, topojson, reset, smooth_factor, highlight)
325 style_function=style_function,
326 smooth_factor=smooth_factor,
--> 327 highlight_function=highlight_function if highlight else None)
328
329 self.add_child(geo_json)
c:\users\himanshu poddar\appdata\local\programs\python\python36-32\lib\site-packages\folium\features.py in __init__(self, data, style_function, name, overlay, control, smooth_factor, highlight_function)
480 else: # This is a filename
481 with open(data) as f:
--> 482 self.data = json.loads(f.read())
483 elif data.__class__.__name__ in ['GeoDataFrame', 'GeoSeries']:
484 self.embed = True
c:\users\himanshu poddar\appdata\local\programs\python\python36-32\lib\json\__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
352 parse_int is None and parse_float is None and
353 parse_constant is None and object_pairs_hook is None and not kw):
--> 354 return _default_decoder.decode(s)
355 if cls is None:
356 cls = JSONDecoder
c:\users\himanshu poddar\appdata\local\programs\python\python36-32\lib\json\decoder.py in decode(self, s, _w)
337
338 """
--> 339 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
340 end = _w(s, end).end()
341 if end != len(s):
c:\users\himanshu poddar\appdata\local\programs\python\python36-32\lib\json\decoder.py in raw_decode(self, s, idx)
355 obj, end = self.scan_once(s, idx)
356 except StopIteration as err:
--> 357 raise JSONDecodeError("Expecting value", s, err.value) from None
358 return obj, end
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
</code></pre>
<p>Here is what I have tried</p>
<pre><code># download countries geojson file
!wget --quiet https://ibm.box.com/shared/static/cto2qv7nx6yq19logfcissyy4euo8lho.json -O world_countries.json
print('GeoJSON file downloaded!')
world_geo = r'world_countries.json' # geojson file
# create a plain world map
world_map = folium.Map(location=[0, 0], zoom_start=2, tiles='Mapbox Bright')
# generate choropleth map using the total immigration of each country to Canada from 1980 to 2013
world_map.choropleth(
geo_data=world_geo,
data=df_can,
columns=['Country', 'Total'],
key_on='feature.properties.name',
fill_color='YlOrRd',
fill_opacity=0.7,
line_opacity=0.2,
legend_name='Immigration to Canada'
)
# display map
world_map
</code></pre>
<p>Note that I have tried all suggested solutions to this question and none of them worked for me.</p>
|
<p>In my case the error was that <code>geo_data</code> in the <code>choropleth()</code> function was not getting the correct value.</p>
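<p>A quick way to check this (a sketch, assuming the file path from the question) is to load the file yourself before handing it to Folium; if the download failed, <code>json.load</code> raises the same <code>JSONDecodeError</code>:</p>
<pre><code>import json

with open('world_countries.json') as f:
    world_geo = json.load(f)   # fails here if the file is empty or not valid JSON
</code></pre>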
|
python|pandas|maps|choropleth|folium
| 1 |
1,906,986 | 55,227,474 |
unexpected EOF while parsing on line that has nothing
|
<p><a href="https://i.stack.imgur.com/IKydr.png" rel="nofollow noreferrer">Why do I keep getting this error message when there is nothing on that line?</a></p>
<p>How come I keep getting this error message? There is literally nothing on the line (pic included), so I don't understand why I keep getting these messages. The only things I have in this program are:</p>
<pre><code>SURVEY QUESTION = "(insert question)"
SURVEY_RESULTS = [0, 1, 2, 3]
</code></pre>
<p>and </p>
<pre><code>print(SURVEY_RESULTS)
</code></pre>
<p>Nothing seems to be working.</p>
|
<p>Look at what it's telling you. Unexpected EOF. That means it is trying to find the end of some previous construct in your code above, and not finding it. Perhaps an unterminated string literal, or a function missing an ending right-paren.</p>
<p>Perhaps it's the missing underscore in <code>SURVEY QUESTION</code>.</p>
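<p>In other words, the first assignment should probably read (a guess based on the snippet shown):</p>
<pre><code>SURVEY_QUESTION = "(insert question)"
SURVEY_RESULTS = [0, 1, 2, 3]
</code></pre>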
|
python|parsing|eof
| 0 |
1,906,987 | 57,563,348 |
How login to cgi with python requests passing a certificate?
|
<p>I'm trying to use the requests module in Python to handle cgi-bin to log in to a website, but I have to log in using a digital certificate (a .pfx file).</p>
<p>I'm using this code that I found <a href="https://gist.github.com/erikbern/756b1d8df2d1487497d29b90e81f8068" rel="nofollow noreferrer">https://gist.github.com/erikbern/756b1d8df2d1487497d29b90e81f8068</a></p>
<pre><code>@contextlib.contextmanager
def pfx_to_pem(pfx_path, pfx_password):
''' Decrypts the .pfx file to be used with requests. '''
with tempfile.NamedTemporaryFile(suffix='.pem') as t_pem:
f_pem = open(t_pem.name, 'wb')
pfx = open(pfx_path, 'rb').read()
p12 = OpenSSL.crypto.load_pkcs12(pfx, pfx_password)
f_pem.write(OpenSSL.crypto.dump_privatekey(OpenSSL.crypto.FILETYPE_PEM, p12.get_privatekey()))
f_pem.write(OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_PEM, p12.get_certificate()))
ca = p12.get_ca_certificates()
if ca is not None:
for cert in ca:
f_pem.write(OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_PEM, cert))
f_pem.close()
yield t_pem.name
with pfx_to_pem('path/file.pfx', 'password') as cert:
login_page = "https://zeusr.sii.cl/AUT2000/InicioAutenticacion/IngresoCertificado.html?https://misiir.sii.cl/cgi_misii/siihome.cgi"
headers = {
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3",
"Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "es-ES,es;q=0.9",
"Cache-Control": "max-age=0",
"Connection": "keep-alive",
"Cookie": "s_cc=true",
"Host": "herculesr.sii.cl",
"Origin": "https://zeusr.sii.cl",
"Referer": "https://misiir.sii.cl/cgi_misii/siihome.cgi",
"Sec-Fetch-Mode": "navigate",
"Sec-Fetch-Site": "same-site",
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36"
}
s = requests.Session()
s.cert = cert
print(s.cert)
r = s.get(login_page, cert=s.cert, headers=headers)
print(r.content)
</code></pre>
<p>When I sent with headers I received
<code>b'Error 400'</code>
and without headers I received the HTML document with a message that I am not logged in.</p>
|
<p>Well, finally I had to convert the .pfx file to a cert and a key, and it works easily; in the request I just have to add <code>verify=False</code>.</p>
<pre><code>s = requests.Session()
s.cert = cert
r = s.get(login_page, cert=("certificate.cert", "certkey.key"), verify=False)
print(r.content)
</code></pre>
|
python|pfx|cgi-bin
| 0 |
1,906,988 | 57,380,749 |
OWL2, SWRL: Query if item is in range of another item?
|
<p>My problem is that if I want to check whether an instance is in range, I use the following rules:</p>
<pre><code>Rule1: Error(?d), TimeRelatedError(?c), TimeRelatedError_start_at(?c, ?s), error_at(?d, ?b), greaterThan(?b, ?s) -> is_after_TimeRelatedError_start(?d, true)
Rule2: Error(?d), TimeRelatedError(?c), TimeRelatedError_end_at(?c, ?e), error_at(?d, ?b), lessThan(?b, ?e) -> is_before_TimeRelatedError_end(?d, true)
</code></pre>
<p>It works if I have only one TimeRelatedError in my ontology; if I have more instances, it will always trigger true (because one of the TimeRelatedError instances is always before/after the start/end point). Do you have any ideas how to solve this problem? I think I could tackle it if I somehow assigned my TimeRelatedError to the Error instance, but I do not know how. Please, OWL/SWRL professionals, help me with this task :)</p>
|
<p>I can only blame myself; I found a solution using only one rule:</p>
<pre><code>Error(?d), TimeRelatedError(?c), TimeRelatedError_start_at(?c, ?s), error_at(?d, ?b), greaterThan(?b, ?s), TimeRelatedError_end_at(?c, ?e), error_at(?d, ?b), lessThan(?b, ?e) -> in_range(?d, true)
</code></pre>
<p>It works because everything after greaterThan will only be evaluated if the statement (greaterThan) is true; in the end, if the end point also satisfies lessThan, we can call it in_range. I leave it here for others that stumble upon the same question.</p>
|
python|owl|swrl|owlready
| 0 |
1,906,989 | 42,543,306 |
Python Searching for Specific Entry
|
<p>So I have the code;</p>
<pre><code>names= [index + " - " + js[index]["name"] for index in js]
</code></pre>
<p>To search through this data:</p>
<pre><code>{ "1": {"name":"One"} },
{ "2": {"name":"Two"} },
{ "3": {"name":"Three"} },
</code></pre>
<p>How could I change it so I could set a variable to 2 earlier in the program and make the code only search for the name of 2?</p>
|
<p>Assuming js is a valid dict:</p>
<pre><code>js = { "1": {"name":"One"},
"2": {"name":"Two"},
"3": {"name":"Three"}}
</code></pre>
<p>You are building a list with all the values of "name". If you only want those indexes, where "name" equals "Two", include it in your list comprehension:</p>
<pre><code>>>> needle = "Two"
>>> names = ["{} - {}".format(index, js[index]["name"]) for index in js if js[index]["name"]==needle]
>>> print(names)
['2 - Two']
</code></pre>
<p><strong>EDIT</strong>: Regarding your comment, if you try to get the value "Two" for the key "2", you can directly access the dict the usual way:</p>
<pre><code>>>> needle="2"
>>> js[needle]["name"]
'Two'
</code></pre>
<p>In this specific, simplified case it would be easier to use a flat dictionary:</p>
<pre><code>js = { "1": "One",
"2": "Two",
"3": "Three"}
</code></pre>
<p>The access ("search") would then be:</p>
<pre><code>>>> js[needle]
'Two'
</code></pre>
|
python|json
| 0 |
1,906,990 | 54,045,859 |
Gremlin-Python: Returning a fully populated subgraph
|
<p>I am using the Gremlin-Python Client to query Gremlin Server with a janusgraph backend. </p>
<p>Running the following query:</p>
<pre><code>graph = Graph()
g = graph.traversal().withRemote(DriverRemoteConnection('ws://localhost:8182/gremlin','g'))
sg = g.E().subgraph('a').cap('a').next()
</code></pre>
<p>The query returns a subgraph containing a list of edges and vertices.</p>
<p>I have the following serializers configured on the server</p>
<pre><code>serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
</code></pre>
<blockquote>
<p>Does anyone know how to configure gremlin-server and a sample code to return a fully
populated subgraph?</p>
</blockquote>
<p><strong>Updated test case based on Stephen's feedback</strong></p>
<pre><code># DB: Janusgraph with Opensource Cassandra storage backend
# Data: v[41427160]--reports_to-->v[36712472]--reports_to-->v[147841048]
# Objective: get subgraph detached to python client with all properties of the vertex and edges
(py365)$ pip list | grep gremlinpython
gremlinpython 3.3.4
(py365)$ python
Python 3.6.5 (default, Apr 25 2018, 14:26:36)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from gremlin_python.driver import client
>>> from gremlin_python.driver.serializer import GraphSONSerializersV3d0
>>> session = client.Client('ws://localhost:8182/gremlin', 'g', message_serializer=GraphSONSerializersV3d0())
>>> query_parameters = {"vids": [41427160, 36712472]}
>>> query = "g.V(vids).outE('reports_to').subgraph('1').otherV().cap('1').next()"
>>> results = session.submit(query, query_parameters)
>>> for r in results:
... r_vertices = r[0]['@value'].get('vertices')
... r_edges = r[0]['@value'].get('edges')
... print(r)
... print(r_vertices)
... print(r_edges)
...
[{'@type': 'tinker:graph', '@value': {'vertices': [v[41427160], v[147841048], v[36712472]], 'edges': [e[{'@type': 'janusgraph:RelationIdentifier', '@value': {'relationId': '21y8ez-onxeg-f11-luviw'}}][41427160-reports_to->36712472], e[{'@type': 'janusgraph:RelationIdentifier', '@value': {'relationId': '225dz7-luviw-f11-2g0qvs'}}][36712472-reports_to->147841048]]}}]
[v[41427160], v[147841048], v[36712472]]
[e[{'@type': 'janusgraph:RelationIdentifier', '@value': {'relationId': '21y8ez-onxeg-f11-luviw'}}][41427160-reports_to->36712472], e[{'@type': 'janusgraph:RelationIdentifier', '@value': {'relationId': '225dz7-luviw-f11-2g0qvs'}}][36712472-reports_to->147841048]]
>>>
</code></pre>
<blockquote>
<p>Is it true that gremlinpython is so lightweight that, even when using
the script-based approach, only the necessary elements (id and label) are
detached as <em>"reference elements"</em> as part of the GraphSON?</p>
</blockquote>
|
<p>You can't fully return the result of <code>subgraph()</code> step as a <code>Graph</code> with Gremlin Python (or any other language variant for that matter). The problem is that Gremlin Python is meant to be a lightweight implementation of Gremlin and thus does not have a graph data structure instance to deserialize the returned data into. </p>
<p>At this time, the only workaround is to simply return the data that forms the graph and then you would have to store that data into something graph-like in Python. So perhaps you would do:</p>
<pre><code>g.E().project('edgeId','label','inId','outId').
by(id).
by(label).
by(inV().id()).
by(outV().id())
</code></pre>
<p>That would return the minimum data required for the structure of the subgraph as a <code>Map</code> and then you could do something with that data in Python. </p>
<p>The other option which I think is less recommended would be to submit a script with Python rather than use a bytecode based request. With a script you would get a GraphSON representation of the subgraph and then you could parse it as necessary to some data structure in Python. Here is the equivalent of the script you would need to send:</p>
<pre><code>gremlin> graph = g.E().hasLabel('knows').subgraph('sg').cap('sg').next()
==>tinkergraph[vertices:3 edges:2]
gremlin> mapper = GraphSONMapper.build().addRegistry(TinkerIoRegistryV3d0.instance())create().createMapper()
==>org.apache.tinkerpop.shaded.jackson.databind.ObjectMapper@f6de586
gremlin> mapper.writeValueAsString(graph)
==>{"@type":"tinker:graph","@value":{"vertices":[{"@type":"g:Vertex","@value":{"id":{"@type":"g:Int32","@value":1},"label":"person","properties":{"name":[{"@type":"g:VertexProperty","@value":{"id":{"@type":"g:Int64","@value":0},"value":"marko","label":"name"}}],"age":[{"@type":"g:VertexProperty","@value":{"id":{"@type":"g:Int64","@value":1},"value":{"@type":"g:Int32","@value":29},"label":"age"}}]}}},{"@type":"g:Vertex","@value":{"id":{"@type":"g:Int32","@value":2},"label":"person","properties":{"name":[{"@type":"g:VertexProperty","@value":{"id":{"@type":"g:Int64","@value":2},"value":"vadas","label":"name"}}],"age":[{"@type":"g:VertexProperty","@value":{"id":{"@type":"g:Int64","@value":3},"value":{"@type":"g:Int32","@value":27},"label":"age"}}]}}},{"@type":"g:Vertex","@value":{"id":{"@type":"g:Int32","@value":4},"label":"person","properties":{"name":[{"@type":"g:VertexProperty","@value":{"id":{"@type":"g:Int64","@value":6},"value":"josh","label":"name"}}],"age":[{"@type":"g:VertexProperty","@value":{"id":{"@type":"g:Int64","@value":7},"value":{"@type":"g:Int32","@value":32},"label":"age"}}]}}}],"edges":[{"@type":"g:Edge","@value":{"id":{"@type":"g:Int32","@value":7},"label":"knows","inVLabel":"person","outVLabel":"person","inV":{"@type":"g:Int32","@value":2},"outV":{"@type":"g:Int32","@value":1},"properties":{"weight":{"@type":"g:Property","@value":{"key":"weight","value":{"@type":"g:Double","@value":0.5}}}}}},{"@type":"g:Edge","@value":{"id":{"@type":"g:Int32","@value":8},"label":"knows","inVLabel":"person","outVLabel":"person","inV":{"@type":"g:Int32","@value":4},"outV":{"@type":"g:Int32","@value":1},"properties":{"weight":{"@type":"g:Property","@value":{"key":"weight","value":{"@type":"g:Double","@value":1.0}}}}}}]}}
</code></pre>
<p>We'll be reconsidering how subgraphing works for different language variants in future versions of TinkerPop but for now these are the only solutions that we have.</p>
|
gremlin|janusgraph|gremlin-server|gremlinpython
| 6 |
1,906,991 | 58,545,912 |
Why doesn't the next screen show up in kivy app?
|
<p>I am expecting the following kivy app to switch screens when I press and then release the button, but nothing happens and there's no error on the terminal. When I run the app, the GirisEkrani screen shows up, and then when I press and release the button in GirisEkrani, the next screen (GirisEkrani2) should show up. Do you know how to make that work?</p>
<pre><code>from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
Builder.load_string("""
<ekranApp>
GirisEkrani:
id: ge
Button:
text: "İleri"
on_release: ge.manager.current = ge.manager.next()
GirisEkrani2:
Button:
id: ge2
text: "Ileri 2"
on_release: ge2.manager.current = ge2.manager.next()
KontrolEkrani:
id: ke
Button:
text: "Geri"
on_release: ke.manager.current = ke.manager.previous()
""")
class GirisEkrani(Screen):
pass
class GirisEkrani2(Screen):
pass
class KontrolEkrani(Screen):
pass
class ekranApp(App, ScreenManager):
def build(self):
#root = ScreenManager()
#root.add_widget(GirisEkrani(name = "giris_ekrani"))
#root.add_widget(GirisEkrani2(name = "giris_ekrani2"))
#root.add_widget(KontrolEkrani(name = "kontrol_ekrani"))
return self
if __name__ == "__main__":
ekranApp().run()
</code></pre>
<p>While people seem to advocate the use of .kv files as opposed to using pure Python, I find it very frustrating to have no errors showing up when something doesn't work.</p>
|
<p>The <code>build()</code> method of an <code>App</code> is expected to return a <code>Widget</code> which will be the root of the <code>Widget</code> tree for your <code>App</code>, but yours returns the <code>App</code> itself. I suspect that will cause problems. However, your code is not working because you are not setting the names of the <code>Screens</code>. Here is a corrected version of your <code>kv</code>:</p>
<pre><code>Builder.load_string("""
<ekranApp>:
GirisEkrani:
id: ge
name: "giris_ekrani"
Button:
text: "İleri"
on_release: ge.manager.current = ge.manager.next()
GirisEkrani2:
id: ge2
name: "giris_ekrani2"
Button:
text: "Ileri 2"
on_release: ge2.manager.current = ge2.manager.next()
KontrolEkrani:
id: ke
name: "kontrol_ekrani"
Button:
text: "Geri"
on_release: ke.manager.current = ke.manager.previous()
""")
</code></pre>
<p>Also, your <code>id</code> for <code>GirisEkrani2</code> is actually set on the <code>Button</code>.</p>
|
python|kivy|screen
| 1 |
1,906,992 | 45,646,981 |
Error while calling method defined inside a class
|
<p>I am trying to create a class and I can't seem to get it to work. I'm fairly new to Python, so any assistance would be appreciated. Also, I'm not sure if this is the most efficient way to create and use an object. I am trying to build a well model and this is one piece of that model; once I get this simple issue figured out the rest should be fairly easy. Thanks.</p>
<pre><code>import sys
import os
import csv
import pyodbc
import pandas as pd
import pandas.io.sql as psql
from pandas import Series, DataFrame
from time import gmtime, strftime
#Drill Pipe Class
class DP:
#Properties
DP_ID = 1.00
DP_OD = 1.00
DP_Name = 'Drill Pipe'
#test global
idwel = '6683AFCEA5DF429CAC123213F85EB9B3'
#Constructor <- Accepts idwell to get info
def __init__(self,idwell):
self.id = idwell
#..
#WV DB connecton Function -> return as dataframe -Updated 7/5/17
def WV_Read_Query(_query):
try:
cnxn = pyodbc.connect("DSN=SQL_R_WV")
cur = cnxn.cursor()
df = psql.read_sql(_query, cnxn)
cnxn.close()
#print(df)
return df
except "Error":
return "Query Error...!"
#..
def get_DP_Data(_id):
_id = str(_id)
DP_Query = """Select Top 1
DS.des as 'dp_name',DS.SZIDNOM as 'dp_id',
DS.SZODNOM as 'dp_od',DS.SYSCREATEDATE as 'date'
From [dbo].[US_WVJOBDRILLSTRINGCOMP] DS
Where IDWELL = '""" + _id +"""'
AND Des = 'Drill Pipe' Order by SYSCREATEDATE Desc"""
mud_Data = WV_Read_Query(DP_Query)
return mud_Data
#..
DP_Table = get_DP_Data(id)
def get_DP_ID(self, DP_Table):
dp_id = DP_Table['dp_id']
return dp_id
#..
def get_DP_OD(self, DP_Table):
dp_od = DP_Table['dp_od']
return dp_od
#..
def get_Date(self, DP_Table):
u_date = DP_Table['date']
return u_date
#..
def get_Des(self, DP_Table):
des = DP_Table['dp_name']
return des
#..
#Print DP Info
def DP_Info(self):
Des = get_Des()
ID = get_DP_ID()
OD = get_DP_OD()
Updated = strftime("%Y-%m-%d %H:%M:%S", gmtime())
return Des + "\nDP Id:\t" + ID + "\nDP Id:\t" + OD + "\nUpdated:\t" + Updated
#..
#...
dp = DP('6683AFCEA5DF429CAC123213F85EB9B3')
dp_info = dp.DP_Info()
print(dp_info)
</code></pre>
<blockquote>
<p>Traceback (most recent call last): File "u:\Development\Python
Scripts\HCP\CUC Export Files 8_7_17\Well_Model.py", line 71, in
class DP: File "u:\Development\Python Scripts\HCP\CUC Export Files 8_7_17\Well_Model.py", line 108, in DP
DP_Table = get_DP_Data(id) File "u:\Development\Python Scripts\HCP\CUC Export Files 8_7_17\Well_Model.py", line 104, in
get_DP_Data
mud_Data = WV_Read_Query(DP_Query) NameError: name 'WV_Read_Query' is not defined</p>
</blockquote>
|
<p>If you are defining non-static, non-class methods within a class, the first argument is <em>always</em> an instance of that class. We usually call this argument <code>self</code>:</p>
<pre><code>def WV_Read_Query(self, _query):
...
</code></pre>
<p>And, </p>
<pre><code>def get_DP_Data(self, _id):
</code></pre>
<p>Furthermore, you call these methods on the instance, here <code>self</code>:</p>
<pre><code>self.WV_Read_Query(DP_Query)
</code></pre>
<p>You might wonder why the function is defined with 2 arguments but only 1 is passed. That's because the instance is <em>implicitly</em> passed as the first parameter, automatically.</p>
<p>This is equivalent to </p>
<pre><code>DP.WV_Read_Query(self, DP_Query)
</code></pre>
<p>Where you call the method on the class, but <em>explicitly</em> pass the instance to it. </p>
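<p>A tiny self-contained illustration of the same idea (the names are made up for the example):</p>
<pre><code>class Greeter:
    def greet(self, name):       # 'self' is the instance
        return "hello " + name

g = Greeter()
print(g.greet("world"))          # implicitly: Greeter.greet(g, "world")
</code></pre>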
<hr>
<p>Further reading:</p>
<ol>
<li><p><a href="https://docs.python.org/3/tutorial/classes.html#classes" rel="nofollow noreferrer">Python <code>classes</code></a></p></li>
<li><p><a href="https://stackoverflow.com/questions/1053592/what-is-the-difference-between-class-and-instance-methods">What is the difference between class and instance methods?</a> </p></li>
</ol>
|
python
| 2 |
1,906,993 | 28,492,865 |
Accessing nested values in JSON data from Twitch API
|
<p>I am using the code:</p>
<pre><code>getSource = requests.get("https://api.twitch.tv/kraken/streams")
text = json.loads(getSource.text)
print text["streams"]["_links"]["_self"]
</code></pre>
<p>text is a dictionary that I get by calling json.loads. I then try to get the nested values, eventually trying to get the url but this error shows up:</p>
<pre><code>Traceback (most recent call last):
File "Twitch Strim.py", line 11, in <module>
print text["streams"]["_links"]["_self"]
TypeError: list indices must be integers, not str
</code></pre>
<p>How can I correct this?</p>
|
<p>You are accessing the json data wrong, the correct way of accessing this would be:</p>
<pre><code>print text["streams"][0]["_links"]["self"]
</code></pre>
<p>Full code:</p>
<pre><code>>>> import requests
>>> import json
>>> getSource = requests.get("https://api.twitch.tv/kraken/streams")
>>> text = json.loads(getSource.text)
>>> print text["streams"][0]["_links"]["self"]
https://api.twitch.tv/kraken/streams/summit1g
>>> print text["streams"][1]["_links"]["self"]
https://api.twitch.tv/kraken/streams/tsm_theoddone
>>> print text["streams"][2]["_links"]["self"]
https://api.twitch.tv/kraken/streams/reynad27
</code></pre>
<p>Note that <code>text["streams"]</code> provides you a list of items (and not a dict), so you will have to access by its index values.</p>
<p>You can check the same using</p>
<pre><code>>>> type(text["streams"])
<type 'list'>
</code></pre>
|
python|dictionary|twitch
| 0 |
1,906,994 | 68,460,496 |
Pandas df long to wide and pivot?
|
<p>I have a pandas df like such:</p>
<p><a href="https://i.stack.imgur.com/VYvTA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VYvTA.png" alt="enter image description here" /></a></p>
<p>Here is the input data:</p>
<pre><code>[{'Region/Province': 'PHILIPPINES', 'Commodity': 'Atis [Sugarapple]', '2018 January': '..', '2018 February': '..'}, {'Region/Province': 'PHILIPPINES', 'Commodity': 'Avocado', '2018 January': '..', '2018 February': '..'}, {'Region/Province': 'PHILIPPINES', 'Commodity': 'Banana Bungulan, green', '2018 January': '12.57', '2018 February': '12.48'}, {'Region/Province': 'PHILIPPINES', 'Commodity': 'Banana Cavendish', '2018 January': '9.96', '2018 February': '8.8'}]
</code></pre>
<p>Where the columns after <code>commodity</code> are like this: 2018 January, 2018 February.. 2018 Annual all the way up to 2021.</p>
<p>But I need it like this:</p>
<p><a href="https://i.stack.imgur.com/kRx7v.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kRx7v.png" alt="enter image description here" /></a></p>
<p>Where there are repeated <code>Commodity</code> names, but split by year/month with the <code>Amount</code> being its own column. I've tried <code>pd.wide_to_long()</code> and it's close to what I need, but the years become their own columns.</p>
<p>Any help is much appreciated</p>
|
<p>Try <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a> with <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>str.split</code></a></p>
<pre><code>stacked = (
df.set_index(['Region/Province', 'Commodity'])
.stack()
.reset_index(name='Amount')
)
stacked[['Year', 'Month']] = stacked['level_2'].str.split(expand=True)
stacked = stacked.drop('level_2', axis=1)
</code></pre>
<p><code>stacked</code>:</p>
<pre><code> Region/Province Commodity Amount Year Month
0 PHILIPPINES Atis [Sugarapple] .. 2018 January
1 PHILIPPINES Atis [Sugarapple] .. 2018 February
2 PHILIPPINES Avocado .. 2018 January
3 PHILIPPINES Avocado .. 2018 February
4 PHILIPPINES Banana Bungulan, green 12.57 2018 January
5 PHILIPPINES Banana Bungulan, green 12.48 2018 February
6 PHILIPPINES Banana Cavendish 9.96 2018 January
7 PHILIPPINES Banana Cavendish 8.8 2018 February
</code></pre>
<hr />
<p>or <a href="https://pandas.pydata.org/docs/reference/api/pandas.melt.html" rel="nofollow noreferrer"><code>melt</code></a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>str.split</code></a></p>
<pre><code>melt = df.melt(['Region/Province', 'Commodity'], value_name='Amount')
melt[['Year', 'Month']] = melt['variable'].str.split(expand=True)
melt = melt.drop('variable', axis=1)
</code></pre>
<p><code>melt</code>:</p>
<pre><code> Region/Province Commodity Amount Year Month
0 PHILIPPINES Atis [Sugarapple] .. 2018 January
1 PHILIPPINES Avocado .. 2018 January
2 PHILIPPINES Banana Bungulan, green 12.57 2018 January
3 PHILIPPINES Banana Cavendish 9.96 2018 January
4 PHILIPPINES Atis [Sugarapple] .. 2018 February
5 PHILIPPINES Avocado .. 2018 February
6 PHILIPPINES Banana Bungulan, green 12.48 2018 February
7 PHILIPPINES Banana Cavendish 8.8 2018 February
</code></pre>
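<p>One small follow-up (not part of the reshaping itself): the <code>'..'</code> placeholders keep <code>Amount</code> as strings, so if you need numbers you could coerce the column afterwards, assuming the usual <code>import pandas as pd</code>:</p>
<pre><code>melt['Amount'] = pd.to_numeric(melt['Amount'], errors='coerce')  # '..' becomes NaN
</code></pre>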
|
python|pandas|dataframe
| 5 |
1,906,995 | 41,554,277 |
How to use else inside Python's timeit
|
<p>I am new to using the timeit module, and I'm having a hard time getting multi-line code snippets to run inside timeit.</p>
<p>What works:</p>
<pre><code>timeit.timeit(stmt = "if True: print('hi');")
</code></pre>
<p>What does not work (these all fail to even run):</p>
<pre><code>timeit.timeit(stmt = "if True: print('hi'); else: print('bye')")
timeit.timeit(stmt = "if True: print('hi') else: print('bye')")
timeit.timeit(stmt = "if True: print('hi');; else: print('bye')")
</code></pre>
<p>I have found that I can use triple-quotes to encapsulate multi-line code segments, but I'd rather just type on one line.</p>
<p>Is there any way to use an else statement inside one line in timeit?</p>
|
<p>The string you provide is interpreted as source code, so you can use multiline strings with triple quotation marks, like</p>
<pre><code>>>> timeit.timeit(stmt = """if True: 'hi'
... else: 'bye'""")
0.015218939913108187
</code></pre>
<p><strong>or</strong> <code>\n</code> for newlines (but it looks pretty messy)</p>
<pre><code>>>> timeit.timeit(stmt = "if True: 'hi'\nelse: 'bye'")
0.015617805548572505
</code></pre>
<hr>
<p>You can also use a ternary <code>if-else</code> expression if you only need a single branch (so no newline is required):</p>
<pre><code>>>> timeit.timeit(stmt = "'hi' if True else 'bye'")
0.030958037935647553
</code></pre>
|
python|timeit
| 7 |
1,906,996 | 41,445,443 |
More Pythonic to write Functions with Regex
|
<p>I've got 20'000+ court documents I want to pull specific data points out of: date, document number, verdict. I am using Python and Regex to perform this. </p>
<p>The verdicts are in three languages (German, French and Italian) and some of them have slightly different formatting. I am trying to develop functions for the various data points that take this and the different languages into account. </p>
<p>I'm finding my functions very clumsy. Has anybody got a more pythonic way to develop these functions?</p>
<pre><code>def gericht(doc):
Gericht = re.findall(
r"Beschwerde gegen [a-z]+ [A-Z][a-züöä]+ ([^\n\n]*)", doc)
Gericht1 = re.findall(
r"Beschwerde nach [A-Za-z]. [0-9]+ [a-z]+. [A-Z]+ [a-z]+ [a-z]+[A-Za-z]+ [a-z]+ [0-9]+. [A-Za-z]+ [0-9]+ ([^\n\n]*)", doc)
Gericht2 = re.findall(
r"Revisionsgesuch gegen das Urteil ([^\n\n]*)", doc)
Gericht3 = re.findall(
r"Urteil des ([^\n\n]*)", doc)
Gericht_it = re.findall(
r"ricorso contro la sentenza emanata il [0-9]+ [a-z]+ [0-9]+ [a-z]+ ([^\n\n]*)", doc)
Gericht_fr = re.findall(
r"recours contre l'arrêt ([^\n\n]*)", doc)
Gericht_fr_1 = re.findall(
r"recours contre le jugement ([^\n\n]*)", doc)
Gericht_fr_2 = re.findall(
r"demande de révision de l'arrêt ([^\n\n]*)", doc)
try:
if Gericht != None:
return Gericht[0]
except:
None
try:
if Gericht1 != None:
return Gericht1[0]
except:
None
try:
if Gericht2 != None:
return Gericht2[0]
except:
None
try:
if Gericht3 != None:
return Gericht3[0]
except:
None
try:
if Gericht_it != None:
return Gericht_it[0]
except:
None
try:
if Gericht_fr != None:
Gericht_fr = Gericht_fr[0].replace('de la ', '').replace('du ', '')
return Gericht_fr
except:
None
try:
if Gericht_fr_1 != None:
Gericht_fr_1 = Gericht_fr_1[0].replace('de la ', '').replace('du ', '')
return Gericht_fr_1
except:
None
try:
if Gericht_fr_2 != None:
Gericht_fr_2 = Gericht_fr_2[0].replace('de la ', '').replace('du ', '')
return Gericht_fr_2
except:
None
</code></pre>
|
<p>The result of <code>re.findall()</code> is <em>never</em> <code>None</code>, so all those <code>if</code> statements testing this are superfluous. Then using <code>findall()</code> when you just want the first result does not make sense.</p>
<p>The replacing in the French results may remove too much. For instance, the <code>'du '</code> replacement does not just remove the word <em>du</em> but also affects words <em>ending</em> in <em>du</em>.</p>
<pre><code>def gericht(doc):
for pattern, is_french in [
(r'Beschwerde gegen [a-z]+ [A-Z][a-züöä]+ ([^\n]*)', False),
(
r'Beschwerde nach [A-Za-z]. [0-9]+ [a-z]+. [A-Z]+ [a-z]+'
r' [a-z]+[A-Za-z]+ [a-z]+ [0-9]+. [A-Za-z]+ [0-9]+ ([^\n]*)',
False
),
(r'Revisionsgesuch gegen das Urteil ([^\n]*)', False),
(r'Urteil des ([^\n]*)', False),
(
r'ricorso contro la sentenza emanata il [0-9]+ [a-z]+ [0-9]+'
r' [a-z]+ ([^\n]*)',
False
),
(r"recours contre l'arrêt ([^\n]*)", True),
(r'recours contre le jugement ([^\n]*)', True),
(r"demande de révision de l'arrêt ([^\n]*)", True),
]:
match = re.search(pattern, doc)
if match:
result = match.group(1)
if is_french:
for removable in [' de la ', ' du ']:
result = result.replace(removable, ' ')
return result
return None
</code></pre>
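<p>A quick usage sketch (the document text here is invented purely to show the return value):</p>
<pre><code>doc = "Beschwerde gegen den Entscheid des Obergerichts des Kantons Bern vom 1. Januar 2016\n..."
print(gericht(doc))  # prints 'des Obergerichts des Kantons Bern vom 1. Januar 2016'
</code></pre>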
|
python|regex
| 0 |
1,906,997 | 6,852,028 |
Django ManyToMany field transformation on a ModelForm
|
<p>I have this ManyToMany field in my model that I have overridden with a CharField that receives a CSV list of the second model's name attribute. </p>
<pre><code>class PostForm(ModelForm):
tests = CharField(label="tests")
class Meta:
model = Post
fields = ('title','body')
def clean_tests(self):
# Here I clean, create or retrieve and return a list of Test objects.
</code></pre>
<p>Now, saving and validating work fine with this code; my problem comes when I create the PostForm with an existing instance, like <code>PostForm(instance=current_post)</code>. </p>
<p>The CharField should contain a CSV list, but it contains nothing. Obviously this happens because there is no conversion from the list of Test objects to a list of test names. The problem is that I do not know where to put that code; I see no method I could override to get this done. I've looked into the initial data and default properties of fields. </p>
|
<p>I'm not sure if there's a method you could override to do this -- from a look at <a href="https://code.djangoproject.com/browser/django/trunk/django/forms/models.py#L224" rel="nofollow">the <code>BaseModelForm</code> constructor</a> though, it looks perfectly okay to specify both the <code>instance</code> and <code>initial</code> keyword arguments together -- the <code>instance</code> is converted into a dict (subject to the <code>fields</code> and <code>exclude</code> options in <code>Meta</code>), and that dict's <code>update</code> method is called with <code>initial</code>. Something like this should work:</p>
<pre><code># build your csv list somehow (just speculation here)
tests = ','.join(test.name for test in current_post.tests.all())
form = PostForm(instance=current_post, initial={'tests': tests})
</code></pre>
|
python|django|django-models|django-forms
| 0 |
1,906,998 | 6,318,156 |
Adding Python to PATH on Windows
|
<p>I've been trying to add the Python path to the command line on Windows, yet no matter the method I try, nothing seems to work. I've used the <code>set</code> command, I've tried adding it through the Edit Environment Variables prompt, etc.</p>
<p>Furthermore, if I run the set command on the command line it lists this.</p>
<pre><code>python = c:\python27
</code></pre>
<p>Yet it still doesn't recognize the Python command.</p>
<p>Reading the documentation, and various other sources haven't seemed to help.</p>
<p>Just to clarify further, I've appended the path of the Python executable to PATH in the Edit Environment prompt. Doesn't seem to work.</p>
|
<ol>
<li>Hold <kbd>Win</kbd> and press <kbd>Pause</kbd>.</li>
<li>Click Advanced System Settings.</li>
<li>Click Environment Variables.</li>
<li>Append <code>;C:\python27</code> to the <code>Path</code> variable.</li>
<li>Restart Command Prompt.</li>
</ol>
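<p>Once a new Command Prompt is open, a quick way to confirm the right interpreter is picked up (assuming the install really is in <code>C:\python27</code>):</p>
<pre><code>&gt;&gt;&gt; import sys
&gt;&gt;&gt; sys.executable
'C:\\python27\\python.exe'
</code></pre>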
|
python|windows|python-2.7|path
| 270 |
1,906,999 | 25,787,594 |
Tkinter gui calculator
|
<p>Hey, I want to make it so that when a button on my calculator is pressed, the number of the clicked button is shown on screen. Would anyone have an idea of how to do this in the most efficient way possible?
Here is my code so far.</p>
<pre><code>import tkinter
window = tkinter.Tk()
window.geometry("150x170")
lbl1 = tkinter.Label(text = "Calculator").grid(row = 1, column = 4)
displaynum = 0
lb = tkinter.Label(text = displaynum, borderwidth = 4).grid(row = 2, column = 4)
def displayednum():
global displaynum
lbl = tkinter.Label(text = displaynum, borderwidth = 4).grid(row = 2, column = 4)
for n in range(1,4):
btn1 = tkinter.Button(text = n,borderwidth = 2, command = changenumber ).grid(row = 3,column = n*2)
lbl2 = tkinter.Label(text = " ").grid(row = 3, column = n*2+1)
for u in range(1,4):
btn2 = tkinter.Button(text = u+3,borderwidth = 2).grid(row = 6, column = u*2)
lbl3 = tkinter.Label(text = " ").grid(row = 6, column = u*2+1)
for m in range(1,4):
btn2 = tkinter.Button(text = m+6,borderwidth = 2).grid(row = 9, column = m*2)
lbl4 = tkinter.Label(text = " ").grid(row = 9, column = m*2+1)
window.mainloop()
</code></pre>
<p>Anyone got a solution?</p>
|
<p>The function to change the label text could be implemented like this:</p>
<pre><code>def changenumber(label, val):
label.config(text=str(val))
</code></pre>
<p>However, you should be careful when assigning those variables:
both <code>lbl1</code> and <code>lb</code> are <code>None</code>, because <code>grid()</code> returns <code>None</code>, not the widget.
You should create the widget and call <code>grid()</code> separately, like this:</p>
<pre><code>lbl1 = tkinter.Label(text = "Calculator")
lbl1.grid(row = 1, column = 4)
lb = tkinter.Label(text = displaynum, borderwidth = 4)
lb.grid(row = 2, column = 4)
</code></pre>
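<p>To actually update the display when a button is pressed, you could pass each button's value through a default-argument lambda (a sketch adapted to the loop in the question, using the <code>lb</code> label created above):</p>
<pre><code>for n in range(1, 4):
    btn1 = tkinter.Button(text=n, borderwidth=2,
                          command=lambda val=n: changenumber(lb, val))
    btn1.grid(row=3, column=n*2)
</code></pre>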
|
python|tkinter|calculator
| 0 |