Unnamed: 0 (int64) | id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64)
---|---|---|---|---|---|---
1,901,800 | 42,667,348 |
Running Jupyter notebook with python 3
|
<p>So I installed the latest version of Anaconda, then created an environment with Python 3. To ensure that Python 3 is actually the one recognized, I first activated the environment and then typed <code>python</code>; below is what I got</p>
<pre><code>Python 3.5.3 |Continuum Analytics, Inc.| (default, Feb 22 2017, 21:28:42) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
</code></pre>
<p>So now that I made sure I already have Python 3, I tried to run <code>Jupyter Notebook</code>; however, it seems to only have a Python 2 kernel. And the kernel is not even linked to a specific environment, as shown in the image below</p>
<p><a href="https://i.stack.imgur.com/45bja.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/45bja.png" alt="enter image description here"></a></p>
<p>How can I add a Python 3 kernel? And why wasn't it recognized in the first place?</p>
|
<p>I had the same problem once. I ended up installing jupyter with pip3 to get the python3 kernel.</p>
|
python|python-3.x|ipython|ipython-notebook|jupyter-notebook
| 0 |
1,901,801 | 72,467,176 |
How to import all possible modules and pass ones that don't exist?
|
<p>Sorry if the title's confusing (feel free to edit if you think you can explain it better).</p>
<p>I want to import all modules (separate Python scripts) named A-H, but there is uncertainty about whether they will exist or not. I want to just ignore them if they don't exist and import them if they do.</p>
<p>I have worked out a way, but it's long and seems unnecessary, and I feel like there must be a better way to do it. Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>try:
from scripts import A
except:
pass
try:
from scripts import B
except:
pass
try:
from scripts import C
except:
pass
try:
from scripts import D
except:
pass
try:
from scripts import E
except:
pass
try:
from scripts import F
except:
pass
try:
from scripts import G
except:
pass
try:
from scripts import H
except:
pass
</code></pre>
<p>How can I tidy this up?</p>
|
<h2>Method 1:</h2>
<p>Not the best practice, but you can try this:</p>
<pre><code>from scripts import *
</code></pre>
<p>Note that this imports everything, and thus has the potential to substantially pollute your namespace.</p>
<h2>Method 2:</h2>
<pre><code>modules = 'A B C D E F G H'.split()
for module in modules:
try:
globals()[module] = __import__(module)
except:
pass
</code></pre>
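<p>A common alternative to <code>__import__</code> is the stdlib <code>importlib</code> module; below is a minimal sketch of the same skip-if-missing idea (the module names here are stand-ins so the snippet runs anywhere; for the question's case you would use <code>importlib.import_module('scripts.' + name)</code>):</p>

```python
import importlib

# Candidate module names: some exist, some don't (stand-ins for scripts.A..H).
candidates = ['json', 'no_such_module_xyz', 'math']
loaded = {}
for name in candidates:
    try:
        loaded[name] = importlib.import_module(name)
    except ImportError:
        pass  # module is absent; skip it silently

print(sorted(loaded))  # ['json', 'math']
```

<p>Catching <code>ImportError</code> specifically (rather than a bare <code>except:</code>) avoids hiding unrelated errors raised while a module is being imported.</p>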
|
python|import|module|code-cleanup
| 1 |
1,901,802 | 65,697,084 |
inconsistent result in python API for z3
|
<p>I am working with the OCaml bindings for Z3 in order to name each constraint when getting the unsat core, such as</p>
<pre><code>(set-option :produce-unsat-cores true)
(declare-fun x () Int)
(assert (!(> x 0) :named p1))
(assert (!(not (= x 1)) :named p2))
(assert (!(< x 0) :named p3))
(check-sat)
(get-unsat-core)
</code></pre>
<p>The result is <code>unsat</code>. Then I found an API <code>assert_and_track</code> in the OCaml bindings for Z3 and a solution in Python (<a href="https://stackoverflow.com/questions/14560745/unsatisfiable-cores-in-z3-python">Unsatisfiable Cores in Z3 Python</a>), and then I have the following simple example:</p>
<pre><code>from z3 import *
x = Int('x')
s = Solver()
s.set(unsat_core=True)
s.assert_and_track(x > 0, 'p1')
s.assert_and_track(x != 1, 'p2')
s.assert_and_track(x < 0, 'p3')
print(s.check())
print(s.to_smt2())
c = s.unsat_core()
for i in range(0, len(c)):
print(c[i])
</code></pre>
<p>The result of this example is</p>
<pre><code>unsat
; benchmark generated from python API
(set-info :status unknown)
(declare-fun x () Int)
(declare-fun p1 () Bool)
(declare-fun p2 () Bool)
(declare-fun p3 () Bool)
(assert
(let (($x7 (> x 0)))
(=> p1 $x7)))
(assert
(let (($x33 (and (distinct x 1) true)))
(=> p2 $x33)))
(assert
(let (($x40 (< x 0)))
(=> p3 $x40)))
(check-sat)
p1
p3
</code></pre>
<p>Then I copied the generated SMT2 format to z3 and found the result is <code>sat</code>. Why is the generated SMT2 code from the Python API inconsistent with Python, and how can I use this in OCaml?</p>
|
<p>The generated benchmark (via the call to <code>s.to_smt2()</code>) does not capture what the Python program is actually doing. This is unfortunate and confusing, but follows from how the internal constraints are translated. (It just doesn't know that you're trying to do an unsat-core.)</p>
<p>The good news is it's not hard to fix. You need to get the generated benchmark, and add the following line at the top:</p>
<pre><code>(set-option :produce-unsat-cores true)
</code></pre>
<p>Then, you need to replace <code>(check-sat)</code> with:</p>
<pre><code>(check-sat-assuming (p1 p2 p3))
(get-unsat-core)
</code></pre>
<p>So, the final program looks like:</p>
<pre><code>; benchmark generated from python API
(set-info :status unknown)
(set-option :produce-unsat-cores true)
(declare-fun x () Int)
(declare-fun p1 () Bool)
(declare-fun p2 () Bool)
(declare-fun p3 () Bool)
(assert
(let (($x7 (> x 0)))
(=> p1 $x7)))
(assert
(let (($x33 (and (distinct x 1) true)))
(=> p2 $x33)))
(assert
(let (($x40 (< x 0)))
(=> p3 $x40)))
(check-sat-assuming (p1 p2 p3))
(get-unsat-core)
</code></pre>
<p>If you run this final program, you'll see it'll print:</p>
<pre><code>unsat
(p1 p3)
</code></pre>
<p>as expected.</p>
<p>Regarding how to do this from OCaml: you need to use the <code>get_unsat_core</code> function; see here: <a href="https://github.com/Z3Prover/z3/blob/master/src/api/ml/z3.ml#L1843" rel="nofollow noreferrer">https://github.com/Z3Prover/z3/blob/master/src/api/ml/z3.ml#L1843</a></p>
|
python|ocaml|z3
| 1 |
1,901,803 | 65,531,899 |
how to run apply_gradients for more than one trainable_weights in TensorFlow 2.0
|
<p>I've built an encoder-decoder-like deep learning model with TensorFlow 2.0, and for certain reasons the encoder and decoder are two separate models, and I would like to optimize the two models with the optimizer.apply_gradients() function. The <a href="https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Optimizer" rel="nofollow noreferrer">TensorFlow document</a> did not provide too much information for this case. It shows that the <em><strong>grads_and_vars</strong></em> should be a <em>"List of (gradient, variable) pairs"</em>. I tried two methods, as shown below:</p>
<p>1st method:
<code>optimizer.apply_gradients(zip(grads, self._encoder.trainable_weights, self._decoder.trainable_weights))</code></p>
<p>2nd method:
<code>optimizer.apply_gradients([zip(grads, self._encoder.trainable_weights), zip(grads, self._decoder.trainable_weights)])</code></p>
<p>and neither works. What is the right way to do it?</p>
|
<p>You can try a training loop like the following code, referenced from <a href="https://www.tensorflow.org/tutorials/generative/dcgan" rel="nofollow noreferrer">DCGAN tutorial</a>.</p>
<pre><code>with tf.GradientTape() as enc_tape, tf.GradientTape() as dec_tape:
feature_space = encoder(images, training=True)
recovered_image = decoder(feature_space, training=True)
#Some custom loss function...
loss = tf.keras.losses.MSE(recovered_image, images)
gradients_of_encoder = enc_tape.gradient(loss , encoder.trainable_variables)
gradients_of_decoder = dec_tape.gradient(loss, decoder.trainable_variables)
enc_optimizer.apply_gradients(zip(gradients_of_encoder , encoder.trainable_variables))
dec_optimizer.apply_gradients(zip(gradients_of_decoder , decoder.trainable_variables))
</code></pre>
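<p>The key point is that <code>apply_gradients</code> expects a flat list of (gradient, variable) pairs, so another common pattern is to concatenate the two variable lists and use a single tape and optimizer, e.g. <code>grads = tape.gradient(loss, encoder.trainable_variables + decoder.trainable_variables)</code>. What the pairing looks like, sketched in plain Python with stand-in names rather than real TF tensors:</p>

```python
# Stand-ins for trainable variables and their gradients (not real TF objects).
enc_vars = ['enc_w', 'enc_b']
dec_vars = ['dec_w', 'dec_b']
grads = ['g1', 'g2', 'g3', 'g4']  # one gradient per variable, in matching order

# zip() over the concatenated variable list yields the flat
# [(gradient, variable), ...] structure that apply_gradients expects.
pairs = list(zip(grads, enc_vars + dec_vars))
print(pairs)
# [('g1', 'enc_w'), ('g2', 'enc_b'), ('g3', 'dec_w'), ('g4', 'dec_b')]
```

<p>This also shows why the question's first attempt fails: <code>zip(grads, enc_vars, dec_vars)</code> would produce 3-tuples, not the required 2-tuples.</p>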
|
tensorflow|keras|tensorflow2.0|tensorflow2.x
| 2 |
1,901,804 | 50,942,677 |
tensorflowjs how to get inner layer output in a cnn prediction
|
<p>I am looking at tensorflow.js CNN example from <a href="https://js.tensorflow.org/tutorials/mnist.html" rel="nofollow noreferrer">tfjs</a>.</p>
<p>The testing repo can be found here: <a href="https://github.com/tensorflow/tfjs-examples/tree/master/mnist" rel="nofollow noreferrer">testing repo</a>.</p>
<p>Is there any way I can get outputs from each layer? </p>
<pre><code> async showPredictions() {
const testExamples = 1;
// const testExamples = 100;
const batch = this.data.nextTestBatch(testExamples);
tf.tidy(() => {
const output: any = this.model.predict(batch.xs.reshape([-1, 28, 28, 1]));
output.print();
const axis = 1;
const labels = Array.from(batch.labels.argMax(axis).dataSync());
const predictions = Array.from(output.argMax(axis).dataSync());
// ui.showTestResults(batch, predictions, labels);
});
}
</code></pre>
<p>Above is the prediction method from the tfjs example, but only the last layer is printed. How can I get outputs from each layer (including conv, max pooling and fully connect layers) in a prediction?</p>
|
<p>To get all inner layers, you can use the property <code>layers</code> of models. Once you get the layers you can use the properties <code>input</code> and <code>output</code> of each layer to define a new model or use the <a href="https://js.tensorflow.org/api/0.12.0/#tf.layers.Layer.apply" rel="nofollow noreferrer">apply</a> method.</p>
<p>Similar questions have been asked <a href="https://stackoverflow.com/questions/51483897/get-the-layers-from-one-model-and-assign-it-to-another-model">here</a> and <a href="https://stackoverflow.com/questions/51483285/print-all-layers-output">there</a></p>
|
javascript|tensorflow|machine-learning|deep-learning|tensorflow.js
| 3 |
1,901,805 | 50,642,777 |
Set data type for specific column when using read_csv from pandas
|
<p>I have a large csv file (~10GB), with around 4000 columns. I know that most of the data I expect is int8, so I set:</p>
<pre><code>pandas.read_csv('file.dat', sep=',', engine='c', header=None,
na_filter=False, dtype=np.int8, low_memory=False)
</code></pre>
<p>Thing is, the final column (4000th position) is int32. Is there a way I can tell read_csv to use int8 by default and, at the 4000th column, use int32?</p>
<p>Thank you</p>
|
<p>If you are certain of the number of columns, you could create the dictionary like this:</p>
<pre><code>dtype = dict(zip(range(4000),['int8' for _ in range(3999)] + ['int32']))
</code></pre>
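<p>The same mapping can also be written a bit more directly with a dict comprehension, which may be easier to read:</p>

```python
# int8 for columns 0..3998, int32 for the final column 3999.
dtype = {i: 'int8' for i in range(3999)}
dtype[3999] = 'int32'

print(len(dtype), dtype[0], dtype[3999])  # 4000 int8 int32
```
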
<p>Considering that this works:</p>
<pre><code>import pandas as pd
import numpy as np
data = '''\
1,2,3
4,5,6'''
fileobj = pd.compat.StringIO(data)
df = pd.read_csv(fileobj, dtype={0:'int8',1:'int8',2:'int32'}, header=None)
print(df.dtypes)
</code></pre>
<p>Returns:</p>
<pre><code>0 int8
1 int8
2 int32
dtype: object
</code></pre>
<p>From the docs:</p>
<blockquote>
<p>dtype : Type name or dict of column -> type, default None</p>
<p>Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32}
Use str or object to preserve and not interpret dtype. If converters
are specified, they will be applied INSTEAD of dtype conversion.</p>
</blockquote>
|
python|pandas
| 10 |
1,901,806 | 50,633,792 |
Errors in grocery shopping program in Python
|
<p>I'm new to Python here and need help with my program that I'm creating for my programming class. It's a basic grocery shopping list program that asks for the grocery name and quantity, and stores them both in a list. It also asks if they want paper or plastic. In the end, the application should output the list of groceries they want, the quantity, and their choice of paper or plastic bags. However, I'm getting the following error and can't proceed without fixing it:</p>
<pre><code>Traceback (most recent call last):
line 164, in <module>
grocery_list()
line 159, in grocery_list
total_quantity = calculate_total_groceries(quantity)
line 133, in calculate_total_groceries
while counter < quantity:
TypeError: '<' not supported between instances of 'int' and 'list'
</code></pre>
<p>Below is the code for the program:</p>
<pre><code>def get_string(prompt):
value = ""
value = input(prompt)
return value
def valid_real(value):
try:
float(value)
return True
except:
return False
def get_real(prompt):
value = ""
value = input(prompt)
while not valid_real(value):
print(value, "is not a number. Please provide a number.")
value = input(prompt)
return float(value)
def get_paper_or_plastic(prompt):
value = ""
value = input(prompt)
if value == "plastic" or value == "Plastic" or value == "paper" or value == "Paper":
return value
else:
print("That is not a valid bag type. Please choose paper or plastic")
value = input(prompt)
def y_or_n(prompt):
value = ""
value = input(prompt)
while True:
if value == "Y" or value == "y":
return False
elif value == "N" or value == "n":
return True
else:
print("Not a valid input. Please type Y or N")
value = input(prompt)
def get_groceries(grocery_name, quantity,paper_or_plastic):
done = False
counter = 0
while not done:
grocery_name[counter] = get_string("What grocery do you need today? ")
quantity[counter] = get_real("How much of that item do you need today?")
counter = counter + 1
done = y_or_n("Do you need anymore groceries (Y/N)?")
paper_or_plastic = get_paper_or_plastic("Do you want your groceries bagged in paper or plastic bags today?")
return counter
def calculate_total_groceries(quantity):
counter = 0
total_quantity = 0
while counter < quantity:
total_quantity = total_quantity + int(quantity[counter])
counter = counter + 1
return total_quantity
def grocery_list():
grocery_name = ["" for x in range (maximum_number_of_groceries)]
quantity = [0.0 for x in range (maximum_number_of_groceries)]
total_quantity = 0
paper_or_plastic = ""
get_groceries(grocery_name, quantity, paper_or_plastic)
total_quantity = calculate_total_groceries(quantity)
print ("Total number of groceries purchased is: ", total_quantity," and you have chosen a bage type of ", paper_or_plastic)
grocery_list()
</code></pre>
|
<p>Change </p>
<pre><code>while counter < quantity
</code></pre>
<p>to </p>
<pre><code>while counter < len(quantity)
</code></pre>
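<p>A minimal sketch of the corrected loop with some sample quantities, to show why comparing the counter against <code>len(quantity)</code> works (in Python 3, <code>counter &lt; quantity</code> compares an <code>int</code> with a <code>list</code>, which raises the <code>TypeError</code> shown in the traceback):</p>

```python
quantity = [2.0, 3.0, 1.0]  # sample quantities; the program stores floats

total_quantity = 0
counter = 0
while counter < len(quantity):  # compare against the list's length, not the list
    total_quantity = total_quantity + int(quantity[counter])
    counter = counter + 1

print(total_quantity)  # 6
```
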
|
python|python-3.x
| 1 |
1,901,807 | 50,454,068 |
Invalid Syntax error on try clause in Python
|
<p>First I apologize if this is formatted poorly, I've never asked a question here before. </p>
<p>I'm running python 2.7.15 in a virtualenv on win10-64. I'm trying to upload some test strings to a MySQL database but I'm getting the dumbest error and I don't know how to get around it. The MySQL Python/Connector should be installed correctly. Same with the GCP SDK. </p>
<pre><code>import mysql.connector
from mysql.connector import errorcode
# Config info will be moved into config file(s) after testing
# Google Proxy Connection (Proxy must be running in shell)
# C:\Users\USER\Google Drive\Summer Education\GCP
# $ cloud_sql_proxy.exe -instances="pdf2txt2sql-test"=tcp:3307
config1 = {
'user': 'USER',
'password': 'PASSWORD',
'host': 'IP',
'port': '3307',
'database': 'pdftxttest',
'raise_on_warnings': True,
}
# Direct Connection to Google Cloud SQL
config2 = {
'user': 'USER',
'password': 'PASSWORD',
'host': 'IP',
'database': 'pdftxttest',
'raise_on_warnings': True,
}
try:
cnx = mysql.connector.connect(**config1)
except mysql.connector.Error as err:
if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print("Something is wrong with your user name or password")
elif err.errno == errorcode.ER_BAD_DB_ERROR:
print("Database does not exist")
else:
print(err)
print("Connection not made")
cursor = cnx.cursor()
# Test information
id = str(1)
testtitle = str("Look a fake title")
teststring = str('thislistis representingaveryshort pdfwithfuckedup spaces')
add_pdf = ("INSERT INTO pdftexttest (id, title, text) VALUES (%s, %s, %s)", (id, testtitle, teststring)
try:
cursor.execute(add_pdf)
except mysql.connector.Error as err:
if err.errno == errorcode.ER_BAD_TABLE_ERROR:
print("no pdf for you")
else:
print(err)
print("here")
cnx.commit()
cursor.close()
cnx.close()
</code></pre>
<p>After running this code I get </p>
<pre><code>(env) C:\Users\USER\Google Drive\Summer Education\ProjPdf2Txt>python TXT2SQL.py
File "TXT2SQL.py", line 47
try:
^
SyntaxError: invalid syntax
</code></pre>
<p>I have some previous experience in java but I'm still a novice programmer. </p>
<p>If I remove the Try...Except clause and go straight to cursor.execute() the console tells me</p>
<pre><code>(env) C:\Users\USER\Google Drive\Summer Education\ProjPdf2Txt>python TXT2SQL.py
File "TXT2SQL.py", line 46
cursor.execute(add_pdf)
^
SyntaxError: invalid syntax
</code></pre>
|
<p>You were missing a parenthesis there.</p>
<p>add_pdf = ("INSERT INTO pdftexttest (id, title, text) VALUES (%s, %s, %s)", (id, testtitle, teststring)<strong>)</strong></p>
|
python|mysql|google-cloud-platform
| 2 |
1,901,808 | 45,077,683 |
How can I achieve the SQL equivalent of "Like" using Pandas
|
<p>Can I use regular expressions and isin() to perform the SQL LIKE statement?</p>
<p>I have a dataframe with the following values: </p>
<pre><code>my_list=['U*']
df = pd.DataFrame({'countries':['US','UK','Germany','China']})
df['node']=0
print(df)
df.loc[df['countries'].isin(my_list),'node']=100
print(df)
</code></pre>
<p>I wanted the node values for US and UK to be changed to 100.</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.startswith.html" rel="nofollow noreferrer"><code>str.startswith</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>str.contains</code></a> with <code>^</code> for start of strings for condition:</p>
<pre><code>print (df[df.countries.str.startswith('U')])
countries
0 US
1 UK
</code></pre>
<p>Or:</p>
<pre><code>print (df[df.countries.str.contains('^U')])
countries
0 US
1 UK
</code></pre>
<p>EDIT:</p>
<pre><code>df['node'] = np.where(df.countries.str.startswith('U'), 100, 0)
print (df)
countries node
0 US 100
1 UK 100
2 Germany 0
3 China 0
</code></pre>
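<p>Since the question's <code>my_list</code> uses a shell-style wildcard (<code>'U*'</code>) rather than a regex, the stdlib <code>fnmatch</code> module is one way to handle such patterns directly; a minimal sketch without pandas:</p>

```python
import fnmatch

countries = ['US', 'UK', 'Germany', 'China']

# fnmatch understands shell-style wildcards like 'U*' (LIKE 'U%' in SQL).
print(fnmatch.filter(countries, 'U*'))        # ['US', 'UK']
print(fnmatch.fnmatch('Germany', 'U*'))       # False
```

<p><code>fnmatch.translate('U*')</code> also produces a regex string that could be passed to <code>Series.str.match</code> if you want to stay inside pandas.</p>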
|
python|pandas
| 0 |
1,901,809 | 45,202,414 |
Performance differences in Tensorflow when nesting multiple operations at a single node
|
<p>I am somewhat new to Tensorflow, and was wondering what kind of performance considerations I should keep in mind when constructing a graph. </p>
<p>My main question is whether there is any change in the performance of a computation when multiple operations are nested at a single node, compared to assigning each operation to a separate node. For example, if I want to use batch normalization, followed by a dense layer, and a ReLU, I could structure it so that all three operations are performed at a single node:</p>
<pre><code>input=tf.placeholder(shape=[None,input_len],dtype=tf.float32)
output=tf.nn.relu(tf.matmul(tf.contrib.layers.batch_norm(input),W)+b)
</code></pre>
<p>or I could separate them into three separate nodes:</p>
<pre><code>input=tf.placeholder(shape=[None,input_len],dtype=tf.float32)
x1=tf.contrib.layers.batch_norm(input)
x2=tf.matmul(x1,W)+b
output=tf.nn.relu(x2)
</code></pre>
<p>Obviously this will affect the compactness/readability of the code, but does this also affect how TF implements the graph, and runs the computations? Is nesting operations at a single node discouraged, and if so is it because of performance issues, or just stylistic? </p>
<p>If it makes a difference, I am interested in running my computations on a gpu.</p>
|
<p>Both code fragments will generate identical TensorFlow graphs, and the resulting graphs will have the same performance characteristics.</p>
<p>To validate this assertion, you can look at the <code>tf.GraphDef</code> protocol buffer that TensorFlow builds by calling <code>print tf.get_default_graph().as_graph_def()</code> after running either code fragment. </p>
|
tensorflow
| 1 |
1,901,810 | 60,543,591 |
Python turtle: if statement with position sometimes works and sometimes doesn't
|
<p>I'm making an adventure/board game where the player can collect an item if he steps on it. So I made the function <code>check_item()</code> to check this: </p>
<pre><code>def check_is_inside_screen(check_x, check_y):
if check_x < -450 or check_x > 450 or check_y < -350 or check_y > 300:
return False
return True
def check_item(player_x, player_y):
global pick_armor, pick_key, pick_sword
if not pick_armor and -395 < player_x < -405 and 270 < player_y < 280:
armor.hideturtle()
pick_armor = True
elif not pick_sword and 395 < player_x < 405 and 270 < player_y < 280:
sword.hideturtle()
pick_sword = True
elif not pick_key and 395 < player_x < 405 and -320 < player_y < -330:
key.goto(400, 325)
pick_key = True
def move(move_x, move_y):
player.forward(-100)
px, py = player.pos()
if not check_is_inside_screen(px, py):
player.forward(100)
return
check_item(px, py)
def movePlayer():
player.onclick(move, 1)
</code></pre>
<p>The thing is, sometimes it works and sometimes it doesn't. I playtest the game and sometimes the <code>armor</code> turtle is successfully hidden, sometimes it just isn't. Also, the <code>sword</code> object usually isn't hidden and the <code>key</code> just doesn't work. I tried getting rid of the boolean parameters, but nothing works. It could also be useful to know that the function is called inside the <code>move()</code> function, which is called from the <code>onclick()</code> event. Basically, whenever I click on the player object, it moves, and after that it checks the position.</p>
|
<p>First, turtles crawl a <em>floating point</em> plane so tests like this will sometimes work, and sometimes fail:</p>
<pre><code>x == -400 and y == 275
</code></pre>
<p>as <code>x</code> could come back as <code>-400.0001</code>. You could coerce the points to integers:</p>
<pre><code>int(x) == -400 and int(y) == 245
</code></pre>
<p>or test if the positions fall within a range of values.</p>
<p>Second, this code in your <code>move()</code> function is suspect:</p>
<pre><code> player.forward(100)
return
tx, ty = player.pos()
check_item(tx, ty)
</code></pre>
<p>There shouldn't be code after a <code>return</code> at the same indentation level -- it will never be executed. I would have expected your code to be more like:</p>
<pre><code>def check_item(x, y):
global pick_armor, pick_key, pick_sword
x = int(x)
y = int(y)
if not pick_armor and x == -400 and y == 275:
armor.hideturtle()
pick_armor = True
elif not pick_sword and x == 400 and y == 275:
sword.hideturtle()
pick_sword = True
elif not pick_key and x == 400 and y == -325:
key.goto(400, 300)
pick_key = True
def move(x, y):
player.forward(-100)
tx, ty = player.pos()
if not -450 <= tx <= 450 or not -375 <= ty <= 325:
player.backward(-100)
return
check_item(tx, ty)
def movePlayer():
player.onclick(move, 1)
</code></pre>
<p>I couldn't test the above without more of your code to work with but hopefully you get the idea.</p>
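<p>Since the coordinates are floats, an alternative to coercing with <code>int()</code> is a small tolerance helper, a common pattern for comparing float positions (the name <code>near</code> and the tolerance value are illustrative choices, not from the question):</p>

```python
def near(value, target, tol=0.5):
    """True if value is within tol of target; a tolerant float comparison."""
    return abs(value - target) <= tol

print(near(-400.0001, -400))  # True: close enough counts as a hit
print(near(-398.7, -400))     # False: too far away
```

<p>A check like <code>near(x, -400) and near(y, 275)</code> would then replace the exact equality tests.</p>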
|
python|if-statement|boolean|global|turtle-graphics
| 1 |
1,901,811 | 60,598,563 |
Connection Problem to MSSQL Server with pyodbc from python
|
<p>I'm using Ubuntu and trying to connect to MSSQL Server from Python with <code>pyodbc</code>. I'm using PyCharm Professional.
I'm trying to connect to SQL Server but I'm getting</p>
<blockquote>
<p>pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'FreeTDS' : file not found (0) (SQLDriverConnect)").</p>
</blockquote>
<p>Here is my code:</p>
<pre><code> def read_data(self):
conn = pyodbc.connect('Trusted_Connection=yes', driver='FreeTDS', TDS_Version=7.3, server='XXXXXXXXXX/SQL2014',
port= 1433, database = 'YYYY')
</code></pre>
<p>I researched the documents; I've already installed ODBC and FreeTDS.</p>
|
<p>My problem is solved. The source of my problem was my local db settings. Thanks for your comments. </p>
|
python|ubuntu|pyodbc
| 0 |
1,901,812 | 58,085,043 |
overwrite slice of multi-index dataframe with series
|
<p>I have a multi-index dataframe and want to set a slice of one of its columns equal to a series, ordered (sorted) according to the match between the column slice's and the series' indexes. The column's innermost index and the series' index are identical, except for their ordering (sorting). (see example below)</p>
<p>I can do this by first sorting the series' index according to the column's index and then using series.values (see below), but this feels like a workaround and I was wondering if it's possible to directly assign the series to the column slice.</p>
<p>example:</p>
<pre><code> import pandas as pd
multi_index=pd.MultiIndex.from_product([['a','b'],['x','y']])
df=pd.DataFrame(0,multi_index,['p','q'])
s1=pd.Series([1,2],['y','x'])
df.loc['a','p']=s1[df.loc['a','p'].index].values
</code></pre>
<p>The code above gives the desired output, but I was wondering if the last line could be done more simply, e.g.:</p>
<pre><code> df.loc['a','p']=s1
</code></pre>
<p>but this sets the column slice to NaNs.</p>
<p>Desired output:</p>
<pre><code> p q
a x 2 0
y 1 0
b x 0 0
y 0 0
</code></pre>
<p>obtained output from df.loc['a','p']=s1:</p>
<pre><code> p q
a x NaN 0
y NaN 0
b x 0.0 0
y 0.0 0
</code></pre>
<p>It seems like a simple issue to me but I haven't been able to find the answer anywhere.</p>
|
<p>Have you tried something like this?</p>
<pre><code>df.loc['a']['p'] = s1
</code></pre>
<p>Resulting df is here</p>
<pre><code> p q
a x 2 0
y 1 0
b x 0 0
y 0 0
</code></pre>
|
python|pandas|dataframe|multi-index
| 0 |
1,901,813 | 18,529,506 |
IOError: No such file or directory:
|
<pre><code> # pick up the file which needs to be processed
current_file = file_names[0]
print "Processing current file: " + current_file
key = bucket.get_key(current_file)
print "Processing key: " + str(key)
key.get_contents_to_filename(working_dir + "test_stats_temp.dat")
print "Current directory: ",outputdir
print "File to process:",current_file
</code></pre>
<p>Processing test output for: ds=2013-08-27</p>
<p>Processing current file: output/test_count_day/ds=2013-08-27/task_201308270934_0003_r_000000</p>
<p>Processing key: Key: hadoop.test.com,output/test_count_day/ds=2013-08-27/task_201308270934_0003_r_000000</p>
<pre><code>Traceback (most recent call last):
File "queue_consumer.py", line 493, in <module>
test_process.load_test_cnv_stats_daily(datestring,working_dir,mysqlconn,s3_conn,test_output_bucket,test_output)
File "/home/sbr/aaa/test_process.py", line 46, in load_test_cnv_stats_daily
key.get_contents_to_filename(working_dir + "test_stats_temp.dat")
File "/usr/lib/python2.7/dist-packages/boto/s3/key.py", line 1275, in get_contents_to_filename
fp = open(filename, 'wb')
IOError: [Errno 2] No such file or directory: '/home/sbr/aaa/test_stats_temp.dat'
</code></pre>
<p>I got this error when I fetched data from the S3 output into the DB. I'm confused here. How can I handle this issue?</p>
|
<p>The error:</p>
<p><code>IOError: [Errno 2] No such file or directory: '/home/sbr/aaa/test_stats_temp.dat'</code></p>
<p>Indicates that the path set with <code>working_dir</code> does not exist. Creating the directory will fix it.</p>
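<p>A minimal sketch of creating the directory before writing (using the Python 3 <code>exist_ok</code> flag; on Python 2.7, as in the traceback, guard with <code>os.path.isdir</code> first — the temporary path below is a stand-in for <code>/home/sbr/aaa/</code>):</p>

```python
import os
import tempfile

# Hypothetical working directory standing in for '/home/sbr/aaa/'.
working_dir = os.path.join(tempfile.mkdtemp(), 'aaa') + os.sep
os.makedirs(working_dir, exist_ok=True)  # create the directory if it is missing

path = working_dir + 'test_stats_temp.dat'
with open(path, 'wb') as fp:  # the open() inside get_contents_to_filename now succeeds
    fp.write(b'stats')

print(os.path.exists(path))  # True
```
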
|
python-2.7|amazon-web-services|amazon-s3|boto
| 2 |
1,901,814 | 18,740,209 |
Get all possible combinations of characters in list
|
<p>I have a list <code>[".","_","<<",">>"]</code></p>
<p>What I need is to get all strings with length of 4 with all possible combinations where each character is one of the above list.</p>
<p>example :
<code>"._<<>>","__<<>>",".<<<<>>" ... etc</code></p>
<p>Now I am doing it for length of 4:</p>
<pre><code>mylist = [".","_","<<",">>"]
for c1 in mylist:
for c2 in mylist:
for c3 in mylist:
for c4 in mylist:
print "".join([c1,c2,c3,c4])
</code></pre>
<p>but that looks ugly, and what if I need to scale it up to length of 10 or more?</p>
|
<p>Use <a href="http://docs.python.org/2/library/itertools.html#itertools.product" rel="nofollow"><code>itertools.product()</code></a> to generate the combinations for you, without nested loops:</p>
<pre><code>from itertools import product
mylist = [".", "_", "<<", ">>"]
length = 4
for chars in product(mylist, repeat=length):
print ''.join(chars)
</code></pre>
<p>Simply adjust the <code>length</code> variable to get longer combinations.</p>
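<p>The number of generated strings grows as <code>len(mylist) ** length</code>, which is worth keeping in mind before scaling up; a quick check:</p>

```python
from itertools import product

mylist = [".", "_", "<<", ">>"]
combos = [''.join(chars) for chars in product(mylist, repeat=4)]

print(len(combos))            # 256, i.e. 4 ** 4
print(combos[0], combos[-1])  # first is '....', last is '>>>>>>>>'
```

<p>At length 10 this is already <code>4 ** 10 = 1048576</code> strings, so iterating over the <code>product()</code> generator lazily (as the answer does) is preferable to materializing a full list.</p>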
|
python
| 3 |
1,901,815 | 71,466,680 |
how to visualize sold Categories per each country
|
<p>Hey, I'm currently learning Pandas with some tutorials and I'm stuck now in this situation:</p>
<p>I have this dataset:</p>
<p><a href="https://i.stack.imgur.com/Y4s1Q.png" rel="nofollow noreferrer">Columns of the dataset</a></p>
<p>I want to visualize the sum of each category sold in a country, for example:</p>
<p>category "pizza" was sold "x" times in country "y"</p>
<p>What I did so far is <code>df_clean["PRODUCTLINE"].value_counts().plot(kind="pie", figsize=(10,10))</code>, which plots the frequency of each PRODUCTLINE in the dataset.</p>
<p>How can I visualize this in a graph?</p>
|
<p>You should use a groupby, grouping by category and country and aggregating the sales with sum.</p>
<pre><code>df_grouped=df_clean.groupby (["COUNTRY", "PRODUCTLINE"])["SALES"].sum ()
</code></pre>
<p>Then you can plot this object just calling the .plot() method.</p>
<p>I also suggest you to check some tutorial about groupby in Pandas, since it's one of the most important basic concepts.</p>
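<p>What the grouped sum computes, sketched in plain Python with made-up rows (the column names follow the dataset; the values are invented):</p>

```python
from collections import defaultdict

# Hypothetical (COUNTRY, PRODUCTLINE, SALES) rows.
rows = [
    ('France', 'pizza', 10.0),
    ('France', 'pizza', 5.0),
    ('France', 'pasta', 2.0),
    ('Japan',  'pizza', 7.0),
]

totals = defaultdict(float)
for country, product_line, sales in rows:
    totals[(country, product_line)] += sales  # same grouping keys as the groupby

print(totals[('France', 'pizza')])  # 15.0
print(totals[('Japan', 'pizza')])   # 7.0
```

<p>In pandas, calling <code>.unstack()</code> on the grouped result before <code>.plot(kind='bar')</code> gives one group of bars per country, which is often easier to read than a pie chart.</p>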
|
python|pandas|dataframe|data-visualization
| 0 |
1,901,816 | 69,471,967 |
How to exit a loop with Boolean variables in tKinter
|
<p>What I am trying to do is start a function after I press the Start button and be able to stop it, while it is looping, with the press of the Stop button, using Boolean variables and an if statement. I cannot figure out how to make it work.</p>
<pre><code>from tkinter import *
def listen():
if running==True:
print("running")
def startCommand():
global running
running=True
def stopCommand():
global running
running=False
print("stop")
running=True
root = Tk()
root.geometry("200x100")
startButton=Button(root, text="Start", command=startCommand)
startButton.pack()
exitButton=Button(root, text="Stop", command=stopCommand)
exitButton.pack()
root.after(100, listen)
root.mainloop()
</code></pre>
|
<p>It is a simple fix: <code>.after</code> schedules a function to run only once (after the set amount of milliseconds):</p>
<pre class="lang-py prettyprint-override"><code>def listen():
if running:
print("running")
root.after(100, listen)
</code></pre>
<p>So you reschedule the function to run again in the function itself too, if you want it to loop.</p>
<p>Btw, <code>if variable == True:</code> is the same as <code>if variable:</code> (and any other truthy object will also evaluate to True), and <code>if variable == False:</code> is the same as <code>if not variable:</code> (though <code>if not variable:</code> will also get executed if <code>variable</code> is an empty string (<code>""</code>), the integer <code>0</code>, <code>None</code>, or <code>False</code>)</p>
<p>Also:<br />
I strongly advise against using wildcard (<code>*</code>) when importing something, You should either import what You need, e.g. <code>from module import Class1, func_1, var_2</code> and so on or import the whole module: <code>import module</code> then You can also use an alias: <code>import module as md</code> or sth like that, the point is that don't import everything unless You actually know what You are doing; name clashes are the issue.</p>
<p>I strongly suggest following <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow noreferrer">PEP 8 - Style Guide for Python Code</a>. Function and variable names should be in <code>snake_case</code>, class names in <code>CapitalCase</code>. Don't have space around <code>=</code> if it is used as a part of keyword argument (<code>func(arg='value')</code>) but have space around <code>=</code> if it is used for assigning a value (<code>variable = 'some value'</code>). Have space around operators (<code>+-/</code> etc.: <code>value = x + y</code>(except here <code>value += x + y</code>)). Have two blank lines around function and class declarations.</p>
|
python|tkinter
| 1 |
1,901,817 | 69,606,197 |
TypeError: 'int' object is not callable facing this error while working with pywhatkit in python
|
<p>I was creating a personal assistant for which I was automating WhatsApp chat using the pywhatkit module in Python, and suddenly got this error, which I could not fix after working lots of hours on it. Someone please help.
What can I do to fix this problem?</p>
<p>The code:</p>
<pre><code> elif 'send a message' in query:
speak("tell me the phone number")
phone = str(takecommand())
speak("tell me the message")
message = str(takecommand())
speak("when should i send message")
speak("\ntime in hours")
time_hour = int(takecommand())
speak("time in minutes")
time_min = int(takecommand())
speak("how much should I wait before sending message")
wait_time = int(takecommand())
kit.sendwhatmsg(f"+91{phone}", message , time_hour(), time_min(), wait_time())
speak("Ok sir! sending message")
</code></pre>
<p>Error I receievd (After runing program):</p>
<pre><code>User said: send a message
: tell me the phone number
Listening...
recognizing
User said: xxxx xxx xxx
: tell me the message
Listening...
recognizing
User said: testing
: when should i send message
:
time in hours
Listening...
recognizing
User said: 21
: time in minutes
Listening...
recognizing
User said: 24
: how much should I wait before sending message
Listening...
recognizing
User said: 20
Traceback (most recent call last):
File "e:\AI Chat Bot\Project JARVIS\jarvis.py", line 122, in <module>
kit.sendwhatmsg(f"+91{phone}", message , time_hour(), time_min(), wait_time())
TypeError: 'int' object is not callable
</code></pre>
|
<p>You are trying to invoke a variable as if it were a function when it is an integer.</p>
<p>Change this line...</p>
<pre class="lang-py prettyprint-override"><code>kit.sendwhatmsg(f"+91{phone}", message , time_hour(), time_min(), wait_time())
</code></pre>
<p>To this...</p>
<pre class="lang-py prettyprint-override"><code>kit.sendwhatmsg(f"+91{phone}", message , time_hour, time_min, wait_time)
</code></pre>
|
python-3.x
| 0 |
1,901,818 | 55,186,928 |
Using recursion for permutations of a list
|
<p>So I've been struggling with this one. I'm close and have finally found a way to get somewhat of the desired output, but it now repeats in the generated list.</p>
<pre><code>input['a','r','t']
def permutations(string_list):
if len(string_list) <= 1:
return [string_list]
perm_list = []
for letter_index in range(len(string_list)):
perms_1 = string_list[letter_index]
rest = string_list[:letter_index] + string_list[letter_index + 1:]
for perms_2 in permutations(rest):
perm_list.append([perms_1] + perms_2)
return perm_list
</code></pre>
<p>output</p>
<pre><code>[[['a', 'r', 't'], ['a', 't', 'r'], ['r', 'a', 't'], ['r', 't', 'a'],
['t', 'a', 'r'], ['t', 'r', 'a']], [['a', 'r', 't'], ['a', 't', 'r'],
['r', 'a', 't'],
.........repeats.......repeats..
..for quite sometime but not infinite....]]]
</code></pre>
<p>DESIRED output </p>
<pre><code>[['a', 'r', 't'], ['a', 't', 'r'], ['r', 'a', 't'], ['r', 't', 'a'],
['t', 'a', 'r'], ['t', 'r', 'a']]
</code></pre>
<p>So it's permutation, but what is tripping me up is having to use the list of strings and outputting a list of lists of strings. I have redone this multiple times and have the basics of recursive permutations down if I were just using the string 'art' as input or producing a list output ['art','atr','rat', etc.], just not sure where I am going wrong. No import of itertools allowed, and I really wish I didn't need for loops, but a comprehension with the recursive call gives me the same results... any help or pointers appreciated. Not looking for just a redo, I want to understand.</p>
|
<p>Using this, you get the desired output:</p>
<pre><code>from itertools import permutations
inp = ['a', 'r', 't']
list(permutations(inp, 3))
</code></pre>
<p>Out:</p>
<pre><code>[('a', 'r', 't'),
('a', 't', 'r'),
('r', 'a', 't'),
('r', 't', 'a'),
('t', 'a', 'r'),
('t', 'r', 'a')]
</code></pre>
<p>The result is a list of tuples, but you can convert them to lists if you want.</p>
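<p>If <code>itertools</code> is off the table (as the question states), note that the recursive approach in the question is essentially sound; here is a cleaned-up, self-contained sketch of the same idea:</p>

```python
def permutations(items):
    # Base case: a list of 0 or 1 elements has exactly one permutation
    if len(items) <= 1:
        return [items]
    result = []
    for i in range(len(items)):
        first = items[i]
        # Everything except the element at index i
        rest = items[:i] + items[i + 1:]
        for perm in permutations(rest):
            result.append([first] + perm)
    return result

print(permutations(['a', 'r', 't']))
# [['a', 'r', 't'], ['a', 't', 'r'], ['r', 'a', 't'],
#  ['r', 't', 'a'], ['t', 'a', 'r'], ['t', 'r', 'a']]
```

<p>If you see the whole result repeated, check that you are not printing inside the recursion or re-appending the returned list at every level of the call.</p>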
|
python|recursion|permutation
| 0 |
1,901,819 | 57,400,215 |
Is there a way to make sure a highlighted parent widget remains highlighted when selecting a child widget
|
<p>I have a radiobutton that highlights the corresponding LabelFrame.</p>
<p>Each LabelFrame has an Entry widget as a child.
When the Entry widget is selected to type in some input, the parent LabelFrame loses the given highlightbackground color (from cyan to gray) but keeps the same highlightthickness.</p>
<p>Is there a way to keep the given highlightbackground color?</p>
<p>(windows 7 64, pycharm 2019.2)</p>
<p>Thanks in advance.</p>
<pre><code>from tkinter import *
from tkinter import ttk
import tkinter as tk
class doSomeStuff(Tk):
def __init__(self):
Tk.__init__(self)
self.radioBtnVar = StringVar() # radiobutton variable
# main canvas
pwdCanvas = tk.Canvas(self, bd=0, highlightthickness=0)
pwdCanvas.pack()
# choiceLabelFrame
choiceLabelFrame = ttk.LabelFrame(pwdCanvas, text='Choice LabelFrame (ttk)')
choiceLabelFrame.grid(column=0, row=11, columnspan=2, sticky='nsew')
# radio button 1
rbtn1 = ttk.Radiobutton(choiceLabelFrame, text='A', variable=self.radioBtnVar, value='PCG', command=self.colorLabels)
rbtn1.pack(side='left')
# radio button 2
rbtn2 = ttk.Radiobutton(choiceLabelFrame, text='B', variable=self.radioBtnVar, value='UG', command=self.colorLabels)
rbtn2.pack(side='right')
# LabelFrame1, left side
self.LabelFrame1 = tk.LabelFrame(pwdCanvas, text="LabelFrame 1 (tk)", bd=0) # I use tk to have access to the 'highlightbackground' option
self.LabelFrame1.grid(column=0, row=12, sticky='nsew', padx=3, pady=3)
entry1Label = ttk.Label(self.LabelFrame1, text='Entry 1')
entry1Label.grid(column=0, row=11, sticky='w')
self.labelEntry1 = ttk.Entry(self.LabelFrame1, state='disabled')
self.labelEntry1.grid(column=1, row=11, sticky='w')
# LabelFrame2, right side
self.LabelFrame2 = tk.LabelFrame(pwdCanvas, text="LabelFrame 2 (tk)", bd=0)
self.LabelFrame2.grid(column=1, row=12, sticky='nw', padx=3, pady=3)
entry2Label = ttk.Label(self.LabelFrame2, text='Entry 2')
entry2Label.grid(column=0, row=0)
labelEntry2 = ttk.Entry(self.LabelFrame2, state='disabled')
labelEntry2.grid(column=1, row=0)
def colorLabels(self): # activates and highlights the chosen option
if self.radioBtnVar.get() == 'PCG':
for child in self.LabelFrame1.winfo_children():
child.config(state='enabled')
self.LabelFrame1.config(highlightbackground='cyan', highlightthickness=2)
for child in self.LabelFrame2.winfo_children():
child.config(state='disabled')
self.LabelFrame2.config(highlightthickness=0)
elif self.radioBtnVar.get() == 'UG':
for child in self.LabelFrame2.winfo_children():
child.config(state='enabled')
self.LabelFrame2.config(highlightbackground='cyan', highlightthickness=2)
for child in self.LabelFrame1.winfo_children():
child.config(state='disabled')
self.LabelFrame1.config(highlightthickness=0)
if __name__ == "__main__":
app = doSomeStuff()
app.mainloop()
</code></pre>
|
<p>The <code>highlightthickness</code> attribute is specifically for highlighting which widget has the keyboard focus. It serves as a clue for the user when traversing the UI with the keyboard.</p>
<p>Because it is tied directly to which widget has focus, and because you can only have focus in one widget at a time, it's not possible to use that feature to highlight more than one thing at a time.</p>
|
python-3.x|tkinter|highlight
| 0 |
1,901,820 | 57,658,473 |
elapsed time between the two loops
|
<p>I want to measure the time difference (inseconds) between two lines of code.</p>
<pre><code>while ret:
ret, image_np=cap.read()
time_1
for condition:
if condition:
time_2
</code></pre>
<p>I want to subtract <code>(time_2) - (time_1)</code>. But the problem is that <code>time_1</code> always changes and I can't calculate the time.</p>
|
<p>You could store the values directly in an array and change your <code>time_1</code> value each time you append a value to the array. Here is what it could look like:</p>
<pre><code>from datetime import datetime
time_1 = datetime.now()
elapsed_time = []
# In my example I loop from 0 to 9 and take the elapsed time
# when the value is 0 or 5
for i in range(10):
if i in [0,5]:
elapsed_time.append(datetime.now()-time_1)
time_1 = datetime.now()
</code></pre>
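<p>If you only need the elapsed time as a float of seconds, <code>time.perf_counter()</code> (a monotonic, high-resolution clock) avoids the <code>datetime</code> arithmetic; same pattern:</p>

```python
import time

t1 = time.perf_counter()
elapsed_time = []

for i in range(10):
    if i in (0, 5):
        # Seconds (float) since the last checkpoint
        elapsed_time.append(time.perf_counter() - t1)
        t1 = time.perf_counter()

print(len(elapsed_time))  # 2
```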
|
python|time|elapsedtime
| 3 |
1,901,821 | 42,181,773 |
Pandas: How to not be using the copy/view?
|
<p>I have been looking all around and just cannot figure out how to rewrite this part of my program so it doesn't give me the "A value is trying to be set on a copy of a slice from a DataFrame" message.</p>
<p>here is how my code looks:</p>
<pre><code>list_of_dataframes=[df1,df2,df3,df4]
empty_list=[]
for df in list_of_dataframes:
df["new_column"]=df["column_x"].cumsum()
empty_list.append(df)
</code></pre>
<p>So I am wanting to take the cumsum of "column_x" and the "new_column" will then show that value.</p>
<p>thanks for any help.</p>
|
<p>Change in place:</p>
<pre><code>list_of_dataframes[:] = [df.assign(new_column=df['column_x'].cumsum())
                         for df in list_of_dataframes]
</code></pre>
<p>Creating new list:</p>
<pre><code>empty_list = [df.assign(new_column=df['column_x'].cumsum())
              for df in list_of_dataframes]
</code></pre>
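<p>A quick self-contained check of the idea (the column values below are just made-up sample data). <code>assign()</code> returns a new frame, so no SettingWithCopy warning is triggered:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"column_x": [1, 2, 3]})
df2 = pd.DataFrame({"column_x": [10, 20]})
list_of_dataframes = [df1, df2]

# Replace the list contents in place with frames that carry the cumsum column
list_of_dataframes[:] = [df.assign(new_column=df["column_x"].cumsum())
                         for df in list_of_dataframes]

print(list_of_dataframes[0]["new_column"].tolist())  # [1, 3, 6]
```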
|
python|pandas
| 1 |
1,901,822 | 53,846,451 |
Impossible to post JSON to ElasticSearch - No handler found for uri [/] and method [POST]
|
<p>I try to post some data, JSON format, from a mongoDB to a elasticSearch.</p>
<p>Here is my code : </p>
<p>First I extract data from mongoDB, then I try to post each "document" to my elastic, which adress is "<a href="http://127.0.0.1:9200" rel="nofollow noreferrer">http://127.0.0.1:9200</a>". I added an extra function "my converter(o)", to make datetime object serialyzable.</p>
<pre><code>from pymongo import MongoClient
import requests
import json
import datetime
mongo_host = 'localhost'
mongo_port = '27017'
client=MongoClient(mongo_host+':'+mongo_port)
db = client.cvedb
collection=db['cves']
def myconverter(o):
if isinstance(o, datetime.datetime):
return o.__str__()
try: db.command("serverStatus")
except Exception as e: print(e)
else: print("You are connected!")
cursor = collection.find({})
for document in cursor:
headers={"content-type":"application/x-www-form-urlencoded"}
url_base = 'http://127.0.0.1:9200'
data=document
data_parsed=(json.dumps(data, default = myconverter))
print("#####")
print("#####")
print(data_parsed)
print("#####")
print("#####")
req = requests.post(url_base,json=data_parsed)
print (req.status_code)
print (req.text)
print("####")
print("#####")
print (req.status_code)
print("#####")
print("####")
client.close()
</code></pre>
<p>But when I check my ES at the following address, "<a href="http://127.0.0.1:9200/_cat/indices" rel="nofollow noreferrer">http://127.0.0.1:9200/_cat/indices</a>", nothing appears. Here is what I get in my terminal: </p>
<pre><code>{"Modified": "2008-09-09 08:35:18.883000", "impact": {"confidentiality": "PARTIAL", "integrity": "PARTIAL", "availability": "PARTIAL"}, "summary": "KDE K-Mail allows local users to gain privileges via a symlink attack in temporary user directories.", "cvss": 4.6, "access": {"vector": "LOCAL", "authentication": "NONE", "complexity": "LOW"}, "vulnerable_configuration": ["cpe:2.3:a:kde:k-mail:1.1"], "_id": null, "references": ["http://www.redhat.com/support/errata/RHSA1999015_01.html", "http://www.securityfocus.com/bid/300"], "Published": "2000-01-04 00:00:00", "id": "CVE-1999-0735", "cvss-time": "2004-01-01 00:00:00", "vulnerable_configuration_cpe_2_2": ["cpe:/a:kde:k-mail:1.1"]}
#####
#####
400
No handler found for uri [/] and method [POST]
####
#####
400
#####
####
#####
#####
</code></pre>
<p>I tried to follow some posts which deal with the same issue, but nothing worked for me.
Any idea why it didn't work?</p>
|
<p>There are two issues</p>
<ol>
<li><p><code>url_base</code> is missing an index and type</p>
<p>url_base = '<a href="http://127.0.0.1:9200/index/type" rel="nofollow noreferrer">http://127.0.0.1:9200/index/type</a>'</p></li>
<li><p><code>headers</code> must be <code>application/json</code> (you're not seeing this yet, but once you solve the point above, you'll get this error, too).</p>
<p>headers={"content-type":"application/json"}</p></li>
</ol>
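<p>Putting both fixes together: the <code>index</code> and <code>type</code> names below are placeholders, so substitute your own. Preparing the request without sending it shows exactly what would go on the wire:</p>

```python
import json
import requests

# 'index' and 'type' are hypothetical -- use your real index/type names
url = "http://127.0.0.1:9200/index/type"
headers = {"content-type": "application/json"}

doc = {"id": "CVE-1999-0735", "cvss": 4.6}

# Build (but don't send) the request to inspect what would be posted
req = requests.Request("POST", url, headers=headers,
                       data=json.dumps(doc)).prepare()
print(req.url)                      # http://127.0.0.1:9200/index/type
print(req.headers["content-type"])  # application/json
```

<p>Then <code>requests.Session().send(req)</code> (or simply <code>requests.post(url, headers=headers, data=json.dumps(doc))</code>) actually ships it to the cluster.</p>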
|
python|elasticsearch|post|httphandler|http-status-code-400
| 1 |
1,901,823 | 54,080,791 |
Dictionary write to file by values
|
<p>I need to write a dictionary to a file, by values to keys. Meaning, if I have for example a dictionary of movies (=keys) and the actors who played in each movie, I need to write an actor to the file followed by all the movies he played in.</p>
<p>the keys are movies, the values are set of actors.</p>
<p>I managed to write movies and values, but as explained that is not what I want.</p>
<p>the file should be like:
Actor Name, Movie1,Movie2 etc...</p>
|
<p>The most straightforward thing to do is to invert the dictionary.</p>
<pre><code>actor2movies = {}
for movie, actors in movie2actors.items():
for actor in actors:
if actor not in actor2movies:
actor2movies[actor] = []
actor2movies[actor].append(movie)
</code></pre>
<p>(If you initialize <code>actor2movies</code> to <code>collections.defaultdict(list)</code> instead, you can omit the <code>if</code> statement.)</p>
<p>Then write the resulting dictionary out, one key and its values at a time.</p>
<pre><code>with open("foo.txt", "w") as f:
    for actor, movies in actor2movies.items():
        print("{},{}".format(actor, ",".join(movies)), file=f)
</code></pre>
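<p>With <code>collections.defaultdict</code> (mentioned above), the inversion shrinks to the following. The movie and actor names are made-up sample data:</p>

```python
from collections import defaultdict

# Sample input: movie -> set of actors
movie2actors = {"Movie1": {"Alice", "Bob"}, "Movie2": {"Alice"}}

# Invert to actor -> list of movies; missing keys start as empty lists
actor2movies = defaultdict(list)
for movie, actors in movie2actors.items():
    for actor in actors:
        actor2movies[actor].append(movie)

print(sorted(actor2movies["Alice"]))  # ['Movie1', 'Movie2']
```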
|
python|python-3.x|list|file|dictionary
| 2 |
1,901,824 | 58,308,021 |
A question about identity and boolean in Python3
|
<p>I have a question about identity in Python. I'm a beginner and I have read some material on the "is" and "is not" keywords, but I don't understand why the expression "False is not True is not True is not False is not True" evaluates to False in Python. To me, this expression should return True.</p>
|
<p>Python <a href="https://docs.python.org/3/reference/expressions.html#comparisons" rel="nofollow noreferrer">chains comparisons</a>:</p>
<blockquote>
<p>Formally, if <code>a, b, c, …, y, z</code> are expressions and <code>op1, op2, …, opN</code> are comparison operators, then <code>a op1 b op2 c ... y opN z</code> is equivalent to <code>a op1 b and b op2 c and ... y opN z</code>, except that each expression is evaluated at most once.</p>
</blockquote>
<p>Your expression is:</p>
<pre><code>False is not True is not True is not False is not True
</code></pre>
<p>Which becomes:</p>
<pre><code>(False is not True) and (True is not True) and (True is not False) and (False is not True)
</code></pre>
<p>Which is equivalent to:</p>
<pre><code>(True) and (False) and (True) and (True)
</code></pre>
<p>Which is <code>False</code>.</p>
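<p>You can check the expansion yourself in a few lines:</p>

```python
# The chained form, exactly as in the question
chained = False is not True is not True is not False is not True

# The equivalent explicit 'and' form from the language reference
expanded = ((False is not True) and (True is not True)
            and (True is not False) and (False is not True))

print(chained, expanded)  # False False
```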
|
python|boolean|identity
| 3 |
1,901,825 | 58,345,228 |
AttributeError: 'tuple' object has no attribute 'dumps'
|
<p>I'm new to Python, and I'm trying to serialize a list of custom objects. This is the object I try to serialize:</p>
<pre><code>test = [(deliveryRecipientObject){
deliveryType = "selected"
id = "gkfhgjhfjhgjghkj"
type = "list"
}]
</code></pre>
<p>After I read some post and tutorial I come up with this:</p>
<pre><code>class deliverRecipientObject(object):
def __init__(self):
self.deliveryType = ""
self.id = ""
self.type = ""
class MyJsonEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, deliverRecipientObject):
return {}
return (MyJsonEncoder, self).dumps(obj)
</code></pre>
<p>then I run:</p>
<pre><code>json.dumps(test, cls=MyJsonEncoder)
</code></pre>
<p>and then I got this error: AttributeError: 'tuple' object has no attribute 'dumps'</p>
<p>my goal is to read that as json and then I can flatten it and save it as csv</p>
<p>thank you</p>
|
<p>I think you may have intended <code>(MyJsonEncoder, self).dumps</code> to be <code>super(MyJsonEncoder, self).dumps</code>, though that would have been wrong too. Instead, you should be calling <code>super().default</code> (in python 3 you don't need to pass the arguments)</p>
<pre><code>class MyJsonEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, deliverRecipientObject):
return {
"deliveryType": obj.deliveryType,
"id": obj.id,
"type": obj.type,
}
return super().default(obj)
</code></pre>
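<p>A self-contained run of the corrected encoder (the field values are the ones from the question):</p>

```python
import json

class deliverRecipientObject(object):
    def __init__(self):
        self.deliveryType = ""
        self.id = ""
        self.type = ""

class MyJsonEncoder(json.JSONEncoder):
    def default(self, obj):
        # Turn our custom object into a plain dict that json can handle
        if isinstance(obj, deliverRecipientObject):
            return {
                "deliveryType": obj.deliveryType,
                "id": obj.id,
                "type": obj.type,
            }
        # Fall back to the base class for everything else
        return super().default(obj)

recipient = deliverRecipientObject()
recipient.deliveryType = "selected"
recipient.id = "gkfhgjhfjhgjghkj"
recipient.type = "list"

print(json.dumps([recipient], cls=MyJsonEncoder))
# [{"deliveryType": "selected", "id": "gkfhgjhfjhgjghkj", "type": "list"}]
```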
|
json|python-3.7
| 0 |
1,901,826 | 58,303,074 |
UnboundLocalError: local variable 'labels' referenced before assignment
|
<p>When I try to run this function it gives me this error: 'UnboundLocalError: local variable 'labels' referenced before assignment'. Can anyone help me, please?</p>
<pre><code>@app.route('/', methods=['GET', 'POST'])
def upload_file():
if request.method == 'POST' and 'photo' in request.files:
filename = photos.save(request.files['photo'])
file_url = photos.url(filename)
with io.open(filename, 'rb') as image_file:
content = image_file.read()
image = types.Image(content=content)
response = vision_client.label_detection(image=image)
labels = response.label_annotations()
return render_template('index.html', thelabels=labels)
</code></pre>
|
<p>The <code>labels</code> variable in your function is only assigned if the <code>if</code> statement's condition is <code>True</code>; otherwise the variable never gets created.</p>
<p>You need to set up an <code>else</code> statement to create the variable if the initial check doesn't return <code>True</code> (or give <code>labels</code> a default value before the <code>if</code> statement, as suggested by Ahmad):</p>
<pre><code>@app.route('/', methods=['GET', 'POST'])
def upload_file():
# Option 1: give `labels` a default value here - Doesn't have to be `None`
labels = None
if request.method == 'POST' and 'photo' in request.files:
filename = photos.save(request.files['photo'])
file_url = photos.url(filename)
with io.open(filename, 'rb') as image_file:
content = image_file.read()
image = types.Image(content=content)
response = vision_client.label_detection(image=image)
labels = response.label_annotations()
else: # Option 2: set `labels` to `None` in an `else` statement in case the `if` statement check returns False
labels = None
return render_template('index.html', thelabels=labels)
</code></pre>
|
python|flask
| 1 |
1,901,827 | 65,454,598 |
How to split arguments with a decimal point
|
<p>I am new to python and need some help</p>
<p>This is an Example of my problem:</p>
<pre class="lang-py prettyprint-override"><code>@bot.command()
async def example(ctx, *, arg, arg2):
await ctx.message.delete()
r = requests.get(f"https://example.api/image?first_text={arg}sec_text={arg2}")
await ctx.send(r)
</code></pre>
<p>If I execute the command like this: <code>{prefix}example argument one text, argument two text</code></p>
<p>It should return <code>example.api/image?first_text=argument one text&sec_text= argument two text</code></p>
|
<p>Here's one way you could split an argument. You can use <a href="https://www.w3schools.com/python/ref_string_split.asp" rel="nofollow noreferrer"><code>.split()</code></a> to separate <a href="https://discordpy.readthedocs.io/en/latest/ext/commands/commands.html" rel="nofollow noreferrer"><code>arguments</code></a> as seen below.</p>
<pre class="lang-py prettyprint-override"><code>@client.command()
async def split(ctx, *, arg):
x = arg.split(',')
for arg in x:
await ctx.send(arg)
</code></pre>
<p><strong>Working Result</strong>:</p>
<p><a href="https://i.stack.imgur.com/yID7M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yID7M.png" alt="Result if above works" /></a></p>
<p>Edit: <strong><code>But how do I implement it in the api url</code></strong></p>
<p>One thing you could do is check it as a list. Did you notice how I had the previous code loop through the <code>x</code> variable? When you <code>await ctx.send()</code> or <code>print</code> the variable <code>x</code> by itself, it gives you a list.</p>
<pre class="lang-py prettyprint-override"><code>@client.command()
async def split(ctx, *, arg):
x = arg.split(',')
print(x)
</code></pre>
<p>The above code would print: <code>['arg one', ' arg two', ' another arg']</code>. With this information, you can look through the list.</p>
<pre class="lang-py prettyprint-override"><code>@client.command()
async def split(ctx, *, arg):
x = arg.split(',')
await ctx.send(f"""
Here's one argument: {x[0]}
And another: {x[1]}
And a third one while we're at it: {x[2]}
""")
</code></pre>
<p><a href="https://i.stack.imgur.com/xIKMs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xIKMs.png" alt="Result for looking through list" /></a></p>
|
python|discord|discord.py
| 2 |
1,901,828 | 22,841,888 |
How would I alphabetically sort a (very long) list of objects using radix sort in Python?
|
<p>I am trying to sort lists within a list in alphabetical order using radix sort. I need the lists to be sorted by one of the attributes of an object I've created.</p>
<p>Note: I cannot use built-in sorts - I have to write my own. I was not allowed to use defaultdict so I used lists. </p>
<p>I have a list called results[]. In results[x] for all of results[] I have another list containing words of length x. Those words are stored as word objects containing the original word (originalWord), the alphabetized word (azWord), and its length (wLength). e.g. dog, dgo, 3.</p>
<p>Since I have many, many words, I've decided the radix sort would be most efficient for my purposes. I am relatively new to Python so I'm having some trouble writing the code correctly. I have a rough outline of what it should look like but I would appreciate help with fixing the syntax.</p>
<p>I am planning on using radix_sort in a for loop that iterates through results[]. I have a variable, maxL, that stores the longest word I have (aka the number of lists in results).</p>
<pre><code>for x in range(0,maxL):
radix_sort(results[x], x)
</code></pre>
<p>Here is my attempt at writing radix sort for strings. Please note that the azWord attribute is represented as a char list.</p>
<pre><code>def radix_sort(List, length):
buckets = [[], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []]
for i in range (0, length-1): #for every letter "column"
for word in List: #for every word
index = ord(word[i].azWord)-ord('a') #get the index of the word
buckets[index].append(word) #add word object to correct bucket
for containedList in buckets:
while(containedList):
#I'm having trouble here with emptying the lists back into the bins
</code></pre>
<p>EDIT: Also, since I don't want to run out of memory (this is being done for a very long list of words), should I be clearing some things as I go that I don't need?</p>
<p>Also, currently, Eclipse is giving me an error "Expected:: Expected::" for this line:</p>
<pre><code>for i in range (0, length-1)
</code></pre>
<p>Current version:</p>
<pre><code>def radix_sort(List, length):
buckets = [[], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []]
for i in range (length-1, -1, -1): #for every letter "column"
for word in List: #for every word
index = ord(word.azWord[i])-ord('a') #get the index of the word
buckets[index].append(word) #add word object to correct bucket
List[:] = []
for containedList in buckets:
List.extend(containedList)
</code></pre>
|
<p>To put the sorted results back into the list:</p>
<pre><code>List[:] = []
for containedList in buckets:
List.extend(containedList)
</code></pre>
<p>One more thing, you'll need to sort from least significant to most significant if you expect the proper sort:</p>
<pre><code>for i in range(length-1, -1, -1):
</code></pre>
<p>Note that your original range was incorrect anyway, the end point isn't included in the range so stopping at <code>length-1</code> was going to skip the last letter.</p>
|
python|sorting
| 1 |
1,901,829 | 45,644,709 |
Ansible playbook with nested python scripts
|
<p>I am trying to execute an Ansible playbook which uses the script module to run a custom Python script.
This custom Python script imports another Python script.
On execution of the playbook, the Ansible command fails while trying to import the util script. I am new to Ansible, please help!</p>
<p><strong>helloWorld.yaml:</strong></p>
<pre><code>- hosts: all
tasks:
- name: Create a directory
script: /ansible/ems/ansible-mw-tube/modules/createdirectory.py "{{arg1}}"
</code></pre>
<p><strong>createdirectory.py</strong> -- Script configured in YAML playbook</p>
<pre><code>#!/bin/python
import sys
import os
from hello import HelloWorld
class CreateDir:
def create(self, dirName,HelloWorldContext):
output=HelloWorld.createFolder(HelloWorldContext,dirName)
print output
return output
def main(dirName, HelloWorldContext):
c = CreateDir()
c.create(dirName, HelloWorldContext)
if __name__ == "__main__":
HelloWorldContext = HelloWorld()
main(sys.argv[1],HelloWorldContext)
HelloWorldContext = HelloWorld()
</code></pre>
<p><strong>hello.py</strong> -- util script which is imported in the main script written above</p>
<pre><code>#!/bin/python
import os
import sys
class HelloWorld:
def createFolder(self, dirName):
print dirName
if not os.path.exists(dirName):
os.makedirs(dirName)
print dirName
if os.path.exists(dirName):
return "sucess"
else:
return "failure"
</code></pre>
<p><strong>Ansible executable command</strong></p>
<pre><code>ansible-playbook -v -i /ansible/ems/ansible-mw-tube/inventory/helloworld_host /ansible/ems/ansible-mw-tube/playbooks/helloWorld.yml -e "arg1=/opt/logs/helloworld"
</code></pre>
<p><strong>Ansible version</strong></p>
<pre><code>ansible --version
[WARNING]: log file at /opt/ansible/ansible.log is not writeable and we cannot create it, aborting
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
</code></pre>
|
<p>The <code>script</code> module copies the script to the remote server and executes it there using the <code>shell</code> command. It can't find the util script, since it doesn't transfer that file - it doesn't know that it needs to do it.</p>
<p>You have several options, such as use <code>copy</code> to move both files to the server and use <code>shell</code> to execute them. But since what you seem to be doing is creating a directory, the <code>file</code> module can do that for you with no scripts necessary.</p>
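<p>For the directory-creation case, the whole playbook collapses to something like the sketch below (the <code>arg1</code> variable name is kept from the question; the rest is a minimal example, not the exact final playbook):</p>

```yaml
- hosts: all
  tasks:
    - name: Create a directory
      file:
        path: "{{ arg1 }}"
        state: directory
```

<p>The <code>file</code> module is idempotent, so repeated runs leave an existing directory alone.</p>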
|
python-2.7|module|ansible|ansible-2.x
| 2 |
1,901,830 | 28,644,143 |
what is wrong here : TypeError: string indices must be integers, not str
|
<pre><code>if e.message[0]['code'] == 32: ##Account suspended : [{u'message': u'Could not authenticate you', u'code': 32}]': ##Account suspended : [{u'message': u'Could not authenticate you', u'code': 32}]
</code></pre>
<p>TypeError: string indices must be integers, not str</p>
<p>I think e is <code>[{u'message': u'Could not authenticate you', u'code': 32}]</code></p>
<p>What is the problem?</p>
|
<p>You are clearly accessing it wrong: <code>message</code> and <code>code</code> sit alongside each other in the dict, so <code>code</code> does not belong to <code>message</code>. Your error message is also inconsistent with what you think <code>e</code> is. So try these options: <code>e.code</code>, <code>e["code"]</code>, <code>e[0].code</code>, <code>e[0]["code"]</code></p>
|
python|tweepy
| 1 |
1,901,831 | 41,356,784 |
How to use type hints in python 3.6?
|
<p>I noticed Python 3.5 and Python 3.6 added a lot of features about static type checking, so I tried with the following code (in python 3.6, stable version).</p>
<pre><code>from typing import List
a: List[str] = []
a.append('a')
a.append(1)
print(a)
</code></pre>
<p>What surprised me was that, Python didn't give me an error or warning, although <code>1</code> was appended to a <code>list</code> which should only contain strings. <code>Pycharm</code> detected the type error and gave me a warning about it, but it was not obvious and it was not shown in the output console, I was afraid sometimes I might miss it. I would like the following effects:</p>
<ol>
<li>If it's obvious that I used the wrong type just as shown above, throw out a warning or error.</li>
<li>If the compiler couldn't reliably check if the type I used was right or wrong, ignore it.</li>
</ol>
<p>Is that possible? Maybe <code>mypy</code> could do it, but I'd prefer to use Python-3.6-style type checking (like <code>a: List[str]</code>) instead of the comment-style (like <code># type List[str]</code>) used in <code>mypy</code>. And I'm curious if there's a switch in native python 3.6 to achieve the two points I said above.</p>
|
<p>Type hints are entirely meant to be ignored by the Python runtime, and are checked only by 3rd party tools like mypy and Pycharm's integrated checker. There are also a variety of lesser known 3rd party tools that do typechecking at either compile time or runtime using type annotations, but most people use mypy or Pycharm's integrated checker AFAIK.</p>
<p>In fact, I actually doubt that typechecking will ever be integrated into Python proper in the foreseable future -- see the 'non-goals' section of <a href="https://www.python.org/dev/peps/pep-0484/#non-goals">PEP 484</a> (which introduced type annotations) and <a href="https://www.python.org/dev/peps/pep-0526/#non-goals">PEP 526</a> (which introduced variable annotations), as well as Guido's comments <a href="https://github.com/python/typing/issues/258#issuecomment-242115218">here</a>.</p>
<p>I'd personally be happy with type checking being more strongly integrated with Python, but it doesn't seem the Python community at large is ready or willing for such a change.</p>
<p>The latest version of mypy should understand both the Python 3.6 variable annotation syntax and the comment-style syntax. In fact, variable annotations were basically Guido's idea in the first place (Guido is currently a part of the mypy team) -- basically, support for type annotations in mypy and in Python was developed pretty much simultaneously.</p>
|
python|python-3.x|type-hinting|mypy|python-typing
| 42 |
1,901,832 | 25,782,796 |
How to get print to format its output in python?
|
<p><code>print(repr('Hello\nHello'))</code> will print <code>b'Hello\nHello'</code> and I would like it to print </p>
<pre><code>Hello
Hello
</code></pre>
<p>instead. The reason for this is that some functions such as <code>subprocess.check_output</code> send a repr output.</p>
<pre><code>params = r'"C:\cygwin64\bin\bash.exe" --login -c ' + r"""'ls "C:\Users"'"""
print(subprocess.check_output(params, shell=True))
</code></pre>
|
<p>Then don't use <code>repr</code>; just use</p>
<pre><code>print("hello\nhello")
</code></pre>
<p>demo</p>
<p>tracing.py</p>
<pre><code>print("hello\nhello")
</code></pre>
<p>otherscript.py</p>
<pre><code>import subprocess
print subprocess.check_output('python tracing.py')
</code></pre>
<p>output</p>
<pre><code>hello
hello
</code></pre>
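<p>Note that the <code>b'Hello\nHello'</code> in the question is the repr of a <em>bytes</em> object (which is what <code>subprocess.check_output</code> returns on Python 3); decoding it restores the formatting:</p>

```python
out = b'Hello\nHello'   # what check_output hands back on Python 3

print(out)              # b'Hello\nHello'  (the bytes repr)
print(out.decode())     # prints:
                        # Hello
                        # Hello
```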
|
python|printing
| -1 |
1,901,833 | 25,538,812 |
Python TypeError on Load Object using Dill
|
<p>Trying to render a large and (possibly very) unpicklable object to a file for later use.</p>
<p>No complaints on the <code>dill.dump(file)</code> side:</p>
<pre><code>In [1]: import echonest.remix.audio as audio
In [2]: import dill
In [3]: audiofile = audio.LocalAudioFile("/Users/path/Track01.mp3")
en-ffmpeg -i "/Users/path/audio/Track01.mp3" -y -ac 2 -ar 44100 "/var/folders/X2/X2KGhecyG0aQhzRDohJqtU+++TI/-Tmp-/tmpWbonbH.wav"
Computed MD5 of file is b3820c166a014b7fb8abe15f42bbf26e
Probing for existing analysis
In [4]: with open('audio_object_dill.pkl', 'wb') as f:
...: dill.dump(audiofile, f)
...:
In [5]:
</code></pre>
<p>But trying to load the <code>.pkl</code> file:</p>
<pre><code>In [1]: import dill
In [2]: with open('audio_object_dill.pkl', 'rb') as f:
...: audio_object = dill.load(f)
...:
</code></pre>
<p>Returns following error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-203b696a7d73> in <module>()
1 with open('audio_object_dill.pkl', 'rb') as f:
----> 2 audio_object = dill.load(f)
3
/Users/mikekilmer/Envs/GLITCH/lib/python2.7/site-packages/dill-0.2.2.dev-py2.7.egg/dill/dill.pyc in load(file)
185 pik = Unpickler(file)
186 pik._main_module = _main_module
--> 187 obj = pik.load()
188 if type(obj).__module__ == _main_module.__name__: # point obj class to main
189 try: obj.__class__ == getattr(pik._main_module, type(obj).__name__)
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.pyc in load(self)
856 while 1:
857 key = read(1)
--> 858 dispatch[key](self)
859 except _Stop, stopinst:
860 return stopinst.value
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.pyc in load_newobj(self)
1081 args = self.stack.pop()
1082 cls = self.stack[-1]
-> 1083 obj = cls.__new__(cls, *args)
1084 self.stack[-1] = obj
1085 dispatch[NEWOBJ] = load_newobj
TypeError: __new__() takes at least 2 arguments (1 given)
</code></pre>
<p>The AudioObject is much more complex (and large) than the <code>class object</code> the above calls are made on (from <a href="https://stackoverflow.com/questions/4529815/how-to-save-an-object-in-python/25119089">SO answer</a>), and I'm unclear as to whether I need to send a second argument via <code>dill</code>, and if so, what that argument would be or how to tell if any approach to pickling is viable for this specific object.</p>
<p>Examining the object itself a bit:</p>
<pre><code>In [4]: for k, v in vars(audiofile).items():
...: print k, v
...:
</code></pre>
<p>returns:</p>
<pre><code>is_local False
defer False
numChannels 2
verbose True
endindex 13627008
analysis <echonest.remix.audio.AudioAnalysis object at 0x103c61bd0>
filename /Users/mikekilmer/Envs/GLITCH/glitcher/audio/Track01.mp3
convertedfile /var/folders/X2/X2KGhecyG0aQhzRDohJqtU+++TI/-Tmp-/tmp9ADD_Z.wav
sampleRate 44100
data [[0 0]
[0 0]
[0 0]
...,
[0 0]
[0 0]
[0 0]]
</code></pre>
<p>And <code>audiofile.analysis</code> seems to contain an attribute called <code>audiofile.analysis.source</code> which contains (or apparently points back to) <code>audiofile.analysis.source.analysis</code></p>
|
<p>In this case, the answer lay within the module itself.</p>
<p>The <code>LocalAudioFile</code> class provides (and each of its instances can therefore utilize) its own <code>save</code> method, called via <code>LocalAudioFile.save</code> or, more likely, <code>the_audio_object_instance.save</code>.</p>
<p>In the case of an <code>.mp3</code> file, the <code>LocalAudioFile</code> instance consists of a pointer to a temporary <code>.wav</code> file which is the decompressed version of the <code>.mp3</code>, along with a whole bunch of analysis data which is returned from the initial audiofile, after it's been interfaced with the (internet-based) <code>Echonest API</code>.</p>
<p><a href="http://echonest.github.io/remix/apidocs/echonest.remix.audio-pysrc.html#LocalAudioFile.save" rel="nofollow">LocalAudioFile.save</a> calls <code>shutil.copyfile(path_to_wave, wav_path)</code> to save the <code>.wav</code> file with same name and path as original file linked to audio object and returns an error if the file already exists. It calls <code>pickle.dump(self, f)</code> to save the analysis data to a file also in the directory the initial audio object file was called from.</p>
<p>The <code>LocalAudioFile</code> object can be reintroduced simply via <code>pickle.load()</code>.</p>
<p>Here's an <code>iPython</code> session in which I used the <code>dill</code>, which is a very useful wrapper or interface that offers most of the standard <code>pickle</code> methods plus a bunch more:</p>
<pre><code>In [1]: import echonest.remix.audio as audio
In [2]: import dill
# create the audio_file object
In [3]: audiofile = audio.LocalAudioFile("/Users/mikekilmer/Envs/GLITCH/glitcher/audio/Track01.mp3")
en-ffmpeg -i "/Users/path/audio/Track01.mp3" -y -ac 2 -ar 44100 "/var/folders/X2/X2KGhecyG0aQhzRDohJqtU+++TI/-Tmp-/tmp_3Ei0_.wav"
Computed MD5 of file is b3820c166a014b7fb8abe15f42bbf26e
Probing for existing analysis
#call the LocalAudioFile save method
In [4]: audiofile.save()
Saving analysis to local file /Users/path/audio/Track01.mp3.analysis.en
#confirm the object is valid by calling it's duration method
In [5]: audiofile.duration
Out[5]: 308.96
#delete the object - there's probably a "correct" way to do this
In [6]: audiofile = 0
#confirm it's no longer an audio_object
In [7]: audiofile.duration
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-04baaeda53a4> in <module>()
----> 1 audiofile2.duration
AttributeError: 'int' object has no attribute 'duration'
#open the pickled version (using dill)
In [8]: with open('/Users/path/audio/Track01.mp3.analysis.en') as f:
....: audiofile = dill.load(f)
....:
#confirm it's a valid LocalAudioFile object
In [8]: audiofile.duration
Out[8]: 308.96
</code></pre>
<p><a href="http://echonest.com/" rel="nofollow">Echonest</a> is a very robust API and the remix package provides a ton of functionality. There's a small list of relevant links assembled <a href="http://www.mzoo.org/getting-the-python-echonest-remix-package-running/" rel="nofollow">here</a>.</p>
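<p>Stripped of the Echonest specifics, the save/reload round trip is ordinary pickling. A minimal sketch, with a plain dict standing in for the analysis data:</p>

```python
import pickle

# Illustrative stand-in for the analysis data; any picklable object works
analysis = {"duration": 308.96, "sample_rate": 44100}

# Dump the object to disk...
with open("Track01.mp3.analysis.en", "wb") as f:
    pickle.dump(analysis, f)

# ...and load it back later
with open("Track01.mp3.analysis.en", "rb") as f:
    restored = pickle.load(f)

print(restored["duration"])  # → 308.96
```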
|
python|pickle|dill
| 1 |
1,901,834 | 61,950,771 |
extract csv data into xml format
|
<p>I have a csv file with data like</p>
<pre><code>abc1,E,WEL,POI,<DeData L1="Websales" </DeData>
</code></pre>
<p>I want to extract individual columns and save into xml file as </p>
<pre><code><Data>
<element1>abc1</element1>
<element2>E</element2>
<element3>WEL</element3>
<element4>abc1</element4>
<DeData L1="Websales" </DeData>
</Data>
</code></pre>
<p>and each row from <code>csv</code> file should be saved as <code>separate xml</code> file.</p>
<p>Any pointers would be very helpful. </p>
|
<p>Try this code.</p>
<pre><code>with open('a.csv', 'r') as csv_file:
    # one output file per csv row, numbered 0.xml, 1.xml, ...
    for count, line in enumerate(csv_file):
        values = line.rstrip().split(',')
        with open(f'{count}.xml', 'w') as xml_f:
            string = f'''<Data>
<element1>{values[0]}</element1>
<element2>{values[1]}</element2>
<element3>{values[2]}</element3>
<element4>{values[3]}</element4>
{values[4]}
</Data>'''
            print(string)
            xml_f.write(string)
</code></pre>
<p>Considering csv file contains data in this format</p>
<pre><code>abc1,E,WEL,POI,<DeData L1="Websales" </DeData>
abc1,E,WEL,POI,<DeData L1="Websales" </DeData>
abc1,E,WEL,POI,<DeData L1="Websales" </DeData>
abc1,E,WEL,POI,<DeData L1="Websales" </DeData>
</code></pre>
|
python|bash
| 1 |
1,901,835 | 23,924,936 |
Pygame deleting an object in a list
|
<p>I just started programming in OOP style last week when deciding to make a small game. But now I seem to be stuck. I'll explain what I have so far:</p>
<p>When the player hits a button, a bullet object is created next to the player and the bullet object is added to the bullets[] list. The bullet then travels horizontally across the screen in the designated direction. If the bullet collides with a player or a wall, it is removed from the bullets[] list. So far, so good.</p>
<p>Now I just cant seem to figure out how to remove the bullet from the bullets[] list when it leaves the screen (screen is defined between 0 and xmax). Also, after I remove the bullet from the list, should I also remove the object itsself, or is this done automatically?</p>
<p>Code so far:</p>
<pre><code> class BULLET(object):
#Constructor for the bullet, bullets are stored into array 'bullets'
# The direction is stored to keep track of which player fired the bullet
def __init__(self,location,direction,color):
self.rect = pg.Rect(location[0],location[1],xmax/160,xmax/160)
self.bullet_type="normal"
self.direction=direction
self.color=color
bullets.append(self)
#Moves the bullet horizontally across the screen, in the specified direction
# The move function also checks for collision with any walls or players
# The move function removes the bullet object from the list and destroys it
# when it leaves the left or right side of the screen
def move(self,bullet_speed):
self.rect.x += bullet_speed
for wall in walls:
if self.rect.colliderect(wall.rect):
index=wall.rect.collidelist(bullets)
del bullets[index]
#Do I need to delete the object too? or just the list item?
for player in players:
if self.rect.colliderect(player.rect):
index=player.rect.collidelist(bullets)
if player.immune_timer <= 0:
del bullets[index]
player.immunity(500)
player.life -= 1
if self.rect.centerx > xmax or self.rect.centerx <0:
#This is where I would like this instance of the bullet object to be deleted
# and to have the object removed from the bullets[] list
</code></pre>
|
<p>What I suggest you do is in your main loop:</p>
<pre><code>bullets = [bullet for bullet in bullets if 0 < bullet.rect.centerx < xmax]
</code></pre>
<p>This will only keep the bullets that are still on screen. You don't need to delete the objects themselves: once a bullet is no longer referenced by the list (or by anything else), Python's garbage collector reclaims it automatically.</p>
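<p>To see the filtering in isolation (the classes here are tiny stand-ins for the pygame objects, purely for illustration):</p>

```python
class Rect:
    """Tiny stand-in for pygame.Rect, just enough to demonstrate the filter."""
    def __init__(self, centerx):
        self.centerx = centerx

class Bullet:
    def __init__(self, centerx):
        self.rect = Rect(centerx)

xmax = 800
bullets = [Bullet(x) for x in (-5, 10, 400, 805)]

# Keep only the bullets whose center is still on screen
bullets = [bullet for bullet in bullets if 0 < bullet.rect.centerx < xmax]
print(len(bullets))  # → 2
```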
|
python|list|oop|object|pygame
| 1 |
1,901,836 | 24,119,301 |
Convert list with each element an object of class into json
|
<p>I have a python list where each element in the list is the object of a class Summershum.Model.File:</p>
<pre><code>message = [
<File(tar_file:appstream-glib-0.1.5.tar.xz, filename:/appstream-glib-0.1.5/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:totem-3.12.1.tar.xz, filename:/totem-3.12.1/build-aux/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:gvfs-1.20.2.tar.xz, filename:/gvfs-1.20.2/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:gnome-software-3.12.2.tar.xz, filename:/gnome-software-3.12.2/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:gnome-packagekit-3.12.2.tar.xz, filename:/gnome-packagekit-3.12.2/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:gnome-color-manager-3.12.2.tar.xz, filename:/gnome-color-manager-3.12.2/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:gnome-chess-3.12.2.tar.xz, filename:/gnome-chess-3.12.2/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:gnome-power-manager-3.12.2.tar.xz, filename:/gnome-power-manager-3.12.2/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:evolution-mapi-3.12.2.tar.xz, filename:/evolution-mapi-3.12.2/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:cockpit-0.7.tar.bz2, filename:/cockpit-0.7/tools/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:xf86-video-freedreno-1.1.0.tar.bz2, filename:/xf86-video-freedreno-1.1.0/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:elfutils-0.159.tar.bz2, filename:/elfutils-0.159/config/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:ibus-table-1.5.0.20140519.tar.gz, filename:/ibus-table-1.5.0.20140519/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>,
<File(tar_file:fence-agents-4.0.9.tar.xz, filename:/fence-agents-4.0.9/config.guess, sha256:4a86808b907403ad6801c0661a4524dfa07c0b898a2cef6e7fa0cf82a09b9c53)>
]
</code></pre>
<p>I am trying to convert this list into json and want to pass it to view for displaying (filename, sha1sum). </p>
<p>I was trying to convert the class object using the <code>__json__</code> method:
<code>message = [msg.__json__() for msg in message]</code>
But</p>
<pre><code>print message
</code></pre>
<p>gives me an empty list. What is the correct way of doing it?</p>
|
<p>Based on the output of <code>dir(message[0])</code> that you provided, the error is simple - there is no <code>__json__()</code> method for a <code>summershum.model.File</code> object. So I'm guessing that the relevant section of your code looks like this:</p>
<pre><code>try:
...
message = [msg.__json__() for msg in message]
print message
...
except Exception:
# Some code, or just "pass"
</code></pre>
<p>Because there is no <code>__json__()</code> method, you should be getting an <code>AttributeError</code>. However, it sounds like you were not seeing an exception, which is why I assume there's a <code>try...except</code> block surrounding the code.</p>
<p>With no built-in method to convert the file contents to json, you'll have to use the built-in json module. For example:</p>
<pre><code>>>> import json
>>> jsonString = '{"one":"two"}'
>>> jsonObj = json.loads(jsonString)
>>> jsonObj
{u'one': u'two'}
</code></pre>
<p>You'll need to call <code>json.loads</code> and give it the contents of the file. I'm not sure exactly what you'll need to do that, but based on the available methods I would suggest seeing what the <code>get</code> and <code>tar_file</code> methods give you.</p>
<p>It's also possible that you're just trying to call <code>__json__()</code> too soon - the objects you get from <code>get</code> or <code>tar_file</code> might have a <code>__json__()</code> method. </p>
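<p>Alternatively, assuming the <code>File</code> objects expose <code>filename</code> and <code>sha256</code> attributes (their repr suggests so, but verify with <code>dir()</code>), you can skip <code>__json__()</code> entirely: build plain dicts and serialize those with the standard <code>json</code> module. A sketch with a stand-in class:</p>

```python
import json

class File:
    """Illustrative stand-in for summershum.model.File."""
    def __init__(self, filename, sha256):
        self.filename = filename
        self.sha256 = sha256

message = [File("/appstream-glib-0.1.5/config.guess", "4a86808b907403ad")]

# Pull out only the attributes the view needs, then serialize the plain dicts
payload = [{"filename": f.filename, "sha256": f.sha256} for f in message]
json_string = json.dumps(payload)
print(json_string)
```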
|
python|json|flask
| 1 |
1,901,837 | 20,625,361 |
index out of range in DES python
|
<p>I am really sorry for the long program I am posting here; I am just trying to write my own DES encryption code in Python with the little knowledge I have. The following code returned an error saying: "m = (B[j][0] << 1) + B[j][5]
IndexError: bitarray index out of range". How can I solve that?</p>
<pre><code>from bitarray import bitarray
iptable=[57, 49, 41, 33, 25, 17, 9, 1,
59, 51, 43, 35, 27, 19, 11, 3,
61, 53, 45, 37, 29, 21, 13, 5,
63, 55, 47, 39, 31, 23, 15, 7,
56, 48, 40, 32, 24, 16, 8, 0,
58, 50, 42, 34, 26, 18, 10, 2,
60, 52, 44, 36, 28, 20, 12, 4,
62, 54, 46, 38, 30, 22, 14, 6
]
pc1=[56, 48, 40, 32, 24, 16, 8,
0, 57, 49, 41, 33, 25, 17,
9, 1, 58, 50, 42, 34, 26,
18, 10, 2, 59, 51, 43, 35,
62, 54, 46, 38, 30, 22, 14,
6, 61, 53, 45, 37, 29, 21,
13, 5, 60, 52, 44, 36, 28,
20, 12, 4, 27, 19, 11, 3
]
expTable=[31, 0, 1, 2, 3, 4,
3, 4, 5, 6, 7, 8,
7, 8, 9, 10, 11, 12,
11, 12, 13, 14, 15, 16,
15, 16, 17, 18, 19, 20,
19, 20, 21, 22, 23, 24,
23, 24, 25, 26, 27, 28,
27, 28, 29, 30, 31, 0]
pc2 = [13, 16, 10, 23, 0, 4,
2, 27, 14, 5, 20, 9,
22, 18, 11, 3, 25, 7,
15, 6, 26, 19, 12, 1,
40, 51, 30, 36, 46, 54,
29, 39, 50, 44, 32, 47,
43, 48, 38, 55, 33, 52,
45, 41, 49, 35, 28, 31]
# The (in)famous S-boxes
__sbox = [
# S1
[14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7,
0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8,
4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0,
15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13],
# S2
[15, 1, 8, 14, 6, 11, 3, 4, 9, 7, 2, 13, 12, 0, 5, 10,
3, 13, 4, 7, 15, 2, 8, 14, 12, 0, 1, 10, 6, 9, 11, 5,
0, 14, 7, 11, 10, 4, 13, 1, 5, 8, 12, 6, 9, 3, 2, 15,
13, 8, 10, 1, 3, 15, 4, 2, 11, 6, 7, 12, 0, 5, 14, 9],
# S3
[10, 0, 9, 14, 6, 3, 15, 5, 1, 13, 12, 7, 11, 4, 2, 8,
13, 7, 0, 9, 3, 4, 6, 10, 2, 8, 5, 14, 12, 11, 15, 1,
13, 6, 4, 9, 8, 15, 3, 0, 11, 1, 2, 12, 5, 10, 14, 7,
1, 10, 13, 0, 6, 9, 8, 7, 4, 15, 14, 3, 11, 5, 2, 12],
# S4
[7, 13, 14, 3, 0, 6, 9, 10, 1, 2, 8, 5, 11, 12, 4, 15,
13, 8, 11, 5, 6, 15, 0, 3, 4, 7, 2, 12, 1, 10, 14, 9,
10, 6, 9, 0, 12, 11, 7, 13, 15, 1, 3, 14, 5, 2, 8, 4,
3, 15, 0, 6, 10, 1, 13, 8, 9, 4, 5, 11, 12, 7, 2, 14],
# S5
[2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9,
14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6,
4, 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14,
11, 8, 12, 7, 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3],
# S6
[12, 1, 10, 15, 9, 2, 6, 8, 0, 13, 3, 4, 14, 7, 5, 11,
10, 15, 4, 2, 7, 12, 9, 5, 6, 1, 13, 14, 0, 11, 3, 8,
9, 14, 15, 5, 2, 8, 12, 3, 7, 0, 4, 10, 1, 13, 11, 6,
4, 3, 2, 12, 9, 5, 15, 10, 11, 14, 1, 7, 6, 0, 8, 13],
# S7
[4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1,
13, 0, 11, 7, 4, 9, 1, 10, 14, 3, 5, 12, 2, 15, 8, 6,
1, 4, 11, 13, 12, 3, 7, 14, 10, 15, 6, 8, 0, 5, 9, 2,
6, 11, 13, 8, 1, 4, 10, 7, 9, 5, 0, 15, 14, 2, 3, 12],
# S8
[13, 2, 8, 4, 6, 15, 11, 1, 10, 9, 3, 14, 5, 0, 12, 7,
1, 15, 13, 8, 10, 3, 7, 4, 12, 5, 6, 11, 0, 14, 9, 2,
7, 11, 4, 1, 9, 12, 14, 2, 0, 6, 10, 13, 15, 3, 5, 8,
2, 1, 14, 7, 4, 10, 8, 13, 15, 12, 9, 0, 3, 5, 6, 11],
]
msg= bitarray(endian='little')
msg.frombytes(b'ABCDEFGH')
perm = bitarray(endian='little')
key= bitarray(endian='little')
key.frombytes(b'FFQQSSMM')
keyPc1 = bitarray(endian='little')
keyPc2 = bitarray(endian='little')
exp = bitarray(endian='little')
for z in pc1:
keyPc1.append(key[z])
c0 = keyPc1[0:28]
d0 = keyPc1[28:]
key0 = c0 + d0
#permutation of key
for k in pc2:
keyPc2.append(key0[k])
#permutation of message
for x in iptable:
perm.append(msg[x])
l1 = perm[0:32]
r1 = perm[32:]
#Expansion of R
for y in expTable:
exp.append(r1[y])
#XORing R & key
xor_rk = keyPc2 ^ exp
#Working with S-boxes!
B = [xor_rk[0:6], xor_rk[6:14], xor_rk[14:20], xor_rk[20:26], xor_rk[26:32], xor_rk[32:38], xor_rk[38:42], xor_rk[42:47]]
j = 0
Bn = [0] * 32
pos = 0
while j < 8:
# Work out the offsets
m = (B[j][0] << 1) + B[j][5]
n = (B[j][1] << 3) + (B[j][2] << 2) + (B[j][3] << 1) + B[j][4]
# Find the permutation value
v = __sbox[j][(m << 4) + n]
# Turn value into bits, add it to result: Bn
Bn[pos] = (v & 8) >> 3
Bn[pos + 1] = (v & 4) >> 2
Bn[pos + 2] = (v & 2) >> 1
Bn[pos + 3] = v & 1
pos += 4
j += 1
f = Bn[0] + Bn[1] + Bn[2] + Bn[3] + Bn[4] +Bn[5] + Bn[6] +Bn[7]
xor_lf = l ^ f
</code></pre>
|
<p>Not all parts of your <code>B</code> list are the same length. For example, this part:</p>
<pre><code>xor_rk[38:42]
</code></pre>
<p>has a length of 4, so you can't get the 5th element of that. Is it supposed to have a length of 4? Or did you mean to count by sixes and screw up?</p>
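<p>If the intent was the eight 6-bit groups that DES feeds into its S-boxes, slicing in uniform steps of six avoids hand-typed offsets entirely; a sketch with a plain bit list standing in for the <code>bitarray</code>:</p>

```python
xor_rk = [0, 1] * 24  # stand-in for the 48-bit XOR result

# Eight groups of six bits each: [0:6], [6:12], ..., [42:48]
B = [xor_rk[i:i + 6] for i in range(0, 48, 6)]
print([len(group) for group in B])  # → [6, 6, 6, 6, 6, 6, 6, 6]
```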
|
python|des
| 1 |
1,901,838 | 20,507,701 |
Explain this syntax error
|
<p>Can anyone tell me why this has a syntax error? I've run this exact code before and it worked perfectly. The <code>defenderLV = 20</code> line is where Python tells me the syntax error is. Thanks, everyone!</p>
<pre><code>import random
count = 0
while count < 10:
attackerLV = 20
attackerST = 20
attackerSK = 20
baseAtkPwr = 20
attackPWR = ((random.randint(85,100) * (baseAtkPwr + attackerLV + attackerST + attackerSK)) // 100
defenderLV = 20
defenderCON = 20
defenderSKa = 20
baseDefPwr = 20
defensePWR = (((random.randint(85,100)) * (baseDefPwr + defenderLV + defenderCON + defenderSKa)) // 4) // 100
damage = attackPWR - defensePWR
if damage <= 1:
damage = 1
print(str(attackPWR))
print(str(defensePWR))
print(str(damage))
print()
count = count + 1
</code></pre>
|
<p>You missed a closing parenthesis here; the line opens three parentheses but closes only two:</p>
<pre><code>attackPWR = ((random.randint(85,100) * (baseAtkPwr + attackerLV + attackerST + attackerSK)) // 100
</code></pre>
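<p>For reference, here is one balanced version of that line, keeping the same arithmetic (the variable values are placeholders from the question):</p>

```python
import random

attackerLV = attackerST = attackerSK = baseAtkPwr = 20

# Balanced: drop the unmatched leading "(" from the original line
attackPWR = (random.randint(85, 100) * (baseAtkPwr + attackerLV + attackerST + attackerSK)) // 100
print(attackPWR)  # some value between 68 and 80
```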
|
python|syntax
| 5 |
1,901,839 | 15,033,298 |
Why am I getting an "int is not callable" TypeError with this (python) code?
|
<p>I'm doing Project Euler's problem 30, which is to find the sum of all the numbers that can be written as the sum of fifth powers of their digits. (<a href="http://projecteuler.net/problem=30" rel="nofollow">http://projecteuler.net/problem=30</a> for more information.)</p>
<p>For some reason, I'm getting an "int is not callable" TypeError when I try to run my attempted solution:</p>
<pre><code>def problem30():
sum = 0
for n in xrange(20000):
if sum([((int(x))**5) for x in list(str(n))]) == n:
sum += n
sum
</code></pre>
<p>Why am I getting such an error, and how might I fix it? Thanks in advance.</p>
|
<p>You named your variable <code>sum</code>, and are trying to use it as the built-in function at the same time.</p>
<p>Rename the <code>sum</code> identifier that is meant to be the total sum:</p>
<pre><code>def problem30():
total = 0
for n in xrange(20000):
if sum(int(x) ** 5 for x in str(n)) == n:
total += n
return total
</code></pre>
<p>I've simplified your expression a little too; most of the parenthesis and lists were surplus.</p>
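<p>As a quick sanity check of the digit-power condition, 4150 is a known number that equals the sum of the fifth powers of its digits:</p>

```python
# 4150 = 4**5 + 1**5 + 5**5 + 0**5 = 1024 + 1 + 3125 + 0
n = 4150
digit_power_sum = sum(int(x) ** 5 for x in str(n))
print(digit_power_sum)  # → 4150
```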
|
python
| 8 |
1,901,840 | 15,246,523 |
Handling exception in python tkinter
|
<p>I have written an application in Python Tkinter. I recently noticed that for one of the operation, it sometimes closes (without giving any error) if that operation failed. I have written a small program to illustrate the problem :-</p>
<pre><code>import os
from Tkinter import *
def copydir():
src = "D:\\a\\x\\y"
dest = "D:\\a\\x\\z"
os.rename(src,dest)
master = Tk()
def callback():
global master
master.after(1, callback)
copydir()
print "click!"
b = Button(master, text="OK", command=copydir)
b.pack()
master.after(100, callback)
mainloop()
</code></pre>
<p>To reproduce the problem, open the folder which it will rename in “ms command prompt” such that renaming it will throw exception from Tkinter code.</p>
<p>My original code is using threading and is performing other tasks as well, so I have tried to make the operations in this test script as similar as possible. </p>
<p>Now, if I run this code by double clicking it, the program simply closes without throwing any error. But if I run this script from the console, exception messages are dumped on the console and at least I get to know something is wrong.</p>
<p>I can fix this code by using try/catch in the code where it tried to rename but I want to inform user about this failure as well. So I just want to know what coding approaches should be followed while writing Tkinter App's and I want to know:-</p>
<p>1) Can I make my script dump a stack trace to a file whenever the user runs it by double clicking on it? That way I would at least know something is wrong and could fix it.</p>
<p>2) Can I prevent the tkinter app from exiting on such an error and show the exception in some Tk dialog?</p>
<p>Thanks for help!!</p>
|
<p>I see you have a non-object oriented example, so I'll show two variants to solve the problem of exception-catching.</p>
<p>The key is in the in the <code>tkinter\__init__.py</code> file. One can see that there is a documented method <code>report_callback_exception</code> of <code>Tk</code> class. Here is its description:</p>
<blockquote>
<p><code>report_callback_exception()</code></p>
<p>Report callback exception on sys.stderr.</p>
<p>Applications may want to override this internal function, and should when sys.stderr is None.</p>
</blockquote>
<p>So as we see it it is supposed to override this method, lets do it!</p>
<p><strong>Non-object oriented solution</strong></p>
<pre><code>import tkinter as tk
from tkinter.messagebox import showerror
if __name__ == '__main__':
def bad():
raise Exception("I'm Bad!")
    # any function name is accepted here, but the signature must match
def report_callback_exception(self, exc, val, tb):
showerror("Error", message=str(val))
tk.Tk.report_callback_exception = report_callback_exception
# now method is overridden
app = tk.Tk()
tk.Button(master=app, text="bad", command=bad).pack()
app.mainloop()
</code></pre>
<p><strong>Object oriented solution</strong></p>
<pre><code>import tkinter as tk
from tkinter.messagebox import showerror
class Bad(tk.Tk):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
        # or tk.Tk.__init__(self, *args, **kwargs)
def bad():
raise Exception("I'm Bad!")
tk.Button(self, text="bad", command=bad).pack()
def report_callback_exception(self, exc, val, tb):
showerror("Error", message=str(val))
if __name__ == '__main__':
app = Bad()
app.mainloop()
</code></pre>
<p><a href="https://i.stack.imgur.com/9cgaL.png" rel="noreferrer">The result</a></p>
<p>My environment:</p>
<pre><code>Python 3.5.1 |Anaconda 2.4.1 (64-bit)| (default, Dec 7 2015, 15:00:12) [MSC
v.1900 64 bit (AMD64)] on win32
tkinter.TkVersion
8.6
tkinter.TclVersion
8.6
</code></pre>
|
python|tkinter
| 15 |
1,901,841 | 29,504,981 |
How to run a 30 frame image sequence completely within Python/Tkinter?
|
<p>I have 30 .gif images in a folder. I'd like to create an image sequence that plays these images as a movie (24 or 30 fps) when prompted. I also only want to use Python's internal libraries (no outside downloads). Any help would be greatly appreciated.</p>
|
<p>I'm going to start this Answer with a warning: this feature is not for beginners, and it is almost certainly unnecessary for the project you're working on. I would recommend that you avoid this feature creep and simply make your program fulfill its purpose (enter the right code to open a safe). No one will mind in the slightest that the safe door doesn't animate to the open position. Of course, feel free to try it if you want to, but I <strong><em>very highly recommend</em></strong> finishing the basic functionality first. Then, if you want to add an animation as a sort of "part two" project, you'll at least have something working no matter how far you get with the animation (even professional programmers do this, so that Project X can meet the boss's new deadline even if Unnecessary Feature Y wasn't done yet).</p>
<p>I actually just implemented an animation function recently. I really wish there were a better system in <code>tkinter</code> or <code>pillow</code> or what have you for playing gifs, but this is the best I could come up with so far.</p>
<p>I used four methods (in my <code>tkinter</code> app):</p>
<ul>
<li>Choose the folder with the image sequence</li>
<li>Choose framerate</li>
<li>Start the animation</li>
<li>Keep the animation running</li>
</ul>
<p>The images should be of the form <code>'xyz_1.gif'</code>, <code>'xyz_2.gif'</code>, etc., with some characters (or nothing) followed by an underscore <code>_</code> followed by a dot <code>.</code> and an extension <code>gif</code>. The numbers must be between the last underscore and the last dot in the file name. My program uses <code>PIL</code>'s <code>Pillow</code> fork for its format compatibility and image manipulation features (I mostly just use <code>resize()</code> in this app), so you can ignore the extra steps that convert a <code>pillow</code> image into a <code>tkinter</code>-compatible image.</p>
<p>I've included all the methods relating to the animation for completeness, but you don't need to worry about most of it. The portions of interest are the <code>self.animating</code> flag in <code>start_animation()</code>, and the entire <code>animate()</code> method. The <code>animate()</code> method deserves explanation:</p>
<p>The first thing <code>animate()</code> does is reconfigure the background image to the next image in the <code>list</code> containing the image sequence. Next, it checks if animation is on. If it's off, it stops there. If it's on, the function makes a call to the parent/master/root widget's <code>after()</code> method, which basically tells <code>tkinter</code> "I don't want this done now, so just wait this many milliseconds and then do it." So it calls that, which then waits the specified number of milliseconds before calling <code>animate()</code> with the next number if possible or zero if we were at the end of the list (if you only want it to run once rather than looping, have a simple <code>if</code> statement that only calls <code>after()</code> if there's another image in the <code>list</code>). This is almost an example of a recursive call, but we're not calling the function itself from the function, we're just calling something that "puts it in the queue," so to speak. Such a function can proceed indefinitely without hitting Python's recursion limit.</p>
<pre><code>def choose_animation(self):
"""
Pops a choose so the user can select a folder containing their image sequence.
"""
sequence_dir = filedialog.askdirectory() # ask for a directory for the image sequence
if not sequence_dir: # if the user canceled,
return # we're done
files = os.listdir(sequence_dir) # get a list of files in the directory
try:
# make a list of tkinter images, sorted by the portion of the filenames between the last '_' and the last '.'.
self.image_sequence = [ImageTk.PhotoImage(Image.open(os.path.join(sequence_dir, filename)).resize(((self.screen_size),(self.screen_size))))
for filename in sorted(os.listdir(sequence_dir), key=lambda x: int(x.rpartition('_')[2][:-len(x.rpartition('.')[2])-1]))]
self.start_animation() # no error? start the animation
except: # error? announce it
if self.audio:
messagebox.showerror(title='Error', message='Could not load animation.')
else:
self.status_message.config(text='Error: could not load animation.')
def choose_framerate(self):
"""
Pops a chooser for the framerate.
"""
framerate_window = Toplevel()
framerate_window.focus_set()
framerate_window.title("Framerate selection")
try: # try to load an img for the window's icon (top left corner of title bar)
framerate_window.tk.call('wm', 'iconphoto', framerate_window._w, ImageTk.PhotoImage(Image.open("ico.png")))
except: # if it fails
pass # leave the user alone
enter_field = Entry(framerate_window) # an Entry widget
enter_field.grid(row=0, column=0) # grid it
enter_field.focus_set() # and focus on it
def set_to(*args):
try:
# use this framerate if it's a good value
self.framerate = float(enter_field.get()) if 0.01 < float(enter_field.get()) <= 100 else [][0]
framerate_window.destroy() # and close the window
except:
self.framerate = 10 # or just use 10
framerate_window.destroy() # and close the window
ok_button = Button(framerate_window, text='OK', command=set_to) # make a Button
ok_button.grid(row=1, column=0) # grid it
cancel_button = Button(framerate_window, text='Cancel', command=framerate_window.destroy) # cancel button
cancel_button.grid(row=2, column=0) # grid it
framerate_window.bind("<Return>", lambda *x: set_to()) # user can hit Return to accept the framerate
framerate_window.bind("<Escape>", lambda *x: framerate_window.destroy()) # or Escape to cancel
def start_animation(self):
"""
Starts the animation.
"""
if not self.animating: # if animation is off
try:
self.animating = True # turn it on
self.bg_img = self.image_sequence[0] # set self.bg_img to the first frame
self.set_square_color('light', 'clear') # clear the light squares
self.set_square_color('dark', 'clear') # clear the dark squares
self.animate(0) # start the animation at the first frame
except: # if something failed there,
if self.audio: # they probably haven't set an animation. use messagebox,
messagebox.showerror(title='Error', message='Animation not yet set.')
else: # or a silent status_message update to announce the error.
self.status_message.config(text='Error: animation not yet set.')
else: # otherwise
self.animating = False # turn it off
def animate(self, counter):
"""
Animates the images in self.image_sequence.
"""
self.board.itemconfig(self.bg_ref, image=self.image_sequence[counter]) # set the proper image to the passed element
if self.animating: # if we're animating,
# do it again with the next one
self.parent.after(int(1000//self.framerate), lambda: self.animate(counter+1 if (counter+1)<len(self.image_sequence) else 0))
</code></pre>
|
python|tkinter
| 1 |
1,901,842 | 29,537,219 |
Flask upload file to static folder "static/avatars"
|
<p>I have one problem during I try to upload photo to app/static/avatars folder in python flask.</p>
<p>my folder structure:</p>
<pre><code>Project/
app/
static/
avatars/
Upload/
upload.py
</code></pre>
<p>my destination folder is "avatars" and my code is in "Upload/upload.py". How can I get the real path for the upload?</p>
<p>Sample codes</p>
<pre><code>UPLOAD_FOLDER = 'app/static/avatars/'
ALLOWED_EXTENSIONS = set(['png', 'jpg', 'jpeg', 'gif'])
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config['MAX_CONTENT_LENGTH'] = 1 * 600 * 600
</code></pre>
<p>Error Message:</p>
<pre><code>IOError: [Errno 2] No such file or directory: u'//app/static/avatars/002.png'
</code></pre>
<p>Thanks in advance!!</p>
|
<p>Okay, in <code>upload.py</code> you could do something like</p>
<pre><code>>>> import os
>>> absolute_path = os.path.abspath(os.path.join("..", UPLOAD_FOLDER, file_name))
</code></pre>
<p><code>os.path.abspath</code> returns the absolute path for a given relative path, resolved against your current working directory. Using <code>os.path.join</code> rather than string concatenation avoids missing or doubled separators between the path pieces.</p>
python|flask
| 0 |
1,901,843 | 46,616,058 |
"Invalid credentials" error when accessing Redshift from Python
|
<p>I am trying to write a Python script to access Amazon Redshift to create a table in Redshift and copy data from S3 to the Redshift table.</p>
<p>My code is:</p>
<pre><code>import psycopg2
import os
#import pandas as pd
import requests
requests.packages.urllib3.disable_warnings()
redshift_endpoint = os.getenv("END-point")
redshift_user = os.getenv("user")
redshift_pass = os.getenv("PASSWORD")
port = 5439
dbname = 'DBNAME'
conn = psycopg2.connect(
host="",
user='',
port=5439,
password='',
dbname='')
cur = conn.cursor()
aws_key = os.getenv("access_key") # needed to access S3 Sample Data
aws_secret = os.getenv("secret_key")
#aws_iam_role= os.getenv('iam_role') #tried using this too
base_copy_string= """copy %s from 's3://mypath/%s'.csv
credentials 'aws_access_key_id= %s aws_access_secrect_key= %s'
delimiter '%s';""" # the base COPY string that we'll be using
#easily generate each table that we'll need to COPY data from
tables = ["employee"]
data_files = ["test"]
delimiters = [","]
#the generated COPY statements we'll be using to load data;
copy_statements = []
for tab, f, delim in zip(tables, data_files, delimiters):
copy_statements.append(base_copy_string % (tab, f, aws_key, aws_secret, delim)%)
#create Table
cur.execute(""" create table employee(empname varchar(30),empno integer,phoneno integer,email varchar(30))""")
for copy_statement in copy_statements: # execute each COPY statement
cur.execute(copy_statement)
conn.commit()
for table in tables + ["employee"]:
cur.execute("select count(*) from %s;" % (table,))
print(cur.fetchone())
conn.commit() # make sure data went through and commit our statements permanently.
</code></pre>
<p>When I run this command I getting an Error at cur.execute(copy_statement)</p>
<pre><code>**Error:** error: Invalid credentials. Must be of the format: credentials 'aws_iam_role=...' or 'aws_access_key_id=...;aws_secre
t_access_key=...[;token=...]'
code: 8001
context:
query: 582
location: aws_credentials_parser.cpp:114
process: padbmaster [pid=18692]
</code></pre>
<p>Is there a problem in my code? Or is it is an AWS access_key problem?</p>
<p>I even tried using an <strong>iam_role</strong> but I get an error:</p>
<blockquote>
<p>IAM role cannot assume role even in Redshift</p>
</blockquote>
<p>I have a managed IAM role permission by attaching <strong>S3FullAccess</strong> policy.</p>
|
<p>There are some errors in your script.</p>
<p>1) Change <strong><em>base_copy_string</em></strong> as below:</p>
<blockquote>
<p>base_copy_string= """copy %s from 's3://mypath/%s.csv' credentials
'aws_access_key_id=%s;aws_secret_access_key=%s' delimiter '%s';""" #
the base COPY string that we'll be using</p>
</blockquote>
<p>There must be a <code>;</code> added in credentials and also other formatting issues with single-quotes. It is <code>aws_secret_access_key</code> and not <code>aws_access_secrect_key</code>.</p>
<p>check this link for detailed info: <a href="http://docs.aws.amazon.com/redshift/latest/dg/copy-usage_notes-access-permissions.html#copy-usage_notes-iam-permissions" rel="nofollow noreferrer">http://docs.aws.amazon.com/redshift/latest/dg/copy-usage_notes-access-permissions.html#copy-usage_notes-iam-permissions</a></p>
<p>I suggest you use iam-roles instead of credentials.
<a href="http://docs.aws.amazon.com/redshift/latest/dg/loading-data-access-permissions.html" rel="nofollow noreferrer">http://docs.aws.amazon.com/redshift/latest/dg/loading-data-access-permissions.html</a></p>
<p>2) change <strong><em>copy_statements.append</em></strong> as below(remove extra <code>%</code> in the end):</p>
<blockquote>
<p>copy_statements.append(base_copy_string % (tab, f, aws_key,
aws_secret, delim))</p>
</blockquote>
<p>Correct these and try again.</p>
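<p>Putting both fixes together, the statement builder can be sanity-checked before ever talking to Redshift. The key values below are placeholders, not real credentials:</p>

```python
# Corrected COPY statement construction: .csv inside the quoted S3 path,
# a semicolon between the two credential fields, and the proper
# aws_secret_access_key spelling.
base_copy_string = (
    "copy %s from 's3://mypath/%s.csv' "
    "credentials 'aws_access_key_id=%s;aws_secret_access_key=%s' "
    "delimiter '%s';"
)
stmt = base_copy_string % ('employee', 'test', 'AKIA...', 'SECRET...', ',')
print(stmt)
```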
|
python|amazon-web-services|amazon-s3|amazon-redshift
| 2 |
1,901,844 | 46,431,260 |
I keep getting an invalid syntax error on line 1 but, everything is written correctly
|
<pre><code>from maya import cmds
sel = cmds.ls(sl=1)
# [u'IK_R_Shoulder', u'FK_R_Shoulder', u'R_Shoulder', u'R_ArmBlender_CtrlGrp']
blend = cmds.createNode("blendColors")
# sel[0] = ik
cmds.connectAttr(sel[0] + '.r', blend + '.color1', f=1 )
# sel[1] = fk
cmds.connectAttr(sel[1] + '.r', blend + '.color2', f=1 )
# sel[2] = skin
cmds.connectAttr(blend + '.output',sel[2] + '.r',, f=1 )
# sel[3] = blenderCtrl
cmds.connectAttr(sel[3] + '.tx', blend + '.blender')
</code></pre>
|
<p>I think that there are some invisible characters in the file, or a BOM (byte-order mark) if the file is encoded in UTF-8. I would recommend you to load the file into a hex editor and search for such characters.</p>
<p>If this does not help, you could upload the file somewhere and tell us the URI so that we can take a look into it (I would be willing to spend some minutes). When uploading, make sure that you are using a <strong>binary transfer method</strong>, i.e. make sure that the file does not get altered in any way by the transfer software / transfer process itself.</p>
<p>A typical example for this would be when an FTP client running under Windows uploads a file to a place which is on a Linux server, and when thereby the line endings get converted from CR+LF (Windows) to LF only (Linux). So please be careful and double check all settings of the software that actually transfers the file. Otherwise, we will examine a file which is not the same as that one on your hard disk; this would lead us nowhere besides wasting time.</p>
|
python|maya
| 0 |
1,901,845 | 46,288,847 |
How to suppress pip upgrade warning?
|
<p>My pip version was off -- every pip command was saying:</p>
<pre><code>You are using pip version 6.0.8, however version 8.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
</code></pre>
<p>And I didn't like the answers given here: <a href="https://stackoverflow.com/questions/36410756/how-can-i-get-rid-of-this-warning-to-upgrade-from-pip">How can I get rid of this warning to upgrade from pip?</a> because they all want to get <code>pip</code> out of sync with the RH version.</p>
<p>So I tried a clean system install with this VagrantFile:</p>
<pre><code>Vagrant.configure("2") do |config|
config.ssh.username = 'root'
config.ssh.password = 'vagrant'
config.ssh.insert_key = 'true'
config.vm.box = "bento/centos-7.3"
config.vm.provider "virtualbox" do |vb|
vb.cpus = "4"
vb.memory = "2048"
end
config.vm.synced_folder "..", "/vagrant"
config.vm.network "public_network", bridge: "eth0", ip: "192.168.1.31"
config.vm.provision "shell", inline: <<-SHELL
set -x
# Install pip
yum install -y epel-release
yum install -y python-pip
pip freeze # See if pip prints version warning on fresh OS install.
SHELL
end
</code></pre>
<p>But then I got:</p>
<pre><code>==> default: ++ pip freeze
==> default: You are using pip version 8.1.2, however version 9.0.1 is available.
==> default: You should consider upgrading via the 'pip install --upgrade pip' command.
</code></pre>
<p>So it seems that I'm using the wrong commands to install <code>pip</code>. What are correct commands to use?</p>
|
<p>There are many options (2021 update)...</p>
<p><strong>Use a command line flag</strong></p>
<pre><code>pip <command> --disable-pip-version-check [options]
</code></pre>
<p><strong>Configure pip from the command line</strong></p>
<pre><code>pip config set global.disable-pip-version-check true
</code></pre>
<p><strong>Set an environment variable</strong></p>
<pre><code>export PIP_DISABLE_PIP_VERSION_CHECK=1
</code></pre>
<p><strong>Use a config file</strong></p>
<p>Create a pip configuration file and set <code>disable-pip-version-check</code> to true</p>
<pre><code>[global]
disable-pip-version-check = True
</code></pre>
<p>On many Linux systems, the default location for the pip configuration file is <code>$HOME/.config/pip/pip.conf</code>. Locations for Windows, macOS, and virtualenvs are too varied to detail here. Refer to the docs:</p>
<p><a href="https://pip.pypa.io/en/stable/user_guide/#config-file" rel="noreferrer">https://pip.pypa.io/en/stable/user_guide/#config-file</a></p>
|
python|pip
| 78 |
1,901,846 | 60,974,080 |
Reaching an unreachable ELSE
|
<p>I included an unreachable condition in this function. The problem is that it was just reached. I don't know how to troubleshoot it.</p>
<pre><code>def bcirrus_func1(Qn):
if Qn <= -1:
bcirrus = 0
elif Qn > -1 and Qn <= 0:
bcirrus = 0.5*(1-Qn)**2
elif Qn > 0 and Qn < 1:
bcirrus = 1 - 0.5*(1-Qn)**2
elif Qn >= 1:
bcirrus = 1
else:
print('Something has gone very wrong')
return(bcirrus)
</code></pre>
<p>How could 'Something has gone very wrong' have been triggered?</p>
<p>Here is the error:</p>
<pre><code>/.local/lib/python3.6/site-packages/pint/numpy_func.py:289: RuntimeWarning: overflow encountered in exp
result_magnitude = func(*stripped_args, **stripped_kwargs)
/.local/lib/python3.6/site-packages/pint/quantity.py:1160: RuntimeWarning: invalid value encountered in double_scalars
magnitude = magnitude_op(new_self._magnitude, other._magnitude)
Something has gone very wrong
Traceback (most recent call last):
File "./make_pcc_layer.py", line 122, in <module>
pcc1, pcc2 = cc.PCC(layer_pressure,layer[i][j].tmp-273.15,layer[i][j].rh,layer[i][j].icmr)
File "//weather_box/earth/clouds_and_contrails.py", line 119, in PCC
PCC1 = bcirrus_func1(Qnstar)-bcirrus_func1(Qn)
File "//weather_box/earth/clouds_and_contrails.py", line 39, in bcirrus_func1
return(bcirrus)
UnboundLocalError: local variable 'bcirrus' referenced before assignment
</code></pre>
<p>EDIT:
I added Qn to the "unreachable" print statement and it is NaN as was suggested. Here is the output:</p>
<pre><code>Something has gone very wrong: Qn nan dimensionless
</code></pre>
<p>It says "dimensionless" because it is using Pint.</p>
|
<p>Wild guess here, but is it possible that <code>Qn</code> is <code>nan</code>? If so, it has strange (read: possibly unintuitive) comparison behavior: every ordering comparison with <code>nan</code> returns <code>False</code>.</p>
<pre><code>>>> import math
>>> x = math.nan
>>> x < -1
False
>>> x > 1
False
</code></pre>
|
python
| 3 |
1,901,847 | 49,734,616 |
Time Series Data Prediction (on delta with shift(1))
|
<p>I have time series data in increasing order, like the data given below:</p>
<pre><code>**dataset 1**
----------------------
date value
----------------------
date1 10
date2 12
date3 13
date4 15
----------------------
</code></pre>
<p>If I make predictions using standard models, I'm getting good result without any issues.</p>
<p>My question is: can I take a delta of the data with shift(1) and use the resulting series for prediction? This will have the DELTA values like those below:</p>
<pre><code>**dataset 2**
----------------------
date value
----------------------
date1 0
date2 2
date3 1
date4 2
----------------------
</code></pre>
<p>Am I making the good data into 'white noise'? What are your suggestions on this?</p>
|
<p>Taking deltas of time-series is part of the <a href="https://en.wikipedia.org/wiki/Box%E2%80%93Jenkins_method" rel="nofollow noreferrer">Box-Jenkins Method</a>. If the deltas are not stationary, then further analysis of them can show trend and seasonality, for example. This is exactly the case when differencing does not create white noise.</p>
<p>That being said, it might not be necessary for you to develop this from scratch. Libraries such as <a href="https://www.statsmodels.org/stable/index.html" rel="nofollow noreferrer"><code>statsmodels</code></a>, for example, contain ?AR?MA? models (that is auto-regressive moving-average models, possibly integrative). You might want to check <a href="https://www.statsmodels.org/stable/tsa.html" rel="nofollow noreferrer"><code>statsmodels.tsa</code></a> in particular.</p>
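<p>As for losing information: first differences are not destructive as long as the first value is kept, since a running sum reconstructs the original series exactly. A quick stdlib sketch:</p>

```python
values = [10, 12, 13, 15]

# First differences -- what subtracting shift(1) produces, minus the leading NaN
deltas = [b - a for a, b in zip(values, values[1:])]
print(deltas)  # [2, 1, 2]

# Invert: first value plus the running sum of deltas recovers the series
recovered = [values[0]]
for d in deltas:
    recovered.append(recovered[-1] + d)
print(recovered == values)  # True
```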
|
pandas|time-series|prediction|forecasting|arima
| 1 |
1,901,848 | 62,805,236 |
How to filter a list of tuples with another list of items
|
<p>I have two lists: one list containing items which are reference numbers and a second list containing tuples which some include the reference numbers of the first list.</p>
<p>My list of reference numbers looks like this:</p>
<pre><code>list1 = ['0101', '0202', '0303']
</code></pre>
<p>And my list of tuples like this:</p>
<pre><code>list2 = [
('8578', 'aaa', 'bbb', 'ccc'),
('0101', 'ddd', 'eee', 'fff'),
('9743', 'ggg', 'hhh', 'iii'),
('2943', 'jjj', 'kkk', 'lll'),
('0202', 'mmm', 'nnn', 'ooo'),
('7293', 'ppp', 'qqq', 'rrr'),
('0303', 'sss', 'ttt', 'uuu'),
]
</code></pre>
<p>I want to filter the second list above depending on the presence of the reference numbers from the first list inside tuples: if the reference number is included in a tuple, the script takes it off from the list.</p>
<p>Here is the expected result:</p>
<pre><code>newlist2 = [
('8578', 'aaa', 'bbb', 'ccc'),
('9743', 'ggg', 'hhh', 'iii'),
('2943', 'jjj', 'kkk', 'lll'),
('7293', 'ppp', 'qqq', 'rrr'),
]
</code></pre>
<p>How can I do that?</p>
|
<p>You can use the built-in <a href="https://www.geeksforgeeks.org/filter-in-python/" rel="nofollow noreferrer">filter</a> function with a <a href="https://www.w3schools.com/python/python_lambda.asp" rel="nofollow noreferrer">lambda</a>:</p>
<p><code>list2 = filter(lambda a: a[0] not in list1, list2)</code></p>
<p>Note the <code>not in</code>: you want to keep only the tuples whose first element does <em>not</em> appear in <code>list1</code>. In Python 3 this returns an iterator; if you need an actual list, you can use a <a href="https://www.programiz.com/python-programming/list-comprehension" rel="nofollow noreferrer">list comprehension</a> instead:</p>
<p><code>list2 = [element for element in list2 if element[0] not in list1]</code></p>
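<p>Applied to the data from the question:</p>

```python
list1 = ['0101', '0202', '0303']
list2 = [
    ('8578', 'aaa', 'bbb', 'ccc'),
    ('0101', 'ddd', 'eee', 'fff'),
    ('9743', 'ggg', 'hhh', 'iii'),
    ('2943', 'jjj', 'kkk', 'lll'),
    ('0202', 'mmm', 'nnn', 'ooo'),
    ('7293', 'ppp', 'qqq', 'rrr'),
    ('0303', 'sss', 'ttt', 'uuu'),
]

# Keep only tuples whose reference number is absent from list1
newlist2 = [t for t in list2 if t[0] not in list1]
print(newlist2)
# [('8578', 'aaa', 'bbb', 'ccc'), ('9743', 'ggg', 'hhh', 'iii'),
#  ('2943', 'jjj', 'kkk', 'lll'), ('7293', 'ppp', 'qqq', 'rrr')]
```

<p>For a large <code>list1</code>, converting it to a <code>set</code> first makes each <code>not in</code> check O(1).</p>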
|
python|list|tuples
| 1 |
1,901,849 | 70,194,809 |
How can I provide a non-fixture pytest parameter via a custom decorator?
|
<p>We have unit tests running via Pytest, which use a custom decorator to start up a context-managed mock echo server before each test, and provide its address to the test as an extra parameter. This works on Python 2.</p>
<p>However, if we try to run them on Python 3, then Pytest complains that it can't find a fixture matching the name of the extra parameter, and the tests fail.</p>
<p>Our tests look similar to this:</p>
<pre><code>@with_mock_url('?status=404&content=test&content-type=csv')
def test_file_not_found(self, url):
res_id = self._test_resource(url)['id']
result = update_resource(None, res_id)
assert not result, result
self.assert_archival_error('Server reported status error: 404 Not Found', res_id)
</code></pre>
<p>With a decorator function like this:</p>
<pre><code>from functools import wraps
def with_mock_url(url=''):
"""
Start a MockEchoTestServer and call the decorated function with the server's address prepended to ``url``.
"""
def decorator(func):
@wraps(func)
def decorated(*args, **kwargs):
with MockEchoTestServer().serve() as serveraddr:
return func(*(args + ('%s/%s' % (serveraddr, url),)), **kwargs)
return decorated
return decorator
</code></pre>
<p>On Python 2 this works; the mock server starts, the test gets a URL similar to "http://localhost:1234/?status=404&content=test&content-type=csv", and then the mock is shut down afterward.</p>
<p>On Python 3, however, we get an error, "fixture 'url' not found".</p>
<p>Is there perhaps a way to tell Python, "This parameter is supplied from elsewhere and doesn't need a fixture"? Or is there, perhaps, an easy way to turn this into a fixture?</p>
|
<p>Looks like Pytest is content to ignore it if I add a default value for the injected parameter, to make it non-mandatory:</p>
<pre><code>@with_mock_url('?status=404&content=test&content-type=csv')
def test_file_not_found(self, url=None):
</code></pre>
<p>The decorator can then inject the value as intended.</p>
|
python|python-3.x|pytest|python-decorators|fixtures
| 1 |
1,901,850 | 70,023,284 |
Code that should transform words into binary doesn't work in python 3.10
|
<p>I know that there are certain built-in functions that do this but for fun, I wrote a code for it. Where is the mistake in this code?</p>
<p>For example the expected outcome for 'abc' is:</p>
<p><code>[[0,1,1,0,0,0,0,1],[0,1,1,0,0,0,1,0],[0,1,1,0,0,0,1,1]]</code></p>
<pre><code>strlist = []
strinput = input("Enter a string only non capital letters to convert to binary: ")
lenstr = len(strinput)
a = lenstr-1
while a >= 0:
listx = []
if strinput[a] == 'a':
x = 97
elif strinput[a] == 'b':
x = 98
elif strinput[a] == 'c':
x = 99
i = 8
while i>=0:
if 2**i > x >= 2**(i-1):
listx.append(1)
x -= 2**(i-1)
else:
listx.append(0)
i -= 1
listx.pop(len(listx)-1)
strlist.append(listx)
a -= 1
print(strlist)
</code></pre>
|
<p>Made minor changes to get it working for chars other than just 'abc', using <code>ord</code> to get the character code:</p>
<pre><code>strlist = []
strinput = input("Enter a string only non capital letters to convert to binary: ")
for c in strinput:
    listx = []
    x = ord(c)
    i = 8
    while i >= 0:
        if 2**i > x >= 2**(i-1):
            listx.append(1)
            x -= 2**(i-1)
        else:
            listx.append(0)
        i -= 1
    listx.pop()  # the loop makes 9 entries; drop the extra trailing one
    strlist.append(listx)
print(strlist)
</code></pre>
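<p>For comparison, the built-ins the question mentions collapse this to one line: <code>format(ord(c), '08b')</code> gives the 8-bit binary string directly.</p>

```python
strinput = 'abc'
strlist = [[int(bit) for bit in format(ord(c), '08b')] for c in strinput]
print(strlist)
# [[0, 1, 1, 0, 0, 0, 0, 1], [0, 1, 1, 0, 0, 0, 1, 0], [0, 1, 1, 0, 0, 0, 1, 1]]
```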
|
python|binary
| 0 |
1,901,851 | 53,705,924 |
sns stripplot with just top n number of categories
|
<p>I have code that plots sns stripplot nicely :</p>
<pre><code>f, ax = plt.subplots(figsize=(15,12))
sns.stripplot(data = cars, x='price', y='model', jitter=.5)
plt.show()
</code></pre>
<p>but there are too many car models so I wish to visualize only top n most frequently appearing car models in dataset.
Also is there any lambda calculations or something similar that I can apply to <code>price</code> or <code>model</code> without creating separate data frame?</p>
<p>If there is better visualization library that can help with that feel free to propose.</p>
|
<p>You can find the most occurring values of a column with <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow noreferrer">value_counts()</a>. Here I've selected the top 2 most occurring models: </p>
<pre><code>most_occurring_values = cars['model'].value_counts().head(2).index
</code></pre>
<p>Then you could filter your original dataframe and only select rows that contain the models with the highest frequency: </p>
<pre><code>cars_subset = cars[cars['model'].isin(most_occurring_values)]
</code></pre>
<p>Finally, use that subset to plot your data:</p>
<pre><code>f, ax = plt.subplots(figsize=(15,12))
sns.stripplot(data = cars_subset, x='price', y='model', jitter=.5)
plt.show()
</code></pre>
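<p>The same top-n logic can be sketched with the standard library alone, which makes it easy to verify — the model names below are made-up stand-ins for the <code>cars['model']</code> column:</p>

```python
from collections import Counter

models = ['civic', 'golf', 'civic', 'golf', 'civic', 'polo', 'golf', 'golf']

# Counter.most_common(n) plays the role of value_counts().head(n).index
n = 2
top_n = [model for model, _ in Counter(models).most_common(n)]
print(top_n)  # ['golf', 'civic']

# Keep only the rows belonging to the top-n models
subset = [m for m in models if m in top_n]
print(subset)  # ['civic', 'golf', 'civic', 'golf', 'civic', 'golf', 'golf']
```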
|
python|python-3.x|data-visualization|seaborn
| 2 |
1,901,852 | 53,464,475 |
Missing index when unpivotting / melt() Pandas DataFrame with MultiIndex columns and rows
|
<p>I want to "flatten" an existing Dataframe and came across the Pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html" rel="nofollow noreferrer"><code>melt()</code></a> command. This seems to be the weapon of choice here, but the behaviour is a bit unexpected (at least to me). Let's start with a fairly innocent MultiIndex DataFrame:</p>
<pre><code>df = pd.DataFrame(np.random.randn(6, 6),
index=pd.MultiIndex.from_arrays([['X','X','X','Y','Y','Y'],
['x','y','z','x','y','z']],
names=['omega1', 'omega2']),
columns=pd.MultiIndex.from_arrays([['A','A','A','B','B','B'],
['a','b','c','a','b','c']],
names=['alpha1', 'alpha2']))
</code></pre>
<p>Gives a nice DataFrame like:</p>
<pre><code>alpha1 A ... B
alpha2 a b ... b c
omega1 omega2 ...
X x 2.362954 0.015595 ... 1.273841 -0.632132
y -0.134122 1.791614 ... 1.101646 -0.181099
z 0.410267 1.063625 ... -1.483590 0.521431
Y x 0.001779 -0.076198 ... -1.395494 1.177853
y 0.453172 1.899883 ... 1.116654 -2.209697
z 1.636227 -0.999949 ... 0.800413 -0.431485
</code></pre>
<p>When I now do <code>df.melt()</code>, I get something like this: </p>
<pre><code> alpha1 alpha2 value
0 A a 2.362954
1 A a -0.134122
2 A a 0.410267
3 A a 0.001779
...
33 B c 1.177853
34 B c -2.209697
35 B c -0.431485
</code></pre>
<p>However I am more expecting something like this:</p>
<pre><code> omega1 omega2 alpha1 alpha2 value
0 X x A a 2.362954
1 X y A a -0.134122
2 X z A a 0.410267
3 Y x A a 0.001779
...
33 Y x B c 1.177853
34 Y y B c -2.209697
35 Y z B c -0.431485
</code></pre>
<p>The exact order does not matter, but it would be nice if column and row names remained intact.
I can't get Pandas to properly return the index with it. What am I doing wrong??</p>
|
<p>You need to call <code>reset_index</code> first, so the row index levels become regular columns, and then pass their names as the <code>id_vars</code> of <code>melt</code>:</p>
<pre><code>df.reset_index().melt(['omega1','omega2'])
</code></pre>
|
python|pandas|dataframe|unpivot
| 4 |
1,901,853 | 53,550,369 |
How can I check if a user and password is already in the MySQL database using Python?
|
<p>I have a html login form.which collects a users username and password . I also have a script which collects them and stores it . I was wondering if it was possible to check if the username and password already exists in the MySQL database before storing in the database.</p>
<p>Html form: </p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>login</title>
</head>
<body>
<form action="login.py" method="GET">
First name:<br>
<input type="text" name="user_name"><br>
Last name:<br>
<input type="text" name="password"><br>
</form>
</body>
</html>
Login script
import cgitb
cgitb.enable()
import cgi
form = cgi.FieldStorage()
user_name = form["user_name"].value
password = form["password"].value
import pymysql
conn = pymysql.connect(db='userdb',
user='root', passwd='12346',
host='localhost')
cursor = conn.cursor()
query= "INSERT INTO users VALUES
('{user_name}',{password})"
cursor.execute(query.format(user_name=user-
name, password=password))
conn.commit()
</code></pre>
<p>Is it possible to check if the username and password from the html form has a match in the database?</p>
|
<p>Make a unique or primary key on <code>user_name</code>; then, if a duplicate insert occurs, an exception will be thrown.</p>
<p>If you're storing passwords in an unhashed format, shame on you, er <a href="https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet#Use_a_cryptographically_strong_credential-specific_salt" rel="nofollow noreferrer">fix it</a>,</p>
<p>If you have passwords as hashes you can't tell if they are duplicates (which isn't a bad thing).</p>
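<p>To answer the literal question, checking for an existing row takes one parameterized SELECT, which also closes the SQL-injection hole in the posted script (never build queries with <code>format</code> on user input). Sketched here with the stdlib <code>sqlite3</code> driver; the same <code>cursor.execute(query, params)</code> pattern works with <code>pymysql</code>, which uses <code>%s</code> placeholders instead of <code>?</code>:</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE users (user_name TEXT PRIMARY KEY, password TEXT)")

def user_exists(cur, user_name):
    # Parameterized query: the driver escapes the value safely
    cur.execute("SELECT 1 FROM users WHERE user_name = ?", (user_name,))
    return cur.fetchone() is not None

cur.execute("INSERT INTO users VALUES (?, ?)", ('alice', 'hashed-pw'))
print(user_exists(cur, 'alice'))  # True
print(user_exists(cur, 'bob'))    # False
```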
|
python|mysql|scripting|pymysql
| 0 |
1,901,854 | 55,008,024 |
Extending a list of list with single elements of a list Error
|
<p>I was trying to extend a list of list to add an element at the beginning, which is a number:</p>
<pre><code>groups = list([list([a, b, c]),list([a2, b2, c2])])
numbers = list([1,2])
</code></pre>
<p>The result should be looking like this:</p>
<pre><code>result = [[1,a, b, c],[2,a2, b2, c2]]
</code></pre>
<p>This is my code so far:</p>
<pre><code>result = []
for i in groups :
for j in numbers:
result.append([j,i])
</code></pre>
<p>Do you have any suggestion on what I might be doing wrong? or a hint on how to solve the issue?</p>
<p>Thanks so much in advance</p>
|
<p>The problem with your approach is that you are using two separate <code>for</code> loops, when you really want to iterate over both lists at the same time. For these cases <a href="https://docs.python.org/3.3/library/functions.html#zip" rel="nofollow noreferrer"><code>zip</code></a> comes handy. It allows you to aggregate multiple iterables. So you could instead do:</p>
<pre><code>result = []
for i,j in zip(numbers, groups):
result.append([i]+j)
print(result)
# [[1, 'a', 'b', 'c'], [2, 'a2', 'b2', 'c2']]
</code></pre>
<hr>
<p>For a more concise solution you could use a list comprehension to add the elements from both lists (note that the elements in <code>numbers</code> have to be turned into lists):</p>
<pre><code>[[i]+j for i,j in zip(numbers, groups)]
</code></pre>
<p><b> Output </b></p>
<pre><code>[[1, 'a', 'b', 'c'], [2, 'a2', 'b2', 'c2']]
</code></pre>
|
python|list|nested-lists
| 2 |
1,901,855 | 73,788,824 |
How can I save / reload a FunctionTransformer object and expect it to always work the same way, even if its internal function definition changed?
|
<p>I am using <code>ColumnTransformer</code> and <code>FunctionTransformer</code> classes to build a preprocessing pipeline for an ML project.</p>
<p>One of my <code>FunctionTransformer</code> uses a function (let's say <code>preprocess_A</code>) defined in my package. I fitted the pipeline and saved it as a pickle file alongside a trained model. Everything seemed to work fine.</p>
<p>I then decided to change <code>preprocess_A</code> definition for a new experiment, but I also noticed a drop in performance for my trained model. The problem is that the pickle file only keeps a reference to <code>preprocess_A</code> but not its definition. Hence, any change in <code>preprocess_A</code> will impact my previous pipelines.</p>
<p>So, is there a way to save my <code>FunctionTransformer</code> object and expect it to always work the same way, even if I later modify <code>preprocess_A</code>?</p>
<p>Code snippet :</p>
<pre class="lang-py prettyprint-override"><code>### Imports
import pickle
import pandas as pd
# Custom package pipeline
from my_package.preprocessing import preprocess
pipeline = preprocess.get_pipeline()
# pipeline = ColumnTransformer([('test', FunctionTransformer(preprocess_A), make_column_selector())])
# def preprocess_A(x): return x ** 2
### Fit
df = pd.DataFrame({'col1': [1, 2, 4], 'col2': [3, 6, 9]})
pipeline.fit(df)
### Save
with open('test.pkl', 'wb') as f:
pickle.dump(pipeline, f)
### Transform
pipeline.transform(df)
# array([[ 1, 9], [ 4, 36], [16, 81]], dtype=int64)
</code></pre>
<pre><code>- def preprocess_A(x): return x ** 2
+ def preprocess_A(x): return x ** 4
</code></pre>
<pre class="lang-py prettyprint-override"><code>### Imports
import pickle
import pandas as pd
### Load pipeline
with open('test.pkl', 'rb') as f:
pipeline = pickle.load(f)
### Transform
df = pd.DataFrame({'col1': [1, 2, 4], 'col2': [3, 6, 9]})
pipeline.transform(df)
# array([[ 1, 81], [ 16, 1296], [ 256, 6561]], dtype=int64)
</code></pre>
<p><em>Note : it works fine with <code>dill</code> and lambda functions, but some preprocessing function can be complex and not easy to transform in lambdas</em></p>
<p><em>Note 2 : even if some functions might be complex, they have no dependency to my own package</em></p>
|
<p>I'm the <code>dill</code> author. With <code>pickle</code>, the behavior is to load the class instance where the class instance refers to the class object by reference (i.e. so if the class definition changes, then the unserialized object uses the new version). <code>dill</code> on the other hand, has a keyword that allows you to toggle the behavior. <code>dill</code> stores the class definition along with the instance, so you can choose to either use or ignore the stored class definition.</p>
<pre><code>Python 3.7.14 (default, Sep 10 2022, 11:17:06)
[Clang 10.0.1 (clang-1001.0.46.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import dill
>>> class Foo(object):
... def bar(self, x):
... return x+self.y
... y = 1
...
>>> f = Foo()
>>> _Foo = dill.dumps(Foo)
>>> _f = dill.dumps(f)
>>> del Foo, f
>>>
>>> class Foo(object):
... def bar(self, x):
... return x+self.z
... z = -1
...
>>> f_ = dill.loads(_f) # use the new class
>>> f_.__class__ == Foo
True
>>> f_.z
-1
>>> f_.bar(0)
-1
>>> f_ = dill.loads(_f, ignore=True) # ignore the new class
>>> f_.bar(0)
1
>>> f_.y
1
>>>
</code></pre>
<p>EDIT:</p>
<p>If it's an imported class, rather than one defined in <code>__main__</code>, it's a bit harder, but you still might be able to do it using <code>dill.source</code>.</p>
<pre><code>>>> import dill
>>> from changeclass import Foo
>>> # stored by reference, so not what you want
>>> dill.dumps(Foo)
b'\x80\x03cchangeclass\nFoo\nq\x00.'
>>> # so, we grab the source
>>> dill.source.getsource(Foo)
'class Foo(object):\n def bar(self, x):\n return self.y + x\n y = 1\n'
>>>
>>> # we can either exec the source here and then
>>> # store an instance to it (see above answer),
>>> # or we can dump the source and exec it later.
>>>
>>> dill.dumps(dill.source.getsource(Foo))
b'\x80\x03XE\x00\x00\x00class Foo(object):\n def bar(self, x):\n return self.y + x\n y = 1\nq\x00.'
>>> dill.loads(_)
'class Foo(object):\n def bar(self, x):\n return self.y + x\n y = 1\n'
>>> exec(_)
>>> Foo
<class '__main__.Foo'>
</code></pre>
<p>If the class is complicated, you might have to dump the entire module source... or worse, multiple modules. It can get complicated. It's not intended to work across different source versions.</p>
|
python|scikit-learn|pickle|dill
| 0 |
1,901,856 | 73,678,943 |
NO VPN/PROXY How to cheat/fake IP location to google translate or websites
|
<p>I was surfing on omegle.com and finding only people from my country (Italy). I was wondering about chatting with someone from the US, DE, FR or similar, since I know these languages, but I cannot figure out how it finds out that I'm Italian. I think it does it via Google Translate, because it lets you choose a language when you start chatting, in the upper right corner. So I was wondering... how can I cheat Google Translate and fake my language? I think it detects my IP location, but I strictly don't want to use a VPN or proxy for this experiment.</p>
<p>what have i tried:</p>
<p>1- set chrome UI to english (de_DE)</p>
<p>2- start it with selenium+python with chromediver set with option add_argument("--lang=de-DE")</p>
<p>3- add_experimental_option("prefs", {"intl.accept_languages":"de,de_DE","translate_whitelists":{"de":"en"}})</p>
<p>4- change my windows input language to DE</p>
<p>nothing worked, do you have any other idea?</p>
|
<p>I found out how: it was Google Translate after all. Setting two translate cookies when starting the webdriver did the trick.</p>
|
python|python-3.x|selenium|selenium-webdriver|selenium-chromedriver
| 0 |
1,901,857 | 41,028,920 |
Sort a List containing date and time string elements
|
<p>I need assistance in sorting a list by date and time, both date and time are string fields within my list.</p>
<pre><code>myList = ['item1', 'item2', 'mm/dd/yy', 'hh:mmAM', 'item3']
</code></pre>
<p>I'm using Python 2.4.3.</p>
<p>I'm using the Bob Swift CLI to pull page info from Atlassian's Confluence and place into a list so it can then be sorted to showing the oldest pages first.</p>
<p>Sample Data:</p>
<p>pageList = "['Page Title', '383551192', '298288254', 'dt206xxx', '1/7/16 1:05 PM', 'dt206xxx', '1/7/16', '1:16PM', '2', '<a href="http://xx.xxx.xx.xx:8xxx/display/mine/PageTitle" rel="nofollow noreferrer">http://xx.xxx.xx.xx:8xxx/display/mine/PageTitle</a>']"</p>
<p>I need to sort on the last modified date at element 6 and time at 7.</p>
|
<p>You need to have your data in a list of lists, which I think you probably do. I changed <code>pageList</code> to do this, with each inner list having a different time. Now we can use Python's <code>sort()</code>, but we have to provide our own compare function, so here we have <code>pageList.sort(mySort)</code>. The <code>sort()</code> passes two list items for comparison; here each item is an inner list. We need to combine elements 6 and 7 to form a single date-time string, then convert this to a Python <code>datetime</code> so we can compare them.</p>
<p>Since the tuples are long I only print out the date and time values from the inner list after the sort to show it works.</p>
<pre><code>import datetime
pageList =[['Page Title', '383551192', '298288254', 'dt206xxx', '1/7/16 1:05 PM', 'dt206xxx', '1/7/16', '1:56PM', '2', 'http://xx.xxx.xx.xx:8xxx/display/mine/PageTitle'],
['Page Title', '383551192', '298288254', 'dt206xxx', '1/7/16 1:05 PM', 'dt206xxx', '1/7/16', '1:46PM', '2', 'http://xx.xxx.xx.xx:8xxx/display/mine/PageTitle'],
['Page Title', '383551192', '298288254', 'dt206xxx', '1/7/16 1:05 PM', 'dt206xxx', '1/7/16', '1:56PM', '2', 'http://xx.xxx.xx.xx:8xxx/display/mine/PageTitle'],
['Page Title', '383551192', '298288254', 'dt206xxx', '1/7/16 1:05 PM', 'dt206xxx', '1/7/16', '1:16PM', '2', 'http://xx.xxx.xx.xx:8xxx/display/mine/PageTitle'],
['Page Title', '383551192', '298288254', 'dt206xxx', '1/7/16 1:05 PM', 'dt206xxx', '1/7/16', '1:36PM', '2', 'http://xx.xxx.xx.xx:8xxx/display/mine/PageTitle']]
def mySort(a,b):
c = a[6] + ' ' + a[7]
d = b[6] + ' ' + b[7]
date1 = datetime.datetime.strptime(c, '%m/%d/%y %I:%M%p')
date2 = datetime.datetime.strptime(d, '%m/%d/%y %I:%M%p')
return cmp(date1, date2)
if __name__ == '__main__':
pageList.sort(mySort)
for i in range(len(pageList)):
print i,pageList[i][6],' ',pageList[i][7]
</code></pre>
<p>Output:<br>
0 1/7/16 1:16PM<br>
1 1/7/16 1:36PM<br>
2 1/7/16 1:46PM<br>
3 1/7/16 1:56PM<br>
4 1/7/16 1:56PM </p>
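<p>In Python 3 the <code>cmp</code> built-in and comparator-style <code>sort()</code> are gone, so a minimal sketch of the same idea with a <code>key</code> function (same assumed positions: date at index 6, time at index 7) would be:</p>

```python
import datetime

# Trimmed-down rows with the date at index 6 and time at index 7, as in the question
pageList = [
    ['Page Title', '', '', '', '', '', '1/7/16', '1:56PM', '', ''],
    ['Page Title', '', '', '', '', '', '1/7/16', '1:16PM', '', ''],
    ['Page Title', '', '', '', '', '', '1/7/16', '1:36PM', '', ''],
]

def sort_key(row):
    # combine date and time into one string, then parse it into a datetime
    return datetime.datetime.strptime(row[6] + ' ' + row[7], '%m/%d/%y %I:%M%p')

pageList.sort(key=sort_key)
print([row[7] for row in pageList])  # ['1:16PM', '1:36PM', '1:56PM']
```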
|
python|list
| 0 |
1,901,858 | 40,806,068 |
installation path for sqlite database
|
<p>I am trying to access a database using python. The src folder is:</p>
<pre><code>ptbl/
├── dialogue.py
├── elem_H.py
├── elems.db
├── __init__.py
├── __main__.py
├── main.py
├── menubar.ui
└── menu.py
</code></pre>
<p>with <code>elem_H.py</code> access <code>elems.db</code> database as:</p>
<pre><code>sqlfile = "elems.db"
conn = sqlite3.connect(sqlfile)
</code></pre>
<p>Of course, when I run it from a terminal inside the src dir (ptbl), everything works fine.
But when I am outside the <code>src</code> dir, it gives the error: </p>
<pre><code>sqlite3.OperationalError: no such table: Overview
</code></pre>
<p>and the same happens if I install it using <code>autotools</code>.</p>
<p>For elems.db to be found, I have to run the program from the folder where elems.db is present.</p>
<p>How can I make it work from wherever it is installed? </p>
|
<p>Mixing Python and SQLite files in one single directory is not good practice at all. You should fix it and move <code>elems.db</code> out of your libraries directory.</p>
<p>Moreover, as Lutz Horn said in comments, you should make it configurable and not trust that your database file will be always located in the exact same place.</p>
<p>But anyway, to fix your issue without updating these two points, you have to take care of the <code>elem_H.py</code> location. You know <code>elems.db</code> is next to it, so you can do:</p>
<pre><code>import os.path
sqlfile = os.path.join(os.path.dirname(__file__), "elems.db")
</code></pre>
<ul>
<li><code>__file__</code> stores the path to the current module file (it may be relative to the directory where you ran the command).</li>
<li><a href="https://docs.python.org/2/library/os.path.html#os.path.dirname" rel="nofollow noreferrer">os.path.dirname</a> remove the filename from the given path.</li>
<li><a href="https://docs.python.org/2/library/os.path.html#os.path.join" rel="nofollow noreferrer">os.path.join</a> concatenates the computed directory and your filename.</li>
</ul>
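<p>To also address the configurability point, here is a small sketch; the <code>PTBL_DB</code> environment variable name is just an assumption for illustration:</p>

```python
import os
import os.path

def db_path(filename="elems.db"):
    # Allow an explicit override via an environment variable (hypothetical name),
    # otherwise resolve the database file next to this module.
    override = os.environ.get("PTBL_DB")
    if override:
        return override
    return os.path.join(os.path.dirname(os.path.abspath(__file__)), filename)

# usage: conn = sqlite3.connect(db_path())
```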
|
python
| 0 |
1,901,859 | 40,853,889 |
image pre-processing for image classification and semantic segmentation
|
<p>In terms of training deep learning models for different types of image-related works, such as image classification, semantic segmentation, what kind of pre-processing works need to be performed?</p>
<p>For instance, if I want to train a network for semantic segmentation, do I need to scale the image value (normally represented as an nd-array) to <code>[0,1]</code> range, or keep it as <code>[0,255]</code> range? Thanks.</p>
|
<p>A few things are commonly done, but there really is no fixed set of pre-processing steps that is always applied.</p>
<p>Here are some examples:</p>
<ul>
<li>Subtract the mean image</li>
<li>Divide by the variance (less common)</li>
<li>Normalize the values</li>
<li>If working with "real" images (like an image of people), horizontal flips</li>
<li>Random crops</li>
<li>Translations</li>
</ul>
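<p>As a minimal sketch of the scaling and mean-subtraction steps (assuming 8-bit images; whether you use [0, 1] or [0, 255] matters less than being consistent between training and inference):</p>

```python
import numpy as np

def preprocess(img, mean_image=None):
    x = img.astype(np.float32) / 255.0   # scale from [0, 255] to [0, 1]
    if mean_image is not None:
        x -= mean_image                  # subtract the dataset mean image
    return x

img = np.full((4, 4, 3), 255, dtype=np.uint8)
print(preprocess(img).max())  # 1.0
```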
|
computer-vision|tensorflow|deep-learning|caffe|keras
| 0 |
1,901,860 | 29,034,738 |
AttributeError: 'module' object has no attribute 'plotting'
|
<p>I am trying to generate a plot with bokeh in the following way:</p>
<pre><code>import bokeh
p = bokeh.plotting.figure()
# ...
</code></pre>
<p>However, this results in an <strong>error</strong>:</p>
<pre><code>AttributeError: 'module' object has no attribute 'plotting'
</code></pre>
<p>How can I fix this?</p>
|
<p>Importing the top-level <code>bokeh</code> package does not automatically import its submodules, so you need to import <code>bokeh.plotting</code> directly.
The following way works:</p>
<pre><code>import bokeh.plotting as bk
p = bk.figure()
</code></pre>
<p>Alternatively, you could import all plotting functions into the namespace directly and use it like this:</p>
<pre><code>from bokeh.plotting import *
p = figure()
</code></pre>
|
python|bokeh
| 4 |
1,901,861 | 29,337,874 |
How to split a data frame into multiple data frames based on columns
|
<p>I have the following data frames:</p>
<pre><code>import pandas as pd
df = pd.DataFrame( {
"Name" : ["gene1","gene2","gene3","gene4"] ,
"T1" : [0.33,1,3,4],
"T2" : [1.23,2.1,3.5,5.0], } )
</code></pre>
<p>Which looks like this:</p>
<pre><code>In [30]: df
Out[30]:
Name T1 T2
0 gene1 0.33 1.23
1 gene2 1.00 2.10
2 gene3 3.00 3.50
3 gene4 4.00 5.00
</code></pre>
<p>What I want to do is group based on <code>T1</code> and <code>T2</code> (basically all columns from 2nd columns onward). Note that the column names follow no pattern and can be more than two.</p>
<p>The end result I hope to get is this:</p>
<pre><code>T1
Name T1
0 gene1 0.33
1 gene2 1.00
2 gene3 3.00
3 gene4 4.00
T2
Name T2
0 gene1 1.23
1 gene2 2.10
2 gene3 3.50
3 gene4 5.00
</code></pre>
<p>How can I achieve that?</p>
<p>I tried this but doesn't give what I want:</p>
<pre><code>tlist = list(df.columns[1:])
for dft in df.groupby(tlist,axis=1):
print df
</code></pre>
|
<p>You can get there using <code>pd.melt()</code>:</p>
<pre><code>melted = pd.melt(df, id_vars='Name', var_name='t_col')
for t_col, sub_df in melted.groupby('t_col'):
print(sub_df)
Name t_col value
0 gene1 T1 0.33
1 gene2 T1 1.00
2 gene3 T1 3.00
3 gene4 T1 4.00
Name t_col value
4 gene1 T2 1.23
5 gene2 T2 2.10
6 gene3 T2 3.50
7 gene4 T2 5.00
</code></pre>
|
python|pandas
| 2 |
1,901,862 | 52,049,202 |
How to use Docker AND Conda in PyCharm
|
<p>I want to run python in PyCharm by using a Docker image, but also with a Conda environment that is set up <em>in</em> the Docker image. I've been able to set up Docker and (locally) set up Conda in PyCharm independently, but I'm stumped as to how to make all three work together. </p>
<p>The problem comes when I try to create a new project interpreter for the Conda environment inside the Docker image. When I try to enter the python interpreter path, it throws an error saying that the directory/path doesn't exist.</p>
<p>In short, the question is the same as the title: how can I set up PyCharm to run on a Conda environment <em>inside</em> a Docker image?</p>
|
<p>I'm not sure if this is the most eloquent solution, but I do have a solution to this now!</p>
<ol>
<li>Start up a container from the your base image and attach to it</li>
<li>Install the Conda env yaml file inside the docker container</li>
<li>From outside the Docker container stream (i.e. a new terminal window), commit the existing container (and its changes) to a new image: <code>docker commit SOURCE_CONTAINER NEW_IMAGE</code>
<ul>
<li>Note: see <code>docker commit --help</code> for more options here</li>
</ul></li>
<li>Run the new image and start a container for it</li>
<li>From PyCharm, in preferences, go to Project > Project Interpreter</li>
<li>Add a new Docker project interpreter, choosing your new image as the image name, and set the path to wherever you installed your Conda environment on the Docker image (ex: <code>/usr/local/conda3/envs/my_env/bin/python</code>)</li>
</ol>
<p>And just like that, you're good to go!</p>
|
python|docker|pycharm|conda
| 3 |
1,901,863 | 51,736,114 |
TypeError: string indices must be integers when I try and get values from JSON
|
<p>Im trying to get specific values from this data,</p>
<pre><code>data = {'Memory': [{'SensorType': 'Load', 'Value': 51.9246254}], 'CPU Core #2': [{'SensorType': 'Temperature', 'Value': 63}, {'SensorType': 'Load', 'Value': 66.40625}, {'SensorType': 'Clock', 'Value': 2700.006}]}
</code></pre>
<p>Im trying to get the <code>SensorType</code> from <code>Memory</code> by using,</p>
<pre><code>print(data["Memory"]["SensorType"])
</code></pre>
<p>However I get this error,</p>
<pre><code>TypeError: string indices must be integers
</code></pre>
<p>Any Ideas about why this is happening?</p>
|
<p>The value of key <code>Memory</code> of <code>data</code> is a <code>list</code> (of one element: a <code>dict</code>), not a <code>dict</code>. So, you need to get the only element of the <code>list</code> as well:</p>
<pre><code>data["Memory"][0]["SensorType"]
</code></pre>
<hr>
<p>Just to note, for your example, you should get the error:</p>
<pre><code>TypeError: list indices must be integers or slices, not str
</code></pre>
<p>not the one you've posted. I presume the error message is just wrongly put.</p>
|
python|json|python-3.x
| 1 |
1,901,864 | 51,726,907 |
Compare multiple text files, and save commons values
|
<p>My actual code : </p>
<pre><code>import os, os.path
DIR_DAT = "dat"
DIR_OUTPUT = "output"
filenames = []
#in case if output folder doesn't exist
if not os.path.exists(DIR_OUTPUT):
os.makedirs(DIR_OUTPUT)
#isolating empty values from differents contracts
for roots, dir, files in os.walk(DIR_DAT):
for filename in files:
filenames.append("output/" + os.path.splitext(filename)[0] + ".txt")
filename_input = DIR_DAT + "/" + filename
filename_output = DIR_OUTPUT + "/" + os.path.splitext(filename)[0] + ".txt"
with open(filename_input) as infile, open(filename_output, "w") as outfile:
for line in infile:
if not line.strip().split("=")[-1]:
outfile.write(line)
#creating a single file from all contracts, nb the values are those that are actually empty
with open(DIR_OUTPUT + "/all_agreements.txt", "w") as outfile:
for fname in filenames:
with open(fname) as infile:
for line in infile:
outfile.write(line)
#finale file with commons empty data
#creating a single file
with open(DIR_OUTPUT + "/all_agreements.txt") as infile, open(DIR_OUTPUT + "/results.txt", "w") as outfile:
seen = set()
for line in infile:
line_lower = line.lower()
if line_lower in seen:
outfile.write(line)
else:
seen.add(line_lower)
print("Psst go check in the ouptut folder ;)")
</code></pre>
<p>The last lines of my code check whether or not an element exists multiple times. The element may exist once, twice, three, or four times; it will be added to results.txt.</p>
<p>But the thing is that I want to save it into results.txt only if it exists 4 times in results.txt.</p>
<p>Or best scenario, compare the 4 .txt files and save elements in commons into results.txt.</p>
<p>But I can't solve it..</p>
<p>Thanks for the help :)</p>
<hr>
<p>To make it easier, </p>
<pre><code>with open(DIR_OUTPUT + "/all_agreements.txt") as infile, open(DIR_OUTPUT + "/results.txt", "w") as outfile:
seen = set()
for line in infile:
if line in seen:
outfile.write(line)
else:
seen.add(line)
</code></pre>
<p>Where can I use the .count() function ?
Because I want to do something like xxx.count(line) == 4 then save it into resulsts.txt</p>
|
<p>If your files are not super big you can use <code>set.intersection(a,b,c,d)</code>.</p>
<pre><code>data = []
for fname in filenames:
current = set()
with open(fname) as infile:
for line in infile:
current.add(line)
data.append(current)
results = set.intersection(*data)
</code></pre>
<p>You also don't need to create one single big file for this issue.</p>
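<p>If you want something closer to counting (the <code>.count()</code> idea from the question), a <code>collections.Counter</code> sketch also works: de-duplicate each file with <code>set()</code>, then keep lines whose count equals the number of files.</p>

```python
from collections import Counter

def common_lines(files):
    # files: a list of line-iterables (e.g. open file objects)
    counts = Counter()
    for lines in files:
        counts.update(set(lines))   # count each line at most once per file
    return {line for line, n in counts.items() if n == len(files)}

print(common_lines([["a", "b"], ["a", "c"], ["a", "b"]]))  # {'a'}
```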
|
python
| -1 |
1,901,865 | 59,533,654 |
Appending (not replacing) items in a nested dictionary via a list of keys
|
<p>This is a follow up to following question on SO:</p>
<p><a href="https://stackoverflow.com/questions/14692690/access-nested-dictionary-items-via-a-list-of-keys">Access nested dictionary items via a list of keys?</a></p>
<p>All the solutions given in the above link <strong>allows to replace/create a value</strong> of the nested dict via a list of keys. <strong>But my requirement is to append (update) another dict to value.</strong></p>
<p>If my dict is as follows:</p>
<pre><code>dataDict = {
"a":{
"r": 1,
"s": 2,
"t": 3
},
"b":{
"u": 1,
"v": {
"x": 1,
"y": 2,
"z": 3
},
"w": 3
}
}
</code></pre>
<p>And my list of keys is as follows:</p>
<pre><code>maplist = ["b", "v"]
</code></pre>
<p>How can I append another dict, say <code>"somevalue": {}</code> to the <code>dataDict['b']['v']</code> using my maplist?</p>
<p>So, after appending new dataDict would look as follows:</p>
<pre><code>dataDict = {
"a":{
"r": 1,
"s": 2,
"t": 3
},
"b":{
"u": 1,
"v": {
"x": 1,
"y": 2,
"z": 3,
"somevalue": {}
},
"w": 3
}
}
</code></pre>
<p>The solutions on the aforementioned SO link changes/creates the value like this:</p>
<pre><code>dataDict[mapList[-1]] = value
</code></pre>
<p>But we can't call the dict's update function like <code>dataDict[mapList[-1]].update()</code>. </p>
<p>Checked all of SO and Google, but no luck. Could someone please provide some help with this. Thanks!</p>
|
<p>This should work for you:</p>
<pre><code>def set_item(this_dict, maplist, key, value):
for k in maplist:
if k not in this_dict:
this_dict[k] = {}
this_dict = this_dict[k]
this_dict[key] = value
dataDict = {
"a":{
"r": 1,
"s": 2,
"t": 3
},
"b":{
"u": 1,
"v": {
"x": 1,
"y": 2,
"z": 3
},
"w": 3
}
}
maplist = ["b", "v"]
new_key = "somevalue"
new_value = {}
set_item(dataDict, maplist, new_key, new_value)
print(dataDict)
</code></pre>
<p>Output:</p>
<pre><code>{'a': {'r': 1, 's': 2, 't': 3}, 'b': {'u': 1, 'v': {'x': 1, 'y': 2, 'z': 3, 'somevalue': {}}, 'w': 3}}
</code></pre>
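<p>For completeness, the navigation step can also be written with <code>functools.reduce</code>, as in the linked question; this sketch assumes every key in <code>maplist</code> already exists:</p>

```python
from functools import reduce
import operator

def get_by_path(d, maplist):
    # walk the nested dict one key at a time and return the inner dict
    return reduce(operator.getitem, maplist, d)

dataDict = {"b": {"v": {"x": 1}}}   # trimmed example
get_by_path(dataDict, ["b", "v"])["somevalue"] = {}
print(dataDict)  # {'b': {'v': {'x': 1, 'somevalue': {}}}}
```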
|
python|dictionary
| 3 |
1,901,866 | 19,083,776 |
How to print the variable name
|
<p>I've designed a function that calculates the percentage of 'GC' appearing in a string: GC_content(DNA).</p>
<p>Now I can use this code so that it prints the value of the string with the highest GC-content; </p>
<pre><code>print (max((GC_content(DNA1)),(GC_content(DNA2)),(GC_content(DNA3)))).
</code></pre>
<p>Now how would I get to print the variable name of this max GC_content? </p>
|
<p>You can get the <code>max</code> of some tuples:</p>
<pre><code>max_content, max_name = max(
(GC_content(DNA1), "DNA1"),
(GC_content(DNA2), "DNA2"),
(GC_content(DNA3), "DNA3")
)
print(max_name)
</code></pre>
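<p>If you have more sequences, a dict plus <code>max()</code> with a <code>key</code> function scales better; the <code>GC_content</code> shown here is just an assumed stand-in for your own function:</p>

```python
def GC_content(dna):
    # assumed implementation: percentage of G/C bases in the sequence
    dna = dna.upper()
    return 100.0 * (dna.count('G') + dna.count('C')) / len(dna)

seqs = {'DNA1': 'ATGCAT', 'DNA2': 'GGCCGC', 'DNA3': 'ATATAT'}
best = max(seqs, key=lambda name: GC_content(seqs[name]))
print(best)  # DNA2
```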
|
python|variables|printing
| 5 |
1,901,867 | 63,470,247 |
Sharing single base object across multiple instances
|
<p>In my API design, I have something like this:</p>
<pre class="lang-py prettyprint-override"><code>class APIConnection:
# sets up the session and only contains connection-related methods
def __init__(self):
self.session = requests.Session()
def api_call(self):
# do session-related stuff
class User(APIConnection):
def __init__(self, username, password):
super().__init__()
# do login stuff, get access token
# update inherited session with authorization headers
self.session.headers.update({"Access-Token": access_token})
self.profile = Profile(profile_data) # set up profile object
class Profile:
def __init__(self, profile_data):
pass
# this is where I would like to get access to the session that User inherited from APIConnection
# so that I might call Profile-related functions like this through composition
def edit_profile(self):
self.api_call()
def remove_avatar(self):
self.api_call()
# My endgoal is so the user can write stuff like:
user = User("username", "password")
user.profile.edit_profile()
user.profile.remove_avatar()
# which would only be possible if Profile could share the APIConnection object that User created
</code></pre>
<p>I am new to OO programming and cannot think of a clean way to do this.</p>
<p>I would like for the <code>Profile</code> instance that <code>User</code> created to also get access to the inherited <code>APIConnection</code> without having to re-create it or do anything weird.</p>
<p>Thank you.</p>
|
<p>Yes, in static languages, you could make your <code>Profile</code> take a reference to an <code>APIConnection</code> and the compiler would enforce the interface.</p>
<p>With Python, you can write a unit test that passes an actual <code>APIConnection</code> into <code>Profile</code>; any accidental calls to <code>User</code>-only methods would then be caught.</p>
<p>As it is you can do this:</p>
<pre class="lang-py prettyprint-override"><code>class User(APIConnection):
def __init__(self, username, password):
super().__init__()
# do login stuff, get access token
# update inherited session with authorization headers
self.session.headers.update({"Access-Token": access_token})
self.profile = Profile(self, profile_data) # set up profile object
class Profile:
def __init__(self, api, profile_data):
self.api = api
def edit_profile(self):
self.api.api_call()
def remove_avatar(self):
self.api.api_call()
</code></pre>
|
python|python-3.x|oop|inheritance
| 0 |
1,901,868 | 36,623,719 |
using geopy to find the country name from coordinates in a pandas dataframe
|
<p>I am trying to determine the country name for each row in a <code>pandas</code> dataframe using <code>geopy</code>. What I have is:</p>
<pre><code>import pandas as pd
from geopy.geocoders import GoogleV3
df = pd.DataFrame({'ser_no': [1, 1, 1, 2, 2, 2],
'lat': [53.57, 35.52, 35.53, 54.66, 54.67, 55.8],
'lon': [-117.20, -98.29, -98.32, -119.48, -119.47, -119.46]})
def get_country(locations):
locations = geolocator.reverse(row['lat'], row['lon'], timeout = 10)
for location in locations:
for component in location.raw['address_components']:
if 'country' in component['types']:
return component['long_name']
my_key = my_api_key
geolocator = GoogleV3(my_key, proxies ={"http": 'my proxy',
"https": 'my proxy'})
df['country'] = df.apply(lambda row: get_country(row), axis = 1)
</code></pre>
<p>This returns </p>
<pre><code> lat lon ser_no country
0 53.57 -117.20 1 <function get_country at 0x000000000F6F9C88>
1 35.52 -98.29 1 <function get_country at 0x000000000F6F9C88>
2 35.53 -98.32 1 <function get_country at 0x000000000F6F9C88>
3 54.66 -119.48 2 <function get_country at 0x000000000F6F9C88>
4 54.67 -119.47 2 <function get_country at 0x000000000F6F9C88>
5 55.80 -119.46 2 <function get_country at 0x000000000F6F9C88>
</code></pre>
<p>No errors have occurred but my output isn't useful. I am not sure if it just returning incorrectly or if I have something wrong in my <code>apply</code>.</p>
|
<p><a href="https://geopy.readthedocs.org/en/1.10.0/" rel="nofollow"><code>geolocator.reverse</code></a> takes a string so you need to change your function to this:</p>
<pre><code>def get_country(row):
pos = str(row['lat']) + ', ' + str(row['lon'])
locations = geolocator.reverse(pos, timeout = 10)
#... rest of func the same
</code></pre>
|
python|pandas|dataframe|reverse-geocoding|geopy
| 1 |
1,901,869 | 36,555,214 |
Set vs. frozenset performance
|
<p>I was tinkering around with Python's <code>set</code> and <code>frozenset</code> collection types.</p>
<p>Initially, I assumed that <code>frozenset</code> would provide a better lookup performance than <code>set</code>, as it's immutable and thus could exploit the structure of the stored items.</p>
<p>However, this does not seem to be the case, regarding the following experiment:</p>
<pre><code>import random
import time
import sys
def main(n):
numbers = []
for _ in xrange(n):
numbers.append(random.randint(0, sys.maxint))
set_ = set(numbers)
frozenset_ = frozenset(set_)
start = time.time()
for number in numbers:
number in set_
set_duration = time.time() - start
start = time.time()
for number in numbers:
number in frozenset_
frozenset_duration = time.time() - start
print "set : %.3f" % set_duration
print "frozenset: %.3f" % frozenset_duration
if __name__ == "__main__":
n = int(sys.argv[1])
main(n)
</code></pre>
<p>I executed this code using both CPython and PyPy, which gave the following results:</p>
<pre><code>> pypy set.py 100000000
set : 6.156
frozenset: 6.166
> python set.py 100000000
set : 16.824
frozenset: 17.248
</code></pre>
<p>It seems that <code>frozenset</code> is actually slower regarding the lookup performance, both in CPython and in PyPy. Does anybody have an idea why this is the case? I did not look into the implementations.</p>
|
<p>The <code>frozenset</code> and <code>set</code> implementations are largely shared; a <code>set</code> is simply a <code>frozenset</code> with mutating methods added, with the exact same hashtable implementation. See the <a href="http://hg.python.org/cpython/file/2.7/Objects/setobject.c"><code>Objects/setobject.c</code> source file</a>; the top-level <a href="https://hg.python.org/cpython/file/2.7/Objects/setobject.c#l2220"><code>PyFrozenSet_Type</code> definition</a> shares functions with the <a href="https://hg.python.org/cpython/file/2.7/Objects/setobject.c#l2121"><code>PySet_Type</code> definition</a>.</p>
<p>There is no optimisation for a frozenset here, as there is no need to calculate the hashes for the items <em>in</em> the <code>frozenset</code> when you are testing for membership. The item that you use to test <em>against</em> the set still needs to have their hash calculated, in order to find the right slot in the set hashtable so you can do an equality test.</p>
<p>As such, your timing results are probably off due to other processes running on your system; you measured wall-clock time, and did not disable Python garbage collection nor did you repeatedly test the same thing.</p>
<p>Try to run your test using the <a href="https://docs.python.org/2/library/timeit.html"><code>timeit</code> module</a>, with one value from <code>numbers</code> and one not in the set:</p>
<pre><code>import random
import sys
import timeit
numbers = [random.randrange(sys.maxsize) for _ in range(10000)]
set_ = set(numbers)
fset = frozenset(numbers)
present = random.choice(numbers)
notpresent = -1
test = 'present in s; notpresent in s'
settime = timeit.timeit(
test,
'from __main__ import set_ as s, present, notpresent')
fsettime = timeit.timeit(
test,
'from __main__ import fset as s, present, notpresent')
print('set : {:.3f} seconds'.format(settime))
print('frozenset: {:.3f} seconds'.format(fsettime))
</code></pre>
<p>This repeats each test 1 million times and produces:</p>
<pre><code>set : 0.050 seconds
frozenset: 0.050 seconds
</code></pre>
|
python|performance|set|frozenset
| 100 |
1,901,870 | 22,395,540 |
Python: Remove elements from the list which are prefix of other
|
<p>Fastest (& python) way to get list of elements which do not contain any other elements as their prefix.</p>
<p>(Elements can be in any order, for the sake of clarity in explanation elements are kept kind of sequential here, so if needed sorting has to be done explicitly)</p>
<p>Input is </p>
<pre><code>['AB', 'ABC', 'ABCDEF', 'ABCDEFG', 'BCD', 'DEF', 'DEFGHI', 'EF', 'GKL', 'JKLM']
</code></pre>
<p>Elements eliminated:</p>
<pre><code>'AB' prefix of 'ABC'
'ABC' prefix of 'ABCDEF'
'ABCDEF' prefix OF 'ABCDEFG'
'DEF' prefix of 'DEFGHI'
</code></pre>
<p>Expected Output</p>
<pre><code>['ABCDEFG', 'BCD', 'DEFGHI', 'EF', 'GKL', 'JKLM']
</code></pre>
<p><strong>Edited</strong>:</p>
<p>Adding a bit more complexity(or clarity). The average length of the list varies from 500 - 900.</p>
|
<p>If your list is sorted, every element is either a prefix of the next one, or not a prefix of any of them. Therefore, you can write:</p>
<pre><code>ls.sort()
[ls[i] for i in range(len(ls) - 1) if ls[i] != ls[i+1][:len(ls[i])]] + [ls[-1]]
</code></pre>
<p>This will be <code>n log(n)</code> sorting plus one pass through the list (<code>n</code>). </p>
<p>For your current sorted list, it is marginally quicker as well, because it is linear, timeit gives 2.11 us.</p>
<p>A slightly quicker implementation (but not asymptotically), and more pythonic as well, using <code>zip</code>:</p>
<pre><code>[x for x, y in zip(ls[:-1], ls[1:]) if x != y[:len(x)]] + [ls[-1]]
</code></pre>
<p>timeit gives 1.77 us</p>
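<p>The slicing comparison can also be written with <code>str.startswith</code>, which reads more clearly and behaves identically here:</p>

```python
def drop_prefixes(ls):
    # sort so that any prefix immediately precedes the strings it prefixes
    ls = sorted(ls)
    kept = [x for x, y in zip(ls, ls[1:]) if not y.startswith(x)]
    kept.append(ls[-1])   # the last element is never a prefix of a later one
    return kept

data = ['AB', 'ABC', 'ABCDEF', 'ABCDEFG', 'BCD', 'DEF',
        'DEFGHI', 'EF', 'GKL', 'JKLM']
print(drop_prefixes(data))
# ['ABCDEFG', 'BCD', 'DEFGHI', 'EF', 'GKL', 'JKLM']
```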
|
python
| 3 |
1,901,871 | 43,775,448 |
Index out of bounds (Python)
|
<p>I have some data that I would like to aggregate, but I'm getting the index out of bounds error, and I can't seem to figure out why. Here's my code:</p>
<pre><code>if period == "hour":
n=3
tvec_a=np.zeros([24,6])
tvec_a[:,3]=np.arange(0,24)
data_a=np.zeros([24,4])
elif period == "day":
n=2
tvec_a=np.zeros([31,6])
tvec_a[:,2]=np.arange(1,32)
data_a=np.zeros([31,4])
elif period == "month":
n=1
tvec_a=np.zeros([12,6])
tvec_a[:,1]=np.arange(1,13)
data_a=np.zeros([12,4])
elif period == "hour of the day":
tvec_a=np.zeros([24,6])
tvec_a[:,3]=np.arange(0,24)
data_a=np.zeros([24,4])
i=0
if period == "hour" or period == "day" or period == "month":
while i <= np.size(tvec[:,0]):
data_a[tvec[i,n],:]=data_a[tvec[i,n],:]+data[i,:]
i=i+1
if i > np.size(tvec[:,0]):
break
</code></pre>
<p>I only get the error if I make period day or month. Hour works just fine.
(The code is part of a function that takes in a tvec, data and period) </p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-23-7fb910c0f29b>", line 1, in <module>
aggregate_measurements(tvec,data,"month")
File "C:/Users/Julie/Documents/DTU - design og innovation/4. semester/Introduktion til programmering og databehandling (Python)/Projekt 2 electricity/agg_meas.py", line 33, in aggregate_measurements
data_a[tvec[i,n],:]=data_a[tvec[i,n],:]+data[i,:]
IndexError: index 12 is out of bounds for axis 0 with size 12
</code></pre>
<p>EDIT: Fixed it, by writing minus 1 on the value from the tvec:</p>
<pre><code>data_a[tvec[i,n]-1,:]=data_a[tvec[i,n]-1,:]+data[i,:]
</code></pre>
|
<p>Since lists are 0-indexed, you can only go up to index 11 on a 12-element array.</p>
<p>Therefore <code>while i <= np.size(tvec[:,0])</code> should probably be <code>while i < np.size(tvec[:,0])</code>.</p>
<p>Extra Note: The <code>break</code> is unnecessary because the while loop will stop once the condition is met anyway.</p>
|
python|indexing|bounds|out
| 1 |
1,901,872 | 71,132,961 |
How to replace a row in pandas with multiple rows after applying a function?
|
<p>I have a pandas dataframe that contains only one column which contains a string. I want to apply a function to each row that will split the string by sentence and replace that row with rows generated from the function.</p>
<p>Example dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(["A sentence. Another sentence. More sentences here.", "Another line of text"])
</code></pre>
<p>Output of <code>df.head()</code>:</p>
<pre><code> 0
0 A sentence. Another sentence. More sentences h...
1 Another line of text
</code></pre>
<p>I have tried using <code>apply()</code> method as follows:</p>
<pre class="lang-py prettyprint-override"><code>def get_sentence(row):
return pd.DataFrame(re.split('\.', row[0]))
df.apply(get_sentence, axis=1)
</code></pre>
<p>But then <code>df.head()</code> gives:</p>
<pre><code>0 0
0 A sentenc...
1 0
0 Another line of text
</code></pre>
<p>I want the output as:</p>
<pre><code> 0
0 A sentence
1 Another sentence
2 More sentences here
3 Another line of text
</code></pre>
<p>What is the correct way to do this?</p>
|
<p>You can use</p>
<pre class="lang-py prettyprint-override"><code>df[0].str.split(r'\.(?!$)').explode().reset_index(drop=True).str.rstrip('.')
</code></pre>
<p>Output:</p>
<pre><code>0 A sentence
1 Another sentence
2 More sentences here
3 Another line of text
</code></pre>
<p>The <code>\.(?!$)</code> regex matches a dot not at the end of the string. The <code>.explode()</code> splits the results across rows and the <code>.reset_index(drop=True)</code> resets the indices. <code>.str.rstrip('.')</code> will remove trailing dots.</p>
<p>You can also use <code>Series.str.findall</code> version:</p>
<pre class="lang-py prettyprint-override"><code>>>> df[0].str.findall(r'[^.]+').explode().reset_index(drop=True)
0 A sentence
1 Another sentence
2 More sentences here
3 Another line of text
</code></pre>
<p>where <code>[^.]+</code> matches any one or more chars other than <code>.</code> char.</p>
|
python|pandas|dataframe|text-processing|data-processing
| 2 |
1,901,873 | 9,432,332 |
Are there any libraries (or frameworks) that aid in implementing the server side of ODBC?
|
<p>I'd like to add a feature to my behind the firewall webapp that exposes and ODBC interface so users can connect with a spreadsheet program to explore our data.</p>
<p>We don't use a RDBMS so I want to emulate the server side of the connection.</p>
<p>I've searched extensively for a library or framework that helps to implement the server side component of an ODBC connection with no luck. Everything I can find is for the other side of the equation - connecting one's client program to a database using an ODBC driver.</p>
<p>It would be great to use Python but at this point language preference is secondary, although it does have to run on *nix.</p>
|
<p>The server side of ODBC is already done, it is your RDBMS.</p>
<p>ODBC is a client side thing, most implementations are just a bridge between ODBC interface and the native client interface for you-name-your-RDBMS-here.</p>
<p>That is why you will not find anything about the server side of ODBC... :-)</p>
<p>Implementing a RDBMS (even with a subset of SQL) is no easy quest. My advice is to expose your underlying database storage, the best solution depends on what database are you using.</p>
<p>If its a read-only interface, expose a database mirror using some sort of asynchronous replication.</p>
<p>If you want it read/write, trust me, you'd better not. If your customer is savvy, expose an API; if he isn't, you don't want him fiddling with your database. :-)</p>
<p>[updated]</p>
<p>If your data is not stored in an RDBMS, IMHO there is no point in exposing it through a relational interface like ODBC. The advice to use some sort of asynchronous replication with a relational database is still valid and probably the easiest approach. </p>
<p>Otherwise you will have to reinvent the wheel implementing an SQL parser, network connection, authentication and related logic. If you think it's worth, go for it!</p>
|
python|odbc
| 0 |
1,901,874 | 9,453,986 |
easy_install lxml on Python 2.7 on Windows
|
<p>I'm using python 2.7 on Windows. How come the following error occurs when I try to install [lxml][1] using [setuptools][2]'s easy_install?</p>
<pre><code>C:\>easy_install lxml
Searching for lxml
Reading http://pypi.python.org/simple/lxml/
Reading http://codespeak.net/lxml
Best match: lxml 2.3.3
Downloading http://lxml.de/files/lxml-2.3.3.tgz
Processing lxml-2.3.3.tgz
Running lxml-2.3.3\setup.py -q bdist_egg --dist-dir c:\users\my_user\appdata\local\temp\easy_install-mtrdj2\lxml-2.3.3\egg-dist-tmp-tq8rx4
Building lxml version 2.3.3.
Building without Cython.
ERROR: 'xslt-config' is not recognized as an internal or external command,
operable program or batch file.
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
warning: no files found matching 'lxml.etree.c' under directory 'src\lxml'
warning: no files found matching 'lxml.objectify.c' under directory 'src\lxml'
warning: no files found matching 'lxml.etree.h' under directory 'src\lxml'
warning: no files found matching 'lxml.etree_api.h' under directory 'src\lxml'
warning: no files found matching 'etree_defs.h' under directory 'src\lxml'
warning: no files found matching 'pubkey.asc' under directory 'doc'
warning: no files found matching 'tagpython*.png' under directory 'doc'
warning: no files found matching 'Makefile' under directory 'doc'
error: Setup script exited with error: Unable to find vcvarsall.bat
</code></pre>
<p>Downloading the package and running <code>setup.py install</code> also doesn't help:</p>
<pre><code>D:\My Documents\Installs\Dev\Python\lxml\lxml-2.3.3>setup.py install
Building lxml version 2.3.3.
Building without Cython.
ERROR: 'xslt-config' is not recognized as an internal or external command,
operable program or batch file.
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
running install
running bdist_egg
running egg_info
writing src\lxml.egg-info\PKG-INFO
writing top-level names to src\lxml.egg-info\top_level.txt
writing dependency_links to src\lxml.egg-info\dependency_links.txt
reading manifest file 'src\lxml.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'lxml.etree.c' under directory 'src\lxml'
warning: no files found matching 'lxml.objectify.c' under directory 'src\lxml'
warning: no files found matching 'lxml.etree.h' under directory 'src\lxml'
warning: no files found matching 'lxml.etree_api.h' under directory 'src\lxml'
warning: no files found matching 'etree_defs.h' under directory 'src\lxml'
warning: no files found matching 'pubkey.asc' under directory 'doc'
warning: no files found matching 'tagpython*.png' under directory 'doc'
warning: no files found matching 'Makefile' under directory 'doc'
writing manifest file 'src\lxml.egg-info\SOURCES.txt'
installing library code to build\bdist.win32\egg
running install_lib
running build_py
creating build
creating build\lib.win32-2.7
creating build\lib.win32-2.7\lxml
copying src\lxml\builder.py -> build\lib.win32-2.7\lxml
copying src\lxml\cssselect.py -> build\lib.win32-2.7\lxml
copying src\lxml\doctestcompare.py -> build\lib.win32-2.7\lxml
copying src\lxml\ElementInclude.py -> build\lib.win32-2.7\lxml
copying src\lxml\pyclasslookup.py -> build\lib.win32-2.7\lxml
copying src\lxml\sax.py -> build\lib.win32-2.7\lxml
copying src\lxml\usedoctest.py -> build\lib.win32-2.7\lxml
copying src\lxml\_elementpath.py -> build\lib.win32-2.7\lxml
copying src\lxml\__init__.py -> build\lib.win32-2.7\lxml
creating build\lib.win32-2.7\lxml\html
copying src\lxml\html\builder.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\clean.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\defs.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\diff.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\ElementSoup.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\formfill.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\html5parser.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\soupparser.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\usedoctest.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\_dictmixin.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\_diffcommand.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\_html5builder.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\_setmixin.py -> build\lib.win32-2.7\lxml\html
copying src\lxml\html\__init__.py -> build\lib.win32-2.7\lxml\html
creating build\lib.win32-2.7\lxml\isoschematron
copying src\lxml\isoschematron\__init__.py -> build\lib.win32-2.7\lxml\isoschematron
copying src\lxml\etreepublic.pxd -> build\lib.win32-2.7\lxml
copying src\lxml\tree.pxd -> build\lib.win32-2.7\lxml
copying src\lxml\etree_defs.h -> build\lib.win32-2.7\lxml
creating build\lib.win32-2.7\lxml\isoschematron\resources
creating build\lib.win32-2.7\lxml\isoschematron\resources\rng
copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win32-2.7\lxml\isoschematron\resources\rng
creating build\lib.win32-2.7\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win32-2.7\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win32-2.7\lxml\isoschematron\resources\xsl
creating build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
error: Unable to find vcvarsall.bat
[1]: http://lxml.de/
[2]: http://pypi.python.org/pypi/setuptools
</code></pre>
|
<p><strong>lxml >= 3.x.x</strong> </p>
<ol>
<li>download one of the MS Windows Installer packages</li>
<li><code>easy_install "c:/lxml_installer.exe"</code> <a href="https://stackoverflow.com/questions/9453986/easy-install-lxml-on-python-2-7-on-windows#comment19604831_9819303">(credit kobejohn)</a></li>
</ol>
<p>MS Windows Installer <a href="https://pypi.python.org/pypi/lxml/3.3.5#downloads" rel="noreferrer">downloads available for lxml 3.3.5</a></p>
<p>a <a href="https://pypi.python.org/packages/2.7/l/lxml/" rel="noreferrer">list of all binary/egg lxml package downloads</a>.</p>
<p><br>
<strong>lxml 2.3.x</strong><br>
<br>
There is no Windows binary egg for lxml 2.3.3 (2.3.0 is the latest of the 2.x.x line).<br>
Without a version number, easy_install will download the latest sources,<br>
but you don't have libxml2 and libxslt installed.</p>
<p>You could install the missing libs, or you could try lxml 2.3, for which there are binary eggs for Windows:<br>
<code>easy_install lxml==2.3</code></p>
|
python|lxml|python-2.7|setuptools|easy-install
| 46 |
1,901,875 | 52,770,935 |
Replacing Strings using Loops in Python
|
<p>I'm still new to Python and I've had a hard time figuring out how to loop this.</p>
<pre><code>mynewvar2=varlist3.replace('R0','_0').replace('R1','_1').replace('R2','_2').replace('R3','_3').replace('R4','_4').replace('R5','_5').replace('R6','_6').replace('R7','_7').replace('R8','_8').replace('R9','_9')
</code></pre>
<p>The problem here is that I would have to chain many more <code>.replace()</code> calls if I'm given more than that.</p>
<p>Thank you so much for your help guys!</p>
|
<pre><code>In [1]: s = 'XR0R1R2R3'
In [2]: before = ['R0','R1','R2','R3']
In [3]: after = ['_0','_1','_2','_3']
In [4]: for a, b in zip(before, after):
...: s = s.replace(a, b)
...:
In [5]: s
Out[5]: 'X_0_1_2_3'
</code></pre>
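<p>Since every replacement here follows the same pattern (<code>R</code> plus a digit becomes <code>_</code> plus that digit), a single regular expression can also do all ten substitutions in one call; a sketch:</p>

```python
import re

s = 'XR0R1R2R3'
# one substitution handles R0..R9: capture the digit and keep it
result = re.sub(r'R(\d)', r'_\1', s)
print(result)  # X_0_1_2_3
```

<p>The zip-based loop is more general (it works for arbitrary before/after pairs), while the regex only fits this specific pattern.</p>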
|
python|string|loops|replace
| 0 |
1,901,876 | 47,558,827 |
Flask - reason for view function mapping is overwriting error
|
<p>Why do I get this error when I tried to use rendering:</p>
<pre><code>Traceback (most recent call last):
File "d:\Projects\jara.md\backend\flask\__init__.py", line 31, in <module>
@app.route('/update/<int:adv_id>', methods=['PUT'])
File "c:\Python27\lib\site-packages\flask\app.py", line 1080, in decorator
self.add_url_rule(rule, endpoint, f, **options)
File "c:\Python27\lib\site-packages\flask\app.py", line 64, in wrapper_func
return f(self, *args, **kwargs)
File "c:\Python27\lib\site-packages\flask\app.py", line 1051, in add_url_rule
'existing endpoint function: %s' % endpoint)
AssertionError: View function mapping is overwriting an existing endpoint function: start
</code></pre>
<p>Code list is:</p>
<pre><code>app = Flask(__name__)
@app.route('/add', methods=['POST'])
def add():
return 'Add'
@app.route('/start/<int:adv_id>', methods=['PUT'])
def start(adv_id):
return 'start'
### Rendering ###
@app.route('/add', methods=['GET'])
def add():
return render_template('add.html')
if __name__ == "__main__":
app.run()
</code></pre>
<p>As you can see I have two methods <code>add()</code> for GET and POST requests.</p>
<p>What does this message mean?</p>
<pre><code> self.add_url_rule(rule, endpoint, f, **options)
</code></pre>
<hr>
<pre><code>@app.route('/update/<int:adv_id>', methods=['PUT'])
def start(adv_id):
return 'update'
</code></pre>
|
<p>This is the issue:</p>
<pre><code>@app.route('/update/<int:adv_id>', methods=['PUT'])
def start(adv_id):
return 'update'
</code></pre>
<p>Your view names should be unique. You can't have two Flask view methods with the same name. Rename the duplicated <code>start</code> and <code>add</code> methods to something unique.</p>
<p>[Edit]</p>
<p>As @Oleg asked/commented that this unique name is a disadvantage. The reason for this is clear if you read the source code of Flask. From the <a href="https://github.com/pallets/flask/blob/master/flask/app.py#L1058" rel="nofollow noreferrer">source code</a></p>
<pre><code>"""
Basically this example::
@app.route('/')
def index():
pass Is equivalent to the following::
def index():
pass
app.add_url_rule('/', 'index', index)
If the view_func is not provided you will need to connect the endpoint to a view function like so::
app.view_functions['index'] = index
"""
</code></pre>
<p>So Flask maps each URL rule to the name of its view function. In <code>@app.route</code> you are not passing an endpoint name, so Flask takes the method name and creates the rule from it. Since this map is a dictionary, its keys need to be unique.</p>
<p>So you can have view functions with the same name, as long as you pass a different endpoint name for each view.</p>
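<p>A sketch of both fixes, assuming the goal is to serve GET and POST on the same <code>/add</code> URL: either give the functions distinct names, or pass an explicit <code>endpoint</code> name to <code>@app.route</code>:</p>

```python
from flask import Flask

app = Flask(__name__)

@app.route('/add', methods=['POST'])
def add_post():          # distinct function name -> distinct endpoint
    return 'Add'

@app.route('/add', methods=['GET'], endpoint='add_form')
def add_form():          # an explicit endpoint name also avoids the clash
    return 'form'

client = app.test_client()
print(client.get('/add').data.decode())   # form
print(client.post('/add').data.decode())  # Add
```

<p>The function names <code>add_post</code>/<code>add_form</code> are illustrative; any unique names (or unique <code>endpoint</code> arguments) work.</p>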
|
python|python-3.x|flask
| 3 |
1,901,877 | 37,291,539 |
pip install not working windows 7
|
<p>I have downloaded pip from <code>https://sites.google.com/site/pydatalog/python/pip-for-windows</code></p>
<p>Now when I type any package name to install or upgrade at the command line, I get the following error:</p>
<pre><code>Downloading https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py
Traceback (most recent call last):
File "C:\Python33\lib\urllib\request.py", line 1248, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "C:\Python33\lib\http\client.py", line 1065, in request
self._send_request(method, url, body, headers)
File "C:\Python33\lib\http\client.py", line 1103, in _send_request
self.endheaders(body)
File "C:\Python33\lib\http\client.py", line 1061, in endheaders
self._send_output(message_body)
File "C:\Python33\lib\http\client.py", line 906, in _send_output
self.send(msg)
File "C:\Python33\lib\http\client.py", line 844, in send
self.connect()
File "C:\Python33\lib\http\client.py", line 1198, in connect
self.timeout, self.source_address)
File "C:\Python33\lib\socket.py", line 435, in create_connection
raise err
File "C:\Python33\lib\socket.py", line 426, in create_connection
sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected
party did not properly respond after a period of time, or established connectio
n failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Admin~1\AppData\Local\Temp\rmv_setup.py", line 60, in <module>
download(url, "ez_setup.py")
File "C:\Users\Admin~1\AppData\Local\Temp\rmv_setup.py", line 30, in download
src = urlopen(url)
File "C:\Python33\lib\urllib\request.py", line 156, in urlopen
return opener.open(url, data, timeout)
urllib.error.URLError: <urlopen error [WinError 10060] A connection attempt fail
ed because the connected party did not properly respond after a period of time,
or established connection failed because connected host has failed to respond>
</code></pre>
<p>I am behind a proxy, but I can access <code>bitbucket.org</code> through the browser. How can I fix this issue?</p>
|
<p>It sounds like you successfully installed pip and are now trying to install another module with it.</p>
<p>pip has a <code>--proxy</code> option. Try using it and check whether it helps:</p>
<pre><code>C:\Users\Administrator\Desktop>pip --help
Usage:
pip <command> [options]
Commands:
install Install packages.
uninstall Uninstall packages.
freeze Output installed packages in requirements format.
list List installed packages.
show Show information about installed packages.
search Search PyPI for packages.
wheel Build wheels from your requirements.
zip DEPRECATED. Zip individual packages.
unzip DEPRECATED. Unzip individual packages.
bundle DEPRECATED. Create pybundles.
help Show help for commands.
General Options:
-h, --help Show help.
-v, --verbose Give more output. Option is additive, and can be
used up to 3 times.
-V, --version Show version and exit.
-q, --quiet Give less output.
--log-file <path> Path to a verbose non-appending log, that only
logs failures. This log is active by default at
C:\Users\Administrator\pip\pip.log.
--log <path> Path to a verbose appending log. This log is
inactive by default.
--proxy <proxy> Specify a proxy in the form
[user:passwd@]proxy.server:port.
--timeout <sec> Set the socket timeout (default 15 seconds).
--exists-action <action> Default action when a path already exists:
(s)witch, (i)gnore, (w)ipe, (b)ackup.
--cert <path> Path to alternate CA bundle.
C:\Users\Administrator\Desktop>
</code></pre>
|
python
| 2 |
1,901,878 | 34,113,083 |
numpy contour: TypeError: Input z must be a 2D array
|
<p>I have data in a list of lists format.</p>
<p>It looks like this:</p>
<pre><code>[(x_1, y_1, Z_1),...(x_i, y_j, z_k),...(x_p, y_q, z_r)]
</code></pre>
<p>For every x and y there is one z. Lengths of X, Y and Z are p, q and r(=p*q) respectively.</p>
<p>I intend to plot a contour plot with X and Y as mesh and Z as the value to be plotted.</p>
<p>I have following code (just representative):</p>
<pre><code>import csv
import sys
import statistics
import numpy as np
from scipy.interpolate import UnivariateSpline
from matplotlib import pyplot as plt
...........
#format of data = [(x, y, z)......]
#x, y, z are lists
X = [X1,..........,Xp] #length, p
Y = [Y1,..........,Yq] #length, q
Z = [Z1,..........,Zpq] #length, pq
#np.mesh
X1, Y1 = np.meshgrid(X, Y)
plt.figure()
CS = plt.contour(X1, Y1, Z)
plt.clabel(CS, inline=1, fontsize=10)
</code></pre>
<p>I get following error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/uname/PycharmProjects/waferZoning/contour.py", line 49, in <module>
CS = plt.contour(X1, Y1, Z)
File "C:\Users\uname\AppData\Local\Programs\Python\Python35\lib\site-packages\matplotlib\pyplot.py", line 2766, in contour
ret = ax.contour(*args, **kwargs)
File "C:\Users\uname\AppData\Local\Programs\Python\Python35\lib\site-packages\matplotlib\__init__.py", line 1811, in inner
return func(ax, *args, **kwargs)
File "C:\Users\uname\AppData\Local\Programs\Python\Python35\lib\site-packages\matplotlib\axes\_axes.py", line 5640, in contour
return mcontour.QuadContourSet(self, *args, **kwargs)
File "C:\Users\uname\AppData\Local\Programs\Python\Python35\lib\site-packages\matplotlib\contour.py", line 1428, in __init__
ContourSet.__init__(self, ax, *args, **kwargs)
File "C:\Users\uname\AppData\Local\Programs\Python\Python35\lib\site-packages\matplotlib\contour.py", line 873, in __init__
self._process_args(*args, **kwargs)
File "C:\Users\uname\AppData\Local\Programs\Python\Python35\lib\site-packages\matplotlib\contour.py", line 1445, in _process_args
x, y, z = self._contour_args(args, kwargs)
File "C:\Users\uname\AppData\Local\Programs\Python\Python35\lib\site-packages\matplotlib\contour.py", line 1532, in _contour_args
x, y, z = self._check_xyz(args[:3], kwargs)
File "C:\Users\uname\AppData\Local\Programs\Python\Python35\lib\site-packages\matplotlib\contour.py", line 1566, in _check_xyz
raise TypeError("Input z must be a 2D array.")
TypeError: Input z must be a 2D array.
</code></pre>
<p>I understand what the error is, but I am not able to rectify it.</p>
<p>I can't give a MWE but I guess I have made my problem quite clear. </p>
<p><a href="https://www.dropbox.com/s/33jmfcjzikl4w5g/contour_synthetic.txt?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/33jmfcjzikl4w5g/contour_synthetic.txt?dl=0</a></p>
|
<p>You need to have a <code>z</code> whose length equals the product of the lengths of <code>x</code> and <code>y</code>:</p>
<pre><code>assert len(z) == (len(x) * len(y))
</code></pre>
<p>Make <code>z</code> a 2D array:</p>
<pre><code>z = np.array(z)
z = z.reshape((len(y), len(x)))  # meshgrid(x, y) arrays have shape (len(y), len(x))
</code></pre>
<p>Here a MCVE:</p>
<pre><code>x = np.arange(5)
y = np.arange(5)
z = np.arange(25).reshape(5, 5)
x1, y1 = np.meshgrid(x, y)
plt.contour(x1, y1, z)
</code></pre>
<p>Make sure your data is structured like this.</p>
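<p>When <code>p != q</code> the orientation matters: <code>np.meshgrid(x, y)</code> returns arrays of shape <code>(len(y), len(x))</code>, and <code>z</code> must be reshaped to match. A sketch with unequal lengths (assuming the flat list runs through all x values for one y before moving to the next y):</p>

```python
import numpy as np

x = np.arange(4)            # p = 4 x-values
y = np.arange(3)            # q = 3 y-values
z = np.arange(12)           # flat, length p*q, x varying fastest
X1, Y1 = np.meshgrid(x, y)  # both have shape (3, 4)
Z = z.reshape(len(y), len(x))
print(X1.shape, Y1.shape, Z.shape)  # (3, 4) (3, 4) (3, 4)
```

<p>If your flat data is instead ordered y-fastest, reshape to <code>(len(x), len(y))</code> and transpose.</p>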
|
python-3.x|numpy|scipy|contour
| 11 |
1,901,879 | 34,207,266 |
PJSIP/PJSUA - Python - Check registration status
|
<p>Following <em>registration.py</em> and <em>call.py</em> examples <a href="http://trac.pjsip.org/repos/browser/pjproject/trunk/pjsip-apps/src/python/samples?rev=2171" rel="nofollow">here</a> I have developed my SIP client that works quite well.</p>
<p>Unfortunately, if the SIP server restarts the client thinks it is still registered, the server doesn't have it registered anymore and therefore the client does not receive calls.</p>
<p>I tried checking <code>acc.info().reg_status</code> in a while loop, but it always reports "200" (OK)...</p>
<p>How would you make the client continuously check if it is actually registered and if not run the registration again?</p>
<p>Thank you, </p>
<p>dk</p>
<hr>
<p>This is the registration code:</p>
<pre><code># Register to SIP server
acc = lib.create_account(pj.AccountConfig(sip_server, sip_user, sip_password))
acc_cb = MyAccountCallback(acc)
acc.set_callback(acc_cb)
acc_cb.wait()
print "Registration complete [server: %s, user: %s]: status: %s (%s)" % (sip_server, sip_user, acc.info().reg_status, acc.info().reg_reason)
my_sip_uri = "sip:" + transport.info().host + ":" + str(transport.info().port)
print my_sip_uri
</code></pre>
|
<p>By default pjsip sends a re-registration request every 600 seconds; that is, the keep-alive timeout is 600 seconds by default. You can change it to whatever value you want. Here is a sample:</p>
<pre><code>acc_cfg.ka_interval = 30
</code></pre>
|
python|sip|voip|pjsip
| 2 |
1,901,880 | 66,144,076 |
Is there a way to get answers to questions asked to prospective new members of a Facebook group?
|
<p>From an internet search, it seems to me that there is no way to programmatically get the answers to the 3 questions asked of prospective new members of a Facebook group.</p>
<p>So far I have found that I am able to see the answers only when I view pending requests to join a group.</p>
<p>That might be good enough for capturing the answers. But how?</p>
|
<p>According to Facebook, this is currently not an option;
see this <a href="https://www.facebook.com/help/community/question/?id=2042227765852802" rel="nofollow noreferrer">FB Community Link</a>.</p>
<p>Alternatively, you can use Group Collector to collect new Facebook group member answers, including their email, and save them into a Google Sheet and your favorite email marketing software without Zapier.</p>
<p>Go to <a href="https://groupcollector.com/" rel="nofollow noreferrer">https://groupcollector.com/</a></p>
<p>You can also set up Group Collector Auto Approval, which allows you to auto-approve new group members after certain intervals that meet your specific criteria.</p>
<p>There are also other Chrome extensions to Capture the answers and store them in Excel Sheets.</p>
<p>Check</p>
<ul>
<li><a href="https://chrome.google.com/webstore/detail/extract-facebook-group-an/dfcmfhebmcphgldfbejjflnojabpemoi?hl=en-GB" rel="nofollow noreferrer">Extract Facebook Group Answers</a></li>
<li><a href="https://chrome.google.com/webstore/detail/gmac-%E2%80%94-group-answers-coll/omefdeapnicniondiefejngddkjikegn" rel="nofollow noreferrer">Group Answers Collector For FB</a></li>
</ul>
|
python|facebook|facebook-graph-api
| 1 |
1,901,881 | 7,136,513 |
learn python the hard way exercise 26 trouble
|
<p>I am doing exercise 26 in Learn Python the Hard Way and have been struggling with it. I have read it backwards and everything, but on line 77 I get:</p>
<pre><code>sentence = "All god \t things come to those who weight."
^
SyntaxError: invalid syntax
</code></pre>
<p>I don't know why the arrow is at the e.
I don't just want to know how to fix it, but what the problem is.</p>
|
<p>You're probably missing a close bracket, brace, or parenthesis on the previous line of code.</p>
<p><strong>Edit:</strong> From the code:</p>
<pre><code>print "We'd have %d beans, %d jars, and %d crabapples." % secret_formula(start_pont
</code></pre>
<p>There is no close parenthesis at the end of the line.</p>
<p>This is <strong>almost always</strong> the problem if you have a <code>SyntaxError</code> near the beginning of a line that looks right.</p>
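<p>You can see the effect in miniature with <code>compile()</code>: an unclosed parenthesis on one line makes Python report the <code>SyntaxError</code> on a later line that is itself perfectly fine.</p>

```python
# a sketch: the real mistake is on line 1, but Python complains later
src = (
    'total = (1 + 2\n'         # missing closing parenthesis here...
    'sentence = "All good"\n'  # ...so the error is reported around here
)
try:
    compile(src, '<demo>', 'exec')
    raised = False
except SyntaxError:
    raised = True
print(raised)  # True
```

<p>That is why the caret points at a spot that looks valid: the parser only gives up once the continued expression stops making sense.</p>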
|
python
| 4 |
1,901,882 | 40,365,082 |
Unable to train pySpark SVM, labeled point issue
|
<p>I am trying to turn a Spark DataFrame into labeled points.
The DataFrame is named DF and looks like:</p>
<pre><code>+-----+-----+-----+-----+-----+-----+-----+-----+-----+------+-----+
|step1|step2|step3|step4|step5|step6|step7|step8|step9|step10|class|
+-----+-----+-----+-----+-----+-----+-----+-----+-----+------+-----+
| 14| 14| 0| 14| 14| 4| 11| 10| 0| 7| 1|
| 11| 10| 14| 0| 14| 18| 18| 14| 7| 7| 1|
| 14| 14| 14| 14| 14| 14| 7| 0| 7| 0| 1|
| 14| 14| 14| 14| 7| 7| 14| 14| 0| 11| 1|
| 14| 14| 14| 14| 14| 14| 14| 7| 14| 7| 1|
| 14| 14| 14| 14| 14| 14| 14| 0| 7| 7| 1|
| 14| 14| 14| 14| 14| 14| 14| 7| 14| 7| 1|
| 17| 14| 0| 7| 0| 0| 14| 7| 0| 7| 1|
| 14| 14| 14| 7| 7| 14| 7| 14| 14| 7| 1|
| 14| 14| 14| 14| 14| 14| 14| 7| 7| 7| 1|
| 7| 14| 14| 14| 14| 0| 14| 7| 0| 14| 1|
| 14| 14| 14| 14| 14| 0| 14| 7| 7| 7| 1|
</code></pre>
<p>what I am trying to do, following the documentation, is:</p>
<pre><code>(training, test) = DF.randomSplit([0.8,0.2])
print training
def parsePoint(line):
values = [float(x) for x in line.split(' ')]
return LabeledPoint(values[0], values[:1])
trainLabeled = training.rdd.map(parsePoint)
model = SVMWithSGD.train(trainLabeled, iterations=100)
</code></pre>
<p>But I am getting the error:</p>
<pre><code>Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
</code></pre>
<p>Spark version 2.0.1</p>
|
<p>I can't be sure without seeing your data, but a usual issue with <code>SVMWithSGD</code> comes from the label.</p>
<p>You need to use <code>LabeledPoint</code> (as you did) and ensure that first parameter is either 0.0 or 1.0. The error might come from <code>x[-1]</code> being another value (not 0 nor 1).</p>
<p>Could you check it? </p>
<p>Hope it helps,
pltrdy</p>
<hr>
<p><strong>Edit (after seeing data):</strong></p>
<p>Hem. Let's go back to basics: SVM (roughly) <strong>"learns how to split data into two classes"</strong> (this is not very formal, but let's take it for now). That being said, your dataset must be: an <code>X</code> matrix of shape <code>n x D</code> (<code>n</code> number of rows, <code>D</code> number of features), and a <code>y</code> matrix of shape <code>n x 1</code> containing the labels of the data. Labels are binary, usually denoted <code>{0, 1}</code> (or <code>{-1, 1}</code>, which is more convenient for the maths).
This is quite the "math" approach. Usually you have one <code>data</code> matrix which you split into <code>X</code> and <code>y</code> by "extracting" a column as the label (all values in this column must be either 0 or 1).</p>
<p>That being said, to make a long story short: SVM will classify your data into <strong>two</strong> classes.</p>
<p>The label (= class, whose value is either 0 or 1) can be seen as the two categories used to split your data. So you must have a column containing only 0s and 1s.</p>
<p><strong><em>For example</em></strong>, if I build my movie dataset, I could set a column "do I like it?" with <code>label=1</code> if I like the film and <code>label=0</code> if I don't, then train my SVM to predict which movies I am supposed to like.</p>
<p>I don't see in your data which column is the <strong>label</strong>. If you have more than 2 classes, SVM is not for you; you will have to take a look at multiclass classification (which is out of scope here; tell me if that is what you want).</p>
<p>I do guess that your objective isn't really clear to you. As an example, one would not train a classifier on an ID column; that rarely makes sense. If I'm wrong, please explain what you expect from your data (you can also explain what the columns refer to).</p>
<p>pltrdy</p>
|
python|machine-learning|pyspark|apache-spark-mllib|training-data
| 1 |
1,901,883 | 40,596,748 |
Concatenate path and filename
|
<p>I have to build the full path together in python. I tried this:</p>
<pre><code>filename= "myfile.odt"
subprocess.call(['C:\Program Files (x86)\LibreOffice 5\program\soffice.exe',
'--headless',
'--convert-to',
'pdf', '--outdir',
r'C:\Users\A\Desktop\Repo\',
r'C:\Users\A\Desktop\Repo\'+filename])
</code></pre>
<p>But I get this error</p>
<blockquote>
<p>SyntaxError: EOL while scanning string literal.</p>
</blockquote>
|
<p>Try:</p>
<pre><code>import os
os.path.join(r'C:\Users\A\Desktop\Repo', filename)
</code></pre>
<p>The <code>os</code> module contains many useful methods for directory and path manipulation. The original error comes from the trailing backslash: a raw string literal cannot end with an odd number of backslashes, so in <code>r'C:\Users\A\Desktop\Repo\'</code> the final <code>\'</code> keeps the string literal open and Python reports <code>EOL while scanning string literal</code>. Building the path with <code>os.path.join</code> (and a raw string, so that sequences like <code>\U</code> in the prefix are not treated as escapes in Python 3) avoids the problem.</p>
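<p>A runnable sketch using <code>ntpath</code>, the Windows implementation of <code>os.path</code>, so the joining behaviour is the same no matter where the snippet runs (on Windows, <code>os.path.join</code> does exactly this):</p>

```python
import ntpath  # Windows path rules, usable on any OS for demonstration

filename = 'myfile.odt'
full = ntpath.join(r'C:\Users\A\Desktop\Repo', filename)
print(full)  # C:\Users\A\Desktop\Repo\myfile.odt
```

<p>Note that no trailing backslash appears anywhere in the source, so there is nothing for the tokenizer to trip over.</p>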
|
python|path|filepath
| 25 |
1,901,884 | 40,593,909 |
Python sftp error with paramiko
|
<p>Why do I get a connection dropped error with paramiko when I invoke the get function?</p>
<pre><code>class AllowAnythingPolicy(paramiko.MissingHostKeyPolicy):
def missing_host_key(self, client, hostname, key):
return
client = paramiko.SSHClient()
client.set_missing_host_key_policy(AllowAnythingPolicy())
client.connect('', username='',password='')
sftp.get('','')
</code></pre>
<p>I have a file of 70 MB; the function downloads 20 MB and then I get an error.
This function works fine when the file size is under 20 MB.</p>
<p>this is paramiko log file : </p>
<pre><code>DEB [20161115-10:25:47.792] thr=1 paramiko.transport: starting thread (client mode): 0x472a3d0
DEB [20161115-10:25:47.793] thr=1 paramiko.transport: Local version/idstring: SSH-2.0-paramiko_2.0.2
DEB [20161115-10:25:47.793] thr=1 paramiko.transport: Remote version/idstring: SSH-2.0-SshServer
INF [20161115-10:25:47.794] thr=1 paramiko.transport: Connected (version 2.0, client SshServer)
DEB [20161115-10:25:47.795] thr=1 paramiko.transport: kex algos:['ecdh-sha2-nistp256', 'ecdh-sha2-nistp384', 'ecdh-sha2-nistp521', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group-exchange-sha256', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa'] client encrypt:['aes256-ctr', 'aes256-cbc'] server encrypt:['aes256-ctr', 'aes256-cbc'] client mac:['hmac-sha1', 'hmac-sha2-256', 'hmac-sha2-512', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com'] server mac:['hmac-sha1', 'hmac-sha2-256', 'hmac-sha2-512', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com'] client compress:['none'] server compress:['none'] client lang:['en-US'] server lang:['en-US'] kex follows?False
DEB [20161115-10:25:47.795] thr=1 paramiko.transport: Kex agreed: diffie-hellman-group1-sha1
DEB [20161115-10:25:47.796] thr=1 paramiko.transport: Cipher agreed: aes256-ctr
DEB [20161115-10:25:47.796] thr=1 paramiko.transport: MAC agreed: hmac-sha2-256
DEB [20161115-10:25:47.796] thr=1 paramiko.transport: Compression agreed: none
DEB [20161115-10:25:48.054] thr=1 paramiko.transport: kex engine KexGroup1 specified hash_algo <built-in function openssl_sha1>
DEB [20161115-10:25:48.054] thr=1 paramiko.transport: Switch to new keys ...
DEB [20161115-10:25:48.057] thr=1 paramiko.transport: userauth is OK
INF [20161115-10:25:48.137] thr=1 paramiko.transport: Authentication (password) successful!
DEB [20161115-10:25:57.677] thr=2 paramiko.transport: [chan 0] Max packet in: 32768 bytes
DEB [20161115-10:25:57.680] thr=1 paramiko.transport: [chan 0] Max packet out: 32768 bytes
DEB [20161115-10:25:57.681] thr=1 paramiko.transport: Secsh channel 0 opened.
DEB [20161115-10:25:57.682] thr=1 paramiko.transport: [chan 0] Sesch channel 0 request ok
INF [20161115-10:25:57.684] thr=2 paramiko.transport.sftp: [chan 0] Opened sftp connection (server version 3)
DEB [20161115-10:25:57.685] thr=2 paramiko.transport.sftp: [chan 0] stat(b'/GEO/OUT')
DEB [20161115-10:25:57.688] thr=2 paramiko.transport.sftp: [chan 0] normalize(b'/GEO/OUT')
DEB [20161115-10:25:57.690] thr=2 paramiko.transport.sftp: [chan 0] listdir(b'/GEO/OUT/.')
DEB [20161115-10:26:02.008] thr=2 paramiko.transport.sftp: [chan 0] stat(b'/GEO/OUT/test.csv')
DEB [20161115-10:26:02.012] thr=2 paramiko.transport.sftp: [chan 0] open(b'/GEO/OUT/test.csv', 'rb')
DEB [20161115-10:26:02.016] thr=2 paramiko.transport.sftp: [chan 0] open(b'/GEO/OUT/test.csv', 'rb') -> b'2f47454f2f4f55542f746573742e637376'
DEB [20161115-10:28:10.626] thr=1 paramiko.transport: EOF in transport thread
DEB [20161115-10:28:10.626] thr=2 paramiko.transport.sftp: [chan 0] close(b'2f47454f2f4f55542f746573742e637376')
</code></pre>
<p>and Python error : </p>
<pre><code>EOFError Traceback (most recent call last)
C:\Anaconda3\lib\site-packages\paramiko\sftp_client.py in _read_response(self, waitfor)
759 try:
--> 760 t, data = self._read_packet()
761 except EOFError as e:
C:\Anaconda3\lib\site-packages\paramiko\sftp.py in _read_packet(self)
172 def _read_packet(self):
--> 173 x = self._read_all(4)
174 # most sftp servers won't accept packets larger than about 32k, so
C:\Anaconda3\lib\site-packages\paramiko\sftp.py in _read_all(self, n)
158 if len(x) == 0:
--> 159 raise EOFError()
160 out += x
EOFError:
During handling of the above exception, another exception occurred:
SSHException Traceback (most recent call last)
<ipython-input-49-b52d34c6bd07> in <module>()
----> 1 sftp.get('/GEO/OUT/test.csv','ESRI_OUT/te.csv')
C:\Anaconda3\lib\site-packages\paramiko\sftp_client.py in get(self, remotepath, localpath, callback)
719 """
720 with open(localpath, 'wb') as fl:
--> 721 size = self.getfo(remotepath, fl, callback)
722 s = os.stat(localpath)
723 if s.st_size != size:
C:\Anaconda3\lib\site-packages\paramiko\sftp_client.py in getfo(self, remotepath, fl, callback)
697 fr.prefetch(file_size)
698 return self._transfer_with_callback(
--> 699 reader=fr, writer=fl, file_size=file_size, callback=callback
700 )
701
C:\Anaconda3\lib\site-packages\paramiko\sftp_client.py in _transfer_with_callback(self, reader, writer, file_size, callback)
596 size = 0
597 while True:
--> 598 data = reader.read(32768)
599 writer.write(data)
600 size += len(data)
C:\Anaconda3\lib\site-packages\paramiko\file.py in read(self, size)
209 read_size = max(self._bufsize, read_size)
210 try:
--> 211 new_data = self._read(read_size)
212 except EOFError:
213 new_data = None
C:\Anaconda3\lib\site-packages\paramiko\sftp_file.py in _read(self, size)
163 size = min(size, self.MAX_REQUEST_SIZE)
164 if self._prefetching:
--> 165 data = self._read_prefetch(size)
166 if data is not None:
167 return data
C:\Anaconda3\lib\site-packages\paramiko\sftp_file.py in _read_prefetch(self, size)
143 if self._prefetch_done or self._closed:
144 break
--> 145 self.sftp._read_response()
146 self._check_exception()
147 if offset is None:
C:\Anaconda3\lib\site-packages\paramiko\sftp_client.py in _read_response(self, waitfor)
760 t, data = self._read_packet()
761 except EOFError as e:
--> 762 raise SSHException('Server connection dropped: %s' % str(e))
763 msg = Message(data)
764 num = msg.get_int()
SSHException: Server connection dropped:
</code></pre>
|
<p>The solution to my problem was to raise paramiko's default window and packet sizes on the transport (set them before opening the SFTP session, so the new defaults apply to its channel):</p>
<pre><code>tr = client.get_transport()
tr.default_max_packet_size = 100000000
tr.default_window_size = 100000000
</code></pre>
|
python|sftp|paramiko
| 2 |
1,901,885 | 68,384,427 |
Calculate rolling mean, max, min, std of time series pandas dataframe
|
<p>I'm trying to calculate a rolling mean, max, min, and std for specific columns inside a time series pandas dataframe. But I keep getting NaN for the lagged values and I'm not sure how to fix it. My MWE is:</p>
<pre><code>import numpy as np
import pandas as pd
# original data
df = pd.DataFrame()
np.random.seed(0)
days = pd.date_range(start='2015-01-01', end='2015-05-01', freq='1D')
df = pd.DataFrame({'Date': days, 'col1': np.random.randn(len(days)), 'col2': 20+np.random.randn(len(days)), 'col3': 50+np.random.randn(len(days))})
df = df.set_index('Date')
print(df.head(10))
def add_lag(dfObj, window):
cols = ['col2', 'col3']
for col in cols:
rolled = dfObj[col].rolling(window)
lag_mean = rolled.mean().reset_index()#.astype(np.float16)
lag_max = rolled.max().reset_index()#.astype(np.float16)
lag_min = rolled.min().reset_index()#.astype(np.float16)
lag_std = rolled.std().reset_index()#.astype(np.float16)
dfObj[f'{col}_mean_lag{window}'] = lag_mean[col]
dfObj[f'{col}_max_lag{window}'] = lag_max[col]
dfObj[f'{col}_min_lag{window}'] = lag_min[col]
dfObj[f'{col}_std_lag{window}'] = lag_std[col]
# add lag feature for 1 day, 3 days
add_lag(df, window=1)
add_lag(df, window=3)
print(df.head(10))
print(df.tail(10))
</code></pre>
|
<p>Just don't call <code>reset_index()</code>; then it works. <code>reset_index()</code> replaces the rolled series' <code>Date</code> index with a fresh integer index, so when you assign the result back to the date-indexed frame, pandas aligns on index labels, finds no matches, and fills the new columns with NaN.</p>
<pre><code>import numpy as np
import pandas as pd
# original data
df = pd.DataFrame()
np.random.seed(0)
days = pd.date_range(start='2015-01-01', end='2015-05-01', freq='1D')
df = pd.DataFrame({'Date': days, 'col1': np.random.randn(len(days)), 'col2': 20+np.random.randn(len(days)), 'col3': 50+np.random.randn(len(days))})
df = df.set_index('Date')
print(df.head(10))
def add_lag(dfObj, window):
cols = ['col2', 'col3']
for col in cols:
rolled = dfObj[col].rolling(window)
lag_mean = rolled.mean()#.reset_index()#.astype(np.float16)
lag_max = rolled.max()#.reset_index()#.astype(np.float16)
lag_min = rolled.min()#.reset_index()#.astype(np.float16)
lag_std = rolled.std()#.reset_index()#.astype(np.float16)
dfObj[f'{col}_mean_lag{window}'] = lag_mean#[col]
dfObj[f'{col}_max_lag{window}'] = lag_max#[col]
dfObj[f'{col}_min_lag{window}'] = lag_min#[col]
dfObj[f'{col}_std_lag{window}'] = lag_std#[col]
# add lag feature for 1 day, 3 days
add_lag(df, window=1)
add_lag(df, window=3)
print(df.head(10))
print(df.tail(10))
</code></pre>
|
python|pandas|dataframe|datetime
| 1 |
1,901,886 | 60,252,743 |
os.walk() is not throwing any error nor the result in python
|
<p>I am trying to print the file names which are in predefined paths, where the paths are stored in paths.txt. But when I execute the code below, I get neither an error nor the file names printed.</p>
<pre><code>import os
with open('D:\paths.txt', 'r') as file:
data = file.read()
path = data.split(";")
print(path)
for line in path:
for root, dirs, files in os.walk(line):
for name in files:
print(name)
</code></pre>
<p><a href="https://i.stack.imgur.com/t1QDJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t1QDJ.png" alt="Example of paths.txt"></a></p>
|
<p>You need to remove the double quotes from the file (<code>""</code>). Here is why: when the file gets read by Python, after it does the <code>.split()</code>, the double-quote characters are part of the Python string. So instead of passing the path <code>D:\bp1</code> into <code>os.walk()</code>, you were actually passing in <code>"D:\bp1"</code>, and since there is no path that starts with a <code>"</code>, nothing was happening.</p>
<p>You would only need to provide the double quotes if you're writing the path in a terminal/command prompt and don't want to escape special characters, or if you're defining the string inside Python using a double-quoted literal, for example <code>path = "D:\\bp1"</code> (notice that in that case you also have to escape the <code>\</code> with another one).</p>
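If editing the file isn't an option, here is a sketch of stripping the quotes in code instead (assuming the same semicolon-separated paths.txt format as in the question; <code>str.strip('"')</code> removes quote characters only at the ends of each segment):

```python
# What file.read() might return for the paths.txt shown in the question.
data = '"D:\\bp1";"D:\\bp2"'

# Strip surrounding double quotes from each segment so os.walk()
# receives a real path instead of one starting with '"'.
paths = [p.strip('"') for p in data.split(';')]
print(paths)  # ['D:\\bp1', 'D:\\bp2']
```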
|
python-3.7|os.walk
| 1 |
1,901,887 | 59,990,647 |
parse and decode mail text in aiosmtpd, perform string substitution, and reinject
|
<p>I began with <strong>smtpd</strong> in order to process a mail queue, parse inbound emails and send them back to recipients (using <strong>smtplib.sendmail</strong>).
I switched to <strong>aiosmtpd</strong> since I needed multithreaded processing (while <strong>smtpd</strong> is single-threaded and, besides that, looks discontinued).</p>
<p>By the way, I'm puzzled by <strong>aiosmtpd</strong>'s management of the mail <strong>envelope contents</strong>, which seems much more granular than before: good if you need really fine tuning, but somewhat oversized if you just want to process the body without modifying the rest.</p>
<p>To give an example, smtpd's <strong>process_message</strong> method just needed the <strong>decode_data=True</strong> parameter to process and decode the mail body without touching anything else, while aiosmtpd's <strong>handle_DATA</strong> method seems unable to <em>automagically</em> decode the mail envelope and often raises exceptions with embedded images, attachments, and so on...</p>
<p><strong>EDIT</strong> added code examples, smtpd first: following code will instantiate smtp server waiting for mail on port 10025 and delivering to 10027 via smtplib (both localhost). It is safe to work on <strong>data</strong> variable (basically perform string substitutions, my goal) for all kind of mail (text/html based, with embedded images, attachments...)</p>
<pre><code>class PROXY_SMTP(smtpd.SMTPServer):
def process_message(self, peer, mailfrom, rcpttos, data, decode_data=True):
server = smtplib.SMTP('localhost', 10027)
server.sendmail(mailfrom, rcpttos, data)
server.quit()
server = PROXY_SMTP(('127.0.0.1', 10025), None)
asyncore.loop()
</code></pre>
<p>Previous code works well but in a single thread fashion (= 1 mail at once), so i switched to aiosmtpd to have concurrent mail processing. Same example with aiosmtpd would be roughly:</p>
<pre><code>class MyHandler:
async def handle_DATA(self, server, session, envelope):
peer = session.peer
mailfrom = envelope.mail_from
rcpttos = envelope.rcpt_tos
data = envelope.content.decode()
server = smtplib.SMTP('localhost', 10027)
server.sendmail(mailfrom, rcpttos, data)
server.quit()
my_handler = MyHandler()
async def main(loop):
my_controller = Controller(my_handler, hostname='127.0.0.1', port=10025)
my_controller.start()
loop = asyncio.get_event_loop()
loop.create_task(main(loop=loop))
try:
loop.run_forever()
</code></pre>
<p>This code works well for text emails, but will give exceptions when decoding envelope.content with any complex mail (mime content, attachments...)</p>
<p>How could I parse and decode mail text in aiosmtpd, perform string substitution as I did with smtpd, and reinject via smtplib?</p>
|
<p>You are calling <code>decode()</code> on something whose encoding you can't possibly know or predict in advance. Modifying the raw RFC5322 message is extremely problematic anyway, because you can't easily look inside quoted-printable or base64 body parts if you want to modify the contents. Also watch out for RFC2047 encapsulation in human-visible headers, file names in RFC2231 (or some dastardly non-compliant perversion - many clients don't get this even almost right) etc. See below for an example.</p>
<p>Instead, if I am guessing correctly what you want, I would parse it into an <code>email</code> object, then take it from there.</p>
<pre class="lang-py prettyprint-override"><code>from email import message_from_bytes
from email.policy import default
class MyHandler:
async def handle_DATA(self, server, session, envelope):
peer = session.peer
mailfrom = envelope.mail_from
rcpttos = envelope.rcpt_tos
message = message_from_bytes(envelope.content, policy=default)
# ... do things with the message,
# maybe look into the .walk() method to traverse the MIME structure
server = smtplib.SMTP('localhost', 10027)
server.send_message(message, mailfrom, rcpttos)
server.quit()
return '250 OK'
</code></pre>
<p>The <code>policy</code> argument selects the modern <code>email.message.EmailMessage</code> class which replaces the legacy <code>email.message.Message</code> class from Python 3.2 and earlier. (A lot of online examples still promote the legacy API; the new one is more logical and versatile, so you want to target that if you can.)</p>
<p>This also adds the missing <code>return</code> statement which each handler should provide as per <a href="https://aiosmtpd.readthedocs.io/en/latest/aiosmtpd/docs/handlers.html" rel="nofollow noreferrer">the documentation.</a></p>
<hr>
<p>Here's an example message which contains the string "Hello" in two places. Because the content-transfer-encoding obscures the content, you need to analyze the message (such as by parsing it into an <code>email</code> object) to be able to properly manipulate it.</p>
<pre><code>From: me <me@example.org>
To: you <recipient@example.net>
Subject: MIME encapsulation demo
Mime-Version: 1.0
Content-type: multipart/alternative; boundary="covfefe"
--covfefe
Content-type: text/plain; charset="utf-8"
Content-transfer-encoding: quoted-printable
You had me at "H=
ello."
--covfefe
Content-type: text/html; charset="utf-8"
Content-transfer-encoding: base64
PGh0bWw+PGhlYWQ+PHRpdGxlPkhlbGxvLCBpcyBpdCBtZSB5b3UncmUgbG9va2luZyBmb3I/PC
90aXRsZT48L2hlYWQ+PGJvZHk+PHA+VGhlIGNvdiBpbiB0aGUgZmUgZmU8L3A+PC9ib2R5Pjwv
aHRtbD4K
--covfefe--
</code></pre>
|
python|email|smtp|email-attachments|aiosmtpd
| 2 |
1,901,888 | 1,645,028 |
Trace Table for Python Programs
|
<p>Is there a way to get the trace table for a Python program? Or for a program to run another program and get its trace table? I'm a teacher trying to flawlessly verify the answers to the tracing problems that we use on our tests.</p>
<p>So, for example, assuming I have a Python program named <code>problem1.py</code> with the following content:</p>
<h3>problem1.py</h3>
<pre><code> a = 1
b = 2
a = a + b
</code></pre>
<p>Executing the presumed program <code>traceTable.py</code> should go as:</p>
<pre><code> $ python traceTable.py problem1.py
L || a | b
1 || 1 |
2 || 1 | 2
4 || 3 | 2
</code></pre>
<p>(Or the same information with a different syntax)</p>
<p>I've looked into the <code>trace</code> module, and I can't see a way that it supports this.</p>
<hr>
<h3>Updated</h3>
<p>Ladies and gentlemen: using Ned Batchelder's excellent advice, I give you <code>traceTable.py</code>! </p>
<p>Well.. almost. As you can see in Ned Batchelder's example, <code>frame.f_lineno</code> doesn't always behave intuitively (e.g. both lines 3 & 4 are counted as line 4), but the line numbers are close enough for a fairly good reference. Also, all calculations are correct. </p>
<p>I have tested this with a long program containing an <code>if</code> statement and it gave the correct table (sans the line numbers).</p>
<p>You will also notice that my program is significantly longer than Ned Batchelder's proof of concept due to accounting for the "more interesting ecosystems of data" in larger programs he mentioned. In the scope of using <code>execfile</code> and all the variables needed to manage it and reduce noise (ala <code>ignored_variables</code>) as well as produce proper string output, a lot more code is needed:</p>
<h2>traceTable.py</h2>
<pre><code> '''
Usage: python traceTable.py program
-program Python program to be traced
'''
import sys
if len(sys.argv) < 2:
print __doc__
exit()
else:
file_name = sys.argv[1]
past_locals = {}
variable_list = []
table_content = ""
ignored_variables = set([
'file_name',
'trace',
'sys',
'past_locals',
'variable_list',
'table_content',
'getattr',
'name',
'self',
'object',
'consumed',
'data',
'ignored_variables'])
def trace(frame, event, arg_unused):
global past_locals, variable_list, table_content, ignored_variables
relevant_locals = {}
all_locals = frame.f_locals.copy()
for k,v in all_locals.items():
if not k.startswith("__") and k not in ignored_variables:
relevant_locals[k] = v
if len(relevant_locals) > 0 and past_locals != relevant_locals:
for i in relevant_locals:
if i not in past_locals:
variable_list.append(i)
table_content += str(frame.f_lineno) + " || "
for variable in variable_list:
table_content += str(relevant_locals[variable]) + " | "
table_content = table_content[:-2]
table_content += '\n'
past_locals = relevant_locals
return trace
sys.settrace(trace)
execfile(file_name)
table_header = "L || "
for variable in variable_list:
table_header += variable + ' | '
table_header = table_header[:-2]
print table_header
print table_content
</code></pre>
<p>When called, it produces the output</p>
<pre><code> $ python traceTable.py problem1.py
L || a | b
2 || 1
4 || 1 | 2
4 || 3 | 2
</code></pre>
|
<p>This isn't a use case that the current Python tracing tools support, but it should be possible to build. I don't know how you would decide what columns to output. In your sample, a and b are the only local variables, but larger programs would have more interesting ecosystems of data.</p>
<p>Updated: here's a simple proof of concept:</p>
<pre class="lang-none prettyprint-override"><code> 1 import sys
2
3 def trace(frame, event, arg_unused):
4 print event, frame.f_lineno, frame.f_locals
5 return trace
6
7 sys.settrace(trace)
8
9 def foo():
10 a = 1
11 b = 2
12
13 a = a + b
14
15 foo()
</code></pre>
<p>when run, the output is:</p>
<pre class="lang-none prettyprint-override"><code>call 9 {}
line 10 {}
line 11 {'a': 1}
line 13 {'a': 1, 'b': 2}
return 13 {'a': 3, 'b': 2}
</code></pre>
|
python
| 11 |
1,901,889 | 28,291,085 |
Extract integer part from String in python
|
<p>Suppose I have a string like <code>gi|417072228|gb|JX515788.1|</code>. I need to extract the digit part <code>417072228</code> out of it using <code>python</code>. How can I split that part from the string? Should I use a regular expression? </p>
<p>Can anyone help me with this? Thanks in advance..</p>
|
<p>It looks like you have delimiters in your input string already, which makes this easy to do with the methods built in to the string data type. No need for regular expressions. </p>
<pre><code>for segment in s.split('|'):
if segment.isdigit():
# do your stuff with the number
</code></pre>
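Applied to the example string from the question, a minimal sketch (note the extracted segment is still a string; wrap it in <code>int()</code> if you need a number):

```python
s = 'gi|417072228|gb|JX515788.1|'

# Take the first segment that consists entirely of digits.
number = next(seg for seg in s.split('|') if seg.isdigit())
print(number)  # 417072228
```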
|
python|regex
| 2 |
1,901,890 | 44,272,726 |
python celery - get() is delayed
|
<p>I am running the following simple example.
Submit 20 jobs that take 2 seconds each using a single worker:</p>
<pre><code>celery -A celery_test worker --concurrency 10 -l INFO
</code></pre>
<p>This should take 2 * 2 = 4 seconds, and indeed that is how long the worker takes to process the tasks. However, retrieving the results adds an additional delay of 6 seconds.</p>
<p><strong>Any ideas how to get rid of this delay?</strong></p>
<p>For scripts and outputs see below:</p>
<p><em>celery_call.py:</em></p>
<pre><code>from celery_test import add
import time
results = []
for i in range(20):
results.append(add.delay(i))
for result in results:
timeStart = time.time()
resultValue = result.get(timeout=10)
timePassed = time.time() - timeStart
print(timePassed, resultValue)
</code></pre>
<p><em>celery_test.py:</em></p>
<pre><code>from celery import Celery
app = Celery('celery_test', backend='redis://localhost', broker='redis://localhost')
@app.task
def add(x):
import time
time.sleep(2)
return x
</code></pre>
<p><em>Output celery_call.py</em> <strong>-> in total execution takes 10s!!!</strong>:</p>
<pre><code>1.9161145687103271 0
0.0035011768341064453 1
0.016004323959350586 2
0.017235994338989258 3
0.01010441780090332 4
0.0038263797760009766 5
0.005273342132568359 6
0.004664897918701172 7
0.012930631637573242 8
0.003242015838623047 9
1.9315376281738281 10
0.0010662078857421875 11
0.013183832168579102 12
0.11239218711853027 13
1.001314640045166 14
1.0015337467193604 15
1.002277135848999 16
1.0016703605651855 17
1.0015861988067627 18
1.0017943382263184 19
</code></pre>
<p><em>Logging output of worker</em> <strong>-> as expected it takes 4s to process the data</strong>:</p>
<pre><code>[2017-05-30 14:54:21,475: INFO/MainProcess] Received task: celery_test.add[8a4a00cc-29a1-4a2f-a659-0ea7eb3aabb1]
[2017-05-30 14:54:21,479: INFO/MainProcess] Received task: celery_test.add[498a19df-0dfa-49f2-b4d8-c9eaa0b8782c]
[2017-05-30 14:54:21,483: INFO/MainProcess] Received task: celery_test.add[7bc232ca-e85c-4ae7-90bf-1d65c919fa4e]
[2017-05-30 14:54:21,500: INFO/MainProcess] Received task: celery_test.add[12cdb039-00d2-4471-8ce7-4da256dc83ef]
[2017-05-30 14:54:21,502: INFO/MainProcess] Received task: celery_test.add[931e1d19-640b-4f30-9b04-b65a165a1bc2]
[2017-05-30 14:54:21,515: INFO/MainProcess] Received task: celery_test.add[dd78de2e-b9a8-465e-a902-6f9eab1386e9]
[2017-05-30 14:54:21,518: INFO/MainProcess] Received task: celery_test.add[fb27c545-ad48-4d84-a5a2-154c1290aba3]
[2017-05-30 14:54:21,523: INFO/MainProcess] Received task: celery_test.add[ce079e0a-6fdf-4ee2-a6bf-ea349a435c4f]
[2017-05-30 14:54:21,534: INFO/MainProcess] Received task: celery_test.add[1222d9e2-9496-4b83-8cba-ad0b34c4df3d]
[2017-05-30 14:54:21,542: INFO/MainProcess] Received task: celery_test.add[67c2bf84-b39e-40bb-b1f5-a78b902d92a8]
[2017-05-30 14:54:21,551: INFO/MainProcess] Received task: celery_test.add[8aee72dd-2230-4d0a-8e4e-e7a3d5ca245c]
[2017-05-30 14:54:21,558: INFO/MainProcess] Received task: celery_test.add[e636f1ab-54cb-47a1-b1da-19744050566a]
[2017-05-30 14:54:21,561: INFO/MainProcess] Received task: celery_test.add[67a45660-2383-4d00-aaea-30e027a37a7d]
[2017-05-30 14:54:21,563: INFO/MainProcess] Received task: celery_test.add[4aa3227b-2ea4-4406-b205-d118c31c43bc]
[2017-05-30 14:54:21,565: INFO/MainProcess] Received task: celery_test.add[de317340-1012-4c9e-9bf1-4fa7248a91fc]
[2017-05-30 14:54:21,566: INFO/MainProcess] Received task: celery_test.add[791cf66e-2bff-4571-8209-a068451d1cb5]
[2017-05-30 14:54:21,569: INFO/MainProcess] Received task: celery_test.add[23701df3-138b-4248-a529-fba6789c2c0d]
[2017-05-30 14:54:21,569: INFO/MainProcess] Received task: celery_test.add[e3154044-39bd-481f-aadf-21e61d95f99e]
[2017-05-30 14:54:21,570: INFO/MainProcess] Received task: celery_test.add[0770e885-901e-45c0-a269-42c86aba7d05]
[2017-05-30 14:54:21,571: INFO/MainProcess] Received task: celery_test.add[a377fe5c-eb4e-44a7-9284-e83a67743096]
[2017-05-30 14:54:23,480: INFO/PoolWorker-7] Task celery_test.add[8a4a00cc-29a1-4a2f-a659-0ea7eb3aabb1] succeeded in 2.003492763997201s: 0
[2017-05-30 14:54:23,483: INFO/PoolWorker-9] Task celery_test.add[498a19df-0dfa-49f2-b4d8-c9eaa0b8782c] succeeded in 2.00371297500169s: 1
[2017-05-30 14:54:23,500: INFO/PoolWorker-1] Task celery_test.add[7bc232ca-e85c-4ae7-90bf-1d65c919fa4e] succeeded in 2.002869830997952s: 2
[2017-05-30 14:54:23,536: INFO/PoolWorker-8] Task celery_test.add[fb27c545-ad48-4d84-a5a2-154c1290aba3] succeeded in 2.016123138000694s: 6
[2017-05-30 14:54:23,536: INFO/PoolWorker-3] Task celery_test.add[12cdb039-00d2-4471-8ce7-4da256dc83ef] succeeded in 2.032121352000104s: 3
[2017-05-30 14:54:23,562: INFO/PoolWorker-10] Task celery_test.add[67c2bf84-b39e-40bb-b1f5-a78b902d92a8] succeeded in 2.005405851999967s: 9
[2017-05-30 14:54:23,562: INFO/PoolWorker-5] Task celery_test.add[1222d9e2-9496-4b83-8cba-ad0b34c4df3d] succeeded in 2.0252396640025836s: 8
[2017-05-30 14:54:23,562: INFO/PoolWorker-4] Task celery_test.add[931e1d19-640b-4f30-9b04-b65a165a1bc2] succeeded in 2.0579610860004323s: 4
[2017-05-30 14:54:23,563: INFO/PoolWorker-2] Task celery_test.add[ce079e0a-6fdf-4ee2-a6bf-ea349a435c4f] succeeded in 2.026003548002336s: 7
[2017-05-30 14:54:23,574: INFO/PoolWorker-6] Task celery_test.add[dd78de2e-b9a8-465e-a902-6f9eab1386e9] succeeded in 2.0539962090006156s: 5
[2017-05-30 14:54:25,492: INFO/PoolWorker-9] Task celery_test.add[e636f1ab-54cb-47a1-b1da-19744050566a] succeeded in 2.005732863002777s: 11
[2017-05-30 14:54:25,493: INFO/PoolWorker-7] Task celery_test.add[8aee72dd-2230-4d0a-8e4e-e7a3d5ca245c] succeeded in 2.0076579160013353s: 10
[2017-05-30 14:54:25,509: INFO/PoolWorker-1] Task celery_test.add[67a45660-2383-4d00-aaea-30e027a37a7d] succeeded in 2.007014112001343s: 12
[2017-05-30 14:54:25,588: INFO/PoolWorker-10] Task celery_test.add[a377fe5c-eb4e-44a7-9284-e83a67743096] succeeded in 2.0102590669994242s: 19
[2017-05-30 14:54:25,588: INFO/PoolWorker-6] Task celery_test.add[e3154044-39bd-481f-aadf-21e61d95f99e] succeeded in 2.0111475869998685s: 17
[2017-05-30 14:54:25,589: INFO/PoolWorker-3] Task celery_test.add[de317340-1012-4c9e-9bf1-4fa7248a91fc] succeeded in 2.0130576739975368s: 14
[2017-05-30 14:54:25,589: INFO/PoolWorker-8] Task celery_test.add[0770e885-901e-45c0-a269-42c86aba7d05] succeeded in 2.0113905420002993s: 18
[2017-05-30 14:54:25,589: INFO/PoolWorker-5] Task celery_test.add[23701df3-138b-4248-a529-fba6789c2c0d] succeeded in 2.012135950000811s: 16
[2017-05-30 14:54:25,617: INFO/PoolWorker-4] Task celery_test.add[791cf66e-2bff-4571-8209-a068451d1cb5] succeeded in 2.04044298000008s: 15
[2017-05-30 14:54:25,619: INFO/PoolWorker-2] Task celery_test.add[4aa3227b-2ea4-4406-b205-d118c31c43bc] succeeded in 2.043387800000346s: 13
</code></pre>
|
<p>It's because you wait for each job result in the loop, in submission order. So you lose some of the benefits of concurrency, because job results don't arrive in the same order as you request them. See the example below with some timings added to show the overall elapsed time:</p>
<pre><code>from celery_test import add
import time
results = []
for i in range(20):
results.append(add.delay(i))
allTimeStart = time.time()
for result in results:
timeStart = time.time()
resultValue = result.get(timeout=10)
timePassed = time.time() - timeStart
allTimePassed = time.time() - allTimeStart
print(allTimePassed, timePassed, resultValue)
</code></pre>
<p>Gives </p>
<pre><code>(1.9835469722747803, 1.9835450649261475, 0)
(1.9858801364898682, 0.0022699832916259766, 1)
(1.988955020904541, 0.003039121627807617, 2)
(1.9928300380706787, 0.003849029541015625, 3)
(1.9935901165008545, 0.0007331371307373047, 4)
(1.9967319965362549, 0.0031011104583740234, 5)
(1.9973289966583252, 0.0005509853363037109, 6)
(2.0004770755767822, 0.003117084503173828, 7)
(2.0007641315460205, 0.00026702880859375, 8)
(3.00203800201416, 1.001255989074707, 9)
(3.9891350269317627, 0.9870359897613525, 10)
(3.9914891719818115, 0.0023059844970703125, 11)
(3.99283504486084, 0.001302957534790039, 12)
(3.99426007270813, 0.0013878345489501953, 13)
(3.997709035873413, 0.003403902053833008, 14)
(3.9984171390533447, 0.0006809234619140625, 15)
(4.000844955444336, 0.0024080276489257812, 16)
(4.004598140716553, 0.003731966018676758, 17)
(4.0053839683532715, 0.0007598400115966797, 18)
(5.006708145141602, 1.0012950897216797, 19)
</code></pre>
<p>But if you look at the order of the task results in the celery log, you see that the results don't arrive in the order you request them:</p>
<pre><code>[2017-05-31 01:06:39,067: INFO/PoolWorker-2] Task celery_test.add[01fe4581-7982-40f3-92d3-9f352d0b8eca] succeeded in 2.00315466001s: 0
[2017-05-31 01:06:39,069: INFO/PoolWorker-8] Task celery_test.add[f468849c-76d9-4479-b7e2-850aab640437] succeeded in 2.003014307s: 1
[2017-05-31 01:06:39,072: INFO/PoolWorker-3] Task celery_test.add[db6a0064-0a83-49dc-a731-54264651a32f] succeeded in 2.002590772s: 3
[2017-05-31 01:06:39,072: INFO/PoolWorker-4] Task celery_test.add[421b1213-e1b7-4c73-8477-1554c53c4b14] succeeded in 2.002614007s: 2
[2017-05-31 01:06:39,076: INFO/PoolWorker-7] Task celery_test.add[90bdde7f-9740-4d18-820d-dc4c66090b2b] succeeded in 2.00297982999s: 4
[2017-05-31 01:06:39,077: INFO/PoolWorker-5] Task celery_test.add[661cba10-326a-4351-9fec-56d029847939] succeeded in 2.003134354s: 5
[2017-05-31 01:06:39,080: INFO/PoolWorker-10] Task celery_test.add[31903dfe-4b35-49b8-bc66-8c8807a1ee53] succeeded in 2.00229301301s: 6
[2017-05-31 01:06:39,080: INFO/PoolWorker-9] Task celery_test.add[60049a1b-009b-4d7b-ad4e-284f0d2e7147] succeeded in 2.00245238301s: 7
[2017-05-31 01:06:39,084: INFO/PoolWorker-1] Task celery_test.add[4e673409-af0e-4a59-8a42-38f0179b495a] succeeded in 2.00299428699s: 8
[2017-05-31 01:06:39,084: INFO/PoolWorker-6] Task celery_test.add[818bcea5-5654-4ec6-8706-1b6ca58f8735] succeeded in 2.002899974s: 9
[2017-05-31 01:06:41,072: INFO/PoolWorker-2] Task celery_test.add[4ab62e6d-ada3-4e0d-82e2-356eb054631f] succeeded in 2.00349172599s: 10
[2017-05-31 01:06:41,074: INFO/PoolWorker-8] Task celery_test.add[649c83db-a065-4cdd-9f5e-32ae1e5047f4] succeeded in 2.003091722s: 11
[2017-05-31 01:06:41,076: INFO/PoolWorker-4] Task celery_test.add[f6a6e067-7f60-4c1f-b8f4-dce40a6094c0] succeeded in 2.00157168499s: 12
[2017-05-31 01:06:41,077: INFO/PoolWorker-3] Task celery_test.add[ee7b0e01-2fa7-4bd0-b2f2-f5636155209b] succeeded in 2.00259804301s: 13
[2017-05-31 01:06:41,081: INFO/PoolWorker-7] Task celery_test.add[521f7903-3594-4aab-b4df-3a4e723347cd] succeeded in 2.002994123s: 14
[2017-05-31 01:06:41,081: INFO/PoolWorker-5] Task celery_test.add[26a3627c-7934-4613-b3c1-618784bbce26] succeeded in 2.003302467s: 15
[2017-05-31 01:06:41,084: INFO/PoolWorker-9] Task celery_test.add[8e796394-b05f-439b-b695-6d3ff3230844] succeeded in 2.00281064s: 17
[2017-05-31 01:06:41,084: INFO/PoolWorker-10] Task celery_test.add[13b40cd8-b0e4-4788-a3bb-4d050c1b6ad0] succeeded in 2.00298337401s: 16
[2017-05-31 01:06:41,088: INFO/PoolWorker-6] Task celery_test.add[cb8f1303-4d05-4eae-9b40-b2d221f20140] succeeded in 2.00274520101s: 19
[2017-05-31 01:06:41,088: INFO/PoolWorker-1] Task celery_test.add[0900bb54-8e2a-472c-99a8-ee18a8f4857c] succeeded in 2.003100015s: 18
</code></pre>
<p>One solution: use <code>group</code> to get all results:</p>
<pre><code>from celery_test import add
from celery import group
import time
results = []
jobs = []
for i in range(20):
jobs.append(add.s(i))
result = group(jobs).apply_async()
timeStart = time.time()
print(result.join())
timePassed = time.time() - timeStart
print(timePassed)
</code></pre>
<p>Returns </p>
<pre><code>[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
4.00328302383
</code></pre>
|
python|celery
| 1 |
1,901,891 | 44,314,704 |
Why are not all results of my calculation being printed
|
<p>I am wondering how I can print and see all the results of my calculation. At the moment I can only see 5 of the results. It should be possible to see all results, right? But how? Using Python 2.7. Thanks in advance for your help and fast responses. </p>
<pre><code>in[] print optv['x'].round(3)
out[] [ 0.123 0.122 0. ..., 0. 0. 0. ]
</code></pre>
|
<p>NumPy automatically summarizes the printed representation of arrays with more than 1000 elements. To disable this, use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html" rel="nofollow noreferrer"><code>numpy.set_printoptions</code></a>:</p>
<pre><code>numpy.set_printoptions(threshold=numpy.inf)
</code></pre>
|
python
| 1 |
1,901,892 | 44,079,332 |
Passing a string as an argument to a python script
|
<p>I want to pass a string of ZPL codes from one python script to another python script. The string becomes malformed when used in the second script. How can I pass a string literal as an argument to another python script without it being malformed?</p>
<p><strong>Original String</strong></p>
<p><code>^XA^FO20,20^BQ,2,3^FDQA,001D4B02107A;1001000;49681207^FS^FO50,50^ADN,36,20^FDMAC: 001D4B02107A^FS^FO50,150^ADN,36,20^FDSN: 1001000^FS^FO50,250^ADN,36,20^FDCode: 49681207^FS^XZ</code></p>
<p><strong>Malformed string</strong></p>
<p><code>XAFO20,20BQ,2,3FDQA,001D4B02107A;1001000;49681207FSFO50,50ADN,36,20FDMAC:</code></p>
<p><strong>Code where I call the second script</strong></p>
<pre><code>def printLabel():
label = "^XA"+"^FO20,20^BQ,2,3^FDQA,"+"001D4B02107A;1001000;49681207"+"^FS"+"^FO50,50"+"^ADN,36,20"+"^FD"+"MAC: "+"001D4B02107A"+"^FS"+"^FO50,150"+"^ADN,36,20"+"^FD"+"SN: "+"1001000"+"^FS"+"^FO50,250"+"^ADN,36,20"+"^FD" + "Code: "+"49681207"+"^FS"+"^XZ"
command = "zt320print.py "+label
print command
sys.stdout.flush()
exitCode = os.system(str(command))
</code></pre>
<p><strong>Code that receives the argument</strong></p>
<pre><code>if __name__ == "__main__":
zplString = str(sys.argv[1])
print zplString
printZPL(zplString)
</code></pre>
|
<p>If your code needs to be written just as it is (including the rather odd way of stringing together the ZPL code, and calling a separate script via a shell intermediary, and the avoidance of <code>subprocess</code>, for that matter), you can resolve your issue with a few small adjustments:</p>
<p>First, wrap your code string in double-quotes.</p>
<pre><code>label= '"^XA'+"^FO20,20^BQ,2,3^FDQA,"+"001D4B02107A;1001000;49681207"+"^FS"+"^FO50,50"+"^ADN,36,20"+"^FD"+"MAC: "+"001D4B02107A"+"^FS"+"^FO50,150"+"^ADN,36,20"+"^FD"+"SN: "+"1001000"+"^FS"+"^FO50,250"+"^ADN,36,20"+"^FD" + "Code: "+"49681207"+"^FS"+'^XZ"'
</code></pre>
<p>Second, make sure you're actually calling <code>python</code> from the shell:</p>
<pre><code>command = "python script2.py "+label
</code></pre>
<p>Finally, if you're concerned about special characters not being read in correctly from the command line, use <code>unicode_escape</code> with <code>codecs.decode</code> to ensure correct transmission.<br>
See <a href="https://stackoverflow.com/a/34974354/2799941">this answer</a> for more on <code>unicode_escape</code>.</p>
<pre><code># contents of second script
if __name__ == "__main__":
from codecs import decode
import sys
zplString = decode(sys.argv[1], 'unicode_escape')
print(zplString)
</code></pre>
<p>Now the call from your first script will transmit the code correctly:</p>
<pre><code>import sys
import os
sys.stdout.flush()
exitCode = os.system(str(command))
</code></pre>
<p>Output:</p>
<pre><code>^XA^FO20,20^BQ,2,3^FDQA,001D4B02107A;1001000;49681207^FS^FO50,50^ADN,36,20^FDMAC: 001D4B02107A^FS^FO50,150^ADN,36,20^FDSN: 1001000^FS^FO50,250^ADN,36,20^FDCode: 49681207^FS^XZ
</code></pre>
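A design note, as a sketch only: if using <code>subprocess</code> ever becomes an option, passing an argument list sidesteps shell quoting entirely, so the ZPL string reaches the child verbatim. The stand-in child script below is hypothetical (it just echoes <code>sys.argv[1]</code>, playing the role of <code>zt320print.py</code>):

```python
import os
import subprocess
import sys
import tempfile

label = ('^XA^FO20,20^BQ,2,3^FDQA,001D4B02107A;1001000;49681207'
         '^FS^FO50,50^ADN,36,20^FDMAC: 001D4B02107A^FS^XZ')

# Hypothetical stand-in for zt320print.py: echo the first argument.
child = os.path.join(tempfile.mkdtemp(), 'child.py')
with open(child, 'w') as f:
    f.write('import sys\nsys.stdout.write(sys.argv[1])\n')

# Passing a list means no shell parsing at all: carets, semicolons
# and spaces arrive untouched in the child's sys.argv[1].
received = subprocess.check_output([sys.executable, child, label]).decode()
print(received == label)  # True
```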
|
python|arguments
| 2 |
1,901,893 | 32,681,532 |
Python program with Notification in Gnome Shell doesn't work
|
<p>I'm writing a Python program that takes info from a webpage and shows it as a notification in Gnome Shell. I'm using Arch, so I want to start this program at startup, and if there is any change on the webpage, it will notify me. Here is my code:</p>
<pre><code>import time
import webbrowser
import requests
from bs4 import BeautifulSoup
from gi.repository import Notify, GLib
IPS = {'Mobifone': True, 'Viettel': False, 'Vinaphone': False}
LINK = "https://id.vtc.vn/tin-tuc/chuyen-muc-49/tin-khuyen-mai.html"
def set_ips_state(ips_name, state):
global IPS
for key in IPS.iterkeys():
if key == ips_name:
IPS[key] = state
def call_webbrowser(notification, action_name, link):
webbrowser.get('firefox').open_new_tab(link)
def create_notify(summary, body, link):
Notify.init("Offer")
noti = Notify.Notification.new(summary, body, 'dialog-information')
noti.add_action('action_click', 'Read more...', call_webbrowser, link)
noti.show()
# GLib.MainLoop().run()
def save_to_file(path_to_file, string):
file = open(path_to_file, 'w')
file.write(string)
file.close()
def main():
global IPS
global LINK
result = []
offer_news = open('offer_news.txt')
tag_in_file = BeautifulSoup(offer_news.readline(), 'html.parser')
tag = tag_in_file.a
offer_news.close()
page = requests.get(LINK)
soup = BeautifulSoup(page.text, 'html.parser')
for div in soup.find_all('div', 'tt_dong1'):
# first_a = div.a
# main_content = first_a.find_next_subling('a')
main_content = div.find_all('a')[1]
for k, v in IPS.iteritems():
if v:
if main_content.text.find(k) != -1:
result.append(main_content)
print result[1].encode('utf-8')
if tag_in_file == '':
pass
else:
try:
old_news_index = result.index(tag)
print old_news_index
for idx in range(old_news_index):
create_notify('Offer News', result[idx].text.encode('utf-8'), result[idx].get('href'))
print "I'm here"
except ValueError:
pass
offer_news = open('offer_news.txt', 'w')
offer_news.write(result[0].__str__())
offer_news.close()
if __name__ == '__main__':
while 1:
main()
time.sleep(10)
</code></pre>
<p>The problem is that when I click on the "Read more..." button in the notification, it does not open Firefox unless I uncomment <code>GLib.MainLoop().run()</code> in the create_notify function, but that makes the program freeze. Could anybody help?</p>
|
<p>GUI applications usually use three main components: widgets, an event loop and callbacks. When you start the application, you create widgets, register callbacks and start the event loop. The event loop is an infinite loop which looks for events from widgets (such as 'button clicked') and fires the corresponding callbacks.</p>
<p>Now, in your application you have another infinite loop, so these two will not play along. Instead, you should make use of the <code>GLib.MainLoop().run()</code> to fire events. You can use <code>GLib.timeout_add_seconds</code> to fire periodic events such as your every 10 seconds.</p>
<p>The second problem is that you need to hold a reference to any notification that is supposed to fire callbacks. The reason it worked when you added <code>GLib.MainLoop().run()</code> after <code>noti.show()</code> is that the reference to <code>noti</code> still existed, but it would not work if you made the changes suggested earlier. If you are sure there is always going to be just one notification active, you can hold a reference to the last notification. Otherwise you would need a list and would have to purge it periodically, or something along those lines. </p>
<p>The following example should set you in the right direction:</p>
<pre><code>from gi.repository import GLib, Notify
class App():
def __init__(self):
self.last_notification = None
Notify.init('Test')
self.check()
def check(self):
self.last_notification = Notify.Notification.new('Test')
self.last_notification.add_action('clicked', 'Action',
self.notification_callback, None)
self.last_notification.show()
GLib.timeout_add_seconds(10, self.check)
def notification_callback(self, notification, action_name, data):
print(action_name)
app = App()
GLib.MainLoop().run()
</code></pre>
|
python|linux|pygobject|gnome-shell
| 7 |
1,901,894 | 32,862,732 |
Reading a large amount of numbers in python
|
<p>I am trying to read a large amount of numbers (8112 in total) and rearrange them into 6 columns. First, I want to add 52 numbers to the first column, then 52 to the second, then 52 to the third, and so on. Once I have the resulting 6 columns, each containing 52 numbers, I want to continue reading in the same way until the end of the data. I have tried this:</p>
<pre><code>with open('file.dat') as f:
line = f.read().split()
for row in range(len(line)):
for col in range(6):
print line[row + 52*col],
print
</code></pre>
<p>The code is not reading the numbers correctly and does not go all the way to the end. It stops after reading about 7000 numbers, and I get an <code>IndexError: list index out of range</code>. </p>
<p>The input file contains the numbers listed like this: </p>
<p><code>-0.001491728991 -0.001392067804 -0.001383514062 -0.000777354202 -0.000176516325 -0.00066003232 0.001491728657 0.001392067465 0.00138351373 0.00077735388 0.000176516029 0.000660032023 -0.001491728966 -0.001392067669 -0.001383513988 -0.000777354111 -0.000176516303 2.5350931e-05 -0.000660032277 0.001491728631 0.00139206733 0.001383513657 0.000777353789 0.000176516006 0.000660031981 -0.003692742099 -0.003274685372 -0.001504168916 0.003692740966 0.003274684254 0.001504167874 -0.003692741847 -0.003274685132 -0.001504168791 (...)</code> </p>
<p>(8112 numbers in total)</p>
|
<p>try this:</p>
<pre><code>data = range(8112) # replace with input from file
col_size = 52
col_count = 6
batch_size = (col_size*col_count)
# split input into batches of 6 columns of 52 entries each
for batch in range(0,len(data),batch_size):
# rearrange line data into 6 column format
cols = zip(*[data[x:x+col_size] for x in range(batch,batch+batch_size,col_size)])
for c in cols:
print c
</code></pre>
<p>output:</p>
<pre><code>(0, 52, 104, 156, 208, 260)
(1, 53, 105, 157, 209, 261)
...
(50, 102, 154, 206, 258, 310)
(51, 103, 155, 207, 259, 311)
(312, 364, 416, 468, 520, 572)
(313, 365, 417, 469, 521, 573)
...
(362, 414, 466, 518, 570, 622)
(363, 415, 467, 519, 571, 623)
(624, 676, 728, 780, 832, 884)
(625, 677, 729, 781, 833, 885)
...
</code></pre>
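<p>If NumPy is available, the same rearrangement can be written as a reshape plus a transpose. A sketch (assuming, as above, that the total count is an exact multiple of 6*52 = 312):</p>

```python
import numpy as np

data = np.arange(8112)  # stand-in for the numbers read from the file
col_size, col_count = 52, 6

# view the data as batches of 6 rows of 52, then swap the last two axes
# so each batch becomes 52 rows of 6 columns
rows = data.reshape(-1, col_count, col_size).transpose(0, 2, 1).reshape(-1, col_count)

print(rows[0])   # first row of the first batch: 0, 52, 104, 156, 208, 260
print(rows[52])  # first row of the second batch: 312, 364, 416, 468, 520, 572
```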
|
python
| 0 |
1,901,895 | 34,638,092 |
access properties of widget inside canvas.create_window
|
<p>I am having the following problem. I am making a tkinter GUI, and I need to access an object that is inside a canvas, inside a Canvas.create_window widget, packed with some other objects. For example:</p>
<pre><code>import Tkinter as tk
class Demo:
def __init__(self, master):
self.canvas = tk.Canvas()
self.canvas.pack(fill="both", expand=True)
f = tk.Frame(self.canvas)
f.pack()
self.container = self.canvas.create_window(50,50, window = f)
l = tk.Label(f, text='abc')
e = tk.Entry(f, width = 5)
l.pack()
e.pack()
if __name__ == '__main__':
root = tk.Tk()
app = Demo(root)
root.mainloop()
</code></pre>
<p>I am trying to edit the text of the <code>l</code> label (which is currently 'abc') when some other event is triggered. I suppose I need to use canvas.itemconfig, but I can't find a way to pass the correct reference to the label to this function. Any ideas?
Thank you</p>
|
<p>You don't need to use <code>itemconfigure</code> -- that is only for configuring canvas items. Your label is not a canvas item, it's just a normal tkinter widget that you access like any other widget. Save a reference, and then use the reference to call a method.</p>
<p>For example:</p>
<pre><code>class Demo:
def __init__(...):
...
self.l = tk.Label(f, text='abc')
...
def some_event_handler(event):
self.l.configure(text="xyz")
</code></pre>
|
python|tkinter|tkinter-canvas
| 2 |
1,901,896 | 34,614,532 |
Rotating a 5D array in the last 2 dimensions
|
<p>I have a 5D array 'a', of size (3,2,2,2,2).</p>
<pre><code>import numpy as np
a = np.arange(48).reshape(3,2,2,2,2)
a[0,0,0]:
array([[0, 1],
[2, 3]])
</code></pre>
<p>What I want to do is rotate this 5D array by 180 degrees, but only in the last two dimensions, without their positions changed. So output[0,0,0] should look like this:</p>
<pre><code>out[0,0,0]:
array([[3, 2],
[1, 0]])
</code></pre>
<p>What I have tried:</p>
<pre><code>out = np.rot90(a, 2)
out[0,0,0]:
array([[40, 41],
[42, 43]])
</code></pre>
<p>The <code>rot90</code> function apparently rotates the whole array.</p>
<p>Note: I want to avoid using for loops if possible</p>
|
<p>To rotate the last two axes 180 degrees, pass <code>axes=(-2, -1)</code> to <code>np.rot90</code>:</p>
<pre><code>>>> a180 = np.rot90(a, 2, axes=(-2, -1))
>>> a180[0, 0, 0]
array([[3, 2],
[1, 0]])
</code></pre>
<p>If your version of NumPy does not have the <code>axes</code> parameter with <code>np.rot90</code>, there are alternatives.</p>
<p>One way is with indexing:</p>
<pre><code>a180 = a[..., ::-1, ::-1]
</code></pre>
<p><code>rot90</code> flips the <em>first</em> two axes of the array, therefore to use it you'd need to transpose (to reverse the axes), rotate, and transpose back again. For example:</p>
<pre><code>np.rot90(a.T, 2).T
</code></pre>
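<p>A quick check that the three approaches give the same result:</p>

```python
import numpy as np

a = np.arange(48).reshape(3, 2, 2, 2, 2)

v1 = np.rot90(a, 2, axes=(-2, -1))  # needs NumPy >= 1.12
v2 = a[..., ::-1, ::-1]             # reverse the last two axes directly
v3 = np.rot90(a.T, 2).T             # transpose, rotate the (now first) axes, transpose back

print(np.array_equal(v1, v2) and np.array_equal(v2, v3))  # True
```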
|
python|arrays|numpy|rotation|vectorization
| 2 |
1,901,897 | 34,650,576 |
Opening a file in an editor while it's open in a script
|
<p>I have the following code:</p>
<pre><code>import os
import sys
import tempfile
import subprocess
with tempfile.NamedTemporaryFile('w+') as f:
    if sys.platform.startswith('linux'):
        subprocess.call(['vim', f.name])   # args are passed as a list
    elif sys.platform == 'win32':          # sys.platform is 'win32' on Windows, not 'nt'
os.system(f.name)
</code></pre>
<p>It opens the temporary file using either <code>vim</code> on Linux, or the default editor on Windows. On Linux it works fine: <a href="http://docs.python.org/3/library/tempfile.html#tempfile.NamedTemporaryFile" rel="noreferrer"><code>tempfile.NamedTemporaryFile()</code></a> creates a temporary file and <code>vim</code> opens it. On Windows, however, the system says:</p>
<blockquote>
<p>The process cannot access the file because it is being used by another process.</p>
</blockquote>
<p>I guess that's because the script is currently <em>using</em> the file.</p>
<p>Why does it work on Linux, and how do I get it to work on Windows?</p>
|
<p>I've run into this problem before. My problem was that I had to write to a file and then use that file's name as an argument in a command.</p>
<p>The reason this works in Linux is that, as <a href="https://stackoverflow.com/users/4014959/pm-2ring">@PM 2Ring</a> said in the comments, Linux allows multiple processes to write to the same file, but Windows does not.</p>
<p>There are two approaches to tackle this.</p>
<p>One is to create a <a href="https://docs.python.org/2/library/tempfile.html#tempfile.mkdtemp" rel="nofollow noreferrer">temporary directory</a> and create a file in that directory.</p>
<pre><code># Python 2 and 3
import os
import tempfile
temp_dir = tempfile.mkdtemp()
try:
temp_file = os.path.join(temp_dir, 'file.txt')
with open(temp_file, 'w') as f:
pass # Create the file, or optionally write to it.
try:
do_stuff(temp_file) # In this case, open the file in an editor.
finally:
        os.remove(temp_file)
finally:
os.rmdir(temp_dir)
</code></pre>
<pre><code># Python 3 only
import os
import tempfile
with tempfile.TemporaryDirectory() as temp_dir:
temp_file = os.path.join(temp_dir, 'file.txt')
with open(temp_file, 'w') as f:
pass # Create the file, or optionally write to it.
do_stuff(temp_file)
# with tempfile.TemporaryDirectory(): automatically deletes temp_file
</code></pre>
<p>Another approach is to create the temporary file with <code>delete=False</code> so that when you close it, it isn't deleted, and then delete it manually later.</p>
<pre><code># Python 2 and 3
import os
import tempfile
fp = tempfile.NamedTemporaryFile(suffix='.txt', delete=False)
try:
fp.close()
do_stuff(fp.name)
finally:
os.remove(fp.name)
</code></pre>
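<p>The core pattern behind the class below (create the file, close your own handle so other processes can open it on Windows, use the path, then delete it) can also be spelled out directly with <code>tempfile.mkstemp</code>. A minimal sketch, with reading the file back standing in for the external program:</p>

```python
import os
import tempfile

fd, path = tempfile.mkstemp(suffix='.txt')
try:
    os.write(fd, b'hello')
    os.close(fd)  # release our handle so another process could open the file
    with open(path) as f:  # stands in for do_stuff(path)
        contents = f.read()
finally:
    os.remove(path)

print(contents)  # hello
```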
<p>Here is a little context manager that can make files:</p>
<pre><code>import os
import tempfile
_text_type = type(u'')
class ClosedTemporaryFile(object):
__slots__ = ('name',)
def __init__(self, data=b'', suffix='', prefix='tmp', dir=None):
        # mkstemp returns a raw file descriptor and a path, not a file object
        fd, self.name = tempfile.mkstemp(suffix, prefix, dir, isinstance(data, _text_type))
        if data:
            if isinstance(data, _text_type):
                data = data.encode('utf-8')
            try:
                os.write(fd, data)
            except:
                os.close(fd)
                self.delete()
                raise
        os.close(fd)
def exists(self):
return os.path.isfile(self.name)
def delete(self):
try:
os.remove(self.name)
except OSError:
pass
def open(self, *args, **kwargs):
return open(self.name, *args, **kwargs)
def __enter__(self):
return self.name
def __exit__(self, exc_type, exc_val, exc_tb):
self.delete()
def __del__(self):
self.delete()
</code></pre>
<p>Usage:</p>
<pre><code>with ClosedTemporaryFile(suffix='.txt') as temp_file:
do_stuff(temp_file)
</code></pre>
|
python|linux|windows|file-io|io
| 0 |
1,901,898 | 12,140,238 |
General formula for program?
|
<p>Can anyone help me find a general formula: when i = 1 the result should be zero ('0'), and for every other number (2, 3, 4, ...) it should be one ('1'). 'i' starts from 1. Thanks in advance.</p>
|
<p>Try this</p>
<pre><code> result = (i != 1) ? 1 : 0
</code></pre>
<p>It seems there is some ambiguity as to which language you are using, but anything that supports the ternary operator will work this way; you may just have to tweak the syntax.</p>
<p>If you can't use ternary operators, the pseudo-code is</p>
<pre><code> result = 1;
if (i == 1) {
result = 0;
}
</code></pre>
<p>In python this would be</p>
<pre><code>result = 1
if i == 1:
result = 0
</code></pre>
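<p>Since the question asks for a formula rather than a branch, you can also lean on the fact that a boolean converts to 0 or 1 (Python shown; most languages have an equivalent cast):</p>

```python
def f(i):
    # i == 1 -> False -> 0, anything else -> True -> 1
    return int(i != 1)

print([f(i) for i in range(1, 6)])  # [0, 1, 1, 1, 1]
```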
|
python|math
| 2 |
1,901,899 | 22,958,839 |
python Occurring letters in a string. Index Counting
|
<p>Just trying to write a basic function that should print the index numbers of a certain letter in a word.</p>
<p>Below is the function I wrote. It only prints out the first index of the letter I give.</p>
<pre><code>def ind_fnd(word, char):
"""
>>> ind_fnd("apollo", "o")
'2 5 '
>>> ind_fnd("apollo", "l")
'3 4 '
>>> ind_fnd("apollo", "n")
''
>>> ind_fnd("apollo", "a")
'0'
"""
index = 0
while index < len(word):
if word [index] == char:
return index
index += 1
</code></pre>
<p>The above is the type of function I need. I can't figure out what is missing.</p>
|
<p>You shouldn't return the index straightaway since that terminates the function. Instead do it as follows:</p>
<pre><code>def ind_fnd(word, char):
"""
>>> ind_fnd("apollo", "o")
'2 5 '
>>> ind_fnd("apollo", "l")
'3 4 '
>>> ind_fnd("apollo", "n")
''
>>> ind_fnd("apollo", "a")
'0'
"""
index = 0
indices = []
while index < len(word):
        if word[index] == char:
indices.append(index)
index += 1
return indices
</code></pre>
<p>This creates a list of all the indices and then returns it.</p>
<h3>Examples</h3>
<pre><code>>>> ind_fnd('apollo','o')
[2,5]
>>> ind_fnd('eevee','e')
[0,1,3,4]
>>> ind_fnd('hello', 'z')
[]
</code></pre>
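<p>For reference, the same result can be had in one line with <code>enumerate</code>:</p>

```python
def ind_fnd(word, char):
    # collect every index at which the character occurs
    return [index for index, letter in enumerate(word) if letter == char]

print(ind_fnd('apollo', 'o'))  # [2, 5]
print(ind_fnd('apollo', 'n'))  # []
```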
|
python
| 3 |