Unnamed: 0
int64
0
1.91M
id
int64
337
73.8M
title
stringlengths
10
150
question
stringlengths
21
64.2k
answer
stringlengths
19
59.4k
tags
stringlengths
5
112
score
int64
-10
17.3k
1,904,300
32,962,732
How to read & update a mongodb collection using python 2.7?
<p>I’m trying to connect to the mongo db and update the collection, my use case is:- Search the collection with a particular runid/batchid, if it is found then insert a new column as “Status” and value = “complete for that document." Here is an array of run_ids:</p> <pre><code>144019800231859 144019800231659 144222840203400 144019800231559 144019800231759 144019800231459 144019080231459 144223560212920 144222120225005 </code></pre> <p>This is how my collection looks like:</p> <pre><code>{ "_id" : ObjectId("537d171c2273d16335000001"), "file" : "/data/raw/xact/logs/edw/*/xxxx8835/ex14052114.log.gz", "created_at" : 1400706844, "batch_id" : NumberLong(1400706390198590000), "run_id" : NumberLong(140070639019859) } </code></pre> <p>So, if I find a run_id from my list in the collection then I want to add one more column to the collection. So final output should look like this.</p> <pre><code> { "_id" : ObjectId("537d171c2273d16335000001"), "file" : "/data/raw/xact/logs/edw/*/qaz02htp8835/noarchive/ex14052114.log.gz", "created_at" : 1400706844, "batch_id" : NumberLong(1400706390198590000), "run_id" : NumberLong(140070639019859) “Status” : “complete" } </code></pre> <p>Can anyone please help me achieve this?</p> <p>Here is what I have written so far.</p> <pre><code>from pymongo import MongoClient # Comes with pymongo client = MongoClient('mongodb://user:hadoop_qa@localhost.corp.homestore.net:33307/') db = client['jobs_qa'] collection = db['export_batch_tracker'] print collection </code></pre>
<p><strong>Notes:</strong></p> <p>Loop through the given ids and <strong><code>set status ="Complete"</code></strong> where run id matches </p> <p>You could do the same thing for batch_id</p> <p><strong>Code:</strong></p> <pre><code>from pymongo import MongoClient # Comes with pymongo client = MongoClient('mongodb://user:hadoop_qa@localhost.corp.homestore.net:33307/') db = client['jobs_qa'] collection = db['export_batch_tracker'] run_ids=[144019800231859,144019800231659,144222840203400,144019800231559,144019800231759,144019800231459,144019080231459,144223560212920,144222120225005] for ids in run_ids: collection.update({"run_id":ids},{"$set":{"Status":"Complete"}}) </code></pre>
mongodb|python-2.7|amazon-s3
0
1,904,301
47,170,879
What is partitioner parameter in Tensorflow variable_scope used for?
<p><code>tf.variable_scope</code> has a <code>partitioner</code> parameter as mentioned in <a href="https://www.tensorflow.org/api_docs/python/tf/variable_scope#__init__" rel="noreferrer">documentation</a>.</p> <p>As I understand it's used for distributed training. Can anyone explain it in more details what is the correct use of it?</p>
<p>Huge tensorflow variables can be sharded across several machines (see <a href="https://stackoverflow.com/q/38352494/712995">this discussion</a>). Partitioner is the mechanism, through which tensorflow distributes and assembles the tensors back, so that the rest of the program doesn't know these implementation details and works with tensors the usual way.</p> <p>You can specify the partitioner per one variable via <a href="https://www.tensorflow.org/api_docs/python/tf/get_variable" rel="noreferrer"><code>tf.get_variable</code></a>:</p> <blockquote> <p>If a partitioner is provided, a PartitionedVariable is returned. Accessing this object as a Tensor returns the shards concatenated along the partition axis.</p> </blockquote> <p>Or you define the default partitioner for the whole scope via <a href="https://www.tensorflow.org/api_docs/python/tf/variable_scope" rel="noreferrer"><code>tf.variable_scope</code></a>, which will affect all variables defined in it.</p> <p>See the list of available partitioners in tensorflow 1.3 on <a href="https://www.tensorflow.org/versions/r0.12/api_docs/python/state_ops/variable_partitioners_for_sharding" rel="noreferrer">this page</a>. The simplest one is <code>tf.fixed_size_partitioner</code>, which shards the tensor along the specified axis. Here's an example usage (from <a href="https://stackoverflow.com/q/40628977/712995">this question</a>):</p> <pre class="lang-py prettyprint-override"><code>w = tf.get_variable("weights", weights_shape, partitioner=tf.fixed_size_partitioner(num_shards, axis=0), initializer=tf.truncated_normal_initializer(stddev=0.1)) </code></pre>
python|tensorflow|sharding|partition
8
1,904,302
70,708,634
Reading a text file (tab/space delimited) having named columns into lists with the lists having the same name as the column name
<p>My text file looks like this:</p> <pre><code>x y z D 0 0 350 10 50 -50 400 15 100 50 450 10 -25 100 500 10 </code></pre> <p>where the columns are tab-separated. I want to import it into 4 Python lists having the name of the columns:</p> <pre><code>x = [0, 50, 100, -25] y = [0, -50, 50, 100] z = [350, 400, 450, 500] D = [10, 15, 10, 10] </code></pre> <p>Is it possible to do such using some in-built functions without resorting to importing Pandas or some special packages?</p>
<p>I suggest this approach...</p> <p>Construct a dictionary keyed on the column names (x, y, z, D)</p> <p>Each key has a value which is a list.</p> <p>Consume the file appending the individual values to the appropriate keys.</p> <pre><code>from collections import defaultdict with open('t.txt') as infile: cols = next(infile).strip().split() d = defaultdict(list) for line in infile: for i, t in enumerate(line.strip().split()): d[cols[i]].append(int(t)) for k, v in d.items(): print(f'{k} = {v}') </code></pre> <p><strong>Output</strong>:</p> <pre><code>x = [0, 50, 100, -25] y = [0, -50, 50, 100] z = [350, 400, 450, 500] D = [10, 15, 10, 10] </code></pre>
python|parsing|file-handling
0
1,904,303
70,732,534
Change Tkinter Background color when a function runs
<p>I'm trying to change the background color of tkinter when a particular function runs.</p> <p>Problem i'm facing is the background color doesnt change until after the function finishes</p> <p>Is there a better way to do this?</p> <p>End results will have more methods when each method is called it should change the background color.</p> <pre><code>class Test: window = Tk() def TT(): Test.window.configure(background='red') tester_function() def GUI(self): Button(Test.window , text=&quot;ON&quot;,width=6, command=Test.TT).grid(row=0,column=0, sticky=W) Test.window.configure(background='black') Test.window.mainloop() Test().GUI() </code></pre>
<p>Try adding this line after you change the background color :</p> <blockquote> <p>Test.window.update()</p> </blockquote> <p>This will effectivily refresh the window thus changing the background color.</p>
python|class|tkinter|tk
1
1,904,304
70,448,781
Combine numpy arrays to list if first index value is similar
<p>How to combine arrays inside 2D array to a list if the <strong>first index value</strong> is the same?</p> <p>Say for example, this 2D array:</p> <pre><code>[[0 0] [0 3] [0 4] [1 1] [2 2] [3 0] [3 3] [3 4] [4 0] [4 3] [4 4]] </code></pre> <p>How will I make it to something like this?</p> <pre><code>[[0, 0, 3, 4], [1 1], [2 2], [3 0 3 4], [4 0 3 4]] </code></pre> <p>When converting from numpy to list, I need it to be optimized as there are thousands of rows on my end.</p> <p>Any suggestion is much appreciated. The end goal is I want the <strong>second index value</strong> to be combined together.</p> <p><em>Also, take into consideration if ever the first index value is not in ascending order.</em></p>
<p>You can do it like that:</p> <pre><code>my_list = [ [0, 0], [0, 3], [0, 4], [1, 1], [2, 2], [3, 0], [3, 3], [3, 4], [4, 0], [4, 3], [4, 4] ] from collections import defaultdict d = defaultdict(list) for i, j in my_list: d[i].append(j) combined = [[i]+l for i,l in d.items()] </code></pre>
python|python-3.x|numpy
2
1,904,305
55,794,829
How to solve the error "AttributeError: type object 'Image' has no attribute 'open'" in python?
<p>I am trying to display an image in Tkinter canvas along with some text and I am running into the following error. Also, my mac doesn't show background colors for buttons when run using Spyder in anaconda (Spyder up-to-date).</p> <p>My python code is:</p> <pre><code>from tkinter import * from PIL import ImageTk,Image def plot_best_batsmen(): best_batsmen = dataset.loc[dataset.loc[dataset['Innings']&gt;=15,'Average'].idxmax(),'Names'] message = ("The best Batsman of the Tournament could possibly be: ", best_batsmen) canvas_width = 500 canvas_height = 500 root = Tk() root.geometry("600x600") root.title("New Window") canvas = Canvas(root, width=canvas_width, height=canvas_height) canvas.create_text(1, 10, anchor=W, text=message) img = ImageTk.PhotoImage(Image.open("prediction.jpg")) canvas.create_image(20, 20, anchor=NW, image=img) canvas.image = img canvas.pack() root.mainloop() </code></pre> <p>It's displaying an error message as follows when running:</p> <pre><code>Exception in Tkinter callback Traceback (most recent call last): File "/anaconda3/lib/python3.7/tkinter/__init__.py", line 1705, in __call__ return self.func(*args) File "/Users/vivekchowdary/Documents/PS3_Final_Project/Batsmen.py", line 110, in plot_best_batsmen canvas.create_image(20, 20, anchor=NW, image=img) File "/anaconda3/lib/python3.7/tkinter/__init__.py", line 2489, in create_image return self._create('image', args, kw) File "/anaconda3/lib/python3.7/tkinter/__init__.py", line 2480, in _create *(args + self._options(cnf, kw)))) _tkinter.TclError: image "pyimage3" doesn't exist </code></pre> <p>Code for buttons is as follows:</p> <pre><code>b1 = Button(root, text="Elbow Method", command=plot_elbow, bg="green", fg="white").pack(side = LEFT) b2 = Button(root, text="K-Means Clustering", command=plot_kmeans, bg="blue", fg="white").pack(side = LEFT) b3 = Button(root, text="Batsmen who scored 4 or more Hundreds", command=plot_hundreds, bg="#D35400", fg="white").pack(side = LEFT) b4 = Button(root, text="Runs Scored by Various Players", command=plot_runs, bg="#117A65", fg="white").pack(side = LEFT) b5 = Button(root, text="Best Batsmen", command=plot_best_batsmen, bg="#34495E", fg="white").pack(side = LEFT) b6 = Button(root, text="Stop", command=root.destroy, bg="red", fg="white").pack(side = BOTTOM) </code></pre> <p>I want Tkinter to display the following image. But it's reporting an error instead. Can anyone please help me in solving this error?</p>
<p>tkinter also has a class/function called Image. You also imported Image from PIL. You need to make sure which Image.open you are trying to use. tkinter.Image doesn't have an attribute 'open'.</p>
python|python-3.x|tkinter|python-imaging-library|tkinter-canvas
2
1,904,306
66,363,756
How to get asterisk pattern one after another?
<p>I want to print two patterns in the same program but one in front of the other like this:</p> <p><a href="https://i.stack.imgur.com/0xk2V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0xk2V.png" alt="enter image description here" /></a></p> <p>I have written the rest of the code but my pattern is indented normally:</p> <pre><code>x = input(&quot;Please enter any number: &quot;) for i in range(0,5): for j in range(0,5): print('*', end=&quot;&quot;) print() print(&quot;\n&quot;) for i in range(0,5): for j in range(0,5): if (i==0 or i==5-1 or j==0 or j==5-1): print('*', end='') else: print(' ', end= '') print() </code></pre> <p>My output: <a href="https://i.stack.imgur.com/yl2Ei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yl2Ei.png" alt="enter image description here" /></a></p> <p>Any help would be appreciated!</p>
<p>Your <code>print</code> statements are not sequential. You are first creating</p> <pre><code>***** ***** ***** ***** ***** </code></pre> <p>Then</p> <pre><code>***** * * * * * * ***** </code></pre> <p>But you need to create each row at the same time.</p> <p><strong>How it is working</strong></p> <p>Print <code>*</code> <code>x</code> number of times. In the same row give a space <code> </code>. Then print <code>*****</code> or <code>* *</code> depending on whether it is the first or the last row. <code>('*' if i in [0, x-1] else ' ')*(x-2)</code> is checking the condition for the same.</p> <pre class="lang-py prettyprint-override"><code>x = int(input(&quot;Please enter any number: &quot;)) for i in range(x): print('*'*x + ' ' + '*' + ('*' if i in [0, x-1] else ' ')*(x-2) + '*') </code></pre> <pre><code>***** ***** ***** * * ***** * * ***** * * ***** ***** </code></pre>
python|python-3.x|nested-for-loop
0
1,904,307
66,526,801
Telegram python check key value from JSON
<p>i'm creating a telegram bot using pyhton, specifically the following module</p> <blockquote> <p><a href="https://github.com/python-telegram-bot/python-telegram-bot" rel="nofollow noreferrer">https://github.com/python-telegram-bot/python-telegram-bot</a></p> </blockquote> <p>What i want to do is:</p> <ul> <li>send input to the bot</li> <li>read the update object to analyze each field</li> <li>check if 'text' key is present</li> <li>do something if yes</li> </ul> <p>My current python implementation:</p> <pre><code>def echo(update: Update, context: CallbackContext) -&gt; None: if 'text' in update.message: update.message.reply_text('I found your key value you are looking for') else: update.message.reply_text('Key not found') def main(): &quot;&quot;&quot;Start the bot.&quot;&quot;&quot; # Create the Updater and pass it your bot's token. updater = Updater(MY_TOKEN) # Get the dispatcher to register handlers dispatcher = updater.dispatcher dispatcher.add_handler(MessageHandler(Filters.text &amp; ~Filters.command, echo)) updater.start_polling() updater.idle() if __name__ == '__main__': main() </code></pre> <p>Structure of <code>update</code> object:</p> <pre><code>{ &quot;update_id&quot;:id_update, &quot;message&quot;:{ &quot;message_id&quot;:1579, &quot;date&quot;:1615193338, &quot;chat&quot;:{ &quot;id&quot;:id_chat, &quot;type&quot;:&quot;private&quot;, &quot;username&quot;:&quot;XXX&quot;, &quot;first_name&quot;:&quot;XXX&quot; }, &quot;text&quot;:&quot;Hello Bot&quot;, &quot;entities&quot;:[ ], &quot;caption_entities&quot;:[ ], &quot;photo&quot;:[ ], &quot;new_chat_members&quot;:[ ], &quot;new_chat_photo&quot;:[ ], &quot;delete_chat_photo&quot;:false, &quot;group_chat_created&quot;:false, &quot;supergroup_chat_created&quot;:false, &quot;channel_chat_created&quot;:false, &quot;from&quot;:{ &quot;id&quot;:id_chat, &quot;first_name&quot;:&quot;xxx&quot;, &quot;is_bot&quot;:false, &quot;username&quot;:&quot;xxx&quot;, &quot;language_code&quot;:&quot;it&quot; } } } </code></pre> <p>When i test it i didn't get any output from the bot, it seems like it is ignoring the if/else condition. If i print the <code>update.message.text</code> i see correctly the input sent to the bot.</p> <p>Thank you all</p> <blockquote> <p>EDIT</p> </blockquote> <p>I found the solution, i had to change the filter passed to MessageHandler in this way</p> <pre><code>dispatcher.add_handler(MessageHandler(Filters.all, echo)) </code></pre> <p>Thanks anyway for the help</p>
<p>Your edit is very likely not the actual solution. Using <code>Filters.all</code> instead of <code>Filters.text &amp; ~Filters.command</code> just says that the <code>MessageHandler</code> will catch <em>any</em> message and not just messages that contain text and don't start with a botcommand.</p> <p>The problem rather is that <code>'text' in update.message</code> can't work as <code>update.message</code> is a <code>telegram.Message</code> object and not iterable. Therefore <code>'text' in update.message</code> will probably throw an error, which you don't see as you have neither logging enabled nor an error handler registered (see the PTB readme and wiki, respectively for info on logging &amp; error handlers).</p> <p>My guess is that changing to <code>'text' in update.message.text</code> should do the trick.</p>
python|json|python-3.x|telegram-bot|python-telegram-bot
1
1,904,308
65,044,660
How to efficiently filter pandas dataframe
<p>I have this huge dataset (100M rows) of consumer transactions that looks as follows:</p> <pre><code>df = pd.DataFrame({'id':[1, 1, 2, 2, 3],'brand':['a','b','a','a','c'], 'date': ['01-01-2020', '01-02-2020', '01-05-2019', '01-06-2019', '01-12-2018']}) </code></pre> <p>For each row (each transaction), I would like to check if the same person (same &quot;id&quot;) bought something <strong>in the past</strong> for a <strong>different</strong> brand. The resulting dataset should look like this:</p> <pre><code> id brand date check 0 1 a 01-01-2020 0 1 1 b 01-02-2020 1 2 2 a 01-05-2019 0 3 2 a 01-06-2019 0 4 3 c 01-12-2018 0 </code></pre> <p>Now, my solution was:</p> <pre><code>def past_transaction(row): x = df[(df['id'] == row['id']) &amp; (df['brand'] != row['brand']) &amp; (df['date'] &lt; row['date'])] if x.shape[0]&gt;0: return 1 else: return 0 df['check'] = df.appy(past_transaction, axis=1) </code></pre> <p>This works well, but the performance is abysmal. Is there a more efficient way to do this (with or without Pandas)? Thanks!</p>
<p>I would personally use two booleans,</p> <p>First check if the <code>id</code> is duplicated. Second is to check for those that are <em>not</em> duplicated id &amp; brand</p> <pre><code>import numpy as np s = df.duplicated(subset=['id'],keep='first') s1 = ~df.duplicated(subset=['id','brand'],keep=False) df['check'] = np.where(s &amp; s1,1,0) id brand date check 0 1 a 01-01-2020 0 1 1 b 01-02-2020 1 2 2 a 01-05-2019 0 3 2 a 01-06-2019 0 4 3 c 01-12-2018 0 </code></pre>
python|pandas|dataframe
3
1,904,309
64,788,988
max() function in Pandas Dataframe
<p>I am trying to create a derived column from existing columns in my dataframe. The line of code looks like this:</p> <pre><code>df['New_Column'] = (df['Column1']-df['Column2'])/max(df['Column2'], 5) </code></pre> <p>Obviously, this returns an error:</p> <blockquote> <p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p> </blockquote> <p>This is because the max() function is comparing &quot;5&quot; with all values of the series, rather than each specific value and &quot;5&quot;.</p> <p>Is there an easy way to fix this?</p>
<p><code>max</code> is a Python function and only accept one single scalar. I believe you want <code>clip(lower=5)</code>, i.e. if some values in the series are less than <code>5</code> replace them with <code>5</code>:</p> <pre><code>df['New_Column'] = (df['Column1']-df['Column2'])/df['Column2'].clip(lower=5) </code></pre>
python|python-3.x|pandas|dataframe
0
1,904,310
63,779,514
Create new dataframe based on the absence / presence of all words in a dictionary
<p>I would like to process a list of sentences into a new dataframe, the dataframe should have the maximum number of columns based on the number of unique number of words in a vocabulary.</p> <p>In the dataframe each column should indicate if a word of the sentence is present in the dictionary- if true (fill in value 1) or not true (fill in value 0).</p> <p>List of sentences:</p> <pre><code>sentence = [['I','like','fruit'],['cars','are','great'],['great','time','eating','fruit']] </code></pre> <p>Vocabulary that contains all unique words - total length of vocabulary = 8</p> <pre><code>vocab = ['I','like','fruit','cars','are','great','time','eating'] </code></pre> <p>Finally, I would like to add the corresponding label to each sentence.</p> <p>Labels:</p> <pre><code>labels = ['Fruit','Cars','Fruit'] </code></pre> <p>The dataframe filled with 0 values is created like so for now:</p> <pre><code>new_df = pd.DataFrame(index=np.arange(4), columns=np.arange(8)) new_df = new_df.fillna(0) </code></pre> <p>Expected Results:</p> <pre><code> Word1 Word2 Word3 Word4 Word5 Word6 Word7 Word8 Label Sentence1 1 1 1 0 0 0 0 0 Fruit Sentence2 0 0 0 1 1 1 0 0 Car Sentence3 0 0 1 0 0 1 1 0 Fruit </code></pre>
<pre><code>sentences = [ ['I','like','fruit'], ['cars','are','great'], ['great','time','eating','fruit'] ] # For each sentence, create a dictionary of &lt;word&gt;: 1 for each word words_dict = [{word: 1 for word in sentence} for sentence in sentences] # Convert to data frame, fill in the empty values and rename the columns as required df = pd.DataFrame(words_dict).fillna(0) df.columns = ['Word{}'.format(i+1) for i in range(len(df.columns))] </code></pre> <p>This is pretty naive; you'd have to investigate how efficient panda's &quot;list of dictionaries to DataFrame&quot; and &quot;fill in sparse data frame&quot; are.</p>
python-3.x|pandas|dictionary
1
1,904,311
65,100,675
Is there a way to multiply the values in two columns while grouping by values in third column using pandas?
<p>So I'm trying to avoid using a loop while calculating the mean of the weighted grades in each of these courses.</p> <p>I just can't wrap my head around what to do. I assume I can use groupby and perform the appropriate calcualtions?</p> <p>This is the dataframe:</p> <pre><code>data = mark weight course_id 78 10 1 87 40 1 15 50 1 78 90 3 40 10 3 </code></pre> <p>This is the desired result:</p> <pre><code>result= course_id course_average 1 50.1 3 74.2 </code></pre>
<p>This is one way to go about it :</p> <pre><code>(df.assign(course_average=df.mark * df.weight) .groupby(&quot;course_id&quot;) .pipe(lambda x: x.course_average.sum().div(x.weight.sum())) .reset_index(name=&quot;course_average&quot;)) course_id course_average 0 1 50.1 1 3 74.2 </code></pre>
python|pandas
1
1,904,312
65,462,912
python sqlite3 partial search
<p>I want to do a partial search with <code>python</code> <code>sqlite3</code>. My initial query is:</p> <pre><code>cur.execute(&quot;SELECT * FROM book WHERE title=? OR author=? OR year=? OR isbn=?&quot;, (title, author, year, isbn)) </code></pre> <p>Then I tried using the <code>LIKE</code> keyword and string formatting to obtain partial search results for the <code>title</code>, like this:</p> <pre><code>cur.execute(&quot;SELECT * FROM book WHERE title LIKE ? OR author=? OR year=? OR isbn=?&quot;, ('%{}%'.format(title), author, year, isbn)) </code></pre> <p>As in <a href="https://stackoverflow.com/a/20904256/13290801">https://stackoverflow.com/a/20904256/13290801</a></p> <p>This seems to do the trick for <code>title</code>, but now when searching on the other parameters, it's not working at all although there is no error on the terminal. What am I missing?</p> <p><strong>EDIT</strong> I tried the answer posted by @forpas, but it gives me the same results.</p> <p>So my new code for my search function is:</p> <pre><code>def search(title=&quot;&quot;, author=&quot;&quot;, year=&quot;&quot;, isbn=&quot;&quot;): conn = sqlite3.connect('books.db') cur = conn.cursor() cur.execute(&quot;SELECT * FROM book WHERE title LIKE '%' || ? || '%' OR author=? OR year=? OR isbn=?&quot;, (title, author, year, isbn)) </code></pre> <p>It works for title. If I search for &quot;amst&quot;, I get the Amsterdam title: <a href="https://i.stack.imgur.com/hKUkY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hKUkY.png" alt="Title partial search works" /></a></p> <p>But if I search by year for 1970 I get all the results: <a href="https://i.stack.imgur.com/1gs4t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1gs4t.png" alt="Year search doesn't work" /></a></p>
<p>If you want to do partial search for the title then you must concatenate the <code>'%'</code> wildcard at the start and at the end of the title that you search.<br/> Also you need an additional condition for the case that you pass an empty string for a column and the operator <code>AND</code> instead of <code>OR</code>:</p> <pre><code>sql = &quot;&quot;&quot; SELECT * FROM book WHERE (title LIKE '%' || ? || '%' OR LENGTH(?) = 0) AND (author = ? OR LENGTH(?) = 0) AND (year = ? OR LENGTH(?) = 0) AND (isbn = ? OR LENGTH(?) = 0) &quot;&quot;&quot; cur.execute(sql, (title, title, author, author, year, year, isbn, isbn)) </code></pre>
python|sql|python-3.x|sqlite|search
1
1,904,313
68,520,536
Tkinter and Flask integration not working after compiling
<p>I have a small web app designed for Flask that is designed to run locally. I used Flask because I prefer to use a web browser for the GUI even though this is a local app. I want to share this app with my coworkers, so I used pyinstaller to create an executable. This works perfectly, but terminating the process is required to stop Flask. I was hoping to do this more gracefully, and I found this:</p> <p><a href="https://stackoverflow.com/questions/47169221/run-gui-concurrently-with-flask-application">Run GUI concurrently with Flask application</a></p> <p>This sounds exactly like what I'm trying to do. After some experimenting, I finally got everything working. Launching the app opens a tkinter window. The user can either Start or Stop the Flask app with the Buttons presented. Starting the Flask app opens the default web browser to the correct page. The Flask app is started as a new process with the tkinter window being the parent/daemon. Closing the tkinter window or pressing the Stop button terminates the Flask server. Everything works as expected in my development environment, but not when I compile it for Windows with pyinstaller.</p> <p>As a .exe file, the tkinter window opens, but the Flask server is not started. When the Start button is pressed, the web page opens but a standard &quot;site not found&quot; message is displayed. Then, my tkinter window duplicates itself so there are two windows displayed. If I edit my code to start the Flask server immediately and bypass opening the tkinter window, everything works again.</p> <p>Below is the layout of my app:</p> <pre><code>| run.py +---my_flask_app | | forms.py | | routes.py | | __init__.py | | | +---static | | +---css | | +---img | | \---js | | | +---templates </code></pre> <p>I'll just post the code where Flask is being started from tkinter in run.py unless more is needed:</p> <pre><code>def start(self): global p #global so can be accessed in stop function. p = Process(target=startFlask,) p.daemon=True p.start() webbrowser.open(url='http:127.0.0.1:5000', new=2) def startFlask(): app.run() </code></pre> <p>The pyinstaller commandline is:</p> <pre><code>pyinstaller -F --add-data &quot;my_flask_app;my_flask_app&quot; --onefile run.py </code></pre> <p>I've tried a few variations of this will no luck. In summary, I can start the Flask app from a .exe as long as I don't involve the tkinter GUI. Also, I can start the Flask app from the tkinkter GUI in my development environment, but this breaks as described when I compile to a .exe.</p>
<p>The answer to this problem is to use &quot;freeze_support()&quot;. Apparently, this is required when converting a python app using multiprocessing to a Windows executable. I don't exactly understand why, but I'm glad everything is working now!</p> <p><a href="https://docs.python.org/3/library/multiprocessing.html#multiprocessing.freeze_support" rel="nofollow noreferrer">https://docs.python.org/3/library/multiprocessing.html#multiprocessing.freeze_support</a></p>
python|user-interface|flask|python-multiprocessing
0
1,904,314
71,514,400
Pandas How to retrieve the value of datetime index with apply lambda?
<p>Here is the Demo:</p> <pre><code>import pandas as pd from datetime import datetime df_test2 = pd.DataFrame({'dt_str': {0: '03/01/2017-09:16:00', 1: '03/01/2017-09:17:00', 2: '03/01/2017-09:18:00', 3: '03/01/2017-09:19:00', 4: '03/01/2017-09:20:00'}, 'Open': {0: 21920, 1: 21945, 2: 21940, 3: 21981, 4: 21988}, 'Close': {0: 21945, 1: 21940, 2: 21981, 3: 21988, 4: 21977}, 'Volume': {0: 935, 1: 541, 2: 651, 3: 314, 4: 318}}) df_test2['dt'] = df_test2['dt_str'].apply(lambda x: datetime.strptime(x, '%d/%m/%Y-%H:%M:%S')) df_test2.reset_index(drop=True, inplace=True) df_test2.set_index('dt', inplace=True) # The question is how can I get the value of the datetime index in lambda function def condition(x): # handling index value, like checking the datetime's weekday print(x['dt'].weekday()) return True df_test2.apply(lambda x: condition(x), axis=1) </code></pre> <p>When I'm calling <code>x['dt']</code> it raised Key error:</p> <pre><code>KeyError Traceback (most recent call last) xxx pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() KeyError: 'dt' </code></pre> <h2>Update</h2> <p>if I use <code>x.index</code>, it raised:</p> <pre><code>AttributeError: 'Index' object has no attribute 'weekday' </code></pre>
<p>Strangely enough, you need to use <code>x.name</code> instead of <code>x['dt']</code>, since <code>dt</code> is now the index:</p> <pre><code>def condition(x): # handling index value, like checking the datetime's weekday print(x.name.weekday()) return True df_test2.apply(lambda x: condition(x), axis=1) </code></pre> <p>(Also, note that when your function takes one param, it's redundant to use <code>.apply(lambda x: condition(x), axis=1)</code> - instead just write <code>.apply(condition, axis=1)</code>)</p> <p>Output:</p> <pre><code>In [289]: def condition(x): ...: # handling index value, like checking the datetime's weekday ...: print(x.name.weekday()) ...: return True ...: ...: df_test2.apply(lambda x: condition(x), axis=1) 1 1 1 1 1 Out[289]: dt 2017-01-03 09:16:00 True 2017-01-03 09:17:00 True 2017-01-03 09:18:00 True 2017-01-03 09:19:00 True 2017-01-03 09:20:00 True dtype: bool </code></pre>
python|pandas
1
1,904,315
71,691,311
.py file executed by C# process not waiting to finish
<p>I want to run .py file from my C# project, and get the result. The python script is making an API request, and returns an auth_key token, which I want to use in my C# code. The only problem is that, for some reason the C# code doesn't wait for the process to finish, and thus that not every account has auth_key. Here is my C# code.</p> <pre><code>private static void GenerateTokens() { var url = ConfigurationManager.AppSetting[GeSettingsNode() + &quot;:ip&quot;]; for (int i = 0; i &lt; accounts.Count; i++) { ProcessStartInfo start = new ProcessStartInfo(); start.FileName = ConfigurationManager.AppSetting[&quot;PythonPath&quot;]; start.Arguments = string.Format($&quot;python_operation_processor.py {accounts[i].client_key_id} {accounts[i].key_sercret_part} {url}&quot;); start.UseShellExecute = false; start.RedirectStandardOutput = true; Process process = Process.Start(start); using (StreamReader reader = process.StandardOutput) { accounts[i].auth_key = reader.ReadToEnd().Trim(); } } } </code></pre> <p>And here is my Python script ( python_operation_processor.py )that's making the API requests.</p> <pre><code>if __name__ == '__main__': client_key_id = sys.argv[1] client_secret = sys.argv[2] API_URL = sys.argv[3] nonce = str(uuid.uuid4()) d = datetime.datetime.now() - datetime.timedelta(hours=3) timestamp = d.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z' signature = b64encode(hmac.new(b64decode(client_secret), msg=bytes(client_key_id + nonce + timestamp, 'utf-8'), digestmod=hashlib.sha256).digest()).decode('utf-8') r = requests.post(API_URL + '/v1/authenticate', json={'client_key_id': client_key_id, 'timestamp': timestamp, 'nonce': nonce, 'signature': signature}) if r.status_code != 200: raise Exception('Failed to authenticate: ' + r.text) auth_token = r.json()['token'] print(auth_token) </code></pre> <p>Do you have any idea, how I can wait for the execution of every process, and get the token for every account ?</p>
<p>I recently created something similar and ended up with this because, whilst waiting for the process is easy, it is tricky to get the output stream filled correctly.</p> <p>The method presented also allow you to display the output into a textblock or similar in your application.</p> <p>If you use it like this, the token will be written to the StringBuilder, and used as return value.</p> <pre><code>private async Task&lt;string&gt; RunCommand(string fileName, string args) { var timeoutSignal = new CancellationTokenSource(TimeSpan.FromMinutes(3)); ProcessStartInfo start = new ProcessStartInfo(); start.FileName = fileName; start.Arguments = string.Format(&quot;{0}&quot;, args); start.RedirectStandardOutput = true; start.RedirectStandardError = true; start.UseShellExecute = false; start.CreateNoWindow = true; var sb = new StringBuilder(); using (Process process = new Process()) { process.StartInfo = start; process.OutputDataReceived += (sender, eventArgs) =&gt; { sb.AppendLine(eventArgs.Data); //allow other stuff as well }; process.ErrorDataReceived += (sender, eventArgs) =&gt; {}; if (process.Start()) { process.EnableRaisingEvents = true; process.BeginOutputReadLine(); process.BeginErrorReadLine(); await process.WaitForExitAsync(timeoutSignal.Token); //allow std out to be flushed await Task.Delay(100); } } return sb.ToString(); } </code></pre> <p>To render this to a textblock in a UI application, you'll need to:</p> <ul> <li>implement an event which signals a new line has been read, which means forwarding the <code>process.OutputDataReceived</code> event.</li> <li>if your thinking about a live feed, make sure you flush the stdio buffer in python setting flush to true: <code>print(&quot;&quot;hello world&quot;&quot;, flush=True)</code></li> </ul> <p>If you're using an older .net version; you can implement the <code>WaitForExitAsync</code> as described here: <a href="https://stackoverflow.com/a/17936541/2416958">https://stackoverflow.com/a/17936541/2416958</a> as an extention method:</p> <pre><code>public static class ProcessHelpers { public static Task&lt;bool&gt; WaitForExitAsync(this Process process, TimeSpan timeout) { ManualResetEvent processWaitObject = new ManualResetEvent(false); processWaitObject.SafeWaitHandle = new SafeWaitHandle(process.Handle, false); TaskCompletionSource&lt;bool&gt; tcs = new TaskCompletionSource&lt;bool&gt;(); RegisteredWaitHandle registeredProcessWaitHandle = null; registeredProcessWaitHandle = ThreadPool.RegisterWaitForSingleObject( processWaitObject, delegate(object state, bool timedOut) { if (!timedOut) { registeredProcessWaitHandle.Unregister(null); } processWaitObject.Dispose(); tcs.SetResult(!timedOut); }, null /* state */, timeout, true /* executeOnlyOnce */); return tcs.Task; } } </code></pre>
python|c#|api|request
1
1,904,316
10,637,483
How to delete parts of array efficiently
<p>I am writing a numpy/cython program to compute the minors of small matrices (a lot of them).</p> <p>My current function looks like (computes the minor of mat wrt. to row ii, col jj):</p> <pre><code>cdef float minor(np.ndarray[DTYPE_t, ndim = 2] mat,int ii,int jj): rows = range(mat.shape[0]) col = range(mat.shape[0]) del rows[ii] del col[jj] cdef np.ndarray[DTYPE_t, ndim = 2] rM = (mat[rows])[:,col] cdef float val = (-1)**(ii+jj) * np.linalg.det(rM) return val </code></pre> <p>After a little benchmarking, the line</p> <pre><code>cdef np.ndarray[DTYPE_t, ndim = 2] rM = (mat[rows])[:,col] </code></pre> <p>Is rather time consuming. Is there a better way to remove one row and one column from an two dimensional array?</p> <p>Yours,</p> <p>cp3028</p>
<p>It looks like you're copying the memory from <code>(mat[rows])[:,col]</code>, allocation and copying is a slow process. Is it not possible to simply make the function call <code>np.linalg.deg</code> on the chunks of <code>mat</code> in place, instead of copying it and calculating the determinant on the copy?</p>
numpy|scipy|cython
1
1,904,317
62,634,182
Are there any rules when it comes to determining the Order and the Seasonal Order in a SARIMA?
<p>Are there any rules when it comes to determining the Order and the Seasonal Order in a SARIMA?</p> <p>I have noticed that when I use StatsModels in Python I can't choose a seasonal lag that is below or equal to the number of AR lags.</p> <p><strong>Example</strong>:</p> <p>I am running a SARIMA with order (3,1,3) and seasonal order (3,1,3,3).</p> <p>This generates an error: ValueError: Invalid model: autoregressive lag(s) {3} are in both the seasonal and non-seasonal autoregressive components.</p>
<ul> <li>Specifying an order of (3, *, *) includes lags 1, 2, and 3</li> <li>Specifying a seasonal order of (3,<em>,</em>,3) includes lags 3, 6, 9, and 12.</li> </ul> <p>By specifying this model, you would be including the third lag twice, which can cause numerical problems when estimating the parameters.</p> <p>Instead, you should specify: order=(2, 1, 3) and seasonal_order=(3, 1, 3, 3). Then you will include the third lag as you want, but you won't have a duplicate.</p>
python|time-series|arima
4
1,904,318
61,959,004
Should I be able to use gspread with Python to update a cell in Google Sheets with a formula/function?
<p>I'm trying to set a cell value to a formula/function to use in the Google Sheet, the function will reference other tabs in the workbook. Whenever it updates the cell with the function, it has an apostrophe (') at the beginning of the function. When I remove the apostrophe in Google Sheets, the function performs as I hoped but I'm trying to automate this process. </p> <p>This is the code I use to set the cell value:</p> <pre><code>func = '=IF(NOT(ISERROR(MATCH("' + c + '", Dictionary!A:A, 0))), VLOOKUP("' + c + '", Dictionary!A:D, 4,FALSE), "' + c + '")' cells.append( Cell(row=rnum+1, col=cnum+1, value=func) </code></pre> <p>The function is sent to the Google Sheet but looks like this, I don't see the apostrophe in the cell until I click on it: <code>'=IF(NOT(ISERROR(MATCH("Word", Dictionary!A:A, 0))), VLOOKUP("Word", Dictionary!A:D, 4,FALSE), "Word")</code></p> <p>Is this a bug? When I debug, I am able to see the function as a string with no apostrophes except for those that the object uses to classify it as a string, which makes me wonder if the gspread library is accidentally sending the function to the cell with the apostrophe. </p> <p>EDIT: The work around I found was to use update_acell to find the specific cells that needed the formula. It's not perfect but it works for now. I'd still love to know why the extra apostrophe is being added. Thank you!!</p>
<p>When you put the value of <code>cells</code> using <code>update_cells</code>, how about this modification?</p> <h3>Modification points:</h3> <ul> <li>In this modification, <code>value_input_option="USER_ENTERED"</code> is added to <code>update_cells</code>. By this, the formula is put to the cell as the formula.</li> <li>When <code>update_cells</code> is used without <code>value_input_option="USER_ENTERED"</code>, the default value of <code>value_input_option</code> is <code>RAW</code>. By this, <code>'</code> is added to the top character of the formula. I think that this is the reason of your issue.</li> </ul> <h3>Modified script:</h3> <pre><code>func = '=IF(NOT(ISERROR(MATCH("' + c + '", Dictionary!A:A, 0))), VLOOKUP("' + c + '", Dictionary!A:D, 4,FALSE), "' + c + '")' cells.append(Cell(row=rnum+1, col=cnum+1, value=func)) worksheet.update_cells(cells, value_input_option="USER_ENTERED") # Modified </code></pre> <h3>References:</h3> <ul> <li><a href="https://gspread.readthedocs.io/en/latest/api.html?highlight=update_cells#gspread.models.Worksheet.update_cells" rel="nofollow noreferrer">update_cells(cell_list, value_input_option='RAW')</a></li> <li><a href="https://developers.google.com/sheets/api/reference/rest/v4/ValueInputOption" rel="nofollow noreferrer">ValueInputOption</a></li> </ul>
python|gspread
1
1,904,319
60,344,277
asyncio across multiple long lived client connections
<p>I am trying to write a simple application that connects to multiple rudimentary TCP servers with long lived connections where I send/receive data from each connection. Basically receiving events from the server, sending commands back and receiving results from the commands. Think of controlling a device, but i want to control <strong>N</strong> devices on a single thread. </p> <p>I have a working non-async, non-blocking implementation, but the time.sleep() are killing the responsiveness or killing the CPU, and using <code>select</code> is so much cooler.</p> <p>Given the below example, i want to connect to all three servers, and <code>await receive_message</code> on each one simultaneously. Currently, it's blocked in the <code>connect</code>'s receive_message(), so I only get this output:</p> <pre><code>Connecting to 1 Sending password to 1 Waiting for message from 1 </code></pre> <p>I'd like to get something similar to this, not exactly, but showing that the connections are all independently scheduled.</p> <pre><code>Connecting to 1 Connecting to 2 Connecting to 3 Sending password to 1 Sending password to 2 Sending password to 3 Waiting for message from 1 Waiting for message from 2 Waiting for message from 3 Connected to 1 Connected to 2 Connected to 3 Waiting for message from 1 Waiting for message from 2 Waiting for message from 3 </code></pre> <p>Watered down version of what i'm trying. No, the real server isn't this insecure.... this is just an example.</p> <pre class="lang-py prettyprint-override"><code>import asyncio class Connection: def __init__(self, name, host, port): self.name = name self.host = host self.port = port self.reader, self.writer = None, None self.connected = False async def connect(self): print(f'Connecting to {self.name}') self.reader, self.writer = await asyncio.open_connection(self.host, self.port) await self.send_message('password') response = await self.receive_message() if response == 'SUCCESS': self.connected = True print(f'Connected to {self.name}') else: raise Exception('unsuccessful connection') print(f'Connected to {self.name}') async def send_message(self, message): print(f'Sending {message} to {self.name}') self.writer.write(f'{message}\n'.encode('utf-8')) async def receive_message(self): print(f'Waiting for message from {self.name}') return (await self.reader.readline()).decode('utf-8') connections = ( Connection(1, 'localhost', 21114), Connection(2, 'localhost', 21115), Connection(3, 'localhost', 21116) ) async def run(): for connection in connections: await connection.connect() # how to receive messages from each connection as they are received for connection in connections: await connection.receive_message() asyncio.run(run()) </code></pre>
<p>The <code>await</code> in the <code>for</code> loop is effectively serializing your connections. In asyncio parallelism happens at the level of <em>task</em>, so if you want the connections to run in parallel, you need to spawn several tasks (or use a function that will do it for you, such as <code>asyncio.gather</code>). For example:</p> <pre class="lang-py prettyprint-override"><code>async def handle_connection(conn): await conn.connect() await conn.receive_message() # ... async def run(): tasks = [] for conn in connections: tasks.append(asyncio.create_task(handle_connection(conn))) # wait until all the tasks finish await asyncio.gather(*tasks) </code></pre>
python|python-3.x|network-programming|python-asyncio
1
1,904,320
60,633,562
Pass a list to str.contains - Pandas
<p>I have a pandas related question: I need to filter a column (approx. 40k entries) based on substrings included (or not) in the column. Each of the entries in the column is basically a very long list of attributes (text) which I need to be able to filter individually. This line of code works, but it is not scalable (I have hundreds of attribures I have to filter for):</p> <pre><code>df[df['Product Lev 1'].str.contains('W1 Rough wood', na=False) &amp; df['Product Lev 1'].str.contains('W1.2', na=False)] </code></pre> <p>Is there a possibiltiy to insert all the items I have to filter and pass it as a list? Or any similr solution ?</p> <p>THANK YOU!</p>
<p>Like this:</p> <pre><code>data = {'col_1': [3, 2, 1, 0], 'col_2': ['aaaaDB', 'bbbbbbCB', 'cccccEB', 'ddddddUB']} df=pd.DataFrame.from_dict(data) lst = ['DB','CB'] #replace with your list rstr = '|'.join(lst) df[df['col_2'].str.upper().str.contains(rstr)] </code></pre>
python|string|pandas|filter
1
1,904,321
60,564,002
How to display value of ManytoMany model relation variable on Django Rest Framework
<p>I'm new to django and struggling, I look over google but couldn't find anything that can help.</p> <p>Here is my model:</p> <pre><code>from django.db import models # from django.contrib.auth.models import class Order(models.Model): table_id = models.IntegerField(unique=True) meal = models.ManyToManyField('meals') # Meal = models.ForeignKey('Meals',on_delete=models.CASCADE) @property def total_price(self): price = self.meal.objects.all().aggregate(total_price=models.Sum('meals__price')) return price['total_price'] class Meals(models.Model): name = models.CharField(max_length=100) price = models.DecimalField(decimal_places=2, max_digits=5) </code></pre> <p>Here is my serializer.py :</p> <pre><code>from rest_framework import serializers from cafe_app import models class MealSerializer(serializers.ModelSerializer): class Meta: model = models.Meals fields = ['id','name','price',] class OrderSerializer(serializers.ModelSerializer): **meal = MealSerializer(read_only=True,many=True)** class Meta: model = models.Order fields = ['table_id','meal',] </code></pre> <p>When I comment meal = MealSerializer(read_only=True,many=True) line then it shows input as table_id and meal where meal values are as Meal Object (1), Mean Object (2) ... .</p> <p>My questions :</p> <ol> <li>How to display meal Object value instead of it as object.</li> <li>How Can I use total_price method in my view/serializer.</li> <li>How to see the flow like how it flows from which class to which class call goes and what is type and value of structure that I received.</li> </ol> <p>Thanks.</p>
<blockquote> <ol> <li>How to display meal Object value instead of it as object.</li> </ol> </blockquote> <p>Use the <code>__str__</code> method on Meal.</p> <blockquote> <ol start="2"> <li>How Can I use total_price method in my view/serializer.</li> </ol> </blockquote> <p>Define the annotation in your view's queryset, then add the custom field to the serializer. Do not add it to your model unless it's a one-off thing. The way you have it is pretty inefficient as it'll generate many queries for a list view.</p> <blockquote> <ol start="3"> <li>How to see the flow like how it flows from which class to which class call goes and what is type and value of structure that I received.</li> </ol> </blockquote> <p>Use a debugger in an IDE like PyCharm, <a href="https://docs.python.org/3.8/library/pdb.html" rel="nofollow noreferrer">PDB</a>, or the classic <code>print</code> statements.</p> <p>Below are the corrections I've suggested for 1 and 2.</p> <pre><code># models.py from django.db import models class Order(models.Model): table_id = models.IntegerField(unique=True) meal = models.ManyToManyField('meals') class Meals(models.Model): name = models.CharField(max_length=100) price = models.DecimalField(decimal_places=2, max_digits=5) def __str__(self): # This method defines what the string representation an instance. return f'{self.name}: ${self.price}' # serializers.py from rest_framework import serializers from cafe_app import models class MealSerializer(serializers.ModelSerializer): class Meta: model = models.Meals fields = ['id','name','price',] class OrderSerializer(serializers.ModelSerializer): total_price = serializers.FloatField(read_only=True) meal = MealSerializer(read_only=True,many=True) class Meta: model = models.Order fields = ['table_id','meal', 'total_price'] # views.py class OrderView(ViewSet): # or View. queryset = Order.objects.annotate( total_price=Sum('meal__price') ) </code></pre>
python-3.x|django-models|django-rest-framework|django-serializer|django-related-manager
1
1,904,322
70,117,990
Get the longest period where most columns overlap in pandas dataframe
<p>I have multiple timeseries each representing a column in a dataframe. I need to clean the data in the sense that I would like to remove the columns that have gaps or find the longest period where all columns have data. For example for the toy dataset:</p> <pre><code> AEDC AGGI AKVA ALME ALOD ALTX 2014-01-02 NaN 0.03 0.04 0.0040 0.38 NaN 2014-01-03 NaN NaN 58.3 0.0040 NaN 0.083 2014-01-06 NaN NaN 58.9 0.0063 NaN 0.083 2014-01-07 NaN NaN NaN 0.0065 NaN 0.083 2014-01-08 NaN 0.04 NaN 0.0080 NaN NaN </code></pre> <p>The period which I would select is 2014-01 : 03-2014-01-06 because there I have overlap for 3 columns. Is there a library that would help me in achieving this goal?</p>
<p>You need to check the presence of <code>NaN</code> in the dataframe and then add across rows to figure out the number of non-na columns in each row. Then run some sort of run length encoding to obtain the runs of equal values.</p> <pre><code>import pdrle rle = pdrle.encode(df.notna().sum(axis=1)) rle[&quot;run_end_ind&quot;] = rle.runs.cumsum() rle[&quot;run_start_ind&quot;] = rle.run_end_ind - rle.runs rle vals runs run_end_ind run_start_ind 0 4 1 1 0 1 3 2 3 1 2 2 2 5 3 </code></pre> <p>From the second row of the <code>rle</code> in the example above, we see that in rows with index <code>1</code> (<code>run_start_ind</code>) through index <code>3</code> (<code>run_end_ind</code>) of <code>df</code>, there were <code>3</code> (<code>vals</code>) columns with non-na values.</p>
python|pandas|dataframe
1
1,904,323
70,137,651
Array not being returned as expected
<p>I am working on implementing a Naive Bayes Classification algorithm. I have a method <code>def prob_continous_value</code> which is supposed to return the probability density function for an attribute given a class attribute. The problem requires classifying the following datasets:</p> <pre><code>Venue,color,Model,Category,Location,weight,Veriety,Material,Volume 1,6,4,4,4,1,1,1,6 2,5,4,4,4,2,6,1,1 1,6,2,1,4,1,4,2,4 1,6,2,1,4,1,2,1,2 2,6,5,5,5,2,2,1,2 1,5,4,4,4,1,6,2,2 1,3,3,3,3,1,6,2,2 1,5,2,1,1,1,2,1,2 1,4,4,4,1,1,5,3,6 1,4,4,4,4,1,6,4,6 2,5,4,4,4,2,4,4,1 2,4,3,3,3,2,1,1,1 </code></pre> <pre><code>Venue,color,Model,Category,Location,weight,Veriety,Material,Volume 2,6,4,4,4,2,2,1,1 1,2,4,4,4,1,6,2,6 1,5,4,4,4,1,2,1,6 2,4,4,4,4,2,6,1,4 1,4,4,4,4,1,2,2,2 2,4,3,3,3,2,1,1,1 1,5,2,1,4,1,6,2,6 1,2,3,3,3,1,2,1,6 2,6,4,4,4,2,3,1,1 1,4,4,4,4,1,2,1,6 1,5,4,4,4,1,2,1,4 1,4,5,5,5,1,6,2,4 2,5,4,4,4,2,3,1,1 </code></pre> <p>The code for this is written like so:</p> <pre><code>from numpy.core.defchararray import count, index import pandas as pd import numpy as np import math from sklearn.decomposition import PCA from numpy import linalg as LA from sklearn.tree import DecisionTreeClassifier from sklearn.naive_bayes import GaussianNB test_set_Bayes = pd.read_csv(&quot;Assignment 2--Training set for Bayes.csv&quot;) training_set_Bayes = pd.read_csv(&quot;Assignment 2--Test set for Bayes.csv&quot;) def prob_continous_value(A, classAttribute, dataset, x): # calcuate the average for all values of A in dataset with class = x a = dataset[dataset[classAttribute] == x][A].mean() # calculate the standard deviation for all values A in dataset with class = x stdev = dataset[dataset[classAttribute] == x][A].std() v = dataset[A].iloc[0] print(f&quot;a:{a}, stdev:{stdev}, v:{v}&quot;) p = (1/(math.sqrt(2*math.pi)*stdev))*math.exp(-((v-a)*(v-a))/(2*stdev*stdev)) return p def valueIsNotContinuous(A,dataset): # check if value is continuous or not x = dataset[A].iloc[0] return type(x) == int or type(x) == float def BayesClassifier(training_set,test_set): classAttribute = 'Volume' for x in training_set[classAttribute].unique(): D = len(training_set[classAttribute].index) d = len(training_set[training_set[classAttribute] == x].index) px = d/D print(f'Step 1 calculate p({classAttribute}={x}|x)={px}') print(f'p({classAttribute}={x}|x)={px}') p = 0 probabilitiesProduct = 0 products = [] for A, values in training_set.iteritems(): if not A == classAttribute: print(f'Step 2 calculate p(Ai={A}={classAttribute}|{x})') p = prob_continous_value(A, classAttribute, training_set, x) print(f'p({A}|{x}) = {p}') probabilitiesProduct *= p print(f&quot;p(Ai={A}|{classAttribute}={x})={px*probabilitiesProduct}&quot;) products.append(probabilitiesProduct) print(products) # prompt user to select either ID3 or Bayes classifier. selection = &quot;Bayes&quot; #= input(&quot;Please enter your selection for either ID3 or Bayes classification: &quot;) if(selection == &quot;Bayes&quot;): BayesClassifier(training_set_Bayes,test_set_Bayes) </code></pre> <p>Expected:</p> <p>Array of probabilities</p> <p>Actual:</p> <pre><code>[nan] </code></pre> <p>The stdev</p> <pre><code>Technically the stdev is 0 for cases like: p(Ai=Model|Volume=5)=0.0 Step 2 calculate p(Ai=Category=Volume|5) 38 3 40 3 41 3 Name: Category, dtype: int64 average :3.0, stdev:0.0, value :4 </code></pre> <p>I'm unexpectedly getting an error <code>nan</code> this should be an array. I'd like to figure out how to return the max from the array.</p>
<p>I ran your code and it looks like your issue is this line:</p> <p><code>p = (1/(math.sqrt(2*math.pi)*stdev))*math.exp(-((v-a)*(v-a))/(2*stdev*stdev))</code></p> <p>The print statement above it says <code>stdev</code> is <code>0</code>, so you get a <code>1/0</code> error. In my interpreter it threw a <code>ZeroDivisionError</code> at that line, I'm surprised yours didn't.</p> <p>Setting <code>stdev = 1</code> before the division appears to solve the issue, so you need to either use input data that has a non-zero standard deviation, or there's an error in your equation.</p>
python|pandas
0
1,904,324
56,575,526
Adding sublists elements based on indexing by condition in python
<p>I have list like below</p> <pre><code>a=[['a',1,2,1,3],['b',1,3,4,3],['c',1,3,4,3]] b=[['b',1,3,4,3],['c',1,3,4,3]] </code></pre> <p>I want add the elements based on index if 1st sublist element match with other list sublist element</p> <p>tried with below:</p> <pre><code>from operator import add res_list1=[] for a1 in a: for b1 in b: if a1[0]==b1[0]: res_list = [map(add, a1[1:], b1[1:])] res = [[a1[0],i,j,k,l] for i,j,k,l in res_list] res_list1.append(res[0]) else: res_list=a1 res_list1.append(res_list) print res_list1 </code></pre> <p>but output resulting like below:</p> <pre><code>res_list1=[['a', 1, 2, 1, 3], ['a', 1, 2, 1, 3], ['b', 2, 6, 8, 6], ['b', 1, 3, 4, 3], ['c', 1, 3, 4, 3], ['c', 2, 6, 8, 6]] </code></pre> <p>but correct output should be:</p> <pre><code>res_list1=[['a', 1, 2, 1, 3], ['b', 2, 6, 8, 6], ['c', 2, 6, 8, 6]] </code></pre>
<p>Here's an <a href="https://docs.python.org/2/library/itertools.html" rel="nofollow noreferrer"><code>itertools</code></a> based approach:</p> <pre><code>from operator import itemgetter from itertools import groupby, islice l = sorted(a+b) [[k] + [sum(i) for i in islice(zip(*v),1,None)] for k,v in groupby(l, key=itemgetter(0))] # [['a', 1, 2, 1, 3], ['b', 2, 6, 8, 6], ['c', 2, 6, 8, 6]] </code></pre>
python|python-2.7
3
1,904,325
17,826,477
Python - How to track (add/remove) lots of class instances over mulitple iterations?
<p>I am building a dynamic map of earthquakes, using the vtk library. </p> <p>I've already made a static one, (see here: <a href="https://github.com/yacobuk/QuakeCloud" rel="nofollow noreferrer">https://github.com/yacobuk/QuakeCloud</a> and here: <a href="http://www.youtube.com/watch?v=4HVdTcI_ozI" rel="nofollow noreferrer">http://www.youtube.com/watch?v=4HVdTcI_ozI</a>) so I know the basic idea works, but now I want to try and show the quakes over time. </p> <p>I have some code examples that show me how to update the frame, and how to add / remove objects, but I'm stuck on figuring out how to spin up an instance, track it for a few periods, then remove it. </p> <p>The basic add/ remove code looks like this: </p> <pre><code>for point_and_mag in pm.points_mag: time.sleep(0.5) mag = point_and_mag[1] point = point_and_mag[0] if mag &gt; 2: pointCloud = VtkPointCloud(pm) pointCloud.addPoint(point, math.log(mag)*10) renderer.AddActor(pointCloud.vtkActor) renderer.ResetCamera() renderWindow.Render() time.sleep(0.3) renderer.RemoveActor(pointCloud.vtkActor) renderer.ResetCamera() renderWindow.Render() </code></pre> <p>But of course, this only allows one object at a time (an instance of <code>pointCloud.vtkActor</code> via <code>renderer.AddActor(pointCloud.vtkActor)</code> waits a while, then removes it with <code>renderer.RemoveActor(pointCloud.vtkActor)</code></p> <p>How can I add a number of actors (I'm going to use 10 min interval, and there was as many as 5 quakes in that time), tag it with a counter, increment the counter at every loop iteration, and when it reaches 5 iterations, remove the actor?</p> <p>There is some more context to this question here: <a href="https://stackoverflow.com/questions/17779491/python-vtk-set-each-point-size-individually-in-a-vtkpolydata-object">Python/vtk - set each point size individually in a vtkPolyData object?</a> </p>
<p>A possible(<em>untested</em>) solution might be:</p> <pre><code>from collections import deque # The number 5 indicates for how many iterations the actors should be rendered. rendered_actors = deque([None] * 5, maxlen=5) for point_and_mag in pm.points_mag: if rendered_actors[-1] is not None: renderer.removeActor(rendered_actors[-1]) renderer.ResetCamera() renderWindow.Render() time.sleep(0.5) mag = point_and_mag[1] point = point_and_mag[0] if mag &gt; 2: pointCloud = VtkPointCloud(pm) pointCloud.addPoint(point, math.log(mag)*10) rendered_actors.appendleft(pointcloud.vtkActor) renderer.AddActor(pointCloud.vtkActor) renderer.ResetCamera() renderWindow.Render() else: rendered_actors.appendleft(None) </code></pre> <p>This code creates a <code>deque</code>(which is a double-linked list) of length 5. The actors are inserted at the left of this deque and at each iteration the rightmost value, if it is an "actor", it is removed from the scene and the scene is re-rendered.</p> <p>Note that I don't have vtk so I cannot test this code.</p> <hr> <p>A small style note: this is really unpythonic code-style:</p> <pre><code>for point_and_mag in pm.points_mag: mag = point_and_mag[1] point = point_and_mag[0] </code></pre> <p>Use tuple-unpacking:</p> <pre><code>for point, mag in pm.points_mag: # ... if mag &gt; 2: # ... </code></pre>
python|class|loops|iteration|vtk
1
1,904,326
17,983,355
Detect if network is idle in Python
<p>I am making a Python based downloader. I have a working multiprocessing based console script as of now. I would like to know how to detect if the network is idle. That is, the users not using the network themselves (browsing, surfing, etc).</p> <p>It should be able to do two things in this regard:</p> <ol> <li>Resume downloading when network is detected idle.</li> <li>Pause downloading when it detects some network activity.</li> </ol> <p>One way to define 'idle-ness' could be to trigger if the network activity is at 1% of max bandwidth for 5 minutes straight.</p> <p>Is there a better way to detect if network is idle?</p>
<p>I would recommend using the <code>psutil.network_io_counters</code> module:</p> <blockquote> <p>Return network I/O statistics as a namedtuple including the following attributes:</p> <ul> <li>bytes_sent: number of bytes sent</li> <li>bytes_recv: number of bytes received</li> <li>packets_sent: number of packets sent</li> <li>packets_recv: number of packets received</li> <li>errin: total number of errors while receiving</li> <li>errout: total number of errors while sending</li> <li>dropin: total number of incoming packets which were dropped</li> <li>dropout: total number of outgoing packets which were dropped (always 0 on OSX and BSD)</li> </ul> <p>If pernic is True return the same information for every network interface installed on the system as a dictionary with network interface names as the keys and the namedtuple described above as the values.</p> </blockquote> <pre><code>In [1]: import psutil In [2]: psutil.network_io_counters(pernic=True) Out[2]: {'en0': iostat(bytes_sent=143391435L, bytes_recv=1541801914L, packets_sent=827983L, packets_recv=1234558L, errin=0L, errout=0L, dropin=0L, dropout=0), 'gif0': iostat(bytes_sent=0L, bytes_recv=0L, packets_sent=0L, packets_recv=0L, errin=0L, errout=0L, dropin=0L, dropout=0), 'lo0': iostat(bytes_sent=6143860L, bytes_recv=6143860L, packets_sent=55671L, packets_recv=55671L, errin=0L, errout=0L, dropin=0L, dropout=0), 'p2p0': iostat(bytes_sent=0L, bytes_recv=0L, packets_sent=0L, packets_recv=0L, errin=0L, errout=0L, dropin=0L, dropout=0), 'stf0': iostat(bytes_sent=0L, bytes_recv=0L, packets_sent=0L, packets_recv=0L, errin=0L, errout=0L, dropin=0L, dropout=0) } </code></pre> <p>You can keep track of the sent and received bytes on the interface you want to monitor and that will give you information if the network is idle or busy.</p>
python|networking
1
1,904,327
61,076,107
matplotlib colormap for just one of the subplots
<p>I am plotting some images with matplotlib as follows:</p> <pre><code>fig, axes = plt.subplots(1, len(slices)) for i, slice in enumerate(slices): if i != 3: axes[i].imshow(slice.T, cmap="gray", origin="lower") else: axes[i].imshow(slice.T, cmap="hot", origin="lower") </code></pre> <p>As you can see one of the subplot axes is a heatmap. Is it possible to have a colormap just next to that subplot?</p> <p><strong><em>EDIT:</em></strong></p> <p>Ok, I can do something like:</p> <pre><code>fig.colorbar(im, ax=axes[i]) </code></pre> <p>This shows the colorbar but it is disproportionately large! Is it possible to make it the same height as the rest of the plot.</p>
<p>Ok, for someone else, here is what worked:</p> <pre><code>im = axes[i].imshow(slice.T, cmap="hot", origin="lower") fig.colorbar(im, ax=axes[i], fraction=0.046, pad=0.04) </code></pre>
python|matplotlib
0
1,904,328
60,858,010
extract dataframe index values into a numpy arrray
<p>Its been hours and I cannot find the tool to solve this little issue that I having. I want to extract the count of index values of a pandas dataframe into a numpy array.</p> <p>here is an example:</p> <pre><code>df = pd.DataFrame({'item' : [1,2,3], 'quantity': [10,15,22]}) </code></pre> <p>and I have done this:</p> <pre><code>r = np.array(df.index.value_count.tolist()) </code></pre> <p>but this seems horribly wrong but I cannot do better and it does not work. and to be honest I don't know what to do do to make it working. I am trying to achieve this:</p> <pre><code>r = [0,1,2] </code></pre> <p>any help on this would be kindly appreciated. </p>
<p>See below:</p> <pre><code>r = np.array(df.index.values.tolist()) </code></pre> <p>This will transform the index values of your df to a list and then an numpy array.</p>
python|pandas
1
1,904,329
66,036,819
Ternary Operator Python single variable assignment python 3.8
<p>I am encountering a weird situation in Python and would like some advice. For some business reasons, we need to keep this python code module to shortest number of lines. Long story --- but it gets into a requirement that this code module is printed out and archived on paper. I didnt make the rules -- just need to pay the mortgage.</p> <p>We are reading a lot of data from a mainframe web service and applying some business rules to the data. For example and &quot;plain English&quot; business rule would be</p> <p>If the non resident state value for field XXXXXX is blank or shorter than two character [treat as same], the value for XXXXXX must be set to &quot;NR&quot;. Evaluation must treat the value as non resident unless the residence has been explicitly asserted.</p> <p>I would like to use ternary operators for some of these rules as they will help condense the over lines of code. I have not use ternary's in python3 for this type of work and I am missing something or formatting the line wrong</p> <pre><code> mtvartaxresXXX = &quot;NR&quot; if len(mtvartaxresXXX)&lt; 2 </code></pre> <p>does not work.</p> <p>This block (classic python) does work</p> <p><code>if len(mtvartaxresXXX) &lt; 2: </code><br /> <code> mtvartaxresXXX = &quot;NR&quot; </code></p> <p>What is the most &quot;pythonish&quot; way to perform this evaluation on a single line if statement.</p> <p>Thanks</p>
<p>You can simply write the <code>if</code> statement on a single line:</p> <pre><code>if len(mtvartaxresXXX) &lt; 2: mtvartaxresXXX = &quot;NR&quot; </code></pre> <p>This is the same number of lines as the ternary, and doesn't require an explicit <code>else</code> value, so it's fewer characters.</p>
python|conditional-statements|python-3.8|conditional-operator
2
1,904,330
68,039,507
How to check if list item is a string
<p>I am trying to add two lists together by using the zip function:</p> <pre><code>x = [2, 4] y = [5, 7] sum_list = [sum(x) for x in zip(list1, list2)] &gt; [7,11] </code></pre> <p>However, if the lists are comprised of strings I want to concatenate instead but the sum function doesn't work in this case.</p> <pre><code>x = ['a'] y = ['b'] return ['ab'] </code></pre> <p>Is there a way to turn this into an if statement so that if x[0] = string, then concatenate instead of sum?</p>
<h3>Simple</h3> <p>What about <code>+</code> operator ?</p> <pre><code>def concat_or_sum(list1, list2): return [x + y for x, y in zip(list1, list2)] sum_list = concat_or_sum([2, 4], [5, 7]) print(sum_list) # [7, 11] sum_list = concat_or_sum(['a', 'c'], ['b', 'd']) print(sum_list) # ['ab', 'cd'] </code></pre> <hr /> <h3>Improve</h3> <p>What about method version of <code>+</code> operator : <code>operator.add</code> ?</p> <p>First solution is data-dependant, when changing the amount of lists, it requires to change also the code.</p> <p>The solution is to apply <code>+</code> (using the <code>add</code> version) again and again on each value of the pairs</p> <pre><code>import operator from functools import reduce def concat_or_sum(*lists): return [reduce(operator.add, x) for x in zip(*lists)] sum_list = concat_or_sum([2, 4, 't'], [5, 7, 't'], [10, 11, 't']) print(sum_list) # [17, 22, 'ttt'] sum_list = concat_or_sum([1, 'a', 'c'], [2, 'b', 'd'], [3, 'e', 'f']) print(sum_list) # [6, 'abe', 'cdf'] </code></pre>
python|python-3.x
10
1,904,331
68,332,617
Why my PyQt5 Webview Code is not working?
<p>This is my code. Why its not working? Where is my problem?</p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5.Qt import * from PyQt5.QtWebEngineWidgets import * from PyQt5.QtWidgets import QApplication class Ui_Form(object): def setupUi(self, Form): Form.setObjectName(&quot;Form&quot;) Form.resize(1280, 960) self.widget = QWebEngineView() self.widget.setGeometry(QtCore.QRect(0, 0, 1270, 920)) self.widget.setObjectName(&quot;widget&quot;) self.widget.load(QUrl(&quot;google.com&quot;)) self.retranslateUi(Form) QtCore.QMetaObject.connectSlotsByName(Form) def retranslateUi(self, Form): _translate = QtCore.QCoreApplication.translate Form.setWindowTitle(_translate(&quot;Form&quot;, &quot;Form&quot;)) if __name__ == &quot;__main__&quot;: import sys app = QApplication(sys.argv) Form = QtWidgets.QWidget() ui = Ui_Form() ui.setupUi(Form) Form.show() sys.exit(app.exec_()) </code></pre>
<p>Your code has 2 problems:</p> <ol> <li><p>The QWebEngineView is not a child of the window so it will not be displayed. Change to <code>self.widget = QWebEngineView(Form)</code></p> </li> <li><p><code>QUrl(&quot;google.com&quot;)</code> is not a valid url so you have 2 options, change to:</p> <ul> <li><code>QUrl(&quot;https://google.com&quot;)</code> OR</li> <li><code>QUrl.fromUserInput(&quot;google.com&quot;)</code></li> </ul> </li> </ol>
python|pyqt|pyqt5|qwebengineview
1
1,904,332
59,077,284
Fail to load a .pth file (pre-trained neural network) using torch.load() on google colab
<p>My google drive is linked to my google colab notebook. Using the pytorch library torch.load($PATH) fails to load this 219 Mo file (pre-trained neural network) (<a href="https://drive.google.com/drive/folders/1-9m4aVg8Hze0IsZRyxvm5gLybuRLJHv-" rel="nofollow noreferrer">https://drive.google.com/drive/folders/1-9m4aVg8Hze0IsZRyxvm5gLybuRLJHv-</a>) which is in my google drive. However it works fine when I do it locally on my computer. The error i get on google collab is: (settings: Python 3.6, pytorch 1.3.1):</p> <pre><code>state_dict = torch.load(model_path)['state_dict'] File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 303, in load return _load(f, map_location, pickle_module) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 454, in _load return legacy_load(f) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 380, in legacy_load with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar, File "/usr/lib/python3.6/tarfile.py", line 1589, in open return func(name, filemode, fileobj, **kwargs) File "/usr/lib/python3.6/tarfile.py", line 1619, in taropen return cls(name, mode, fileobj, **kwargs) File "/usr/lib/python3.6/tarfile.py", line 1482, in init self.firstmember = self.next() File "/usr/lib/python3.6/tarfile.py", line 2297, in next tarinfo = self.tarinfo.fromtarfile(self) File "/usr/lib/python3.6/tarfile.py", line 1092, in fromtarfile buf = tarfile.fileobj.read(BLOCKSIZE) OSError: [Errno 5] Input/output error``` Any help would be much appreciated! </code></pre>
<p>Large sized files are automatically analyzed for virus on Drive, every time you attempt to download a large file you have to pass thru this scan, making it hard to reach the download link.</p> <p>You could download the file directly using the Drive API and then pass it to the torch, it shouldn't be hard to implement on Python, I've made a sample on how to Download your file and pass it to Torch.</p> <pre class="lang-py prettyprint-override"><code>import torch import pickle import os.path import io from googleapiclient.discovery import build from google_auth_oauthlib.flow import InstalledAppFlow from google.auth.transport.requests import Request from googleapiclient.http import MediaIoBaseDownload from __future__ import print_function url = "https://drive.google.com/file/d/1RwpuwNPt_r0M5mQGEw18w-bCfKVwnZrs/view?usp=sharing" # If modifying these scopes, delete the file token.pickle. SCOPES = ( 'https://www.googleapis.com/auth/drive', ) def main(): """Shows basic usage of the Sheets API. Prints values from a sample spreadsheet. """ creds = None # The file token.pickle stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.pickle'): with open('token.pickle', 'rb') as token: creds = pickle.load(token) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.pickle', 'wb') as token: pickle.dump(creds, token) drive_service = build('drive', 'v2', credentials=creds) file_id = '1RwpuwNPt_r0M5mQGEw18w-bCfKVwnZrs' request = drive_service.files().get_media(fileId=file_id) # fh = io.BytesIO() fh = open('file', 'wb') downloader = MediaIoBaseDownload(fh, request) done = False while done is False: status, done = downloader.next_chunk() print("Download %d%%." % int(status.progress() * 100)) fh.close() torch.load('file') if __name__ == '__main__': main() </code></pre> <p>To run it you'll have first to:</p> <ul> <li>Enable the Drive API for your account</li> <li>Install the Google Drive API libraries, </li> </ul> <p>This takes no more than 3 minutes and is properly explained on the <a href="https://developers.google.com/drive/api/v3/quickstart/python" rel="nofollow noreferrer">Quickstart Guide for Google Drive API</a>, just follow steps 1 and 2 and run the provided sample code from above.</p>
deep-learning|google-drive-api|pytorch|google-colaboratory|ioerror
0
1,904,333
59,058,940
how to limit a variable to zero on python
<p>I have started making a new game on python and have it so if either player1 or player2's health goes to zero or below, the code ends, but i do not want to display that the player has for example, -12 health at the end. Here is the code:</p> <pre><code>player1 = 50 player2 = 50 while player1 &gt;= 0 or player2 &gt;= 0: import random slash = random.randint(5, 9) stab = random.randint(1, 15) swing = random.randint(15, 20) heal = random.randint(10, 15) a = [slash, stab, swing] ai = random.choice(a) hit1 = input("Press 1 2 3 or 4") if hit1 == "1": print("You dealt " + str(slash)) player2 = player2 - slash print("Player 2 now has " + str(player2)) if hit1 == "2": print("You dealt " + str(stab)) player2 = player2 - stab print("Player 2 now has " + str(player2)) if hit1 == "3": print("You dealt " + str(swing)) player2 = player2 - swing print("Player 2 now has " + str(player2)) if hit1 == "4": print("You healed by " + str(heal)) player2 = player1 + heal print("Player 1 now has " + str(player1)) hit2 = print("player 2 has dealt " + str(ai)) player1 = player1 - ai print("player1 is now on " +str(player1)) </code></pre>
<p>you can use the <a href="https://www.programiz.com/python-programming/methods/built-in/max" rel="nofollow noreferrer"><code>max</code></a> function in python.</p> <p>for your relevant lines:</p> <pre><code> player2 = max(player2 - slash, 0) player2 = max(player2 - stab, 0) player2 = max(player2 - swing, 0) player1 = max(player1 - ai, 0) </code></pre> <p>moreover, you need to change your while condition:</p> <pre><code>while player1 &gt; 0 or player2 &gt; 0: </code></pre> <p><strong>edit</strong>:</p> <pre><code>player1 = 50 player2 = 50 while player1 &gt; 0 and player2 &gt; 0: import random slash = random.randint(5, 9) stab = random.randint(1, 15) swing = random.randint(15, 20) heal = random.randint(10, 15) hit_number_to_hit_type = {'1': slash, '2': stab, '3': swing} a = [slash, stab, swing] ai = random.choice(a) hit1 = input("Press 1 2 3 or 4\n") if "1" &lt;= hit1 &lt;= "3": hit = hit_number_to_hit_type[hit1] print("You dealt " + str(hit)) player2 = max(player2 - hit, 0) print("Player 2 now has " + str(player2)) if hit1 == "4": print("You healed by " + str(heal)) player1 += heal print("Player 1 now has " + str(player1)) hit2 = print("player 2 has dealt " + str(ai)) player1 = max(player1 - ai, 0) print("player1 is now on " + str(player1)) </code></pre>
python|python-3.x|variables|while-loop|zero
2
1,904,334
59,063,039
compute a timestamps based on 13-digit unixtime timestamp in ms in python
<p>I want to compute the timestamp that is 300 milliseconds before and after the given 13-digit unixtime of the system. I checked the question <a href="https://stackoverflow.com/questions/49710963/converting-13-digit-unixtime-in-ms-to-timestamp-in-python">here</a> to convert 13-digit unixtime to timestamp. </p> <p>Then, given the timestamp, I wrote a simple code in python to double-check whether the outputs are correct or not.</p> <p>Assume the given timestamp of the system is 13-digit unixtime and equal to X = "1396226255964". Now I need to compute the new timestamp "Y1" that is "300 milliseconds" before X and "Y2" that is "300 milliseconds" after X.</p> <p>Is this code computes Y1 &amp; Y2 correctly?</p> <pre><code>X = "1396226255964" Y1 = int(X) - int (300000) print("Y1:", Y1) Y2 = int(X) + int (300000) print("Y2:", Y2) </code></pre> <p>Outputs: Y1: 1396225955964 Y2: 1396226556620</p>
<pre><code>from datetime import datetime, timedelta X = "1396226255964" X_dt = datetime.fromtimestamp(int(X)/1000) # using the local timezone y1= X_dt + timedelta(milliseconds=300) y2 = X_dt + timedelta(milliseconds=-300) print(X_dt.strftime("%Y-%m-%d %H:%M:%S,%f")) print(y1.strftime("%Y-%m-%d %H:%M:%S,%f")) print(y2.strftime("%Y-%m-%d %H:%M:%S,%f")) </code></pre>
python
1
1,904,335
62,307,288
Multiple Radio button groups are linking, causing strange behavior
<p>My program generates forms on the fly based on SQL data. I make two radio buttons and a QLineEdit entry right next to them. When the radio button on the right is checked, the QLineEdit is enabled correctly. The problem comes from the radio buttons linking and causing exclusive actions between themselves. When the program starts, they look like this: <a href="https://i.stack.imgur.com/3I16M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3I16M.png" alt="Program start"></a></p> <p>Then when I click the first "No", it changes how I expect and enables the QLineEdit. <a href="https://i.stack.imgur.com/uicz6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uicz6.png" alt="Normal behavior"></a></p> <p>Now I want to click "No" for "Serial#: RC1" also. This is where the behavior starts to go wrong. The No button is clicked and deselects all the row above.</p> <p><a href="https://i.stack.imgur.com/ENUKU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ENUKU.png" alt="Deselects the top row"></a></p> <p>If I try to click "No" on the top row again, the "yes" on the second row deselects.</p> <p><a href="https://i.stack.imgur.com/qacYz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qacYz.png" alt="second row deselects"></a></p> <p>Finally, I can click the Selected radio buttons and deselect everything until I'm left with one active radio button. At this point, I cannot have any more selected buttons than just this one. Clicking on a deselected button will activate it and deselect the previously active button.</p> <p><a href="https://i.stack.imgur.com/U8RVZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U8RVZ.png" alt="One selected Button Left"></a></p> <p>I generate the buttons on the fly from helper functions that put radio buttons in QButtonGroups. I thought this would be enough to stop this behavior, but I'm wrong. What i would like is the radio buttons on each row to no respond to the actions on other radio buttons on other rows.</p> <pre><code># !/user/bin/env python import os import sys from PyQt5 import uic from PyQt5.QtWidgets import * from PyQt5.QtGui import * class Radio(QDialog): def __init__(self, app): super(Radio, self).__init__() self.bundle_dir = os.path.dirname(__file__) gui_path = os.path.join(self.bundle_dir, 'ui', 'radio_bt_test.ui') self.ui = uic.loadUi(gui_path, self) self.num_of_row = 4 self.formLayout = self.findChild(QFormLayout, "formLayout") self.radio_bt_lineEdit_connections = dict() # to help link the radio buttons and lineEdit self.add_rows() self.show() def add_rows(self): """ Adds pairs of radio buttons with a lineEdit to each row of the form layout :return: """ for i in range(self.num_of_row): lbl = QLabel("Label#" + str(i)) hbox = QHBoxLayout() buttons = self.new_radio_pair() entry = self.new_entry("Value if No") entry.setEnabled(False) self.radio_bt_lineEdit_connections[buttons[-1]] = entry # adding connection to dictionary for later event handling buttons[-1].toggled.connect(self.radio_bt_changed) for button in buttons: hbox.addWidget(button) hbox.addWidget(entry) self.formLayout.addRow(lbl, hbox) def new_radio_pair(self, texts=('Yes', 'No')) -&gt; list: """ Makes a pair of radio buttons in a button group for creating data entries in "Part" grouping on the fly :param texts: The texts that will go on the two buttons. 
The more texts that are added to make more radio buttons :return: A list with QRadioButtons that are all part of the same QButtonGroup """ group = QButtonGroup() buttons = [] for text in texts: bt = QRadioButton(text) bt.setFont(QFont('Roboto', 11)) if text == texts[0]: bt.setChecked(True) group.addButton(bt) buttons.append(bt) return buttons def radio_bt_changed(self) -&gt; None: """ Helps the anonymous radio buttons link to the anonymous QLineEdits that are made for data fields :return: None """ sender = self.sender() assert isinstance(sender, QRadioButton) lineEdit = self.radio_bt_lineEdit_connections[sender] assert isinstance(lineEdit, QLineEdit) if sender.isChecked(): lineEdit.setEnabled(True) else: lineEdit.setEnabled(False) lineEdit.clear() def new_entry(self, placeholder_text: str = "") -&gt; QLineEdit: """ Makes a new QLineEdit object for creating data entries in "Part" grouping on the fly :return: A new QLineEdit with appropriate font and size policy """ entry = QLineEdit() entry.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Fixed) entry.setFont(QFont('Roboto', 11)) entry.setStyleSheet("background-color: rgb(239, 241, 243);") # with style sheets, past anything in here between the css tags entry.setMaxLength(15) entry.setPlaceholderText(placeholder_text) return entry def main(): app = QApplication(sys.argv) radio = Radio(app) sys.exit(app.exec()) main() </code></pre> <p>Is it maybe because I declare the QButtonGroup then forget about them? Does a garbage collector come and erase them because I don't have a variable assigned or is there another problem that I'm missing.</p> <p>The ui was designed on QtDesigner and is just a dialog box with a form layout on it.</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;ui version="4.0"&gt; &lt;class&gt;Dialog&lt;/class&gt; &lt;widget class="QDialog" name="Dialog"&gt; &lt;property name="geometry"&gt; &lt;rect&gt; &lt;x&gt;0&lt;/x&gt; &lt;y&gt;0&lt;/y&gt; &lt;width&gt;491&lt;/width&gt; &lt;height&gt;382&lt;/height&gt; &lt;/rect&gt; &lt;/property&gt; &lt;property name="windowTitle"&gt; &lt;string&gt;Dialog&lt;/string&gt; &lt;/property&gt; &lt;property name="styleSheet"&gt; &lt;string notr="true"&gt;background-color: rgb(219, 221, 223);&lt;/string&gt; &lt;/property&gt; &lt;widget class="QWidget" name="formLayoutWidget"&gt; &lt;property name="geometry"&gt; &lt;rect&gt; &lt;x&gt;80&lt;/x&gt; &lt;y&gt;40&lt;/y&gt; &lt;width&gt;301&lt;/width&gt; &lt;height&gt;291&lt;/height&gt; &lt;/rect&gt; &lt;/property&gt; &lt;layout class="QFormLayout" name="formLayout"/&gt; &lt;/widget&gt; &lt;/widget&gt; &lt;resources/&gt; &lt;connections/&gt; &lt;/ui&gt; </code></pre>
<p>The object that should be used to make the row buttons exclusive is "group" but this is a local variable that is destroyed when the new_radio_pair method finishes executing, causing it not to behave as previously thought. </p> <p>The solution is to extend the life cycle and for this there are several options such as making it an attribute of the class, adding it to a container or class that has a longer life cycle, or in the case of QObjects, passing it another QObject as a parent(like <code>self</code>) that has a longer life cycle, and that is the best solution for this case:</p> <pre><code>group = QButtonGroup(<b>self</b>)</code></pre>
python|pyqt5|python-3.7|qradiobutton|qbuttongroup
4
1,904,336
62,297,082
File name in awk after PIPE
<p>FILENAME after each resulted line</p> <pre><code>awk '/Select User File/,/\*\*/' "{}"/* | grep "Rule" | sed 's/ //g' | sort -u </code></pre> <p>gives me the output as</p> <pre><code>Rule 123 Rule asd Rule asdklnj </code></pre> <p>I want to append the <code>FILENAME</code> at the end of each line </p> <p>I don't know how to get the file name after pipe</p>
<p>IMHO we could do this in a single <code>awk</code> itself or we could reduce commands in your attempt,since no samples are given then trying to fix OP's attempt itself, try following.</p> <pre><code>awk '/Select User File/,/\*\*/{gsub(/ /,"");print $0,FILENAME}' "{}"/* | grep "Rule" | sort -u </code></pre>
python|file|unix|awk
1
1,904,337
35,402,232
Cancel a "..." continuation block in Python interpreter
<p>I often find myself in a situation like this:</p> <pre><code>&gt;&gt;&gt; for line in infile.readlines(): ... line = line.rtrim() ... line█ # ^^^^^ Oops, mistake! </code></pre> <p>At this point I want to start again (because I mixed up the "trim" from Java with the "strip" from Python). But I can't afford to let the loop run even one iteration, because it would mess with the file.</p> <p>In this situation my typical way out is to type some illegal syntax, such as an exclamation mark:</p> <pre><code>&gt;&gt;&gt; for line in infile.readlines(): ... line = line.rtrim() ... line! File "&lt;stdin&gt;", line 2 line! ^ SyntaxError: invalid syntax &gt;&gt;&gt; █ </code></pre> <p>But that's a clumsy way of doing things, not pythonic at all. Isn't there some way to get the interpreter to forget the previous line of continuation that I typed in? That would also save me from retyping the whole thing again. Some control key combination? I can't find it.</p>
<p>Under Linux (and MacOS?), Control-C will give you a <code>KeyboardInterrupt</code>:</p> <pre><code>&gt;&gt;&gt; if True: ... asdf KeyboardInterrupt &gt;&gt;&gt; </code></pre> <p>However, under Windows, that will throw you back to the command prompt. Control-D under Linux will get you back to the command prompt but only if you're on a blank line.</p> <p>If your keyboard has a 'Home' key (or a shortcut for it such as function-left arrow under MacOS), you can quickly jump to the start of the line and add a '#' to comment out the line:</p> <pre><code>&gt;&gt;&gt; if True: ... # asdf ... print(False) ... False &gt;&gt;&gt; </code></pre> <p>The only downside to this is that it's a few more keystrokes. Both of these methods will lose your tabs/spaces and put you back at the start of the line.</p>
python|command-line-interface|python-interactive
1
1,904,338
35,512,900
How can I get a FileStorage object from string
<p>I have this string:</p> <pre><code>str = "FieldStorage('myfile', 'file.exe', 'hello\\n')" </code></pre> <p>So I would like to know if there is a way to get the FileStogare object from that string</p>
<p>You could. The question is, should you?</p> <pre><code>str = "FieldStorage('myfile', 'file.exe', 'hello\\n')" obj = eval(str) </code></pre>
python
0
1,904,339
73,512,251
Cannot add python Requests module to OpenRefine
<p>I can't seem to import the requests module into OpenRefine . How can I add more python modules to openrefine?</p> <p>I get the error:</p> <p>ImportError: No module named requests</p> <p>screenshot</p> <p><img src="https://i.stack.imgur.com/YTIy6.png" alt="1" /></p>
<p>OpenRefine is using <a href="https://www.jython.org/" rel="nofollow noreferrer">Jython</a>, a Java implementation of Python. Therefore, you can not &quot;just&quot; install another library/package.</p> <p>There is a tutorial in the OpenRefine wiki describing <a href="https://github.com/OpenRefine/OpenRefine/wiki/Extending-Jython-with-pypi-modules" rel="nofollow noreferrer">how to extend Jython with PyPi modules</a>.</p> <p>Please note that currently 2.7 is the newest Jython implementation. Jython 3 is still in it's planing and development phase. See <a href="https://www.jython.org/jython-3-roadmap" rel="nofollow noreferrer">Jython 3 Roadmap</a> for details. This makes it difficult to use external libraries, as Python 2 had its end of life on 01.01.2020 and accordingly (most) libraries stopped supporting Python 2.</p> <p>For requests the <a href="https://requests.readthedocs.io/en/latest/community/faq/#python-2-support" rel="nofollow noreferrer">last version that supports Python 2 is 2.27</a>.</p> <p>Also, some Python packages rely on C libraries that are not compatible with Jython. Check the <a href="https://jython.readthedocs.io/en/latest/appendixA/" rel="nofollow noreferrer">Appendix A of the Definitive Guide to Jython</a> for more details on using external tools and libraries.</p>
python|openrefine
0
1,904,340
31,444,580
How to slice strings backwards python
<p>I am learning python and I made a basic program where the user takes the link of a photo and inputs it, then the program downloads that photo. In order to make sure that the user doesn't enter a link that is a webpage instead of a photo, I had the program check what the file extension was by using string slicing, but I can't seem to find out how to slice the string backwards</p> <p>I know that this is an dumb question but after an hour of searching I still can't find the answer. Here is the code</p> <pre><code>import random import urllib.request import urllib.parse def download_web_image(url, file_format): try: name = random.randrange(1, 1000) full_name = str(name) + file_format urllib.request.urlretrieve(url, full_name) print("Image download successful!") print("Image named " + full_name) except: print('Error') def get_user_url(): url = input("Now enter the url of the photo you want to download:") try: if url[0:3:-1] is '.png': download_web_image(url, ".png") elif url[0:4:-1] is 'gepj.': download_web_image(url, '.jpeg') elif url[0:3:-1] is '.gpj': download_web_image(url, '.jpg') else: print('the file format is uncompatible: ' + url[1:4:-1]) except: print('The url is not valid!') print('look for an image on a website, make sure it is a JPG or PNG file or it will not work!') get_user_url() </code></pre> <p>Thank you for the help. and no, I do not want the string to show up backwards.</p>
<p>I suggest you use the built-in method <a href="https://docs.python.org/2.7/library/stdtypes.html?highlight=endswith#str.endswith" rel="nofollow"><code>endswith</code></a> saves the trouble of variable size extensions(<code>png</code>,<code>jpeg</code>,<code>jpg</code>...etc), this way:</p> <pre><code>&gt;&gt;&gt;url = 'https://www.python.org/static/community_logos/python-logo-master-v3-TM.png' &gt;&gt;&gt;url.endswith('.png') True </code></pre>
python|string|reverse|slice|file-extension
5
1,904,341
59,715,719
After for loop i get printed only one string path to image folder instead of multiple strings of images, python
<p>I have iterated over folder media/images that containsall paths to that folder images using os.walk() function. After I did for in loop i get only one image in that folder and there is more then 115 i think of images of Dota2 heroes.</p> <p>I will show you the code and print screen so you can closely look. I think i made a mistake with for in loop some were but I new to python.</p> <p>insert_heroes.py (lines 15 to 18 are the one that iterate over my media/images folder.)</p> <pre><code>import requests import json import os API_URL = 'http://127.0.0.1:8000' if __name__ == '__main__': r = requests.get("https://api.opendota.com/api/heroes") all_heroes_info = json.loads(r.content.decode('UTF-8')) for hero_info in all_heroes_info: name = hero_info['localized_name'] hero_type = hero_info['primary_attr'] for root, dirs, files in os.walk("../../media/images/", topdown=False): for image in files: pass print(image) keys = [name] values = [image] # dictionary = dist(zip(keys, values)) # print(dictionary) # print(keys) mutation_create_hero = ''' mutation ($name: String!, $heroType: String!, $image: String!) { createHero(name: $name, heroType: $heroType, image: $image) { name image } } ''' variables = {'name': name, 'heroType': hero_type, 'image': image} # localhost_api_response = requests.post( # '{}/graphql/'.format(API_URL), # json={ # 'query': mutation_create_hero, # 'variables': variables # } # ) </code></pre> <p>What i get is this, when I print image variable -- <a href="https://prnt.sc/qmzh8g" rel="nofollow noreferrer">https://prnt.sc/qmzh8g</a> Only 1 icon. Zeus icon.png that is last image file in media/images dir.</p> <p>Thanks </p>
<p>Change lines 15-18 from </p> <pre><code>for root, dirs, files in os.walk("../../media/images/", topdown=False): for image in files: pass print(image) </code></pre> <p>to:</p> <pre><code>for root, dirs, files in os.walk("../../media/images/", topdown=False): for image in files: print(image) </code></pre> <p>Your indentation was slightly off on the print. The <code>print</code> statement was in the loop but outside of the inner loop.</p>
python|django|for-loop
0
1,904,342
25,232,346
Syntax error in FOR loop declare section
<p>I'm attempting to use a <code>FOR</code> loop in Postgres to calculate data averages over a range of (variable) for each geolocation in my db. I am using python/psycopg2. Here is the code:</p> <pre><code>query =''' DECLARE geoids RECORD; BEGIN RAISE NOTICE 'Beginning average calculation'; FOR geoids IN select*from census_blocks WHERE ST_contains((select geom from census_cbsa WHERE cbsafp10='%s'),census_blocks.geom) LOOP INSERT INTO temp_avgs VALUES (geoids, select avg(select alljobs from accessibility_results WHERE geoid=geoids AND deptime BETWEEN '%s' and '%s' AND threshold='%s') END LOOP; END; ''' </code></pre> <p>The error I receive is </p> <pre><code>psycopg2.ProgrammingError: syntax error at or near "RECORD" LINE 2: DECLARE geoids RECORD; </code></pre> <p>If I remove this <code>DECLARE</code> statement (since I believe <code>LOOP</code> variables over selection values are automatically declared as <code>RECORD</code>), the error becomes:</p> <pre><code>psycopg2.ProgrammingError: syntax error at or near "RAISE" LINE 4: RAISE NOTICE 'Beginning average calculation'; </code></pre> <p>How should I properly format this query?</p>
<h3>Procedural solution with loop</h3> <p>You are using PL/pgSQL code but are trying to phrase it as SQL query. That's just not possible.</p> <p>Use a <a href="http://www.postgresql.org/docs/current/interactive/sql-do.html" rel="nofollow noreferrer"><strong><code>DO</code></strong></a> statement or (since you want to use parameters) create a <a href="http://www.postgresql.org/docs/current/interactive/sql-createfunction.html" rel="nofollow noreferrer">plpgsql function</a>:</p> <pre><code>CREATE OR REPLACE FUNCTION foo(_cbsafp10 ?type? -- replace with ... ,_deptime_from timestamp? -- ... actual data types ,_deptime_to timestamp? ,_threshold ?type?) RETURNS void AS $func$ DECLARE rec RECORD; BEGIN FOR rec IN SELECT b.* FROM census_blocks b JOIN census_cbsa c ON ST_contains(c.geom, b.geom) WHERE c.cbsafp10 = _cbsafp10 LOOP INSERT INTO temp_avgs -- you might add a target list for safety. depends .. SELECT rec.*, avg(alljobs) FROM accessibility_results WHERE geoid = rec.geoid -- assuming you join on column &quot;geoid&quot;? AND deptime BETWEEN _deptime_from AND _deptime_to AND threshold = _threshold; END LOOP; END $func$ LANGUAGE plpgsql; </code></pre> <p>Make sure you escape quotes properly in your client!</p> <ul> <li>LOOP variables over selection values are <em><strong>not</strong></em> automatically declared as anything.</li> <li>Replace your unnecessary subqueries.</li> <li>The immediate cause for your 2nd error msg: <a href="http://www.postgresql.org/docs/current/interactive/plpgsql-errors-and-messages.html" rel="nofollow noreferrer"><code>RAISE</code></a> is a plpgsql command, <em>not</em> an SQL command.</li> </ul> <h3>Superior set-based solution</h3> <p>This goes to demonstrate the basics of a plpgsql function. But, as @Gordon already supplied, just use a single <code>INSERT</code> statement <em>doing the same</em> instead. Untangled further:</p> <pre><code>INSERT INTO temp_avgs -- you might add a target list for safety. depends .. SELECT b.*, avg(alljobs) FROM census_cbsa c JOIN census_blocks b ON ST_contains(c.geom, b.geom) JOIN accessibility_results a ON a.geoid = b.geoid WHERE c.cbsafp10 = %s AND a.deptime BETWEEN %s AND %s AND a.threshold = %s GROUP BY b.geoid; -- assuming b.geoid is the primary key </code></pre>
python|sql|postgresql|plpgsql|psycopg2
1
1,904,343
25,205,140
Returning Cython array
<p>How does one properly initialize and return a Cython array? For instance:</p> <pre><code>cdef public double* cyTest(double[] input): cdef double output[3] for i in xrange(3): output[i] = input[i]**2 print 'loop: ' + str(output[i]) return output cdef double* test = [1,2,3] cdef double* results = cyTest(test) for i in xrange(3): print 'return: ' + str(results[i]) </code></pre> <p>This returns:</p> <pre><code>loop: 1.0-&gt;1.0 loop: 2.0-&gt;4.0 loop: 3.0-&gt;9.0 return: 1.88706086937e-299 return: 9.7051011575e+236 return: 1.88706086795e-299 </code></pre> <p>So obviously, <code>results</code> still points only to garbage instead of the values it should point to. Admittedly, I am slightly confused with mixing the pointer and array syntax and which one is preferable/possible in a Cython context. </p> <p>In the end, I want to call <code>cyTest</code> from a pure C++ function:</p> <pre><code>#include &lt;iostream&gt; #include &lt;Python.h&gt; #include "cyTest.h" void main() { Py_Initialize(); initcyTest(); double input[3] = {1,2,3}; double* output = cyTest(input); for(int i = 0; i &lt; 3; i++) std::cout &lt;&lt; "cout: " &lt;&lt; output[i] &lt;&lt; std::endl; Py_Finalize(); } </code></pre> <p>This returns similar results:</p> <pre><code>loop: 1.0-&gt;1.0 loop: 2.0-&gt;4.0 loop: 3.0-&gt;9.0 cout: 1 cout: 6.30058e+077 cout: 6.39301e-308 </code></pre> <p>Anyone care to explain what error I'm making? I'd like to keep it as simple as possible. It's just returning an array from Cython to C++ after all. I'll deal with dynamic memory allocation later, if not necessary.</p>
<p>You are returning reference to local array ( output ), which will not work.</p> <p>Try changing your script to:</p> <pre><code>from cpython.mem cimport PyMem_Malloc cdef public double * cyTest(double[] input): cdef double * output = &lt; double * &gt;PyMem_Malloc( sizeof(double) * 3 ) for i in xrange(3): output[i] = input[i]**2 print 'loop: ' + str(output[i]) return output </code></pre> <p>And in your c++ code,</p> <p>after you done using <code>double* output</code> issue <code>free( output );</code></p> <p>If you want to use <code>cdef double* results = cyTest(test)</code> in your pyx script then don't forget to use <code>PyMem_Free(results)</code></p>
python|c++|cython|cpython
1
1,904,344
25,345,765
In Python what is it called when you use enclose a variable between 2 plus signs?
<p>What is it called when you use the plus signs to pull a variable into a string?</p> <p>Example 1:</p> <pre><code>variable = "stuff" print "I would like to print "+variable+" " </code></pre> <p>and why would that be used vs</p> <p>Example 2:</p> <pre><code>variable = "stuff" print "I would like to print %s" % variable </code></pre> <p>I'm new to the whole programming thing and this site, please forgive my ignorance and correct me if I'm lacking in proper etiquette.</p>
<p>You are <em>concatenating strings</em>, not putting anything between two <code>+</code> signs. Compare this to adding up numbers:</p> <pre><code>4 + 5 + 6 </code></pre> <p>This doesn't do anything special to the <code>5</code> there, that's just a sum of <code>(4 + 5) + 6</code>. Your expression is simply adding a value to a string, then adding another string.</p> <p>You should use string formatting wherever possible, because it is more readable and gives you more flexibility. Consider learning about <a href="https://docs.python.org/2/library/stdtypes.html#str.format" rel="nofollow"><code>str.format()</code></a>, a more consistent and more powerful version of string formatting:</p> <pre><code>variable = "stuff" print "I would like to print {}".format(variable) mapping = {'answer': 42, 'interest': 0.815} print '''\ The answer to the ultimate question: {m[answer]:&gt;10d} Rate: {m[interest]:03.2%}!'''.format(m=mapping) </code></pre> <p>Demo:</p> <pre><code>&gt;&gt;&gt; variable = "stuff" &gt;&gt;&gt; print "I would like to print {}".format(variable) I would like to print stuff &gt;&gt;&gt; mapping = {'answer': 42, 'interest': 0.815} &gt;&gt;&gt; print '''\ ... The answer to the ultimate question: {m[answer]:&gt;10d} ... Rate: {m[interest]:03.2%}!'''.format(m=mapping) The answer to the ultimate question: 42 Rate: 81.50%! </code></pre>
python|variables
3
1,904,345
70,762,830
Test if two segments are roughly collinear (on the same line)
<p>I want to test if two segments are roughly collinear (on the same line) using <code>numpy.cross</code>. I have the coordinates in meters of the segments.</p> <pre><code>import numpy as np segment_A_x1 = -8020537.5158307655 segment_A_y1 = 5674541.918222183 segment_A_x2 = -8020547.42095263 segment_A_y2 = 5674500.781350276 segment_B_x1 = -8020556.569040865 segment_B_y1 = 5674462.788207927 segment_B_x2 = -8020594.740831952 segment_B_y2 = 5674328.095911447 a = np.array([[segment_A_x1, segment_A_y1], [segment_A_x2, segment_A_y2]]) b = np.array([[segment_B_x1, segment_B_y1], [segment_B_x2, segment_B_y2]]) crossproduct = np.cross(a, b) &gt;&gt;&gt;array([7.42783487e+08, 1.65354844e+09]) </code></pre> <p>The <code>crossproduct</code> values are pretty high even if I would say those two segments are roughly collinear. Why?</p> <p>How can I determine if the segments are colinear with the <code>crossproduct</code> result?</p> <p>Is there a possibility of using a tolerance in meters to tell if the segments are roughly collinear?</p>
<p>The problem with your approach is that the cross product value depends on the measurement scale.</p> <p>Maybe the most intuitive measure of collinearity is the angle between the line segments. Let's calculate it:</p> <pre class="lang-py prettyprint-override"><code>import math def slope(line): &quot;&quot;&quot;Line slope given two points&quot;&quot;&quot; p1, p2 = line return (p2[1] - p1[1]) / (p2[0] - p1[0]) def angle(s1, s2): &quot;&quot;&quot;Angle between two lines given their slopes&quot;&quot;&quot; return math.degrees(math.atan((s2 - s1) / (1 + (s2 * s1)))) ang = angle(slope(b), slope(a)) print('Angle in degrees = ', ang) </code></pre> <pre><code>Angle in degrees = 2.2845 </code></pre> <p>I made use of an <a href="https://stackoverflow.com/a/57503229/13014172">answer</a> by Anderson Oliveira.</p>
python|numpy
3
1,904,346
70,879,465
Connecting MS SQL Server using connectorx in python
<p><em><strong>I am facing issues while trying to connect with MSSQL Server using connectorx package in python. I have already verified all the connection details through MS SQL Server Management Studio. I have installed version connectorx version 0.2.3</strong></em></p> <pre><code>import urllib.parse import connectorx as cx mssql_url = f&quot;mssql://{urllib.parse.quote_plus('User ID')}:{urllib.parse.quote_plus('Password')}@Server URL:1433/Database&quot; query = &quot;SELECT * FROM table&quot; df = cx.read_sql(mssql_url, query) </code></pre> <blockquote> <p>Output of the script: [2022-01-27T12:02:13Z ERROR tiberius::tds::stream::token] message=Login failed for user 'User ID'. code=18456 [2022-01-27T12:02:14Z ERROR tiberius::tds::stream::token] message=Login failed for user 'User ID'. code=18456 [2022-01-27T12:02:14Z ERROR tiberius::tds::stream::token] message=Login failed for user 'User ID'. code=18456 [2022-01-27T12:02:16Z ERROR tiberius::tds::stream::token] message=Login failed for user 'User ID'. code=18456 [2022-01-27T12:02:19Z ERROR tiberius::tds::stream::token] message=Login failed for user 'User ID'. code=18456 [2022-01-27T12:02:26Z ERROR tiberius::tds::stream::token] message=Login failed for user 'User ID'. code=18456 [2022-01-27T12:02:38Z ERROR tiberius::tds::stream::token] message=Login failed for user 'User ID'. code=18456 Traceback (most recent call last): File &quot;&quot;, line 1, in File &quot;/app/path/xxxxxxxx/dev/lib/python3.8/site-packages/connectorx/<strong>init</strong>.py&quot;, line 118, in read_sql result = _read_sql( RuntimeError: Timed out in bb8</p> </blockquote>
<p>I am using mysql.connector, it works well for me:</p> <pre><code>import pandas as pd import mysql import mysql.connector host = &quot;1.1.1.1:1234&quot; user = &quot;myusername&quot;; password = &quot;mypassword&quot; database = &quot;mydb&quot; def load_db(): connection = mysql.connector.connect(host=host, user=user, password=password, database=database) cursor = connection.cursor() cursor.execute(&quot;SELECT * FROM mytable&quot;) field_names = [i[0] for i in mycursor.description] result = cursor.fetchall() dataframe = pd.DataFrame(result, columns=field_names) </code></pre>
python|sql-server|database-connection
1
1,904,347
60,088,826
How to output only the whole passage [Google Cloud Vision API, document_text_detection]
<p>I try Google Cloud Vision API's <code>document_text_detection</code>. It works really well in Japanese, but I have a problem. The response contains both the whole passage and partial passages with line breaks. I only need the whole passage.</p> <p>This is the response. </p> <pre><code>Google keep の画像 テキスト化 画像文字認識で手書き文字をどこ までテキスト化が出来るのかをテスト。 Google keep OCR機能がとれた け使えるかを確認 この手書き文書を認献してiPhone のGoogle keepでテキスト化して Macで編集をするのにどれだけ 出来るかも確認する。 Google keep の画像 テキスト化 画像文字認識で手書き文字をどこ までテキスト化が出来るのかをテスト 。 Google keep OCR機能がとれた け使えるかを確認 この手書き文書を認献してiPhone のGoogle keepでテキスト化して Macで編集をするのにどれだけ 出来るかも確認する 。 </code></pre> <p>This is my python code.</p> <pre><code>import io import os os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="credentials.json" """Detects text in the file.""" from google.cloud import vision client = vision.ImageAnnotatorClient() directory = 'resources/' files = os.listdir(directory) for i in files: with io.open(directory+i, 'rb') as image_file: content = image_file.read() image = vision.types.Image(content=content) response = client.document_text_detection(image=image) texts = response.text_annotations for text in texts: print('{}'.format(text.description)) </code></pre> <p>I read API reference (<a href="https://cloud.google.com/vision/docs/reference/rest/v1/AnnotateImageResponse#TextAnnotation" rel="nofollow noreferrer">https://cloud.google.com/vision/docs/reference/rest/v1/AnnotateImageResponse#TextAnnotation</a>) and came up with the idea to use <code>response.full_text_annotation</code> instead of <code>response.text_annotations</code>.</p> <pre><code>image = vision.types.Image(content=content) response = client.document_text_detection(image=image) texts = response.full_text_annotation print('{}'.format(text)) </code></pre> <p>But I got a error message.</p> <pre><code>File "/home/kazu/language/ocr.py", line 21, in &lt;module&gt; print('{}'.format(text)) NameError: name 'text' is not defined </code></pre> <p>Could you give me any information or suggestion? </p> <p>Thank you in advance.</p> <p>Sincerely, Kazu</p>
<p>Looks like a typo. You named your variable "texts", but tried to use variable "text".</p>
python|json|google-cloud-platform|google-api|google-cloud-vision
2
1,904,348
5,699,746
python, dynamically implement a class onthefly
<p>Flocks, we have a framework that allows our researchers to change methods(operations) in a class to suite thier needs while saving those changes. E.g Consider definition of the class foo below. (with version 1 &amp; version 2)</p> <pre><code>class foo: #class version 1 def operation_1(self): # version 1 pass def operation_2(self): # version 1 pass class foo: # class version 2 def operation_1(self): # version 2 pass def operation_2(self): # version 2 pass </code></pre> <p>another researcher may want to his class to appear as below; ( he is using a method from version 1 and another method from verion 2)</p> <pre><code>class foo: # class version 3 def operation_1(self): # version 1 pass def operation_2(self): # version 2 pass </code></pre> <p>Currenlty one has to copy and paste the source code. I am looking for a way to generalise this. probably something like</p> <pre><code> klass = foo() klass.operation_1 = foo.operation_1 # from ver 1 of foo klass.operation_2 = foo.operation_2 # from ver 2 of foo evaluate(klass) </code></pre> <p>and probably evaluate() is a function that evaluates such expressions. These classes are persistent </p>
<p><a href="http://docs.python.org/library/functions.html#type" rel="nofollow"><code>type</code></a> is the metaclass you want.</p> <pre><code>klass = type('klass', (foo,), {'operation_1': foo.operation_1, 'operation_2': foo.operation_2}) </code></pre>
python|methods
2
1,904,349
30,298,836
Convert part of string to integer
<p>I have two time values of type unicode and str like below:</p> <pre><code> time1 = "10:00 AM" #type: str time2 = "10:15 AM" #type: unicode </code></pre> <p>I want to convert integer part of time1 and time2 i.e 10:00 and 10:15 to integer and check if time2 > time1. </p> <p>Is there any way to convert part of string and unicode to integer ?</p>
<p>public static void main(String[] args) {</p> <p>String s1 = "10:00 AM";</p> <pre><code>String s2 = "10:20 AM"; int s1_mins = toMinutes(toNumber(s1)); int s2_mins = toMinutes(toNumber(s2)); if(s1_mins &lt; s2_mins){ System.out.println(s2 +" is more than "+ s1); }else{ System.out.println(s1 +" is more than "+s2); } } private static String toNumber(String s) { String[] timeInNumber = s.split(" "); return timeInNumber[0]; } private static int toMinutes(String s) { String[] hourMin = s.split(":"); int hour = Integer.parseInt(hourMin[0]); int mins = Integer.parseInt(hourMin[1]); int hoursInMins = hour * 60; return hoursInMins + mins; } </code></pre> <p>The above code helps you.</p>
string|python-2.7|unicode|integer|type-conversion
1
1,904,350
30,303,053
Python: Combination in lists of lists (?)
<p>First of all I wanted to say that my title is probably not describing my question correctly. I don't know how the process I am trying to accomplish is called, which made searching for a solution on stackoverflow or google very difficult. A hint regarding this could already help me a lot!</p> <p>What I currently have are basically two lists with lists as elements. Example:</p> <pre><code>List1 = [ [a,b], [c,d,e], [f] ] List2 = [ [g,h,i], [j], [k,l] ] </code></pre> <p>These lists are basically vertices of a graph I am trying to create later in my project, where the edges are supposed to be from List1 to List2 by rows.</p> <p>If we look at the first row of each of the lists, I therefore have:</p> <pre><code>[a,b] -&gt; [g,h,i] </code></pre> <p>However, I want to have assingments/edges of unique elements, so I need:</p> <pre><code>[a] -&gt; [g] [a] -&gt; [h] [a] -&gt; [i] [b] -&gt; [g] [b] -&gt; [h] [b] -&gt; [i] </code></pre> <p>The result I want to have is another list, with these unique assigments as elements, i.e. </p> <pre><code>List3 = [ [a,g], [a,h], [a,i], [b,g], ...] </code></pre> <p>Is there any elegant way to get from List1 and List2 to List 3?</p> <p>The way I wanted to accomplish that is by going row by row, determining the amount of elements of each row and then write clauses and loops to create a new list with all combinations possible. This, however, feels like a very inefficient way to do it.</p>
<p>You can <code>zip</code> your two lists, then use <code>itertools.product</code> to create each of your combinations. You can use <code>itertools.chain.from_iterable</code> to flatten the resulting list.</p> <pre><code>&gt;&gt;&gt; import itertools &gt;&gt;&gt; List1 = [ ['a','b'], ['c','d','e'], ['f'] ] &gt;&gt;&gt; List2 = [ ['g','h','i'], ['j'], ['k','l'] ] &gt;&gt;&gt; list(itertools.chain.from_iterable(itertools.product(a,b) for a,b in zip(List1, List2))) [('a', 'g'), ('a', 'h'), ('a', 'i'), ('b', 'g'), ('b', 'h'), ('b', 'i'), ('c', 'j'), ('d', 'j'), ('e', 'j'), ('f', 'k'), ('f', 'l')] </code></pre>
python|list|graph|vertices|edges
13
1,904,351
30,595,941
How to get all orderings of a list such that the list is equal to another list?
<p>I have lists A and B, which can have duplicates, for example:</p> <pre><code>A = ['x', 'x', 7] B = [7, 'x', 'x'] </code></pre> <p>Now I want all index permutations that permute list B into list A:</p> <pre><code>[1, 2, 0] # because [B[1], B[2], B[0]] == A [2, 1, 0] # because [B[2], B[1], B[0]] == A </code></pre> <p>Is there are way to achieve this without iterating over all possible permutations? I already use</p> <pre><code>import itertools for p in itertools.permutations(range(len(B))): if A == permute(B,p): </code></pre> <p>to iterate over all possible permutations and check for the ones I want, but I want to have the right ones faster.</p>
<p>You should decompose your problem in two : </p> <ul> <li>first find a particular permutation <code>sigma_0</code> that maps <code>B</code> onto <code>A</code></li> <li>find the set <code>S_B</code> of all the permutations that map B onto itself</li> </ul> <p>Then the set you are looking after is just <code>{sigma_0 \circ \sigma, sigma \in S_B}</code>.</p> <p>Now the question becomes : how to we determine <code>S_B</code> ? To do this, you can just observe that if you write the set <code>{0,1,2,..,n}</code> (with <code>n=2</code>in your case) as <code>A_1 \cup .. A_k</code>, where each <code>A_i</code>corresponds to the indices in <code>B</code>that correspond to the i-th element (in your case, you would have <code>A_1 = {1,2}</code>and <code>A_2 = {0}</code>), then each element of <code>S_B</code> can be written in a unique manner as a product <code>tau_1 \circ .. tau_k</code>where each <code>tau_i</code> is a permutation that acts on <code>A_i</code>.</p> <p>So, in your case <code>S_B = {id, (1,2)}</code> and you can take <code>sigma_0 = (0,2)</code>. It follows that the set your are after is <code>{(0,2), (2,0,1)}</code>.</p>
python|algorithm|list|sorting|permutation
2
1,904,352
67,002,263
Extracting Data from Multiple PDFs'
<p>I am trying to extract data from PDF document and have regarding that - I was able to get the code working for one single PDF. However, is there a way I can point the code to a folder with multiple PDF's and get the extract out in CSV? I am a complete beginner in Python, so any help will be appreciated. Below is the current code that I have.</p> <pre><code>import pdfplumber import pandas as pd file = 'Test Slip.pdf' lines = [] with pdfplumber.open(file) as pdf: pages = pdf.pages for page in pdf.pages: text = page.extract_text() for line in text.split('\n'): lines.append(line) print(line) df = pd.DataFrame(lines) df.to_csv('test.csv') </code></pre>
<p>I am not sure why you aim to write lines to a dataframe as rows but this should be what you need:</p> <pre><code>import pdfplumber import pandas as pd import os def extract_pdf(pdf_path): linesOfFile = [] with pdfplumber.open(pdf_path) as pdf: for pdf_page in pdf.pages: single_page_text = pdf_page.extract_text() for linesOfFile in single_page_text.split('\n'): linesOfFile.append(line) #print(linesOfFile) return linesOfFile folder_with_pdfs = 'folder_path' linesOfFiles = [] for pdf_file in os.listdir(folder_with_pdfs): if pdf_file.endswith('.pdf'): pdf_file_path = os.path.join(folder_with_pdfs, pdf_file) linesOfFile = extract_pdf(pdf_file_path) linesOfFiles.append(linesOfFile) df = pd.DataFrame(linesOfFiles) df.to_csv('test.csv') </code></pre>
python|parsing|pdf
0
1,904,353
42,909,218
Python Spark How to find cumulative sum by group using RDD API
<p>I am new to spark programming. Need help with spark python program, where i have input data like this and want to get cumulative summary for each group. Appreciate if someone guide me on this.</p> <h1>Input Data:</h1> <p>11,1,1,100</p> <p>11,1,2,150</p> <p>12,1,1,50</p> <p>12,2,1,70</p> <p>12,2,2,20</p> <h1>Output Data Needed like this:</h1> <p>11,1,1,100</p> <p>11,1,2,250 //(100+150)</p> <p>12,1,1,50</p> <p>12,2,1,70</p> <p>12,2,2,90 // (70+20)</p> <p>the code i tried:</p> <pre><code>def parseline(line): fields = line.split(",") f1 = float(fields[0]) f2 = float(fields[1]) f3 = float(fields[2]) f4 = float(fields[3]) return (f1, f2, f3, f4) input = sc.textFile("FIle:///...../a.dat") line = input.map(parseline) linesorted = line.sortBy(lambda x: (x[0], x[1], x[2])) runningpremium = linesorted.map(lambda y: (((y[0], y[1]), y[3])).reduceByKey(lambda accum, num: accum + num) for i in runningpremium.collect(): print i </code></pre>
<p>As in the comment, you can use window function to do the cumulative sum here on Spark Dataframe. First, we can create an example dataframe with dummie columns <code>'a', 'b', 'c', 'd'</code></p> <pre><code>ls = [(11,1,1,100), (11,1,2,150), (12,1,1,50), (12,2,1,70), (12,2,2,20)] ls_rdd = spark.sparkContext.parallelize(ls) df = spark.createDataFrame(ls_rdd, schema=['a', 'b', 'c', 'd']) </code></pre> <p>You can partition by column <code>a</code> and <code>b</code> then order by column <code>c</code>. Then, apply the <code>sum</code> function over the column <code>d</code> at the end</p> <pre><code>from pyspark.sql.window import Window import pyspark.sql.functions as func w = Window.partitionBy([df['a'], df['b']]).orderBy(df['c'].asc()) df_cumsum = df.select('a', 'b', 'c', func.sum(df.d).over(w).alias('cum_sum')) df_cumsum.sort(['a', 'b', 'c']).show() # simple sort column </code></pre> <p><strong>Output</strong></p> <pre><code>+---+---+---+-------+ | a| b| c|cum_sum| +---+---+---+-------+ | 11| 1| 1| 100| | 11| 1| 2| 250| | 12| 1| 1| 50| | 12| 2| 1| 70| | 12| 2| 2| 90| +---+---+---+-------+ </code></pre>
python|apache-spark|pyspark|rdd
2
1,904,354
42,841,037
HTML and Text in Python Driven Email
<p>I am using MIMEMultipart to send emails from Python. The code is as follows:</p> <pre><code>sender = "EMAIL" recipients = ["EMAIL"] msg = MIMEMultipart('alternative') msg['Subject'] = "Subject Text" msg['From'] = sender msg['To'] = ", ".join(recipients) html = PandasDataFrame.to_html() part2 = MIMEText(html, 'html') msg.attach(part2) SERVER = "SERVER" server = smtplib.SMTP(SERVER) server.sendmail(sender, recipients, msg.as_string()) server.quit() </code></pre> <p>This inserts a Python Pandas dataframe as HTML and works fine. Is it possible to add footnotes as text to the email body as well? How would the code work for doing both? Alternatively, I'm fine adding comments as HTML but more of less need some footnotes added to the email body. </p> <p>Thanks </p>
<p>This code works:</p> <p>First, import:</p> <pre><code>from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText from email.mime.application import MIMEApplication #Used for attachments import smtplib </code></pre> <p>And the code:</p> <pre><code>sender = "EMAIL" recipients = ["EMAIL1","EMAIL2"] msg = MIMEMultipart('mixed') #use mixed instead of alternative to load multiple things msg['Subject'] = "Subject Text" msg['From'] = sender msg['To'] = ", ".join(recipients) html = PandasDataFrame1.to_html() #first dataframe #insert text as follows html += ''' &lt;br&gt;&lt;br&gt; This is a new line of random text. &lt;br&gt;&lt;br&gt; ''' html += PandasDataFrame2.to_html() #second dataframe #put the html into the email body html = MIMEText(html, 'html') msg.attach(html) </code></pre> <p>If you also want to attach a file to the email, use this code</p> <pre><code>ATTACHMENT_PATH = 'path\\file.type' with open(ATTACHMENT_PATH, 'r') as fileobj: attachment = MIMEApplication(fileobj.read(), Name='file.type') attachment['Content-Disposition'] = 'attachment; filename="file.type"' msg.attach(attachment) </code></pre> <p>And the code to send using a server</p> <pre><code>SERVER = "SERVER" server = smtplib.SMTP(SERVER) server.sendmail(sender, recipients, msg.as_string()) server.quit() </code></pre>
python|html|email|pandas|mime
1
1,904,355
72,395,380
How to drop sentences that are too long in Huggingface?
<p>I'm going through the <a href="https://huggingface.co/course/chapter3/3?fw=pt" rel="nofollow noreferrer">Huggingface tutorial</a> and it appears as the library has automatic <code>truncation</code>, to cut sentences that are too long, based on a max value, or other things.</p> <p>How can I remove sentences for the same reasoning (sentences are too long, based on a max value, etc), instead of truncating them? e.g., if the sentence is too long, drop it.</p> <p><strong>Example for truncation:</strong></p> <pre><code>from transformers import AutoTokenizer checkpoint = &quot;distilbert-base-uncased-finetuned-sst-2-english&quot; tokenizer = AutoTokenizer.from_pretrained(checkpoint) sentence_input = 'this is an input' result = tokenizer(sentence_input, padding=True, truncation=True, return_tensors=&quot;pt&quot;) </code></pre> <p><strong>Example to prepare samples in a batch</strong></p> <pre><code>from datasets import load_dataset from transformers import AutoTokenizer, DataCollatorWithPadding raw_datasets = load_dataset(&quot;glue&quot;, &quot;mrpc&quot;) checkpoint = &quot;bert-base-uncased&quot; tokenizer = AutoTokenizer.from_pretrained(checkpoint) def tokenize_function(example): return tokenizer(example[&quot;sentence1&quot;], example[&quot;sentence2&quot;], truncation=True) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) data_collator = DataCollatorWithPadding(tokenizer=tokenizer) </code></pre>
<p>A <a href="https://huggingface.co/docs/datasets/process#select-and-filter" rel="nofollow noreferrer">filter</a> is all you need:</p> <pre class="lang-py prettyprint-override"><code>import pandas from datasets import Dataset from transformers import AutoTokenizer df = pandas.DataFrame([{&quot;sentence1&quot;: &quot;bla&quot;, &quot;sentence2&quot;: &quot;bla&quot;}, {&quot;sentence1&quot;: &quot;bla &quot;*600, &quot;sentence2&quot;: &quot;bla&quot;}]) dataset = Dataset.from_pandas(df) checkpoint = &quot;bert-base-uncased&quot; tokenizer = AutoTokenizer.from_pretrained(checkpoint) #Not truncating the samples allows us to filter them def tokenize_function(example): return tokenizer(example[&quot;sentence1&quot;], example[&quot;sentence2&quot;]) tokenized_datasets = dataset.map(tokenize_function, batched=True) print(len(tokenized_datasets)) tokenized_datasets = tokenized_datasets.filter(lambda example: len(example['input_ids']) &lt;= tokenizer.max_model_input_sizes[checkpoint]) print(len(tokenized_datasets)) </code></pre> <p>Output:</p> <pre><code>Token indices sequence length is longer than the specified maximum sequence length for this model (1205 &gt; 512). Running this sequence through the model will result in indexing errors 2 1 </code></pre>
python|huggingface-transformers|huggingface-tokenizers|huggingface-datasets
0
1,904,356
26,749,178
Python 3: list of 2 elements into dictionary
<p>Python3 OMG, sorry for a probably often repeated question in one way or another. For me, manipulating lists has always been like a black box of headaches, and I've been battling for an hour now with no results.</p> <p>I have a list in the form:</p> <pre class="lang-py prettyprint-override"><code>[('John', 'first@email.com'), ('John', 'second@email.com'), ('Jack', 'third@email.com')] </code></pre> <p>I would like to transform this into an iterable dictionary (that I could then bulk insert as a document using pymongo) so that it looks like this:</p> <pre class="lang-py prettyprint-override"><code> new_posts = [{"sender": "John", "email": "first@email.com"}, {"sender": "John", "email": "second@email.com"}, {"sender": "Jack", "email": "third@email.com"}] </code></pre> <p>How would this be achievable in an easy to read and efficient manner? </p>
<p>You can try</p> <pre><code>&gt;&gt;&gt; a = [('John', 'first@email.com'), ('John', 'second@email.com'), ('Jack', 'third@email.com')] &gt;&gt;&gt; [dict([('sender', x[0]), ('email',x[1])]) for x in a] [{'sender': 'John', 'email': 'first@email.com'}, {'sender': 'John', 'email': 'second@email.com'}, {'sender': 'Jack', 'email': 'third@email.com'}] </code></pre> <p>This will convert list to dict</p>
python|dictionary
2
1,904,357
56,616,950
Is there a recommended way of ensuring immutability
<p>I am observing following behavior since python passes object by reference?</p> <pre class="lang-py prettyprint-override"><code>class Person(object): pass person = Person() person.name = 'UI' def test(person): person.name = 'Test' test(person) print(person.name) &gt;&gt;&gt; Test </code></pre> <p>I found copy.deepcopy() to deepcopy object to prevent modifying the passed object. Are there any other recommendations ? </p> <pre class="lang-py prettyprint-override"><code>import copy class Person(object): pass person = Person() person.name = 'UI' def test(person): person_copy = copy.deepcopy(person) person_copy.name = 'Test' test(person) print(person.name) &gt;&gt;&gt; UI </code></pre>
<blockquote> <p>I am observing following behavior since python passes object by reference?</p> </blockquote> <p>Not really. it's a subtle question. you can look at <a href="https://stackoverflow.com/questions/986006/how-do-i-pass-a-variable-by-reference">python - How do I pass a variable by reference? - Stack Overflow</a></p> <p>Personally, I don't fully agree with the accepted answer and recommend you google <code>call by sharing</code>. Then, you can make your own decision on this subtle question.</p> <blockquote> <p>I found copy.deepcopy() to deepcopy object to prevent modifying the passed object. Are there any other recommendations ?</p> </blockquote> <p>As far as I know, there no other better way, if you don't use third package.</p>
python|class|immutability
1
1,904,358
45,058,346
Why am I unable to scrape this website?
<p>Say I want to scrape the following url:</p> <p><a href="https://soundcloud.com/search/sounds?q=edm&amp;filter.created_at=last_week" rel="nofollow noreferrer">https://soundcloud.com/search/sounds?q=edm&amp;filter.created_at=last_week</a></p> <p>I have the following python code:</p> <pre><code>import requests from lxml import html urlToSearch = 'https://soundcloud.com/search/sounds?q=edm&amp;filter.created_at=last_week' page = requests.get(urlToSearch) tree = html.fromstring(page.content) print(tree.xpath('//*[@id="content"]/div/div/div[3]/div/div/div/ul/div/div/text()')) </code></pre> <p>The trouble is when I print the text at the following xpath: </p> <pre><code>//*[@id="content"]/div/div/div[3]/div/div/div/ul/div/div </code></pre> <p>nothing appears but <code>[]</code> despite me confirming that "Found 500+ tracks" should be there. What am i doing wrong?</p>
<p>The problem is that requests does not generate dynamic content.</p> <p>Right click on the page and view the page source, you'll see that the static content does not include any of the content that you see after the dynamic content has loaded. </p> <p>However, (using Chrome) open dev tools, click on network and XHR. It looks like you can get the data through an API which is better than scraping anyway!</p>
python|web-scraping
1
1,904,359
45,199,104
Is there a fast way to create a numpy array that reduces unique values to their lowest form?
<p>Sorry if the question is worded confusingly. I have an array similar to the following: <code>[[3,7,9,5],[3,3,7,5]]</code>, though much larger.</p> <p>How can I convert this into a form such as<code>[[0,2,3,1],[0,0,2,1]]</code> where each unique value starting with the lowest is given an identifying number, starting with 0? Currently I am using a for loop, but it is very slow. Is there any functions in numpy that could speed this up? </p>
<p>We can use one approach using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html" rel="nofollow noreferrer"><code>np.unique</code></a> with its optional arg <code>return_inverse</code> set as <code>True</code>. This flattens the input when fed to it, giving us unique IDs in sequence starting from <code>0</code>. The uniqueness is maintained across all elements in the array because it was flattened. So, the output needs a reshape afterwards to bring it back to the same shape as the input.</p> <p>Thus, the implementation would be -</p> <pre><code>np.unique(a, return_inverse=True)[1].reshape(a.shape) </code></pre> <p>Sample run -</p> <pre><code>In [208]: a = np.array([[3,7,9,5],[3,3,7,5]]) In [209]: np.unique(a, return_inverse=True)[1].reshape(a.shape) Out[209]: array([[0, 2, 3, 1], [0, 0, 2, 1]]) </code></pre>
python|arrays|numpy|multidimensional-array
5
1,904,360
57,911,497
Polynomial Features and polynomial regression in sklearn
<p>I have two questions:</p> <ol> <li>What is the output of <code>fit_transform</code> on a polynomial feature (what the numbers mean)? Correct me if I am wrong, but as far as I understood this method fit and transform our varriables to a polynomal model (of our choice of degree).<br> For instance:</li> </ol> <pre><code>from sklearn.preprocessing import PolynomialFeatures poly=PolynomialFeatures(degree=2) poly.fit_transform(df[[firstColumn,secondColumn]],df[targetColumn]) </code></pre> <p>So, the outcome is a 2-dimensional polynomial with df[firstColumn] and df[secondColumn] as varriables. </p> <p>2) In polynomial regression, why do we need to use fit_tranform? What is the logic behind it?<br> For instance,</p> <pre><code>Xpoly=poly.fit_transform(X) lin=LinearRegression() lin.fit(Xpoly,y) </code></pre>
<p>From <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html" rel="nofollow noreferrer">sklearn documentation</a>:</p> <blockquote> <p><strong>sklearn.preprocessing.PolynomialFeatures</strong> <br> Generate a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, if an input sample is two dimensional and of the form [a, b], the degree-2 polynomial features are [1, a, b, a^2, ab, b^2].<br></p> </blockquote> <p>So, this does exactly as you think.</p> <blockquote> <p><strong>fit_transform(self, X, y=None, **fit_params)</strong><br> Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.<br></p> </blockquote> <p>In sklearn, <code>fit()</code> just calculates the parameters and saves them as an internal objects state. Afterwards, you can call its <code>transform()</code> method to apply the transformation to a particular set of examples.</p> <p><code>fit_transform()</code> joins these two steps and is used for the initial fitting of parameters on the training set x, but it also returns a transformed x′. Internally, it just calls first <code>fit()</code> and then <code>transform()</code> on the same data.</p>
python|scikit-learn
1
1,904,361
58,069,067
Easiest way to iterate through python list in batches
<p>Say I have a data structure that looks like this:</p> <pre><code>entries = [{"name": "some_name", "age": "some_age"...}, {"name": "some_other_name", "age": "some_other_age"}, ...] </code></pre> <p>and I want to iterate through in batches in 10 but this doesn't work:</p> <pre><code> x = zip(*[iter(entries)]*10) &gt;&gt;&gt; x &lt;zip object at 0x1103db730&gt; &gt;&gt;&gt; list(x) [] </code></pre> <p>What I want is to eventually get an array of arrays of length 10 but I get nothing. The last array may have fewer than 10 elements. What can I do?</p>
<p>How about this?</p> <pre><code>chunks = [data[i:i+chunk_size] for i in range(0,len(data),chunk_size)] </code></pre>
python-3.x
1
1,904,362
58,159,244
Error in loading bootstrap on Python Flask
<p>I have the following code app.py in flask.</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask, render_template, request from flask_bootstrap import Bootstrap app = Flask(__name__) Bootstrap(app) @app.route('/') def home(): return render_template("home.html") if __name__ == '__main__': app.run() </code></pre> <p>Where home.html:</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang="en" dir="ltr"&gt; &lt;head&gt; &lt;meta charset="utf-8"&gt; &lt;title&gt;Cheppers DevOps Challeng | Home&lt;/title&gt; &lt;/head&gt; &lt;body&gt; {% extends "template.html" %} {% block content %} {% endblock %} &lt;/body&gt; &lt;/html&gt; </code></pre> <p>The template has a simple bootstrap code that runs perfectly.</p> <p>I would like to run bootstrap over all my html pages, where I created a template webpage for the navbar (template.html)</p> <p>I have got the following error:</p> <pre><code> File "/Volumes/Data/final/web_cheppers/app.py", line 15, in home return render_template("home.html") File "/Volumes/Data/final/web_cheppers/venv/lib/python3.7/site-packages/flask/templating.py", line 140, in render_template ctx.app, File "/Volumes/Data/final/web_cheppers/venv/lib/python3.7/site-packages/flask/templating.py", line 120, in _render rv = template.render(context) File "/Volumes/Data/final/web_cheppers/venv/lib/python3.7/site-packages/jinja2/asyncsupport.py", line 76, in render return original_render(self, *args, **kwargs) File "/Volumes/Data/final/web_cheppers/venv/lib/python3.7/site-packages/jinja2/environment.py", line 1008, in render return self.environment.handle_exception(exc_info, True) File "/Volumes/Data/final/web_cheppers/venv/lib/python3.7/site-packages/jinja2/environment.py", line 780, in handle_exception reraise(exc_type, exc_value, tb) File "/Volumes/Data/final/web_cheppers/venv/lib/python3.7/site-packages/jinja2/_compat.py", line 37, in reraise raise value.with_traceback(tb) File "/Volumes/Data/final/web_cheppers/templates/home.html", line 8, in top-level template code {% extends "template.html" %} File "/Volumes/Data/final/web_cheppers/templates/template.html", line 47, in top-level template code {% block scripts %} File "/Volumes/Data/final/web_cheppers/templates/template.html", line 49, in block "scripts" {{ bootstrap.load_js() }} File "/Volumes/Data/final/web_cheppers/venv/lib/python3.7/site-packages/jinja2/environment.py", line 430, in getattr return getattr(obj, attribute) jinja2.exceptions.UndefinedError: 'bootstrap' is undefined </code></pre> <p>I have tried to uninstall and install Flask-Bootstrap, but still nothing.</p>
<p>I have found it what was wrong.</p> <p>Obviously, I was importing wrong library of Flask-Bootstrap which is unidentified by pycharm. I found the one which is working Bootstrap-Flask on 1.1.0 version</p>
python|twitter-bootstrap|flask
0
1,904,363
57,805,040
I'm converting seconds to hour and minutes using datetime in pandas, but it is showing with the date 1970. How do i remove date?
<p>I'm trying to convert seconds to hour and minutes, but it is taking time with the date 1970. How do I fetch only time?</p> <pre><code>ageis_aht_percentile['aht_in_secs'] = pd.to_datetime(ageis_aht_percentile["aht_in_secs"], unit='s') </code></pre> <p>I want only time, not date. Since it is coming with the date, I'm not able to create a clear graph.</p> <p>This is how my data is:</p> <pre><code> aht_in_secs count 5.000000 mean 907.200000 std 552.150976 min 292.000000 25% 406.000000 50% 1084.000000 75% 1135.000000 max 1619.000000 </code></pre> <p>Please, someone, help me in converting aht_in_secs column to HH:MM:SS format</p>
<p>IIUC, this is what you need.</p> <pre><code>ageis_aht_percentile['aht_in_secs'] = pd.to_datetime(ageis_aht_percentile["aht_in_secs"].round(), unit='s').dt.time </code></pre> <p><strong>Output</strong> Showing bout intput &amp; output as columns</p> <pre><code> aht_in_secs aht_in_secs3 count 5.000000 00:00:05 mean 907.200000 00:15:07 std 552.150976 00:09:12 min 292.000000 00:04:52 25% 406.000000 00:06:46 50% 1084.000000 00:18:04 75% 1135.000000 00:18:55 max 1619.000000 00:26:59 </code></pre>
pandas
0
1,904,364
56,388,640
How to Insert Huge Pandas Dataframe in MySQL table with Parallel Insert Statement?
<p>I am working on a project where I have to write a data frame with Millions of rows and about 25 columns mostly of numeric type. I am using <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generat/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer">Pandas DataFrame to SQL Function</a> to dump the dataframe in Mysql table. I have found this function creates an Insert statement that can insert multiple rows at once. This is a good approach but MySQL has a limitation on the length of query that can be built using this approach.</p> <p>Is there a way such that insert that in parallel in the same table so that I can speed up the process? </p>
<p>You can do a few things to achieve that.</p> <p>One way is to use an additional argument while writing to sql.</p> <pre class="lang-py prettyprint-override"><code>df.to_sql(method = 'multi') </code></pre> <p>According to this <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html" rel="noreferrer">documentation</a>, passing 'multi' to method argument allows you to bulk insert. </p> <p>Another solution is to construct a custom insert function using multiprocessing.dummy. here is the link to the documentation :<a href="https://docs.python.org/2/library/multiprocessing.html#module-multiprocessing.dummy" rel="noreferrer">https://docs.python.org/2/library/multiprocessing.html#module-multiprocessing.dummy</a></p> <pre class="lang-py prettyprint-override"><code>import math from multiprocessing.dummy import Pool as ThreadPool ... def insert_df(df, *args, **kwargs): nworkers = 4 # number of workers that executes insert in parallel fashion chunk = math.floor(df.shape[0] / nworkers) # number of chunks chunks = [(chunk * i, (chunk * i) + chunk) for i in range(nworkers)] chunks.append((chunk * nworkers, df.shape[0])) pool = ThreadPool(nworkers) def worker(chunk): i, j = chunk df.iloc[i:j, :].to_sql(*args, **kwargs) pool.map(worker, chunks) pool.close() pool.join() .... insert_df(df, "foo_bar", engine, if_exists='append') </code></pre> <p>The second method was suggested at <a href="https://stackoverflow.com/a/42164138/5614132">https://stackoverflow.com/a/42164138/5614132</a>.</p>
mysql|pandas|pandasql
6
1,904,365
56,253,096
"TypeError: 'str' object is not callable" error while using Jupyter Notebook
<p>I tried to plot a graph using pyplot from matplotlib and everything went fine until I tried to add the title.</p> <pre><code>from matplotlib import pyplot as plt a = [1, 4, 8] b = [1, 9, 18] plt.plot(a, b) plt.title("Title") plt.xlabel("x") plt.ylabel("y") plt.show() </code></pre> <p>The code works fine except the title, from which an error is displayed as follows:</p> <pre><code> TypeError Traceback (most recent call last) &lt;ipython-input-25-a1c519e5c0a1&gt; in &lt;module&gt; 4 plt.xlabel("X") 5 plt.ylabel("Y") ----&gt; 6 plt.title("title") 7 plt.show() TypeError: 'str' object is not callable </code></pre> <p>BTW, I am running this using Jupyter notebook</p>
<p>I had this error twice on the Jupyter notebook. Restart your anaconda or better: from the kernel tab restart and run all. This should work</p>
python|string|jupyter-notebook|anaconda|callable
4
1,904,366
56,270,776
How to make read_csv more flexibile with numbers and whitespaces
<p>I want to read a <code>txt.file</code> with Pandas and the Problem is the seperator/delimiter consits of a number and Minimum two blanks afterwards.</p> <p>I already tried it similiar to this code (<a href="https://stackoverflow.com/questions/15026698/how-to-make-separator-in-pandas-read-csv-more-flexible-wrt-whitespace">How to make separator in pandas read_csv more flexible wrt whitespace?</a>):</p> <pre><code>pd.read_csv("whitespace.txt", header=None, delimiter=r"\s+") </code></pre> <p>This is only working if there is only a blank or more. So I adjustet it to the following code.</p> <pre><code>delimiter=r"\d\s\s+" </code></pre> <p>But this is seperating my dataframe when it sees two blanks or more, but i strictly Need the number before it followed by at least two blanks, anyone has an idea how to fix it?</p> <p>My data Looks as follows:</p> <pre><code>I am an example of a dataframe I have Problems to get read 100,00 So How can I read it 20,00 </code></pre> <p>so the first row should be: <code>I am an example of a dataframe I have Problems to get read 100,00</code> followed by the second row: <code>So HOw can I read it 20,00</code></p>
<p>Id try it like this.</p> <p>Id manipulate the text file before I attempt to parse it to a dataframe as follows:</p> <pre><code>import pandas as pd import re f = open("whitespace.txt", "r") g = f.read().replace("\n", " ") prepared_text = re.sub(r'(\d+,\d+)', r'\1@', g) df = pd.DataFrame({'My columns':prepared_text.split('@')}) print(df) </code></pre> <p>This gives the following:</p> <pre><code> My columns 0 I am an example of a dataframe I have Problems... 1 So How can I read it 20,00 2 </code></pre> <p>I guess this'd suffice as long as the input file wasnt too large but using the re module and substitiution gives you the control you seek.</p> <p>The (\d+,\d+) parentheses mark a group which we want to match. We're basically matching any of your numbers in your text file. Then we use the \1 which is called a backreference to the matched group which is referred to when specifying a replacement. So \d+,\d+ is replaced by \d+,\d+@.</p> <p>Then we use the inserted character as a delimiter.</p> <p>There are some good examples here:</p> <p><a href="https://lzone.de/examples/Python%20re.sub" rel="nofollow noreferrer">https://lzone.de/examples/Python%20re.sub</a></p>
python|pandas|csv
1
1,904,367
18,687,330
how to build html body using list elements
<pre><code>html_body = """\ &lt;html&gt; &lt;head&gt; &lt;/head&gt; &lt;body&gt; &lt;br&gt; &lt;table border="1"&gt; %s &lt;/table&gt; &lt;/body&gt; &lt;/html&gt; """ % (html_content) html_contents = [&lt;tr&gt;&lt;td&gt;Name&lt;/td&gt;&lt;td&gt;id&lt;/td&gt;&lt;/tr&gt; , &lt;tr&gt;&lt;td&gt;Smith&lt;/td&gt;&lt;td&gt;1001&lt;/td&gt;&lt;/tr&gt;] </code></pre> <p>I am trying to for html body from a list so that it can be sent in an email. How can i put all the elements of html_content list in the html_body above ? Better way ?</p>
<p>You can use <code>join()</code> to join strings from the list:</p> <pre><code>&gt;&gt;&gt; html_contents = ['&lt;tr&gt;&lt;td&gt;Name&lt;/td&gt;&lt;td&gt;id&lt;/td&gt;&lt;/tr&gt;' , '&lt;tr&gt;&lt;td&gt;Smith&lt;/td&gt;&lt;td&gt;1001&lt;/td&gt;&lt;/tr&gt;'] &gt;&gt;&gt; html_body = """\ ... &lt;html&gt; ... &lt;head&gt; ... &lt;/head&gt; ... &lt;body&gt; ... &lt;br&gt; ... &lt;table border="1"&gt; ... %s ... &lt;/table&gt; ... &lt;/body&gt; ... &lt;/html&gt; ... """ % ("".join(html_contents)) &gt;&gt;&gt; print html_body &lt;html&gt; &lt;head&gt; &lt;/head&gt; &lt;body&gt; &lt;br&gt; &lt;table border="1"&gt; &lt;tr&gt;&lt;td&gt;Name&lt;/td&gt;&lt;td&gt;id&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Smith&lt;/td&gt;&lt;td&gt;1001&lt;/td&gt;&lt;/tr&gt; &lt;/table&gt; &lt;/body&gt; &lt;/html&gt; </code></pre>
python|html
1
1,904,368
71,728,141
How many async/await is ok to spawn?
<p>I am fetching data from a server asynchronously. Now what I want to ask is how many async/await functions are ok to spawn. i.e<br /> If I call 200 asynchronous functions in a for loop, and every async function make 100 requests to the server. is it efficient?<br /> Here is some dummy code:</p> <pre><code>getData() async { for (var i=0; i&lt;100; i++) { serverRequest(); } } main() { for (var i=0; i&lt;100; i++) { getData(); } } </code></pre> <p>What if I call the <code>getData()</code> a thousand times? I am using <strong>Dart</strong> language, but the question is applicable to all the async/await supported languages.</p>
<p>In Dart, async/await creates <code>Future</code>s (or <code>Stream</code>s if you use <code>async*</code>) which run within an isolate, which is single-threaded. You may think that they are async because they execute/complete in a random order, but in reality, all of your code runs synchronously inside of an event loop.</p> <p>Don't worry and spawn as many as you want (within reason, you still can run out of memory and get <code>Stack Overflow</code> exception)</p>
javascript|python|flutter|dart|asynchronous
2
1,904,369
55,294,973
Difficult dataframe lookup query
<p>I'm pretty sure there is a question for this already so if someone could just point me in the right direction.</p> <p>I have two dataframes, DF1:</p> <pre><code>+----------+-----------+------------+-------------+--------------------+ | Survived | Surname | FamilySize | NumSurvived | FamilySurvivalRate | +----------+-----------+------------+-------------+--------------------+ | 0 | Braund | 2 | 0 | 0 | | 1 | Cumings | 1 | 1 | 1 | | 1 | Heikkinen | 1 | 1 | 1 | | 1 | Futrelle | 2 | 1 | 0.5 | | 0 | Allen | 2 | 1 | 0.5 | | 0 | Moran | 3 | 1 | 0.333333333 | | 0 | McCarthy | 1 | 0 | 0 | | 0 | Palsson | 4 | 0 | 0 | +----------+-----------+------------+-------------+--------------------+ </code></pre> <p>and DF2:</p> <pre><code>+----------+-----------+------------+-------------+--------------------+ | Survived | Surname | FamilySize | NumSurvived | FamilySurvivalRate | +----------+-----------+------------+-------------+--------------------+ | 0 | Braund | 2 | 0 | | | 1 | Cumings | 1 | 1 | | | 1 | Heikkinen | 1 | 1 | | | 1 | Futrelle | 2 | 1 | | | 0 | Allen | 2 | 1 | | | 0 | Moran | 3 | 1 | | | 0 | McCarthy | 1 | 0 | | | 0 | Palsson | 4 | 0 | | +----------+-----------+------------+-------------+--------------------+ </code></pre> <p>For each surname in DF2 I need to find the FamilySurvivalRate for that surname in DF1 and put the value in DF2. If the surname is not in DF1 it needs to be 0.</p> <p>Thanks!</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> by <code>Series</code> created from <code>df1</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>Series.fillna</code></a> for replace non matched values:</p> <pre><code>print (df2) Survived Surname FamilySize NumSurvived 0 0 Braund 2 0 1 1 Cumings1 1 1 &lt;- change surname for no match 2 1 Heikkinen 1 1 3 1 Futrelle 2 1 4 0 Allen 2 1 5 0 Moran 3 1 6 0 McCarthy 1 0 7 0 Palsson 4 0 s = df1.set_index('Surname')['FamilySurvivalRate'] df2['FamilySurvivalRate'] = df2['Surname'].map(s).fillna(0) print (df2) Survived Surname FamilySize NumSurvived FamilySurvivalRate 0 0 Braund 2 0 0.000000 1 1 Cumings1 1 1 0.000000 2 1 Heikkinen 1 1 1.000000 3 1 Futrelle 2 1 0.500000 4 0 Allen 2 1 0.500000 5 0 Moran 3 1 0.333333 6 0 McCarthy 1 0 0.000000 7 0 Palsson 4 0 0.000000 </code></pre>
python|pandas|dataframe
1
1,904,370
55,447,726
Trying to compare two text files in UTF-8 encoding to find and count similar words
<p>I want to compare two text files that are in UTF-8 encoding, File 1 is a dictionary of words and file 2 contains a sentence. I want to find out the similar words that are present in File 1 and File 2.</p> <pre><code>import codecs f1 = codecs.open('poswords.txt', 'r', 'UTF-8') for line in f1: print(line) f2 = codecs.open('0001b.txt', 'r', 'UTF-8') words=set(line.strip() for line in f1) for line in f2: word,freq =line.split() if word in words: print (word) </code></pre> <p>File 1(i.e Dictionary) contains</p> <pre><code>کرخت ناجائز فائدہ آب دیدہ ابال ابال کر پکانا **ابالنا** ابتدائ ابتر </code></pre> <p>File 2 contains a sentence:</p> <pre><code>وفاقی وزیر اطلاعات فواد چودھری سے استعفیٰ لے لیا**ابالنا** گیا ہے </code></pre> <p>There are two common words in both the files i want to find them and count their occurences. I want that it should return the similar words, but it returns an error saying that ValueError: too many values to unpack (expected 2)</p>
<p>You are attempting to retrieve two values from <code>split</code>:</p> <pre><code>word, freq = line.split() </code></pre> <p>This will only work when there are exactly two words on a line (and by the variable naming, the second should apparently be a frequency count).</p> <p>Another problem is that you consume all the lines from the first file when you <code>print</code> them. Once you have read all the lines from a handle, attempting to read more lines will simply return nothing. The simple fix is to both print and save each input word to the <code>words</code> set inside the same loop. (Maybe comment out the <code>print()</code>, actually; or <code>import logging</code> and change it to <code>logging.debug()</code>. This also ensures that the diagnostic output is not mixed with the program's regular standard output.)</p> <p>In Python 3, UTF-8 should be the default encoding on most sane platforms (though this conspicuously and emphatically excludes Windows); maybe you don't need the explicit <code>codecs</code> at all.</p> <p>Finally, you should be aware that Unicode can often represent the same string in multiple ways. I don't read Arabic, but briefly, for example, you can write "salaam" as a single glyph <a href="https://www.fileformat.info/info/unicode/char/fdf5/index.htm" rel="nofollow noreferrer">U+FDF5</a> or you can spell it out. Unicode normalization attempts to iron out any such wrinkles so you can be sure that text which displays the same is also written the same, and thus identical to Python's string comparison operator.</p> <pre><code>import codecs import unicodedata with codecs.open('poswords.txt', 'r', 'UTF-8') as f1: words = set() for line in f1: print(line) words.add(unicodedata.normalize('NFC', line.strip())) with codecs.open('0001b.txt', 'r', 'UTF-8') as f2: for line in f2: for word in line.split(): if unicodedata.normalize('NFC', word) in words: print (word) </code></pre>
python|utf-8|file-handling
0
1,904,371
57,298,153
Data-logging from an i2c-connected Atlas Scientific Sensor to a CSV file
<p>I am relatively new to Python, and programming as a whole. I am progressively getting the hang of it, however I have been stumped as of late in regards to one of my latest projects. I have a set of Atlas Scientific EZO circuits w/ their corresponding sensors hooked up to my Raspberry Pi 3. I can run the i2c script fine, and the majority of the code makes sense to me. However, I would like to pull data from the sensors and log it with a time stamp in a CSV file, taking data points in timed intervals. I am not quite sure how to pull the data from the sensor, and put it into a CSV. Making CSVs in Python is fairly simple, as is filling them with data, but I cannot seem to understand how I would make the data that goes into the CSV the same as what is displayed in the terminal when one runs the Poll function. Attached is the i2c sample code from Atlas' website. I have annotated it a bit more so as to help me understand it better.</p> <p>I have already attempted to make sense of the poll function, but am confused in regards to the self.file_write and self.file_read methods used throughout the code. I do believe they would be of use in this instance but I am generally stumped in terms of implementation. Below you will find a link to the Python script (i2c.py) written by Atlas Scientific</p> <p><a href="https://github.com/AtlasScientific/Raspberry-Pi-sample-code/blob/master/i2c.py" rel="nofollow noreferrer">https://github.com/AtlasScientific/Raspberry-Pi-sample-code/blob/master/i2c.py</a></p>
<p>I'm guessing by "the polling function" you are referring to this section of the code:</p> <pre><code># continuous polling command automatically polls the board elif user_cmd.upper().startswith("POLL"): delaytime = float(string.split(user_cmd, ',')[1]) # check for polling time being too short, change it to the minimum timeout if too short if delaytime &lt; AtlasI2C.long_timeout: print("Polling time is shorter than timeout, setting polling time to %0.2f" % AtlasI2C.long_timeout) delaytime = AtlasI2C.long_timeout # get the information of the board you're polling info = string.split(device.query("I"), ",")[1] print("Polling %s sensor every %0.2f seconds, press ctrl-c to stop polling" % (info, delaytime)) try: while True: print(device.query("R")) time.sleep(delaytime - AtlasI2C.long_timeout) except KeyboardInterrupt: # catches the ctrl-c command, which breaks the loop above print("Continuous polling stopped") </code></pre> <p>If this is the case then if looks like you can recycle most of this code for your use. You can grab the string you are seeing in your console with device.query("R"), instead of printing it, grab the return value and write it to your CSV. </p>
python|csv|raspberry-pi3|i2c|atlas
0
1,904,372
42,416,301
Store json data in file when connection between mqtt client and broker lost
<p>I have mqtt client which publishing some messages to broker.</p> <p>The message is like below</p> <pre><code>{ "message": { "body": { "software_sub_version": "O", "software_version": 1.25, "hardware_id": 17, "machine_id": 39 }, "header": { "Machine_name": "MY_MACHINE", "message_type": "machine_data", "message_version": "1.0", "publish_time": "2017-02-22T06:44:27.281644+00:00" } } } </code></pre> <p>While publishing the message, I am checking if connection with broker is there or not, if it is not there I am storing the message to file using below code.</p> <pre><code>def mqtt_publish(): if no_connection: store_json_data() else: send_msg() </code></pre> <p>I also need to send stored msg one by one to broker.</p> <pre><code>while True: if stored_msg_present: send_stored_msg_to_broker() </code></pre> <p>What should be the best way to do this. Is it good to use database like tinydb for this. But I am not able retrieve element from tinydb data base.</p>
<p>Save to database or file or just save in memory is up to you and your environment.</p> <p>If the connection is very rarely to connect and usually disconnect for a long time, then save to database or file is easier to save memory. But if your messages are less, it is still ok to just save in memory. (e.g. a queue or a list)</p> <p>If the connection is very stable and rare to disconnect, and will recover in short time (e.g. several seconds), then save to memory may not be a bad choice. But if your messages grows very fast (e.g. thousands of messages per second), then keep them all in memory may not be a good choice.</p> <p>Comparison between file and database, if you have to hold a lots of data, maybe multithread programing to access them, or need to query them to choose which one to upload first, then database will be easier to handle. If your data is simple and less, you may easier handle them with files.</p> <p>Environment is another consideration, if you use some low quality chip, maybe you should consider their performance or stability. (e.g. lack of memory or disk full)</p> <p>Finally is the data importance. If your memory and disk is short, data is not so important to miss some, maybe you can just discard them, save your effort to handle them. It's just up to you.</p>
python|json|database|serialization|mqtt
0
1,904,373
42,160,559
Partition groups as list of matrices in Python and NumPy
<p>I have a NumPy Matrix of the form:</p> <pre><code>dummy = np.array([['A', 1, 1], ['A', 1, 1], ['B', 1, 1], ['C', 1, 1], ['F', 1, 1], ['I', 1, 1], ['I', 1, 1], ['I', 1, 1], ['J', 1, 1], ['K', 1, 1], ['L', 1, 1], ['M', 1, 1]]) </code></pre> <p>What I am attempting to produce is a list of NumPy Matrices, where there can only be <em>n</em> different variations of string value per Matrix such as:</p> <p>Having <em>n</em> = 4:</p> <pre><code>[array([['A', '1', '1'], ['A', '1', '1'], ['B', '1', '1'], ['C', '1', '1'], ['F', '1', '1']], &lt;= 4 different groups (A,B,C,F) dtype='|S1'), array([['I', '1', '1'], ['I', '1', '1'], ['I', '1', '1'], ['J', '1', '1'], ['K', '1', '1'], ['L', '1', '1']], &lt;= 4 different groups (I,J,K,L) dtype='|S1'), array([['M', '1', '1']], &lt;= only one but kept dtype='|S1')] </code></pre> <p>I have the following function which almost works..</p> <pre><code>def partition_by(x, groups): uniques = set([]) p = [] q = [] for i in x: if len(uniques) &lt; groups or i[0] in uniques: uniques.add(i[0]) p.append(i.tolist()) q.append(np.array(p)) return q partition_by(dummy, 4) </code></pre> <p>yields:</p> <pre><code>[array([['A', '1', '1'], ['A', '1', '1'], ['B', '1', '1'], ['C', '1', '1'], ['F', '1', '1']], dtype='|S1')] </code></pre>
<p>Here's one approach -</p> <pre><code>def split_col_based(dummy, colID = 0, n=4): mask = dummy[1:,colID] != dummy[:-1,colID] interval_idx = n*(np.arange((mask.sum()+1)//n)+1) idx = np.searchsorted(mask.cumsum(), interval_idx) return np.split(dummy, idx+1, axis=0) </code></pre> <p><strong>Sample input, output</strong></p> <p>1) Input array :</p> <pre><code>In [79]: dummy Out[79]: array([['A', '1', '1'], ['A', '1', '1'], ['B', '1', '1'], ['C', '1', '1'], ['F', '1', '1'], ['I', '1', '1'], ['I', '1', '1'], ['I', '1', '1'], ['J', '1', '1'], ['K', '1', '1'], ['L', '1', '1'], ['M', '1', '1']], dtype='|S1') </code></pre> <p>2) Output with <code>n=4</code> :</p> <pre><code>In [80]: split_col_based(dummy, n=4) Out[80]: [array([['A', '1', '1'], ['A', '1', '1'], ['B', '1', '1'], ['C', '1', '1'], ['F', '1', '1']], dtype='|S1'), array([['I', '1', '1'], ['I', '1', '1'], ['I', '1', '1'], ['J', '1', '1'], ['K', '1', '1'], ['L', '1', '1']], dtype='|S1'), array([['M', '1', '1']], dtype='|S1')] </code></pre> <p>3) Output with <code>n=5</code> :</p> <pre><code>In [81]: split_col_based(dummy, n=5) Out[81]: [array([['A', '1', '1'], ['A', '1', '1'], ['B', '1', '1'], ['C', '1', '1'], ['F', '1', '1'], ['I', '1', '1'], ['I', '1', '1'], ['I', '1', '1']], dtype='|S1'), array([['J', '1', '1'], ['K', '1', '1'], ['L', '1', '1'], ['M', '1', '1']], dtype='|S1')] </code></pre> <p>4) Output with <code>n=2</code> :</p> <pre><code>In [84]: split_col_based(dummy, n=2) Out[84]: [array([['A', '1', '1'], ['A', '1', '1'], ['B', '1', '1']], dtype='|S1'), array([['C', '1', '1'], ['F', '1', '1']], dtype='|S1'), array([['I', '1', '1'], ['I', '1', '1'], ['I', '1', '1'], ['J', '1', '1']], dtype='|S1'), array([['K', '1', '1'], ['L', '1', '1']], dtype='|S1'), array([['M', '1', '1']], dtype='|S1')] </code></pre>
python|pandas|numpy
1
1,904,374
42,417,118
Maven build fails for required libraries while executing robot tests
<p>I am trying to execute the test-suite written on Robotframework through maven, as i want to capture the overall code covered in the unit and integration test phases for this I am using jacoco plugin.</p> <p>All looks good, It does starts executing the robot tests but fails to import some test libraries, like <code>SSHLibrary</code>, <code>requests</code>,<code>jsonschema</code> etc.</p> <p>I came to know that I will have to add jars in the classpath for such dependencies(in the below log see <code>com.trilead.ssh2</code> for <code>SSHLibrary</code>) and tried that as well but no luck.</p> <pre><code>--- robotframework-maven-plugin:1.4.7:acceptance-test (default) @ rdx --- Executing Robot with command: [/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java, -javaagent:/home/asr/.m2/repository/org/jacoco/org.jacoco.agent/0.7.5.201505241946/org.jacoco.agent-0.7.5.201505241946-runtime.jar=destfile=/home/asr/workspace/RDX/target/coverage-reports/jacoco-it.exec, org.robotframework.RobotFramework, -d, /home/asr/workspace/RDX/target/robotframework-reports, -t, mycli_help_usage, -V, /home/asr/robot/tf2jan/etc/environments/mycli_env.py, -P, /home/asr/workspace/RDX/src/test/resources/robotframework/libraries, -x, TEST-mycli.xml, --xunitskipnoncritical, /home/asr/robot/tf2jan/Tests/CLI/mycli] ============================================================================== mycli ============================================================================== [ ERROR ] Error in file '/home/asr/robot/tf2jan/Tests/CLI/mycli/mycli_resources.txt': Importing test library 'SSHLibrary' failed: ImportError: No module named SSHLibrary Traceback (most recent call last): None PYTHONPATH: /home/asr/.m2/repository/org/robotframework/robotframework/3.0.2/Lib /home/asr/.m2/repository/org/robotframework/robotframework/3.0.2/robotframework-3.0.2.jar/Lib __classpath__ __pyclasspath__/ CLASSPATH: /home/asr/.m2/repository/org/robotframework/robotframework/3.0.2/robotframework-3.0.2.jar /home/asr/workspace/RDX/target/test-classes /home/asr/workspace/RDX/target/classes /home/asr/.m2/repository/com/trilead/trilead-ssh2/build213-svnkit-1.3-patch/trilead-ssh2-build213-svnkit-1.3-patch.jar /home/asr/.m2/repository/junit/junit/4.11/junit-4.11.jar /home/asr/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar /home/asr/.m2/repository/pl/pragmatists/JUnitParams/1.0.4/JUnitParams-1.0.4.jar /home/asr/.m2/repository/org/jacoco/org.jacoco.agent/0.7.5.201505241946/org.jacoco.agent-0.7.5.201505241946-runtime.jar </code></pre>
<p>There is a help about integrating Robot with maven with regards of using RED Robot Editor. Check following link, maybe you will find anything relevant and helpful:</p> <p><a href="https://github.com/nokia/RED/blob/master/red_help/user_guide/maven.html.md" rel="nofollow noreferrer">https://github.com/nokia/RED/blob/master/red_help/user_guide/maven.html.md</a></p>
python|maven|jython|robotframework|jacoco-maven-plugin
0
1,904,375
42,430,664
How to generate specific random numbers in Python?
<p>I need help with generating a list of specific random numbers using Python.</p> <p>I know how to generate a list of random number like this</p> <pre><code>import random number = random.sample(xrange(1,10), 3) </code></pre> <p>which will generate 3 random numbers between 1 to 10. (Example)</p> <pre><code>[7, 6, 1] </code></pre> <p>What I want to do achieve is generating a list of random numbers like this with duplicate value(Example)</p> <pre><code>[10000, 30000, 20000, 60000, 100, 3000, 3000, 100, ...] </code></pre> <p>But using above script like this</p> <pre><code>import random number = random.sample(xrange(10000,100000), 4) </code></pre> <p>yields following result (Example)</p> <pre><code>[12489, 43177, 51867, 68831] </code></pre> <p>Is it possible to do what I want to achieve here? Thank your answer.</p>
<p>Something that looks like your example</p> <pre><code>import random n1 = random.sample(xrange(1,10), 4) n2 = random.sample(xrange(2,6), 4) numbers = [n1i * 10**n2i for n1i, n2i in zip(n1, n2)] </code></pre> <p>Sample output:</p> <pre><code>[800000, 7000, 200, 30000] </code></pre> <p>If you want repeats:</p> <pre><code>random.choices(numbers, k=6) </code></pre> <p>Sample output:</p> <pre><code>[800000, 7000, 800000, 800000, 7000, 200] </code></pre>
python|random
2
1,904,376
54,212,148
python3 and pip3 in docker
<p>I want to use <code>python 3.x</code> and <code>pip3</code> to install some python libraries in docker. I used following commands to do it, but they were not installed.</p> <pre><code>FROM alpine:latest RUN apk add python3 py3-pip3 &amp;&amp; \ pip3 install --upgrade pip3 &amp;&amp; \ pip3 install wget &amp;&amp;\ pip3 install sys &amp;&amp;\ pip3 install threading &amp;&amp;\ pip3 install time &amp;&amp;\ pip3 install requests &amp;&amp;\ pip3 install paho-mqtt &amp;&amp;\ pip3 install logging &amp;&amp;\ rm -rf /var/cache/apk/* COPY NumSide.py /home/mehdi/Download/NumSide.py CMD ["python3","/home/mehdi/Download/NumSide.py"] </code></pre> <p>Below, the error I got:</p> <blockquote> <p>ERROR: unsatisfiable constraints: py3-pip3 (missing): required by: world[py3-pip3] The command '/bin/sh -c apk add python3 py3-pip3 &amp;&amp; pip3 install --upgrade pip3 &amp;&amp; pip3 install wget &amp;&amp;pip3 install sys &amp;&amp;pip3 install threading &amp;&amp;pip3 install time &amp;&amp;pip3 install requests &amp;&amp;pip3 install paho.mqtt.client &amp;&amp;pip3 install logging &amp;&amp;rm -rf /var/cache/apk/*' returned a non-zero code: 1</p> </blockquote>
<p>The package name is <code>py3-pip</code> <strong>not</strong> <code>py3-pip3</code></p>
python|docker|pip|alpine-linux
23
1,904,377
45,373,698
How to avoid cache invalidation in Dockerfile for large binary file (Python_Onbuild)
<p>I am downloading 1.6 GB binary compressed file in my dockfile and then unpacking it using gunzip which leads to storing a 3.6 GB file. I do not want it to be repeated all the time as it takes a lot of time. Its a static file so it should not be downloaded every time I deploy my changes to the server using Jenkins/docker. However, its download every time, I commit changes, and run Jenkins to deploy them. </p> <p>Here is my docker file:</p> <pre><code>FROM python:2.7.13-onbuild RUN mkdir -p /usr/src/app WORKDIR /usr/src/app ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update &amp;&amp; apt-get install --assume-yes apt-utils RUN apt-get update &amp;&amp; apt-get install -y curl RUN apt-get update &amp;&amp; apt-get install -y unzip RUN curl -o - https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz \ | gunzip &gt; /usr/src/app/GoogleNews-vectors-negative300.bin </code></pre> <p><strong>UPDATE:</strong> I changed the dockfile to a simple one as given below:</p> <pre><code>FROM python:2.7.13-onbuild RUN mkdir -p /usr/src/app WORKDIR /usr/src/app RUN echo "Test Cache" CMD /usr/local/bin/gunicorn -t 240 -k gevent -w 1 -b 0.0.0.0:8000 --reload src.wsgi:app </code></pre> <p>Now if I do not change the code or any other file, this works fine so the command echo <code>"test cache"</code> is not repeated. However, as soon as I make any change to any file in the source folder, all my commands after the following steps are repeated which I think copies my source code the docker directory. This should not happen at this stage because it means that all my commands are repeated as soon as I make any commit. </p> <p>Here is the output when I do no make any changes to the code and run the build for second time:</p> <pre><code>Sending build context to Docker daemon 239.1kB Step 1/6 : FROM python:2.7.13-onbuild # Executing 3 build triggers... Step 1/1 : COPY requirements.txt /usr/src/app/ ---&gt; Using cache Step 1/1 : RUN pip install --no-cache-dir -r requirements.txt ---&gt; Using cache Step 1/1 : COPY . /usr/src/app ---&gt; Using cache ---&gt; 1911c6dc9fce Step 2/6 : RUN mkdir -p /usr/src/app ---&gt; Using cache ---&gt; 4019b029d05c Step 3/6 : WORKDIR /usr/src/app ---&gt; Using cache ---&gt; 1a99833e908c Step 4/6 : RUN echo "Test Cache" ---&gt; Using cache ---&gt; 488a62aa1b09 </code></pre> <p>Here is the output where I make a single change to one of the source files and you can see that the echo "test cache" is repeated.</p> <pre><code>Sending build context to Docker daemon 239.1kB Step 1/6 : FROM python:2.7.13-onbuild # Executing 3 build triggers... Step 1/1 : COPY requirements.txt /usr/src/app/ ---&gt; Using cache Step 1/1 : RUN pip install --no-cache-dir -r requirements.txt ---&gt; Using cache Step 1/1 : COPY . /usr/src/app ---&gt; 6fd1003e246a Removing intermediate container f25a4d2910cf Step 2/6 : RUN mkdir -p /usr/src/app ---&gt; Running in ff324f381875 ---&gt; 3694086a2b6a Removing intermediate container ff324f381875 Step 3/6 : WORKDIR /usr/src/app ---&gt; 5f23ab9a15df Removing intermediate container 0b0d796f97d0 Step 4/6 : RUN echo "Test Cache" ---&gt; Running in 296d2f141015 Test Cache ---&gt; f90c7708d9eb </code></pre>
<p>All my commands were repeating because I was using <code>python:2.7.13-onbuild</code> as a base image. It's <a href="https://github.com/docker-library/python/blob/d3c5f47b788adb96e69477dadfb0baca1d97f764/2.7/jessie/onbuild/Dockerfile" rel="nofollow noreferrer">docker-file</a> looks like this:</p> <pre><code>FROM python:2.7 RUN mkdir -p /usr/src/app WORKDIR /usr/src/app ONBUILD COPY requirements.txt /usr/src/app/ ONBUILD RUN pip install --no-cache-dir -r requirements.txt ONBUILD COPY . /usr/src/app </code></pre> <p>As I was using this as the base image, the copy command is executed before all commands in my docker file and this copy command changes the context every time I make any change in the source code.</p> <p>I was recommended to use Python:2.7 as a base image directly so I have more control over this copy operation. My new docker-file is as follows with the copy command at the end which solved the issue. </p> <pre><code>FROM python:2.7 RUN mkdir -p /usr/src/app WORKDIR /usr/src/app ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update &amp;&amp; apt-get install --assume-yes apt-utils RUN apt-get update &amp;&amp; apt-get install -y curl RUN apt-get update &amp;&amp; apt-get install -y unzip RUN curl -o - https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz \ | gunzip &gt; /usr/src/app/GoogleNews-vectors-negative300.bin COPY requirements.txt /usr/src/app/ RUN pip install --no-cache-dir -r requirements.txt COPY . /usr/src/app </code></pre> <p>According to this <a href="https://hub.docker.com/_/python/" rel="nofollow noreferrer">documentation</a>, the use of Python_onbuild image is generally discouraged. </p> <p>This explanation is inspired by the answer of my other question on the same issue: <a href="https://stackoverflow.com/questions/45421386/which-python-variant-to-use-as-a-base-image-in-dockerfiles">Which Python variant to use as a base image in dockerfiles?</a></p>
python|docker|dockerfile
2
1,904,378
28,570,414
Serving a Zip file of pandas data frames within a flask app
<p>I am attempting to return a zip of pandas data frames within my Flask app. I have one of my views return the output from the serve_csv function. This is the original serve_csv function that I had, which successfully downloads a specified csv. </p> <pre><code>def serve_csv(dataframe,filename): buffer = StringIO.StringIO() dataframe.to_csv(buffer, encoding='utf-8', index=False) buffer.seek(0) return send_file(buffer, attachment_filename=filename, mimetype='text/csv') </code></pre> <p>I am attempting to change it to a serve_zip function that takes a list of pandas data frames and returns a zip of the corresponding csv files. However, I am receiving an error that states that an object of type Nonetype has no len. I am guessing this has to do with how I am attempting to write to the buffer, but after reading the documentation, am not sure how to fix it. Here is my current function:</p> <pre><code>def serve_zip(data_list,filename): '''data_list: a list of pandas data frames filename''' zipped_file = StringIO.StringIO() with zipfile.ZipFile(zipped_file, 'w') as zip: for i, dataframe in enumerate(data_list): print type(dataframe.to_csv(zipped_file, encoding='utf-8', index=False)) zip.writestr(filename, dataframe.to_csv(zipped_file, encoding='utf-8', index=False)) zipped_file.seek(0) return send_file(zipped_file, attachment_filename=filename, mimetype='application/octet-stream') </code></pre> <p>And my stack trace:</p> <pre><code>Full traceback: 127.0.0.1 - - [17/Feb/2015 15:57:21] "POST /part2/ HTTP/1.1" 500 - Traceback (most recent call last): File "/private/var/folders/f4/qr09tm_169n4b9xyjsrjv8680000gn/T/tmpmQ95cJ/.deps/Flask-0.10.1-py2.7.egg/flask/app.py", line 1836, in __call__ return self.wsgi_app(environ, start_response) File "/private/var/folders/f4/qr09tm_169n4b9xyjsrjv8680000gn/T/tmpmQ95cJ/.deps/Flask-0.10.1-py2.7.egg/flask/app.py", line 1820, in wsgi_app response = self.make_response(self.handle_exception(e)) File "/private/var/folders/f4/qr09tm_169n4b9xyjsrjv8680000gn/T/tmpmQ95cJ/.deps/Flask-0.10.1-py2.7.egg/flask/app.py", line 1403, in handle_exception reraise(exc_type, exc_value, tb) File "/private/var/folders/f4/qr09tm_169n4b9xyjsrjv8680000gn/T/tmpmQ95cJ/.deps/Flask-0.10.1-py2.7.egg/flask/app.py", line 1817, in wsgi_app response = self.full_dispatch_request() File "/private/var/folders/f4/qr09tm_169n4b9xyjsrjv8680000gn/T/tmpmQ95cJ/.deps/Flask-0.10.1-py2.7.egg/flask/app.py", line 1477, in full_dispatch_request rv = self.handle_user_exception(e) File "/private/var/folders/f4/qr09tm_169n4b9xyjsrjv8680000gn/T/tmpmQ95cJ/.deps/Flask-0.10.1-py2.7.egg/flask/app.py", line 1381, in handle_user_exception reraise(exc_type, exc_value, tb) File "/private/var/folders/f4/qr09tm_169n4b9xyjsrjv8680000gn/T/tmpmQ95cJ/.deps/Flask-0.10.1-py2.7.egg/flask/app.py", line 1475, in full_dispatch_request rv = self.dispatch_request() File "/private/var/folders/f4/qr09tm_169n4b9xyjsrjv8680000gn/T/tmpmQ95cJ/.deps/Flask-0.10.1-py2.7.egg/flask/app.py", line 1461, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/private/var/folders/f4/qr09tm_169n4b9xyjsrjv8680000gn/T/tmpmQ95cJ/webapp.py", line 110, in part2 return serve_zip([table3, agg_table], 'my_file.csv') File "/private/var/folders/f4/qr09tm_169n4b9xyjsrjv8680000gn/T/tmpmQ95cJ/webapp.py", line 61, in serve_csv zip.writestr(filename, dataframe.to_csv(zipped_file, encoding='utf-8', index=False)) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/zipfile.py", line 1216, in writestr zinfo.file_size = 
len(bytes) # Uncompressed size
TypeError: object of type 'NoneType' has no len()
127.0.0.1 - - [17/Feb/2015 15:57:21] code 400, message Bad request version ('RTSP/1.0')
127.0.0.1 - - [17/Feb/2015 15:57:21] "GET /info?txtAirPlay&amp;txtRAOP RTSP/1.0" 400 -
127.0.0.1 - - [17/Feb/2015 15:57:21] "GET /part2/?__debugger__=yes&amp;cmd=resource&amp;f=style.css HTTP/1.1" 200 -
127.0.0.1 - - [17/Feb/2015 15:57:21] "GET /part2/?__debugger__=yes&amp;cmd=resource&amp;f=jquery.js HTTP/1.1" 200 -
127.0.0.1 - - [17/Feb/2015 15:57:21] "GET /part2/?__debugger__=yes&amp;cmd=resource&amp;f=debugger.js HTTP/1.1" 200 -
127.0.0.1 - - [17/Feb/2015 15:57:21] "GET /part2/?__debugger__=yes&amp;cmd=resource&amp;f=ubuntu.ttf HTTP/1.1" 200 -
127.0.0.1 - - [17/Feb/2015 15:57:21] "GET /part2/?__debugger__=yes&amp;cmd=resource&amp;f=console.png HTTP/1.1" 200 -
127.0.0.1 - - [17/Feb/2015 15:57:21] "GET /part2/?__debugger__=yes&amp;cmd=resource&amp;f=source.png HTTP/1.1" 200 -
127.0.0.1 - - [17/Feb/2015 15:57:21] "GET /part2/?__debugger__=yes&amp;cmd=resource&amp;f=console.png HTTP/1.1" 200 -
</code></pre>
<p>It looks like <code>dataframe.to_csv</code> doesn't return the string if a buffer object is provided, but writes the CSV data into the buffer. This is not what you want to do anyway, since you want the data in the buffer to be a valid zip file. Instead, pass in <code>None</code>:</p> <pre><code>zip.writestr(filename, dataframe.to_csv(None, encoding='utf-8', index=False)) </code></pre> <p>This way the <em>zip</em> object will compress the CSV string and add it to the zip archive (which you are buffering in memory via <code>StringIO</code>).</p>
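<p>A corrected version of the whole function might then look like this (a sketch assuming Python 2 with <code>StringIO</code>, <code>zipfile</code> and Flask's <code>send_file</code> imported as in your app; note that each data frame needs a distinct name inside the archive, otherwise the entries shadow one another):</p> <pre><code>def serve_zip(data_list, filename):
    '''data_list: a list of pandas data frames'''
    zipped_file = StringIO.StringIO()
    with zipfile.ZipFile(zipped_file, 'w') as zf:
        for i, dataframe in enumerate(data_list):
            # to_csv(None) returns the CSV data as a string
            zf.writestr('data_%d.csv' % i,
                        dataframe.to_csv(None, encoding='utf-8', index=False))
    zipped_file.seek(0)
    return send_file(zipped_file, attachment_filename=filename,
                     mimetype='application/zip')
</code></pre>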
python|csv|flask|zip
2
1,904,379
14,621,932
Python Ctypes Load Library
<p>I am using a Windows 7 64-bit machine. I installed Visual Studio 2010 and developed a simple Win32 DLL to add 2 numbers. The DLL is created, and I used a test application to test it; it works fine.</p> <p>Now I wrote the Python script below to use this library, but I get the following error message.</p> <pre><code>Traceback (most recent call last):
  File "C:\Users\sbritto\Documents\Visual Studio 2008\Projects\MathFuncsDll\Debug\MathFuncs.py", line 5, in &lt;module&gt;
    lib = ctypes.WinDLL('MathFuncsDll.dll',use_last_error=True)
  File "C:\Python27\lib\ctypes\__init__.py", line 365, in __init__
    self._handle = _dlopen(self._name, mode)
WindowsError: [Error 193] %1 is not a valid Win32 application
</code></pre> <p><strong>Python Script</strong></p> <pre><code>import ctypes
from ctypes import *

#lib = cdll.LoadLibrary("MathFuncsDll.dll")
lib = ctypes.WinDLL('MathFuncsDll.dll',use_last_error=True)
print lib
</code></pre> <p>Kindly let me know ASAP.</p> <p>Thanks in advance</p>
<p>You'll get this error if you try to open a 64-bit DLL using a Python interpreter compiled for a 32-bit machine, or vice versa. So, if this is a 64-bit DLL you need to make sure you are running a 64-bit Python.</p>
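<p>If you are unsure which interpreter you are running, a quick check of its bitness (this should work on any CPython build):</p> <pre><code>import struct
print struct.calcsize("P") * 8  # prints 32 or 64
</code></pre>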
visual-studio-2010|dll|python-2.7|ctypes|loadlibrary
3
1,904,380
14,373,193
Django: Access many-to-many (reverse) in the template
<p>Following up with the posting regarding <a href="https://stackoverflow.com/questions/9352662/how-to-use-the-reverse-of-a-django-manytomany-relationship">reversed many-to-many look ups</a>, I was wondering what the best practice for my project/picture problem is:</p> <p>I want to register a number of projects and the users can upload (but not required) multiple project pictures.</p> <p>Therefore I defined the following two classes:</p> <pre><code>from easy_thumbnails.fields import ThumbnailerImageField class Project(models.Model): name = models.CharField(_('Title'), max_length=100,) user = models.ForeignKey(User, verbose_name=_('user'),) ... class ProjectPicture(models.Model): project = models.ForeignKey('Project') picture = ThumbnailerImageField(_('Image'), upload_to='user/project_pictures/', null=True, blank=True,) def __unicode__(self): return u'%s\'s pictures' % (self.project.name) </code></pre> <p>So for every user, I am displaying their projects in a "dashboard" via </p> <pre><code>projects = Project.objects.filter(user = logged_user) </code></pre> <p>which returns a list of projects with the names, etc.</p> <p>Now I would like to display a picture of the project in the dashboard table. Therefore I have two questions I am seeking advice for:</p> <p>1) Is the class setup actually the best way to do it? I split up the classes like shown above to allow the users to upload more than one picture per project. Would there be a better way of doing it?</p> <p>2) How can I display the first picture of a project in the template, if a picture is available? Do I need to make a query on every ProjectPicture object which corresponds to a Project? Or is there an elegant Django solution for that problem?</p>
<p>This is not a many-to-many relation; you are using foreign keys, and that is a normal setup. To access the first picture in the template you can use <code>{{ project.projectpicture_set.all.0 }}</code>, but it will generate an additional query per project. To avoid that, use <a href="https://docs.djangoproject.com/en/1.4/ref/models/querysets/#prefetch-related" rel="nofollow">prefetch_related</a>, as shown below.</p>
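<p>For example, in the view (a sketch reusing your <code>logged_user</code> variable):</p> <pre><code>projects = Project.objects.filter(user=logged_user).prefetch_related('projectpicture_set')
</code></pre> <p>With that, <code>{{ project.projectpicture_set.all.0 }}</code> in the template no longer triggers one query per project.</p>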
python|django|many-to-many|thumbnails|reverse
3
1,904,381
57,197,283
AWS Lambda: cannot import name '_imaging' from 'PIL'
<p>I currently try to get this AWS Lambda Getting started tutorial running: <a href="https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-python" rel="noreferrer">https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-python</a></p> <p>However, I always receive an error:</p> <pre><code>{ "errorMessage": "Unable to import module 'CreateThumbnail': cannot import name '_imaging' from 'PIL' (/var/task/PIL/__init__.py)", "errorType": "Runtime.ImportModuleError" } </code></pre> <p>Log output</p> <pre><code>START RequestId: fefba1d1-443c-4617-a5ad-c3aac19e5591 Version: $LATEST [ERROR] Runtime.ImportModuleError: Unable to import module 'CreateThumbnail': cannot import name '_imaging' from 'PIL' (/var/task/PIL/__init__.py) END RequestId: fefba1d1-443c-4617-a5ad-c3aac19e5591 REPORT RequestId: fefba1d1-443c-4617-a5ad-c3aac19e5591 Duration: 1.52 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 71 MB </code></pre> <p>I went that far to build my .zip from a <code>lambci/docker-lambda</code> image. But it didn't resolve my problem.</p> <p>Here what's inside my .zip. Do you have any ideas, why I still get this error?</p> <p><a href="https://i.stack.imgur.com/TvleX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/TvleX.png" alt="enter image description here"></a></p>
<p>Using Python 3.6 instead of 3.7 just gave me a different error. It seems AWS Lambda is missing some components due to the way it was built.</p> <p>A 5-minute solution that worked for me:</p> <ul> <li><p>go to <a href="https://github.com/keithrozario/Klayers/tree/master/deployments/python3.8/arns" rel="noreferrer">https://github.com/keithrozario/Klayers/tree/master/deployments/python3.8/arns</a> (note this is python 3.8)</p> </li> <li><p>select the file for the region your lambda runs in</p> </li> <li><p>get the ARN for the latest Pillow version</p> <p><a href="https://i.stack.imgur.com/mvqQa.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mvqQa.png" alt="enter image description here" /></a></p> </li> <li><p>on your Lambda in the AWS console, click &quot;Layers (0)&quot;</p> <p><a href="https://i.stack.imgur.com/L6lPR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/L6lPR.png" alt="enter image description here" /></a></p> </li> <li><p>add a new layer:</p> <p><a href="https://i.stack.imgur.com/9iiof.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9iiof.png" alt="enter image description here" /></a></p> </li> </ul> <p>Save all and it should just work! However, you will have to make sure that redeploying the lambda keeps the layer.</p> <hr /> <p>Full credit to this life-saving blog post <a href="https://medium.com/@derekurizar/aws-lambda-python-pil-cannot-import-name-imaging-11b2377d31c4" rel="noreferrer">https://medium.com/@derekurizar/aws-lambda-python-pil-cannot-import-name-imaging-11b2377d31c4</a></p>
python|amazon-web-services|aws-lambda
11
1,904,382
56,924,062
setting the first value of a groupby to Nan
<p>I have a timeseries for different categories</p> <pre><code>cat date price A 2000-01-01 100 A 2000-02-01 101 ... A 2010-12-01 140 B 2000-01-01 10 B 2000-02-01 10.4 ... B 2010-12-01 11.1 ... Z 2010-12-01 13.1 </code></pre> <p>I need to compute returns on all assets, which is very quick using</p> <pre><code>df['ret'] = df['price'] / df['price'].shift(1) - 1 </code></pre> <p>However, that also computes incorrect returns for the first element of each company (besides A) based on the last observation of the previous company. Therefore, I want to NaN the first observation in each category.</p> <p>It is easy to get these observations using</p> <pre><code>df.groupby('cat')['ret'].first() </code></pre> <p>but I am a bit lost on how to set them. </p> <pre><code>df.groupby('cat')['ret'].first() = np.NaN </code></pre> <p>and</p> <pre><code>df.loc[df.groupby('cat')['ret'].first(), 'ret']=np.NaN </code></pre> <p>did not lead anywhere.</p>
<p>To set the first value in each group to a missing value, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.duplicated.html" rel="nofollow noreferrer"><code>Series.duplicated</code></a>:</p> <pre><code>df.loc[~df['cat'].duplicated(), 'ret']=np.NaN </code></pre> <p>But it seems what you actually need is <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>DataFrame.sort_values</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.pct_change.html" rel="nofollow noreferrer"><code>GroupBy.pct_change</code></a>:</p> <pre><code>df = df.sort_values(['cat','date'])
df['ret1'] = df.groupby('cat')['price'].pct_change()
</code></pre> <p>Your solution should be changed to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.shift.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.shift</code></a>:</p> <pre><code>df['ret2'] = df['price'] / df.groupby('cat')['price'].shift(1) - 1
print (df)
  cat        date  price      ret1      ret2
0   A  2000-01-01  100.0       NaN       NaN
1   A  2000-02-01  101.0  0.010000  0.010000
2   A  2010-12-01  140.0  0.386139  0.386139
3   B  2000-01-01   10.0       NaN       NaN
4   B  2000-02-01   10.4  0.040000  0.040000
5   B  2010-12-01   11.1  0.067308  0.067308
6   Z  2010-12-01   13.1       NaN       NaN
</code></pre>
python|pandas|pandas-groupby
2
1,904,383
25,907,994
To generate a distance function?
<p>Given the values of X and Y, where X is load and Y is run-time:</p> <pre><code>X =[0.1, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15]
Y =[139, 130, 141, 142, 145, 146, 146, 147, 147, 149, 150, 152, 152, 152, 154, 155, 165, 169, 175, 211, 224]
</code></pre> <p>I tried using their difference, but it didn't work:</p> <pre><code>D = abs(X[0]-Y[0])
</code></pre> <p>I need a distance function to be generated based on the values of X and Y. If anyone can suggest some ways of doing it, I'd appreciate it.</p>
<p>You can use the Euclidean distance as it is defined for n dimensions:</p> <pre><code>D = sqrt(sum((xi - yi)^2))
</code></pre> <p>which for your case would be:</p> <pre><code>Dxy = sqrt((0.1-139)^2 + .... + (0.15-224)^2)
</code></pre>
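<p>In Python this could be written as follows (a plain-Python sketch; the lists are shortened for the example):</p> <pre><code>import math

X = [0.1, 0.15, 0.15]
Y = [139, 130, 141]

# Euclidean distance between the two vectors
D = math.sqrt(sum((x - y) ** 2 for x, y in zip(X, Y)))
print(D)
</code></pre>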
python|polynomial-math
0
1,904,384
23,566,320
How to change permissions on a couchdb database with python
<p>I am trying to add a user in the "_security" document of a database with python-couchdb. I am getting an error because the "_security" document doesn't have an "_id" attribute.</p> <pre><code>import couchdb couch=couchdb.Server("http://admin:admin@localhost:5984") couch["db"]["_security"] Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/local/lib/python2.7/dist-packages/couchdb/client.py", line 977, in __repr__ return '&lt;%s %r@%r %r&gt;' % (type(self).__name__, self.id, self.rev, File "/usr/local/lib/python2.7/dist-packages/couchdb/client.py", line 987, in id return self['_id'] KeyError: '_id' </code></pre>
<p>The right way to do it is</p> <pre><code>import couchdb

couch = couchdb.Server("http://admin:admin@localhost:5984")
db = couch["db"]
# _security is not a regular document, so go through the raw resource API;
# get_json returns a (status, headers, body) tuple, hence the [2]
security_doc = db.resource.get_json("_security")[2]
# note: this PUT replaces the whole security object
db.resource.put("_security", {u'admins': {u'names': [u'admin1', u'admin2', u'admin3'], u'roles': []}, u'members': {u'names': [], u'roles': []}})
</code></pre>
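<p>If instead you want to add a user to the existing admins rather than replace the whole object, you can modify the fetched document first (a sketch; <code>'new_admin'</code> is a placeholder username):</p> <pre><code>security_doc = db.resource.get_json("_security")[2]
security_doc.setdefault(u'admins', {}).setdefault(u'names', []).append(u'new_admin')
db.resource.put("_security", security_doc)
</code></pre>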
python|couchdb|couchdb-python
2
1,904,385
24,306,360
read numbers from a file specified on the command line and print the sum of the numbers
<p>Write a program that inputs a list of numbers from a file and prints their sum. The file name comes in as a command-line argument. Assuming the file "orange" contains the numbers: </p> <pre><code>6 26 10 8
</code></pre> <p>Your program should behave as follows: </p> <pre><code> python3 a4.py orange
 50
</code></pre> <p>My current code:</p> <pre><code>from scanner import *

def main():
    s = Scanner("orange")
    print(sum(float(x) for x in sys.argv[1:]))
    s.close()
main()
</code></pre> <p>For my code to do that, I will need to implement a scanner, correct? However, when I run the updated code with the scanner, it returns a <code>cannot convert string to float: orange</code> error. Sorry if this is a dumb question, but I'm taking my first programming class and I'm trying to get ahead.</p>
<p>You don't need a scanner, you can just get the filename from <code>sys.argv</code> and read the file like this:</p> <pre><code>import sys with open(sys.argv[1], 'r') as f: print(sum(float(x) for x in f.read().split())) </code></pre> <p>output for your sample file:</p> <pre><code> &gt;&gt; python3 a4.py orange 50.0 </code></pre> <p>If it's guaranteed that there are only integers in your file you can use <code>int(x)</code>.</p>
python|python-3.x
0
1,904,386
20,377,775
Self scope in PyQT5 app?
<p>I am trying to get my first Python learning app completed, I used PyQT5 designer to make a basic UI and used the following to test things out and I get the <code>NameError: global name 'self' is not defined</code> error. My question is what is the correct procedure to update <code>txtProg</code> (Text Editor) in GUI.</p> <pre><code># -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'main.ui' # # Created: Sun Dec 1 20:19:03 2013 # by: PyQt5 UI code generator 5.1.1 # # WARNING! All changes made in this file will be lost! import sys from PyQt5 import QtCore, QtGui, QtWidgets from mydl import TheDLClass def progress_hook(txt): self.txtProg.setText(txt) class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(800, 600) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.txtProg = QtWidgets.QPlainTextEdit(self.centralwidget) self.txtProg.setGeometry(QtCore.QRect(50, 80, 661, 431)) self.txtProg.setObjectName("txtProg") MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtWidgets.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 21)) self.menubar.setObjectName("menubar") MainWindow.setMenuBar(self.menubar) self.statusbar = QtWidgets.QStatusBar(MainWindow) self.statusbar.setObjectName("statusbar") MainWindow.setStatusBar(self.statusbar) self.retranslateUi(MainWindow) self.launchit(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "My Downloader")) def launchit(self, MainWindow): MainWindow.setWindowTitle('Launching..') with TheDLClass() as dl: dl.fd.add_progress_hook(progress_hook) dl.download(['http://myurl.com/']) if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv) MainWindow = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec_()) </code></pre>
<p>You can't access <code>self</code> without explicitly passing it (e.g. via a bound method or a lambda).</p> <p>Preferably, define <code>progress_hook</code> as an instance method:</p> <pre><code>def progress_hook(self, txt):
    self.txtProg.setText(txt)
</code></pre> <p>and bind it as follows:</p> <pre><code>def launchit(self, MainWindow):
    MainWindow.setWindowTitle('Launching..')
    with TheDLClass() as dl:
        dl.fd.add_progress_hook(self.progress_hook)  # &lt;----
        dl.download(['http://myurl.com/'])
</code></pre>
python|python-3.x|pyqt|pyqt5
0
1,904,387
20,531,186
Correct PATH location in .profile
<p>I am running into some issues with installing <code>python</code> modules while using <code>python</code> versus <code>ipython</code> and I think it has to do with my <code>.profile</code> and <code>.bash_profile</code>. </p> <p>My desired setup is to be able to leverage <code>homebrew</code>, <code>pip</code> and <code>easy_install</code> to install programs and modules, and have them install into the correct location so <code>python</code> and <code>ipython</code> point to the same source. Here is the output of <code>which</code> for various programs: </p> <pre><code>mike$ which brew /usr/local/bin/brew mike$ which ruby /usr/bin/ruby mike$ which python /usr/local/bin/python mike$ which ipython /usr/local/share/python/ipython </code></pre> <p><code>.profile</code> output:</p> <pre><code>PATH="/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:$PATH" </code></pre> <p><code>.bash_profile</code> output:</p> <pre><code>if [ -f ~/.bashrc ]; then source ~/.bashrc ; fi export PATH=/usr/local/share/python:$PATH </code></pre> <p>What changes should I make so that when I install modules or programs, they automatically go into the correct location?</p>
<p>Your <code>PATH</code> tells bash which executables to run when you type a certain command, not which modules to load in Python. You can check where modules are installed by doing:</p> <pre><code>import module_name
print module_name.__file__
</code></pre> <p>Your problem is likely due to either running different versions of Python or having ipython use a different <code>PYTHONPATH</code>. Try doing:</p> <pre><code>import os
print os.environ["PYTHONPATH"]
</code></pre> <p>in the two different interpreters. If it raises a KeyError, then try setting <code>PYTHONPATH</code> in your .bash_profile to the libraries you want, e.g.:</p> <pre><code>export PYTHONPATH=.:/usr/local/lib/python
</code></pre> <p>If it's a version issue, then use the appropriate pip command (e.g. <code>pip-2.7</code> - see <a href="https://stackoverflow.com/questions/2812520/pip-dealing-with-multiple-python-versions">pip: dealing with multiple Python versions?</a>). The same applies for easy_install.</p> <p>For ruby I recommend using <code>rvm</code> and <code>gem install</code> (steps 6-8 of <a href="http://www.moncefbelyamani.com/how-to-install-xcode-homebrew-git-rvm-ruby-on-mac/" rel="nofollow noreferrer">http://www.moncefbelyamani.com/how-to-install-xcode-homebrew-git-rvm-ruby-on-mac/</a>). These tools are similar to Python's pip and easy_install, allowing for seamless installation of ruby gems.</p>
python|pip|homebrew|.profile
1
1,904,388
36,183,607
Closing Plink window with Python
<p>I want to communicate with a data-logger via Telnet. Therefore, I wrote the following python-script:</p> <pre><code>import subprocess command ='plink.exe -telnet -P 23 12.17.46.06' p = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=1, shell=False) answer = p.communicate('command')[0] print answer </code></pre> <p>By running the script, a plink-windows pops up. The python script seems to wait for some action to be done inside the plink command window. By closing the window manually, the desired "answer" shows up inside python.</p> <p>I am looking for a command / procedure to close plink directly out of python. It seems not to be sufficient to just close the subprocess, as in this case only the communication between python and plink gets closed and not the program plink.exe itself.</p> <p>Any help is appreciated! Regards, Phil</p>
<p>The documentation for the <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate" rel="nofollow"><code>communicate()</code></a> function says: <em>Wait for process to terminate.</em> Thus the function does not return until plink.exe exits and thus your program doesn't get the output until then.</p> <p>You should add to your <code>'command'</code> something that will close the telnet connection. When the far end closes the telnet connection plink.exe will exit and its window will close. If your telnet session runs a unix shell you could add '; exit' to your command.</p>
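<p>For example, if the logger's telnet interface drops you into a unix-like shell, something along these lines should make plink exit once the command has run (<code>some_command</code> is a placeholder for your actual command):</p> <pre><code>answer = p.communicate('some_command; exit\n')[0]
</code></pre>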
python|telnet|plink
2
1,904,389
14,928,519
MySQLdb security when connecting to a remote server?
<pre><code> db = MySQLdb.connect(host ="host", user="user", passwd="pass", db="dbname") q = db.cursor() </code></pre> <p>So, that's my code block, I was just wondering, how easy would this be to reverse engineer, and does mysqldb send authentications over cleartext?</p> <p>I am creating a program that connects to a mySQL server over the internet, would someone be able to get my credentials?</p> <p>Would someone be able to get my server login details?</p>
<p>The MySQL client protocol does not send your password itself in cleartext (authentication uses a challenge-response handshake), but queries and result data travel unencrypted by default. The MySQL server can be configured to use SSL to secure the whole connection. See <a href="https://stackoverflow.com/questions/7287088/ca-ssl-parameter-for-python-mysqldb-not-working-but-key-does">here</a> for an example of using MySQLdb with an SSL connection and <a href="http://dev.mysql.com/doc/refman/5.1/en/configuring-for-ssl.html" rel="nofollow noreferrer">here</a> for some info on configuring the server.</p>
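<p>On the client side, MySQLdb takes an <code>ssl</code> argument with paths to the certificate files (the path below is a placeholder, and the server must have SSL enabled as described in the linked docs):</p> <pre><code>db = MySQLdb.connect(host="host", user="user", passwd="pass", db="dbname",
                     ssl={'ca': '/path/to/ca-cert.pem'})
</code></pre>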
python|security|mysql-python
1
1,904,390
46,430,662
I am not able to update tensorflow installed with anaconda
<p> I have been trying to update tensorflow from 1.2.1 to 1.3. I did the following on my terminal:</p> <pre class="lang-sh prettyprint-override"><code>pip3 install tensorflow --upgrade </code></pre> <p>After this, I tried checking the version</p> <pre class="lang-sh prettyprint-override"><code>python3 -c 'import tensorflow as tf; print(tf.__version__)' </code></pre> <p>This outputs 1.2.1 but not 1.3.0</p> <p>All this had been done in an anaconda environment.<br> Operating system: macOS Sierra</p> <p>I would like to know how one can perform this update. Thank you</p>
<p>The solution that I found was to install pip inside the conda environment and use that instance of pip to install the packages. Please refer <a href="https://stackoverflow.com/questions/41060382/using-pip-to-install-packages-to-anaconda-environment">here</a></p> <p>The reason I got the above error was that the python3 and pip3 paths were different.</p> <p>This became evident when I ran the below commands inside the conda environment.</p> <pre><code>which python3 </code></pre> <p>/Users/SMBP/anaconda/envs/tensorflow/bin/python3</p> <pre><code>which pip3 </code></pre> <p>/usr/local/bin/pip3</p> <p>Also, I think it is better to use virtualenv and virtualenvwrapper if you wish to work with tensorflow as it is the recommended way.</p>
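<p>A reliable way to make sure pip installs into the interpreter you actually run is to invoke pip through that interpreter:</p> <pre><code>python3 -m pip install --upgrade tensorflow
</code></pre>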
tensorflow|pip|installation
0
1,904,391
21,081,769
How to signal "index not found" in Python Function result
<p>I'm writing a little function to return the index of the first occurrence of a string in a list of strings, using a "fuzzy" comparison.</p> <p>My question is: what is the proper way to signify the target string not matching any in the source list? </p> <p>The obvious (only?) thing to do is to return -1. But since -1 in Python means the last element of a sequence, it occurs to me this may not be good Python style. Is there a more Pythonic (Pythonesque?) way?</p>
<blockquote> <p>My question is: what is the proper way to signify the target string not matching any in the source list?</p> </blockquote> <p>You <strong>raise an error</strong>:</p> <pre><code>raise ValueError("String Not Found")
</code></pre> <p>Python is a <strong>duck-typed</strong> language; see: <a href="http://en.wikipedia.org/wiki/Duck_typing" rel="nofollow">http://en.wikipedia.org/wiki/Duck_typing</a>, so it is a perfectly accepted convention to "raise an appropriate error".</p> <p><strong>Update:</strong> As usual, there have been several answers and comments by now, including suggestions of using <code>raise ValueError</code>. IMHO, <code>IndexError</code> would be more appropriate, but this may be a matter of style and personal taste. Also read <a href="http://www.python.org/dev/peps/pep-0020/" rel="nofollow">The Zen of Python</a>, specifically the line <em>"There should be one-- and preferably only one --obvious way to do it."</em>.</p> <p><strong>Update II:</strong> I guess for consistency's sake with Python's builtin <code>list.index()</code> and <code>str.index()</code>, <code>raise ValueError(...)</code> should be used :)</p>
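<p>A minimal sketch of such a function (<code>fuzzy_match</code> is a placeholder for whatever comparison you are using):</p> <pre><code>def fuzzy_index(target, strings):
    for i, s in enumerate(strings):
        if fuzzy_match(target, s):  # your fuzzy comparison here
            return i
    raise ValueError("string not found: %r" % target)
</code></pre>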
python|indexing
2
1,904,392
62,553,474
Find same keys and merge in list of dicts? Python
<p>This code produces a list of dictionaries.</p> <pre><code> watchlist = r.get_open_option_positions() for x in watchlist: print('Symbol: {}, Average Price: {}, Quantity: {}'.format(x['chain_symbol'], x['average_price'], x['quantity'])) </code></pre> <p>Output:</p> <pre><code>Symbol: PG, Average Price: -46.5714, Quantity: 35.0000 Symbol: PG, Average Price: 33.7142, Quantity: 35.0000 Symbol: MSFT, Average Price: -80.0000, Quantity: 6.0000 Symbol: MSFT, Average Price: 53.0000, Quantity: 6.0000 </code></pre> <p>How do I code the following criteria:</p> <pre><code>if symbol is the same and quantity of both symbols is the same, then subtract average prices and multiply by quantity </code></pre> <p>So for example the result should look like:</p> <pre><code>Symbol: PG, Average Price: (-12.8572 * 35), Quantity: 35.000 Symbol: MSFT, Average Price: (-27 * 6), Quantity: 6.000 </code></pre>
<ul> <li>Set up a dict (a defaultdict for convenience) to keep track of each group: <pre><code>groups = collections.defaultdict(list) </code></pre> </li> <li>Iterate over <code>watchlist</code> add each <code>x</code> to a group: <pre><code>for x in watchlist: groups[(x[&quot;chain_symbol&quot;], x[&quot;quantity&quot;])].append(x) </code></pre> </li> <li>Iterate over each group and sum the prices (it's the same thing as subtracting them here really): <pre><code>for group_key, group in groups.items(): final_price = sum(x[&quot;average_price&quot;] for x in group) print(group_key, final_price) </code></pre> </li> </ul>
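<p>Put together, and multiplied by the quantity as in your expected output, a runnable sketch (this assumes <code>average_price</code> and <code>quantity</code> come back as numbers; if your API returns them as strings, convert them with <code>float()</code> first):</p> <pre><code>import collections

groups = collections.defaultdict(list)
for x in watchlist:
    groups[(x["chain_symbol"], x["quantity"])].append(x)

for (symbol, quantity), group in groups.items():
    # prices with opposite signs, so summing performs the subtraction
    total = sum(x["average_price"] for x in group) * quantity
    print('Symbol: {}, Average Price: {}, Quantity: {}'.format(symbol, total, quantity))
</code></pre>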
python|list|dictionary|merge|finance
1
1,904,393
62,557,107
I get error 500 when modifying template on Django
<p>I have a deployed Django project (<a href="https://sadraedu.com" rel="nofollow noreferrer">link here</a>); every time I try to modify the template, the server gives a <code>500 error</code>, although the code doesn't have any bugs.</p> <p>Sometimes the error vanishes by itself, and sometimes I have to reset the template to the previous version.</p> <p>My back end is written with Python/Django, and the front end with jQuery, Bootstrap and Django templates.</p> <p>There is a chance that the site has a problem with the code below:</p> <pre><code>#signup/views.py
from django.shortcuts import render,redirect
from .forms import signup_form

def Signup(request):
    form = signup_form(request.POST or None)
    context={
        'form':form,
        'authed':False,
        'name':&quot;&quot;,
        'gender':&quot;&quot;,
    }
    if form.is_valid():
        context['authed'] = True
        context['name'] = request.POST['student_name']
        context['gender'] = request.POST['gender']
        form.save()
    return render(request, 'signup.html', context)
</code></pre>
<p>Thanks, everyone; the problem was solved. It was not my app or my own fault: I submitted a ticket to the server support describing the problem, and they answered that it was the <code>server provider's fault</code>. Now I always get 200 on my site. Again, thanks to all for paying attention.</p>
python|django|templates
0
1,904,394
53,677,151
How to convert a column with missing value to integer type
<p>I want to convert a column to integer, but the problem is that the column contains a missing value. The column converts to float fine, but can't be converted to integer.</p> <p>Sample code:</p> <pre><code>d2 = {'location': ['NY', 'NY', 'PA', 'NY', 'PA', 'PA', 'NY'],
      'dep_name': ['hr', 'mk', 'fin', 'fin', 'hr', 'fin', 'fin'],
      'Duration_of_Employment' : [10, 5, 9, 8, 2, 4, 7],
      'Salary' : [50000, 86000,25000, 73000, 28000, 60000, 40000],
      'Days_Since_Last_Promotion': ['61', '35', '25', '98', 'NaN', '45', '22']}
df2 = pd.DataFrame(data = d2)
df2['xy'] = df2['Days_Since_Last_Promotion'].astype(float)
df2['Months_Since_Last_Promotion'] = df2['xy'] // 30
</code></pre> <p>Now 'Months_Since_Last_Promotion' is float type. But when I try to convert it to integer, I get the following error.</p> <pre><code>df2['Months_Since_Last_Promotion'] = df2['Months_Since_Last_Promotion'].astype(int)
</code></pre> <blockquote> <p>ValueError: Cannot convert NA to integer</p> </blockquote> <p>From the error, I figured it's due to the missing value NaN and tried this workaround, but it didn't work and 'Months_Since_Last_Promotion' is still showing as float64.</p> <pre><code>df2.loc[df2['Months_Since_Last_Promotion'].notnull(), 'Months_Since_Last_Promotion'] = df2.loc[df2['Months_Since_Last_Promotion'].notnull(), 'Months_Since_Last_Promotion'].astype(int)
</code></pre> <blockquote> <p>Note: I can't use fillna to replace the NaN. The goal is to keep the column as integer.</p> </blockquote>
<p>Numeric columns containing <code>NaN</code> values are stored as floats by default (even if all other numbers are integers) - this is because of typecasting restrictions in pandas. What this means is that if you want to retain the <code>NaN</code> as is without filling the missing value, casting the column to an integer may not be possible (to the best of my knowledge). Here's an excerpt from the documentation:</p> <blockquote> <p>"While pandas supports storing arrays of integer and boolean type, these types are not capable of storing missing data. Until we can switch to using a native NA type in NumPy, we’ve established some “casting rules”. When a reindexing operation introduces missing data, the Series will be cast according to the rules introduced in the table below."</p> </blockquote> <p>Please refer to:</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/missing_data.html#missing-data-casting-rules-and-indexing" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/missing_data.html#missing-data-casting-rules-and-indexing</a></p>
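<p>One practical workaround: pandas 0.24 introduced a nullable integer extension dtype (note the capital "I") that can hold missing values, so on a recent pandas version you can keep integers and missing values together:</p> <pre><code>df2['Months_Since_Last_Promotion'] = df2['Months_Since_Last_Promotion'].astype('Int64')
</code></pre>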
python|python-3.x|pandas|integer
2
1,904,395
53,484,129
Determine if x is positive for both floats and timedeltas
<p>I'm looking for a simple and neat way to check if a variable is positive where this can be either a float or a <code>numpy.timedelta64</code> or <code>datetime.timedelta</code>.</p> <p>I obviously tried <code>x&gt;0</code>, but this doesn't work for <code>timedelta</code>s (neither <code>numpy</code>'s or <code>datetime</code>'s)</p> <p>The only solution i found is <code>x/abs(x) &gt; 0</code> that I find somehow cumbersome. Not even <code>np.sign</code> returns an answer. </p> <p>Is there a better way to check this?</p> <p>EDIT: Using <code>x.total_seconds()</code> returns an error whenever <code>x</code> is a float. Using <code>x &gt; np.timedelta(0)</code> does not work for <code>datetime.timedelta</code>s. </p>
<p>You can compare against a "zero time-delta" object, then use <code>try</code> / <code>except</code> to cover numeric inputs:</p> <pre><code>import numpy as np from datetime import timedelta def pos_check(x, zero_value=timedelta()): try: return x &gt; zero_value except TypeError: return x &gt; 0 py_td = timedelta(days=1) # 1-day, regular Python np_td = np.timedelta64(py_td) # 1-day, NumPy assert pos_check(1.5) assert pos_check(py_td) assert pos_check(np_td) </code></pre>
python|numpy|datetime|timedelta
3
1,904,396
46,037,588
auto login in cloud using selenium [python]
<p>I am trying to write a program that allows me to auto login on a cloud service. The fields corresponding to the username and password in the HTML are:</p> <pre><code>&lt;div class="formRow"&gt;
  &lt;input type="text" name="emailOrUsername" ng-model="emailOrUsername" panono-text-box= "{ &amp;quot;placeholder&amp;quot;: &amp;quot;states.account.logIn.emailOrUsername&amp;quot;, &amp;quot;required&amp;quot;: true } " panono-focus="" class="ng-pristine ng-valid panonoTextBox ng-touched" placeholder= "Email or username *" spellcheck="false"&gt;
&lt;/div&gt;
&lt;div class="formRow"&gt;
  &lt;input type="password" name="password" ng-model="password" panono-text-box= "{ &amp;quot;placeholder&amp;quot;: &amp;quot;states.account.logIn.password&amp;quot;, &amp;quot;required&amp;quot;: true } " class="ng-pristine ng-untouched ng-valid panonoTextBox" placeholder="Password *" spellcheck="false"&gt;
&lt;/div&gt;
</code></pre> <p>The code I have written is:</p> <pre><code>from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

usernameStr = "my_username"
passwordStr = "my_password"

browser = webdriver.Chrome()
browser.get("https://mywebsite.com")

wait = WebDriverWait(browser, 10)
if wait:
    username = browser.find_element_by_name("emailOrUsername");
    password = browser.find_element_by_name("password")
    username.send_keys(usernameStr)
    username.send_keys(usernameStr)
</code></pre> <p>I get this error message.</p> <blockquote> <p>raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"id","selector":"emailOrUsername"}</p> </blockquote>
<p>Looks like you need to find those elements <code>by name</code> instead of <code>by id</code></p> <pre><code>username = browser.find_element_by_name("emailOrUsername"); password = browser.find_element_by_name("password") </code></pre>
python|selenium|autologin
1
1,904,397
46,018,462
Upgrading version of colorama for buildozer
<p>I have a big problem. I am trying to use buildozer to package my application for Android, which I have written using Kivy in Python 2. I have installed all the dependencies, and I ran into an error stating:</p> <pre><code>ERROR: colorama version is 0.3.2, but python-for-android needs at least 0.3.3.
</code></pre> <p>So I ran the following to upgrade it:</p> <pre><code>pip install colorama --upgrade
</code></pre> <p>It finishes, but it also states that</p> <pre><code>Not uninstalling colorama at /usr/lib/python2.7/dist-packages, owned by OS
</code></pre> <p>I thought it worked, but it turns out that the colorama at /usr/lib/python2.7/dist-packages is the one Python sees first, and buildozer still doesn't work.</p> <p>The problem is, for some reason, colorama seems to be at its latest version when I type <code>aptitude upgrade colorama</code>, and pip needs colorama, so if I uninstall colorama, aptitude tells me I have to uninstall pip. pip just happens to be a dependency of buildozer as well.</p> <p>Update: I ran all of this as root (via su), except for the buildozer commands.</p>
<p>It's a permission error. Try running </p> <pre><code>sudo pip install colorama --upgrade </code></pre>
android|python
0
1,904,398
45,850,998
How can I remove a double string to have it only be the first word?
<p>I have a string that is the same word twice, i.e. <code>"hihi"</code>. How can I remove the first <code>'hi'</code> so only the second occurrence exists?</p>
<p>If the string is always double, you can split it in the middle.</p> <pre><code>str_1 = "hihi" print(str_1[len(str_1)//2:]) </code></pre>
python|string|words
1
1,904,399
73,820,112
Replace column values in python dataframe using dictionary based on condition
<p>I have the below dataframe</p> <pre><code>df = pd.DataFrame({'col1': {0: '4', 1: '4', 2: '2'},'col2': {0: 'USA', 1: 'England', 2: 'Japan'}})
&gt;&gt;&gt; df
  col1     col2
0    4      USA
1    4  England
2    2    Japan
</code></pre> <p>and I have the below dictionary</p> <pre><code>dict_1 = {&quot;USA&quot; : 'Washington',&quot;Japan&quot; : 'Tokyo',&quot;England&quot; : 'London'}
</code></pre> <p>I want to replace the values in col2 using dict_1, but only in rows where col1 == 2.</p> <p>The desired output is as below</p> <pre><code>  col1     col2
0    4      USA
1    4  England
2    2    Tokyo
</code></pre> <p>I tried the method below, but it doesn't do anything</p> <pre><code>df.loc[df['col1'] == '2', 'col2'].replace(dict_1,inplace=True)
</code></pre>
<p>Don't use <code>inplace=True</code>, especially when you slice: <code>replace</code> then mutates a temporary copy of the slice, so the original frame never changes. Assign the result back instead:</p> <pre><code>df.loc[df['col1']=='2', 'col2'] = df.loc[df['col1'] == '2', 'col2'].replace(dict_1)
</code></pre> <p>Output:</p> <pre><code>  col1     col2
0    4      USA
1    4  England
2    2    Tokyo
</code></pre>
python|pandas|dataframe|dictionary|replace
3