Dataset columns (Min/Max are value ranges for int64 columns and string-length ranges for string columns):

Column       Type     Min    Max
Unnamed: 0   int64    0      1.91M
id           int64    337    73.8M
title        string   10     150
question     string   21     64.2k
answer       string   19     59.4k
tags         string   5      112
score        int64    -10    17.3k
1,901,900
8,213,007
Profiling on live Django server?
<p>I've never done code coverage in Python, but I'm looking for something like <code>GCC</code>'s <code>gcov</code>, which tells me how many times each line executes, or Apple's Shark, which gives a hierarchical breakdown of how long each function is taking.</p> <p>My problem is that I have a live server which is experiencing high load, and I can't tell from the logs what's causing it. I would like to attach something to my Django instance to monitor which lines are the hottest and/or which functions are taking the longest time.</p> <p>This is something like, but not exactly, code coverage. I would like to introduce it to a live running server, preferably without modifying too much.</p> <p>Ideas?</p>
<p><a href="http://invitebox.github.com/django-live-profiler/" rel="nofollow">Django-live-profiler</a> is a drop-in Django app that lets you profile your running application using <a href="https://github.com/bos/statprof.py" rel="nofollow">statprof</a> and visualize the results. </p>
python|django|code-coverage
3
1,901,901
47,218,100
Trying to scrape HTML table and convert into dataframe in Python. Code not working correctly
<p>I'm trying to scrape the coinmarketcap website for its historical daily data (in an HTML table), but I'm not getting the correct result. Below is the code. The code only returns the last row of the table. I'm doing something wrong with the loop... any help is greatly appreciated!</p> <pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd

data = requests.get('https://coinmarketcap.com/currencies/ethereum/historical-data')
soup = BeautifulSoup(data._content, 'lxml')
table = soup.find_all('table')[0]

#the table has 7 columns, about 30 rows
new_table = pd.DataFrame(columns=range(0,7), index = [0])

row_marker = 0
for row in table.find_all('tr'):
    column_marker = 0
    columns = row.find_all('td')
    for column in columns:
        new_table.iat[row_marker,column_marker] = column.get_text()
        column_marker += 1

print (new_table.head())
</code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html" rel="nofollow noreferrer"><code>read_html</code></a>:</p> <pre><code>url = 'https://coinmarketcap.com/currencies/ethereum/historical-data' df = pd.read_html(url, parse_dates=[0])[0] print (df.head()) Date Open High Low Close Volume Market Cap 0 2017-11-09 308.64 329.45 307.06 320.88 893250000 29509000000 1 2017-11-08 294.27 318.70 293.10 309.07 967956000 28128700000 2 2017-11-07 298.57 304.84 290.77 294.66 540766000 28533300000 3 2017-11-06 296.43 305.42 293.72 298.89 579359000 28322700000 4 2017-11-05 300.04 301.37 295.12 296.26 337658000 28661500000 </code></pre>
python|beautifulsoup
1
1,901,902
47,420,130
Find lines and color them with mouse click
<p>@ALL This is an edit of the original question to bring a little more light on the subject.</p> <h3>Problem Statement</h3> <ul> <li>Suppose there is an industrial P&amp;ID plot.</li> <li>Aiming to color only some lines important to the process.</li> <li>The user should only click (left mouse-click) on the line segment to get it colored.</li> </ul> <h3>Problem Approach</h3> <p>I am new to programming -> using Python (3.5) to try this out. The way I see it the algorithm is like this:</p> <ul> <li>The plot will be in .pdf format. Therefore I could employ PIL ImageGrab or convert .pdf to .png as presented in <a href="https://stackoverflow.com/questions/27826854/python-wand-convert-pdf-to-png-disable-transparent-alpha-channel">this example</a></li> <li>The algorithm will search for pixels around the mouse click, then compare it to another portion of identical size (let's say a strip of 6x3 px), but one step to the left/right (be it 1-5 px)</li> <li>Checking the mean of their differences will tell us if the two strips are identical</li> <li>This way the algorithm should find both the line endings, arrows, corners or other elements</li> <li>Once this is found, the positions recorded and the markup line drawn, the user is expected to pick another line </li> </ul> <h3>Summed up</h3> <ul> <li>Click on the wanted line</li> <li>Grab a small portion of the image around the mouse click</li> <li>Check if the line is either horizontal or vertical</li> <li>Crop an horizontal/vertical slice of a given size</li> <li>Find line endings and record the endings positions</li> <li>Between the two found positions draw a line of a certain color (let's say green)</li> <li>Wait for the next line to be picked and repeat</li> </ul> <h3>Other thoughts</h3> <ul> <li>Attached you can find two pictures of a sample image and what I am trying to achieve.</li> <li>Tried to find "holes" in the slices using the approach found here: <a href="https://stackoverflow.com/questions/35847990/detect-holes-ends-and-beginnings-of-a-line-using-opencv">OpenCV to find line endings</a></li> <li>There is no strict rule in sticking with ImageGrab routine or anything alike</li> <li>If you know other tactics that I could use, please feel free to comment</li> <li>Any advice is welcome and sincerely appreciated</li> </ul> <p>Sample image:</p> <blockquote> <p><img src="https://i.stack.imgur.com/YvqU2.jpg" alt="Sample Image"></p> </blockquote> <p>Desired Result (modified in Paint):</p> <blockquote> <p><img src="https://i.stack.imgur.com/KJDrK.jpg" alt="Desired Result (modified in Paint)"></p> </blockquote> <h3>Adding an update to the post with the work I tried out so far</h3> <p>I've done some modifications on the original code so I will post it below. Everything in comments is either for debug or explanations. Your help is highly appreciated! 
Do not be afraid to intervene.</p> <pre><code>import win32gui as w from PIL import ImageStat, ImageChops, Image, ImageDraw import win32api as wa img=Image.open("Trials.jpg") img_width=img.size[0] img_height=img.size[1] #Using 1920 x 1080 resolution #Hide the taskbar to center the Photo Viewer #Defining a way to make sure the mouse click is inside the image #Substract the width from total and divide by 2 to get base point of the crop width_lim = (1920 - img_width)/2 height_lim = (1080 - img_height)/2-7 #After several tests, the math in calculating the height is off by 7 pixels, hence the correction #Use these values when doing the crop #Check if left mouse button was pressed and record its position left_p = wa.GetKeyState(0x01) #print(left_p) while True : a=wa.GetKeyState(0x01) if a != left_p: left_p = a if a&lt;0 : pos = w.GetCursorPos() pos_x=pos[0]-width_lim pos_y=pos[1]-height_lim # print(pos_x,pos_y) else: break #img.show() #print(img.size) #Define the crop height; size is doubled height_size = 10 #Define max length limit #Getting a horizontal strip im_hor = img.crop(box=[0, pos_y-height_size, img_width, pos_y+height_size]) #im_hor.show() #failed in trying crop a small square of 3x3 size using the pos_x #sq_size = 3 #st_sq = im_hor.crop(box=[pos_x,0,pos_x+sq_size,height_size*2]) #st_sq.show() #going back to the code it works #crop a standard strip and compare with a new test one #if the mean of difference is zero, the strips are identical #still looking for a way to find the position of the central pixel (that would be the one with maximum value - black) strip_len = 3 step = 3 i = pos_x st_sq = im_hor.crop(box=[i,0,i+strip_len,height_size*2]) test_sq = im_hor.crop(box=[i+step,0,i+strip_len+step,height_size*2]) diff = ImageChops.difference(st_sq,test_sq) stat=ImageStat.Stat(diff) mean = stat.mean mean1 = stat.mean #print(mean) #iterate to the right until finding a different strip, record position while mean==[0,0,0]: i = i+1 st_sq = im_hor.crop(box=[i,0,i+strip_len,height_size*2]) #st_sq.show() test_sq = im_hor.crop(box=[i+step,0,i+strip_len+step,height_size*2]) #test_sq.show() diff = ImageChops.difference(st_sq,test_sq) #diff.show() stat=ImageStat.Stat(diff) mean = stat.mean # print(mean) print(i-1) r = i-1 #print("STOP") #print(r) #record the right end as r = i-1 #iterate to the left until finding a different strip. record the position while mean1==[0,0,0]: i = i-1 st_sq = im_hor.crop(box=[i,0,i+strip_len,height_size*2]) #st_sq.show() test_sq = im_hor.crop(box=[i+step,0,i+strip_len+step,height_size*2]) #test_sq.show() diff = ImageChops.difference(st_sq,test_sq) #diff.show() stat=ImageStat.Stat(diff) mean1 = stat.mean # print(mean) #print("STOP") print(i+1) l = i+1 #record the left end as l=i+1 test_draw = ImageDraw.Draw(img) test_draw.line([l,pos_y,r,pos_y], fill=128) img.show() #find another approach or die trying!!! </code></pre> <p>Below is the <a href="https://i.stack.imgur.com/vQXpK.jpg" rel="nofollow noreferrer">result</a> I got. It is not what I was hoping for, but I feel like being on the right track. I could really use some help on finding the pixel position in a strip and make it relative to the big picture pixel position.</p> <p>Another <a href="https://i.stack.imgur.com/5eL1C.png" rel="nofollow noreferrer">image</a> of the sort, in better quality, but yet bringing more problems into the fray.</p>
<p>So this solution is not a full solution to your exact problem, but I think it might be a good approach that can get you at least part of the way there. The issues I have in general with line detection approaches is that they usually heavily rely on <em>multiple</em> hyperparameters. More annoyingly, they are slow since they are searching a wide array of angles; your lines are strictly either horizontal or vertical. As such, I'd recommend using morphology. You can find a general overview of morphology <a href="https://docs.opencv.org/trunk/d9/d61/tutorial_py_morphological_ops.html" rel="nofollow noreferrer">on the OpenCV site</a> and you can see it applied to remove the music bars in a score on <a href="https://docs.opencv.org/3.2.0/d1/dee/tutorial_moprh_lines_detection.html" rel="nofollow noreferrer">this tutorial on the OpenCV site</a>.</p> <p>The basic idea I thought was:</p> <ol> <li>Detect horizontal and vertical lines</li> <li>Run <code>connectedComponents()</code> on the detected lines to identify each line separately</li> <li>Get user mouse position and define a window around it</li> <li>If a label from the connected components is in that window, then grab that component</li> <li>Draw that component on the image</li> </ol> <p>Now, this is a very basic idea and ignores some of the challenges involved. However, what this <em>will</em> do for sure is, if you click anywhere and there <em>is</em> a line in your image within the window of that click, you <em>will</em> get it. There are no missed lines here. Additional good news is it doesn't ignore the thicker borders and such in the image where you would naturally want this to stop (note this problem exists for line detection schemes). This will only detect lines that have a defined width and if the line gets thicker (turns into an arrow or hits a line going a different direction) it cuts it off. The bad news is that this uses pre-defined width for your lines. You can somewhat skirt around this by using a hit-or-miss transform, but note that the implementation is currently broken for versions of OpenCV older than 3.3-rc; see <a href="https://stackoverflow.com/a/46146544/5087436">here</a> for more (you can get around the broken implementation easily). Anyways, a hit-or-miss transform here allows you to say "I want a horizontal line but it can be a few pixels wide or just one pixel wide". Of course the wider you make it, the more things that aren't lines might turn into one. You can filter these out later though based on the size (toss all lines that are smaller than some size with erosion or dilation).</p> <hr> <p>Now what does that look like in code? I decided to make a simple example and apply this, but note the code is thrown together so there's no real error catching here, and you'd want to write this a lot nicer. Either way it's just a quick hack to give an example of the above method. 
</p> <p>First, we'll create the image and draw some lines:</p> <pre><code>import cv2
import numpy as np

img = 255*np.ones((500, 500), dtype=np.uint8)
cv2.line(img, (10, 350), (200, 350), color=0, thickness=1)
cv2.line(img, (100, 150), (400, 150), color=0, thickness=1)
cv2.line(img, (300, 250), (300, 500), color=0, thickness=1)
cv2.line(img, (100, 50), (100, 350), color=0, thickness=1)

bin_img = cv2.bitwise_not(img)
</code></pre> <p><a href="https://i.stack.imgur.com/ucFZK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ucFZK.png" alt="Simple line example"></a></p> <p>And note I've also created the opposite image, because I prefer to keep the things I'm trying to <em>detect</em> white with black being the background.</p> <p>Now we'll grab those horizontal and vertical lines with morphology (erosion in this case):</p> <pre><code>h_kernel = np.array([[0, 0, 0],
                     [1, 1, 1],
                     [0, 0, 0]], dtype=np.uint8)
v_kernel = np.array([[0, 1, 0],
                     [0, 1, 0],
                     [0, 1, 0]], dtype=np.uint8)

h_lines = cv2.morphologyEx(bin_img, cv2.MORPH_ERODE, h_kernel)
v_lines = cv2.morphologyEx(bin_img, cv2.MORPH_ERODE, v_kernel)
</code></pre> <p><a href="https://i.stack.imgur.com/Zr5vF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zr5vF.png" alt="Horizontal lines"></a></p> <p><a href="https://i.stack.imgur.com/nbQ3h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nbQ3h.png" alt="Vertical lines"></a></p> <p>And now we'll label each line:</p> <pre><code>h_n, h_labels = cv2.connectedComponents(h_lines)
v_n, v_labels = cv2.connectedComponents(v_lines)
</code></pre> <p>These images <code>h_labels</code> and <code>v_labels</code> will be identical to <code>h_lines</code> and <code>v_lines</code> but instead of the color/value being white at each pixel, the value is instead an integer for each different component in the image. So the background pixels will have the value 0, one line will be labeled with 1s, the other labeled with 2s. And so on for images with more lines.</p> <p>Now we'll define a window around a user mouse-click. Instead of implementing that pipeline here I'm just going to hard-code a mouse-click position:</p> <pre><code>mouse_click = [101, 148]   # x, y
click_radius = 3           # pixel width around mouse click

window = [[mouse_click[0] - i, mouse_click[1] - j]
          for i in range(-click_radius, click_radius+1)
          for j in range(-click_radius, click_radius+1)]
</code></pre> <p>The last thing to do is loop through all the locations inside the <code>window</code> and check if the label there is positive (i.e. it's not background). If it is, then we've hit a line. So now we can just look at all the pixels that have that label, and that will be the full line. Then we can use any number of methods to draw the line on the original <code>img</code>. 
</p> <pre><code>label = 0
for pixel in window:
    if h_labels[pixel[1], pixel[0]] &gt; 0:
        label = h_labels[pixel[1], pixel[0]]
        bin_labeled = 255*(h_labels == label).astype(np.uint8)
    elif v_labels[pixel[1], pixel[0]] &gt; 0:
        label = v_labels[pixel[1], pixel[0]]
        bin_labeled = 255*(v_labels == label).astype(np.uint8)
    if label &gt; 0:
        rgb_labeled = cv2.merge([img, img+bin_labeled, img])
        break
</code></pre> <p><a href="https://i.stack.imgur.com/A5hCD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A5hCD.png" alt="Labeled line"></a></p> <p>IMO this code directly above is really sloppy; there are better ways to draw this, but I didn't want to spend time on something not really central to the question.</p> <hr> <p>One easy way to improve this would be to connect nearby lines; you could do this still with morphology, before you find the components. A better way to draw would probably be to simply find the min/max locations of that label inside the image, and just use those as the endpoint coordinates for OpenCV's <code>line()</code> function to draw, which would allow you to choose colors and line thicknesses easily. One thing I'd suggest doing if possible would be to show these lines on a mouseover before the user clicks (so they know they're clicking in the right area). That way if a user is close to two lines they know which one they're selecting.</p>
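<p>As a hedged sketch of that min/max endpoint idea (for a horizontal component; a vertical one would use the min/max of <code>ys</code> instead):</p> <pre><code>ys, xs = np.where(h_labels == label)            # coordinates of every pixel in the chosen line
pt1 = (int(xs.min()), int(ys[xs.argmin()]))     # left endpoint
pt2 = (int(xs.max()), int(ys[xs.argmax()]))     # right endpoint

rgb = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)     # draw in color on top of the grayscale image
cv2.line(rgb, pt1, pt2, (0, 255, 0), thickness=2)
</code></pre>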
python|opencv|python-imaging-library|straight-line-detection
0
1,901,903
57,370,047
How to not create an object when there is an exception (python)?
<p>I created a class that accepts an argument, and when the type of this argument is not string, it raises an exception. Well, it raises the exception when I want to create an object of this class and the argument is not string, but it creates the object anyway. Is there any way to prevent the object from being created?</p> <pre class="lang-py prettyprint-override"><code>class Movie():
    def __init__(self, name):
        if type(name) != str:
            raise Exception("name should be string")
        else:
            self.name = name
</code></pre> <p><code>movie = Movie(1)</code></p> <p>Here I get the exception, but if I <code>print(movie)</code>, I get the location of the object in memory. I want to get an error that this name is not defined. Thanks in advance.</p>
<p>Solution: delete the <code>self</code> object before raising.</p> <pre><code>class Movie():
    def __init__(self, name):
        if type(name) != str:
            del self
            raise Exception("name should be string")
        else:
            self.name = name
</code></pre>
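<p>A hedged note on this: in a fresh interpreter the <code>del self</code> isn't strictly required, since raising inside <code>__init__</code> already aborts the assignment, so the name is never bound; seeing an object printed usually means <code>movie</code> was left over from an earlier successful run:</p> <pre><code>&gt;&gt;&gt; movie = Movie(1)   # raises Exception("name should be string")
&gt;&gt;&gt; print(movie)       # NameError: name 'movie' is not defined
</code></pre>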
python|class|exception
2
1,901,904
70,903,170
Flask html templates with node.js: require is not defined
<p>Im making a website with heroku with python <code>Flask</code></p> <p>Now I need to connect mongodb database with javascript but it shows this error:</p> <pre class="lang-js prettyprint-override"><code>Uncaught ReferenceError: require is not defined &lt;anonymous&gt; http://mortis666stocksimulator.herokuapp.com/static/scripts/mongo.js:1 </code></pre> <p>The javascript works before but after I added the line</p> <pre class="lang-js prettyprint-override"><code>const { MongoClient } = require('mongodb'); </code></pre> <p>It gives me the error. I tried to fix this so i created another buildpack (heroku/node.js) <a href="https://i.stack.imgur.com/dk70Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dk70Q.png" alt="The buildpack" /></a></p> <p>And in my project folder I created a <code>package.json</code></p> <p>This is the content:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;name&quot;: &quot;stocksimulator&quot;, &quot;version&quot;: &quot;1.0.0&quot;, &quot;description&quot;: &quot;First ever website project &quot;, &quot;author&quot;: &quot;Mortis_666&quot;, &quot;license&quot;: &quot;MIT&quot;, &quot;dependencies&quot;: { &quot;mongodb&quot;: &quot;^4.3&quot; }, &quot;engines&quot;: { &quot;node&quot;: &quot;16.13.1&quot; } } </code></pre> <p>And it still doesnt work<br /> Is there anyway to fix it?</p> <h2><strong>Edit</strong></h2> <p>Guess what? I tried to run the flask file in my <strong>local machine</strong>. And it gives me the same error.(Even I've <strong>already installed</strong> node.js)</p> <p>But when I run the <code>mongo.js</code> file individually, it doesnt gives me error! And the mongodb's function even work!</p> <p>So it means that the problem <strong>now</strong> is not about how to install node.js in heroku. Its about how to use node.js in <code>flask</code></p> <p>My javascript file is inside the folder <code>static</code> And I'm accessing the javascript file in html file</p> <pre class="lang-html prettyprint-override"><code>&lt;script src=&quot;{{url_for('static', filename='scripts/mongo.js'}}&quot;&gt;&lt;/script&gt; </code></pre> <p>If you want to see the source code you can see it in <a href="https://github.com/Mortis66666/stock" rel="nofollow noreferrer">Github</a></p> <p>The <a href="https://github.com/Mortis66666/stock/blob/main/templates/home.html" rel="nofollow noreferrer">html template</a><br /> The <a href="https://github.com/Mortis66666/stock/blob/main/static/scripts/mongo.js" rel="nofollow noreferrer">javascript file</a><br /> The <a href="https://github.com/Mortis66666/stock/blob/main/app.py" rel="nofollow noreferrer">flask file</a></p>
<p>You do <em>not</em> need JavaScript to connect to MongoDB, you can connect to it directly from your Flask code using the MongoDB Python driver called <a href="https://pymongo.readthedocs.io/en/stable/" rel="nofollow noreferrer">PyMongo</a>.</p> <p>Second, any environment variables passed to the frontend so they can be read by JavaScript are publicly available to anybody who views your source. This is <strong>VERY INSECURE</strong> and no sensitive values should every be passed this way!</p> <p>Third, if you want your &quot;refresh&quot; button to work, you wouldn't talk to the database directly from the browser with JavaScript. Instead, the easiest thing to do would be to use JavaScript to trigger a browser refresh and refresh the whole page.</p> <pre><code>&lt;button onclick=&quot;location.reload();&quot;&gt; </code></pre> <p>However, if you want to only reload part of the page, the next most simple approach is to render a &quot;partial&quot;. You'd create a new Flask route, let's say <code>/stocks/partial</code>, then use JavaScript to call it with <code>fetch()</code>, and that route would only render the part of the the template that listed your stocks. You could then use JavaScript to find where the old list of stocks was in the HTML and replace it.</p> <p>Finally, the most common way this is done these days is with an API call. You'd set up a route that returns JSON data (for example <code>/api/stocks</code>). You'd then call that route with <code>fetch()</code> and take that data and use javascript, or a frontend templating library, to render the data as HTML. This is why React and Vue are popular; they're frontend template (and component) systems. You build your entire frontend with them and your Flask app becomes nothing but an API backend for the frontend to make calls to.</p>
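<p>To make the API-call option concrete, here is a hedged sketch; the collection name <code>stocks</code> and the Mongo URI below are placeholders, not taken from your repo:</p> <pre class="lang-py prettyprint-override"><code>from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
client = MongoClient('mongodb://localhost:27017')  # keep credentials server-side only
db = client['stocksimulator']

@app.route('/api/stocks')
def api_stocks():
    # drop _id so the documents are JSON-serializable
    return jsonify(list(db.stocks.find({}, {'_id': 0})))
</code></pre> <p>The frontend then calls it with <code>fetch('/api/stocks').then(r =&gt; r.json())</code> and renders the result, so the database is never touched from the browser.</p>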
node.js|python-3.x|flask
1
1,901,905
33,934,325
How to paste multiple lines to an ipdb shell in python?
<p>I am working with Python and the ipdb debugger. Let's say I set a breakpoint on some line in a Python file. Now, after running the file, the program stops at the breakpoint. I want to be able to paste multiple lines into the ipdb shell, but currently I get an error when trying to paste multiple lines. How can I paste multiple lines?</p> <p>Thanks.</p>
<p>As far as I know, you cannot simply paste them; you have to put everything on one line. A single <code>;</code> is Python's statement separator, so it keeps the next statement inside the same block, while <code>;;</code> is the (i)pdb command separator, which starts a new top-level command. For example:</p> <pre><code>for i in range(10): print i; print("hello")
</code></pre> <p>would be equivalent to</p> <pre><code>for i in range(10):
    print(i)
    print("hello")
</code></pre> <p>If you want the <code>hello</code> out of the loop, then you need to use <code>;;</code> instead:</p> <pre><code>for i in range(10): print i;; print("hello")
</code></pre>
python-2.7
2
1,901,906
46,993,476
How to dynamically select a subset from pandas dataframe?
<p>I'm new to python and I wanted to do this particular task which doesn't seem obvious to me how to do it. I don't even know what to search in order to find it. First here is the code snippet and I'll explain what I'm aiming for below it:</p> <pre><code>import pandas as pd mycolumns = ['col1', 'col2', 'col3'] df = pd.DataFrame(data=[[**1**,2,3,**1**,5,6],[1,2,3,4,5,6]], columns=['col1_l', 'col2_l', 'col3_l', 'col1_r', 'col2_r', 'col3_r']) criteria = list() for col in mycolumns : criterion = (df[col + '_l'] == df[col + '_r']) criteria.append(criterion) df = df[criteria[0] | criteria[1] | ... | criteria[5]] print df </code></pre> <p>Output:</p> <pre><code> col1_l col2_l col3_l col1_r col2_r col3_r 0 1, 2, 3, 1, 5, 6 </code></pre> <p>What I want is to be able to select the dataframe rows that meet all the specified criteria, but the problem is that the number of columns is not fixed, each run could have different number of columns and I want to do the same each time I execute this. Question is, how can I write this line:</p> <pre><code>df = df[criteria[0] | criteria[1] | ... | criteria[5]] </code></pre> <p>Keep in mind that the dataframe is obtained from a join sql query over a database, I just wrote this example dataframe for clarification. Thank you and pardon me if this was obvious.</p>
<p>Use <a href="https://stackoverflow.com/a/20528566/2901002"><code>np.logical_or.reduce</code></a>:</p> <pre><code>print (df[np.logical_or.reduce(criteria)]) col1_l col2_l col3_l col1_r col2_r col3_r 0 1 2 3 1 5 6 </code></pre>
python|pandas|dataframe
3
1,901,907
46,962,153
Extracting 2D from 3D numpy array
<p>I have two 3D arrays of the form (1000, 1000, 20). The last dimension, 20, is an index through time stamps. I want to step through the arrays by time stamps and compare arrays. Suppose I have A (1000, 1000, 20) and B (1000, 1000, 20).</p> <p>I want something like</p> <pre><code>for t in range(0,21):
    asub = A[,,t]
    bsub = B[,,t]
    #compare asub and bsub
</code></pre> <p>However, that syntax does not seem to work. How can I do this?</p>
<h3>From <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html#combining-advanced-and-basic-indexing" rel="nofollow noreferrer">the <code>documentation</code></a>, you need to <em>&quot;Combine advanced and basic indexing&quot;</em>.</h3> <p>So <em>advanced</em> <code>indexing</code> involves <code>indexing</code> specific <code>axis</code> of an <code>array</code>.</p> <p>For example, taking the <code>elements</code> from <code>index</code> <code>1</code> onwards:</p> <pre><code>&gt;&gt;&gt; a = np.array([[1,2,3], [4,5,6],[7,8,9]]) &gt;&gt;&gt; a[1:, 1:] array([[5, 6], [8, 9]]) </code></pre> <p>So if you want to get the <code>element</code> from the third <code>axis</code> at <code>index</code> <code>t</code>, you need to select all the <code>elements</code> from the other <code>axis</code> with just a regular colon (<code>:</code>) and then specify <code>t</code> for the last <code>index</code>:</p> <p>So you want to do:</p> <pre><code>A[:, :, t] </code></pre>
python|arrays|numpy
2
1,901,908
47,003,215
Can I access a list from another running python script
<p>If I have two scripts, one has a list and is currently running, and the other one needs to access that list while the first one is running, how would I do that?</p> <p>Example</p> <p>Script 1</p> <pre><code>import random

example_list = []

while True:
    example_list.append(random.randint(0,9))
</code></pre> <p>Script 2</p> <pre><code>x = example_list[i]
</code></pre> <p>I can not change Script 1.<br> How would I access the list created in Script 1 from Script 2?</p> <p>P.S. This is just an example, so its purpose doesn't matter.</p>
<p>No, you cannot do it, at least not in Python as written.</p> <p>The two scripts run as separate processes, and separate processes do not share memory, so there is no way to read a variable in another running process unless that process deliberately exposes it (via a file, a socket, or a <code>multiprocessing</code> primitive), which would require changing Script 1.</p>
python|python-3.x|interprocess
0
1,901,909
47,052,039
IndentationError with Python
<p>I am currently trying to stream tweets for a project using Python, Elasticsearch and Kibana.</p> <p>While running my Python script, I have an IndentationError and I don't understand why, can anyone help me through this problem ? </p> <p>Thanks in advance.</p> <p>My Python script : </p> <pre class="lang-python prettyprint-override"><code>import json import tweepy import textblob import elasticsearch from tweepy import OAuthHandler, Stream from tweepy.streaming import StreamListener from textblob import TextBlob from elasticsearch import Elasticsearch consumer_key = '...' consumer_secret = '...' access_token = '...' access_token_secret = '...' elastic_search = Elasticsearch() class MyStreamListener(StreamListener): def on_data(self, data): dict_data = json.loads(data) tweet = TextBlob(dict_data["text"]) print(tweet.sentiment.polarity) if tweet.sentiment.polarity &lt; 0: sentiment = "negative" elif tweet.sentiment.polarity == 0: sentiment = "neutral" else: sentiment = "positive" print(sentiment) elastic_search.index(index="sentiment", doc_type="test-type", body={"author": dict_data["user"]["screen_name"], "date": dict_data["created_at"], "message": dict_data["text"], "polarity": tweet.sentiment.polarity, "subjectivity": tweet.sentiment.subjectivity, "sentiment": sentiment}) return True def on_failure(self, status): print(status) if __name__ == '__main__': listener = MyStreamListener() auth = OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) stream = Stream(auth, listener) stream.filter(track=['congress']) # user_choice = input("Please choose a Hashtag... : ") # retrieve_tweets = api.search(user_choice) </code></pre> <p>The error message : </p> <pre><code>File "sentiment.py", line 21 tweet = TextBlob(dict_data["text"]) ^ IndentationError: unindent does not match any outer indentation level </code></pre>
<p>You do have tabs there.</p> <pre><code>def on_data(self, data):
    dict_data = json.loads(data)
    # ^ tab and 4 spaces here
    tweet = TextBlob(dict_data[&quot;text&quot;])
    # ^ 8 spaces here
    print(tweet.sentiment.polarity)
    # ^ ^ two tabs here (equal 16 spaces)
</code></pre> <p>Note that the representation on the SO site translates the tabs to spaces, but if you copy <a href="https://stackoverflow.com/revisions/4448b9a4-4545-4af6-8871-974fc85391af/view-source">the source</a> into a code editor, it reveals the tabs:</p> <p><a href="https://i.stack.imgur.com/ynbYS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ynbYS.png" alt="indentation" /></a></p>
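<p>As an aside, two stock ways to hunt down mixed tabs and spaces without an editor are the standard library's <code>tabnanny</code> checker and Python 2's <code>-tt</code> flag:</p> <pre><code>python -m tabnanny sentiment.py   # reports lines with ambiguous tab/space indentation
python -tt sentiment.py           # Python 2: treats inconsistent tab use as an error
</code></pre>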
python|indentation
4
1,901,910
37,619,930
Python: How do I add/remove items in an RPG inventory system
<p>I know how to make a simple inventory, but how do I add and remove items? Here is my inventory: <code>inventory = ["Sword", "Shield", "Helmet", "Gloves"]</code>. I can print the inventory: <code>print inventory</code>. I use Python 2.7.10. I tried this, but it doesn't work: <code>inventory.add("Gun")</code></p>
<p>Use <code>append()</code> to add an item to a list and <code>remove()</code> to take one out.</p>
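<p>For example, with the inventory from the question (<code>remove()</code> deletes the first matching value and raises a <code>ValueError</code> if the item isn't present):</p> <pre><code>inventory = ["Sword", "Shield", "Helmet", "Gloves"]

inventory.append("Gun")      # add an item to the end
inventory.remove("Shield")   # remove an item by value

print inventory              # ['Sword', 'Helmet', 'Gloves', 'Gun']
</code></pre>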
python
0
1,901,911
67,902,090
How to import psycopg2 into a new module I'm writing
<p>So to simplify, I'm trying to write my own module (<code>test.py</code>) that looks as follows:</p> <pre><code>import psycopg2

def get_data(xyz):
    connection = psycopg2.connect(user=&quot;&quot;, password=&quot;&quot;, host=&quot;&quot;, port=&quot;&quot;, database=&quot;&quot;)
    last_qry = &quot;&quot;&quot;select * from xyz.abc&quot;&quot;&quot;
    cursor = connection.cursor()
    cursor.execute(last_qry)
    last_data = cursor.fetchone()
    cursor.close()
    connection.close()
    return last_data
</code></pre> <p>In a different file I am running:</p> <pre><code>import test
get_data(xyz)
</code></pre> <p>and I get the following error:</p> <pre><code>name 'psycopg2' is not defined
</code></pre> <p>What am I doing wrong?</p>
<p>There are several bugs in the code snippets you posted:</p> <ol> <li><p>Your import should be like this:</p> <pre><code>from test import get_data
</code></pre> <p>or this way:</p> <pre><code>import test
test.get_data()
</code></pre> </li> <li><p>What is the use of <code>xyz</code>? The second code snippet must raise a <code>NameError</code> because <code>xyz</code> is not defined; if you want to use it in <code>last_qry</code>, you must apply <code>.format()</code> for it.</p> </li> <li><p>What is the structure of the directory that includes the second file? And where is the first file?</p> </li> </ol>
python-3.x|module|importerror
0
1,901,912
61,358,131
Pandas: Fill Cells Down Algorithm
<p>In the example below I need to populate the 'Parent' column as follows: All of the column values would be CISCO except for rows 0 and 7 (should be left blank). </p> <p>Note that 'CISCO' "is in" the cell below it 'CISCO System' which "is in" the cell below it 'CISCO Systems' etc. in fact..all of the CISCOs start with 'CISCO' so I need to group all of the cells that have the same start together as one entity and label the parent with the starting cell (CISCO).</p> <p>We have multiple names for the same vendor so I'm trying to map all of those child 'CISCOs' to one parent 'CISCO'</p> <p>Please note that I have 100,000 rows so the algorithm must be done automatically without manual intervention (i.e. not simply by hard coding parents = 'CISCO')</p> <pre><code>df = pd.DataFrame(['MICROSOFT','CISCO', 'CISCO System', 'CISCO Systems', 'CISCO Systems CANADA', 'CISCO Systems CANADA Corporation', 'CISCO Systems CANADA Corporation Limited', 'IBM'], columns=['Child']) #,[]], columns=['Child', 'Parent']) df['Parent'] = '' df </code></pre> <p>I was hoping that there's an elegant solution, preferably without needing loops. Many thanks for your help!</p> <p><a href="https://i.stack.imgur.com/EWsW2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EWsW2.png" alt="enter image description here"></a></p> <p>Required output:</p> <p><a href="https://i.stack.imgur.com/5oDyL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5oDyL.png" alt="enter image description here"></a></p>
<p>This is a curly one. My attempt again;</p> <p>Data</p> <pre><code>df = pd.DataFrame({'Child':['CANADA MOTOR','CANADA COMPUTERS', 'CANADA COMPUTERS CORPORATION', 'CANADA COMPUTERS CORPORATION LTD', 'CANADA SUPPLIES', 'CANADA SUPPLIES CORPORATION', 'CANADA SUPPLIES CORPORATION LTD', 'IBM','MICROSOFT','CISCO', 'CISCO System', 'CISCO Systems', 'CISCO Systems CANADA', 'CISCO Systems CANADA Corporation', 'CISCO Systems CANADA Corporation Limited', 'IBM']}) </code></pre> <p>Extract first name for each Child into <code>FirstCompanyName</code></p> <pre><code>df['FirstCompanyName']=df.Child.str.extract('(^\w+)') </code></pre> <p>Extract First and Second Names for each child into <code>df2</code>, drop those without second name and rename columns to <code>Child</code> and <code>SeconCompanyName</code></p> <pre><code>df2=df.Child.str.extract('(^((?:\S+\s+){1}\S+).*)', expand=True).dropna() df2.columns=['Child','SeconCompanyName'] </code></pre> <p>Merge the 2 dataframes, replace any <code>NaNs</code> and drop unwanted columns</p> <pre><code> df3= pd.merge(df, df2, left_index=True, right_index=True, how='left',suffixes=('', '_New')) #df3.fillna('', inplace=True)# df3.drop(columns=['Child_New'], inplace=True) df3 </code></pre> <p>mask where <code>SeconCompanyName</code> is null</p> <pre><code>m=df3.SeconCompanyName.isna() </code></pre> <p>Replace <code>SeconCompanyName</code> with <code>FirstCompanyName</code> while the mask is still on</p> <pre><code>df3.loc[m,'SeconCompanyName']=df3.loc[m,'FirstCompanyName'] df3 </code></pre> <p><strong>Outcome 1</strong></p> <p><a href="https://i.stack.imgur.com/wx5Yg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wx5Yg.png" alt="enter image description here"></a></p> <p>If you dont like the above skip the mask and do the following;</p> <pre><code>df3['SeconCompanyName']=np.where(df3.SeconCompanyName.isna(), df3.shift(-1).SeconCompanyName, df3.SeconCompanyName) df3.fillna('', inplace=True) df3 </code></pre> <p><strong>Outcome 2</strong></p> <p><a href="https://i.stack.imgur.com/6nlgn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6nlgn.png" alt="enter image description here"></a></p>
python|pandas
1
1,901,913
27,609,499
numpy.array_split() odd behavior
<p>I'm trying to split a large data frame with cycle data into smaller data frames of equal, or near equal, cycle length. <code>array_split</code> was working great until my data would not allow an equal split (it worked fine with 500,000 cycles, but not with 1,190,508). I want the sections to be in 1000-cycle increments (except the last frame, which would be less).</p> <p>Here's the scenario:</p> <pre><code>d = {
    'a': pd.Series(random(1190508)),
    'b': pd.Series(random(1190508)),
    'c': pd.Series(random(1190508)),
}
frame = pd.DataFrame(d)

cycles = 1000
sections = math.ceil(len(frame)/cycles)
split_frames = np.array_split(frame, sections)
</code></pre> <p>The docs show <code>array_split</code> basically splitting even groups while it can, then making a smaller group at the end because the data can't be divided evenly. This is what I want, but currently, if I look at the lengths of each frame in this new <code>split_frames</code> list:</p> <pre><code>split_len = pd.DataFrame([len(a) for a in split_frames])
split_len.to_csv('lengths.csv')
</code></pre> <p>the lengths of the first 698 frames are 1000 elements, but then the rest (frames 699 to 1190) are 999 elements in length.</p> <p>It seems to make this seemingly random break in length no matter what number I pass for <code>sections</code> (rounding, even number, or whatever else).</p> <p>I'm struggling to understand why it's not creating equal frame lengths except for the last one, like in the docs:</p> <pre><code>&gt;&gt;&gt; x = np.arange(8.0)
&gt;&gt;&gt; np.array_split(x, 3)
[array([ 0.,  1.,  2.]), array([ 3.,  4.,  5.]), array([ 6.,  7.])]
</code></pre> <p>Any help is appreciated, thanks!</p>
<p><code>array_split</code> doesn't make a number of equal sections and one with the leftovers. If you split an array of length <code>l</code> into <code>n</code> sections, it makes <code>l % n</code> sections of size <code>l//n + 1</code> and the rest of size <code>l//n</code>. See <a href="https://github.com/numpy/numpy/blob/v1.9.1/numpy/lib/shape_base.py#L377" rel="nofollow noreferrer">the source</a> for more details. (This really ought to be explained in the docs.)</p> <p>Update: as of NumPy 1.14, this is now explained in the docs.</p>
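<p>Plugging the numbers from the question into that rule reproduces the observed sizes exactly, including the switch from 1000 to 999 at frame 699:</p> <pre><code>l, n = 1190508, 1191       # len(frame) and math.ceil(len(frame)/cycles)
print(l % n, l // n + 1)   # 699 sections of size 1000 (frames 0-698)
print(n - l % n, l // n)   # 492 sections of size 999  (frames 699-1190)
</code></pre>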
python|numpy|pandas
6
1,901,914
43,084,006
Change parameters in a convolutional neural network
<p>I'm practicing CNNs. I read some papers about training on the MNIST dataset using CNNs. The size of each image is 28x28 and the architecture has 5 layers: input>conv1-maxpool1>conv2-maxpool2>fully connected>output</p> <pre><code>Convolutional Layer #1
 - Computes 32 features using a 5x5 filter with ReLU activation.
 - Padding is added to preserve width and height.
 - Input Tensor Shape: [batch_size, 28, 28, 1]
 - Output Tensor Shape: [batch_size, 28, 28, 32]

Pooling Layer #1
 - First max pooling layer with a 2x2 filter and stride of 2
 - Input Tensor Shape: [batch_size, 28, 28, 32]
 - Output Tensor Shape: [batch_size, 14, 14, 32]

Convolutional Layer #2
 - Computes 64 features using a 5x5 filter.
 - Padding is added to preserve width and height.
 - Input Tensor Shape: [batch_size, 14, 14, 32]
 - Output Tensor Shape: [batch_size, 14, 14, 64]

Pooling Layer #2
 - Second max pooling layer with a 2x2 filter and stride of 2
 - Input Tensor Shape: [batch_size, 14, 14, 64]
 - Output Tensor Shape: [batch_size, 7, 7, 64]

Flatten tensor into a batch of vectors
 - Input Tensor Shape: [batch_size, 7, 7, 64]
 - Output Tensor Shape: [batch_size, 7 * 7 * 64]

Fully Connected Layer
 - Densely connected layer with 1024 neurons
 - Input Tensor Shape: [batch_size, 7 * 7 * 64]
 - Output Tensor Shape: [batch_size, 1024]

Output layer
 - Input Tensor Shape: [batch_size, 1024]
 - Output Tensor Shape: [batch_size, 10]
</code></pre> <p>In conv1, one input channel computes 32 features using a 5x5 filter, and in conv2 the 32 inputs from conv1 compute 64 features using the same filter size. What are parameters such as 32, 64 and the 2x2 filter chosen based on? Are they based on the size of the image?</p> <p>If the size of the images is larger than 28x28, such as 128x128, should I increase the number of layers beyond 5? How are the above parameters changed with other sizes of images?</p> <p>Thanks in advance</p>
<blockquote> <p>What are parameters such as 32, 64 and the 2x2 filter chosen based on? Are they based on the size of the image?</p> </blockquote> <p>The parameters you have mentioned (32, 64, 2x2) are the number of filters for a convolutional layer and a filter size. They are hyperparameters that you can select and adjust as you train your models; you control them depending on your dataset, application and model performance.</p> <p>The number of filters controls the number of features that your model learns. In your model, the filter number increases from 32 to 64 after a maxpooling layer. A maxpooling layer with a 2x2 filter reduces the number of features by half, and doubling the number of filters keeps the total number of features in the model roughly the same. By convention, after a 2x2 maxpooling layer the filter number doubles for this reason.</p> <p>As for the filter size: for a maxpooling layer it determines the size of the feature reduction, while for a convolutional layer it determines how much detail of the input images is learned. For example, if you are working with images where small pixels or features differentiate objects, you would choose a small filter size such as 3x3 or 5x5, and vice versa for large filter sizes.</p> <p>One way of understanding these hyperparameters is to understand how they affect the learning of the model, so that you know how to control them in each case. Another way is to look at how they are set in models used by other people. You may find conventions such as the filter number increasing after a maxpooling layer.</p> <blockquote> <p>If the size of the images is larger than 28x28, such as 128x128, should I increase the number of layers beyond 5? How are the above parameters changed with other sizes of images?</p> </blockquote> <p>About layers: having more layers makes your model deeper and results in more parameters. This means that your model becomes more complex and able to learn more about image features, so a deep architecture can benefit learning from images with large resolutions, because a large resolution means there are many features to learn. However, this will also depend on the case. A good approach is to start with a simple model with few layers and gradually increase the layers as long as it benefits your model.</p>
python|tensorflow|deep-learning|convolution
1
1,901,915
43,077,047
Possible reasons for overfitting the dataset
<p>The dataset I used contains 33k images. The training contains 27k and validation set contains 6k images.<br> I used the following CNN code for the model :</p> <pre><code>model = Sequential() model.add(Convolution2D(32, 3, 3, activation='relu', border_mode="same", input_shape=(row, col, ch))) model.add(Convolution2D(32, 3, 3, activation='relu', border_mode="same")) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Convolution2D(64, 3, 3, activation='relu', border_mode="same")) model.add(Convolution2D(128, 3, 3, activation='relu', border_mode="same")) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Activation('relu')) model.add(Dense(1024)) model.add(Dropout(0.5)) model.add(Activation('relu')) model.add(Dense(1)) adam = Adam(lr=0.0001) model.compile(optimizer=adam, loss="mse", metrics=["mae"]) </code></pre> <p>The output I obtain has a <code>decreasing training loss</code> but <code>increasing validation loss</code> suggesting <code>overfitting</code>. But I have included <code>dropouts</code> which should have helped in <code>preventing overfitting</code>.Following is the snap of output when trained for 10 epochs :<br> </p> <pre><code>Epoch 1/10 27008/27040 [============================&gt;.] - ETA: 5s - loss: 0.0629 - mean_absolute_error: 0.1428 Epoch 00000: val_loss improved from inf to 0.07595, saving model to dataset/-00-val_loss_with_mymodel-0.08.hdf5 27040/27040 [==============================] - 4666s - loss: 0.0629 - mean_absolute_error: 0.1428 - val_loss: 0.0759 - val_mean_absolute_error: 0.1925 Epoch 2/10 27008/27040 [============================&gt;.] - ETA: 5s - loss: 0.0495 - mean_absolute_error: 0.1287 Epoch 00001: val_loss did not improve 27040/27040 [==============================] - 4605s - loss: 0.0494 - mean_absolute_error: 0.1287 - val_loss: 0.0946 - val_mean_absolute_error: 0.2289 Epoch 3/10 27008/27040 [============================&gt;.] - ETA: 5s - loss: 0.0382 - mean_absolute_error: 0.1119 Epoch 00002: val_loss did not improve 27040/27040 [==============================] - 4610s - loss: 0.0382 - mean_absolute_error: 0.1119 - val_loss: 0.1081 - val_mean_absolute_error: 0.2463 </code></pre> <p><code>So, what is wrong? Are there any other methods to prevent overfitting?</code><br> <code>Does shuffling of data help?</code></p>
<p>I would try to add weight decay of <code>1E-4</code>. This can be done by adding the weight decay layer-wise like this: <code>model.add(Convolution2D(32, 3, 3, activation='relu', border_mode="same", input_shape=(row, col, ch), W_regularizer=l2(1E-4), b_regularizer=l2(1E-4)))</code>. L2 can be found in <code>keras.regularizers</code> (<a href="https://keras.io/regularizers/#example" rel="nofollow noreferrer">https://keras.io/regularizers/#example</a>). Weight regularization is very good at combating overfitting.</p> <p>However, overfitting might not only be a result of your model, but also of your data. If the validation data is somehow "harder" than your train data, then it might just be that you cannot fit it as well.</p>
tensorflow|neural-network|keras
1
1,901,916
43,419,040
Python3 importing module.py just a code part of main.py
<p>This is a mdiWindow. <code>openChildWindow</code> action opens new child window. But I try to use import instead of class not working. I just want to short code lines instead of long code lines. I trying to use <code>child2.py</code> only part of <code>main.py</code> but not working. <code>import child2.py</code> doesn't make new mdiChildWindow.</p> <p>main.py</p> <pre><code>import sys, time from PyQt5 import uic from PyQt5.QtWidgets import QMainWindow, QApplication class MainWindow(QMainWindow): def __init__(self): super(MainWindow, self).__init__() uic.loadUi("mainWindow.ui",self) self.actionChildWindow.triggered.connect(self.openChild) self.actionChildWindow2.triggered.connect(self.openChild2) def openChild(self): childWindow=openChild("child.ui") childWindow.add(self.mdiArea) def openChild2(self): import child2 child2.childWindow2=openChild2("child2.ui") child2.childWindow2.add(self.mdiArea) class openChild(QMainWindow): def __init__(self,modul): super().__init__() uic.loadUi(modul,self) def add(self,addToMainWindow): addToMainWindow.addSubWindow(self) self.show() def main(): app = QApplication(sys.argv) ex = MainWindow() ex.show() sys.exit(app.exec_()) if __name__ =='__main__': main() </code></pre> <p>child2.py</p> <pre><code>class openChild2(QMainWindow): def __init__(self,modul): super().__init__() uic.loadUi(modul,self) def add(self,addToMainWindow): addToMainWindow.addSubWindow(self) self.show() </code></pre>
<p>Okay, so there are a few issues with your code. I think the one you're talking about specifically is the issue with importing child2.py. Your problem is that you're trying to import inside of a function like this:</p> <pre><code>def openChild2(self):
    import child2
    child2.childWindow2=openChild2("child2.ui")
    child2.childWindow2.add(self.mdiArea)
</code></pre> <p>Really what you should be doing is importing at the start of the file like your other imports. Since you are only importing one class, I like to use the <code>from</code> syntax for this, like so:</p> <pre><code>import sys, time
from PyQt5 import uic
from PyQt5.QtWidgets import QMainWindow, QApplication
from child2 import openChild2
</code></pre> <p>Then you can use openChild2 as if the class was defined in your main.py file.</p>
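<p>One more thing worth noting from the snippets as posted: <code>child2.py</code> never imports the names it uses, so even after fixing the import in <code>main.py</code>, defining the class would fail with a <code>NameError</code> unless <code>child2.py</code> gets its own imports at the top:</p> <pre><code># child2.py
from PyQt5 import uic
from PyQt5.QtWidgets import QMainWindow

class openChild2(QMainWindow):
    ...
</code></pre>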
python-3.x|import|pyqt5
0
1,901,917
43,424,510
Building a list from user input throughout different functions
<p>I started Learn Python the Hard Way recently. I'm on exercise 36, where we design our own text adventure game. I was hoping to have the user go through different rooms collecting items to later be used on the boss in the last room, but I can't figure out how to keep adding to the same list when I change rooms. This is what I have so far for that part (I cut out description text)...</p> <pre><code>def add_item():
    backpack = []
    i = 0
    while 1:
        i += 1
        item = raw_input(" &gt; ")
        if item == '':
            break
        backpack.append(item)
        print "\nAh, the %s, let us hope this serves you well." % item
    print "This is the inventory you have acquired so far..."
    print backpack

def dizzygas_hallway():
    print "If so which item do you choose? (cloak or pendant)"
    add_item()

def dark_laboratory():
    print "Which item do you take? (book, potion or sword)\n"
    add_item()
    print "You exit the only door in sight..."
    dizzygas_hallway()
</code></pre>
<p>I think you're close, just swap some lines around. Otherwise, every time you want to add an item, you create a new, empty list.</p> <pre><code>backpack = []  # Define outside of function

def add_item():
    global backpack  # Use global variable (this line isn't 'necessary', though)
</code></pre>
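<p>Put together, a minimal sketch of the rearranged function, reusing the Python 2 code from the question, would be:</p> <pre><code>backpack = []                  # one shared inventory for every room

def add_item():
    global backpack            # not strictly needed for append(), but makes the intent clear
    while 1:
        item = raw_input(" &gt; ")
        if item == '':
            break
        backpack.append(item)
        print "\nAh, the %s, let us hope this serves you well." % item
    print "This is the inventory you have acquired so far..."
    print backpack
</code></pre>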
python|python-2.7
1
1,901,918
67,081,513
mysql.connector only inserts 1 character per row into MySQL database
<p>I have this (somewhat) working script that takes comments from a Reddit subreddit and insert them into a MySQL database. The spaghetti code I ended up with, however, pastes one character from each comment to each row of the database instead of one comment per row.</p> <p>I have searched through but couldn't find any instance like this on previous cases. Below is a screenshot of how MySQL look like + the code snippet. <a href="https://i.stack.imgur.com/WG6qb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WG6qb.jpg" alt="enter image description here" /></a></p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>import praw import configReddit import mysql.connector import configMySQL # ---------------- REDDIT ---------------- # reddit = praw.Reddit( client_id=configReddit.client_id, client_secret=configReddit.client_secret, password=configReddit.password, user_agent=configReddit.user_agent, username=configReddit.username, charset='utf8mb4' ) # ---------------- DATABASE ---------------- # mydb = mysql.connector.connect( host=configMySQL.host, user=configMySQL.user, password=configMySQL.password, database="db_blockchainstable" ) mycursor = mydb.cursor() sql = "INSERT INTO test (rdtext) VALUES (%s)" # ---------------- EXE ---------------- # for comment in reddit.subreddit("news").stream.comments(): mycursor.executemany(sql, comment.body) mydb.commit() print(comment.body)</code></pre> </div> </div> </p> <p>Below what the console is returning based on the <em>print(comment.body)</em>. <a href="https://i.stack.imgur.com/Mr1m7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mr1m7.png" alt="enter image description here" /></a></p> <p>Also, if I change <em>mycursor.executemany(sql, db)</em> to <em>mycursor.execute(sql, db)</em> I get this error:</p> <p><a href="https://i.stack.imgur.com/UyEh2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UyEh2.png" alt="enter image description here" /></a></p> <p>If I wrap %s in ' ' --&gt; '%s' the database records <em>%s</em> as value (see below).</p> <p><a href="https://i.stack.imgur.com/CRqLD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CRqLD.png" alt="enter image description here" /></a></p>
<p>I have come up with even spaghettier code that works partially... I have certainly made some progress, but I don't think it is the final answer. Nevertheless, I will add it here in case it serves anyone.</p> <p>I made the following <strong>changes</strong>:</p> <ol> <li>Created a list <code>dt = []</code></li> <li>Appended each comment body to that list with <code>dt.append(comment.body)</code></li> <li>Then I wrapped db within mycursor.executemany in <code>zip</code>, as <code>mycursor.executemany(sql, zip(db))</code></li> </ol> <p>The combination of appending everything onto a list + ZIP makes it work, and now I have one comment per row in my MySQL database. This is an answer from a <a href="https://stackoverflow.com/questions/50956459/how-do-i-use-executemany-to-write-the-first-values-in-each-key-to-a-database">different question in this forum</a> that helped me get there.</p> <blockquote> <p>I believe executemany wants a list of tuples each one containing one row. That is not what .items() provide, since it will provide a key and the values associated with that key on each iteration. Luckly the zip() function does exactly what you need:</p> </blockquote> <p>That said, a) I don't really understand why it works with ZIP and b) I need other alternatives and I don't know any others.</p> <p>The remaining issue is that the <code>rdtext</code> column only allows a maximum of 500 characters, so every time a comment has more than 500 it throws an error. This means that I'm not carrying 100% of the information from Reddit, as some comments are above 500 characters.</p> <pre><code>mysql.connector.errors.DataError: 1406 (22001): Data too long for column 'rdtext' at row 87
</code></pre> <p><strong>Here is the new code:</strong></p> <pre><code>import praw
import configReddit
import mysql.connector
import configMySQL

# ---------------- REDDIT ---------------- #
reddit = praw.Reddit(
    client_id=configReddit.client_id,
    client_secret=configReddit.client_secret,
    password=configReddit.password,
    user_agent=configReddit.user_agent,
    username=configReddit.username,
    charset='utf8mb4'
)

# ---------------- DATABASE ---------------- #
mydb = mysql.connector.connect(
    host=configMySQL.host,
    user=configMySQL.user,
    password=configMySQL.password,
    database="db_blockchainstable",
)

mycursor = mydb.cursor()
sql = "INSERT INTO test (rdtext) VALUES (%s)"

# ---------------- EXE ---------------- #
dt = []
for comment in reddit.subreddit("News").stream.comments():
    print(comment.body)
    print(len(comment.body))
    dt.append(comment.body)
    rdtext = dt
    db = (rdtext)
    mycursor.executemany(sql, zip(db))
    mydb.commit()
</code></pre>
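<p>For completeness, a simpler sketch that avoids both the growing list and <code>zip()</code>: <code>executemany()</code> iterates over its second argument, so passing a bare string yields one parameter set per character, which is exactly the one-character-per-row symptom; passing a one-element tuple to <code>execute()</code> inserts one comment per row. The &quot;Data too long&quot; error comes from the column definition, not from <code>zip()</code>:</p> <pre><code># widening the column removes the 500-character cap (assuming rdtext was created as VARCHAR(500)):
#   ALTER TABLE test MODIFY rdtext TEXT;

for comment in reddit.subreddit("News").stream.comments():
    mycursor.execute(sql, (comment.body,))   # one-element tuple -&gt; one row per comment
    mydb.commit()
</code></pre>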
python|mysql
0
1,901,919
66,948,344
Why does this work on this, but not on this? [Selenium] [Python]
<p>I'm making a stock checker and I have a bit of a problem. When I go to Amazon (AMZN) on <a href="http://finance.yahoo.com" rel="nofollow noreferrer">finance.yahoo.com</a>, my code works perfectly, but when I try, for example, Tesla (TSLA), it doesn't work. Here is part of my code.</p> <pre><code>getperstock = driver.find_element_by_xpath('//*[@class=&quot;Trsdu(0.3s) Fw(b) Fz(36px) Mb(-4px) D(ib)&quot;]').text  #Works
print(str(getperstock))

#&quot;Trsdu(0.3s) Fw(500) Pstart(10px) Fz(24px) C($negativeColor)&quot;
getstockratio = driver.find_element_by_xpath('//*[@class=&quot;Trsdu(0.3s) Fw(500) Pstart(10px) Fz(24px) C($positiveColor)&quot;]').text  #Doesnt
print(str(getstockratio))
</code></pre>
<p>Here's the minimal HTML:</p> <pre class="lang-html prettyprint-override"><code>&lt;div class=&quot;D(ib) Mend(20px)&quot; data-reactid=&quot;31&quot;&gt;
  &lt;span class=&quot;Trsdu(0.3s) Fw(b) Fz(36px) Mb(-4px) D(ib)&quot; data-reactid=&quot;32&quot;&gt;670.97&lt;/span&gt;
  &lt;span class=&quot;Trsdu(0.3s) Fw(500) Pstart(10px) Fz(24px) C($negativeColor)&quot; data-reactid=&quot;33&quot;&gt;-20.65 (-2.99%)&lt;/span&gt;
  &lt;div id=&quot;quote-market-notice&quot; class=&quot;C($tertiaryColor) D(b) Fz(12px) Fw(n) Mstart(0)--mobpsm Mt(6px)--mobpsm&quot; data-reactid=&quot;34&quot;&gt;
    &lt;span data-reactid=&quot;35&quot;&gt;At close: 4:00PM EDT&lt;/span&gt;
  &lt;/div&gt;
&lt;/div&gt;
</code></pre> <p>Your first xpath to get the first <code>span</code> is OK. But the second <code>span</code> will include <code>negative</code> or <code>positive</code>, so when the stock is moving negatively/positively, it will fail to locate the element. I see that in your comment you said it worked, but I'm sure that once it goes the opposite way, it will fail.</p> <p>A better way to do it:</p> <pre class="lang-py prettyprint-override"><code># get parent of both `span`
parent = driver.find_element_by_xpath('//div[@class=&quot;D(ib) Mend(20px)&quot;]')

# get the `span` insides
span_elems = parent.find_elements_by_tag_name(&quot;span&quot;)

# first one is the stock price
price = span_elems[0].text

# second span is the %
ratio = span_elems[1].text

# third one is the &quot;at close...&quot; but you dont need it
</code></pre> <p>By the way, don't use a bare <code>find_element...</code> to locate the parent node; using an explicit wait will make the code do the waiting part efficiently for you: <a href="https://selenium-python.readthedocs.io/waits.html" rel="nofollow noreferrer">https://selenium-python.readthedocs.io/waits.html</a></p>
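<p>A hedged sketch of that explicit wait (standard <code>WebDriverWait</code> usage from the linked docs; the 10-second timeout is an arbitrary choice):</p> <pre class="lang-py prettyprint-override"><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait up to 10 seconds for the parent div to be present before reading the spans
parent = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, "//div[@class='D(ib) Mend(20px)']"))
)
</code></pre>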
python|selenium|selenium-chromedriver
1
1,901,920
4,305,284
Python M2crypto error
<p>I am trying to build the <code>crda</code> agent module for a cross platform (ARM). One of the inputs to the build is the <code>m2crypto</code> shared object file, which I have successfully cross compiled, so the <code>m2crypto.so</code> file has been generated.</p> <p>When I give the <code>make</code> command, a Python script is called internally which takes the <code>m2crypto.so</code> module as input and generates OpenSSL (RSA) keys.</p> <p>The problem I am facing is that the Python script can't import any of the modules from the <code>__m2crypto.so</code> file. I am using Python 2.4, and the error I am getting is:</p> <pre><code>$ make
  GEN keys-ssl.c
Trusted pubkeys: /home/tools/crda/pubkeys/linville.key.pub.pem
Traceback (most recent call last):
  File "./utils/key2pub.py", line 6, in ?
    import m2crypto
ImportError: /usr/lib/python2.4/lib-dynload/m2crypto.so: cannot open shared object file: No such file or directory
make: *** [keys-ssl.c] Error 1
</code></pre> <p>Whereas when I compile <code>m2crypto</code> for the host machine (x86 platform) and try to build <code>crda</code> for it, Python is able to import the <code>m2crypto.so</code> file.</p> <p>Any suggestions on how to build it successfully on the different platform (ARM)?</p> <p>Thanks in advance, Rams ch</p>
<p>This question has some age ;-) I was faced with the same question in the last few days. Maybe the solution which fixed my problem is also helpful for anyone reading this question. I was using a patch from openwrt:</p>

<p><a href="https://dev.openwrt.org/browser/trunk/package/crda/patches/101-make_crypto_use_optional.patch?rev=21956" rel="nofollow">101-make_crypto_use_optional.patch</a></p>

<p>This patch removes the crypto setup from crda. For me this was okay...</p>
python|m2crypto
0
1,901,921
48,072,091
Buildozer Numpy RuntimeError: Broken toolchain: cannot link a simple C program
<p>Writing my first Android app in Python and using Buildozer to package it. Because I will need to use numpy later on in the project, I tried packaging the following test code:</p> <pre><code>import numpy import kivy kivy.require('1.0.6') from kivy.app import App from kivy.uix.button import Button class TestApp(App): def build(self): return Button(text='Hello World') TestApp().run() </code></pre> <p>However, I got the following error:</p> <pre><code>Traceback (most recent call last): File "setup.py", line 251, in &lt;module&gt; setup_package() File "setup.py", line 243, in setup_package setup(**metadata) File "/home/kivy/Desktop/cam/.buildozer/android/platform/build/build/other_builds/numpy/armeabi-v7a/numpy/numpy/distutils/core.py", line 169, in setup return old_setup(**new_attr) File "/home/kivy/Desktop/cam/.buildozer/android/platform/build/build/python-installs/myapp/lib/python2.7/distutils/core.py", line 152, in setup dist.run_commands() File "/home/kivy/Desktop/cam/.buildozer/android/platform/build/build/python-installs/myapp/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/home/kivy/Desktop/cam/.buildozer/android/platform/build/build/python-installs/myapp/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/home/kivy/Desktop/cam/.buildozer/android/platform/build/build/other_builds/numpy/armeabi-v7a/numpy/numpy/distutils/command/build_ext.py", line 59, in run self.run_command('build_src') File "/home/kivy/Desktop/cam/.buildozer/android/platform/build/build/python-installs/myapp/lib/python2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File "/home/kivy/Desktop/cam/.buildozer/android/platform/build/build/python-installs/myapp/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/home/kivy/Desktop/cam/.buildozer/android/platform/build/build/other_builds/numpy/armeabi-v7a/numpy/numpy/distutils/command/build_src.py", line 153, in run self.build_sources() File "/home/kivy/Desktop/cam/.buildozer/android/platform/build/build/other_builds/numpy/armeabi-v7a/numpy/numpy/distutils/command/build_src.py", line 164, in build_sources self.build_library_sources(*libname_info) File "/home/kivy/Desktop/cam/.buildozer/android/platform/build/build/other_builds/numpy/armeabi-v7a/numpy/numpy/distutils/command/build_src.py", line 299, in build_library_sources sources = self.generate_sources(sources, (lib_name, build_info)) File "/home/kivy/Desktop/cam/.buildozer/android/platform/build/build/other_builds/numpy/armeabi-v7a/numpy/numpy/distutils/command/build_src.py", line 386, in generate_sources source = func(extension, build_dir) File "numpy/core/setup.py", line 686, in get_mathlib_info raise RuntimeError("Broken toolchain: cannot link a simple C program") RuntimeError: Broken toolchain: cannot link a simple C program STDERR: # Command failed: /usr/bin/python -m pythonforandroid.toolchain create --dist_name=myapp --bootstrap=sdl2 --requirements=kivy,numpy --arch armeabi-v7a --copy-libs --color=always --storage-dir=/home/kivy/Desktop/cam/.buildozer/android/platform/build # # Buildozer failed to execute the last command # The error might be hidden in the log above this error # Please read the full log, and search for it before # raising an issue with buildozer itself. 
# In case of a bug report, please add a full log with log_level = 2 </code></pre> <p>Also, here is my <code>buildozer.spec</code> file:</p> <pre><code>title = My Application package.name = myapp package.domain = org.test source.dir = . source.include_exts = py,png,jpg,kv,atlas version = 1.0 requirements = kivy,numpy orientation = portrait osx.kivy_version = 1.9.1 fullscreen = 0 android.api = 19 android.sdk = 20 android.ndk = 9c android.arch = armeabi-v7a log_level = 2 warn_on_root = 1 </code></pre> <p>Note that when I removed "import numpy" from the python code and removed "numpy" from the requirements list in the buildozer.spec file, my code was packaged perfectly. I am running this on VM Virtual Box which had Buildozer pre-installed. </p> <p>Also it is not just Numpy giving me this issue- OpenCV is giving me the exact same errors. Will make separate post for that if needed. </p>
<p>This issue is reported in the Python for Android (p4a) project <a href="https://github.com/kivy/python-for-android/issues/1141" rel="nofollow noreferrer">here</a>; I didn't know it was still present in stable p4a. Nevertheless, at that link you can find the <a href="https://github.com/kivy/python-for-android/pull/1209" rel="nofollow noreferrer">PR that fixes the issue</a>. I didn't test it, but different people say it works.</p>

<p>You can try to build numpy with this fix; here's what you'll need:</p>

<ol>
<li><p>Make sure you clean everything left over from the current build process with the command:</p>

<p><code>buildozer distclean</code></p></li>
<li><p>Clone the p4a branch with the fix using the command:</p>

<p><code>git clone -b p4a_numpy_fix https://github.com/mahomahomaho/python-for-android fix-numpy</code></p></li>
<li><p>Change your <code>buildozer.spec</code> to use this cloned version of p4a (use your actual path):</p>

<p><code>p4a.source_dir = /home/ubuntu/p4a_numpy_fix</code></p></li>
</ol>

<p>Then run the apk build again. If everything works fine, you'll be able to build the apk. If not, you'll face other errors; no guarantees here :(</p>
android|python|numpy|kivy|buildozer
2
1,901,922
48,048,279
TypeError: Cannot read property '_uploadFiles' of undefined in google colaboratory
<p>I am trying to upload a file in Google Colaboratory and wrote the code below.</p>

<pre><code>from google.colab import files
uploaded = files.upload()
</code></pre>

<p>But I am getting the below error when running the code in the browser.</p>

<blockquote>
  <p>MessageError: TypeError: Cannot read property '_uploadFiles' of undefined</p>
</blockquote>

<p>Please help me solve the issue.</p>
<p>Well, if running on Brave Browser, I can confirm that <strong>turning down the shields</strong> will do the job.</p>
python|google-colaboratory
32
1,901,923
48,255,432
inserting new values in dicts stored in a list of dicts
<pre><code>[{'name': None, 'price': None, 'shares': None}, {'name': None, 'price': None, 'shares': None}, {'name': None, 'price': None, 'shares': None}, {'name': None, 'price': None, 'shares': None}, </code></pre> <p>I have a list of dicts like this and 3 lists that I zipped into one that has the values Id like to go into the corresponding dict. </p> <p>I thought I could just go through the list and use a for loop to update the values but what ends up happening is that every dict is updated to the last item in the list of values</p>
<p>You can do this using your list of dicts. But as cᴏʟᴅsᴘᴇᴇᴅ noted, you don't need to initialize the empty structure. Here are two solutions, one with and one without pre-initialization.</p>

<p>Example data:</p>

<pre><code>names = ["Alice", "Bob", "Carol", "Eve"]
price = [1.00, 2.01, 3.02, 4.03]
shares = [10, 50, 80, 100]

data = zip(names, price, shares)
</code></pre>

<p>With pre-initialization:</p>

<pre><code>frame = [{'name': None, 'price': None, 'shares': None},
         {'name': None, 'price': None, 'shares': None},
         {'name': None, 'price': None, 'shares': None},
         {'name': None, 'price': None, 'shares': None}]

dlist = list(data)

for i, d in enumerate(frame):
    for j, k in enumerate(d.keys()):
        d[k] = dlist[i][j]

frame
[{'name': 'Alice', 'price': 1.0, 'shares': 10},
 {'name': 'Bob', 'price': 2.01, 'shares': 50},
 {'name': 'Carol', 'price': 3.02, 'shares': 80},
 {'name': 'Eve', 'price': 4.03, 'shares': 100}]
</code></pre>

<p>Without pre-initialization:</p>

<pre><code>fields = ["name", "price", "shares"]
[{k:v for k, v in zip(fields, d)} for d in data]
</code></pre>

<p>(Note that <code>zip()</code> returns a one-shot iterator in Python 3, so re-create <code>data</code> before running the second solution after the first.)</p>
python
1
1,901,924
51,138,152
Python: Azure Storage table failed to insert batch items when they are exists
<p>I am using Azure storage table with python and trying to insert a batch of entities. When inserting an entity for the first time, when it does not exist in the table, it works fast (as expected). The second time the same entity is inserted, the code just hangs for a minute and nothing really happens.</p>

<h2>The code</h2>

<p>This is my batch insert:</p>

<pre><code>acc_name = 'AccountName'
acc_key = 'MyKey'
table_name='MyTable'

service = TableService(account_name=acc_name, account_key=acc_key)
batch = TableBatch()
batch.insert_entity({
    'PartitionKey': 'PARTITION1',
    'RowKey': "1",
    'someKey': 'key'
})
service.commit_batch(table_name, batch)
</code></pre>

<p>Just try to run this code twice. The first time it will work; the second time it hangs for a minute and then returns the error:</p>

<pre><code>Client-Request-ID=a734f002-7dff-11e8-b587-28c63f6cb636 Retry policy did not allow for a retry: Server-Timestamp=Mon, 02 Jul 2018 13:55:29 GMT, Server-Request-ID=4168269a-0002-0073-640c-121de2000000, HTTP status code=202, Exception=The specified entity already exists.RequestId:4168269a-0002-0073-640c-121de2000000Time:2018-07-02T13:55:30.4994452Z.
</code></pre>

<h2>Test #1</h2>

<p>I'm pretty sure that this is not intended behavior, since when I run the equivalent code in C#, it throws an exception immediately: "Element in index 0 is already exists.". Which makes sense...</p>

<h2>Test #2</h2>

<p>Another test I made was to insert an entity, not in a batch. In that case, when the entity is already in the table, it just throws an "Already exists" exception. Which is good.</p>

<h2>My environment</h2>

<p>Windows 10, Python 3.6 (64 bit), azure-sdk for python (version 3.0.0).</p>

<p>Can someone confirm this behavior? What should I do?</p>
<p>As mentioned by @Gaurav Mantri simply replace insert_entity with insert_or_merge_entity:</p> <pre class="lang-py prettyprint-override"><code>acc_name = 'AccountName' acc_key = 'MyKey' table_name='MyTable' service = TableService(account_name=acc_name, account_key=acc_key) batch = TableBatch() batch.insert_or_merge_entity({ 'PartitionKey': 'PARTITION1', 'RowKey': &quot;1&quot;, 'someKey': 'key' }) service.commit_batch(table_name, batch) </code></pre>
python|azure|azure-storage|azure-table-storage
0
1,901,925
51,381,044
Django can't find static files on server
<p>So I have this problem with deploying static files on a server with Django. I am running nginx, and I successfully tested my website locally, but now, when I deploy my site on the web, I can't load the static files, even though I have done everything according to the instructions. I get this error all the time:</p>

<pre><code>"GET /staticold/botadd/anyfile HTTP/1.0" 404 99
</code></pre>

<p>My <code>settings.py</code> static settings look like this:</p>

<pre><code>STATIC_URL = '/staticold/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'botadd/static'),
)
</code></pre>

<p>I've also tried it like this:</p>

<pre><code>STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
</code></pre>

<p>The <code>python manage.py collectstatic</code> command is successful; all of my files are transferred to the <code>STATIC_ROOT</code> folder, but then I get that 404 error. What can it be? I am stuck.</p>

<p>My <code>sites-available/default</code>:</p>

<pre><code>server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        #try_files $uri $uri/ =404;
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #   include snippets/fastcgi-php.conf;
    #
    #   # With php7.0-cgi alone:
    #   fastcgi_pass 127.0.0.1:9000;
    #   # With php7.0-fpm:
    #   fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #   deny all;
    #}
}

# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
#   listen 80;
#   listen [::]:80;
#
#   server_name example.com;
#
#   root /var/www/example.com;
#   index index.html;
#
#   location / {
#       try_files $uri $uri/ =404;
#   }
#}
</code></pre>
<p>You may have to tell nginx where to look to serve your static content. Try adding a "location /static/" block in your nginx sites-available file, eg:</p> <pre><code>server { server_name &lt;your_server_name&gt;.com; listen 443; listen [::]:443; ssl on; ssl_certificate /etc/nginx/ssl/cert.crt; ssl_certificate_key /etc/nginx/ssl/private.key; location = /favicon.ico { access_log off; log_not_found off; } location /static/ { allow all; alias /your/base/dir/here/.../botadd/static/; } location / { ... } } </code></pre>
python|django
0
1,901,926
17,579,130
Using python to issue command prompts
<p>I have been teaching myself python over the past few months and am finally starting to do some useful things. </p> <p>What I am trying to ultimately do is have a python script that acts as a queue. That is, I would like to have a folder with a bunch of input files that another program uses to run calculations (I am a theoretical physicist and do many computational jobs a day).</p> <p>The way I must do this now is put all of the input files on the box that has the computational software. Then I have to convert the dos input files to unix (dos2unix), following this I must copy the new input file to a file called 'INPUT'. Finally I run a command that starts the job.</p> <p>All of these tasks are handled in a command prompt. My question is how to I interface my program with the command prompt? Then, how can I monitor the process (which I normally do via cpu usage and the TOP command), and have python start the next job as soon as the last job finishes.</p> <p>Sorry for rambling, I just do not know how to control a command prompt from a script, and then have it automatically 'watch' the job.</p> <p>Thanks</p>
<p>The <a href="http://docs.python.org/2/library/subprocess.html" rel="nofollow noreferrer">subprocess</a> module has many tools for executing system commands in python.</p>

<pre><code>from subprocess import call
call(["ls", "-l"])
</code></pre>

<p><a href="https://stackoverflow.com/a/89243/2502012">source</a></p>

<p><code>call</code> will wait for the command to finish and return its returncode, so you can call another one afterwards knowing that the previous one has finished.</p>

<p><code>os.system</code> is an older way to do it, but has fewer tools and isn't recommended:</p>

<pre><code>import os
os.system('"C:/Temp/a b c/Notepad.exe"')
</code></pre>

<p><strong>edit</strong> FvD left a comment explaining how to "watch" the process below</p>
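<p>A sketch of how the whole queue could look with this (the folder layout, the <code>dos2unix</code> step and the <code>run_calc</code> start command are placeholders; substitute whatever your computational software actually uses):</p>

<pre><code>import glob
import shutil
import subprocess

# run every input file in the folder, one job at a time
for job in sorted(glob.glob('inputs/*.inp')):   # hypothetical folder/extension
    subprocess.check_call(['dos2unix', job])    # convert dos line endings to unix
    shutil.copy(job, 'INPUT')                   # the solver reads a file named INPUT
    subprocess.check_call(['run_calc'])         # placeholder for the real start command
    # check_call blocks until the command exits, so the next iteration
    # only starts once the previous job has finished
</code></pre>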
python|interface|command|subprocess|prompt
6
1,901,927
64,234,841
How to force Spyder or any IDE to reload python modules/files
<p>I am checking someone else's code and could not figure out how to make it work. I need to reload the whole set of modules, and sometimes it happens and sometimes it does not.</p>

<p>The folder structure is as shown below:</p>

<pre><code>Parent Folder
    -&gt; Folder jobs
        -&gt; plant_trans.py
    -&gt; Folder scripts
        -&gt; __init__.py
        -&gt; connect.py
</code></pre>

<p>I need to run a script in the <code>jobs</code> folder.</p>

<p>The script is named <code>plant_trans.py</code> and contains <code>import scripts</code> at the top.</p>

<p>When I run it, I get an error at this line:</p>

<pre><code>with scripts.connect.get_connection(DB_NAME) as td_con:
</code></pre>

<p>Error:</p>

<pre><code>with scripts.connect.get_connection(DB_NAME) as td_con:
AttributeError: module 'scripts' has no attribute 'connect'
</code></pre>

<p>My guess is that the reload is not working when I run the <code>plant_trans.py</code> file. Sometimes I did get the "reload modules" notice and it worked, but I cannot force it to reload the modules whenever I want. Any workaround?</p>
<p>I had to use <code>PYTHONPATH</code> and set it to the <code>Parent Folder</code> level.</p>

<p>In Spyder, the option is available at <code>Tools -&gt; PYTHONPATH Manager</code>:</p>

<p><a href="https://i.stack.imgur.com/0RMaQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0RMaQ.png" alt="PYTHONPATH Spyder" /></a></p>

<p>Click on <code>+ Add path</code> and paste the <code>Parent Folder</code> path there. Also, click on <code>Synchronize..</code> afterwards.</p>

<p><a href="https://i.stack.imgur.com/wjjNL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wjjNL.png" alt="enter image description here" /></a></p>
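<p>For what it's worth, a script-level alternative (a sketch; replace the path with your actual <code>Parent Folder</code> location) is to extend <code>sys.path</code> at the top of <code>plant_trans.py</code>, which avoids the IDE setting entirely:</p>

<pre><code>import sys
sys.path.insert(0, '/path/to/Parent Folder')  # hypothetical location

from scripts import connect  # now resolves Parent Folder/scripts/connect.py
</code></pre>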
python|python-3.x
0
1,901,928
64,324,141
How to apply CNN algorithm in python for non image dataset
<p>I have a dataset in the below format:</p>

<p>feature1, feature2, feature3, ....... feature8000, decision. The dataset size is (900*8000) and every column's value is either 0 or 1 (binary). This is basically an android malware dataset of permissions, and I need to apply a CNN to this dataset using TensorFlow. I applied the code below to the cifar dataset, but I can't work out how to use the same algorithm for this dataset, as there are no images. What do I need to change in the code, and what should the values of activation, input_shape, Conv2D etc. be?</p>

<pre><code>import tensorflow as tf

from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
import keras
keras.__version__

(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()

train_images, test_images = train_images / 255.0, test_images / 255.0

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))

plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)

print(test_acc)
</code></pre>
<p>So you have 900 samples in the dataset, each of them having 8000 features. You could reshape the 1-D tensor of 8000 values to, let's say, &quot;an image&quot; of shape (80, 100). I'm not sure if CNN is the best type of neural network for these values. You'll see.<br /> Also, 900 samples might not be enough to train a neural network properly. It depends on the complexity of the problem. You'll need a shallow architecture, otherwise, it's going to overfit.</p>
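<p>A rough sketch of that reshaping idea (the (80, 100) shape, the layer sizes and the assumption that the <code>decision</code> column is a 0/1 label are all choices of mine, not tested values):</p>

<pre><code>from tensorflow.keras import layers, models

# X: (900, 8000) array of 0/1 permission flags, y: the 0/1 decision column
X = X.reshape(-1, 80, 100, 1)  # treat each 8000-feature row as an 80x100 image

model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(80, 100, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),  # binary decision
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=10, validation_split=0.2)
</code></pre>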
python|tensorflow|keras|deep-learning|conv-neural-network
0
1,901,929
55,633,536
Get indices of slices with at least one element obeying some condition
<p>I have an ndarray <code>A</code> of shape (n, a, b) I want a Boolean ndarray <code>X</code> of shape (a, b) where</p> <p><code>X[i,j]=any(A[:, i, j] &lt; 0)</code></p> <p>How to achieve this?</p>
<p>I would use an intermediate matrix and the <code>sum(axis)</code> method:</p> <pre><code>np.random.seed(24) # example matrix filled either with 0 or -1: A = np.random.randint(2, size=(3, 2, 2)) - 1 # condition test: X_elementwise = A &lt; 0 # Check whether the conditions are fullfilled at least once: X = X_elementwise.sum(axis=0) &gt;= 1 </code></pre> <p>Values for A and X:</p> <pre><code>A = array([[[-1, 0], [-1, 0]], [[ 0, 0], [ 0, -1]], [[ 0, 0], [-1, 0]]]) X = array([[ True, False], [ True, True]]) </code></pre>
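<p>As a side note, the whole thing can also be written in a single step with <code>any()</code>, which skips the intermediate sum:</p>

<pre><code>X = (A &lt; 0).any(axis=0)   # equivalent to np.any(A &lt; 0, axis=0)
</code></pre>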
numpy|multidimensional-array
1
1,901,930
73,419,112
Creating specific Json with python dictionaries
<p>First off, I'm still a python newbie, so be patient with me. I'm having trouble creating this specific structure in python 3.8 on windows 8, based on randomly generated words. <br/>The randomly generated sentences are: <br/>sent=&quot;Lorem Ipsum. has been long time. It has. popularised.&quot; <br/>I have to generate this json structure:</p>
<pre><code> {'datas': [{'sentence 1': 'Lorem Ipsum',
 'elements 1': [{'word1': 'Lorem','word2':'Ipsum'}],
 'sentence 2': 'has been long time',
 'elements 2': [{'word 1': 'has','word 2':'been','word 3':'long','word 4': 'time'}],
 'phrase 3': ' It has',
 'elements 3': [{'word 1': 'It', 'word 2': 'has'}],
 'phrase 4': 'popularised',
 'elements 4': [{'word 1': 'popularised'}]}]}
</code></pre>
<p>I've created this code:</p>
<pre><code>import json

sent=&quot;Lorem Ipsum.has been long time. It has. popularised.&quot;

def jsoncreator(strr):
    myJSON = {}
    myJSON[&quot;datas&quot;]=[]
    elements={}
    tmpel={}
    tmp=strr.split(&quot;.&quot;)
    for el,idx in zip(tmp,range(1,len(tmp))):
        elements['sentence '+str(idx)]=el
        elements['elements'+str(idx)] =[]
        tmpliste=el.split()
        for el1,idx2 in zip(tmpliste,range(1,len(tmpliste))):
            tmpel['word'+str(idx2)]=el1
        elements['elements'+str(idx)].append(tmpel)
    myJSON['datas'].append(elements)
    print(myJSON)

jsoncreator(sent)
</code></pre>
<p>which currently gives me this result: <br/></p>
<pre><code>{'datas': [{'phrase 1': 'Lorem Ipsum', 'elements1': [{'word1': 'It', 'word2': 'been', '3': 'long'}], 'phrase 2': 'has been long time', 'elements2': [{'le mma1': 'It', 'lemma2': 'been', 'lemma3': 'long'}, {'lemma1': 'It', 'lemma2': 'be en', 'lemma3': 'long'}, {'lemma1': 'It', 'lemma2': 'been', 'lemma3': 'long'}], ' phrase 3': ' It has', 'elements3': [{'lemma1': 'It', 'lemma2': 'been', 'lemma3': 'long'}], 'phrase 4': ' popularised', 'elements4': []}]}
</code></pre>
<p>Can someone please help me find the error? I'm banging my head against the wall and don't understand this!</p>
<p>I'd do it with the help of <code>enumerate()</code> (it simplifies the code):</p>
<pre class="lang-py prettyprint-override"><code>sent = &quot;Lorem Ipsum.has been long time. It has. popularised.&quot;

i, data = 1, {}
for sentence in map(str.strip, sent.split(&quot;.&quot;)):
    if sentence == &quot;&quot;:
        continue
    data[f&quot;sentence {i}&quot;] = sentence
    data[f&quot;elements {i}&quot;] = [
        {f&quot;word{j}&quot;: word for j, word in enumerate(sentence.split(), 1)}
    ]
    i += 1

print({&quot;datas&quot;: [data]})
</code></pre>
<p>Prints:</p>
<pre class="lang-py prettyprint-override"><code>{
    &quot;datas&quot;: [
        {
            &quot;sentence 1&quot;: &quot;Lorem Ipsum&quot;,
            &quot;elements 1&quot;: [{&quot;word1&quot;: &quot;Lorem&quot;, &quot;word2&quot;: &quot;Ipsum&quot;}],
            &quot;sentence 2&quot;: &quot;has been long time&quot;,
            &quot;elements 2&quot;: [
                {
                    &quot;word1&quot;: &quot;has&quot;,
                    &quot;word2&quot;: &quot;been&quot;,
                    &quot;word3&quot;: &quot;long&quot;,
                    &quot;word4&quot;: &quot;time&quot;,
                }
            ],
            &quot;sentence 3&quot;: &quot;It has&quot;,
            &quot;elements 3&quot;: [{&quot;word1&quot;: &quot;It&quot;, &quot;word2&quot;: &quot;has&quot;}],
            &quot;sentence 4&quot;: &quot;popularised&quot;,
            &quot;elements 4&quot;: [{&quot;word1&quot;: &quot;popularised&quot;}],
        }
    ]
}
</code></pre>
python|json|dictionary
1
1,901,931
73,460,299
Select first row with a range (every 10 minutes)
<p>I have a dataframe like this:</p> <pre><code>df = pd.DataFrame({&quot;DateTime&quot;:[&quot;2020-04-02 06:06:22&quot;, &quot;2020-04-02 06:12:22&quot;, &quot;2020-04-02 06:14:39&quot;, &quot;2020-04-02 06:16:56&quot;, &quot;2020-04-02 06:20:34&quot;, &quot;2020-04-02 06:35:44&quot;], &quot;Data&quot;:[23, 31, 10, 23, 56, 81]}) # column DateTime type must be datetime64[ns] df[&quot;DateTime&quot;] = df[&quot;DateTime&quot;].astype(&quot;datetime64[ns]&quot;) df Out[4]: DateTime Data 0 2020-04-02 06:06:22 23 1 2020-04-02 06:12:22 31 2 2020-04-02 06:14:39 10 3 2020-04-02 06:16:56 23 4 2020-04-02 06:20:34 56 5 2020-04-02 06:35:44 81 </code></pre> <p>I would like to select rows after every 10 min. So my dataframe should be like:</p> <pre><code> DateTime Data 0 2020-04-02 06:06:22 23 3 2020-04-02 06:16:56 23 5 2020-04-02 06:35:44 81 </code></pre> <p>This solution <a href="https://stackoverflow.com/questions/73450906/how-to-drop-rows-based-on-datetime-every-15-min">How to drop rows based on datetime (every 15 min)?</a> drops rows every 15 min but always looking at the exactly row below, so it deletes rows that I don't want. And actually I would like to select rows after a specific time range.</p> <p>Anyone could help me?</p>
<p>This looks like a job for <a href="https://pandas.pydata.org/docs/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>merge_asof</code></a>:</p> <pre><code># set up indexer DataFrame df2 = pd.DataFrame({'idx': pd.date_range(df['DateTime'].min(), df['DateTime'].max(), freq='10min') }) # get first value for each slice of 10 minutes out = (pd.merge_asof(df2, df, left_on='idx', right_on='DateTime', direction='forward') #.drop(columns='idx') # uncomment to remove idx ) </code></pre> <p>output:</p> <pre><code> idx DateTime Data 0 2020-04-02 06:06:22 2020-04-02 06:06:22 23 1 2020-04-02 06:16:22 2020-04-02 06:16:56 23 2 2020-04-02 06:26:22 2020-04-02 06:35:44 81 </code></pre> <p>output with <code>.drop(columns='idx')</code>:</p> <pre><code> DateTime Data 0 2020-04-02 06:06:22 23 1 2020-04-02 06:16:56 23 2 2020-04-02 06:35:44 81 </code></pre>
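<p>If you prefer to avoid the helper DataFrame, a sketch of an equivalent (assuming <code>df</code> is sorted by <code>DateTime</code>) is to bucket each row by the number of full 10-minute windows elapsed since the first timestamp, then keep the first row per bucket:</p>

<pre><code># integer window number since the first timestamp (600 s = 10 min)
bucket = (df['DateTime'] - df['DateTime'].min()).dt.total_seconds() // 600
out = df.groupby(bucket).first().reset_index(drop=True)
</code></pre>

<p>On the sample data this gives the same three rows; the difference is that empty windows are simply skipped rather than carried forward.</p>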
python|pandas|datetime
2
1,901,932
64,072,730
Flask how to return just a variable for jinja without rendering a new page?
<p>I'm trying to update the variable &quot;error&quot; to the html based off whatever the error is. Currently the variable passes through, but I have to render a new template which refreshes the inputs the user put into the form, making them start the form all over again. How can I pass a variable through without rendering a new page? This is my code. Basically, it gets user input when they submit the form from html inputs and checks the existing SQLite database if the values exist in there already. If they do, then it assigns the error variable with the string of whatever the error is. At the bottom if at least one value exists in the database then <code>return render_template(register.html, error=error)</code> passes to update error in the html. I just want the error element to change or somehow preserve user input when rendering a new template. How do I do this?</p> <pre><code>@app.route(&quot;/register&quot;, methods=[&quot;GET&quot;,&quot;POST&quot;]) def register(): if request.method == &quot;GET&quot;: return render_template(&quot;register.html&quot;) else: username = request.form.get(&quot;username&quot;) email = request.form.get(&quot;email&quot;) password = request.form.get(&quot;password&quot;) password2 = request.form.get(&quot;password2&quot;) if password != password2: error = &quot;Passwords do not match&quot; return render_template(&quot;register.html&quot;, error=error) userCheck = c.execute(&quot;SELECT username FROM logins WHERE username = (?)&quot;, (username,)).fetchone() emailCheck = c.execute(&quot;SELECT email FROM logins WHERE email = (?)&quot;, (email,)).fetchone() passCheck = c.execute(&quot;SELECT password FROM logins WHERE password = (?)&quot;, (password,)).fetchone() if userCheck is None and emailCheck is None and passCheck is None: c.execute(&quot;INSERT INTO logins VALUES (null,?,?,?)&quot;, (username, email, password)) conn.commit() return redirect(&quot;/&quot;) if userCheck is not None: error = &quot;Username already taken&quot; elif emailCheck is not None: error = &quot;This email is already registered&quot; elif passCheck is not None: error = &quot;Password already taken&quot; return render_template(&quot;register.html&quot;, error=error) </code></pre>
<p>Add the username and email variables to the render_template.</p> <pre><code>@app.route(&quot;/register&quot;, methods=[&quot;GET&quot;,&quot;POST&quot;]) def register(): if request.method == &quot;GET&quot;: return render_template(&quot;register.html&quot;,username=&quot;&quot;, email=&quot;&quot;) else: username = request.form.get(&quot;username&quot;) email = request.form.get(&quot;email&quot;) password = request.form.get(&quot;password&quot;) password2 = request.form.get(&quot;password2&quot;) if password != password2: error = &quot;Passwords do not match&quot; return render_template(&quot;register.html&quot;, error=error, username=username, email=email) userCheck = c.execute(&quot;SELECT username FROM logins WHERE username = (?)&quot;, (username,)).fetchone() emailCheck = c.execute(&quot;SELECT email FROM logins WHERE email = (?)&quot;, (email,)).fetchone() passCheck = c.execute(&quot;SELECT password FROM logins WHERE password = (?)&quot;, (password,)).fetchone() if userCheck is None and emailCheck is None and passCheck is None: c.execute(&quot;INSERT INTO logins VALUES (null,?,?,?)&quot;, (username, email, password)) conn.commit() return redirect(&quot;/&quot;) if userCheck is not None: error = &quot;Username already taken&quot; elif emailCheck is not None: error = &quot;This email is already registered&quot; elif passCheck is not None: error = &quot;Password already taken&quot; return render_template(&quot;register.html&quot;, error=error, username=username, email=email) </code></pre> <p>Then update your html (and similar for email).</p> <pre><code>&lt;input type=&quot;text&quot; value=&quot;{{ username }}&quot;&gt; </code></pre>
python|flask|jinja2
0
1,901,933
64,076,975
Getting text value of a HTML tag through Selenium Web Automation in Python?
<p>I am making a reddit bot that will look for certain attributes in comments, use selenium to visit the information website, and use <code>driver.find_element_by...</code> to get the value inside that tag, but it is not working.</p>
<p>When I use <code>driver.find_element_by_class_name()</code>, this is the data returned:</p>
<pre><code>&lt;selenium.webdriver.remote.webelement.WebElement (session=&quot;f454dcf92728b9db4de080a27a844bf7&quot;, element=&quot;514bd57d-99d7-4fce-a05d-3fa92f66ad49&quot;)&gt;
</code></pre>
<p>When I use <code>driver.find_elements_by_css_selector(&quot;.style-scope.ytd-video-renderer&quot;)</code>, this is returned:</p>
<pre><code>[
&lt;selenium.webdriver.remote.webelement.WebElement (session=&quot;43cb953cde81df270260bf769fe081a2&quot;, element=&quot;6b4ee3e2-5e6b-48e2-8ec8-9083bf15baea&quot;)&gt;,
&lt;selenium.webdriver.remote.webelement.WebElement (session=&quot;43cb953cde81df270260bf769fe081a2&quot;,
...
]
</code></pre>
<p>Suppose that this is what I'm trying to locate (the above code returned the above Selenium data for this tag):</p>
<pre><code>&lt;yt-formatted-string class=&quot;style-scope ytd-video-renderer&quot; aria-label=&quot;Sword Art Online: Alicization Lycoris Opening Full『ReoNa - Scar/let』 by Melodic Star 2 months ago 4 minutes, 18 seconds 837,676 views&quot;&gt;Sword Art Online: Alicization Lycoris Opening Full『ReoNa - Scar/let』&lt;/yt-formatted-string&gt;
</code></pre>
<p><strong>What I want</strong></p>
<p>I want <code>Sword Art Online: Alicization Lycoris Opening Full『ReoNa - Scar/let』</code> returned.</p>
<p>What could I do?</p>
<p>Use <strong><code>.text</code></strong>:</p> <pre><code>element = driver.find_element_by_xpath('//*[@id=&quot;container&quot;]/h1/yt-formatted-string') print(element.text) </code></pre>
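<p>One caveat, in case the element is scrolled out of view or hidden: <code>.text</code> only returns <em>rendered</em> text, so a common fallback is to read the DOM property instead:</p>

<pre><code>print(element.get_attribute('textContent'))  # works even for hidden elements
</code></pre>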
python|selenium|selenium-webdriver|webdriverwait|praw
3
1,901,934
62,585,065
Different Classes but nested classes are same called?
<p>As a training exercise I'm trying to scrape some data from a page, but I noticed that &quot;official-store-info info-property-code&quot; and &quot;official-store-info info-property-date&quot; both contain a child with the same &quot;info&quot; class, and when I use find(), it always gets the first info instead of the second.</p>

<p>How is this possible, and is there any trick to fix it?</p>

<p>Sorry for all the misunderstanding, I'm quite new, so please be aware of spaghetti code.</p>

<p>Here's the HTML:</p>

<pre><code>&lt;div class=&quot;card-description card-phone-description&quot;&gt;
    &lt;input class=&quot;profile-info-phone-chk&quot; id=&quot;displayMessageSuperior&quot; type=&quot;checkbox&quot;&gt;
    &lt;label for=&quot;displayMessageSuperior&quot; class=&quot;ch-btn ch-btn-skin contact-phone show-phone&quot;&gt;Ver teléfono&lt;/label&gt;
    &lt;span class=&quot;profile-info-phones profile-info-phones--multiple&quot;&gt;
        &lt;span class=&quot;profile-info-phone-value&quot;&gt;56956558885&lt;/span&gt;
        &lt;span class=&quot;profile-info-phone-value&quot;&gt;56956558885&lt;/span&gt;
    &lt;/span&gt;
    &lt;div class=&quot;official-store-info info-property-code&quot;&gt;
        &lt;p class=&quot;title&quot;&gt;Código&lt;/p&gt;
        &lt;p class=&quot;info&quot;&gt;5550478&lt;/p&gt;
    &lt;/div&gt;
    &lt;div class=&quot;official-store-info info-property-date&quot;&gt;
        &lt;p class=&quot;title&quot;&gt;Fecha de Publicación&lt;/p&gt;
        &lt;p class=&quot;info&quot;&gt;20-06-2020&lt;/p&gt;
    &lt;/div&gt;
&lt;/div&gt;
</code></pre>

<p>Here's what I tried:</p>

<pre><code>if(pageSoup.find('div', {'class':&quot;official-store-info info-property-code&quot;})):
    no4_text = pageSoup.find('p', {'class':&quot;info&quot;})
    no4 = no4_text.text.strip()
    print (no4)
else:
    no4 = &quot;N/A&quot;

if(pageSoup.find('div', {'class':&quot;official-store-info info-property-date&quot;})):
    no5_text = pageSoup.find('p', {'class':&quot;info&quot;})
    no5 = no5_text.text.strip()
    print(no5)
else:
    no5 = &quot;N/A&quot;
</code></pre>
<p>You can use <code>pageSoup.findAll()</code> instead of <code>pageSoup.find()</code>, which will return a list. Then you can choose the n-th element:</p>

<pre><code>pageSoup.findAll('p', {'class': &quot;info&quot;})[1]  # the second &quot;info&quot; paragraph, i.e. the date
</code></pre>
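<p>Alternatively, and probably closer to what the posted code intends, scope the second <code>find()</code> to its parent <code>div</code>, so each <code>info</code> is read from its own block (a sketch based on the HTML above):</p>

<pre><code>code_div = pageSoup.find('div', {'class': 'official-store-info info-property-code'})
no4 = code_div.find('p', {'class': 'info'}).text.strip() if code_div else 'N/A'

date_div = pageSoup.find('div', {'class': 'official-store-info info-property-date'})
no5 = date_div.find('p', {'class': 'info'}).text.strip() if date_div else 'N/A'
</code></pre>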
python|web-scraping|beautifulsoup
0
1,901,935
61,778,582
pre-process and load the NSL_KDD data set
<p>I am a newbie in python programming. I want to load the data according to the table in the article, but I don't know how to split the NSL_KDD dataset for categorical training and testing with the classes ('normal', 'dos', 'r2l', 'probe', 'u2r'). <a href="https://i.stack.imgur.com/VSfPy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VSfPy.png" alt="enter image description here"></a></p>

<p>I've reviewed a lot of code on GitHub for pre-processing the NSL_KDD data set into the five groups ('normal', 'dos', 'r2l', 'probe', 'u2r'), but I still haven't been able to find code that does it right. Can anyone help me? I really need help.</p>
<p>You can find this loading code in the following notebook, where the <a href="https://render.githubusercontent.com/view/ipynb?commit=03bab6212c3058d6c87f8b740550231d8d5d64e3&amp;enc_url=68747470733a2f2f7261772e67697468756275736572636f6e74656e742e636f6d2f7468696e6c696e6537322f6e736c2d6b64642f303362616236323132633330353864366338376638623734303535303233316438643564363465332f4e534c2d4b44442e6970796e62&amp;nwo=thinline72%2Fnsl-kdd&amp;path=NSL-KDD.ipynb&amp;repository_id=70263928&amp;repository_type=Repository#9.-Gaussian-Mixture-clustering-with-Random-Forest-Classifier" rel="nofollow noreferrer">PCA algorithm is used for visualization purposes. It's also used later as preprocessing for Gaussian Mixture clustering</a>.</p>

<pre><code>import os
import math
import itertools
import multiprocessing

import pandas
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

from time import time
from collections import OrderedDict

%matplotlib inline

gt0 = time()

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext, Row

conf = SparkConf()\
    .setMaster(f"local[{multiprocessing.cpu_count()}]")\
    .setAppName("PySpark NSL-KDD")\
    .setAll([("spark.driver.memory", "8g"), ("spark.default.parallelism", f"{multiprocessing.cpu_count()}")])

# Creating local SparkContext with specified SparkConf and creating SQLContext based on it
sc = SparkContext.getOrCreate(conf=conf)
sc.setLogLevel('INFO')
sqlContext = SQLContext(sc)

from pyspark.sql.types import *
from pyspark.sql.functions import udf, split, col
import pyspark.sql.functions as sql

train20_nsl_kdd_dataset_path = os.path.join("NSL_KDD_Dataset", "KDDTrain+_20Percent.txt")
train_nsl_kdd_dataset_path = os.path.join("NSL_KDD_Dataset", "KDDTrain+.txt")
test_nsl_kdd_dataset_path = os.path.join("NSL_KDD_Dataset", "KDDTest+.txt")

col_names = np.array(["duration","protocol_type","service","flag","src_bytes",
    "dst_bytes","land","wrong_fragment","urgent","hot","num_failed_logins",
    "logged_in","num_compromised","root_shell","su_attempted","num_root",
    "num_file_creations","num_shells","num_access_files","num_outbound_cmds",
    "is_host_login","is_guest_login","count","srv_count","serror_rate",
    "srv_serror_rate","rerror_rate","srv_rerror_rate","same_srv_rate",
    "diff_srv_rate","srv_diff_host_rate","dst_host_count","dst_host_srv_count",
    "dst_host_same_srv_rate","dst_host_diff_srv_rate","dst_host_same_src_port_rate",
    "dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate",
    "dst_host_rerror_rate","dst_host_srv_rerror_rate","labels"])

nominal_inx = [1, 2, 3]
binary_inx = [6, 11, 13, 14, 20, 21]
numeric_inx = list(set(range(41)).difference(nominal_inx).difference(binary_inx))

nominal_cols = col_names[nominal_inx].tolist()
binary_cols = col_names[binary_inx].tolist()
numeric_cols = col_names[numeric_inx].tolist()

# Function to load dataset and divide it into 8 partitions
def load_dataset(path):
    dataset_rdd = sc.textFile(path, 8).map(lambda line: line.split(','))
    dataset_df = (dataset_rdd.toDF(col_names.tolist()).select(
                    col('duration').cast(DoubleType()),
                    col('protocol_type').cast(StringType()),
                    col('service').cast(StringType()),
                    col('flag').cast(StringType()),
                    col('src_bytes').cast(DoubleType()),
                    col('dst_bytes').cast(DoubleType()),
                    col('land').cast(DoubleType()),
                    col('wrong_fragment').cast(DoubleType()),
                    col('urgent').cast(DoubleType()),
                    col('hot').cast(DoubleType()),
col('num_failed_logins').cast(DoubleType()), col('logged_in').cast(DoubleType()), col('num_compromised').cast(DoubleType()), col('root_shell').cast(DoubleType()), col('su_attempted').cast(DoubleType()), col('num_root').cast(DoubleType()), col('num_file_creations').cast(DoubleType()), col('num_shells').cast(DoubleType()), col('num_access_files').cast(DoubleType()), col('num_outbound_cmds').cast(DoubleType()), col('is_host_login').cast(DoubleType()), col('is_guest_login').cast(DoubleType()), col('count').cast(DoubleType()), col('srv_count').cast(DoubleType()), col('serror_rate').cast(DoubleType()), col('srv_serror_rate').cast(DoubleType()), col('rerror_rate').cast(DoubleType()), col('srv_rerror_rate').cast(DoubleType()), col('same_srv_rate').cast(DoubleType()), col('diff_srv_rate').cast(DoubleType()), col('srv_diff_host_rate').cast(DoubleType()), col('dst_host_count').cast(DoubleType()), col('dst_host_srv_count').cast(DoubleType()), col('dst_host_same_srv_rate').cast(DoubleType()), col('dst_host_diff_srv_rate').cast(DoubleType()), col('dst_host_same_src_port_rate').cast(DoubleType()), col('dst_host_srv_diff_host_rate').cast(DoubleType()), col('dst_host_serror_rate').cast(DoubleType()), col('dst_host_srv_serror_rate').cast(DoubleType()), col('dst_host_rerror_rate').cast(DoubleType()), col('dst_host_srv_rerror_rate').cast(DoubleType()), col('labels').cast(StringType()))) return dataset_df from pyspark.ml import Pipeline, Transformer from pyspark.ml.feature import StringIndexer from pyspark import keyword_only from pyspark.ml.param.shared import HasInputCol, HasOutputCol, Param # Dictionary that contains mapping of various attacks to the four main categories attack_dict = { 'normal': 'normal', 'back': 'DoS', 'land': 'DoS', 'neptune': 'DoS', 'pod': 'DoS', 'smurf': 'DoS', 'teardrop': 'DoS', 'mailbomb': 'DoS', 'apache2': 'DoS', 'processtable': 'DoS', 'udpstorm': 'DoS', 'ipsweep': 'Probe', 'nmap': 'Probe', 'portsweep': 'Probe', 'satan': 'Probe', 'mscan': 'Probe', 'saint': 'Probe', 'ftp_write': 'R2L', 'guess_passwd': 'R2L', 'imap': 'R2L', 'multihop': 'R2L', 'phf': 'R2L', 'spy': 'R2L', 'warezclient': 'R2L', 'warezmaster': 'R2L', 'sendmail': 'R2L', 'named': 'R2L', 'snmpgetattack': 'R2L', 'snmpguess': 'R2L', 'xlock': 'R2L', 'xsnoop': 'R2L', 'worm': 'R2L', 'buffer_overflow': 'U2R', 'loadmodule': 'U2R', 'perl': 'U2R', 'rootkit': 'U2R', 'httptunnel': 'U2R', 'ps': 'U2R', 'sqlattack': 'U2R', 'xterm': 'U2R' } attack_mapping_udf = udf(lambda v: attack_dict[v]) class Labels2Converter(Transformer): @keyword_only def __init__(self): super(Labels2Converter, self).__init__() def _transform(self, dataset): return dataset.withColumn('labels2', sql.regexp_replace(col('labels'), '^(?!normal).*$', 'attack')) class Labels5Converter(Transformer): @keyword_only def __init__(self): super(Labels5Converter, self).__init__() def _transform(self, dataset): return dataset.withColumn('labels5', attack_mapping_udf(col('labels'))) labels2_indexer = StringIndexer(inputCol="labels2", outputCol="labels2_index") labels5_indexer = StringIndexer(inputCol="labels5", outputCol="labels5_index") labels_mapping_pipeline = Pipeline(stages=[Labels2Converter(), Labels5Converter(), labels2_indexer, labels5_indexer]) labels2 = ['normal', 'attack'] labels5 = ['normal', 'DoS', 'Probe', 'R2L', 'U2R'] labels_col = 'labels2_index' # Loading train data t0 = time() train_df = load_dataset(train_nsl_kdd_dataset_path) # Fitting preparation pipeline labels_mapping_model = labels_mapping_pipeline.fit(train_df) # Transforming labels column and adding id 
train_df = labels_mapping_model.transform(train_df).withColumn('id', sql.monotonically_increasing_id())
train_df = train_df.cache()
print(f&quot;Number of examples in train set: {train_df.count()}&quot;)
print(f&quot;Time: {time() - t0:.2f}s&quot;)

# Loading test data
t0 = time()
test_df = load_dataset(test_nsl_kdd_dataset_path)

# Transforming labels column and adding id column
test_df = labels_mapping_model.transform(test_df).withColumn('id', sql.monotonically_increasing_id())
test_df = test_df.cache()
print(f&quot;Number of examples in test set: {test_df.count()}&quot;)
print(f&quot;Time: {time() - t0:.2f}s&quot;)
</code></pre>
python|tensorflow|machine-learning
0
1,901,936
67,430,786
output prints on the same line as hostname
<p>I wrote code using a for loop that prints output on the same line. However, after the code executes, the shell prompt (hostname) appears on the same line as the output. Is there any way to avoid this?</p>

<p><em>Code</em></p>

<pre><code>list = [&quot;h&quot;, &quot;e&quot;, &quot;l&quot;, &quot;l&quot;, &quot;o&quot;]
for x in list:
    print(x, end='')
</code></pre>

<p><em>Output</em></p>

<pre><code>hellohostname:~$
</code></pre>
<p>Put <code>print()</code> after your loop to add a newline to the output.</p>
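<p>That is, keeping the rest of the code unchanged:</p>

<pre><code>letters = [&quot;h&quot;, &quot;e&quot;, &quot;l&quot;, &quot;l&quot;, &quot;o&quot;]  # renamed so it doesn't shadow the built-in list
for x in letters:
    print(x, end='')
print()  # emits the trailing newline, so the prompt starts on its own line
</code></pre>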
python|hostname
0
1,901,937
70,223,081
Automating Excel using Python
<p>I've written out code that will concat multiple Excel sheets from a single file into a DataFrame. Then, by using a function, the DF will be split into mini DFs of 1mil rows each. Lastly, the mini DFs are converted into CSVs. How can I make this function automatically work out how many 1mil-row DFs to create? Here is my code so far:</p>

<pre><code>data_df = pd.concat(pd.read_excel('file location', sheet_name = None), ignore_index = True)

def automate_excel():
    df1 = pd.DataFrame(data_df[0:1000000])
    df2 = pd.DataFrame(data_df[1000001:2000000])
    df1.to_csv('data_df1', index = False)
    df2.to_csv('data_df2', index = False)

automate_excel()
</code></pre>

<p>This is a minimized example, as the current file is 5mil rows, but I can have some with up to 10mil rows, i.e. broken out into 10 CSV files.</p>
<pre><code>df = data_df             # the concatenated frame from the question
small_df_rows = 1000000  # rows per output CSV

def split_df(n, idx):
    # .loc is label-based; with the default RangeIndex (ignore_index=True
    # in the concat) n:n+small_df_rows-1 selects exactly one million rows
    small_df = df.loc[n:n+small_df_rows-1]
    small_df.to_csv(f'data_df{idx+1}.csv', index=False)

# one call per million-row slice, however many slices the frame needs
for idx, n in enumerate(range(0, len(df), small_df_rows)):
    split_df(n, idx)
</code></pre>
python|excel|pandas|dataframe|automation
1
1,901,938
56,809,083
VSCode fold docstrings Python MacOS
<p>I have tried using the command:</p> <p><code>&gt;Fold Level 2</code> and it is causing too much folding.</p> <p>And </p> <p><code>&gt;Fold Level 3</code> does not fold the methods' docstrings.</p> <p>My primary goal is to fold the docstrings and nothing else.</p> <pre><code>def test(a, b, c): """A lot of multiline docstrings here that dont get folded """ return ... </code></pre> <p>would turn into:</p> <pre><code>def test(a, b, c): return ... </code></pre> <p>Is there a way to achieve this?</p>
<p><a href="https://github.com/microsoft/vscode/issues/3422" rel="nofollow noreferrer">By default VS Code's code folding is indentation-based, unaware of the programming language used</a>. Aside from Brett's answer, one hack is to indent the docstring.</p>
<pre><code>def test(a, b, c):
    &quot;&quot;&quot;[short summary]

        [indented long summary
         parameters
         returns
         etc.]
    ^^^^
    &quot;&quot;&quot;
    return ...
</code></pre>
<p>This is clearly a bad idea for anything professional, but it works.</p>
<hr />
<p><a href="https://github.com/microsoft/vscode-python/issues/1847#issuecomment-808990258" rel="nofollow noreferrer">Towards the end of Brett's link</a> one can find a new comment that would allow the Python extension to fold docstrings properly. The comment contains instructions for MacOS. On my Ubuntu 20.04.1 LTS machine, I did the following instead:</p>
<ol>
<li>Run <code>which code</code>. I got <code>/usr/bin/code</code>, but that was linked to <code>/usr/share/code/bin/code</code>.</li>
<li>Copy the directory <code>/usr/share/code/resources/app/extensions/python</code> to <code>~/.vscode/extensions</code>.</li>
<li>Add the regex below to <code>~/.vscode/extensions/python/language-configuration.json</code>.</li>
</ol>
<pre><code>{
    &quot;folding&quot;: {
        &quot;offSide&quot;: true,
        &quot;markers&quot;: {
            &quot;start&quot;: &quot;^\\s*#\\s*region\\b|^\\ *\&quot;{3}(?!.*\&quot;{3}).+&quot;,
            &quot;end&quot;: &quot;^\\s*#\\s*endregion\\b|^\\ *\&quot;{3}$&quot;
        }
    }
}
</code></pre>
python|macos|visual-studio-code
2
1,901,939
42,687,168
How do I sort column by targeting a specific number within that cell?
<p>I would like to use Pandas Python to sort a specific column by date (more specifically the year). However, the year is buried within a bunch of other numbers. How do I just target the 2 digits that I need?</p> <p>In the example below, I want to sort this column by the numbers [16,14,15...] rather than considering all the numbers in that row.</p> <pre><code>3/18/16 11:46 6/19/14 14:58 7/27/15 14:22 8/3/15 12:59 2/20/13 12:33 9/27/16 12:08 7/27/15 14:22 </code></pre>
<p>Given a dataframe like this,</p> <pre><code> date 0 3/18/16 1 6/19/14 2 7/27/15 3 8/3/15 4 2/20/13 5 9/27/16 6 7/27/15 </code></pre> <p>You can convert the date column to datetime format and then sort.</p> <pre><code>df['date'] = pd.to_datetime(df['date']) df = df.sort_values(by = 'date') </code></pre> <p>The resulting dataframe </p> <pre><code> date 4 2013-02-20 1 2014-06-19 2 2015-07-27 6 2015-07-27 3 2015-08-03 0 2016-03-18 5 2016-09-27 </code></pre>
pandas
1
1,901,940
65,554,051
Install python package to python distribution of salome_meca on Ubuntu 18.04
<p>I am having problems with installing a python package (pandas) on Ubuntu 18.04 to a specific Python (3.6.5) distribution in Salome_meca located in: <code>/home/username/salome_meca/V2019.0.3_universal/prerequisites/Python-365/lib/python3.6/os.py</code></p> <p>if I run: <code>sudo python3.6 -m pip install --install-option=&quot;--prefix=/home/username/salome_meca/V2019.0.3_universal/prerequisites/Pandas-120&quot; pandas</code></p> <p>It raises an error: <code>Requirement already satisfied: pandas in /usr/lib/python3/dist-packages</code></p> <p>And I cannot import this module as the python (3.6.5) distribution in Salome_meca cannot find it, when I run the code in the Salome_meca invornment.</p>
<p>Try using the <code>-t</code> (target) switch, as seen <a href="https://stackoverflow.com/questions/2915471/install-a-python-package-into-a-different-directory-using-pip">here</a>:</p>

<pre><code>sudo python3.6 -m pip install -t /home/username/salome_meca/V2019.0.3_universal/prerequisites/Pandas-120 pandas
</code></pre>
python|pandas|ubuntu|pip
1
1,901,941
61,397,252
Jenkins Build not failing though coverage is below 80%
<p>I am trying to fail the Jenkins build if even one of the python test files has less than 80% coverage. Towards that, in Jenkins I'm using nosetests to run test coverage on 2 python test files. It prints the results below. Though one of them has 78% coverage, the build passes. I would want the build to fail in this case. I have added the Cobertura plugin with the post-build options: Fail builds if no reports, Fail unhealthy builds, Fail unstable builds. I also set the threshold to 80,0,0 for Methods, Packages, Conditionals, Classes &amp; Files.</p>

<p>I also tried a run where the total goes below 80, but the build still passes.</p>

<pre><code>+ nosetests --with-xunit --with-coverage --cover-erase --cover-package=.

Name                        Stmts   Miss  **Cover**
test_sample_script.py           5      0   **100%**
test_sample_script1_80.py       9      2    **78%**
TOTAL                          14      2    **86%**

Ran 2 tests in 0.110s

OK
+ python3 -m coverage xml
[Cobertura] Publishing Cobertura coverage report...
[Cobertura] Publishing Cobertura coverage results...
[Cobertura] Cobertura coverage report found.
Finished: SUCCESS
</code></pre>
<p>As mentioned here : <a href="https://github.com/jenkinsci/cobertura-plugin/issues/111#issuecomment-580886792" rel="nofollow noreferrer">https://github.com/jenkinsci/cobertura-plugin/issues/111#issuecomment-580886792</a></p> <pre><code>With lineCoverageTargets: '90.0, 80.1, 50': Report health as 100% if line coverage &gt; 90% Report health as 0% if line coverage &lt; 80.1% Mark build as unstable if line coverage &lt; 50% </code></pre>
python|jenkins|jenkins-plugins|cobertura|coverage.py
0
1,901,942
58,066,652
Decorator backoff.on_predicate not waiting as expected
<p>I'm checking the constant interval between calls and found that, in this infinite loop, the time between consecutive calls is not 5 seconds: it varies randomly, though it is always less than 5 sec. I don't understand why.</p>

<pre><code>from datetime import datetime
from backoff import on_predicate, constant

@on_predicate(constant, interval=5)
def fnc(i):
    print('%s %d' % (datetime.now().strftime("%H:%M:%S:%f"),i), flush=True)
    return i

for i in range(7):
    fnc(i)
</code></pre>

<p>Output:</p>

<pre><code>17:48:48:348775 0
17:48:50:898752 0
17:48:52:686353 0
17:48:53:037900 0
17:48:57:264762 0
17:48:58:348803 0
</code></pre>
<p>The <code>backoff</code> library uses a jitter function to randomize the interval. It's normally what you want when doing exponential backoff or similar, but might be surprising when using the <code>constant</code> wait generator. To disable jitter, specify <code>jitter=None</code>:</p> <pre><code>@on_predicate(constant, interval=5, jitter=None) def fnc(i): print('%s %d' % (datetime.now().strftime("%H:%M:%S:%f"),i), flush=True) return i </code></pre>
python-3.x|exponential-backoff
4
1,901,943
53,979,977
Running a script with arguments from within another python script
<p>In Jupyter, this can be achieved with the <code>%run</code> line magic. Like this:</p> <pre><code>%run -i somescript.py -f -b </code></pre> <p>But that magic does not work from within a python script file. I tried this:</p> <pre><code>os.system("python3 -i somescript.py -f -b") </code></pre> <p>A further complication is that I want to use a variable, like this:</p> <pre><code>%run -i somescript.py -f $LINE -b </code></pre>
<p>Have you tried using subprocess?</p>

<p>It is probably not the best answer, but you can call a shell command from the python script, which in turn calls another python script:</p>

<pre><code>import subprocess
subprocess.check_call(["echo", "Hello world!"])
</code></pre>
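<p>Applied to the question's script, a sketch (where <code>line</code> stands in for whatever value <code>$LINE</code> held in the notebook) would be:</p>

<pre><code>import subprocess
import sys

line = 'some value'  # placeholder for the notebook's $LINE variable
subprocess.check_call([sys.executable, 'somescript.py', '-f', str(line), '-b'])
</code></pre>

<p>Note this starts a separate interpreter, so unlike <code>%run -i</code> it will not share variables with the calling script.</p>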
python
0
1,901,944
54,248,701
How to run TFLite model on the RaspberryPi
<p>I trained the SSD_InceptionV2_coco model on my PC with a GPU, on a custom image set. It works great on my PC, so I moved it to my Pi, where it runs OK but super slow: 0.7 FPS :( So I read about TFLite and used the script that comes in the Object_detection folder, called "export_tflite_ssd_graph.py". It created a new .pb file, but when I run it with the script that works with a regular frozen file, I get the following:</p>

<blockquote>
  <p>Traceback (most recent call last):   File "light_A.I_CT.py", line 81, in
  od_graph_def.ParseFromString(serialized_graph)   File "/home/pi/.local/lib/python3.5/site-packages/google/protobuf/message.py", line 185, in ParseFromString
  self.MergeFromString(serialized)   File "/home/pi/.local/lib/python3.5/site-packages/google/protobuf/internal/python_message.py", line 1083, in MergeFromString
  if self._InternalParse(serialized, 0, length) != length:   File "/home/pi/.local/lib/python3.5/site-packages/google/protobuf/internal/python_message.py", line 1120, in InternalParse
  pos = field_decoder(buffer, new_pos, end, self, field_dict)   File "/home/pi/.local/lib/python3.5/site-packages/google/protobuf/internal/decoder.py", line 610, in DecodeRepeatedField
  raise _DecodeError('Truncated message.') google.protobuf.message.DecodeError: Truncated message.</p>
</blockquote>

<p>The code I am using is the following:</p>

<pre><code># Load the Tensorflow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

    sess = tf.Session(graph=detection_graph)
</code></pre>

<p>It's pretty basic and taken from the examples, but I don't know if I need to do something else, as all the TFLite samples are for iOS or Android.</p>
<p>You cannot run a TFLite model with regular Tensorflow code; you need to build and use TFLite instead. You may want to see <a href="https://medium.com/@haraldfernengel/compiling-tensorflow-lite-for-a-raspberry-pi-786b1b98e646" rel="nofollow noreferrer">this</a> as an example.</p>
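<p>For reference, a bare-bones sketch of driving a converted model from Python with the TFLite interpreter (the model file name and the meaning of the outputs depend on your exported model, so treat them as placeholders):</p>

<pre><code>import numpy as np
import tensorflow as tf  # on the Pi, the lighter tflite_runtime package can be used instead

interpreter = tf.lite.Interpreter(model_path='detect.tflite')  # hypothetical file name
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# the input must match the model's expected shape/dtype
frame = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], frame)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])
</code></pre>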
python|tensorflow
3
1,901,945
57,039,537
Python Web Scraping - Navigating to next page link and obtaining data
<p>I am trying to navigate to links and extract data (the data is an href download link). This data should be added as a new field beside the previous fields of the first page (from where I got the links), but I am struggling with how to do that.</p>

<p>First, I've created a parse method that extracts all the links of the first page and adds them to a field named "links". These links redirect to a page that contains a download button, so I need the real link of the download button. What I did here is create a for loop over the previous links and call <code>yield response.follow</code>, but it didn't go well.</p>

<pre><code>import scrapy

class thirdallo(scrapy.Spider):
    name = "thirdallo"
    start_urls = [
        'https://www.alloschool.com/course/alriadhiat-alaol-ibtdaii',
    ]

    def parse(self, response):
        yield {
            'path': response.css('ol.breadcrumb li a::text').extract(),
            'links': response.css('#top .default .er').xpath('@href').extract()
        }
        hrefs=response.css('#top .default .er').xpath('@href').extract()
        for i in hrefs:
            yield response.follow(i, callback=self.parse,meta={'finalLink' :response.css('a.btn.btn-primary').xpath('@href)').extract() })
</code></pre>
<p>In the <code>@href</code> you are trying to scrape out, it seems that you have some <code>.rar</code> links, that can't be parsed with the designated function.</p> <p>Find my code below, with <code>requests</code> and <code>lxml</code> libraries: </p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import requests &gt;&gt;&gt; from lxml import html &gt;&gt;&gt; s = requests.Session() &gt;&gt;&gt; resp = s.get('https://www.alloschool.com/course/alriadhiat-alaol-ibtdaii') &gt;&gt;&gt; doc = html.fromstring(resp.text) &gt;&gt;&gt; doc.xpath("//*[@id='top']//*//*[@class='default']//*//*[@class='er']/@href") ['https://www.alloschool.com/assets/documents/course-342/jthathat-alftra-1-aldora-1.rar', 'https://www.alloschool.com/assets/documents/course-342/jthathat-alftra-2-aldora-1.rar', 'https://www.alloschool.com/assets/documents/course-342/jthathat-alftra-3-aldora-2.rar', 'https://www.alloschool.com/assets/documents/course-342/jdadat-alftra-4-aldora-2.rar', 'https://www.alloschool.com/element/44905', 'https://www.alloschool.com/element/43081', 'https://www.alloschool.com/element/43082', 'https://www.alloschool.com/element/43083', 'https://www.alloschool.com/element/43084', 'https://www.alloschool.com/element/43085', 'https://www.alloschool.com/element/43086', 'https://www.alloschool.com/element/43087', 'https://www.alloschool.com/element/43088', 'https://www.alloschool.com/element/43080', 'https://www.alloschool.com/element/43089', 'https://www.alloschool.com/element/43090', 'https://www.alloschool.com/element/43091', 'https://www.alloschool.com/element/43092', 'https://www.alloschool.com/element/43093', 'https://www.alloschool.com/element/43094', 'https://www.alloschool.com/element/43095', 'https://www.alloschool.com/element/43096', 'https://www.alloschool.com/element/43097', 'https://www.alloschool.com/element/43098', 'https://www.alloschool.com/element/43099', 'https://www.alloschool.com/element/43100', 'https://www.alloschool.com/element/43101', 'https://www.alloschool.com/element/43102', 'https://www.alloschool.com/element/43103', 'https://www.alloschool.com/element/43104', 'https://www.alloschool.com/element/43105', 'https://www.alloschool.com/element/43106', 'https://www.alloschool.com/element/43107', 'https://www.alloschool.com/element/43108', 'https://www.alloschool.com/element/43109', 'https://www.alloschool.com/element/43110', 'https://www.alloschool.com/element/43111', 'https://www.alloschool.com/element/43112', 'https://www.alloschool.com/element/43113'] </code></pre> <p>In your code, try this:</p> <pre class="lang-py prettyprint-override"><code>for i in hrefs: if '.rar' not in i: yield response.follow(i, callback=self.parse,meta={'finalLink' :response.css('a.btn.btn-primary').xpath('@href)').extract() }) </code></pre>
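<p>If you would rather stay inside Scrapy, the usual pattern (a sketch that keeps the question's selectors) is to carry the first-page fields through <code>meta</code> and fill in the download link in a second callback:</p>

<pre class="lang-py prettyprint-override"><code>def parse(self, response):
    item = {
        'path': response.css('ol.breadcrumb li a::text').extract(),
        'links': response.css('#top .default .er').xpath('@href').extract(),
    }
    for href in item['links']:
        if '.rar' in href:  # already a direct download link
            yield dict(item, finalLink=href)
        else:
            yield response.follow(href, callback=self.parse_element,
                                  meta={'item': item})

def parse_element(self, response):
    item = dict(response.meta['item'])
    item['finalLink'] = response.css('a.btn.btn-primary::attr(href)').extract_first()
    yield item
</code></pre>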
python|web-scraping
0
1,901,946
36,643,618
Scoring consistency within dataset
<p>Suppose I am given a set of structured data. The data is known to be problematic, and I need to somehow "score" them on consistency. For example, I have the data as shown below:</p> <pre><code>fieldA | fieldB | fieldC -------+--------+------- foo | bar | baz fooo | bar | baz foo | bar | lorem .. | .. | .. lorem | ipsum | dolor lorem | upsum | dolor lorem | ipsum | baz </code></pre> <p>So assume the first row is considered the correct entry because there are relatively more data in that combination compared to the records in second and third row. In the second row, the value for <code>fieldA</code> should be <code>foo</code> (inconsistent due to misspelling). Then in the third row, the value of <code>fieldC</code> should be <code>baz</code> as other entries in the dataset with similar values for <code>fieldA</code> (<code>foo</code>) and <code>fieldB</code> (<code>bar</code>) suggest.</p> <p>Also, in other part of the dataset, there's another combination that is relatively more common (<code>lorem</code>, <code>ipsum</code>, <code>dolor</code>). So the problem in the following records are the same as the one mentioned before, just that the value combination is different.</p> <p>I initially dumped everything to a SQL database, and use statements with <code>GROUP BY</code> to check consistency of fields values. So there will be 1 query for each field I want to check for consistency, and for each record.</p> <pre><code>SELECT fieldA, count(fieldA) FROM cache WHERE fieldB = 'bar' and fieldC = 'baz' GROUP BY fieldA </code></pre> <p>Then I could check if the value of <code>fieldA</code> of a record is consistent with the rest by referring the record to the object below (processed result of the previous SQL query).</p> <pre><code>{'foo': {'consistency': 0.99, 'count': 99, 'total': 100} 'fooo': {'consistency': 0.01, 'count': 1, 'total': 100}} </code></pre> <p>However it was very slow (dataset has about 2.2million records, and I am checking 4 fields, so making about 9mil queries), and would take half a day to complete. Then I replaced SQL storage to elasticsearch, and the processing time shrunk to about 5 hours, can it be made somehow faster?</p> <p>Also just out of curiosity, am I re-inventing a wheel here? Is there an existing tool for this? Currently it is implemented in Python3, with elasticsearch.</p>
<p>I just read your question and found it quite interesting. I did something similar using <a href="http://www.nltk.org/" rel="nofollow noreferrer">nltk</a> (the Python Natural Language Toolkit). Anyway, in this case I think you don't need the sophisticated <a href="https://stackoverflow.com/questions/1471153/string-similarity-metrics-in-python">string comparison algorithms</a>.</p> <p>So I tried an approach using the Python <a href="https://docs.python.org/3/library/difflib.html" rel="nofollow noreferrer">difflib</a>. The title sounds promising: difflib — <strong>Helpers for computing deltas</strong></p> <p>The <strong>difflib.SequenceMatcher</strong> class documentation says:</p> <p><em>This is a flexible class for comparing pairs of sequences of any type, so long as the sequence elements are hashable.</em></p> <p>By the way, I think that if you want to save time you could easily hold and process 2.000.000 3-tuples of (relatively short) strings in memory (see test runs and memory usage below).</p> <p>So I wrote a <a href="https://gist.github.com/pythononwheels/37f5570affe643b626358ba45799b764" rel="nofollow noreferrer">demo app</a> that produces 2.000.000 (you can vary that) 3-tuples of randomly, slightly shuffled strings. The shuffled strings are based on, and compared with, a default pattern like yours: ['foofoo', 'bar', 'lorem']. It then compares them using difflib.SequenceMatcher. All in memory.</p> <p><strong>Here is the compare code:</strong></p> <pre><code>def compare(intuple, pattern_list):
    """
    compare two strings with difflib
    intuple: in this case an n-tuple of strings
    pattern_list: a given pattern list. n-tuple and list must be of the same length.
    return a dict (Ordered) with the tuple and the score
    """
    d = collections.OrderedDict()
    d["tuple"] = intuple
    #d["pattern"] = pattern_list
    scorelist = []
    for counter in range(0, len(pattern_list)):
        score = difflib.SequenceMatcher(None, intuple[counter].lower(), pattern_list[counter].lower()).ratio()
        scorelist.append(score)
    d["score"] = scorelist
    return d
</code></pre> <p><strong>Here are the runtime and memory usage results:</strong></p> <p>2000 3-tuples: compare time: 417 ms = 0,417 sec; Mem usage: 594 KiB</p> <p>200.000 3-tuples: compare time: 5360 ms = 5,3 sec; Mem usage: 58 MiB</p> <p>2.000.000 3-tuples: compare time: 462241 ms = 462 sec; Mem usage: 580 MiB</p> <p>So it scales linearly in time and memory usage. And it (only) needs 462 seconds to compare 2.000.000 3-tuples of strings.</p> <p>The result looks like this (example for 2.000.000 rows):</p> <pre><code>[ TIMIMG ] build function took 53304.028034 ms
[ TIMIMG ] compare_all function took 462241.254807 ms
[ INFO ] num rows: 2000000
pattern: ['foofoo', 'bar', 'lorem']

[ SHOWING 10 random results ]
0: {"tuple": ["foofoo", "bar", "ewrem"], "score": [1.0, 1.0, 0.6]}
1: {"tuple": ["hoofoo", "kar", "lorem"], "score": [0.8333333333333334, 0.6666666666666666, 1.0]}
2: {"tuple": ["imofoo", "bar", "lorem"], "score": [0.6666666666666666, 1.0, 1.0]}
3: {"tuple": ["foofoo", "bar", "lorem"], "score": [1.0, 1.0, 1.0]}
....
</code></pre> <p>As you can see, you get a score based on the similarity of each string compared to the pattern. 1.0 means equal, and everything below gets worse the lower the score is.</p> <p>difflib is known not to be the fastest algorithm for this, but I think 7 minutes is quite an improvement over half a day or 5 hours.</p> <p>I hope this helps you (and is not a complete misunderstanding of the question), but it was a lot of fun to program this yesterday, and I learned a lot ;) For example, how to track memory usage using <a href="https://docs.python.org/3/library/tracemalloc.html" rel="nofollow noreferrer">tracemalloc</a>. Never did that before.</p> <p>I dropped the code on <a href="https://gist.github.com/pythononwheels/37f5570affe643b626358ba45799b764" rel="nofollow noreferrer">github (as a one-file gist)</a>.</p>
python|python-3.x|data-integrity|data-cleaning
1
1,901,947
54,622,228
pandas: how to groupby using a string
<p>I have a csv file with newline delimiters that I read into a pandas dataframe.</p> <pre><code>df = pd.read_csv("data.csv", delimiter="\n", header=None)
</code></pre> <p>This returns something like this:</p> <pre><code>marker1
10
20
30
marker2
40
50
marker3
60
70
80
90
100
.....
</code></pre> <p>I want to generate a dataframe as follows:</p> <pre><code>marker1 10
marker1 20
marker1 30
marker2 40
marker2 50
marker3 60
marker3 70
marker3 80
marker3 90
marker3 100
</code></pre> <p>I think this can be done with groupby, but I don't know how to proceed. How can I do this?</p> <p>Thanks</p> <p>Ranga</p>
<p>Using <code>str.contains</code>, assign the cells that contain a marker to a new column, then <code>ffill</code> it, and finally select the rows where the original column differs from the new one:</p> <pre><code>df['New'] = df.loc[df.col.str.contains('marker'), 'col']
df.New = df.New.ffill()
df = df.query('New != col')
df

    col      New
1    10  marker1
2    20  marker1
3    30  marker1
5    40  marker2
6    50  marker2
8    60  marker3
9    70  marker3
10   80  marker3
11   90  marker3
12  100  marker3
</code></pre>
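<p>Note: read with <code>header=None</code> as in the question, the single column is named <code>0</code>, so rename it first for the <code>df.col</code> accessor above to work; a minimal sketch:</p> <pre><code>df = df.rename(columns={0: 'col'})   # give the unnamed column the name used above
</code></pre>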
python|pandas|pandas-groupby
1
1,901,948
38,732,878
Pandas rolling apply to update the Series for next iteration?
<p>I have following Series s, I want to rolling apply a self-defined function "test", and immediately update the results to s so that the next iteration of "test" is based on the updated s. Let me walk you through my example:</p> <pre><code>s = pd.Series(range(5), index=pd.date_range('1/1/2000', periods=5)) s 2000-01-01 0 2000-01-02 1 2000-01-03 2 2000-01-04 3 2000-01-05 4 Freq: D, dtype: int32 </code></pre> <p>My self-defined function as below. This is just a simplified example of my real case. We can see during the first iteration, the returned variable 'update' is set to 100, and I want the s to be updated as [0, 1, 100, 3, 4,....]. For the next iteration, the arr.sum() will calculated based on (1+100+3) instead of (1+2+3). </p> <pre><code>def test(arr): print(arr) print(arr.sum()) if arr.sum()%3==0: print('True') update=100 else: update=arr[-1] return update s=s.rolling(window=3).apply(test) [ 0. 1. 2.] 3.0 True [ 1. 2. 3.] 6.0 True [ 2. 3. 4.] 9.0 True </code></pre> <p>Ideal output:</p> <pre><code>[ 0. 1. 2.] 3.0 True 'Update s with 100' [ 1. 100. 3.] 104 [ 100. 3. 4.] 107 </code></pre>
<p>I think <code>dataframe.rolling</code> only operates on the original dataframe: it provides a rolling transformation of the data as it was. If any data is modified inside one rolling window, the change will NOT be visible in the subsequent rolling windows.</p> <p>Actually, I am facing the same issue here. So far the alternative I am using is to manually loop through each rolling window and put the logic inside the loop. I know it is slow, but I have no idea if there is a better way to do this.</p> <p>BTW, the same question has been asked by other people: <a href="https://stackoverflow.com/questions/38509107/sliding-window-iterator-using-rolling-in-pandas">Sliding window iterator using rolling in pandas</a></p> <p><a href="https://stackoverflow.com/questions/36723003/why-doesnt-my-pandas-rolling-apply-work-when-the-series-contains-collection">Why doesn&#39;t my pandas rolling().apply() work when the series contains collections?</a></p>
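<p>For reference, a minimal sketch of that manual loop applied to the toy example from the question; it mutates <code>s</code> in place so each later window sees the earlier update (the <code>update = arr[-1]</code> branch is a no-op, so only the divisible-by-3 case writes):</p> <pre><code>import pandas as pd

s = pd.Series(range(5), index=pd.date_range('1/1/2000', periods=5))
window = 3

for end in range(window, len(s) + 1):
    arr = s.iloc[end - window:end]      # current window, sees prior updates
    if arr.sum() % 3 == 0:
        s.iloc[end - 1] = 100           # overwrite the window's last value

print(s.tolist())                       # [0, 1, 100, 3, 4]
</code></pre>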
python|pandas|apply
0
1,901,949
64,396,988
Allow reserved key words as methods in CPython
<p>It looks like Python has a list of reserved key words that cannot be used as method names. For instance,</p> <pre class="lang-py prettyprint-override"><code>class A: def finally(self): return 0 </code></pre> <p>returns a <code>SyntaxError: invalid syntax</code>. There is a way around it with <code>getattr/setattr</code>,</p> <pre class="lang-py prettyprint-override"><code> class A: pass setattr(A, 'finally', lambda self: 0) a = A() print(getattr(a, &quot;finally&quot;)()) </code></pre> <p>works fine. However, <code>a.finally()</code> still produces a <code>SyntaxError: invalid syntax</code>.</p> <p>Is there a way to avoid it? More specifically, are there some settings when compiling CPython 3.8 from sources (or a code patch) that would allow avoiding this error?</p> <p>Note that the same error happens in PyPy 3.</p> <p>The context is that in Pyodide, that builds CPython to WebAssembly, one can pass Javascripts objects to Python. And because of the present limitation, currently, Python code like <code>Promise.new(...).then(...).finally(...)</code> would error with a syntax error (cf <a href="https://github.com/iodide-project/pyodide/issues/769" rel="nofollow noreferrer">GH-pyodide#769</a>)</p>
<p>This is not possible in Python. The <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow noreferrer">standard way</a> around this is to append an underscore if the name would be reserved (e.g. <code>finally_</code>). Otherwise you can just choose a different name.</p> <p>In your specific case, what you have is JavaScript code, so it's not clear why you would want it to also be valid Python code. If it's not something you need to execute in Python-land then it should probably be in a string instead of raw code; there may be a way to tell the cross-compiler to emit JavaScript code provided as a string. If you do need to execute it in Python-land, then <code>getattr(promise, 'finally')</code> will retrieve it; it may be convenient to define a helper function for dealing with JavaScript promises in Python so you don't have to write <code>getattr</code> everywhere.</p>
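<p>A minimal sketch of such a helper (the name is hypothetical, and <code>promise</code> is assumed to be a JavaScript Promise proxy object):</p> <pre><code>def promise_finally(promise, callback):
    """Invoke the JS .finally() method, which can't be written literally in Python."""
    return getattr(promise, 'finally')(callback)

# usage: promise_finally(Promise.new(...).then(...), cleanup)
</code></pre>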
python|syntax-error|cpython|pyodide
3
1,901,950
64,462,347
Tensorflow 1.14 performance issue on rtx 3090
<p>I am running a model written with TensorFlow 1.x on 4x RTX 3090 and it takes a much longer time <strong>to start up the training</strong> than on 1x RTX 3090. However, once training starts, it finishes earlier on 4x than on 1x. I am using CUDA 11.1 and TensorFlow 1.14 on both GPU setups.</p> <p>Secondly, when I am using 1x RTX 2080 Ti with CUDA 10.2 and TensorFlow 1.14, it takes less time <strong>to start the training</strong> compared to 1x RTX 3090 with CUDA 11.1 and TensorFlow 1.14. Roughly, it takes 5 min on 1x RTX 2080 Ti, 30-35 minutes on 1x RTX 3090, and 1.5 hrs on 4x RTX 3090 <strong>to start the training</strong> for one of the datasets.</p> <p>I'll be grateful if anyone can help me resolve this issue.</p> <p>I am using Ubuntu 16.04, a Core™ i9-10980XE CPU, and 32 GB of RAM in both the 2080 Ti and 3090 machines.</p> <p>EDIT: I found out that TF takes a long start-up time on Ampere-architecture GPUs, according <a href="https://www.tensorflow.org/install/gpu" rel="noreferrer">to this</a>, but I'm still unclear whether this is really the cause; and if it <em>is</em>, does any solution exist for it?</p>
<p>T.F. 1.x does not have binaries for CUDA 11.1, so at the start it takes time to compile: the RTX 3090 kernels have to be built from PTX by the JIT compiler, which takes a long time. <br>A general mitigation is to increase the JIT cache size with <code>export CUDA_CACHE_MAXSIZE=2147483648</code> (here 2147483648 is the cache size in bytes; you can set it to any number, taking the memory limit and its usage by other processes into account). Refer to <a href="https://www.tensorflow.org/install/gpu" rel="noreferrer">https://www.tensorflow.org/install/gpu</a> for clarification. With this, the start-up time of subsequent runs will be small. But even then, the binaries produced at that start will not be compatible with CUDA 11.1.</p> <p>The best option is to migrate the code from T.F. 1.x to 2.x (2.4+) to make it run on the RTX 30XX series, or to try compiling T.F. 1.x from source with CUDA 11.1 (not sure about this).</p>
tensorflow|nvidia|stylegan
9
1,901,951
70,493,892
How to visualise route of vehicle in SUMO using TraCI/Python
<p>I'm using Python to compute the possible routes of a vehicle from one point to another on a map drawn in SUMO. I would now like to use TraCI to show these routes on the map by highlighting them. Is it possible via the API to select them and then use the <code>selection</code> visualisation to see the route in TraCI?</p>
<p>Yes, you can use <a href="https://sumo.dlr.de/pydoc/traci._gui.html#GuiDomain-toggleSelection" rel="nofollow noreferrer">traci.gui.toggleSelection</a>:</p> <pre><code>for e in route: traci.gui.toggleSelection(e, &quot;edge&quot;) </code></pre>
python|sumo|traffic-simulation
1
1,901,952
55,916,212
Numpy version of this particular list comprehension
<p>So a few days ago I needed a particular list comprehension in this thread: <a href="https://stackoverflow.com/questions/55877743/selecting-a-subset-of-integers-given-two-lists-of-end-points">Selecting a subset of integers given two lists of end points</a></p> <p>and I got a satisfying answer. Fast forward to now: I need to boost the performance, because this runs inside a loop and in each iteration the lengths of these end-point arrays are at least a few thousand.</p> <p>So my question is: is there any function in the numpy package that can get the job done a lot faster? I've looked into numpy's linspace, repeat, arange and the like, and couldn't find a breakthrough. And if there is a way to get the job done faster, can you guys show me the way?</p> <p>Thank you guys in advance.</p>
<p>If it is still of interest to you, you could get rid of one <code>for</code> loop and use <code>numpy.arange()</code> in combination with list comprehension and <code>numpy.hstack()</code> to get what is desired. Having said that we would still need at least one <code>for</code> loop to get this done (because neither <code>range</code> nor <code>arange</code> accept a sequence of endpoints)</p> <pre><code>t1 = [0,13,22] t2 = [4,14,25] np.hstack([np.arange(r[0], r[1]+1) for r in zip(t1, t2)]) # outputs array([ 0, 1, 2, 3, 4, 13, 14, 22, 23, 24, 25]) </code></pre> <p>However, I don't know how much more performant this is going to be for your specific case. </p>
python|list|performance|numpy|list-comprehension
2
1,901,953
49,840,564
Pandas.DataFrame.plot() problems with axes values
<p>This is the <code>.head()</code> of my DataFrame:</p> <pre><code>              Open     High     Low      Close    Volume      Market Cap
Date
Apr 09, 2018  7044.32  7178.11  6661.99  6770.73  4894060000  119516000000
Apr 08, 2018  6919.98  7111.56  6919.98  7023.52  3652500000  117392000000
Apr 07, 2018  6630.51  7050.54  6630.51  6911.09  3976610000  112467000000
Apr 06, 2018  6815.96  6857.49  6575.00  6636.32  3766810000  115601000000
Apr 05, 2018  6848.65  6933.82  6644.80  6811.47  5639320000  116142000000
</code></pre> <p>and here is the statement to plot 'Close' over the date index:</p> <pre><code>plot = df['Close'].plot()
</code></pre> <p>By default the x-axis is labeled 'Date' but no date values are shown. How can I adjust this?<br> The y-axis is not labeled, and the Close values are shown at intervals I also want to adjust.</p> <p>I've found no solution :( Thx for help as always!</p>
<p>Make the index a <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.html" rel="nofollow noreferrer"><code>DatetimeIndex</code></a>:</p> <pre><code>df.index = pd.to_datetime(df.index) df.Close.plot() </code></pre> <p><a href="https://i.stack.imgur.com/oIBfPm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oIBfPm.png" alt="enter image description here"></a></p>
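<p>If you also want explicit axis labels (the second part of the question), <code>plot()</code> returns a matplotlib axes you can adjust; a small sketch:</p> <pre><code>ax = df.Close.plot()
ax.set_xlabel('Date')
ax.set_ylabel('Close')
</code></pre>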
python|pandas|dataframe|matplotlib
1
1,901,954
50,160,889
How to use Tensorflow with a GTX 1050 mobile edition?
<p>I recently started programming with TensorFlow in Python. I wanted to improve the computing power using the GTX 1050 in my laptop, but I didn't succeed...</p> <p>After installing all the required libraries and software (CUDA 9.0, cuDNN for CUDA 9.0, the tensorflow and tensorflow-gpu packages...), I tried a basic example from the TensorFlow website:</p> <pre><code>import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))
</code></pre> <p>but this returns the following output:</p> <pre><code>2018-05-03 19:15:59.540038: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Device mapping: no known devices.
2018-05-03 19:15:59.543664: I T:\src\github\tensorflow\tensorflow\core\common_runtime\direct_session.cc:284] Device mapping:

MatMul: (MatMul): /job:localhost/replica:0/task:0/device:CPU:0
2018-05-03 19:15:59.546930: I T:\src\github\tensorflow\tensorflow\core\common_runtime\placer.cc:886] MatMul: (MatMul)/job:localhost/replica:0/task:0/device:CPU:0
b: (Const): /job:localhost/replica:0/task:0/device:CPU:0
2018-05-03 19:15:59.547254: I T:\src\github\tensorflow\tensorflow\core\common_runtime\placer.cc:886] b: (Const)/job:localhost/replica:0/task:0/device:CPU:0
a: (Const): /job:localhost/replica:0/task:0/device:CPU:0
2018-05-03 19:15:59.547597: I T:\src\github\tensorflow\tensorflow\core\common_runtime\placer.cc:886] a: (Const)/job:localhost/replica:0/task:0/device:CPU:0
[[22. 28.]
 [49. 64.]]
</code></pre> <p>This long message means that the only device available on my laptop is my CPU, but I have a GTX 1050. I tried adding this line at the beginning: <code>with tf.device("/device:gpu:0"):</code>. I'll spare you the full command-line output, but it returned this:</p> <pre><code>Operation was explicitly assigned to /device:gpu:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device.
</code></pre> <p>Does anyone have an idea of the origin of my issue? Or has anyone overcome this issue before and could help me?</p>
<p>In case it helps anyone: I deleted tensorflow-gpu and installed it again via pip (<code>pip install tensorflow-gpu==1.8</code>), explicitly specifying version 1.8, and it magically worked.</p>
python|tensorflow
0
1,901,955
50,008,700
Python ncurses addstr expected bytes or str, got int
<p>I am attempting to create an ASCII level editor in Python using <code>curses</code> but I'm having issues. I get <code>Traceback (most recent call last): File "lvleditor_curses.py", line 36, in &lt;module&gt; editor.newLine() File "lvleditor_curses.py", line 31, in newLine self.stdscr.addstr(self.level[0][0]) TypeError: expect bytes or str, got int</code> when using the following code.</p> <pre><code>import os, curses class Editor: def __init__(self): self.stdscr = curses.initscr() curses.noecho() curses.cbreak() self.stdscr.keypad(True) self.stdscr.addstr("test") self.stdscr.refresh() self.level = [] def newLine(self): line = self.stdscr.getstr() self.level += [list(line)] self.stdscr.addstr(self.level[0][0]) self.stdscr.refresh() editor = Editor() editor.newLine() </code></pre>
<p>I just had this issue. The <code>getstr</code> function returns a bytes object by default, so you must decode it into a string. Add:</p> <pre><code>line = line.decode()
</code></pre> <p>under:</p> <pre><code>line = self.stdscr.getstr()
</code></pre>
python|python-curses
0
1,901,956
66,497,069
Is it a good idea to store copies of documents from a mongodb collection in a dictionary list, and use this data instead of querying the database?
<p>I am currently developing a Python Discord bot that uses a Mongo database to store user data.</p> <p>As this data is continually changing, the database would be subjected to a massive number of queries to both read and update the data; so I'm trying to find ways to minimize client-server communication and reduce the bot's response times.</p> <p>In this sense, is it a good idea to create a copy of a Mongo collection as a dictionary list as soon as the script is run, and manipulate the data offline instead of continually querying the database?</p> <p>In particular, every time a value would be searched with the collection.find() method, it is instead extracted from the list. On the other hand, every time a value needs to be updated with collection.update(), both the list and the database are updated.</p> <p>I'll give an example to better explain what I'm trying to do. Let's say that my collection contains documents with the following structure:</p> <pre><code>{"user_id": id_of_the_user, "experience": current_amount_of_experience}
</code></pre> <p>and the experience value must be continually increased.</p> <p>Here's how I'm implementing it at the moment:</p> <pre><code>online_collection = db["collection_name"] # mongodb cursor

offline_collection = list(online_collection.find()) # a copy of the collection

def updateExperience(user_id):
    online_collection.update_one({"user_id":user_id}, {"$inc":{"experience":1}})
    mydocument = next((document for document in offline_documents if document["user_id"] == user_id))
    mydocument["experience"] += 1

def findExperience(user_id):
    mydocument = next((document for document in offline_documents if document["user_id"] == user_id))
    return mydocument["experience"]
</code></pre> <p>As you can see, the database is involved only in the update function.</p> <p>Is this a valid approach? For very large collections (millions of documents), does the next() lookup keep the same execution time or would there still be some slowdowns?</p> <p>Also, while not explicitly asked in the question, I'd be more than happy to get any advice on how to improve the performance of a Discord bot, as long as it doesn't include using a VPS or sharding, since I'm already using these options.</p>
<p>I don't really see why not - as long as you're aware of the following:</p> <ol> <li>You will need the system resources to load the entire database into memory</li> <li>It is your responsibility to keep the actual db and your local store in sync</li> <li>You need to be the only person/system updating the database</li> <li>Eventually this pattern will fail, e.g. when the db gets too large or more than one process needs to update it, so it isn't future-proof.</li> </ol> <p>In essence you're talking about a caching solution - so no need to reinvent the wheel - there are many such products/solutions you could use.</p> <p>It's probably not the traditional way of doing things, but if it works then why not?</p>
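<p>A minimal write-through sketch of that pattern (names are illustrative), assuming this process is the only writer; note that a dict keyed by <code>user_id</code> also gives O(1) lookups instead of scanning a list with <code>next()</code>:</p> <pre><code>from pymongo import MongoClient

collection = MongoClient()['mydb']['collection_name']
cache = {doc['user_id']: doc for doc in collection.find()}   # load once at startup

def update_experience(user_id):
    collection.update_one({'user_id': user_id}, {'$inc': {'experience': 1}})
    cache[user_id]['experience'] += 1        # keep the local copy in sync

def find_experience(user_id):
    return cache[user_id]['experience']      # no database round trip
</code></pre>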
python|mongodb|discord.py|pymongo
2
1,901,957
66,690,898
Beam - Filter out Records from Bigquery
<p>I am new to Apache Beam, and I trying to do three tasks</p> <ol> <li>Read Top 30 Items from the table</li> <li>Read Top 30 Stores from the table</li> <li>select required columns from the bigquery and apply Filter on the columns <strong>Items</strong> and <strong>Stores</strong>.</li> </ol> <p>I have this below code, to execute the pipeline</p> <pre><code>with beam.Pipeline(options=pipeline_args) as p: #read the dataset from bigquery query_top_30_items = ( p | 'GetTopItemNumbers' &gt;&gt; beam.io.ReadFromBigQuery( query=&quot;&quot;&quot;SELECT item_number, COUNT(item_number) AS freq_count FROM [bigquery-public-data.iowa_liquor_sales.sales] GROUP BY item_number ORDER BY freq_count DESC LIMIT 30&quot;&quot;&quot; ) | 'ReadItemNumbers' &gt;&gt; beam.Map(lambda elem: elem['item_number']) | 'ItemNumberAsList' &gt;&gt; beam.combiners.ToList() ) query_top_30_stores = ( p | 'GetTopStores' &gt;&gt; beam.io.ReadFromBigQuery( query = &quot;&quot;&quot;SELECT store_number, COUNT(store_number) AS store_count FROM [bigquery-public-data.iowa_liquor_sales.sales] GROUP BY store_number ORDER BY store_count DESC LIMIT 30&quot;&quot;&quot; ) | 'ReadStoreValues' &gt;&gt; beam.Map(lambda elem:elem['store_number']) | 'StoreValuesAsList' &gt;&gt; beam.combiners.ToList() ) query_whole_table = ( (query_top_30_items, query_top_30_stores) |'ReadTable' &gt;&gt; beam.io.ReadFromBigQuery( query=&quot;&quot;&quot;SELECT item_number, store_number, bottles_sold, state_bottle_retail FROM [bigquery-public-data.iowa_liquor_sales.sales]&quot;&quot;&quot;) | 'FilterByItems' &gt;&gt; beam.Filter(lambda row:row['item_number'] in query_top_30_items) | 'FilterByStore' &gt;&gt; beam.Filter(lambda row:row['store_number'] in query_top_30_stores) ) </code></pre> <p>I have attached Traceback for reference. 
How Can I solve this error?</p> <blockquote> <p>temp_location = pcoll.pipeline.options.view_as( Traceback (most recent call last): File &quot;run.py&quot;, line 113, in run() File &quot;run.py&quot;, line 100, in run | 'FilterByStore' &gt;&gt; beam.Filter(lambda row:row['store_number'] in query_top_30_stores) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/apache_beam/transforms/ptransform.py&quot;, line 1058, in <strong>ror</strong> return self.transform.<strong>ror</strong>(pvalueish, self.label) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/apache_beam/transforms/ptransform.py&quot;, line 573, in <strong>ror</strong> result = p.apply(self, pvalueish, label) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/apache_beam/pipeline.py&quot;, line 646, in apply return self.apply(transform, pvalueish) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/apache_beam/pipeline.py&quot;, line 689, in apply pvalueish_result = self.runner.apply(transform, pvalueish, self._options) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/apache_beam/runners/runner.py&quot;, line 188, in apply return m(transform, input, options) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/apache_beam/runners/runner.py&quot;, line 218, in apply_PTransform return transform.expand(input) File &quot;/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/apache_beam/io/gcp/bigquery.py&quot;, line 1881, in expand temp_location = pcoll.pipeline.options.view_as( AttributeError: 'tuple' object has no attribute 'pipeline'</p> </blockquote> <p>Since I am new to Beam, the code is not that optimized. Please let me know If I can optimize this code further.</p> <p>Thanks for your time and Help!</p>
<p>Applying the filter across pipeline branches the way you do will not work. You have two options here:</p> <ol> <li>Apply the filter condition within the pipeline (a sketch of this follows below).</li> <li>Apply the filter condition in the BigQuery SQL itself.</li> </ol> <p>Feeding the two PCollections straight into <code>ReadFromBigQuery</code> is ambiguous: Beam cannot tell what that step should return to the calling transform, hence the <code>AttributeError</code>. Modify your code to apply the filter conditions in either of the two places highlighted above.</p>
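<p>For option 1, a possible sketch using Beam side inputs (not the only way): since the two top-30 collections in the question are built with <code>beam.combiners.ToList()</code>, each is a one-element PCollection holding a list, so <code>beam.pvalue.AsSingleton</code> can hand them to the filter:</p> <pre><code>query_whole_table = (
    p
    | 'ReadTable' &gt;&gt; beam.io.ReadFromBigQuery(
        query='''SELECT item_number, store_number, bottles_sold, state_bottle_retail
                 FROM [bigquery-public-data.iowa_liquor_sales.sales]''')
    | 'FilterByItemsAndStores' &gt;&gt; beam.Filter(
        lambda row, items, stores: (row['item_number'] in items
                                    and row['store_number'] in stores),
        items=beam.pvalue.AsSingleton(query_top_30_items),
        stores=beam.pvalue.AsSingleton(query_top_30_stores))
)
</code></pre>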
python-3.x|apache-beam|dataflow
2
1,901,958
63,968,492
Convert ogg audio to wav in python without ffmpeg
<p>I am getting an oga file from a JavaScript blob and I want to convert it to a PCM-compatible wav file in Python. The approach I am using is as follows:</p> <pre><code>from pydub import AudioSegment

AudioSegment.converter = r"C:/ffmpeg/bin/ffmpeg.exe"
AudioSegment.ffprobe = r"C:/ffmpeg/bin/ffprobe.exe"
sound = AudioSegment.from_file("file.oga")
sound.export("file.wav", format="wav")
</code></pre> <p>For this I have to download ffmpeg locally. Is there any way to convert the oga file to wav directly?</p> <p>This is how I am saving the file:</p> <pre><code>import base64

f = open('./file.oga', 'wb')
f.write(base64.b64decode(file))
f.close()
</code></pre>
<p>pydub is your friend! (Note that pydub itself shells out to ffmpeg for compressed formats such as ogg/oga, so ffmpeg still needs to be installed; this just keeps it out of your own code.)</p> <pre><code>import os
from pydub import AudioSegment

def ogg2wav(ofn):
    wfn = os.path.splitext(ofn)[0] + '.wav'  # handles .ogg as well as .oga
    x = AudioSegment.from_file(ofn)
    x.export(wfn, format='wav')  # maybe use original resolution to make smaller
</code></pre>
javascript|python|ffmpeg
0
1,901,959
63,938,088
My username checker keeps saying all names are available, despite if/else test
<p>I'm attempting to make a simple, small practice program for checking usernames using if-elif-else statements.</p> <pre><code>current_users = ['Enrique', 'Jose', 'Pablo', 'John', 'Jake']
new_users = ['Mike', 'Tom', 'Bowser', 'John', 'Howard', 'Ben']

'''for user in current_users:
    user = user.lower()
    print(user)

for user in new_users:
    user = user.lower()
    print(user)
'''

for user in new_users:
    if user.lower() == current_users:
        print(f"{user}: That username is taken.")
    else:
        print(f"{user}: That username is available.")
</code></pre> <p>I expect it to print the message "That username is taken" for a username that's on both the new_users list and the current_users list, but instead they all come out as available when I run it in the Sublime text editor.</p> <p>I know it's a logic error because no actual error message is given, but I can't quite see it yet.</p> <p>I've tried inverting the test by first checking whether a username from the new_users list is NOT in the current_users list.</p> <p>As you can see, I've tried changing all of the usernames to lowercase to see if it would help, but it doesn't.</p>
<p>When you look for a value in a list, use the <code>in</code> operator. Your original test, <code>user.lower() == current_users</code>, compares a string with the whole list, which is always <code>False</code> - that's why every name looks available.</p> <pre><code>current_users = ['Enrique', 'Jose', 'Pablo', 'John', 'Jake']
new_users = ['Mike', 'Tom', 'Bowser', 'John', 'Howard', 'Ben']

for user in new_users:
    if user in current_users:
        print(f"{user}: That username is taken.")
    else:
        print(f"{user}: That username is available.")
</code></pre>
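<p>A variant that also restores the case-insensitive check you were attempting, using a set for fast membership tests:</p> <pre><code>current = {u.lower() for u in current_users}   # lowercase once, O(1) lookups

for user in new_users:
    if user.lower() in current:
        print(f"{user}: That username is taken.")
    else:
        print(f"{user}: That username is available.")
</code></pre>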
python|for-loop|if-statement|logic
1
1,901,960
53,200,980
Extracting subset of data efficiently from Pandas Dataframe
<p>I have 6 pandas dataframes (Patients, Test1, Test2, Test3, Test4, Test5) linked by an ID key. </p> <p>Each row in the Patients dataframe represents a patient containing a unique ID there are 200000+ patients/rows.</p> <p>Each row in the Test dataframes represents a test result on a day. The columns for the Test dataframes are ID, DATE, TEST_UNIT, TEST_RESULT. Each of the Test dataframes contains between 6,000,000 to 7,000,000 rows. </p> <p>I want to loop through all the IDs in the Patients dataframe and in each iteration use the ID to extract relevant test data from each of the 5 Test dataframes and do some processing on them. </p> <p>If I do</p> <pre><code>for i in range(len(Patients)): ind_id = Patients.ID.iloc[i] ind_test1 = Test1[Test1['ID'] == ind_id] ind_test2 = Test2[Test2['ID'] == ind_id] ind_test3 = Test3[Test3['ID'] == ind_id] ind_test4 = Test4[Test4['ID'] == ind_id] ind_test3 = Test5[Test5['ID'] == ind_id] </code></pre> <p>It takes about 3.6 seconds per iteration.</p> <p>When I tried to speed it up by using the Numpy interface. </p> <pre><code>Patients_v = Patients.values Test1_v = Test1.values Test2_v = Test2.values Test3_v = Test3.values Test4_v = Test4.values Test5_v = Test5.values for i in range(len(Patients_v)): ind_id = Patients_v[i, ID_idx] ind_test1 = Test1_v[Test1_v[:, 0] == ind_id] ind_test2 = Test2_v[Test2_v[:, 0] == ind_id] ind_test3 = Test3_v[Test3_v[:, 0] == ind_id] ind_test4 = Test4_v[Test4_v[:, 0] == ind_id] ind_test5 = Test5_v[Test5_v[:, 0] == ind_id] </code></pre> <p>It takes about 0.9 seconds per iteration. </p> <p>How can I speed this up? </p> <p>Thank you </p>
<p>It is unclear what output you desire. We can only assume that you want patient-specific dataframes.</p> <p>In any case, your current code has to hold all dataframes in memory. This is inefficient. Look at, for example, <a href="https://stackoverflow.com/questions/231767/what-does-the-yield-keyword-do/231855#231855">generator functions</a>:</p> <p><strong>1. Create a list of all IDs</strong></p> <pre><code>ALL_IDS = Patients.ID.tolist() # Assuming all you need is the ID
</code></pre> <p><strong>2. Create a master dataframe</strong></p> <pre><code>ALL_DFS = [Test1, Test2, Test3, Test4, Test5]
df_master = pd.concat(ALL_DFS)
</code></pre> <p><strong>3. Create a generator function that yields patient-specific dataframes for further processing</strong></p> <pre><code>def patient_slices(ALL_IDS):  # Generator
    for ID in ALL_IDS:
        df_slice = df_master[df_master.ID == ID]
        yield df_slice

df_slice = patient_slices(ALL_IDS)
for _ in range(len(ALL_IDS)):        # Call the generator n times
    single_patient = next(df_slice)  # Next patient on every call
    your_processing(single_patient)  # Do your magic
</code></pre>
python|pandas|performance|numpy-ndarray
0
1,901,961
53,249,641
Save image stream with timestamp using OpenCV Python
<p>I am using OpenCV with Python to save the same camera's images in jpg and png format, using a timestamp to save the images in sequence. My code sample is below. The problem is that it only saves one image each time I execute it. What would be the best solution to save the whole image stream with timestamps?</p> <pre><code>import numpy as np
import cv2
import time

camera = cv2.VideoCapture(0)
time = time.time() #timestamp

def saveJpgImage(frame):
    #process image
    img_name = "opencv_frame_{}.jpg".format(time)
    cv2.imwrite(img_name, frame)

def savePngImage(frame):
    #process image
    img_name = "opencv_frame_{}.png".format(time)
    cv2.imwrite(img_name, frame)

def main():
    while True:
        ret, frame = camera.read()
        cv2.imshow("Camera Images", frame)
        if not ret:
            break
        k = cv2.waitKey(1)
        if k%256 == 27:
            # ESC pressed
            print("Escape hit, closing...")
            break
        elif k%256 == 32:
            saveJpgImage(frame)
            savePngImage(frame)

if __name__ == '__main__':
    main()
</code></pre>
<p>You're only calling the save functions when a key is pressed, so you get a single image per key press. If you want to save the stream continuously, move the saving into the video loop itself (and don't forget to keep the escape check so you can stop it!).</p>
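<p>A minimal sketch of that idea: the key point is taking <code>time.time()</code> fresh inside the loop for every save, rather than once at import time as in your snippet, so each file gets a new name:</p> <pre><code>import time
import cv2

camera = cv2.VideoCapture(0)

while True:
    ret, frame = camera.read()
    if not ret:
        break
    cv2.imshow("Camera Images", frame)
    k = cv2.waitKey(1)
    if k % 256 == 27:            # ESC: quit
        break
    elif k % 256 == 32:          # SPACE: save with a fresh timestamp
        stamp = time.time()
        cv2.imwrite("opencv_frame_{}.jpg".format(stamp), frame)
        cv2.imwrite("opencv_frame_{}.png".format(stamp), frame)

camera.release()
cv2.destroyAllWindows()
</code></pre>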
python|python-2.7|opencv3.0
0
1,901,962
53,271,865
Anaconda prompt crashes as soon as I activate tensorflow env
<p>I have just installed Anaconda 3.7 in Windows 10. Then I have created a new env for tensorflow and installed it there. It got installed without any problem. Then I used the command conda install -c conda-forge keras to install Keras. While Keras installation was running, Anaconda Prompt crashed suddenly. I restarted it and I tried to activate my tensorflow env; but as soon as I try to activate it, Anaconda Prompt crashes!! Please take a look at my screenshot. How can I fix this? Thank you very much for your support. Ferari</p> <p><a href="https://i.stack.imgur.com/NMhLD.png" rel="noreferrer">Anaconda Prompt Crashes</a></p>
<p>The problem could be due to a version mismatch between tensorflow and tensorboard. When you run <em>conda install -c conda-forge keras</em> to install keras, the tensorflow and tensorboard versions get changed.</p> <p>I tried the following steps and it worked fine for me:</p> <ul> <li>conda create -n tf python=3.6</li> <li>activate tf</li> <li>conda install keras</li> </ul> <p>Installing keras will automatically install tensorflow.</p>
tensorflow|keras|anaconda
7
1,901,963
53,227,868
Python converting csv files to dataframes
<p>I have a large csv file containing data like:</p> <pre><code>2018-09, 100, A, 2018-10, 50, M, 2018-11, 69, H,....
</code></pre> <p>and so on (a continuous stream without separate rows).</p> <p>I want to convert it into a dataframe, which would look something like:</p> <pre><code>Col1     Col2  Col3
2018-09  100   A
2018-10  50    M
2018-11  69    H
</code></pre> <p>This is a simplified version of the actual data. Please advise on the best way to approach it.</p>
<p>One solution is to split your single row into chunks via the <code>csv</code> module and <a href="https://stackoverflow.com/a/312464/9209546">this algorithm</a>, then feed to <code>pd.DataFrame</code> constructor. Note your dataframe will be of dtype <code>object</code>, so you'll have to cast numeric series types explicitly afterwards.</p> <pre><code>from io import StringIO import pandas as pd import csv x = StringIO("""2018-09, 100, A, 2018-10, 50, M, 2018-11, 69, H""") # define chunking algorithm def chunks(L, n): """Yield successive n-sized chunks from l.""" for i in range(0, len(L), n): yield L[i:i + n] # replace x with open('file.csv', 'r') with x as fin: reader = csv.reader(fin, skipinitialspace=True) data = list(chunks(next(iter(reader)), 3)) # read dataframe df = pd.DataFrame(data) print(df) 0 1 2 0 2018-09 100 A 1 2018-10 50 M 2 2018-11 69 H </code></pre>
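<p>As a follow-up on the dtype note, a small sketch naming the columns and casting the numeric one explicitly:</p> <pre><code>df.columns = ['Col1', 'Col2', 'Col3']
df['Col2'] = pd.to_numeric(df['Col2'])
</code></pre>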
python|pandas|csv|dataframe
3
1,901,964
65,087,460
Tensorflow model.fit() reproducibility
<pre><code>import tensorflow as tf RANDOM_SEED_CONSTANT = 42 # FOR_REPRODUCIBILITY tf.random.set_seed(RANDOM_SEED_CONSTANT) # Prevent NHWC errors https://www.nuomiphp.com/eplan/en/50125.html from tensorflow.keras import backend as K K.set_image_data_format(&quot;channels_last&quot;) from tensorflow import keras from tensorflow.keras import datasets, layers, models (train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data() train_images, test_images = train_images / 255.0, test_images / 255.0 # Normalize pixel values to be between 0 and 1 # Create a simple CNN model = models.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.Flatten()) model.add(layers.Dense(64, activation='relu', kernel_initializer=tf.keras.initializers.HeNormal(seed=RANDOM_SEED_CONSTANT))) model.add(layers.Dense(10, kernel_initializer=tf.keras.initializers.HeNormal(seed=RANDOM_SEED_CONSTANT))) print(model.summary()) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.save_weights('myweights.h5') # Run1 history = model.fit(train_images, train_labels, epochs=1, shuffle=False, validation_data=(test_images, test_labels)) # Run2 model.load_weights('myweights.h5') history = model.fit(train_images, train_labels, epochs=1, shuffle=False, validation_data=(test_images, test_labels)) # Run3 model.load_weights('myweights.h5') history = model.fit(train_images, train_labels, epochs=1, shuffle=False, validation_data=(test_images, test_labels)) </code></pre> <p>The above 3 model.fit() calls gives me the following results:</p> <pre><code>1563/1563 [==============================] - 7s 4ms/step - loss: 1.4939 - accuracy: 0.4543 - val_loss: 1.2516 - val_accuracy: 0.5567 1563/1563 [==============================] - 6s 4ms/step - loss: 1.6071 - accuracy: 0.4092 - val_loss: 1.3857 - val_accuracy: 0.4951 1563/1563 [==============================] - 7s 4ms/step - loss: 1.5538 - accuracy: 0.4325 - val_loss: 1.3187 - val_accuracy: 0.5294 </code></pre> <p>What is the reason for this difference? I am trying to understand sources which might impede reproducing results from models. Apart from random seed, dense layers initialization, what else am I missing?</p> <pre><code></code></pre>
<p>The way you are testing the reproducibility is not correct. You need to close the program and rerun it to see if the results are the same. Otherwise, the run 2 depends on the events that happened during the run 1, and the run 3 depends on the events that happened during the run 1 and 2.</p> <p>The reason is that Tensorflow maintains an internal counter for random generation, as stated in the documentation of <a href="https://www.tensorflow.org/api_docs/python/tf/random/set_seed" rel="nofollow noreferrer"><code>tf.random.set_seed</code></a> (<strong>emphasis</strong> is mine) :</p> <blockquote> <pre class="lang-py prettyprint-override"><code>print(tf.random.uniform([1], seed=1)) # generates 'A1' print(tf.random.uniform([1], seed=1)) # generates 'A2' </code></pre> <p>The reason we get 'A2' instead 'A1' on the second call of tf.random.uniform above is because the same tf.random.uniform kernel (i.e. internal representation) is used by TensorFlow for all calls of it with the same arguments, and <strong>the kernel maintains an internal counter which is incremented every time it is executed, generating different results.</strong></p> </blockquote> <p>If I run only the first run of your program twice, closing the program between each run (in IPython in that case), I get:</p> <pre class="lang-py prettyprint-override"><code>In [1]: run program.py 1563/1563 [==============================] - 13s 8ms/step - loss: 1.4997 - accuracy: 0.4540 - val_loss: 1.2528 - val_accuracy: 0.5494 {'loss': [1.4996991157531738], 'accuracy': [0.4540199935436249], 'val_loss': [1.2527965307235718], 'val_accuracy': [0.5493999719619751]} In [2]: run program.py 1563/1563 [==============================] - 12s 8ms/step - loss: 1.4997 - accuracy: 0.4540 - val_loss: 1.2528 - val_accuracy: 0.5494 {'loss': [1.4996991157531738], 'accuracy': [0.4540199935436249], 'val_loss': [1.2527965307235718], 'val_accuracy': [0.5493999719619751]} </code></pre> <p>Minus the time taken to perform the computation, that can vary a bit depending on the load on the machine, <strong>the results are completely identical.</strong></p>
tensorflow|keras|deep-learning|tensorflow2.x
1
1,901,965
65,263,877
How to define binds for class objects in python?
<p>I'm trying to define a button binding for class objects that are created in a function, so I wrote the simple code below. When I press the "Line" button, it creates a new class instance, which I verify in the <code>__init__</code> method. However, when I press right click (Button-3), it gives an error: it is unable to reach the class instance that I created in the "create_line" function. How can I solve this problem? I'm also open to other ideas, like defining the bind function in the class, maybe.</p> <pre><code>from tkinter import *

class line_class():
    def __init__(self,line_no):
        self.line_number=line_no
        print(self.line_number)

    def settings_menu(self, event):
        print(self.line_number, ": line entered")

def create_line():
    A=line_class(my_canvas.create_line(200, 200, 100, 100, fill='red', width=5, capstyle=ROUND, joinstyle=ROUND))

root = Tk()
root.title('Moving objects')
root.resizable(width=False, height=False)
root.geometry('1200x600+200+50')
root.configure(bg='light green')

my_canvas = Canvas(root, bg='white', height=500, width=700)
my_canvas.pack()

btn_line = Button(root, text='Line', width=30, command=lambda: create_line())
btn_line.place(relx=0,rely=0.1)

root.bind("&lt;Button-3&gt;",A.settings_menu)

root.mainloop()
</code></pre>
<p>First when <code>root.bind(&quot;&lt;Button-3&gt;&quot;,A.settings_menu)</code> is executed, <code>A</code> is not created yet. Second, <code>A</code> is a local variable inside <code>create_line()</code> so it cannot be accessed outside the function.</p> <p>I would recommend to define the binding inside <code>line_class</code>, and use <code>my_canvas.tag_bind(...)</code> instead of <code>root.bind(...)</code>:</p> <pre class="lang-py prettyprint-override"><code>from tkinter import * class line_class(): def __init__(self,line_no): self.line_number=line_no print(self.line_number) my_canvas.tag_bind(line_no, &quot;&lt;Button-3&gt;&quot;, self.settings_menu) def settings_menu(self, event): print(self.line_number, &quot;: line entered&quot;) def create_line(): line_class(my_canvas.create_line(200, 200, 100, 100, fill='red', width=5, capstyle=ROUND, joinstyle=ROUND)) root = Tk() root.title('Moving objects') root.resizable(width=False, height=False) root.geometry('1200x600+200+50') root.configure(bg='light green') my_canvas = Canvas(root, bg='white', height=500, width=700) my_canvas.pack() btn_line = Button(root, text='Line', width=30, command=create_line) btn_line.place(relx=0,rely=0.1) root.mainloop() </code></pre>
python|function|class|tkinter|binding
0
1,901,966
68,613,912
web scraping trouble - Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER
<p>I tried to <strong>scrape</strong> a website with urllib and BeautifulSoup (Python 3.9) but I keep getting the same error: "Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER", with special characters as below:</p> <p>��T�w?.��m����%�%z��%�H=S��$S�YYyi�ABD�x�!%��f36��\�Y�j�46f����I��9��!D��������������������b7�3�8��JnH�t���mړBm���&lt;���,�zR�m��A�g��{�XF%��&amp;)�6zy��' �)a�Fo �����N舅,���~?w�w� �7z�Y6N������Q��ƣA��,p�8��/��W��q�$ ���#e�J7�#� 5�X�z�Ȥ�&amp;q��8 ��H&quot;����I0�����͂8ZY}J�m��c}&amp;5e��? &quot;/&gt;[�7X�?NF4r���[k��6�X?��VV��H�J$j�6h��e�C��]&lt;�V��z D ����&quot;d�nje��{���+YL��*�X?a���m�������MNn�+��1=b$�N�4p�0���/�h�'�?�,�[��V��$�D���Z��+�?�x�X�g����</p> <p>I read some topics about this problem but I couldn't find a solution for my case. Below is my code:</p> <pre><code>url = &quot;https://www.fnac.com&quot;
hdr = {&quot;User-Agent&quot;: &quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:89.0) Gecko/20100101 Firefox/89.0&quot;,
       &quot;Accept&quot;: &quot;*/*&quot;,
       &quot;Accept-Encoding&quot; : &quot;gzip, deflate, br&quot;,
       &quot;Accept-Language&quot;: &quot;fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3&quot;,
       &quot;Connection&quot; : &quot;keep-alive&quot;}

req = urllib.request.Request(url, headers=hdr)
page = urllib.request.urlopen(req)

if page.getcode() == 200:
    soup = BeautifulSoup(page, &quot;html.parser&quot;, from_encoding=&quot;utf-8&quot;)
    #divs = soup.findAll('div')
    #href = [i['href'] for i in soup.findAll('a', href=True)]
    print(soup)
else:
    print(&quot;failed!&quot;)
</code></pre> <p>I tried changing the encoding to ASCII or iso-8859-(1...9) but the problem is still the same.</p> <p>Thanks for your help :)</p>
<p>Remove <code>Accept-Encoding</code> from the HTTP headers: unlike <code>requests</code>, <code>urllib</code> does not transparently decompress the response, so advertising <code>gzip, deflate, br</code> makes the server send compressed bytes that BeautifulSoup then tries to read as text.</p> <pre class="lang-py prettyprint-override"><code>import urllib
from bs4 import BeautifulSoup

url = &quot;https://www.fnac.com&quot;
hdr = {
    &quot;User-Agent&quot;: &quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:89.0) Gecko/20100101 Firefox/89.0&quot;,
    &quot;Accept&quot;: &quot;*/*&quot;,
    # &quot;Accept-Encoding&quot;: &quot;gzip, deflate, br&quot;,
    &quot;Accept-Language&quot;: &quot;fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3&quot;,
    &quot;Connection&quot;: &quot;keep-alive&quot;,
}

req = urllib.request.Request(url, headers=hdr)
page = urllib.request.urlopen(req)

if page.getcode() == 200:
    soup = BeautifulSoup(page, &quot;html.parser&quot;, from_encoding=&quot;utf-8&quot;)
    # divs = soup.findAll('div')
    # href = [i['href'] for i in soup.findAll('a', href=True)]
    print(soup)
else:
    print(&quot;failed!&quot;)
</code></pre> <p>Prints:</p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt;

&lt;html class=&quot;no-js&quot; lang=&quot;fr-FR&quot;&gt;
&lt;head&gt;&lt;meta charset=&quot;utf-8&quot;/&gt;
&lt;!-- entry: inline-kameleoon --&gt;
...
</code></pre>
python-3.x|web-scraping
2
1,901,967
71,560,880
plotly: add box plot as subplot
<p>I'm attempting to create a visualization where a pie chart appears on top, and a box plot appears below. I'm using the plotly library.</p> <p>I tried using this code:</p> <pre><code>import plotly.express as px import plotly.graph_objects as go from plotly.subplots import make_subplots fig = make_subplots( rows=2, cols=1, specs=[[{'type':'pie'}], [{'type':'box'}]], ) # pie chart pie = go.Pie(values=[1, 2, 3, 4, 5], labels=['a', 'b', 'a', 'a', 'c'], sort=False) # box plot import numpy as np np.random.seed(1) y0 = [10, 1, 2, 3, 1, 5, 8, 2] y1 = [10, 1, 2, 3, 1, 5, 8, 2] box = go.Figure() box.add_trace(go.Box(y=y0)) box.add_trace(go.Box(y=y1)) # add pie chart and box plot to figure fig.add_trace(pie, row=1, col=1) fig.add_trace(box, row=2, col=1) fig.update_traces(textposition='inside', textinfo='percent+label') fig.show() </code></pre> <p>However, I'm encountering this error:</p> <pre><code>Invalid element(s) received for the 'data' property of Invalid elements include: [Figure({ 'data': [{'type': 'box', 'y': [10, 1, 2, 3, 1, 5, 8, 2]}, {'type': 'box', 'y': [10, 1, 2, 3, 1, 5, 8, 2]}], 'layout': {'template': '...'} })] </code></pre>
<p>There are two errors in your code</p> <ol> <li><code>box</code> is created as a figure. <code>add_trace()</code> adds a trace not a figure. Changed to loop through traces in <code>box</code> figure</li> <li><code>update_traces()</code> is updating attributes that don't exist in a <strong>Box</strong> trace. Changed to use <code>for_each_trace()</code> and update attribute that are valid for trace type</li> </ol> <h3>full code</h3> <pre><code>import plotly.express as px import plotly.graph_objects as go from plotly.subplots import make_subplots fig = make_subplots( rows=2, cols=1, specs=[[{&quot;type&quot;: &quot;pie&quot;}], [{&quot;type&quot;: &quot;box&quot;}]], ) # pie chart pie = go.Pie(values=[1, 2, 3, 4, 5], labels=[&quot;a&quot;, &quot;b&quot;, &quot;a&quot;, &quot;a&quot;, &quot;c&quot;], sort=False) # box plot import numpy as np np.random.seed(1) y0 = [10, 1, 2, 3, 1, 5, 8, 2] y1 = [10, 1, 2, 3, 1, 5, 8, 2] box = go.Figure() box.add_trace(go.Box(y=y0)) box.add_trace(go.Box(y=y1)) # add pie chart and box plot to figure fig.add_trace(pie, row=1, col=1) # fig.add_trace(box, row=2, col=1) for t in box.data: fig.add_trace(t, row=2, col=1) # fig.update_traces(textposition='inside', textinfo='percent+label') fig.for_each_trace( lambda t: t.update(textposition=&quot;inside&quot;, textinfo=&quot;percent+label&quot;) if isinstance(t, go.Pie) else t ) fig.show() </code></pre> <p><a href="https://i.stack.imgur.com/jnl3O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jnl3O.png" alt="enter image description here" /></a></p>
python|plotly|plotly-dash|plotly-python
0
1,901,968
71,505,199
How can I implement this counting sort pseudocode in python?
<p>I have been trying to implement this counting sort pseudocode in python but I haven't been successful. How can I fix it?</p> <pre><code>functionCountingSort(A,k)
    C ← newVector[k + 1]
    R ← newVector[Length(A)]
    pos ← 0
    for 0 ≤ j &lt; Length(A) do
        C[A[j]] ← C[A[j]] + 1
    end for
    for 0 ≤ i &lt; k+1 do
        for pos ≤ r &lt; pos + C[i] do
            R[r] ← i
        end for
        pos ← r
    end for
    return R
end function
</code></pre> <p><a href="https://i.stack.imgur.com/WmelR.png" rel="nofollow noreferrer">screenshot of the pseudocode</a></p> <p>My own attempt:</p> <pre><code>def countingSort(arr):
    size = len(arr)
    k = max(arr)
    k_1 = k + 1
    pos = 0

    # Count array
    C = [0] * k_1
    # Output array
    R = [0] * size

    for j in range(0, size):
        C[arr[j]] = C[arr[j]] + 1

    for i in range(0, k_1):
        for pos in range(r, pos + C[i])
            R[r[i]] = i
        pos = r

    return R

data = [4, 2, 2, 8, 3, 3, 1]
countingSort(data)
print("Sorted Array in Ascending Order: ")
print(data)
</code></pre>
<pre class="lang-py prettyprint-override"><code>for pos in range(r, pos + C[i]) R[r[i]] = i </code></pre> <p>seem to be</p> <pre class="lang-py prettyprint-override"><code>for r in range(pos, pos + C[i]) R[r] = i </code></pre> <p>if your pseudo-code is right.</p>
python|python-3.x|pseudocode
0
1,901,969
71,545,628
append values to the new columns in the CSV
<p>I have two CSVs: one is the Master-Data and the other is the Component-Data. Master-Data has two rows and two columns, whereas Component-Data has 5 rows and two columns. <a href="https://i.stack.imgur.com/lG7K4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lG7K4.png" alt="Master-Data" /></a></p> <p><a href="https://i.stack.imgur.com/sd4qI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sd4qI.png" alt="Component-Data" /></a></p> <p>I'm trying to find the cosine similarity between each of them after tokenization, stemming and lemmatization, and then append the similarity index to a new column. I'm unable to append the corresponding values to the column in the dataframe, which then needs to be converted to CSV.</p> <p>My approach:</p> <pre><code>import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer,WordNetLemmatizer
from collections import Counter
import pandas as pd

portStemmer=PorterStemmer()
wordNetLemmatizer = WordNetLemmatizer()
fields = ['Sentences']
cosineSimilarityList = []

def fetchLemmantizedWords():
    eliminatePunctuation = re.sub('[^a-zA-Z]', ' ',value)
    convertLowerCase = eliminatePunctuation.lower()
    tokenizeData = convertLowerCase.split()
    eliminateStopWords = [word for word in tokenizeData if not word in set(stopwords.words('english'))]
    stemWords= list(set([portStemmer.stem(value) for value in eliminateStopWords]))
    wordLemmatization = [wordNetLemmatizer.lemmatize(x) for x in stemWords]
    return wordLemmatization

def fetchCosine(eachMasterData,eachComponentData):
    masterDataValues = Counter(eachMasterData)
    componentDataValues = Counter(eachComponentData)
    bagOfWords = list(masterDataValues.keys() | componentDataValues.keys())
    masterDataVector = [masterDataValues.get(bagOfWords, 0) for bagOfWords in bagOfWords]
    componentDataVector = [componentDataValues.get(bagOfWords, 0) for bagOfWords in bagOfWords]
    masterDataLength = sum(contractElement*contractElement for contractElement in masterDataVector) ** 0.5
    componentDataLength = sum(questionElement*questionElement for questionElement in componentDataVector) ** 0.5
    dotProduct = sum(contractElement*questionElement for contractElement,questionElement in zip(masterDataVector, componentDataVector))
    cosine = int((dotProduct / (masterDataLength * componentDataLength))*100)
    return cosine

masterData = pd.read_csv('C:\\Similarity\\MasterData.csv', skipinitialspace=True)
componentData = pd.read_csv('C:\\Similarity\\ComponentData.csv', skipinitialspace=True)

for value in masterData['Sentences']:
    eachMasterData = fetchLemmantizedWords()
    for value in componentData['Sentences']:
        eachComponentData = fetchLemmantizedWords()
        cosineSimilarity = fetchCosine(eachMasterData,eachComponentData)
        cosineSimilarityList.append(cosineSimilarity)

for value in cosineSimilarityList:
    componentData = componentData.append(pd.DataFrame(cosineSimilarityList, columns=['Cosine Similarity']), ignore_index=True)
    #componentData['Cosine Similarity'] = value
</code></pre> <p>Expected output after converting the df to CSV: <a href="https://i.stack.imgur.com/n81y3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n81y3.png" alt="Expected Output" /></a></p> <p>I'm facing issues while appending the values to the dataframe. Please assist me with an approach for this. Thanks.</p>
<p>Here's what I came up with:</p> <h2>Sample set up</h2> <pre class="lang-py prettyprint-override"><code>csv_master_data = \ &quot;&quot;&quot; SI.No;Sentences 1;Emma is writing a letter. 2;We wake up early in the morning. &quot;&quot;&quot; csv_component_data = \ &quot;&quot;&quot; SI.No;Sentences 1;Emma is writing a letter. 2;We wake up early in the morning. 3;Did Emma Write a letter? 4;We sleep early at night. 5;Emma wrote a letter. &quot;&quot;&quot; import pandas as pd from io import StringIO df_md = pd.read_csv(StringIO(csv_master_data), delimiter=';') df_cd = pd.read_csv(StringIO(csv_component_data), delimiter=';') </code></pre> <p>We end up with 2 dataframes (showing <code>df_cd</code>):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: right;">SI.No</th> <th style="text-align: left;">Sentences</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: right;">1</td> <td style="text-align: left;">Emma is writing a letter.</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: right;">2</td> <td style="text-align: left;">We wake up early in the morning.</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: right;">3</td> <td style="text-align: left;">Did Emma Write a letter?</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: right;">4</td> <td style="text-align: left;">We sleep early at night.</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: right;">5</td> <td style="text-align: left;">Emma wrote a letter.</td> </tr> </tbody> </table> </div> <p>I replaced the 2 functions you used by the following dummy functions:</p> <pre class="lang-py prettyprint-override"><code>import random def fetchLemmantizedWords(words): return [random.randint(1,30) for x in words] def fetchCosine(lem_md, lem_cd): return 100 if len(lem_md) == len(lem_cd) else random.randint(0,100) </code></pre> <h2>Processing data</h2> <p>First, we apply the <code>fetchLemmantizedWords</code> function on each dataframe. 
The lowercasing, regex replace and split of the sentences are done before calling the function instead of inside it. Note that a plain string's <code>replace</code> treats the pattern literally, so the regex substitution is done with <code>re.sub</code>.</p> <p>By making the sentence lowercase first, we can simplify the regex to only consider lowercase letters.</p> <pre class="lang-py prettyprint-override"><code>import re

for df in (df_md, df_cd):
    df['lem'] = df.apply(lambda x: fetchLemmantizedWords(re.sub(r'[^a-z]', ' ',
                                                                x.Sentences.lower())
                                                         .split()),
                         result_type='reduce',
                         axis=1)
</code></pre> <p>Result for <code>df_cd</code>:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: right;">SI.No</th> <th style="text-align: left;">Sentences</th> <th style="text-align: left;">lem</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: right;">1</td> <td style="text-align: left;">Emma is writing a letter.</td> <td style="text-align: left;">[29, 5, 4, 9, 28]</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: right;">2</td> <td style="text-align: left;">We wake up early in the morning.</td> <td style="text-align: left;">[16, 8, 21, 14, 13, 4, 6]</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: right;">3</td> <td style="text-align: left;">Did Emma Write a letter?</td> <td style="text-align: left;">[30, 9, 23, 16, 5]</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: right;">4</td> <td style="text-align: left;">We sleep early at night.</td> <td style="text-align: left;">[8, 25, 24, 7, 3]</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: right;">5</td> <td style="text-align: left;">Emma wrote a letter.</td> <td style="text-align: left;">[30, 30, 15, 7]</td> </tr> </tbody> </table> </div> <p>Next, we use a cross-join to make a dataframe with all possible combinations of <code>md</code> and <code>cd</code> data.</p> <pre class="lang-py prettyprint-override"><code>df_merged = pd.merge(df_md[['SI.No', 'lem']], df_cd[['SI.No', 'lem']], how='cross', suffixes=('_md','_cd') ) </code></pre> <p><code>df_merged</code> contents:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: right;">SI.No_md</th> <th style="text-align: left;">lem_md</th> <th style="text-align: right;">SI.No_cd</th> <th style="text-align: left;">lem_cd</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: right;">1</td> <td style="text-align: left;">[14, 22, 9, 21, 4]</td> <td style="text-align: right;">1</td> <td style="text-align: left;">[3, 4, 8, 17, 2]</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: right;">1</td> <td style="text-align: left;">[14, 22, 9, 21, 4]</td> <td style="text-align: right;">2</td> <td style="text-align: left;">[29, 3, 10, 2, 19, 18, 21]</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: right;">1</td> <td style="text-align: left;">[14, 22, 9, 21, 4]</td> <td style="text-align: right;">3</td> <td style="text-align: left;">[20, 22, 29, 4, 3]</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: right;">1</td> <td style="text-align: left;">[14, 22, 9, 21, 4]</td> <td style="text-align: right;">4</td> <td style="text-align: left;">[17, 7, 1, 27, 19]</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: right;">1</td> <td style="text-align: left;">[14, 22, 9, 21, 4]</td> <td style="text-align: right;">5</td> <td style="text-align: left;">[17, 5, 3, 29]</td> </tr> <tr> <td style="text-align: right;">5</td> <td style="text-align: right;">2</td> <td style="text-align: left;">[12, 30, 10, 11, 7, 11, 8]</td> <td style="text-align: right;">1</td> <td style="text-align: left;">[3, 4, 8, 17, 2]</td> </tr> <tr> <td style="text-align: right;">6</td> <td style="text-align: right;">2</td> <td style="text-align: left;">[12, 30, 10, 11, 7, 11, 8]</td> <td style="text-align: right;">2</td> <td style="text-align: left;">[29, 3, 10, 2, 19, 18, 21]</td> </tr> <tr> <td style="text-align: right;">7</td> <td style="text-align: right;">2</td> <td style="text-align: left;">[12, 30, 10, 11, 7, 11, 8]</td> <td style="text-align: right;">3</td> <td style="text-align: left;">[20, 22, 29, 4, 3]</td> </tr> <tr> <td style="text-align: right;">8</td> <td style="text-align: right;">2</td> <td style="text-align: left;">[12, 30, 10, 11, 7, 11, 8]</td> <td style="text-align: right;">4</td> <td style="text-align: left;">[17, 7, 1, 27, 19]</td> </tr> <tr> <td style="text-align: right;">9</td> <td style="text-align: right;">2</td> <td style="text-align: left;">[12, 30, 10, 11, 7, 11, 8]</td> <td style="text-align: right;">5</td> <td style="text-align: left;">[17, 5, 3, 29]</td> </tr> </tbody> </table> </div> <p>Next, we calculate the <em>cosine</em> value:</p> <pre class="lang-py prettyprint-override"><code>df_merged['cosine'] = df_merged.apply(lambda x: fetchCosine(x.lem_md, x.lem_cd), axis=1) </code></pre> <p>In the last step, we pivot the data and merge the original <code>df_cd</code> with the calculated results:</p> <pre class="lang-py prettyprint-override"><code>pd.merge(df_cd.drop(columns='lem').set_index('SI.No'), df_merged.pivot_table(index='SI.No_cd', columns='SI.No_md').droplevel(0, axis=1), how='inner', left_index=True, right_index=True) </code></pre> <p>Result (again, these are dummy calculations):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">SI.No</th> <th style="text-align: left;">Sentences</th> <th style="text-align: left;">1</th> <th style="text-align: left;">2</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">Emma is writing a letter.</td> <td style="text-align: left;">100</td> <td style="text-align: left;">64</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: left;">We wake up early in the morning.</td> <td style="text-align: left;">63</td> <td style="text-align: left;">100</td> </tr> <tr> <td style="text-align: left;">3</td> <td style="text-align: left;">Did Emma Write a letter?</td> <td style="text-align: left;">100</td> <td style="text-align: left;">5</td> </tr> <tr> <td style="text-align: left;">4</td> <td style="text-align: left;">We sleep early at night.</td> <td style="text-align: left;">100</td> <td style="text-align: left;">17</td> </tr> <tr> <td style="text-align: left;">5</td> <td style="text-align: left;">Emma wrote a letter.</td> <td style="text-align: left;">35</td> <td style="text-align: left;">9</td> </tr> </tbody> </table> </div>
python|python-3.x|list|dataframe|csv
1
1,901,970
10,853,688
Tests not being converted by 2to3 in setup.py?
<p>I have a setup.py that needs to support both Python 2 and 3.</p> <p>The code currently works and is installable in Python 2.x.</p> <p>If I add the <code>use_2to3 = True</code> clause to my setup.py, then the module can be <em>installed</em> in Python 3, however, doing a:</p> <pre><code>python setup.py test </code></pre> <p>Causes a failure as one of the tests uses the StringIO class, and the import line goofs in Python 3 (it's currently <code>from StringIO import StringIO</code>, where in Python 3 it should be <code>from io import StringIO</code>).</p> <p>I thought though that once you add the use_2to3 keyword all tests (including unittests) were processed by 2to3 before being tested.</p> <p>What am I missing? In case it helps, the bulk of my setup.py looks like:</p> <pre><code>from setuptools import setup setup( name='myproject', version='1.0', description='My Cool project', classifiers = [ 'Programming Language :: Python', 'Programming Language :: Python :: 3', ], py_modules=['mymodule'], test_suite='test_mymodule', zip_safe=False, use_2to3 = True, ) </code></pre> <p>Edit: the reason I feel as though 2to3 isn't getting run on a <code>python setup.py test</code> is that it blows up &amp; the bottom of the stacktrace reads:</p> <pre><code>File "/home/aparkin/temp/mymodule/test_mymodule.py", line 18, in &lt;module&gt; from StringIO import StringIO </code></pre> <p>But if I ran 2to3 on test_mymodule.py, then that import line should've been reworked to:</p> <pre><code>from io import StringIO </code></pre> <p>And (at worst) the tests should just individually fail.</p>
<p>In order for distribute to pick your module up and run it through 2to3, it must be listed in py_modules. So change that to:</p> <pre><code>py_modules=['mymodule', 'test_mymodule'], </code></pre> <p>Unfortunately this has a side-effect of installing test_mymodule when you install the project, which you probably did not want. For cases like this I will generally convert the project into a package with a mymodule.tests sub-package. This way the tests can be "installable" without adding additional clutter.</p>
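<p>For illustration, a minimal sketch of that package layout (the names simply mirror the question's module; adjust to taste):</p> <pre><code>myproject/
    setup.py
    mymodule/
        __init__.py
        tests/
            __init__.py
            test_mymodule.py
</code></pre> <p>with the corresponding setup() arguments along these lines:</p> <pre><code>setup(
    # ... other arguments as before ...
    packages=['mymodule', 'mymodule.tests'],
    test_suite='mymodule.tests.test_mymodule',
    use_2to3=True,
)
</code></pre>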
python|unit-testing|python-3.x|setup.py|python-2to3
1
1,901,971
10,782,712
Python to insert quotes to column in CSV
<p>I have no knowledge of Python. What I want to be able to do is create a script that will edit a CSV file so that it wraps every field in column 3 in quotes. I haven't been able to find much help; is this quick and easy to do? Thanks.</p> <pre><code>column1,column2,column3 1111111,2222222,333333 </code></pre>
<p>This is a fairly crude solution, very specific to your request (assuming your source file is called "csvfile.csv" and is in C:\Temp).</p> <pre><code>import csv newrow = [] csvFileRead = open('c:/temp/csvfile.csv', 'rb') csvFileNew = open('c:/temp/csvfilenew.csv', 'wb') # Open the CSV csvReader = csv.reader(csvFileRead, delimiter = ',') # Append the rows to variable newrow for row in csvReader: newrow.append(row) # Add quotes around the third list item for row in newrow: row[2] = "'"+str(row[2])+"'" csvFileRead.close() # Create a new CSV file csvWriter = csv.writer(csvFileNew, delimiter = ',') # Append the csv with rows from newrow variable for row in newrow: csvWriter.writerow(row) csvFileNew.close() </code></pre> <p>There are MUCH more elegant ways of doing what you want, but I've tried to break it down into basic chunks to show how each bit works.</p>
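<p>For reference, here is roughly the same logic written for Python 3, where the csv module expects text-mode files opened with <code>newline=''</code> (a compact sketch, assuming the same file locations as above):</p> <pre><code>import csv

# Read, add quotes around the third field, and write row by row.
with open('c:/temp/csvfile.csv', newline='') as src, \
     open('c:/temp/csvfilenew.csv', 'w', newline='') as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        row[2] = "'" + row[2] + "'"
        writer.writerow(row)
</code></pre>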
python
2
1,901,972
4,978,835
is there an API for python to work with pressure-sensitive pen-tablets? (Mac OS, Linux)
<p>I want to write a cross-platform wxPython app, and I'm wondering if there is a single API to work with pen tablets on different platforms? I'm only interested in getting the pressure value and the eraser flag - but I couldn't find anything cross-platform for Python.</p> <p><strong>UPD.</strong> So far, I have found only a Windows-specific <a href="http://cgkit.sourceforge.net/doc2/wintab.html#" rel="nofollow">solution</a>; what are the options for Mac OS and Linux?</p>
<p><a href="https://bitbucket.org/AnomalousUnderdog/pythonmactabletlib" rel="nofollow">https://bitbucket.org/AnomalousUnderdog/pythonmactabletlib</a></p> <blockquote> <p>A small Python library to allow Python scripts to access pen tablet input data in Mac OS X.</p> <p>The library exists as plain C code compiled as a dynamic library/shared object. It interfaces with the Mac OS X's API to get data on pen tablet input.</p> <p>Then, Python scripts can use ctypes to get the data.</p> </blockquote> <p>Send me a message if you have any problems with it.</p>
python|wxpython|tablet|pen-tablet
2
1,901,973
5,323,110
How to do paging/pagination in Django for 3rd party REST services
<p>To use flickr as an example, a request URL looks something like this:</p> <pre><code>'http://api.flickr.com/services/rest/?&amp;method=flickr.people.getPublicPhotos&amp;api_key=' + settings.FLICKR_API_KEY + '&amp;user_id=' + userid + '&amp;format=json&amp;per_page' + per_page + '&amp;page=' + page + '&amp;nojsoncallback=1' </code></pre> <p>where <code>page</code> controls which page to display and <code>per_page</code> controls the number of photos to return.</p> <p>To simplify matters, let's make <code>per_page</code> fixed. So my question is, how can I implement a paging system that allows a user to go one page forwards or back at any time on the webpage?</p> <p>I imagine I would need to pass the page number to iterate through the request URL such that the right data is displayed. So I guess I'm not sure how to tie the template to the views.py. Essentially, I'm looking for the Django version of this <a href="https://stackoverflow.com/questions/676571/pagination-and-sorting-in-a-rails-restful-application">Rails question</a>.</p> <p>The examples and plugins I have come across so far (e.g. django-pagination) mainly deal with pagination resulting from a database query.</p>
<p>Django-pagination will work with any list of objects -- not just calls from the database. The example <a href="http://docs.djangoproject.com/en/dev/topics/pagination/?from=olddocs" rel="nofollow">here</a> actually starts off with an example that has nothing to do with local models or databases.</p> <p>For an API-type call, you'd just need to read your objects into a list, and then create a new Paginator object based on that list. All you have to do is give it the number of objects you want per page. It's really very simple.</p>
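<p>As a sketch (the fetch helper is hypothetical; it stands in for your requests call that returns a list of photos):</p> <pre><code>from django.core.paginator import Paginator
from django.shortcuts import render

def photos_view(request, userid):
    photos = fetch_flickr_photos(userid)   # hypothetical helper returning a list
    paginator = Paginator(photos, 20)      # fixed per_page of 20
    page = paginator.page(int(request.GET.get('page', 1)))
    return render(request, 'photos.html', {'page': page})
</code></pre> <p>In the template you can then use <code>page.has_next</code>, <code>page.has_previous</code> and <code>page.number</code> to build the forward/back links.</p>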
python|django|api|rest|pagination
3
1,901,974
62,696,868
Highlighting maximum value in a column on a seaborn heatmap
<p>I have a <code>seaborn.heatmap</code> plotted from a <code>DataFrame</code>:</p> <p><a href="https://i.stack.imgur.com/vJrSf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vJrSf.png" alt="enter image description here" /></a></p> <pre><code>import seaborn as sns import matplotlib.pyplot as plt fig = plt.figure(facecolor='w', edgecolor='k') sns.heatmap(collected_data_frame, annot=True, vmax=1.0, cmap='Blues', cbar=False, fmt='.4g') </code></pre> <p>I would like to create some sort of highlight for a maximum value in each column - it could be a red box around that value, or a red dot plotted next to that value, or the cell could be colored red instead of using <code>Blues</code>. Ideally I'm expecting something like this:</p> <p><a href="https://i.stack.imgur.com/BPcBu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BPcBu.png" alt="enter image description here" /></a></p> <p>I got the highlight working for DataFrame printing in Jupyter Notebook using tips from <a href="https://stackoverflow.com/questions/45606458/python-pandas-highlighting-maximum-value-in-column">this answer</a>:</p> <p><a href="https://i.stack.imgur.com/QcwqK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QcwqK.png" alt="enter image description here" /></a></p> <p>How can I achieve a similar thing but on a heatmap?</p>
<p>This customizes the heatmap example from the <a href="https://seaborn.pydata.org/examples/heatmap_annotation.html" rel="nofollow noreferrer">official reference</a>, using the technique from <a href="https://stackoverflow.com/questions/31290778/add-custom-border-to-certain-cells-in-a-matplotlib-seaborn-plot">this answer</a> for adding extra artists to an existing plot. First, a frame is drawn around chosen cells manually:</p> <pre><code>import matplotlib.pyplot as plt from matplotlib.patches import Rectangle import seaborn as sns sns.set() # Load the example flights dataset and convert to long-form flights_long = sns.load_dataset(&quot;flights&quot;) flights = flights_long.pivot(&quot;month&quot;, &quot;year&quot;, &quot;passengers&quot;) # Draw a heatmap with the numeric values in each cell f, ax = plt.subplots(figsize=(9, 6)) ax = sns.heatmap(flights, annot=True, fmt=&quot;d&quot;, linewidths=.5, ax=ax) ax.add_patch(Rectangle((10,6),2,2, fill=False, edgecolor='blue', lw=3)) </code></pre> <p><a href="https://i.stack.imgur.com/IUGVG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IUGVG.png" alt="enter image description here" /></a></p> <p>To locate the maximum value programmatically instead:</p> <pre><code>ymax = flights.max().idxmax()        # column (year) holding the overall maximum
ymax
1960
ypos = flights.columns.get_loc(ymax)
ypos
11
xmax = flights[ymax].idxmax()        # row (month) of the maximum within that column
xmax
'July'
xpos = flights.index.get_loc(xmax)
xpos
6
ax.add_patch(Rectangle((ypos, xpos), 1, 1, fill=False, edgecolor='blue', lw=3))
</code></pre>
python|seaborn|heatmap
2
1,901,975
67,286,429
FileNotFoundError when loading an image with "load_img" in Keras in Jupyter Notebook
<p>I get a FileNotFoundError when loading an image with &quot;load_img&quot; in Keras in a Jupyter notebook.</p> <pre><code>import pickle import numpy as np from tensorflow.keras.preprocessing.image import ImageDataGenerator,load_img import json import requests import PIL import matplotlib.pyplot as plt img=np.array(load_img(&quot;IMG_2996.jpg&quot;).resize((224,224))).tolist() # I am getting the problem here; in the Jupyter notebook it raises FileNotFoundError url='http://127.0.0.1:5000/model' requested_data=json.dumps({'img':img}) response = requests.post(url,requested_data) response.text </code></pre>
<p>Your image file is not in the same directory as your current working directory. This will tell you the directory it's looking for.</p> <pre><code>import os print(os.getcwd()) </code></pre> <p>Correct your relative path or use an absolute path.</p>
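<p>A quick sanity check from a notebook cell (using the filename from the question):</p> <pre><code>import os

os.path.exists('IMG_2996.jpg')   # False means the file is not in os.getcwd()
sorted(os.listdir('.'))          # list what actually is in the working directory
</code></pre>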
python|image-processing|keras|jupyter-notebook|jupyter
0
1,901,976
60,577,911
Sorting lost in Altair layered bar chart with error bars
<p>I am using a custom sort for my bar chart and it works well. However, when I want to add error bars to it and use a layered chart, the sorting is not taken into account anymore. I also defined <code>axis = None</code> and that is also not taken into account.</p> <p>Here is an example of the data:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame( {'size' : ['huge', 'huge', 'huge', 'huge', 'huge', 'huge', 'big', 'big', 'big', 'big', 'big', 'big', 'small', 'small', 'small', 'small', 'small', 'small'], 'weight': ['10 mg', '10 mg', '10 g', '10 g', '10 kg', '10 kg', '10 mg', '10 mg', '10 g', '10 g', '10 kg', '10 kg','10 mg', '10 mg', '10 g', '10 g', '10 kg', '10 kg'], 'value': [3.5,2.6,5.1,6.5,2.3,4.6,7.1,2.8,6.9,1.5,2.6,2.8,6.9,2.3,4.6,3.5,2.6,5.1] } ) </code></pre> <p>Using just the bar chart works:</p> <pre class="lang-py prettyprint-override"><code>alt.Chart(df).mark_bar().encode( x = alt.X('weight:O', title=None, axis=None, sort=['10 kg', '10 g', '10 mg']), y = alt.Y('mean(value)', title='Value'), color = alt.Color('weight:O', sort=['10 kg', '10 g', '10 mg']), column = alt.Column('size', sort=['huge', 'big', 'small']) ) </code></pre> <p><a href="https://i.stack.imgur.com/Nxe1b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nxe1b.png" alt="enter image description here"></a></p> <p>But not anymore with the error bars:</p> <pre class="lang-py prettyprint-override"><code>error_bars = alt.Chart().mark_errorbar(extent='ci').encode( x=alt.X('weight:O', sort=['10 kg', '10 g', '10 mg']), y='value:Q' ) bars = alt.Chart().mark_bar().encode( x = alt.X('weight:O', title=None, axis=None, sort=['10 kg', '10 g', '10 mg']), y = alt.Y('mean(value)', title='Value'), color = alt.Color('weight:O', sort=['10 kg', '10 g', '10 mg']) ) alt.layer(bars, error_bars, data=df).facet( column = alt.Column('size', sort=['huge', 'big', 'small']) ) </code></pre> <p><a href="https://i.stack.imgur.com/Wvkhz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wvkhz.png" alt="enter image description here"></a></p> <p>In both plots the <code>axis</code> and <code>title</code> have been set to <code>None</code>, but they are not taken into account in the layered chart. The weird thing is that the sorting is taken into account for the legend (see <code>color = ...</code>) but not for the x-axis (within each size).</p> <p>Is there a way around this or am I not using the layered charts correctly?</p>
<p>To hide the axis in the layered chart, you should set <code>axis=None</code> and <code>title=None</code> in both layers:</p> <pre><code>error_bars = alt.Chart().mark_errorbar(extent='ci').encode( x=alt.X('weight:O', title=None, axis=None, sort=['10 kg', '10 g', '10 mg']), y='value:Q' ) bars = alt.Chart().mark_bar().encode( x = alt.X('weight:O', title=None, axis=None, sort=['10 kg', '10 g', '10 mg']), y = alt.Y('mean(value)', title='Value'), color = alt.Color('weight:O', sort=['10 kg', '10 g', '10 mg']) ) alt.layer(bars, error_bars, data=df).facet( column = alt.Column('size', sort=['huge', 'big', 'small']) ) </code></pre> <p><a href="https://i.stack.imgur.com/yQUNo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yQUNo.png" alt="enter image description here"></a></p> <p>You'll notice that my version of the chart has the correct sort order: this is because I'm using Altair version 4.0. There was a <a href="https://github.com/vega/vega-lite/issues/5048" rel="nofollow noreferrer">bug</a> in earlier versions of Altair/Vega-Lite that prevented sort from behaving properly in layered charts.</p> <p>Update to Altair 4.0 or newer and your sorting will work.</p>
python|altair
1
1,901,977
70,048,278
How to get sum() for each year and return years and sum of those years in Pandas
<p>I am using the below code:</p> <pre class="lang-py prettyprint-override"><code>df.groupby(['year','value']).sum().groupby(['year','value']).count() </code></pre> <p>And it produces the following table:</p> <pre class="lang-none prettyprint-override"><code>year value 1960 5 1962 6 4 7 2000 4 2020 7 3 9 </code></pre> <p>My question is how to get the sum of <code>value</code> for each year without using a <code>for</code> loop. If it's possible, I presume that <code>cumsum()</code> will be the best solution.</p> <p>Expected result:</p> <pre class="lang-none prettyprint-override"><code>year value 1960 5 1962 17 2000 4 2020 19 </code></pre>
<pre class="lang-py prettyprint-override"><code>df = df.groupby('year').sum().reset_index() </code></pre> <p>Output:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df year value 0 1960 5 1 1962 17 2 2000 4 3 2020 19 </code></pre>
pandas|dataframe
1
1,901,978
10,893,742
How to prevent Tkinter and Pyserial from blocking entire program execution
<p>I am currently writing a program that involves the usage of Tkinter (for GUI), Pyserial (serial communication with a device), and Tweepy. The current flow of execution is as follows:</p> <ol> <li>GUI class is instantiated (based on Tkinter). This set of code uses Tkinter's standard key binding functions as well. </li> <li>GUI class creates a non-blocking Serial connection internally. This is only used for writing, not reading.</li> <li>Program then proceeds to create a Tweepy Stream instance, and listens for Tweets.</li> </ol> <p>The GUI and Tweepy instances work fine separately. However, when called sequentially, execution stops at (2). Tweepy is only initialized after the Tkinter window is closed.</p> <p>Is there a way to make both Tweepy and the GUI work concurrently (similar to multithreading, except that the objects should have references to each other)?</p>
<p>Found a solution. I used the threading module.</p> <p>Apparently, I just had to call the GUI initializer method using <code>thread_1 = threading.Thread(target=...)</code> and <code>thread_1.start()</code>. The other functions did not have to be run as threads.</p> <p>This might prove helpful for anyone who has encountered the same problem: <a href="http://softwareramblings.com/2008/06/running-functions-as-threads-in-python.html" rel="nofollow">http://softwareramblings.com/2008/06/running-functions-as-threads-in-python.html</a></p>
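<p>For concreteness, the pattern looks roughly like this (<code>gui_main</code> is a placeholder for whatever function builds the Tkinter window and enters <code>mainloop()</code>):</p> <pre><code>import threading

thread_1 = threading.Thread(target=gui_main)
thread_1.daemon = True   # let the process exit even if the GUI thread is still alive
thread_1.start()

# ... the Tweepy stream can now be started in the main thread ...
</code></pre>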
python|tkinter|tweepy
0
1,901,979
63,484,237
Python 3.9 - Tkinter - Custom widget class event bind
<p>I'm attempting to create a custom search box class named &quot;TkinterEntryBox&quot;, based on the Tkinter &quot;Entry&quot; widget, that would clear its content when the left mouse button is pressed inside the text entry area. Currently I'm trying to bind a function called &quot;clear&quot;, which is part of the custom class, that would clear an input in a parent window containing an instance of the &quot;TkinterEntryBox&quot; widget. I read that inheriting from the &quot;Entry&quot; class would be the preferred way of resolving my problem, but I would like to use composition instead of inheritance, since I don't want to have the &quot;Entry&quot; class &quot;leaking&quot; outside my custom class.</p> <p>The problem is that while the callback function is called as expected, its &quot;event&quot; argument contains the bound &quot;Entry&quot; widget instance and not the &quot;TkinterEntryBox&quot; instance. This causes an exception, since the &quot;Entry&quot; class does not have a &quot;clear&quot; method.</p> <p>Is it possible to force Tkinter to bind my custom class instead of the &quot;Entry&quot; class, so that the &quot;event&quot; argument in the callback function would contain my custom class instance under the &quot;widget&quot; property, and I can safely call the &quot;clear&quot; method? Moreover, since I'm new to Tkinter and GUI programming, could someone please tell me if such an approach to creating widgets is a valid one? If not, then I would greatly appreciate some pointers on how to improve my code.</p> <p>Here is a rough idea of what I have created so far.</p> <p>Below is the custom entry class whose &quot;clear&quot; method I would like to call through the event callback:</p> <pre><code>class TkinterEntryBox: def __init__(self, parent_window: BaseWindow, events_to_callbacks_bindings: Dict[str, Callable]): self._tkinter_entry = Entry(parent_window) self._bind_callbacks_to_events(events_to_callbacks_bindings) def clear(self) -&gt; None: self._tkinter_entry.delete(ENTRY_BOX_POINTER_START_INDEX, ENTRY_BOX_POINTER_TO_END) def input(self) -&gt; str: return self._tkinter_entry.get() def set(self, text: str) -&gt; None: self._tkinter_entry.insert(ENTRY_BOX_POINTER_START_INDEX, text) def place(self, placement_orientation: Geometry) -&gt; None: self._tkinter_entry.pack(side=str(placement_orientation)) def _bind_callbacks_to_events(self, events_to_callbacks_bindings: Dict[str, Callable]) -&gt; None: for event_name, callback_function in events_to_callbacks_bindings.items(): self._bind_callback_to_event(event_name, callback_function) def _bind_callback_to_event(self, event_name: str, callback_function: Callable) -&gt; None: self._tkinter_entry.bind(event_name, callback_function) </code></pre> <p>Here is how I initialize my custom entry box class:</p> <pre><code>def _initialize_search_entry_box(self, factory: TkinterWidgetFactory) -&gt; TkinterEntryBox: events_to_callbacks_bindings = { EVENT_ON_LEFT_MOUSE_BUTTON_DOWN: self._on_left_mouse_button_down } search_entry_box = factory.assemble_entry_box(self, INGREDIENT_SEARCH_BOX_PLACEHOLDER, events_to_callbacks_bindings) return search_entry_box </code></pre> <p>And here is the callback function that is called when the left mouse button is pressed inside the entry box input area:</p> <pre><code>@staticmethod def _on_left_mouse_button_down(event) -&gt; None: event.widget.clear() </code></pre> <p>The error message I'm getting in the above function call is:</p> <pre><code>AttributeError: 'Entry' object has no attribute 'clear' </code></pre>
<p>You can simply add <code>self._tkinter_entry.clear = self.clear</code> after creating the entry:</p> <pre><code>class TkinterEntryBox: def __init__(self, parent_window: BaseWindow, events_to_callbacks_bindings): self._tkinter_entry = Entry(parent_window) self._tkinter_entry.clear = self.clear ... </code></pre>
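<p>An alternative that avoids attaching attributes to the widget: bind a lambda that closes over the wrapper, so the handler receives the <code>TkinterEntryBox</code> directly instead of going through <code>event.widget</code> (a sketch against the class as posted; note the handler signature gains a second parameter):</p> <pre><code>def _bind_callback_to_event(self, event_name: str, callback_function: Callable) -&gt; None:
    # Pass the wrapper itself to the handler alongside the event.
    self._tkinter_entry.bind(event_name,
                             lambda event: callback_function(event, self))

@staticmethod
def _on_left_mouse_button_down(event, box) -&gt; None:
    box.clear()
</code></pre>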
user-interface|events|tkinter|python-3.8
0
1,901,980
56,465,522
How do I combine TimeoutError and tqdm progress bar in multiprocessing?
<p>I would like to perform multiprocessing using BOTH TimeoutError and tqdm progress bar.</p> <p>I have been successful at trying them separately. How should I combine the logic?</p> <p>Goals:</p> <ul> <li><p>The progress bar should update with every imap_unordered call</p></li> <li><p>Every process should be checked for TimeoutError</p></li> </ul> <p>I've tried a million ways to combine them (not shown). Every time I wrap the imap_unordered call with tqdm, then I am not able to access the "res.next" method for timeout.</p> <pre class="lang-py prettyprint-override"><code>from multiprocessing import Pool, TimeoutError from tqdm import tqdm def runner(obj): obj.go() return obj def dispatch(objs): with Pool() as pool: newObjs = list(tqdm(pool.imap_unordered(runner, objs), total=len(objs))) # need to find a way to integrate TimeoutError into progress bar # I've tried this a million ways using multiprocessing # try: # res.next(timeout=10) # except TimeoutError: # raise return newObjs </code></pre> <p>Code works perfectly for progress bar. Need to track if any process exceeds timeout.</p>
<p>You can assign the progress bar without an iterator and update it manually using <a href="https://tqdm.github.io/docs/tqdm/#update" rel="nofollow noreferrer"><code>update()</code></a>.</p> <pre><code>from multiprocessing import Pool, TimeoutError as mpTimeoutError from tqdm import tqdm def runner(obj): obj.go() return obj def dispatch(objs): with Pool() as pool: it = pool.imap_unordered(runner, objs) pbar = tqdm(total=len(objs)) new_objs = [] while True: try: new_objs.append(it.next(timeout=10)) pbar.update() except mpTimeoutError: raise except StopIteration: # signal that the iterator is exhausted pbar.close() break return new_objs </code></pre>
python|timeout|python-multiprocessing|tqdm
0
1,901,981
56,692,500
Divide columns in a DataFrame by a Series (result is only NaNs?)
<p>I'm trying to do a similar thing to what is posted in this question: <a href="https://stackoverflow.com/questions/30563934/python-pandas-n-x-m-dataframe-multiplied-by-1-x-m-dataframe">Python Pandas - n X m DataFrame multiplied by 1 X m Dataframe</a></p> <p>I have an n x m DataFrame, with all non-zero float values, and a 1 x m column, with all non-zero float values, and I'm trying to divide each column in the n x m dataframe by the values in the column. </p> <p>So I've got:</p> <pre><code>a b c 1 2 3 4 5 6 7 8 9 </code></pre> <p>and </p> <pre><code>x 11 12 13 </code></pre> <p>and I'm looking to return:</p> <pre><code>a b c 1/11 2/11 3/11 4/12 5/12 6/12 7/13 8/13 9/13 </code></pre> <p>I've tried a multiplication operation first, to see if I can make it work, so I tried applying the two solutions given in the answer to the question above.</p> <pre><code>df_prod = pd.DataFrame({c:df[c]* df_1[c].ix[0] for c in df.columns}) </code></pre> <p>This produces a "Key Error 0". And using the other solution:</p> <pre><code>df.mul(df_1.iloc[0]) </code></pre> <p>This just gives me all NaN, although in the right shape.</p>
<p>The cause of NaNs are due to misalignment of your indexes. To get over this, you will either need to divide by numpy arrays,</p> <pre><code># &lt;=0.23 df.values / df2[['x']].values # or df2.values assuming there's only 1 column # 0.24+ df.to_numpy() / df[['x']].to_numpy() array([[0.09090909, 0.18181818, 0.27272727], [0.33333333, 0.41666667, 0.5 ], [0.53846154, 0.61538462, 0.69230769]]) </code></pre> <p>Or perform an axis aligned division using <code>.div</code>:</p> <pre><code>df.div(df2['x'], axis=0) a b c 0 0.090909 0.181818 0.272727 1 0.333333 0.416667 0.500000 2 0.538462 0.615385 0.692308 </code></pre>
python|pandas|dataframe|division
3
1,901,982
56,596,318
How to display an image within the notebook in Google Colab (like in Anacondan Jupyter Notebook)?
<p>I am writing a Google Colab Notebook about Tight Binding Theory. I want to display an image either in a markdown cell or in a code cell, like it's possible to do in Anaconda with the following code:</p> <pre><code>from IPython.display import Image # needed to embed an image Image(filename='example.png', embed=True) </code></pre> <p>I have tried doing it this way in Google Colab:</p> <pre><code>from IPython.display import Image Image('example.png') </code></pre> <p>It runs and nothing shows up.</p> <p>I have read that this way is also possible:</p> <ol> <li><p>putting your image in /usr/local/share/jupyter/nbextensions/google.colab/</p></li> <li><pre><code>&lt;img src='/nbextensions/google.colab/image.png' /&gt; </code></pre></li> </ol> <p>I don't really understand this last way of doing it. Is that first step a directory on my computer (I have tried looking for it and it is not there)?</p> <p>Is there a simpler way to approach this?</p> <p>EDIT: I realise now that that directory is for Linux operating systems. Is there any way I could do something equivalent on Windows (my computer runs Windows)?</p>
<p>Your code should work, but you need to upload <code>example.png</code> to Colab first.</p> <pre class="lang-py prettyprint-override"><code>from IPython.display import Image Image('example.png') </code></pre> <p>To upload it, open the tab on the left, select 'Files' then 'Upload'</p>
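<p>You can also trigger the upload from code instead of the Files panel; <code>google.colab.files.upload()</code> opens a file picker and saves the chosen files into the current working directory:</p> <pre><code>from google.colab import files

uploaded = files.upload()   # pick example.png in the dialog

from IPython.display import Image
Image('example.png')
</code></pre>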
python|image|display|google-colaboratory
5
1,901,983
69,968,624
python setup of control logic for multiple steps
<p>What would be the best way to implement logic to handle scenarios where control logic should evaluate a prior step's output to determine if the next step should be invoked?</p> <p>For example, say we are making a request using the requests library. I am expecting a key to exist in the first call's response payload, but it may not exist. So how would I say that we try / run the second try block only if the first one did not throw an exception? I believe I could set a global variable somewhere, like <code>exception_thrown = FALSE</code>, and add an if statement to evaluate if this is true, but is there a more performant way to write this control logic? There may be more than 1 additional step after the first http call, but these should only be made if the previous try did not fail.</p> <pre><code>try: response = requests.get('https://website.com').json() if 'key_named_amount_due' in response: print('key exists') key_for_future_call = response['key_named_amount_due'] except Exception as error: print('key did not exist') try: response = requests.get('https://website.com/key=' + key_for_future_call).json() except Exception as error: print(error) </code></pre>
<p>Rather than <code>print</code>ing the error, you should raise an exception and let the calling scope handle it. That way, the function can continue as if it's on the happy path and let the calling scope handle any errors.</p> <pre class="lang-py prettyprint-override"><code>import requests


class ValidationFailedError(Exception):
    pass


def handle_get(url):
    resp = requests.get(url).json()
    if 'key_named_amount_due' not in resp:
        raise ValidationFailedError('key_named_amount_due not in response')
    key = resp['key_named_amount_due']
    try:
        return requests.get(f'{url}/key={key}').json()
    except Exception as e:
        raise ValidationFailedError('failed to get key ' + key) from e
</code></pre>
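<p>The calling code then decides what a failure means, for example:</p> <pre><code>try:
    data = handle_get('https://website.com')
except ValidationFailedError as err:
    print(err)
</code></pre>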
python|python-3.x
1
1,901,984
18,210,784
GAE: How to build a query where a string begins with a value
<p>In Google App Engine I have the following query to find all users with the given firstname.</p> <p>When I type in 'Mi', it would list all "Michael"s and "Mike"s in the database.</p> <pre><code>class User(UserMixin, ndb.Model): firstname = ndb.StringProperty() data = User.query(ndb.AND(User.firstname &gt;= name_startsWith, User.firstname &lt;= name_startsWith + u'\ufffd')).fetch(5) </code></pre> <p>I would like to make it case-insensitive so that I can type "mi" and it still outputs the same names.</p> <p>I tried <code>lower()</code> in Python, but this doesn't work with App Engine's <code>StringProperty()</code>.</p> <pre><code> data = User.query(ndb.AND(User.firstname.lower() &gt;= name_startsWith.lower(), User.firstname.lower() &lt;= name_startsWith.lower() + u'\ufffd')).fetch(5) </code></pre> <p>It throws the error:</p> <blockquote> <p>AttributeError: 'StringProperty' object has no attribute 'lower'</p> </blockquote>
<p>You can't do searches like that with the datastore API. You can either store an additional lower-case version of the field, or use the full-text search API which is meant for this sort of thing.</p>
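<p>For the first option, a sketch using a computed lower-case copy of the field (<code>ndb.ComputedProperty</code> keeps it in sync automatically; existing entities need to be re-put once so the new property gets indexed):</p> <pre><code>class User(UserMixin, ndb.Model):
    firstname = ndb.StringProperty()
    firstname_lower = ndb.ComputedProperty(
        lambda self: self.firstname.lower() if self.firstname else None)

prefix = name_startsWith.lower()
data = User.query(ndb.AND(User.firstname_lower &gt;= prefix,
                          User.firstname_lower &lt;= prefix + u'\ufffd')).fetch(5)
</code></pre>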
python|google-app-engine|app-engine-ndb
2
1,901,985
60,980,610
Django Using multiple (two) user types extending AbstractUser
<p><strong>UPDATE</strong>: The fix has been found. The problem was in models.py, where we were trying to save the donor/hospital as a User instance, but we did not use self.donor (we used donor and assumed it would be related to the created instance).</p> <p>For our project, we have two user types: Donor and Hospital. So we use the User model, which extends the AbstractUser model. The Donor and Hospital models both have a OneToOneField relationship with the User, and it uses the default user authentication.</p> <p>Almost everything works fine. The Donor and Hospital creation works well, the instances are added to the database and we can log in.</p> <p>However, we need the Donor object and its fields in the view and in the template. We have the user contained in the request and the id of the user.</p> <p><code>donor = Donor.objects.get(pk=request.user.id)</code></p> <p>Or</p> <p><code>donor = Donor.objects.filter(donor=request.user).first()</code></p> <p>These should return the donor object, but they return None.</p> <p>It works for Hospital.</p> <p>If we do Donor.objects.all() the newly created Donor is not in the list. However, the created hospital is present in the list.</p> <p>We cannot figure out what the problem is; the ids for the donor and user do not match.</p> <p>Thank you very much for your help!</p> <p>These are our models:</p> <pre><code>class User(AbstractUser): is_donor = models.BooleanField(default=False) is_hospital = models.BooleanField(default=False) class Donor(models.Model): # username, pword, first+last name, email donor = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True) # donor_id = models.IntegerField(auto_created=True, unique=True, null=False, primary_key=True) nickname = models.CharField(max_length=40, unique=True) phone = models.CharField(max_length=10) address = models.CharField(max_length=100) blood_type = models.CharField(max_length=3) weight = models.IntegerField() height = models.IntegerField() birth = models.DateField() age = models.IntegerField() notification = models.BooleanField likedStories = models.TextField(default=json.dumps([])) def get_age(self, b): return int((datetime.date.today() - datetime.datetime.strptime(b, "%Y-%m-%d").date()).days / 365.25) def __str__(self): return self.donor.username def new_donor(self, data): try: MISTAKE # donor = User.objects.create_user(username=data['email'], password=data['password']) self.donor = User.objects.create_user(username=data['email'], password=data['password']) except IntegrityError: return {'error': "email already exists"} except: return {'error': "something went wrong with email please try again"} donor.first_name = data['first_name'] donor.last_name = data['last_name'] # donor.nickname = data['nickname'] donor.is_donor = True self.nickname = data['username'] self.birth = (data['birthday']) self.age = self.get_age(self.birth) self.address = data['city'] self.height = data['height'] self.weight = data['weight'] self.blood_type = data['blood_type'] # self.notification = data['notification'] try: # HAS TO BE SELF. THE SAME FOR HOSPITAL self.donor.save() self.save() except IntegrityError: donor.delete() return {'error': "nickname already exists"} except: donor.delete() return {'error': "something went wrong with nickname please try again"} return {'error': None} class Hospital(models.Model): # email, pword hospital = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True) # hospital_id = models.IntegerField(auto_created=True, unique=True, null=False, primary_key=True) name = models.CharField(max_length=100) location = models.CharField(max_length=300) notified_types = models.CharField(max_length=100) slug_name = models.SlugField(unique=True) # stories = models.ManyToOneRel(Hospital, ) def save(self, *args, **kwargs): self.slug_name = slugify(self.name) super(Hospital, self).save(*args, **kwargs) def __str__(self): return self.name def new_hospital(self, data): # hospital = User.objects.create_user(data['hospital_name'], data['hospital_email'], data['hospital_password']) hospital = User.objects.create_user(username=data['hospital_email'], password=data['hospital_password']) hospital.is_hospital = True self.name = data['hospital_name'] self.location = data['location'] # self.notified_types = data['notif_types'] hospital.save() self.save() </code></pre> <p>This is the view where I need to get the Donor object:</p> <pre><code>@login_required def app(request): context_dict = {} # Get 4 most liked stories stories = Story.objects.order_by('-likes')[:4] # Get all reviews by the donor reviews = Review.objects.all() context_dict["stories"] = stories context_dict["reviews"] = reviews print("Is donor:") print(request.user.is_donor) print("Is hospital:") print(request.user.is_hospital) print("All donors:") print(Donor.objects.all()) print("All hospitals:") print(Hospital.objects.all()) print("Has user blood_type attribute:") print(request.user._meta.fields) # print (Donor.objects.get(donor_id = request.user.id).nickname) if request.user.is_hospital: hospital = Hospital.objects.filter(hospital=request.user).first() print(hospital) context_dict["hospital"] = hospital else: donor = Donor.objects.filter(donor=request.user).first() print(donor) # context_dict["donor"] = donor response = render(request, 'app/app.html', context=context_dict) # Return a rendered response to send to the client. return response </code></pre>
<pre><code>donor = Donor.objects.get(donor=request.user) </code></pre> <p>or</p> <pre><code>donor = Donor.objects.get(donor__id=request.user.id) </code></pre>
python|django|database|model
0
1,901,986
66,198,205
Paho MQTT Python - Clear topic queue if new message published
<p>I am developing a robotics project which sends a livestream of images. The images are all published to the same topic. What I am finding is that a backlog is created and a delay starts to form between images being sent for publish and them actually being published.</p> <p>I am assuming there is some form of internal buffer / queueing system in PAHO MQTT which is causing this.</p> <p>Given the nature of the project I am not precious about each image being published, ideally I'd be able to drop any messages that are waiting to be published to a certain topic and re-publish new content. Does anyone know if this is possible, and if so how?</p> <p>Thank you</p>
<p>No, this is not possible.</p> <p>The only thing that would cause messages to back up on the client is if you are publishing them quicker than the client can send them to the broker, which under normal circumstances will be a product of the network speed between the client and the broker.</p> <p>The only other thing that might have an impact would be if you are manually running the network loop and you are not calling it often enough.</p>
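<p>If you are running the network loop yourself (e.g. calling <code>client.loop()</code> in your own loop), the simplest fix is usually to let paho service it on a background thread:</p> <pre><code>client.loop_start()   # spawns a thread that handles the network traffic

# ... publish images as usual ...

client.loop_stop()
</code></pre>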
python|mqtt|communication|paho
2
1,901,987
66,151,406
How to read an Excel file from Google Cloud Storage into Jupyter Notebook / JupyterLab using Python
<pre><code>import pandas as pd import numpy as np ! pip install google-cloud ! pip install google-cloud-vision ! pip install gcsfs df= pd.read_excel('gs://bank-modelling-project//training_data.xlsx') df </code></pre> <p>Error: HttpError: Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object., 401</p> <ol> <li>I'm using a free trial account</li> <li>I made the bucket public</li> </ol> <p>I'm still getting the above error. Can anybody save me some time here?</p>
<p>According to the <a href="https://cloud.google.com/storage/docs/access-control/iam-permissions#object_permissions" rel="nofollow noreferrer">documentation</a>, in order to be able to read from an Object stored in Google Cloud Storage, you need to have the proper permissions. In your case it is <code>storage.objects.get</code>.</p> <p>In GCP, <a href="https://cloud.google.com/iam/docs/service-accounts" rel="nofollow noreferrer">service accounts</a> are used by services to communicate with each other in the cloud with the right permissions. Thus, the Jupyter notebook uses a service account to access the object <code>gs://bank-modelling-project//training_data.xlsx</code> in GCS. So it needs the <code>storage.objects.get</code> permission to read the object.</p> <p>Below are the steps to assign this permission to your service account:</p> <ol> <li>Go to your notebooks instance page and click on the instance you are using;</li> <li>Under <em><strong>Instance properties</strong></em>, check the <em>Service account</em>;</li> <li>Now go to <strong>IAM &amp; ADMIN</strong> and find the service account you saw in the previous step;</li> <li>On the same line as its name, click on the Pencil icon to <strong>edit the member</strong>;</li> <li>Add the pre-defined role Storage Object Viewer, because this role has the permission you need, as described <a href="https://cloud.google.com/storage/docs/access-control/iam-roles#standard-roles" rel="nofollow noreferrer">here</a>.</li> <li>Re-open your notebook instance and you will be able to read the GCS object you desire.</li> </ol> <p><em>Please note that you won't be able to create any object in GCS with only this permission.</em></p>
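<p>Once the role is in place, the read from the question works as-is (note that the object path only needs a single slash). If you ever need to pass credentials explicitly instead of relying on the instance's service account, newer pandas (1.2+) with gcsfs accepts them via <code>storage_options</code> (the key path below is a placeholder):</p> <pre><code>df = pd.read_excel('gs://bank-modelling-project/training_data.xlsx',
                   storage_options={'token': '/path/to/service-account-key.json'})
</code></pre>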
python|excel|csv|google-cloud-platform|jupyter
0
1,901,988
66,070,517
Transpose dataframe based on column list
<p>I have a dataframe in the following structure:</p> <pre><code>cNames | cValues | number [a,b,c] | [1,2,3] | 10 [a,b,d] | [55,66,77]| 20 </code></pre> <p>I would like to transpose - <strong>create columns from the names in <em>cNames</em></strong>.<br /> But I can't manage to achieve this with <em>transpose</em> because I want a column for each value in the list.<br /> The needed output:</p> <pre><code>a | b | c | d | number 1 | 2 | 3 | NaN | 10 55 | 66 | NaN | 77 | 20 </code></pre> <p>How can I achieve this result?<br /> Thanks!</p> <p><em>The code to create the DF:</em></p> <pre><code>d = {'cNames': [['a','b','c'], ['a','b','d']], 'cValues': [[1,2,3], [55,66,77]], 'number': [10,20]} df = pd.DataFrame(data=d) </code></pre>
<p>One option is <code>concat</code>:</p> <pre><code>pd.concat([pd.Series(x['cValues'], x['cNames'], name=idx) for idx, x in df.iterrows()], axis=1 ).T.join(df.iloc[:,2:]) </code></pre> <p>Or a DataFrame construction:</p> <pre><code>pd.DataFrame({idx: dict(zip(x['cNames'], x['cValues']) ) for idx, x in df.iterrows() }).T.join(df.iloc[:,2:]) </code></pre> <p>Output:</p> <pre><code> a b c d number 0 1.0 2.0 3.0 NaN 10 1 55.0 66.0 NaN 77.0 20 </code></pre> <hr /> <p><strong>Update</strong> Performances sort by run time on sample data</p> <p><strong>DataFrame</strong></p> <pre><code>%%timeit pd.DataFrame({idx: dict(zip(x['cNames'], x['cValues']) ) for idx, x in df.iterrows() }).T.join(df.iloc[:,2:]) 1.29 ms ± 36.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) </code></pre> <p><strong>concat</strong>:</p> <pre><code>%%timeit pd.concat([pd.Series(x['cValues'], x['cNames'], name=idx) for idx, x in df.iterrows()], axis=1 ).T.join(df.iloc[:,2:]) 2.03 ms ± 86.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre> <p><strong>KJDII's new series</strong></p> <pre><code>%%timeit df['series'] = df.apply(lambda x: dict(zip(x['cNames'], x['cValues'])), axis=1) pd.concat([df['number'], df['series'].apply(pd.Series)], axis=1) 2.09 ms ± 65.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre> <p><strong>Scott's apply(pd.Series.explode)</strong></p> <pre><code>%%timeit df.apply(pd.Series.explode)\ .set_index(['number', 'cNames'], append=True)['cValues']\ .unstack()\ .reset_index()\ .drop('level_0', axis=1) 4.9 ms ± 135 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre> <p><strong>wwnde's set_index.apply(explode)</strong></p> <pre><code>%%timeit g=df.set_index('number').apply(lambda x: x.explode()).reset_index() g['cValues']=g['cValues'].astype(int) pd.pivot_table(g, index=[&quot;number&quot;],values=[&quot;cValues&quot;],columns=[&quot;cNames&quot;]).droplevel(0, axis=1).reset_index() 7.27 ms ± 162 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre> <p><strong>Celius' double explode</strong></p> <pre><code>%%timeit df1 = df.explode('cNames').explode('cValues') df1['cValues'] = pd.to_numeric(df1['cValues']) df1.pivot_table(columns='cNames',index='number',values='cValues') 9.42 ms ± 189 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre>
python|pandas|dataframe
7
1,901,989
68,439,172
Adding a total per level-2 index in a multiindex pandas dataframe
<p>I have a dataframe:</p> <pre><code>df_full = pd.DataFrame.from_dict({('group', ''): {0: 'A', 1: 'A', 2: 'A', 3: 'A', 4: 'A', 5: 'A', 6: 'A', 7: 'B', 8: 'B', 9: 'B', 10: 'B', 11: 'B', 12: 'B', 13: 'B'}, ('category', ''): {0: 'Books', 1: 'Candy', 2: 'Pencil', 3: 'Table', 4: 'PC', 5: 'Printer', 6: 'Lamp', 7: 'Books', 8: 'Candy', 9: 'Pencil', 10: 'Table', 11: 'PC', 12: 'Printer', 13: 'Lamp'}, (pd.Timestamp('2021-06-28 00:00:00'), 'Sales_1'): {0: 9.937449997200002, 1: 30.71300000639998, 2: 58.81199999639999, 3: 25.661999978399994, 4: 3.657999996, 5: 12.0879999972, 6: 61.16600000040001, 7: 6.319439989199998, 8: 12.333119997600003, 9: 24.0544100028, 10: 24.384659998799997, 11: 1.9992000012000002, 12: 0.324, 13: 40.69122000000001}, (pd.Timestamp('2021-06-28 00:00:00'), 'Sales_2'): {0: 21.890370397789923, 1: 28.300470581874837, 2: 53.52039700062155, 3: 52.425508769690694, 4: 6.384936971649232, 5: 6.807138946302334, 6: 52.172, 7: 5.916852561, 8: 5.810764652, 9: 12.1243325, 10: 17.88071596, 11: 0.913782413, 12: 0.869207661, 13: 20.9447844}, (pd.Timestamp('2021-06-28 00:00:00'), 'last_week_sales'): {0: np.nan, 1: np.nan, 2: np.nan, 3: np.nan, 4: np.nan, 5: np.nan, 6: np.nan, 7: np.nan, 8: np.nan, 9: np.nan, 10: np.nan, 11: np.nan, 12: np.nan, 13: np.nan}, (pd.Timestamp('2021-06-28 00:00:00'), 'total_orders'): {0: 86.0, 1: 66.0, 2: 188.0, 3: 556.0, 4: 12.0, 5: 4.0, 6: 56.0, 7: 90.0, 8: 26.0, 9: 49.0, 10: 250.0, 11: 7.0, 12: 2.0, 13: 44.0}, (pd.Timestamp('2021-06-28 00:00:00'), 'total_sales'): {0: 4390.11, 1: 24825.059999999998, 2: 48592.39999999998, 3: 60629.77, 4: 831.22, 5: 1545.71, 6: 34584.99, 7: 5641.54, 8: 6798.75, 9: 13290.13, 10: 42692.68000000001, 11: 947.65, 12: 329.0, 13: 29889.65}, (pd.Timestamp('2021-07-05 00:00:00'), 'Sales_1'): {0: 13.690399997999998, 1: 38.723000005199985, 2: 72.4443400032, 3: 36.75802000560001, 4: 5.691999996, 5: 7.206999998399999, 6: 66.55265999039996, 7: 6.4613199911999954, 8: 12.845630001599998, 9: 26.032340003999998, 10: 30.1634600016, 11: 1.0203399996, 12: 1.4089999991999997, 13: 43.67116000320002}, (pd.Timestamp('2021-07-05 00:00:00'), 'Sales_2'): {0: 22.874363860953647, 1: 29.5726042895728, 2: 55.926190956481534, 3: 54.7820864335212, 4: 6.671946105284065, 5: 7.113126469779095, 6: 54.517, 7: 6.194107518, 8: 6.083562133, 9: 12.69221484, 10: 18.71872129, 11: 0.956574175, 12: 0.910216433, 13: 21.92632044}, (pd.Timestamp('2021-07-05 00:00:00'), 'last_week_sales'): {0: 4390.11, 1: 24825.059999999998, 2: 48592.39999999998, 3: 60629.77, 4: 831.22, 5: 1545.71, 6: 34584.99, 7: 5641.54, 8: 6798.75, 9: 13290.13, 10: 42692.68000000001, 11: 947.65, 12: 329.0, 13: 29889.65}, (pd.Timestamp('2021-07-05 00:00:00'), 'total_orders'): {0: 109.0, 1: 48.0, 2: 174.0, 3: 587.0, 4: 13.0, 5: 5.0, 6: 43.0, 7: 62.0, 8: 13.0, 9: 37.0, 10: 196.0, 11: 8.0, 12: 1.0, 13: 33.0}, (pd.Timestamp('2021-07-05 00:00:00'), 'total_sales'): {0: 3453.02, 1: 17868.730000000003, 2: 44707.82999999999, 3: 60558.97999999999, 4: 1261.0, 5: 1914.6000000000001, 6: 24146.09, 7: 6201.489999999999, 8: 5513.960000000001, 9: 9645.87, 10: 25086.785, 11: 663.0, 12: 448.61, 13: 26332.7}}).set_index(['group','category']) </code></pre> <p>I am trying to get a <code>total</code> for each column per <code>category</code>. So in this <code>df</code> example adding 2 lines below <code>Lamp</code> denoting the totals of each column. 
Red lines indicate the desired <code>totals</code> placement:</p> <p><a href="https://i.stack.imgur.com/JWfWK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JWfWK.png" alt="enter image description here" /></a></p> <p>What I've tried:</p> <pre><code>df_out['total'] = df_out.sum(level=1).loc[:, (slice(None), 'total_sales')] </code></pre> <p>But I get:</p> <blockquote> <p>ValueError: Wrong number of items passed 4, placement implies 1</p> </blockquote> <p>I also checked this <a href="https://stackoverflow.com/questions/61239640/add-total-row-to-dataframe-with-multi-level-index">question</a> but could not apply it to my case.</p>
<p>Let us try <code>groupby</code> on <code>level=0</code></p> <pre><code>s = df_full.groupby(level=0).sum() s.index = pd.MultiIndex.from_product([s.index, ['Total']]) df_out = df_full.append(s).sort_index() </code></pre> <hr /> <pre><code>print(df_out) 2021-06-28 00:00:00 2021-07-05 00:00:00 Sales_1 Sales_2 last_week_sales total_orders total_sales Sales_1 Sales_2 last_week_sales total_orders total_sales group category A Books 9.93745 21.890370 NaN 86.0 4390.11 13.69040 22.874364 4390.11 109.0 3453.020 Candy 30.71300 28.300471 NaN 66.0 24825.06 38.72300 29.572604 24825.06 48.0 17868.730 Lamp 61.16600 52.172000 NaN 56.0 34584.99 66.55266 54.517000 34584.99 43.0 24146.090 PC 3.65800 6.384937 NaN 12.0 831.22 5.69200 6.671946 831.22 13.0 1261.000 Pencil 58.81200 53.520397 NaN 188.0 48592.40 72.44434 55.926191 48592.40 174.0 44707.830 Printer 12.08800 6.807139 NaN 4.0 1545.71 7.20700 7.113126 1545.71 5.0 1914.600 Table 25.66200 52.425509 NaN 556.0 60629.77 36.75802 54.782086 60629.77 587.0 60558.980 Total 202.03645 221.500823 0.0 968.0 175399.26 241.06742 231.457318 175399.26 979.0 153910.250 B Books 6.31944 5.916853 NaN 90.0 5641.54 6.46132 6.194108 5641.54 62.0 6201.490 Candy 12.33312 5.810765 NaN 26.0 6798.75 12.84563 6.083562 6798.75 13.0 5513.960 Lamp 40.69122 20.944784 NaN 44.0 29889.65 43.67116 21.926320 29889.65 33.0 26332.700 PC 1.99920 0.913782 NaN 7.0 947.65 1.02034 0.956574 947.65 8.0 663.000 Pencil 24.05441 12.124332 NaN 49.0 13290.13 26.03234 12.692215 13290.13 37.0 9645.870 Printer 0.32400 0.869208 NaN 2.0 329.00 1.40900 0.910216 329.00 1.0 448.610 Table 24.38466 17.880716 NaN 250.0 42692.68 30.16346 18.718721 42692.68 196.0 25086.785 Total 110.10605 64.460440 0.0 468.0 99589.40 121.60325 67.481717 99589.40 350.0 73892.415 </code></pre>
python|pandas|multi-index
4
1,901,990
68,313,071
Mastermind Python problem for correct colour and correct place and correct colour and wrong place
<p>Hi, below is my Mastermind game Python code. I have a problem; for example: when the answer is &quot;RGRG&quot; and I enter &quot;GRRG&quot;, by right the output for correct colour and correct place should be 2, and correct colour and wrong place should also be 2. But now the output is wrong: it shows correct colour and correct place = 2 but correct colour and wrong place = 0. Does anybody know what part of my code is wrong?</p> <pre><code>import random from random import choice print(&quot;Welcome to Chee Fung Mastermind Games&quot;) opening = input(&quot;Hi Do you need instruction or starightly enter into the game. [I/G]\n&quot;) if opening == &quot;I&quot; or opening == &quot;i&quot; : print(&quot;Computer will automatically generate four colour from list.&quot;) print(&quot;Player must guess 4 colours numbers correct from the list to win the game.&quot;) print(&quot;You have 10 times chances to atempt the game.&quot;) print(&quot;There will be 5 colours in the below list.&quot;) print(&quot;(R)Red, (G)Green, (B)Blue, (P)Purple, (O)Orange&quot;) print(&quot;Player no need to enter the whole word, just need to enter the first alphabetical of the colours.&quot;) print(&quot;Example: for Red colour you just need to enter 'r' or 'R'.&quot;) else: print(&quot;Let's start the game.&quot;) prev_curt_color = 0 colors = [&quot;R&quot;, &quot;G&quot;, &quot;B&quot;, &quot;Y&quot;, &quot;W&quot;, &quot;P&quot;] attempts = 0 game = True # computer randomly picks four-color code color_code = [] for i in range(4): color_code.append(choice(colors)) print (color_code) # player guesses the number while game: correct_color = 0 guessed_color = 0 player_guess = input(&quot;Please enter the four colour:&quot;).upper() attempts += 1 # checking if player's input is correct if len(player_guess) != len(color_code): print (&quot;\nThe color code has exactly four colors. please try again!&quot;) attempts -= 1 continue for i in range(4): if player_guess[i] not in colors: print (&quot;\nLook up what colors you can use in this game.&quot;) attempts -= 1 break # comparison between player's input and secret code and player_guess[i] in color_code if correct_color != 4: for i in range(4): if player_guess[i] == color_code[i]: correct_color += 1 elif player_guess[i] != color_code[i] and player_guess[i] in color_code and correct_color &gt; prev_curt_color: guessed_color += 1 prev_curt_color = correct_color print(&quot;Correct colour and correct place: &quot;, correct_color) print(&quot;Correct colour and wrong place: &quot;, guessed_color) if correct_color == 4: if attempts == 1: print (&quot;Wow! You guessed at the first attempt!&quot;) else: print (&quot;Well done... You needed &quot; + str(attempts) + &quot; attempts to guess.&quot;) game = False if attempts &gt;= 1 and attempts &lt;10 and correct_color != 4: print(&quot;The attempt time &quot; + str(attempts)) print(&quot;Next attempt: &quot;) elif attempts &gt;= 10: print (&quot;You didn't guess! The secret color code was: &quot; + str(color_code)) game = False # play or not to play while game == False: finish_game = input(&quot;\nDo you want to play again (Y/N)?&quot;).upper() attempts = 0 if finish_game ==&quot;N&quot;: print (&quot;Thanks for the game! Bye, bye!&quot;) break elif finish_game == &quot;Y&quot;: game = True print (&quot;So, let's play again... Guess the secret code: &quot;) </code></pre>
<p>I've reviewed your code and debugged it, and I found a problem: when your <code>color_code = ['R','G','G','G']</code> and <code>player_guess = 'GGGG'</code>, the first time the program compares <code>correct_color</code> with <code>prev_curt_color</code> they are both equal to 0, so when you try to figure out the number of correct colours in the wrong place, it shows 0 instead of 1. Here is the edited code below:</p> <pre><code>    if correct_color != 4:
        for i in range(4):
            if player_guess[i] == color_code[i]:
                correct_color += 1
                prev_curt_color = correct_color
            elif player_guess[i] != color_code[i] and player_guess[i] in color_code and correct_color &gt;= prev_curt_color:
                guessed_color += 1
</code></pre> <p>BTW, it seems that you should learn how to debug a Python program; here is a tutorial that I hope can help: <a href="https://www.jetbrains.com/help/pycharm/debugging-your-first-python-application.html#where-is-the-problem" rel="nofollow noreferrer">Debug Tutorial</a></p>
python
0
1,901,991
59,201,564
How to decode string with unicode in python?
<p>I have the following line:</p> <pre><code>%7B%22appVersion%22%3A1%2C%22modulePrefix%22%3A%22web-experience-app%22%2C%22environment%22%3A%22production%22%2C%22rootURL%22%3A%22/%22%2C%22
</code></pre> <p>Expected Result:</p> <pre><code>{"appVersion":1,"modulePrefix":"web-experience-app","environment":"production","rootURL":"/","
</code></pre> <p>You can check it out <a href="https://www.online-toolz.com/tools/text-unicode-entities-convertor.php" rel="nofollow noreferrer">here</a>. What I tried:</p> <pre class="lang-py prettyprint-override"><code>import codecs

foo = '%7B%22appVersion%22%3A1%2C%22modulePrefix%22%3A%22web-experience-app%22%2C%22environment%22%3A%22production%22%2C%22rootURL%22%3A%22/%22%2C%22'
codecs.decode(foo, 'unicode-escape')
foo.encode('utf-8').decode('utf-8')
</code></pre> <p>This does not work. What other options are there?</p>
<p>The string is urlencoded. You can convert it by reversing the urlencoding.</p> <pre><code>from urllib import parse s = '%7B%22appVersion%22%3A1%2C%22modulePrefix%22%3A%22web-experience-app%22%2C%22environment%22%3A%22production%22%2C%22rootURL%22%3A%22/%22%2C%22' unquoted = parse.unquote(s) unquoted '{"appVersion":1,"modulePrefix":"web-experience-app","environment":"production","rootURL":"/","' </code></pre> <p>This looks like part of a larger JSON string. The complete object can be de-serialised with <code>json.loads</code>.</p>
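<p>If the full (non-truncated) payload is available, the two steps compose directly. A minimal sketch, using a hypothetical complete payload for illustration (the snippet in the question is cut off mid-object, so it would not parse as-is):</p> <pre><code>import json
from urllib import parse

# Hypothetical complete payload -- not the truncated string from the question
s = '%7B%22appVersion%22%3A1%2C%22environment%22%3A%22production%22%7D'
obj = json.loads(parse.unquote(s))
print(obj['appVersion'], obj['environment'])  # 1 production
</code></pre>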
python-3.x|python-unicode|unicode-escapes
2
1,901,992
63,145,692
How to overwrite an image in Tkinter
<p>I created a simple image-opening program which opens the image selected from the file dialog by clicking a button, but whenever I select another image it just appears under the current image.</p> <p>I want the newly selected image to replace the old image.</p> <p>Please help, what should I do?</p> <pre><code>from tkinter import *
from PIL import Image,ImageTk
from tkinter import filedialog

root=Tk()
root.title('Image')

def open():
    global my_img
    root.filename = filedialog.askopenfilename(initialdir='/GUI',title='Select A File',filetypes=(('jpg files','*.jpg'),('png files','*.png'),('all files','*.*')))
    my_img = ImageTk.PhotoImage(Image.open(root.filename))
    my_image_lbl = Label(image=my_img).pack()

my_btn = Button(root,text='Open File Manager',command=open).pack()

root.mainloop()
</code></pre>
<p>You should create the <code>my_image_lbl</code> outside <code>open()</code> and update its image inside the function:</p> <pre><code>from tkinter import * from PIL import Image,ImageTk from tkinter import filedialog root=Tk() root.title('Image') def open(): filename = filedialog.askopenfilename(initialdir='/GUI',title='Select A File',filetypes=(('jpg files','*.jpg'),('png files','*.png'),('all files','*.*'))) if filename: my_image_lbl.image = ImageTk.PhotoImage(file=filename) my_image_lbl.config(image=my_image_lbl.image) Button(root,text='Open File Manager',command=open).pack() my_image_lbl = Label(root) my_image_lbl.pack() root.mainloop() </code></pre>
python|python-3.x|tkinter|python-imaging-library|tk
4
1,901,993
62,357,718
How to make sure that a list of generated numbers follow a uniform distribution
<p>I have a list of 150 numbers from 0 to 149. I would like to use a for loop with 150 iterations in order to generate 150 lists of 6 numbers such that, in each iteration <code>k</code>, the number <code>k</code> is included as well as 5 different random numbers. For example:</p> <pre><code>S0 = [0, r1, r2, r3, r4, r5] # r1, r2,..., r5 are random numbers between 0 and 150
S1 = [1, r1', r2', r3', r4', r5'] # r1', r2',..., r5' are new random numbers between 0 and 150
...
S149 = [149, r1'', r2'', r3'', r4'', r5'']
</code></pre> <p>In addition, the numbers in each list have to be pairwise different, with a minimum distance of 5 between any two of them. This is the code I am using:</p> <pre><code>import random
import numpy as np

final_list = []
for k in range(150):
    S = [k]
    for it in range(5):
        domain = [ele for ele in range(150) if ele not in S]
        d = 0
        x = k
        while d &lt; 5:
            d = np.Infinity
            x = random.sample(domain, 1)[0]
            for ch in S:
                if np.abs(ch - x) &lt; d:
                    d = np.abs(ch - x)
        S.append(x)
    final_list.append(S)
</code></pre> <p>Output:</p> <pre><code>[[0, 149, 32, 52, 39, 126],
 [1, 63, 16, 50, 141, 79],
 [2, 62, 21, 42, 35, 71],
 ...
 [147, 73, 38, 115, 82, 47],
 [148, 5, 78, 115, 140, 43],
 [149, 36, 3, 15, 99, 23]]
</code></pre> <p>Now, the code is working, but I would like to know if it's possible to force the number of repetitions of each number across all the iterations to be approximately the same. For example, after using the previous code, this plot indicates how many times each number has appeared in the generated lists:</p> <p><a href="https://i.stack.imgur.com/vO2Oh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vO2Oh.png" alt="rep" /></a></p> <p>As you can see, there are numbers that have appeared more than 10 times while there are others that have appeared only 2 times. Is it possible to reduce this level of variation so that this plot can be approximated as a uniform distribution? Thanks.</p>
<p>First, I am not sure that your assertion that the current results are not uniformly distributed is necessarily correct. It would seem prudent to me to examine the histogram over several repetitions of the process, rather than just one.</p> <p>I am not a statistician, but when I want to approximate a uniform distribution (and assuming that the functions in <code>random</code> provide uniform distribution), what I try to do is to simply accept all results returned by <code>random</code> functions. For that, I need to limit the choices given to these functions ahead of calling them. This is how I would go about your task:</p> <pre><code>import random
import numpy as np

N = 150

def random_subset(n):
    result = []
    cands = set(range(N))
    for i in range(6):
        result.append(n)  # Initially, n is the number that must appear in the result
        cands -= set(range(n - 4, n + 5))  # Remove candidates less than 5 away
        n = random.choice(list(cands))  # Select next number
    return result

result = np.array([random_subset(n) for n in range(N)])
print(result)
</code></pre> <p>Simply put, whenever I add a number <code>n</code> to the result set, I remove from the candidate set a whole neighbourhood of the proper size, to ensure that no number at a distance of less than 5 can be selected in the future.</p> <p>The code is not optimized (multiple <code>set</code> to <code>list</code> conversions) but it works (as per my understanding).</p>
python|random|uniform-distribution
1
1,901,994
62,450,441
How to insert the txt.file string into MYSQL using python
<p>I have a text file which contains several character strings, shown below:</p> <pre><code>0546afwq, 5fj532gs, 1824t4sa, sq234312
</code></pre> <p><a href="https://i.stack.imgur.com/3Rfi3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Rfi3.jpg" alt="enter image description here"></a></p> <p>Now I would like to import this .txt file into the SQL database named TV using Python and SQL, create a table ADMIN, and add the TXT_results column (varchar2 type).</p> <p>This is what I have tried so far:</p> <pre><code>import cx_Oracle
conn = cx_Oracle.connect('account/password@db')
cur = conn.cursor()

if conn:
    print("data base has already connected")

with open(r'C:\Users\Desktop\result.txt') as infile:
    for line in infile:
        data = line.split(",")
        query = ("insert INTO admin(txt_results) VALUES (%s)")
        cur.execute(query, data)

cur.close()
conn.commit()
conn.close()
</code></pre> <hr> <p>And got the following error: <code>"ORACLE] ORA-01036: illegal variable name/number"</code></p> <p>Could anybody help me to fix this problem?</p> <p>Thanks</p>
<p>For starters, TXT_results is not defined in your DB; I assume what you wanted in place of that is a placeholder for the line's values. Note that with cx_Oracle (which your code uses) the placeholder is a <code>:1</code>-style bind variable rather than <code>%s</code>, which is likely what triggers the ORA-01036 error. You should also format the values of the line you read (separate the fields at the commas), and it's always good to specify the columns you're inserting into, for transparency.</p>
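<p>A minimal sketch of what that could look like, assuming the <code>admin</code> table with its <code>txt_results</code> column already exists (the connection string is the one from the question):</p> <pre><code>import cx_Oracle

conn = cx_Oracle.connect('account/password@db')
cur = conn.cursor()

with open(r'C:\Users\Desktop\result.txt') as infile:
    for line in infile:
        # One row per comma-separated value, stripped of surrounding whitespace
        for value in line.split(','):
            cur.execute("INSERT INTO admin (txt_results) VALUES (:1)", [value.strip()])

conn.commit()
cur.close()
conn.close()
</code></pre>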
python|mysql|oracle
0
1,901,995
35,434,363
Python: Generate random values from empirical distribution
<p>In Java, I usually rely on the <a href="http://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math3/random/EmpiricalDistribution.html" rel="noreferrer">org.apache.commons.math3.random.EmpiricalDistribution</a> class to do the following:</p> <ul> <li>Derive a probability distribution from observed data.</li> <li>Generate random values from this distribution.</li> </ul> <p>Is there any Python library that provides the same functionality? It seems like <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.resample.html#scipy.stats.gaussian_kde.resample" rel="noreferrer">scipy.stats.gaussian_kde.resample</a> does something similar, but I'm not sure if it implements the same procedure as the Java type I'm familiar with.</p>
<pre><code>import numpy as np import scipy.stats import matplotlib.pyplot as plt # This represents the original "empirical" sample -- I fake it by # sampling from a normal distribution orig_sample_data = np.random.normal(size=10000) # Generate a KDE from the empirical sample sample_pdf = scipy.stats.gaussian_kde(orig_sample_data) # Sample new datapoints from the KDE new_sample_data = sample_pdf.resample(10000).T[:,0] # Histogram of initial empirical sample cnts, bins, p = plt.hist(orig_sample_data, label='original sample', bins=100, histtype='step', linewidth=1.5, density=True) # Histogram of datapoints sampled from KDE plt.hist(new_sample_data, label='sample from KDE', bins=bins, histtype='step', linewidth=1.5, density=True) # Visualize the kde itself y_kde = sample_pdf(bins) plt.plot(bins, y_kde, label='KDE') plt.legend() plt.show(block=False) </code></pre> <p><a href="https://i.stack.imgur.com/gtc7G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gtc7G.png" alt="resulting plot"></a></p> <p><code>new_sample_data</code> should be drawn from roughly the same distribution as the original data (to the degree that the KDE is a good approximation to the original distribution).</p>
python|statistics
7
1,901,996
31,259,075
In Pandas, can't show x-axis dates nicely and y-axis in unwanted logs
<p>Here's my chart:</p> <p><img src="https://i.stack.imgur.com/f35G7.png" alt="enter image description here"></p> <p>I have two issues: I can't get the datetime objects on the x-axis to come out nicely (e.g. January 1st, 2013), and I would like the y-axis labels to be absolute values, not log values.</p> <p>Here's my annotated code (<code>date_sorted</code> is my Pandas dataframe):</p> <pre><code>fig = plt.figure()
date_sorted.plot( x = ["ReleaseDate"], y = ["DomesticTotalGross"])
plt.title("Domestic Total Gross over Time")
plt.xticks(rotation=45)
plt.yscale('linear') # ---- this doesn't seem to do anything
plt.ticklabel_format(useOffset=False) #--- this gives this error: AttributeError: This method only works with the ScalarFormatter.
fig.autofmt_xdate() #thought this was supposed to convert my x-axis datetime objects into nice dates?
</code></pre>
<p>Regarding the date format, one way to achieve your objective would be to reset your index to a date format instead of datetime:</p> <pre><code>date_sorted.set_index([ts.date for ts in date_sorted.index]).plot(x="ReleaseDate", y="DomesticTotalGross") </code></pre>
python|pandas|matplotlib
0
1,901,997
15,642,004
Import Error: No module named http django
<p>I have an error occurring in my wsgi.py file. It's complaining that:</p> <pre><code>File "(directory)/.local/lib/python2.7/site-packages/Django-1.5 py2.7.egg/django/core/handlers/wsgi.py", line 9, in &lt;module&gt; from django import http ImportError: cannot import name http </code></pre> <p>I checked that the directory http exists in (directory)/.local/lib/python2.7/site-packages/Django-1.5 py2.7.egg/django/. Also, when importing django.core, there is no problem, but when importing any of the other modules, it gives the same error. Here is the directory information for (directory)/.local/lib/python2.7/site-packages/Django-1.5 py2.7.egg/django/:</p> <pre><code>django: bin conf contrib core db dispatch forms http __init__.py __init__.pyc middleware shortcuts template templatetags test utils views </code></pre> <p>And here's the directory info for http:</p> <pre><code>http: cookie.py cookie.pyc __init__.py __init__.pyc multipartparser.py multipartparser.pyc request.py request.pyc response.py response.pyc utils.py utils.pyc </code></pre> <p>EDIT:</p> <p>error given in python shell:</p> <pre><code>&gt;&gt;from django import http Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; ImportError: cannot import name http &gt;&gt;from django import core &gt;&gt; </code></pre> <p>Error was solved by deleting a local directory also called django. However, the local directory was in place because I am putting my site on a shared server that I cannot edit the site-packages of. Is there a way for the application to read both from my local and the server's django folders so that I can use modules that are not on the server's django directory?</p>
<p>You can turn your local django folder into a namespace. This tells the python interpreter to continue traversing the path for modules, even if it finds a matching module earlier (i.e. the 'django' module you added). </p> <p>Add this to the __init__.py in your local django folder:</p> <pre><code>import pkg_resources pkg_resources.declare_namespace(__name__) </code></pre> <p>When python finds this module, it runs this code to register it as a namespace.</p>
python|django|web-applications
1
1,901,998
15,524,627
Passing values between threads using queue module in Python
<p>I am looking for a way to pass values (e.g. integers, arrays) between multiple threads in Python. I understand that this task can be achieved by using the Queue module, but I am not very familiar with either Python or this specific module.</p> <p>I have the following scenario: each thread needs to do some calculations based on its own data or data from other threads. Also, each thread knows which other thread holds the data it needs for a specific job (all threads have an array of all threads, so any thread knows that for a task X it needs to get the data from a specific thread (row, col) in that array).</p> <p>How can this communication between threads be done using the Queue module, or perhaps another technique (the Queue module seemed to be the right thing for this job)? Any help is most appreciated. Thanks a lot.</p>
<h1>Using queues</h1> <p>Usually, a queue is used in a scenario with a bunch of worker threads that get their jobs from the queue. Free threads wait on the queue for new jobs to be put in it. A job is then executed by one thread while all remaining threads wait for the next job. If more jobs are posted than there are threads available, the queue starts to fill up.</p> <p>That doesn't apply to your scenario as you describe it. Maybe you can just read the data directly without putting it in a queue. If you write to shared data structures, you may want to consider a locking strategy.</p> <p>You should read up on parallel programming in general. The concepts are fairly language-independent. Then you can read a tutorial about threads in Python. There is plenty of material on the internet about both topics.</p> <p><em>Edit:</em></p> <h1>Communication between threads using threading.Event</h1> <p>The simplest way to communicate between two threads is a <code>threading.Event</code>. The event can be set to true or false. Usually, one thread sets the event and another thread checks the value of the event and acts accordingly. For example, the event could indicate that there is something new to do. The indicating thread first fills up the data structures that are necessary for the upcoming task and then sets the event to true. Another thread that was waiting on the event is activated once the event is true. Subsequently, it reads out the data structures and performs the task, as in the sketch below.</p>
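<p>A minimal sketch of that pattern (the shared list and the payload are placeholders):</p> <pre><code>import threading

shared_data = []
data_ready = threading.Event()

def producer():
    shared_data.append(42)  # fill the structures needed for the task
    data_ready.set()        # signal the waiting thread

def consumer():
    data_ready.wait()       # block until the producer signals
    print(shared_data.pop())  # read the data and perform the task

t1 = threading.Thread(target=consumer)
t2 = threading.Thread(target=producer)
t1.start()
t2.start()
t1.join()
t2.join()
</code></pre>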
python|multithreading|queue|python-multithreading
2
1,901,999
25,336,589
mongoengine: SyntaxError: invalid syntax
<p>I got an error when using mongoengine and I don't know the reason.</p> <p>This is my invalid syntax error:</p> <pre><code>Traceback (most recent call last):
...
File "/home/mictadlo/.virtualenvs/unisnp/lib/python2.7/site-packages/mongoengine/document.py", line 4, in &lt;module&gt;
    import pymongo
File "pymongo.py", line 33
    }
    ^
SyntaxError: invalid syntax
</code></pre> <p>with this code:</p> <pre><code>from mongoengine import *

connect('dbtest')

class Test(Document):
    tag = StringField(required=True)
    tlists = ListField(EmbeddedDocumentField('Tlist'))

class Tlist(EmbeddedDocument):
    ref = StringField(required=True)

for i in [('test1', "a"), ('test2', "b"), ('test3', "c"), ('test1', "a"), ('test2', "b"), ('test3', "c")]:
    test = Test()
    test.tag = i[0]

    tlist = Tlist()
    tlist.ref = i[1]

    test.tlists.append(tlist)

    test.save()
</code></pre> <p>What did I do wrong?</p>
<p>I just copied and pasted the content into the IDE and it works now.</p>
python|mongodb|pymongo|mongoengine
0