Columns: Unnamed: 0 (int64, 0 to 1.91M) · id (int64, 337 to 73.8M) · title (string, 10 to 150 chars) · question (string, 21 to 64.2k chars) · answer (string, 19 to 59.4k chars) · tags (string, 5 to 112 chars) · score (int64, -10 to 17.3k)
1,908,500
53,559,068
Use keras ImageDataGenerator.flow_from_directory() with Talos Scan()
<p>Talos is a module that allows you to do hyperparameter tuning on Keras models you've already written code for. The conventional way it is used in examples is with the <code>Scan</code> class, which is instantiated with <code>x</code> and <code>y</code> parameters. These parameters should contain an array with the training data and labels respectively.</p> <pre><code>def modelbuilder(x_train, y_train, x_val, y_val, params): # modelbuilding out = model.fit(x_train, y_train) return model, out talos.Scan(x, y, params=params, model=modelbuilder) </code></pre> <p>However, Keras provides a second way to import data with the <code>ImageDataGenerator</code> class: instead of an array you just need a directory with the train/validation images.</p> <pre><code>train_datagen = ImageDataGenerator() train_generator = train_datagen.flow_from_directory( train_data_dir, batch_size=batch_size ) </code></pre> <p>It's unclear to me how I can <code>Scan</code> this: the data generation involves a hyperparameter (batch size), which should be set inside the <code>modelbuilder</code> function, but at the same time <code>Scan</code> requires the data arguments to be provided as arrays. Any suggestion on how I can combine Talos with the ImageDataGenerator?</p>
<p>You can now use fit_generator() in Talos experiments. See the <a href="https://github.com/autonomio/talos/issues/11" rel="nofollow noreferrer">corresponding issue</a> for more information.</p> <p>There are no specific "how to" instructions because, in accordance with the Talos philosophy, you can use fit_generator exactly the way you would use it with a standalone Keras model. Just replace <code>model.fit(...)</code> with <code>model.fit_generator(...)</code> and use a generator as per your need.</p>
python|keras|talos
0
1,908,501
45,770,918
Python - retrieving whether article has author
<p>I am trying to write a Python script that retrieves whether an article has an author or not.</p> <p>I wrote the following:</p> <pre><code>s = "https://www.nytimes.com/2017/08/18/us/politics/steve-bannon-trump-white-house.html?hp&amp;action=click&amp;pgtype=Homepage&amp;clickSource=story-heading&amp;module=a-lede-package-region&amp;region=top-news&amp;WT.nav=top-news" def checkForAuthor(): r = requests.get(s) return "By" in r.text print(checkForAuthor()) </code></pre> <p>The issue is that the function <code>checkForAuthor</code> returns <code>True</code> even when there's no author, because it searches the whole HTML content for the word. Is there better logic for finding an author without searching the whole document, such as searching within a header so I don't even have to search the article content? I need to make this general so that it will give me the result for any website I search. Not sure there's anything out there like that.</p>
<p>To parse the html and look for the data you want, you should use the <code>BeautifulSoup</code> library.</p> <p>In the html of your URL, there is a <code>meta</code> tag with the author:</p> <pre><code>&lt;meta content="By MAGGIE HABERMAN, MICHAEL D. SHEAR and GLENN THRUSH" name="byl"/&gt; </code></pre> <p>So, to check if there is an author, you need to find it by its name (<code>byl</code>):</p> <pre><code>import requests from bs4 import BeautifulSoup s = "https://www.nytimes.com/2017/08/18/us/politics/steve-bannon-trump-white-house.html?hp&amp;action=click&amp;pgtype=Homepage&amp;clickSource=story-heading&amp;module=a-lede-package-region&amp;region=top-news&amp;WT.nav=top-news" def checkForAuthor(): soup = BeautifulSoup(requests.get(s).content, 'html.parser') meta = soup.find('meta', {'name': 'byl'}) return meta is not None </code></pre> <p>In fact, you can also get the author name with <code>meta["content"]</code></p>
python
1
1,908,502
45,754,774
Using pywin32 (XLWINGS) how do you read the text of an existing comment?
<p>I can set and delete comments in an Excel sheet but am unable to get (read) the contents of an existing comment. xlwings doesn't have a method for it, so you need to drop down to the COM object.</p> <pre><code>import xlwings as xw wb = xw.Workbook.active() xw.Range('A1').api.AddComment('Some Text') xw.Range('A1').api.DeleteComment() xw.Range('A1').api.AddComment('More Text') # Sadness on my best effort so far comment_text = xw.Range('A1').api.Comment.Shape.TextFrame.Characters.Text </code></pre>
<p>Neither the code in the OP nor the suggested answer works for me. Here is how I did it in xlwings version 0.11.5 (on Windows if that matters; Excel 2013, 2016, 2019).</p> <p>Adding a comment to a cell (please note you have to clear the comment if a comment already exists!):</p> <pre><code>import xlwings as xw path_to_excel_file = r'c:\temp\test.xlsx' wb = xw.Book(path_to_excel_file) sheet = wb.sheets['Sheet1'] coordinate = (1,1) comment = "test comment" sheet.range(coordinate).api.ClearComments() sheet.range(coordinate).api.AddComment(comment) </code></pre> <p>Reading the value of a comment:</p> <pre><code>import xlwings as xw path_to_excel_file = r'c:\temp\test.xlsx' wb = xw.Book(path_to_excel_file) sheet = wb.sheets['Sheet1'] coordinate = (1,1) xlsx_comment = sheet.range(coordinate).api.Comment if xlsx_comment is not None: print(xlsx_comment.Text()) else: print("No comment in this cell.") </code></pre> <p>I always have to google how to do it and I come across this thread, so I figured this should be documented somewhere. Happy hunting!</p>
python|excel|pywin32|xlwings
1
1,908,503
46,172,809
unparsable piece of python code for a java developer
<p>I am coming from the Java world and I really have a hard time understanding the following piece of code.</p> <pre><code>sortIx=['a2', 'a4', 'a1', 'a3', 'a5'] cItems=[sortIx] print cItems while len(cItems)&gt;0: cItems=[i[j:k] for i in cItems for j,k in ((0,len(i)/2), (len(i)/2,len(i))) if len(i)&gt;1] print cItems </code></pre> <p>What exactly does this line do: <code>cItems=[i[j:k] for i in cItems for j,k in ((0,len(i)/2), (len(i)/2,len(i))) if len(i)&gt;1]</code>? How would you write this in Java (or Scala or Groovy)?</p> <p><strong>EDIT</strong></p> <p>Thanks to Reblochon Masque I was able to understand this! If anyone is interested, translated into Groovy the statement would be something like this:</p> <pre><code>cItems = cItems.findAll { it -&gt; it.size() &gt; 1} .collectMany { it -&gt; [it.subList(0, it.size().intDiv(2)), it.subList(it.size().intDiv(2), it.size())] } </code></pre>
<p>What does this line do?</p> <pre><code>cItems = [i[j:k] for i in cItems for j, k in ((0, len(i) / 2), (len(i) / 2, len(i))) if len(i) &gt; 1] </code></pre> <p>It partitions each inner list contained in <code>cItems</code> into two sublists of half the size (each containing half of the elements); a sublist that contains only one element is dropped.</p>
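To see the halving in action, here is a minimal runnable sketch of the same comprehension. Note this is a Python 3 adaptation: `//` replaces the `/` that Python 2 uses for integer division in the original.

```python
def halve_all(items):
    # One pass of the comprehension: split every inner list of length > 1
    # into its two halves; single-element lists are dropped.
    return [i[j:k]
            for i in items
            for j, k in ((0, len(i) // 2), (len(i) // 2, len(i)))
            if len(i) > 1]

cItems = [['a2', 'a4', 'a1', 'a3', 'a5']]
while len(cItems) > 0:
    cItems = halve_all(cItems)
    print(cItems)
```

Successive iterations print `[['a2', 'a4'], ['a1', 'a3', 'a5']]`, then `[['a2'], ['a4'], ['a1'], ['a3', 'a5']]`, then `[['a3'], ['a5']]`, and finally `[]`, which ends the loop.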
java|python
3
1,908,504
33,118,101
Django Nginx uWSGI: Could not connect to the requested server host
<p>I tried to follow <a href="https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-14-04" rel="nofollow">this tutorial</a> for deploying a Django app on EC2, however I am getting an error:</p> <blockquote> <p>"Could not connect to the requested server host"</p> </blockquote> <p>when trying to deploy the first site as described in the tutorial. The only thing I changed is the server_name from firstsite.com to the public IP of the machine. Please help me figure out where to look.</p>
<p>Fixed my error. Firstly, it was my browser caching the EC2 page; secondly, the symbolic link to my nginx config had not been updated. I initially uninstalled nginx and then tried to replicate the scenario, then I updated my symbolic links, which got my site running.</p>
python|django|nginx|amazon-ec2
0
1,908,505
13,101,793
Django production - No db table load in admin
<p>I'm building a Django app and now I'm in production. I have this problem: after performing manage.py syncdb (all is OK) I go into the admin and I cannot find the model tables. My admin.py file is present and this is my urls.py file:</p> <pre><code>from django.conf.urls.defaults import patterns, include, url from django.contrib import admin # Uncomment the next two lines to enable the admin: # from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', # Examples: # url(r'^$', 'stambol.views.home', name='home'), # url(r'^stambol/', include('stambol.foo.urls')), # Uncomment the admin/doc line below to enable admin documentation: # url(r'^admin/doc/', include('django.contrib.admindocs.urls')), # Uncomment the next line to enable the admin: url(r'^admin/', include(admin.site.urls)), ) </code></pre> <p>Where is the problem?</p>
<p>Make sure you have the lines below uncommented inside <code>INSTALLED_APPS</code> in <code>settings.py</code>, then run <code>./manage.py syncdb</code> again. They should look as shown below, without the <code>#</code> in front of them:</p> <pre><code> # Uncomment the next line to enable the admin: 'django.contrib.admin', # Uncomment the next line to enable admin documentation: 'django.contrib.admindocs', </code></pre>
python|django|admin
0
1,908,506
12,935,891
Converting string to base36: inconsistencies between languages
<p>I have noticed some inconsistencies between Python and JavaScript when converting a string to base36.</p> <p><strong>Python Method:</strong></p> <pre><code>&gt;&gt;&gt; print int('abcdefghijr', 36) </code></pre> <p>Result: 37713647386641447</p> <p><strong>Javascript Method:</strong></p> <pre><code>&lt;script&gt; document.write(parseInt("abcdefghijr", 36)); &lt;/script&gt; </code></pre> <p>Result: 37713647386641450</p> <p>What causes the different results between the two languages? What would be the best approach to produce the same results regardless of the language?</p> <p>Thank you.</p>
<p>That number takes 56 bits to represent. JavaScript's numbers are actually <a href="http://en.wikipedia.org/wiki/Double_precision" rel="nofollow">double-precision binary floating point numbers</a>, or <code>double</code> for short. These are 64 bits in total, and can represent a far wider <em>range</em> of values than 64 bit integers, but due to how they achieve that (they represent a number as <code>mantissa * 2^exponent</code>), they cannot represent <em>all</em> numbers in that range, just the ones that are a multiple of <code>2^exponent</code> where the multiple fits into the mantissa (which includes 2^0 = 1, so you get <em>all</em> integers the mantissa can handle directly). The mantissa is 53 bits, which is insufficient for this number. So it gets rounded to a number which <em>can</em> be represented.</p> <p>What you can do is use an arbitrary-precision number type defined by a third-party library like <code>gwt-math</code> or <code>Big.js</code>. These numbers aren't hard to implement if you know your school arithmetic. Doing it efficiently is another matter, but also an area of extensive research. And not <em>your</em> problem if you use an existing library.</p>
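You can verify the rounding from Python itself, since Python ints are arbitrary precision: force the exact value through a 64-bit double and see where it lands.

```python
n = int('abcdefghijr', 36)
print(n)                 # 37713647386641447 -- the exact value
print(n.bit_length())    # 56 -- one bit more than the 53-bit mantissa can hold exactly

rounded = int(float(n))  # round the exact value to the nearest representable double
print(rounded)           # 37713647386641448
```

In this range a double's ulp is 8, so the exact value 37713647386641447 rounds to the nearest multiple of 8 for the mantissa, 37713647386641448. JavaScript then prints that double using a shortest round-tripping decimal string, which is why it displays 37713647386641450.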
javascript|python|base36
10
1,908,507
21,553,428
Passing values to a dict whilst in a for loop in Python
<p>So, I have a folder which has subfolders, each containing 40 images.</p> <p>I am iterating over every image within each subfolder using the following code:</p> <pre><code>for (dirname, dirnames, filenames) in os.walk('C:\.....'): for filename in filenames: </code></pre> <p>The images are titled as follows:</p> <p><strong>1st folder:</strong> IM-0239-0001.dcm, IM-0239-0002.dcm, IM-0239-0003.dcm...........IM-0239-0040.dcm</p> <p><strong>2nd folder</strong> IM-0248-0001.dcm, IM-0248-0002.dcm, IM-0248-0003.dcm.....IM-0248-0040.dcm</p> <p>What I want to do is perform an operation on each of the 40 images in a sub-folder, get the average of these, and then export this value along with the sub-folder identifier, e.g. "0239", to a dictionary outside of the for loop.</p> <p>I am having trouble with passing the average value BEFORE moving on to the next sub-folder. My ideal solution, if say 4 sub-folders exist, passes 4 sub-folder IDs and their corresponding average values to a dictionary.</p> <p>I have tried using a .split-based function (below) combined with an if statement inside the above loop that compares parts[1] in order to detect a sub-folder ending:</p> <pre><code>parts = image_name.split('-') return '-'.join((parts[1], parts[0], parts[2])) </code></pre> <p>but I am looking for a more Pythonic, chic and faster way of doing this.</p> <p>Help much appreciated.</p>
<p>If I understand you correctly, you have one image directory with one level of subdirectories containing images. You want to process the images grouped by the containing subdirectory.</p> <p>You can first read in the structure:</p> <pre><code>#! /usr/bin/python3 import os tobeprocessed = {} for curdir, subdirs, files in os.walk('/home/lorenzo/stackoverflow/images'): if not files: continue tobeprocessed[curdir] = files print(tobeprocessed) </code></pre> <p>Now you have a dictionary whose keys are the folders and whose values are the files they contain, which you can process as you see fit.</p>
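Building on that, the averaging step from the question can be sketched as below. The `IM-XXXX-YYYY.dcm` naming is taken from the question; the `process` callable is a hypothetical stand-in for whatever per-image computation you do (here it just measures the filename length so the sketch runs without image files).

```python
import os
import re

def average_per_folder(root, process=len):
    # `process` is a placeholder for your real per-image operation;
    # by default it just returns the filename length.
    averages = {}
    for curdir, subdirs, files in os.walk(root):
        dcm_files = [f for f in files if f.endswith('.dcm')]
        if not dcm_files:
            continue
        # Pull the sub-folder identifier (e.g. "0239") out of "IM-0239-0001.dcm"
        match = re.match(r'IM-(\d+)-\d+\.dcm$', dcm_files[0])
        key = match.group(1) if match else os.path.basename(curdir)
        values = [process(name) for name in dcm_files]
        averages[key] = sum(values) / float(len(values))
    return averages
```

This walks the tree once and fills the dictionary as each sub-folder is finished, so no end-of-folder detection inside the inner loop is needed.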
python|python-2.7|for-loop|dictionary
2
1,908,508
21,869,048
CSV file parsing (python)
<p>I am having some issues parsing a csv file with 14 columns.</p> <pre><code>for row in training_set_data: if skiprow: skiprow = False else: for r in range(len(row)): row[r] = float(row[r]) training_set.append(row) </code></pre> <p>This seems to be working to just get a list of the vectors, but the next thing I want to do is collect the first 13 entries in each row and make one set of vectors, and then collect the last column and make a separate set of vectors of that. My code currently looks like this for the 13-entry vectors:</p> <pre><code>def inputVector(inputs): for r in inputs: inputs.pop(13) return inputs </code></pre> <p>This is not working, and when I go to print it, it is still 14 entries long. Can anyone tell me what I am doing wrong? Sorry if the question doesn't make too much sense, I am pretty new to coding.</p> <p>Edit: First 11 lines of the csv file and the call to inputVector</p> <pre><code>53,1,3,130,197,1,2,152,0,1.2,3,0,3,0 42,1,4,136,315,0,0,125,1,1.8,2,0,6,1 46,1,4,140,311,0,0,120,1,1.8,2,2,7,1 42,1,4,140,226,0,0,178,0,0,1,0,3,0 54,1,4,140,239,0,0,160,0,1.2,1,0,3,0 67,0,3,115,564,0,2,160,0,1.6,2,0,7,0 65,0,3,140,417,1,2,157,0,0.8,1,1,3,0 56,0,4,134,409,0,2,150,1,1.9,2,2,7,1 65,0,3,160,360,0,2,151,0,0.8,1,0,3,0 57,0,4,120,354,0,0,163,1,0.6,1,0,3,0 55,0,4,180,327,0,1,117,1,3.4,2,0,3,1 inputV = inputVector(training_set) </code></pre>
<p>Try something like this:</p> <pre><code>first_13s = [] last_1s = [] for r in inputs: first_13s.append(r[:13]) last_1s.append(r[13]) </code></pre> <p>Also, you can replace a number of lines in your first block of code just by using <code>training_set_data[1:]</code>.</p> <p>Python list slicing is very handy: <a href="https://stackoverflow.com/questions/509211/pythons-slice-notation">Explain Python&#39;s slice notation</a></p> <p>Also, you can use a list comprehension for the float conversion:</p> <pre><code>for r in range(len(row)): row[r] = float(row[r]) </code></pre> <p>becomes</p> <pre><code>row = [float(r) for r in row] </code></pre> <p>so the first block can be done like this:</p> <pre><code>for row in training_set_data[1:]: row = [float(r) for r in row] training_set.append(row) </code></pre>
python|csv
2
1,908,509
21,740,359
Python MySQLdb TypeError: not all arguments converted during string formatting
<p>Upon running this script:</p> <pre><code>#! /usr/bin/env python import MySQLdb as mdb import sys class Test: def check(self, search): try: con = mdb.connect('localhost', 'root', 'password', 'recordsdb'); cur = con.cursor() cur.execute( "SELECT * FROM records WHERE email LIKE '%s'", search ) ver = cur.fetchone() print "Output : %s " % ver except mdb.Error, e: print "Error %d: %s" % (e.args[0],e.args[1]) sys.exit(1) finally: if con: con.close() test = Test() test.check("test") </code></pre> <p>I get an error of:</p> <pre><code>./lookup Traceback (most recent call last): File "./lookup", line 27, in &lt;module&gt; test.check("test") File "./lookup", line 11, in creep cur.execute( "SELECT * FROM records WHERE email LIKE '%s'", search ) File "/usr/local/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 187, in execute query = query % tuple([db.literal(item) for item in args]) TypeError: not all arguments converted during string formatting </code></pre> <p>I have zero idea why. I'm trying to do parameterized querys, but it's been nothing but a pain. I'm somewhat new to Python, so it's probably an obvious problem.</p>
<p>Instead of this:</p> <pre><code>cur.execute( "SELECT * FROM records WHERE email LIKE '%s'", search ) </code></pre> <p>Try this:</p> <pre><code>cur.execute( "SELECT * FROM records WHERE email LIKE %s", [search] ) </code></pre> <p>See the MySQLdb <a href="http://mysql-python.sourceforge.net/MySQLdb.html#some-examples" rel="noreferrer">documentation</a>. The reasoning is that <code>execute</code>'s second parameter represents a list of the objects to be converted, because you could have an arbitrary number of objects in a parameterized query. In this case, you have only one, but it still needs to be an iterable (a tuple instead of a list would also be fine). </p>
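The failure can be reproduced in plain Python. Per the traceback, MySQLdb runs `query % tuple([db.literal(item) for item in args])`, so when `args` is the bare string `"test"` it is iterated character by character, and four arguments meet a single `%s` placeholder:

```python
query = "SELECT * FROM records WHERE email LIKE '%s'"

# Passing the bare string makes it behave like a sequence of 4 one-character args.
try:
    query % tuple("test")          # tuple("test") == ('t', 'e', 's', 't')
except TypeError as e:
    print(e)                       # not all arguments converted during string formatting

# Wrapping it in a list gives exactly one argument for the one placeholder.
print(query % tuple(["test"]))     # SELECT * FROM records WHERE email LIKE 'test'
```

This is why the fix is to pass `[search]` (or `(search,)`) rather than `search`.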
python|python-2.7
111
1,908,510
24,930,394
scikit-image doesn't load image
<p>I'm testing scikit-image for the first time. (Congratulations on the initiative!)</p> <pre><code>img = skimage.data.camera() print img.shape img_l = skimage.data.load(filepath) print img_l.shape img_i = skimage.data.imread(filepath) print img_i.shape </code></pre> <p>The output I get is:</p> <pre><code>() () () </code></pre> <p>Any ideas? Thanks in advance.</p>
<p>It is easier to see the image and the variables with the Spyder IDE; there you can understand what is wrong. I recommend installing WinPython, which comes with scikit-image, Spyder and lots of other libraries.</p> <pre><code>from skimage import data, io img = data.camera() print (img.shape) io.imshow(img) io.show() </code></pre>
python|python-2.7|scikit-image
0
1,908,511
24,790,655
Several request handlers for a single url
<p>The typical way to link a request handler to a URL looks like this:</p> <pre><code>application = webapp2.WSGIApplication([('/', RequestHandler1)]) </code></pre> <p>I would like to link several request handlers to a single URL. Is this possible? I was thinking something like this:</p> <pre><code>application = webapp2.WSGIApplication([('/', (RequestHandler1, RequestHandler2))]) </code></pre>
<p>You can do a regex match on the URL parameter, for example:</p> <pre><code>application = webapp2.WSGIApplication([ ('/([a-z]+)', RequestHandler1), # matches the word parameter ('/([0-9]+)', RequestHandler2) # matches the numeric parameter ]) </code></pre> <p>This way you can separate the business logic based on the parameter condition.</p> <p>Alternatively, why not set up GET parameters?</p> <pre><code>/application?paramcondition="A" --&gt; URL /application?paramcondition="0" --&gt; URL class blabla(webapp2.RequestHandler): def get(self): param = self.request.get("paramcondition") if param == "A": # do something elif param == "0": # do another </code></pre> <p>That way you only need one URL handler.</p>
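The first-match-wins dispatch that webapp2 performs over an ordered route list can be sketched with nothing but the `re` module (the handler names here are just placeholders, not real webapp2 classes):

```python
import re

routes = [
    (r'^/([a-z]+)$', 'word_handler'),     # matches the word parameter
    (r'^/([0-9]+)$', 'numeric_handler'),  # matches the numeric parameter
]

def dispatch(path):
    # Try each route in order; return the first handler whose pattern matches,
    # together with the captured URL parameter.
    for pattern, handler in routes:
        match = re.match(pattern, path)
        if match:
            return handler, match.group(1)
    return None

print(dispatch('/hello'))   # ('word_handler', 'hello')
print(dispatch('/42'))      # ('numeric_handler', '42')
```

Because the list is ordered, a path matching several patterns always goes to the earliest route, which is why the more specific pattern should come first.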
python|google-app-engine|python-2.7|webapp2
0
1,908,512
41,185,282
Python - Generate random vertices around center (x, y) position
<p>I haven't been able to find an article or a Stack Overflow question on this subject as I need it.</p> <p>I'm making a 2D game and I want to generate a randomly shaped asteroid. Every asteroid has a center, an <code>X, Y</code> position; I want to generate, let's say, 8 vertices around it (the number of vertices is variable).</p> <p><a href="https://i.stack.imgur.com/24q4M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/24q4M.png" alt="enter image description here"></a></p> <p>If I can achieve this shape, I then want to add a random element: a variable and random distance from the center, to create a random rock shape.</p> <p><a href="https://i.stack.imgur.com/Y2nRw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y2nRw.png" alt="enter image description here"></a></p> <p>I've researched the trigonometry concept of polar to <strong>Cartesian coordinates</strong>, but haven't been able to figure out how to apply it to this goal. Although there are tutorials for it in JavaScript, they use libraries and functions that I would have to write in Python from scratch.</p> <p><strong>EDIT:</strong> So your answer is focused on explaining the formula to get the vertices around the center; these are the things that I already know how to do:</p> <p>1) Draw a polygon in the graphics library I'm using 2) Get random numbers</p> <p>Thank you so much in advance!</p>
<p>You can use the <code>random</code> library to randomly sample in polar coordinates. Sample an orientation, then sample a radial distance. In the following example I chose that the orientation would be uniformly distributed in <code>[0, 2*pi]</code> and the radius would be normally distributed with user input mean and standard deviation.</p> <pre><code>import random import math def generate_points(center_x, center_y, mean_radius, sigma_radius, num_points): points = [] for i in range(num_points): theta = random.uniform(0, 2*math.pi) radius = random.gauss(mean_radius, sigma_radius) x = center_x + radius * math.cos(theta) y = center_y + radius * math.sin(theta) points.append([x,y]) return points </code></pre> <p>As an example of how to call it</p> <pre><code>&gt;&gt;&gt; generate_points(5.0, 7.0, 1.0, 0.1, 8) [[4.4478263120757875, 6.018608023032151], [4.407825651072504, 6.294849028359581], [5.0570272843718085, 6.17834681191539], [5.307793789416231, 6.156715230672773], [4.368508167422119, 7.712616387293795], [5.327972045495855, 5.917733119760926], [5.748935178651789, 6.437863588580371], [3.9312163910881033, 6.388093041756519]] </code></pre> <p>If you want the points to be wound in a particular order, then I would use something like <code>numpy.linspace</code> to walk either clockwise or counterclockwise to sample theta. For example:</p> <pre><code>import random import math import numpy as np def generate_points(center_x, center_y, mean_radius, sigma_radius, num_points): points = [] for theta in np.linspace(0, 2*math.pi - (2*math.pi/num_points), num_points): radius = random.gauss(mean_radius, sigma_radius) x = center_x + radius * math.cos(theta) y = center_y + radius * math.sin(theta) points.append([x,y]) return points </code></pre> <p>If you do not want to install <code>numpy</code> you can write your own similar version of <code>linspace</code></p> <pre><code>def linspace(start, stop, num_steps): values = [] delta = (stop - start) / num_steps for i in range(num_steps): values.append(start + i * delta) return values </code></pre>
python|pygame
6
1,908,513
38,164,337
gevent spawning - in sequence rather than concurrent
<p>I am trying my hand at asynchronous programming with gevent and am not able to understand the way my code works.</p> <p>I am trying to ping google.com using sockets on a closed port (22) and expecting the <code>ping</code> function to happen concurrently, but it is not happening.</p> <p>I have Python code as below</p> <pre><code>class Ping(object): def checkReachability(self,index): sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM) sock.settimeout(2) print index try: sock.connect(('google.com',22)) gevent.sleep(0) except socket.error as e: print str(e) sock.close() def ping(self): threads = [gevent.spawn(self.checkReachability, i) for i in range(3)] gevent.joinall(threads) if __name__ == '__main__': m = Ping() m.ping() </code></pre> <p>I am expecting to see:</p> <pre><code>0 1 2 timeout timeout timeout </code></pre> <p>This is what I am getting</p> <pre><code>0 timed out 1 timed out 2 timed out </code></pre> <p>Any help would be helpful.</p> <p>Thanks</p>
<p>Figured out the solution.</p> <p>Adding a monkey patch at import time solved my issue.</p> <pre><code>from gevent import monkey monkey.patch_socket() </code></pre>
python|gevent
0
1,908,514
31,040,525
Insert element in Python list after every nth element
<p>Say I have a Python list like this:</p> <pre><code>letters = ['a','b','c','d','e','f','g','h','i','j'] </code></pre> <p>I want to insert an 'x' after every nth element, let's say three characters in that list. The result should be:</p> <pre><code>letters = ['a','b','c','x','d','e','f','x','g','h','i','x','j'] </code></pre> <p>I understand that I can do that with looping and inserting. What I'm actually looking for is a Pythonic way, maybe a one-liner?</p>
<p>I've got two one liners.</p> <p>Given:</p> <pre><code>&gt;&gt;&gt; letters = ['a','b','c','d','e','f','g','h','i','j'] </code></pre> <ol> <li><p>Use <code>enumerate</code> to get index, add <code>'x'</code> every 3<sup>rd</sup> letter, <em>eg</em>: <code>mod(n, 3) == 2</code>, then concatenate into string and <code>list()</code> it.</p> <pre><code>&gt;&gt;&gt; list(''.join(l + 'x' * (n % 3 == 2) for n, l in enumerate(letters))) ['a', 'b', 'c', 'x', 'd', 'e', 'f', 'x', 'g', 'h', 'i', 'x', 'j'] </code></pre> <p>But as <a href="https://stackoverflow.com/users/2707864/sancho-s">@sancho.s</a> <a href="https://stackoverflow.com/questions/31040525/insert-element-in-python-list-after-every-nth-element/31040944?noredirect=1#comment88477417_31040944">points out</a> this doesn't work if any of the elements have more than one letter.</p></li> <li><p>Use nested comprehensions to flatten a list of lists<sup>(a)</sup>, sliced in groups of 3 with <code>'x'</code> added if less than 3 from end of list.</p> <pre><code>&gt;&gt;&gt; [x for y in (letters[i:i+3] + ['x'] * (i &lt; len(letters) - 2) for i in xrange(0, len(letters), 3)) for x in y] ['a', 'b', 'c', 'x', 'd', 'e', 'f', 'x', 'g', 'h', 'i', 'x', 'j'] </code></pre></li> </ol> <p>(a) <code>[item for subgroup in groups for item in subgroup]</code> flattens a jagged list of lists.</p>
python|list|indexing|insert|slice
21
1,908,515
30,901,180
Decode error - output not utf-8
<p>If I try to have Python print the string <code>"«»••"</code>, it instead returns </p> <p><code>[Decode error - output not utf-8]</code>. </p> <p>How can I fix this? I'm using Sublime Text 2, if it helps.</p> <p>EDIT: Apparently, </p> <pre><code>print("«»••") </code></pre> <p>works, but not</p> <pre><code>print("Hello world! «»••") </code></pre> <p>Note that I'm using this at the top of the file:</p> <pre><code># -*- coding: utf-8 -*- </code></pre> <p>EDIT x2:</p> <pre><code>repr("Hello world! «»••") </code></pre> <p>returns</p> <pre><code>'Hello world! \xc2\xab\xc2\xbb\xe2\x80\xa2\xe2\x80\xa2' </code></pre>
<p>I'm not 100% sure if this will solve your problem, but if I try formatting:</p> <pre><code># -*- coding: utf8 -*- print("Hello world! %s" %"«»••") </code></pre> <p>I'm able to produce the output that isn't working for you.</p> <p>However, nothing shows up if they are both contained in the same string. I get the same error if I try to concatenate the strings as well.</p>
python|sublimetext2
0
1,908,516
28,918,821
Flask - SQLAlchemy - clear tables as well as many-to-many linking table
<p>I have two relations: Service and Stop. They form a many-to-many relationship with each other. I would like to clear all data from both tables while also emptying their many-to-many linking table.</p> <p>I tried the following.</p> <pre><code>Service.query.delete() Stop.query.delete() </code></pre> <p>This cleared both tables, however <strong>all data in the linking table</strong> remained untouched.</p> <p>Service model:</p> <pre><code>class Service(Base): __tablename__ = 'service' id = Column(Integer, primary_key=True) name = Column(String(250), nullable=False) service_type = Column(String(250), nullable=False) description = Column(String(250), nullable=False) </code></pre> <p>Stop model:</p> <pre><code>class Stop(Base): __tablename__ = 'stop' id = Column(Integer, primary_key=True) name = Column(String(250), nullable=False) latitude = Column(Float(), nullable=False) longitude = Column(Float(), nullable=False) services = relationship("Service", secondary=stop_service, backref="stop") </code></pre> <p>Can someone please tell me how to automatically clear the linking table too, without having to loop through the "stop" table?</p>
<p>Just set <a href="http://docs.sqlalchemy.org/en/rel_0_9/core/constraints.html#on-update-on-delete" rel="nofollow"><code>ondelete</code> to <code>CASCADE</code></a> in the definition of your joining table (<code>stop_service</code>). Note that the <code>ForeignKey</code> strings reference the table names (<code>service</code>, <code>stop</code>), not the class names:</p> <pre><code>stop_service = Table("stop_service", meta, Column("service_id", ForeignKey('service.id', ondelete="CASCADE")), Column("stop_id", ForeignKey('stop.id', ondelete="CASCADE"))) </code></pre>
python|database|flask|sqlalchemy|cascade
2
1,908,517
8,582,177
Python: calling external functions within functions
<p>I am new to Python, and am having a problem: I want to write a function (<code>Jacobian</code>) which takes a function and a point as arguments, and returns the <a href="http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant" rel="nofollow">jacobian</a> of that function at the given point. </p> <p>Unsurprisingly, <code>Jacobian</code> relies on NumPy and SciPy. When I call <code>Jacobian</code> from another script, I get either:</p> <ol> <li>An error that says I cannot import a module into a function (when I have an import statement for NumPy/SciPy in <code>Jacobian</code>) or </li> <li>Errors that various NumPy/Scipy functions (e.g. <code>zeros()</code>) are not defined, (when I omit the import statement to avoid the error mentioned above. </li> </ol> <p>What am I doing wrong?</p> <p>Also, if someone knows of an implementation of <code>Jacobian</code>, that would be useful as well. There doesn't seem to be one in SciPy. </p>
<p>You can import at the module level and then use the imported names from inside any functions. Or you can import any required names directly inside a function.</p> <p>There is one situation where you cannot use <code>import</code> inside a function: you are not allowed to do <code>from somemodule import *</code> because the Python compiler wants to know all of the local variables in the function and with <code>import *</code> it cannot tell in advance what names will be imported.</p> <p>The solution is simple: never use <code>import *</code>, always import exactly the names that you want to use.</p> <p>P.S. It helps if you copy the code that is giving the problem and the <strong>exact</strong> error message you are getting. I'm guessing here that this is your problem but you'll get faster and more accurate answers if you provide the relevant details.</p>
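On the side question: the OP notes there doesn't seem to be a standalone Jacobian in SciPy, but a forward-difference approximation is short enough to write yourself. A minimal pure-Python sketch (the step size `eps` and the list-of-lists return shape are my choices, not from the question):

```python
def jacobian(f, x, eps=1e-6):
    # Forward-difference approximation of the Jacobian of f at point x.
    # f maps a sequence of m floats to a sequence of n floats;
    # returns an n x m matrix as a list of lists.
    fx = f(x)
    J = [[0.0] * len(x) for _ in fx]
    for j in range(len(x)):
        xj = list(x)
        xj[j] += eps           # perturb one coordinate at a time
        fxj = f(xj)
        for i in range(len(fx)):
            J[i][j] = (fxj[i] - fx[i]) / eps
    return J

# f(u, v) = (u*v, u + v) has exact Jacobian [[v, u], [1, 1]]
print(jacobian(lambda p: [p[0] * p[1], p[0] + p[1]], [2.0, 3.0]))
```

With NumPy imported at module level, the same function can be vectorized, which illustrates the import pattern described above: the module-level `import` makes the name available inside every function in the file.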
python|python-3.x
2
1,908,518
52,370,396
Unable to call a function inside Main()
<p>I am a beginner currently working on a personal project to develop a small application in Python 2 using Tkinter. However, I am facing a problem with calling a function that I created. <em>1.</em> Knowing that Tkinter on Python is event-based, I created a button to upload a particular file and then store the values in the file in a list variable. <em>2.</em> After that, I use another button to try to call the print function to print the values stored in the list variable.</p> <p>Hope someone can help me out.</p> <pre><code>def uploadFile(): openFile = tkFileDialog.askopenfilename(filetypes = (("",".csv"),("All files", "*.*"))) with open(openFile) as file: reader = csv.reader(file) xUF = [row for row in reader] return xUF def printUF(): for row in xUF: print row count=0 if count&gt;5: break count += 1 def main(): l1=tk.Label(root,text="Upload File") l1.pack() l1.place(x=55,y=170) #buttons/placements for GUI b1=tk.Button(root,text="Browse", command=uploadFile) b1.pack() b1.place(x=100,y=200) b2=tk.Button(root,text="Print", command=printUF()) b2.pack() root.title("HELLO") root.geometry("500x500") root.mainloop() if __name__ == '__main__': main() </code></pre>
<p>I don't quite know what your error is. But I suspect it is this: </p> <p>You are passing <code>command=printUF()</code> to the <code>Button</code> class when I think you want just <code>command=printUF</code> because <code>Button</code> probably wants a function to call when it's hit. As you have it, you're passing the result of <code>printUF</code> instead of the function itself to <code>Button</code>. The line should look like this:</p> <pre><code>b2=tk.Button(root,text="Print", command=printUF) </code></pre>
python|python-2.7|tkinter
0
1,908,519
19,263,333
Regular Expression that Includes a Character Only If Another Character Precedes It
<p>I'm new to Stack so not sure if I'm asking this right.</p> <p>I'm trying to form a regular expression to match all characters except 3 specific ones (<code>%</code>, <code>&amp;</code>, and <code>$</code>) but I want to ignore that exception if a backslash (<code>\</code>) precedes any of those characters. For example, if I have the string</p> <pre><code>abcd\$&amp; </code></pre> <p>I would want the regular expression to match</p> <pre><code>abcd\$ </code></pre> <p>because a backslash precedes the dollar sign, but not match the <code>&amp;</code> because no backslash precedes it.</p> <p>So far I have:</p> <pre><code>^[^%$&amp;]+ </code></pre> <p>which matches any string that doesn't have the characters (%, $, or &amp;), but it stops at the backslash rather than including the backslash and the next character.</p> <p>Thanks in advance!</p>
<pre><code>^([^%$&amp;\\]|\\.)+$ </code></pre> <p>should work.</p> <p>It also excludes <code>\</code> from the charset and then allows <code>\</code> followed by any character.</p>
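A quick check of the pattern against the examples from the question:

```python
import re

# each character is either "not % $ & or backslash", or a backslash-escaped character
pattern = re.compile(r'^([^%$&\\]|\\.)+$')

print(bool(pattern.match(r'abcd\$')))     # True: the $ is escaped
print(bool(pattern.match('abcd$')))       # False: a bare $ is rejected
print(bool(pattern.match('plain text')))  # True: no special characters at all
```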
python|regex|string
1
1,908,520
19,006,510
Not enough arguments - what is the difference
<p>I have two almost identical programs. One works, the other does not. The program that is not working displays the error "not enough arguments for format string". It probably has something to do with the "%" in the variable dbname, but I can't figure out why one program works and the other does not. In both programs I'm attempting to use a wildcard in a SELECT statement with LIKE.</p> <p>Working program:</p> <pre><code>import subprocess import sys import commands from sqlalchemy import create_engine from sqlalchemy import Date, DateTime from sqlalchemy import create_engine from sqlalchemy import MetaData, Column, Table, ForeignKey from sqlalchemy import Integer, String from sqlalchemy.sql import select engine = create_engine('mysql://UID:PASS@999.999.99.99:9999/access_benchmark_staging', echo=True) dest = engine.connect() dest.execute("truncate table TABAUTH") def dbapull(applid, ssid, host, port, dbname): print "dbapull " + dbname + "" source = pyodbc.connect('Driver={IBM DB2 ODBC DRIVER};Database=' + ssid +';Hostname=' + host + ';Port=' + port + ';Protocol=TCPIP;Uid=user;Pwd=password', echo=True) src = source.cursor() src.execute("SELECT DISTINCT SUBSTR(CURRENT SERVER,1,7) AS SSID, SUBSTR(B.GRANTEE,1,8) AS GRANTEE, B.UPDATEAUTH AS U, B.INSERTAUTH AS I, B.DELETEAUTH AS D, A.CREATOR, B.DBNAME, B.TTNAME, B.ALTERAUTH AS C FROM " + ssid + ".SYSIBM.SYSTABLES A LEFT JOIN " + ssid + ".SYSIBM.SYSTABAUTH B ON A.CREATOR = B.TCREATOR AND A.NAME = B.TTNAME WHERE A.CREATOR IN ('PFPROD','PGPROD','PSPROD','PS','PROD') AND A.DBNAME LIKE " + dbname + " AND (B.UPDATEAUTH &lt;&gt; ' ' OR B.INSERTAUTH &lt;&gt; ' ' OR B.DELETEAUTH &lt;&gt; ' ') AND A.TYPE IN ('T','V') AND B.GRANTEETYPE = ' ' AND A.NAME NOT IN ('DSN_VIEWREF_TABLE','DSN_PGRANGE_TABLE','DSN_SORTKEY_TABLE','DSN_SORT_TABLE','DSN_DETCOST_TABLE','DSN_FILTER_TABLE','DSN_PTASK_TABLE','DSN_STATEMNT_TABLE','DSN_PGROUP_TABLE','DSN_STRUCT_TABLE','DSN_PREDICAT_TABLE','PLAN_TABLE') ORDER BY 2") for row in src: #print (row) row[0] 
= str(row[0]).strip() row[1] = str(row[1]).strip() row[2] = str(row[2]).strip() row[3] = str(row[3]).strip() row[4] = str(row[4]).strip() row[5] = str(row[5]).strip() row[6] = str(row[6]).strip() row[7] = str(row[7]).strip() row[8] = str(row[8]).strip() result = dest.execute("insert ignore into TABAUTH values ('" + applid + "','" + row[0] + "','" + row[1] + "','" + row[2] + "','" + row[3] + "','" + row[4] + "','" + row[5] + "','" + row[6] + "','" + row[7] + "','" + row[8] + "')") dbapull("AAA", "BBB", "CCC", "DDD", "'%PMC%'") dest.close() </code></pre> <p>Non-working program:</p> <pre><code>import subprocess import sys import commands from sqlalchemy import create_engine from sqlalchemy import Date, DateTime from sqlalchemy import create_engine from sqlalchemy import MetaData, Column, Table, ForeignKey from sqlalchemy import Integer, String from sqlalchemy.sql import select engine = create_engine('mysql://UID:PASS@999.999.99.99:9999/access_benchmark_staging', echo=True) dest = engine.connect() dest.execute("truncate table DS_Users") def userpull(appl, dbname): print "DS User pull " + dbname + " " source = create_engine('mysql://UID:PASS@999.999.99.99:9999/access_benchmark_staging', echo=True) src = engine.connect() src.execute("SELECT MF.profile_name AS profile_name, MF.groupuser_access as groupuser_access, MF.group_id as group_id, MF.user_id as user_id, MF.user_name as user_name, MF.default_group as default_group, MF.last_racinit as last_racinit, MF.password_last_changed_date as password_last_changed_date, MF.user_id_status as user_id_status, MF.creation_date as creation_date, ldap.uid as uid, ldap.company as company, ldap.emp_name as emp_name, ldap.title as title, ldap.contract_exp as contact_exp, ldap.dept_name as dept_name, ldap.emp_status as emp_status, ldap.disabled_date as disabled_date, ldap.term_date as term_date, ldap.bus_unit as bus_unit, ldap.manager_id as manager_id FROM (SELECT DST.profile_name, DST.groupuser_access, GRP.group_id, USR.user_id, 
USR.user_name, USR.default_group, USR.last_racinit, USR.password_last_changed_date, USR.user_id_status, USR.creation_date FROM AU_KRC_USER_REPORT USR INNER JOIN AU_KRC_GROUP_REPORT GRP ON USR.user_id = GRP.user_id INNER JOIN AU_KRC_DATASET_REPORT DST ON GRP.group_id = DST.groupuser_id WHERE (DST.profile_name LIKE " + dbname + " AND DST.profile_name NOT IN ('" + appl + ".SYSINFO.ABEND')) AND DST.groupuser_access IN ('UPDAT', 'ALTER') AND DST.groupuser_type = 'GROUP' ) MF LEFT OUTER JOIN ldap.ldap_raw ldap ON MF.user_id = ldap.kmart_mf GROUP BY MF.group_id, MF.user_id, MF.user_name, MF.default_group, MF.last_racinit, MF.password_last_changed_date, MF.user_id_status, MF.creation_date, ldap.uid, ldap.company, ldap.emp_name,ldap.title, ldap.contract_exp, ldap.dept_name, ldap.emp_status, ldap.disabled_date, ldap.term_date, ldap.bus_unit, ldap.manager_id") for row in src: #print (row) row[0] = str(row[0]).strip() row[1] = str(row[1]).strip() row[2] = str(row[2]).strip() row[3] = str(row[3]).strip() row[4] = str(row[4]).strip() row[5] = str(row[5]).strip() row[6] = str(row[6]).strip() row[7] = str(row[7]).strip() row[8] = str(row[8]).strip() row[9] = str(row[9]).strip() row[10] = str(row[10]).strip() row[11] = str(row[11]).strip() row[12] = str(row[12]).strip() row[13] = str(row[13]).strip() row[14] = str(row[14]).strip() row[15] = str(row[15]).strip() row[16] = str(row[16]).strip() row[17] = str(row[17]).strip() row[18] = str(row[18]).strip() row[19] = str(row[19]).strip() row[20] = str(row[20]).strip() result = dest.execute("insert ignore into DS_Users values ('" + appl +"','" + row[0] + "','" + row[1] + "','" + row[2] + "','" + row[3] + "','" + row[4] + "','" + row[5] + "','" + row[6] + "','" + row[7] + "','" + row[8] + "','" + row[9] + "','" + row[10] + "','" + row[11] + "','" + row[12] + "','" + row[13] + "','" + row[14] + "','" + row[15] + "','" + row[16] + "','" + row[17] + "','" + row[18] + "','" + row[19] + "','" + row[20] + "')") userpull("PP", "'%PP.%'") 
</code></pre>
<p>The <code>%</code> characters in <code>dbname</code> (<code>'%PP.%'</code>) are the problem. The MySQL driver behind your SQLAlchemy engine uses the <code>format</code> paramstyle, so the statement goes through Python's <code>%</code> string interpolation before it is sent to the server; a bare <code>%</code> in the SQL is then read as the start of a format placeholder, which produces the "not enough arguments for format string" error. The DB2 ODBC driver in your working program uses <code>?</code> placeholders instead, which is why the same <code>LIKE</code> pattern works there.</p> <p>Either escape every literal percent sign as <code>%%</code>, or, better, pass the pattern as a bound parameter and let the driver handle the quoting.</p>
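The interpolation step behind the error in the title can be reproduced without a database. A sketch of the failure and the `%%` fix (the bound-parameter call at the end is illustrative, not run here):

```python
# What a paramstyle-"format" driver effectively does before sending the query:
template = "SELECT * FROM t WHERE profile_name LIKE '%PP.%'"

try:
    template % ()            # the bare % is read as a format placeholder
except (TypeError, ValueError) as exc:
    print("interpolation failed:", exc)

# Fix 1: double every literal percent sign so it survives interpolation
escaped = template.replace('%', '%%')
print(escaped % ())          # back to the original statement, % intact

# Fix 2 (preferred, hypothetical call): bind the pattern as a parameter
# src.execute("SELECT ... WHERE profile_name LIKE %s", ('%PP.%',))
```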
python
0
1,908,521
62,406,964
Specifying test rows for backtest and machine learning
<p>I want to use a machine learning to predict the price movement of an asset. so far I got the data and results. now I want to back test the model. the premise is very simple: just buy whenever the predicted value is 1 and hold. I want to apply predicting model and iterate over the testing rows from the bottom up to the specified number, check whether the predicted output matches the corresponding label (the label here is -1,1), then do some calculation.</p> <p>here is the code:</p> <pre><code>def backtest(): x = df[['open', 'high', 'low', 'close', 'vol']] y = df['label'] z = np.array(df['log_ret'].values) test_size = 366 rf = RandomForestClassifier(n_estimators = 100) rf.fit(x[:-test_size],y[:-test_size]) invest_amount = 1000 trade_qty = 0 correct_count = 0 for i in range(1, test_size): if rf.predict(x[-i])[0] == y[-i]: correct_count += 1 if rf.predict(x[-i])[0] == 1: invest_return = invest_amount + (invest_amount * (z[-i]/100)) trade_qty += 1 print('accuracy:', (correct_count/test_size)*100) print('total trades:', trade_qty) print('profits:', invest_return) backtest() </code></pre> <p>So far I am stuck on this:</p> <pre><code>--------------------------------------------------------------------------- KeyError Traceback (most recent call last) ~\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance) 2645 try: -&gt; 2646 return self._engine.get_loc(key) 2647 except KeyError: pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() KeyError: -1 During handling of the above exception, another exception occurred: KeyError Traceback (most recent call last) &lt;ipython-input-29-feab89792f26&gt; in &lt;module&gt; 22 23 for i in range(1, test_size): ---&gt; 24 if 
rf.predict(x[-i])[0] == y[-i]: 25 correct_count += 1 26 ~\anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key) 2798 if self.columns.nlevels &gt; 1: 2799 return self._getitem_multilevel(key) -&gt; 2800 indexer = self.columns.get_loc(key) 2801 if is_integer(indexer): 2802 indexer = [indexer] ~\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance) 2646 return self._engine.get_loc(key) 2647 except KeyError: -&gt; 2648 return self._engine.get_loc(self._maybe_cast_indexer(key)) 2649 indexer = self.get_indexer([key], method=method, tolerance=tolerance) 2650 if indexer.ndim &gt; 1 or indexer.size &gt; 1: pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() KeyError: -1 </code></pre>
<p>This code below, solves the problem with a few modifications:</p> <pre><code>def backtest(): x = df[['open', 'high', 'low', 'close', 'vol']] y = df['label'] z = np.array(df['log_ret'].values) test_size = 366 rf = RandomForestClassifier(n_estimators = 100) rf.fit(x[:-test_size],y[:-test_size]) invest_amount = 1000 trade_qty = 0 correct_count = 0 for i in range(1, test_size)[::-1]: if rf.predict(x[x.index == i])[0] == y[i]: correct_count += 1 if rf.predict(x[x.index == i])[0] == 1: invest_return = invest_amount + (invest_amount * (z[i]/100)) trade_qty += 1 print('accuracy:', (correct_count/test_size)*100) print('total trades:', trade_qty) print('profits:', invest_return) backtest() </code></pre> <p><strong>Explaining the modifications:</strong></p> <ol> <li>Accessing the dataframe row by filtering the index <code>x[x.index == i]</code>;</li> <li>Modifying the negative index for a backwards range with fewer adaptations <code>range(1, test_size)[::-1]</code>;</li> </ol> <p><strong>Generating a test case:</strong></p> <pre><code>import numpy as np import pandas as pd from sklearn.ensemble import RandomForestClassifier data = {'open': np.random.rand(1000), 'high': np.random.rand(1000), 'low': np.random.rand(1000), 'close': np.random.rand(1000), 'vol': np.random.rand(1000), 'log_ret': np.random.rand(1000), 'label': np.random.choice([-1,1], 1000)} df = pd.DataFrame(data) </code></pre> <p>This produces the following result:</p> <pre><code>&gt;&gt; backtest() accuracy: 99.72677595628416 total trades: 181 profits: 1006.8351193358026 </code></pre>
python
1
1,908,522
62,102,156
Flask app prints environment variable as object instead of string returned from os.getenv
<p>I have the following code snippet in my main.py:</p> <pre><code>import os from app import create_app from models import db, bcrypt if __name__ == '__main__': HOST = os.environ.get('SERVER_HOST', 'localhost') try: PORT = int(os.environ.get('SERVER_PORT', '5555')) except ValueError: PORT = 5555 env_name = os.getenv('FLASK_ENV', "Please set FLASK_ENV") print("env_name: ", env_name) app = create_app(env_name) </code></pre> <p>I run it using <code>flask run</code> inside <code>pipenv shell</code> and bump into the following error in the line which prints out the <code>env_name</code>. I have tried both <code>set FLASK_ENV=development</code> (Windows 10) and using <code>.env</code> but to no avail. I use python-3.8.3</p> <pre><code>(src-4Nvvrxp5) C:\Projects\Python\PythonFlaskRestAPI\src&gt;flask run * Serving Flask app "main.py" (lazy loading) * Environment: development * Debug mode: on * Restarting with stat env_name: &lt;flask.cli.ScriptInfo object at 0x000002D6BA598940&gt; * Debugger is active! * Debugger PIN: 269-678-937 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) </code></pre> <p>Any advice and insight is appreciated.</p>
<p>So upon further research, I was able to reproduce your problem (somewhat) locally, as you can see in the image below.</p> <p><a href="https://i.stack.imgur.com/KhscW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KhscW.png" alt="enter image description here"></a></p> <p>The reason this is happening is this paragraph from the official Flask <a href="https://flask.palletsprojects.com/en/1.1.x/cli/#application-discovery" rel="nofollow noreferrer">documentation</a>:</p> <p><a href="https://i.stack.imgur.com/Lkpu6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lkpu6.png" alt="enter image description here"></a></p> <p>What you want to focus on is this:</p> <blockquote> <p>If the application factory takes only one argument and no parentheses follow the factory name, the <strong><em>ScriptInfo instance is passed</em></strong> as a positional argument.</p> </blockquote> <p>So there is no error occurring; the code is working as expected.</p> <p>If your concern is that the <code>set FLASK_ENV=development</code> command is not setting the variable correctly, I would point out that it is indeed setting it correctly, as seen in your OP:</p> <pre><code>(src-4Nvvrxp5) C:\Projects\Python\PythonFlaskRestAPI\src&gt;flask run * Serving Flask app "main.py" (lazy loading) * Environment: development * Debug mode: on * Restarting with stat </code></pre> <p>The third line in the terminal output above says "* Environment: development", whereas the default value according to the <a href="https://flask.palletsprojects.com/en/1.1.x/cli/#environments" rel="nofollow noreferrer">documentation</a> is "* Environment: production".</p> <p>Let me know if that resolved your concerns and queries :D. Good Luck!</p>
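If you want the factory to be robust against being called by `flask run`, you can fall back to the environment variable whenever the argument is not a plain string. A minimal sketch (the helper name `resolve_env_name` is illustrative; a real factory would build and return a `Flask` app):

```python
import os

def resolve_env_name(env_name=None):
    # `flask run` may pass a flask.cli.ScriptInfo object here instead of a
    # string, so only trust the argument when it actually is a string
    if not isinstance(env_name, str):
        env_name = os.environ.get("FLASK_ENV", "production")
    return env_name

os.environ["FLASK_ENV"] = "development"
print(resolve_env_name(object()))   # development (non-string argument ignored)
print(resolve_env_name("testing"))  # testing
```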
python-3.x|flask|environment-variables|pipenv
0
1,908,523
22,384,099
parsing text file without splitting up multi-word names
<p>I am trying to use a text file to take the numerical output from an xml and turn it into a name that is then written to a separate text file. my problem is that the name sometimes has spaces in it and I'm not sure how to deal with this. </p> <p>The program access several txt and xml files online where items are referred to by a reference number. It compares info from these files to criteria i set and then adds the reference number to a list. What i am trying to do is convert this reference number to the associated name. I have a text file with the reference numbers and names and want to use the index() function to find the reference number and then write the associated name to a text file. I'm having trouble parsing it so that I can do the conversion. My problem is the list is written like this:</p> <pre><code>number name\n 14 apple\n 27 anjou pear\n 36 asian pear\n 7645 langsat\n </code></pre> <p>so if I just use the .split() i end up with some of the names being split. I have tried replacing the white space between the numbers and names with a '\n' and splitting it at that but that didn't work either. If I replace the space with ' , ' and split over that I end up the names including the next lines number ['apple\n15'] which writes to text as two lines and leaves me with the same problem of not being able to split it over the white space...</p> <p>any advice???</p> <p>I have now implemented the Dict() function as suggested bellow which works ACCEPT that I still have the \n at the end of the name...</p> <pre><code>ttn = dict() f=open('typeid2.txt', 'r') for line in f: number, name = line.split(None,1) ttn[number] = name </code></pre> <p>if I call <code>ttn['14']</code> i get <code>'apple\n'</code></p>
<p>Regular expressions are very powerful and useful, but it takes a lot of practice to use them with some authority. I would suggest instead that you stick with <code>split</code>. Here is the help info from <code>split</code> that describes how to use a maxsplit value to limit the number of splits.</p> <pre><code>Help on built-in function split: split(...) S.split([sep [,maxsplit]]) -&gt; list of strings Return a list of the words in the string S, using sep as the delimiter string. If maxsplit is given, at most maxsplit splits are done. If sep is not specified or is None, any whitespace string is a separator and empty strings are removed from the result. </code></pre> <p>So for your code, assuming you have some lines to split:</p> <pre><code>mytest = dict()
for each_line in data:
    number, name = each_line.split(None, 1)
    mytest[number] = name.strip()   # strip() removes the trailing newline
</code></pre> <p>will build something like this:</p> <pre><code>&gt;&gt;&gt; mytest
{'27': 'anjou pear', '7645': 'langsat', 'number': 'name', '36': 'asian pear', '14': 'apple'}
</code></pre> <p>To access the help, suppose you have some string <code>mystring</code>; then just type:</p> <pre><code>help(mystring.split) </code></pre> <p>The difference between my first attempt and this one was due to the comment below. In my first attempt the leading spaces on the name value were retained; by using <code>None</code> as the separator, all whitespace characters are removed on the first split, which gets more specifically to what you are looking for.</p>
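A runnable demo of the maxsplit-plus-strip approach, which also removes the trailing `'\n'` mentioned in the question's edit:

```python
lines = ["14   apple\n", "27 anjou pear\n", "7645 langsat\n"]

ttn = {}
for line in lines:
    number, name = line.split(None, 1)  # split on whitespace, at most once
    ttn[number] = name.strip()          # drop the trailing newline

print(ttn['27'])  # anjou pear
```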
python|regex|parsing|text|python-3.x
1
1,908,524
22,339,447
When scraping image url src, get data:image/jpeg;base64
<p>I was trying to scrape the image url from a website using python urllib2.</p> <p>Here is my code to get the html string:</p> <pre><code>req = urllib2.Request(url, headers = urllib2Header) htmlStr = urllib2.urlopen(req, timeout=15).read() </code></pre> <p>When I view from the browser, the html code of the image looks like this:</p> <pre><code>&lt;img id="main-image" src="http://abcd.com/images/41Q2VRKA2QL._SY300_.jpg" alt="" rel="" style="display: inline; cursor: pointer;"&gt; </code></pre> <p>However, when I read from the htmlStr I captured, the image was converted to base64 image, which looks like this:</p> <pre><code>&lt;img id="main-image" src="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAUDBAQEAwUEBAQFBQU...."&gt; </code></pre> <p>I am wondering why this happened. Is there a way to get the original image url rather than the base64 image string?</p> <p>Thanks.</p>
<p>You could use <a href="http://www.crummy.com/software/BeautifulSoup/" rel="nofollow"><strong>BeautifulSoup</strong></a>.</p> <p><strong>Example:</strong></p> <pre><code>import urllib2
from bs4 import BeautifulSoup

url = "http://www.theurlyouwanttoscrape.com"
html = urllib2.urlopen(url)
soup = BeautifulSoup(html)
img_src = soup.find('img', {'id': 'main-image'})['src']
</code></pre> <p>Note that the <code>id</code> in your page is <code>main-image</code> (with a hyphen), and <code>urlopen</code> needs a full URL including the <code>http://</code> scheme.</p>
python|html|image|web-scraping
0
1,908,525
57,996,174
Decreasing the image size after Face Detection
<p>I tried this face detection program that uses <code>haarcascades</code> in <code>opencv</code>.I was able to get the required output (Finding the number of faces in the provided image), but there is a slight problem with the resultant image which draws rectangles around the faces.Instead of the original image the output image is a zoomed version of the original which doesn't show the entirety of it.</p> <p><strong>Sample</strong></p> <p><strong>Input</strong>:<a href="https://i.stack.imgur.com/bfZUt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bfZUt.jpg" alt="enter image description here"></a></p> <p><strong>Output</strong>: <a href="https://i.stack.imgur.com/BsBaH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BsBaH.jpg" alt="enter image description here"></a></p> <p>So this is what the output looks like after the program is run.</p> <p>The code:</p> <pre><code>import cv2 import sys # Get user supplied values imagePath = sys.argv[1] cascPath = "haarcascade_frontalface_default.xml" # Create the haar cascade faceCascade = cv2.CascadeClassifier(cascPath) # Read the image image = cv2.imread(imagePath) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Detect faces in the image faces = faceCascade.detectMultiScale( gray, scaleFactor=1.2, minNeighbors=5, minSize=(30,30) ) print("Found {0} faces!".format(len(faces))) # Draw a rectangle around the faces for (x, y, w, h) in faces: cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2) cv2.imshow("Faces found", image) cv2.waitKey(0) </code></pre> <p>In the prompt:</p> <pre><code>C:\Myproject&gt;python main.py NASA.jpg Found 20 faces! </code></pre> <p>Program gives more or less the correct answer.The scale factor can be modified to get accurate results. So my question is what can be done to get a complete image in the output?Please add any other suggestions too,I'll be thankful. 
Thanks for reading!</p> <p><strong>EDIT:</strong></p> <p>After a suggestion i used <code>imwrite</code> and saved the output image which seems very fine,but still the displayed image after running the program remains same.</p> <p><strong>Saved Image</strong>- <a href="https://i.stack.imgur.com/SkGYU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SkGYU.jpg" alt="enter image description here"></a></p>
<p>Your image is too big to be displayed on your screen. Add <code>cv2.namedWindow('Faces found', cv2.WINDOW_NORMAL)</code> before <code>cv2.imshow("Faces found", image)</code>. That line will create a resizable window.</p>
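An alternative is to shrink the image itself before displaying it. The scale computation below is plain Python; the OpenCV calls at the end are a sketch of how you would wire it into the question's code:

```python
def fit_scale(width, height, max_w=1280, max_h=720):
    # largest uniform scale that fits the image inside max_w x max_h,
    # never enlarging an image that already fits
    return min(1.0, max_w / float(width), max_h / float(height))

print(fit_scale(5000, 3000))  # 0.24
print(fit_scale(640, 480))    # 1.0

# with the loaded image from the question:
# h, w = image.shape[:2]
# s = fit_scale(w, h)
# small = cv2.resize(image, (int(w * s), int(h * s)))
# cv2.imshow("Faces found", small)
```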
python|opencv|face-detection
4
1,908,526
43,555,869
Scapy is not Capturing any packets
<p>I am trying the following command in the python interpreter but it is not capturing any packet. </p> <pre><code>a = sniff(prn = prnp0f) </code></pre>
<p>You probably need to pass a callback to <code>sniff</code>. Note that keyword arguments use <code>=</code> (not <code>==</code>), and that you pass the function itself rather than the result of calling it:</p> <pre><code>sniff(iface="mon0", prn=packet_handler)
</code></pre> <p>where <code>packet_handler</code> is your callback function.</p>
python-2.7|scapy|sniffing
0
1,908,527
43,839,035
Python Variable Amount Of Input
<p>I'm working on a program that determines whether a graph is strongly connected.</p> <p>I am reading standard input on a sequence of lines.</p> <p>The lines have two or three whitespace-delimited tokens, the name of the source and destination vertices, and an optional decimal edge weight.</p> <p>Input might look like this:</p> <pre><code>''' Houston Washington 1000 Vancouver Houston 300 Dallas Sacramento 800 Miami Ames 2000 SanFrancisco LosAngeles ORD PVD 1000 ''' </code></pre> <p>How can I read in this input and add it to my graph? I believe I will be using a collection like this:</p> <pre><code>flights = collections.defaultdict(dict) </code></pre> <p>Thank you for any help!</p>
<p>With <code>d</code> as your data, you can split it on <code>'\n'</code>, strip the trailing whitespace from each line, and find the last occurrence of <code>' '</code> (a space). With that you can slice each line to get the name and the number associated with it.</p> <p>Here I've stored the data in a dictionary. You can modify it according to your requirement!</p> <p>Use <code>re.sub</code> from the <code>re</code> module to remove the extra spaces.</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; d '\nHouston Washington 1000\nVancouver Houston 300\nDallas Sacramento 800\nMiami Ames 2000\nSanFrancisco LosAngeles\nORD PVD 1000\n' &gt;&gt;&gt;[{'Name':re.sub(r' +',' ',each[:each.strip().rfind(' ')]).strip(),'Flight Number':each[each.strip().rfind(' '):].strip()} for each in filter(None,d.split('\n'))] [{'Flight Number': '1000', 'Name': 'Houston Washington'}, {'Flight Number': '300', 'Name': 'Vancouver Houston'}, {'Flight Number': '800', 'Name': 'Dallas Sacramento'}, {'Flight Number': '2000', 'Name': 'Miami Ames'}, {'Flight Number': 'LosAngeles', 'Name': 'SanFrancisco'}, {'Flight Number': '1000', 'Name': 'ORD PVD'}] </code></pre> <p><strong>Edit:</strong></p> <p>To match your <code>flights</code> dict:</p> <pre><code>&gt;&gt;&gt; flights={'Houston':{'Washington':''},'Vancouver':{'Houston':''}} #sample dict &gt;&gt;&gt; for each in filter(None,d.split('\n')): ... flights[each.split()[0]][each.split()[1]]=each.split()[2] </code></pre>
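To land the parsed lines in the `flights = collections.defaultdict(dict)` structure the question mentions, a sketch that also handles the optional weight:

```python
import collections

raw = """Houston Washington 1000
Vancouver Houston 300
SanFrancisco LosAngeles"""

flights = collections.defaultdict(dict)
for line in raw.splitlines():
    parts = line.split()               # 2 or 3 whitespace-delimited tokens
    if len(parts) == 3:
        src, dst, weight = parts
        flights[src][dst] = float(weight)
    elif len(parts) == 2:
        src, dst = parts
        flights[src][dst] = None       # optional weight was omitted

print(dict(flights['Houston']))        # {'Washington': 1000.0}
print(dict(flights['SanFrancisco']))   # {'LosAngeles': None}
```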
python|dictionary|input|graph|collections
1
1,908,528
54,519,740
How to create the duplicate widget?
<p>I'm trying create new TabbedPanelItem with properties already created widget. But i'm getting new empty widget or replace exist. </p> <p>.py </p> <pre><code> class MainScreen(Screen): def add(self, tabbed_item): new_tabbed_item = TabbedPanelItem() new_tabbed_item.properties = copy(tabbed_item) new_tabbed_item.text = "2" self.ids.tab_panel.add_widget(new_tabbed_item) </code></pre> <p>.kv</p> <pre><code> &lt;MainScreen&gt;: AnchorLayout: canvas.before: ... TabbedPanel: id: tab_panel ... TabbedPanelItem: Button: on_press: root.add(tab_item) TabbedPanelItem: id: tab_item .... </code></pre>
<p>When I try to run your code nothing pops up. You don't have enough code to test. I'm not sure what your goal is, but if you want to have a TabbedPanelItem with stuff already created without having to reproduce the same code (if that's your goal), try using <code>@</code> (a dynamic class). An example: <code>MyTabbedPanel@TabbedPanelItem:</code>. Then you can add everything you want it to do, and reuse it instead of retyping the code every time.</p>
python|python-3.x|widget|kivy
0
1,908,529
54,539,919
Fill cells based on values in pandas
<p>Given the following dataframe...</p> <pre><code>Key ID Type Group1 Group2 Group3 Group4 Sex Race 1 A1 Type 1 x x x x Male White 2 A1 Type 2 x x x x 3 A2 Type 1 Male Black 4 A2 Type 2 5 A3 Type 1 x x x x Female White 6 A3 Type 2 x x x x 7 A3 Type 3 x x x x 8 A3 Type 4 x x x x </code></pre> <p>How can I populate the <code>Sex</code> and <code>Race</code> for all rows based on the <code>ID</code>?</p> <pre><code>Key ID Type Group1 Group2 Group3 Group4 Sex Race 1 A1 Type 1 x x x x Male White 2 A1 Type 2 x x x x Male White 3 A2 Type 1 Male Black 4 A2 Type 2 Male Black 5 A3 Type 1 x x x x Female White 6 A3 Type 2 x x x x Female White 7 A3 Type 3 x x x x Female White 8 A3 Type 4 x x x x Female White </code></pre> <p>I know I can use something like <code>df.loc[df['ID'] == A1, 'Sex'].iloc[0]</code> to get the <code>Sex</code> for a particular <code>ID</code>, but not sure how I can have all blanks for <code>Sex</code> populated based on the <code>Sex</code> for each <code>ID</code>.</p>
<p>You can group the data by <code>ID</code> and forward/back fill (<code>ffill</code>/<code>bfill</code>) the missing values:</p> <pre><code>df1.replace('', np.nan, inplace=True)
df1['Sex'] = df1.groupby('ID').Sex.apply(lambda x: x.ffill().bfill())
</code></pre>
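A runnable sketch of the same idea on a tiny frame (using `transform`, which keeps the original row index when assigning back; the `apply` form above behaves the same way on older pandas versions):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({
    'ID':  ['A1', 'A1', 'A2', 'A2'],
    'Sex': ['Male', '', '', 'Female'],
})

df1.replace('', np.nan, inplace=True)
df1['Sex'] = df1.groupby('ID')['Sex'].transform(lambda s: s.ffill().bfill())
print(df1['Sex'].tolist())  # ['Male', 'Male', 'Female', 'Female']
```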
python-3.x|pandas|pandas-groupby
2
1,908,530
39,152,790
How can I organize case-insensitive text and the material following it?
<p>I'm very new to Python so it'd be very appreciated if this could be explained as in-depth as possible.</p> <p>If I have some text like this on a text file:</p> <pre><code>matthew : 60 kg MaTtHew : 5 feet mAttheW : 20 years old maTThEw : student MaTTHEW : dog owner </code></pre> <p>How can I make a piece of code that can write something like...</p> <pre><code>Matthew : 60 kg , 5 feet , 20 years old , student , dog owner </code></pre> <p>...by only gathering information from the text file?</p>
<pre><code>def test_data(): # This is obviously the source data as a multi-line string constant. source = \ """ matthew : 60 kg MaTtHew : 5 feet mAttheW : 20 years old maTThEw : student MaTTHEW : dog owner bob : 70 kg BoB : 6 ft """ # Split on newline. This will return a list of lines like ["matthew : 60 kg", "MaTtHew : 5 feet", etc] return source.split("\n") def append_pair(d, p): k, v = p if k in d: d[k] = d[k] + [v] else: d[k] = [v] return d if __name__ == "__main__": # Do a list comprehension. For every line in the test data, split by ":", strip off leading/trailing whitespace, # and convert to lowercase. This will yield lists of lists. # This is mostly a list of key/value size-2-lists pairs = [[x.strip().lower() for x in line.split(":", 2)] for line in test_data()] # Filter the lists in the main list that do not have a size of 2. This will yield a list of key/value pairs like: # [["matthew", "60 kg"], ["matthew", "5 feet"], etc] cleaned_pairs = [p for p in pairs if len(p) == 2] # This will iterate the list of key/value pairs and send each to append_pair, which will either append to # an existing key, or create a new key. d = reduce(append_pair, cleaned_pairs, {}) # Now, just print out the resulting dictionary. for k, v in d.items(): print("{}: {}".format(k, ", ".join(v))) </code></pre>
python
0
1,908,531
55,424,333
How to check if permutation of a string is a palindrome
<p>Im new to python and im trying to check if any permutation of a string is a palindrome. Here is my code:</p> <pre><code>def isPal(s): a = set() for i in s: if i in a: a.remove(i) else: if (i != '\n') or (i != ' '): a.add(i) if len(a) &lt;= 1: print(s + ' is a palindrome permutation\n') else: print(s + ' is not a palindrome permutation\n') print(a) </code></pre> <p>The problem I am having is that it I dont want my set to include spaces or any puncuation thats in the string. Is there any way to only check the letters? For example, the string "Mr. owl ate my Metal worm" shouldnt use the period or spaces when checking if it is a palindrome.</p>
<p>You can certainly check all permutations, but there is a much more efficient approach. </p> <p>Note that in order for a string to be a palindrome, then every letter is mirrored around the center of the string. That means a collection of letters can form a palindrome if there is at most one letter that has an odd count.</p> <p>Here is how you can implement this:</p> <p>The first step is to convert the string to lower case and remove the nonalpha characters (like spaces and punctuation). We can do that by using a <a href="https://stackoverflow.com/questions/34835951/what-does-list-comprehension-mean-how-does-it-work-and-how-can-i-use-it">list comprehension</a> to iterate over each character in the string and keep only those where <code>str.isalpha()</code> returns <code>True</code>.</p> <pre><code>myString = "Mr. owl ate my Metal worm" alpha_chars_only = [x for x in myString.lower() if x.isalpha()] print(alpha_chars_only) #['m', 'r', 'o', 'w', 'l', 'a', 't', 'e', 'm', 'y', 'm', 'e', 't', 'a', 'l', 'w', 'o', 'r', 'm'] </code></pre> <p>Next count each letter. You can use <code>collections.Counter</code> for this:</p> <pre><code>from collections import Counter counts = Counter(alpha_chars_only) print(counts) #Counter({'m': 4, 'a': 2, 'e': 2, 'l': 2, 'o': 2, 'r': 2, 't': 2, 'w': 2, 'y': 1}) </code></pre> <p>Finally count the number of letters that have an odd count. If the count is <code>0</code> or <code>1</code>, a palindrome must be possible.</p> <pre><code>number_of_odd = sum(1 for letter, cnt in counts.items() if cnt%2) print(number_of_odd) #1 </code></pre> <p>Putting that all together, you can make a function:</p> <pre><code>def any_palindrome(myString): alpha_chars_only = [x for x in myString.lower() if x.isalpha()] counts = Counter(alpha_chars_only) number_of_odd = sum(1 for letter, cnt in counts.items() if cnt%2) return number_of_odd &lt;= 1 print(any_palindrome(mystring)) #True </code></pre>
python
4
1,908,532
34,234,045
Python: creating variable name from header row in csv read
<p>I have looked for the answered questions but my case is different.</p> <p>I am reading a large csv file with a header row and has 50 names in the header corresponding to 50 data columns in csv file. I want to create 50 arrays and each array will store data as I proceed to read and parse the file line by line. I want to store the 50 arrays in variable names same as the column name read the header line. </p>
<pre><code>data = csv.reader(open("my_text.csv")) columns = zip(*data) dataMap = {d[0]:d[1:] for d in columns} print dataMap["Timestamp"] # or whatever </code></pre> <p>is a much preferred method ... if you really want variable names try</p> <pre><code>globals().update({d[0]:d[1:] for d in columns}) print Timestamp # or whatever </code></pre> <p>but I strongly advise against this</p> <p>really what it sounds like you want is <code>pandas.DataFrame.from_csv</code> though</p> <pre><code>df = pandas.DataFrame.from_csv("data.txt") print df["Timestamp"] # or whatever your header names might be </code></pre>
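The <code>zip(*data)</code> transpose from the answer can be sketched end to end with an in-memory CSV (Python 3 here, hence <code>print()</code>; the column names are made up for illustration):

```python
import csv
import io

# Tiny stand-in for the real file; headers are illustrative only
text = "Timestamp,Value\n1,10\n2,20\n"

data = csv.reader(io.StringIO(text))
columns = zip(*data)  # transpose rows into columns
data_map = {col[0]: list(col[1:]) for col in columns}

print(data_map["Timestamp"])  # ['1', '2']
```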
python
2
1,908,533
34,168,094
Project Euler 4 while loop issues
<p>So I recently started trying to solve the Project Euler problems, and I'm working on problem 4. I wrote code that should work, but a certain while loop refuses to run. Here is the code:</p> <pre><code>def project_euler_problem_4(): x = 998001 y = 999 while x &gt; 10000: if x == int(str(x)[::-1]): while y &gt; 100: if x % y == 0: print x print y print x/y break y = y -1 x = x -1 </code></pre> <p>The problem arises when I try to run the while loop after the if statement. Neither my computer science teacher nor I have any idea what's causing the problem. If you could help that would be great. Thanks!</p>
<p>In the innermost loop, <code>y</code> will become 99. It will never be reinitialized back to 999 again. So it will only ever run once.</p> <p>Change it so that <code>y</code> is set back to 999 for the next test.</p> <pre><code>def project_euler_problem_4(): x = 998001 while x &gt; 10000: if x == int(str(x)[::-1]): y = 999 while y &gt; 100: if x % y == 0: print x print y print x/y break y = y -1 x = x -1 </code></pre>
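Once the reinitialization bug is fixed, the whole search can also be written as one generator expression; this sketch (Python 3) scans all 3-digit pairs instead of counting down:

```python
def largest_palindrome_product():
    # Largest palindrome that is a product of two 3-digit numbers
    return max(
        x * y
        for x in range(100, 1000)
        for y in range(x, 1000)  # y >= x avoids checking each pair twice
        if str(x * y) == str(x * y)[::-1]
    )

print(largest_palindrome_product())  # 906609
```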
python
1
1,908,534
7,405,915
Creating a minidump in a Python application (Windows)
<p>I'm working on a Python application. Sometimes the interpreter crashes when in a third party C++ DLL.</p> <p>I'm thinking about writing a Python extension that installs a handler for unhandled structured exceptions (Windows) in order to write a minidump to the disk and log the stack trace of every Python thread.</p> <p>Two questions:</p> <ol> <li><p>Does a Python extension with a similar purpose already exist? According to my own Google search, nothing seems to be publicly available, but maybe I didn't search enough.</p></li> <li><p>Is it feasible to implement something like this? (I'm experienced in C++ and Windows programming, but have never implemented a Python extension...)</p></li> </ol>
<p>Check out <a href="http://pypi.python.org/pypi/faulthandler" rel="nofollow">FaultHandler</a> on PyPI.</p>
python|winapi|stack-trace|crash-dumps
1
1,908,535
7,412,773
Joining a list in python with different conditions
<p>I want to join a list with a conditional statement such as:</p> <pre><code>str = "\n".join["a" if some_var is True, "b", "c", "d" if other_var==1, "e"] </code></pre> <p>Each element has a different conditional clause (if at all) so a normal list comprehension is not suitable in this case.</p> <p>The solution I thought of is:</p> <pre><code>lst = ["a" if some_var is True else None, "b", "c", "d" if other_var==1 else None, "e"] str = "\n".join[item for item in lst if item is not None] </code></pre> <p>Is there a more elegant Pythonic solution?</p> <p>Thanks,</p> <p>Meir</p> <hr> <p>More explanation: In the above example, if some_var equals True and other_var equals 1, I would like to get the following string:</p> <pre><code>a b c d e </code></pre> <p>If some_var is False and other_var equals 1, I would like to get the following string:</p> <pre><code>b c d e </code></pre> <p>If some_var is True and other_var does not equal 1, I would like to get the following string:</p> <pre><code>a b c e </code></pre>
<p>If each element should only be added to a list if a condition is met, state each condition separately and add the element if it's met. A list comprehension is for when you have an existing list and you want to process the elements in it in some way.</p> <pre><code>lst = [] if some_var is True: lst.append('a') lst.extend(['b', 'c']) if other_var == 1: lst.append('d') lst.append('e') </code></pre>
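Wrapping that approach in a function makes the three cases from the question easy to verify (a small sketch; the function name is made up):

```python
def build_string(some_var, other_var):
    lst = []
    if some_var:            # 'a' only when some_var is truthy
        lst.append('a')
    lst.extend(['b', 'c'])  # unconditional elements
    if other_var == 1:      # 'd' only when other_var equals 1
        lst.append('d')
    lst.append('e')
    return "\n".join(lst)

print(build_string(True, 1))  # a, b, c, d and e joined by newlines
```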
python|list-comprehension
1
1,908,536
72,738,499
Python Regex Cycle for Extracting info
<p>I am trying to write a function (to use with <code>apply</code>) whose goal is to find the numbers followed by 3 characters, in this case <code>alc</code>. The expected result should be 54.</p> <pre><code>import pandas as pd import regex as re numeros=[0,1,2,3,4,5,6,7,8,9] i=&quot;sdASK23LJFASDFKJGHASDLKJF123HALSDKJFHASDF54 alcobas&quot; df=df.head(3) def re_alcoba(i): i=i.replace(&quot; &quot;, &quot;&quot;) patron_acoba=re.compile(r&quot;alc&quot;) matches=patron_acoba.finditer(i) contador=1 numero_alcobas=[] for match in matches: index=match.start() while contador &lt; 3: numero=i[index-contador] contador+=1 if numero in numeros: numero_alcobas.insert(0,numero) respuesta=&quot;&quot;.join(numero_alcobas) return respuesta respuesta=re_alcoba(i) </code></pre> <p><a href="https://i.stack.imgur.com/zNEkn.jpg" rel="nofollow noreferrer">My cycle won't work</a></p>
<p>If you want numbers directly before <code>alc</code> then you don't need all this code but simply <code>(\d+)alc</code></p> <pre><code>import regex as re i = &quot;sdASKLJFASDFKJGHASDLKJFHALSDKJFHASDF54alcobas&quot; i = i.replace(&quot; &quot;, &quot;&quot;) results = re.findall(&quot;(\d+)alc&quot;, i) print(results) # ['54'] i = &quot;4asd5alc&quot; i = i.replace(&quot; &quot;, &quot;&quot;) results = re.findall(&quot;(\d+)alc&quot;, i) print(results) # ['5'] </code></pre>
python|regex
1
1,908,537
31,945,947
login to yahoo email account using Python Selenium webdrive
<p>I need to login to yahoo email account using Selenium with Python.</p> <p>this is my code</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.keys import Keys driver = webdriver.Firefox() driver.get("https://login.yahoo.com") print driver.current_url logintxt = driver.find_element_by_name("username") logintxt.send_keys("email") pwdtxt = driver.find_element_by_name("passwd") pwdtxt.send_keys("pass") button = driver.find_element_by_id("login-signin") button.click() driver.get("https://mail.yahoo.com") print driver.current_url </code></pre> <p>but when I print the current url, it always gives me the login page, which mean that it didn't login.</p> <p>any idea about how to fix it ? I'm using Centos 6 with python 2.6</p>
<p>Wait for it (using <code>WebDriverWait</code>) to redirect you to the <code>yahoo</code> main page on successful login before navigating to the Yahoo mail box:</p> <pre><code>from selenium.webdriver.support.wait import WebDriverWait button = driver.find_element_by_id("login-signin") button.click() # give it time to log in wait = WebDriverWait(driver, 10) wait.until(lambda driver: driver.current_url == "https://www.yahoo.com/") driver.get("https://mail.yahoo.com") </code></pre>
python|selenium|selenium-webdriver|automation|yahoo
2
1,908,538
68,242,961
comparing int value in list throws index out of range Error
<p>I'm struggling to grasp the problem here. I already tried everything but the issue persists. Basically I have a list of random numbers, and when I try to compare the values inside the loop it throws &quot;IndexError: list index out of range&quot;</p> <p>I even tried with range(len(who)) and len(who). Same thing. When I put 0 instead of &quot;currentskill&quot;, which is an int variable, it works. What I don't understand is why comparing both values throws this error. It just doesn't make sense...</p> <p>Am I not comparing a value but the index itself ???</p> <p>EDIT: I even tried with print(i) / print(who[i]) to see if everything is clean and where it stops, and I'm definitely not going outside of the index</p> <pre><code>who = [2, 0, 1] currentskill = 1 for i in who: if who[i] == currentskill: # IndexError: list index out of range who.pop(i) </code></pre>
<p>As stated by @Hemesh</p> <blockquote> <p>The problem is once you start popping out elements the list size varies</p> </blockquote> <p>Problem solved. I'm just popping the element outside the loop now and it works:</p> <pre><code>def deleteskill(who, currentskill): temp = None for i in range(len(who)): if who[i] == currentskill: temp = i if temp is not None: who.pop(temp) </code></pre> <p>Starting <code>temp</code> as <code>None</code> guards the case where <code>currentskill</code> is not in the list at all; otherwise the function would pop index 0 by mistake.</p>
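A list comprehension sidesteps the index bookkeeping entirely; this sketch builds a new list instead of mutating in place (note it removes every occurrence, not just one):

```python
def delete_skill(who, currentskill):
    # Keep every element that is not the skill being removed
    return [x for x in who if x != currentskill]

print(delete_skill([2, 0, 1], 1))  # [2, 0]
print(delete_skill([2, 0, 1], 9))  # [2, 0, 1] - nothing to remove
```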
python-3.x
1
1,908,539
26,186,753
Any way to use rrule going backwards
<p>Is there any way to run <code>dateutils.rrule.rrule</code> going back in time? For example, I would like </p> <pre><code>[dt.datetime(2014, 8, 29, 0, 0), dt.datetime(2014, 9, 5, 0, 0), dt.datetime(2014, 9, 12, 0, 0), dt.datetime(2014, 9, 19, 0, 0), dt.datetime(2014, 9, 26, 0, 0)] </code></pre> <p>in this example:</p> <pre><code>import datetime as dt from dateutil.rrule import rrule, WEEKLY date0 = dt.datetime(2014, 10, 3) date1 = dt.datetime(2014, 8, 26) rrule(WEEKLY, dtstart=date0).between(date0, date1) </code></pre> <p>But, this gives me an empty list... </p> <p><em>*Insert sad, frowny face*</em></p>
<pre><code>import datetime as dt import dateutil.rrule as RR date0 = dt.datetime(2014, 10, 3) date1 = dt.datetime(2014, 8, 26) start = min([date0, date1]) end = max([date0, date1]) dow = RR.weekday(date0.weekday()) print(RR.rrule(RR.WEEKLY, byweekday=dow, dtstart=start).between(start, end)) </code></pre> <p>yields</p> <pre><code>[datetime.datetime(2014, 8, 29, 0, 0), datetime.datetime(2014, 9, 5, 0, 0), datetime.datetime(2014, 9, 12, 0, 0), datetime.datetime(2014, 9, 19, 0, 0), datetime.datetime(2014, 9, 26, 0, 0)] </code></pre> <p>I don't think there is a way to define an rrule with <code>dtstart=date0</code> which generates dates <em>before</em> <code>date0</code>. You must use the earlier date. Moreover, <code>between(a, b)</code> must be called with <code>a &lt;= b</code>, or else the result will be empty.</p>
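If <code>dateutil</code> is not essential, the same backwards-weekly walk can be sketched with the standard library alone; this assumes the anchor date itself is excluded, matching the desired output in the question:

```python
import datetime as dt

def weekly_back_between(anchor, earliest):
    # Step back one week at a time from `anchor`, keep every date
    # not earlier than `earliest`, then return them ascending.
    dates = []
    current = anchor - dt.timedelta(weeks=1)
    while current >= earliest:
        dates.append(current)
        current -= dt.timedelta(weeks=1)
    return sorted(dates)

print(weekly_back_between(dt.datetime(2014, 10, 3), dt.datetime(2014, 8, 26)))
```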
python|python-2.7|python-dateutil|rrule
3
1,908,540
60,060,413
Error "'NoneType' object has no attribute 'offset'" when analysing GPX data
<p>I am following this tutorial while learing Python (<a href="https://towardsdatascience.com/how-tracking-apps-analyse-your-gps-data-a-hands-on-tutorial-in-python-756d4db6715d" rel="nofollow noreferrer">https://towardsdatascience.com/how-tracking-apps-analyse-your-gps-data-a-hands-on-tutorial-in-python-756d4db6715d</a>).</p> <p>I am at the step where I want to plot 'time' and 'elevation'. But when I do this with:</p> <pre><code>plt.plot(df['time'], df['ele']) plt.show() </code></pre> <p>I get the error "'NoneType' object has no attribute 'offset'". If I plot 'longitude' and 'latitude' everything works fine. I cannot find a way to solve this problem by myself. This is "my" code so far:</p> <pre><code>import gpxpy import matplotlib.pyplot as plt import datetime from geopy import distance from math import sqrt, floor import numpy as np import pandas as pd import chart_studio.plotly as py import plotly.graph_objects as go import haversine #Import Plugins gpx_file = open('01_Karlsruhe_Schluchsee.gpx', 'r') gpx = gpxpy.parse(gpx_file) data = gpx.tracks[0].segments[0].points ## Start Position start = data[0] ## End Position finish = data[-1] df = pd.DataFrame(columns=['lon', 'lat', 'ele', 'time']) for point in data: df = df.append({'lon': point.longitude, 'lat' : point.latitude, 'ele' : point.elevation, 'time' : point.time}, ignore_index=True) print(df) plt.plot(df['time'], df['ele']) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/SBjiu.jpg" rel="nofollow noreferrer">Picture of my dataframe</a></p>
<p>Removing the timezone from your 'time' column might do the trick. You can do this with <code>tz_localize</code>. Note that you have to call method <code>dt</code> to access column datetime properties:</p> <pre><code>df['time'] = df['time'].dt.tz_localize(None) </code></pre>
python|pandas|matplotlib|graph|gpx
7
1,908,541
1,824,418
A clean, lightweight alternative to Python's twisted?
<p>A (long) while ago I wrote a web-spider that I multithreaded to enable concurrent requests to occur at the same time. That was in my Python youth, in the days before I knew about the <a href="http://www.dabeaz.com/python/GIL.pdf" rel="noreferrer">GIL</a> and the associated woes it creates for multithreaded code (IE, most of the time stuff just ends up serialized!)...</p> <p>I'd like to rework this code to make it more robust and perform better. There are basically two ways I could do this: I could use the new <a href="http://docs.python.org/library/multiprocessing.html" rel="noreferrer">multiprocessing module</a> in 2.6+ or I could go for a reactor / event-based model of some sort. I would rather do the later since it's far simpler and less error-prone.</p> <p>So the question relates to what framework would be best suited to my needs. The following is a list of the options I know about so far:</p> <ul> <li><a href="http://twistedmatrix.com/trac/" rel="noreferrer">Twisted</a>: The granddaddy of Python reactor frameworks: seems complex and a bit bloated however. Steep learning curve for a small task.</li> <li><a href="http://eventlet.net/" rel="noreferrer">Eventlet</a>: From the guys at <a href="http://lindenlab.com/" rel="noreferrer">lindenlab</a>. Greenlet based framework that's geared towards these kinds of tasks. 
I had a look at the code though and it's not too pretty: non-pep8 compliant, scattered with prints (why do people do this in a framework!?), API seems a little inconsistent.</li> <li><a href="http://code.google.com/p/pyev/" rel="noreferrer">PyEv</a>: Immature, doesn't seem to be anyone using it right now though it is based on libevent so it's got a solid backend.</li> <li><a href="http://docs.python.org/library/asyncore.html" rel="noreferrer">asyncore</a>: From the stdlib: über low-level, seems like a lot of legwork involved just to get something off the ground.</li> <li><a href="http://www.tornadoweb.org/" rel="noreferrer">tornado</a>: Though this is a server oriented product designed to server dynamic websites it does feature an <a href="http://github.com/facebook/tornado/blob/master/tornado/httpclient.py" rel="noreferrer">async HTTP client</a> and a simple <a href="http://github.com/facebook/tornado/blob/master/tornado/ioloop.py" rel="noreferrer">ioloop</a>. Looks like it could get the job done but not what it was intended for. [edit: doesn't run on Windows unfortunately, which counts it out for me - its a requirement for me to support this lame platform]</li> </ul> <p>Is there anything I have missed at all? Surely there must be a library out there that fits the sweet-spot of a simplified async networking library!</p> <p>[edit: big thanks to <a href="https://stackoverflow.com/users/177663/intgr">intgr</a> for his pointer to <a href="http://code.google.com/p/cogen/" rel="noreferrer">this page</a>. If you scroll to the bottom you will see there is a really nice list of projects that aim to tackle this task in one way or another. It seems actually that things have indeed moved on since the inception of Twisted: people now seem to favour a <a href="http://en.wikipedia.org/wiki/Coroutine" rel="noreferrer">co-routine</a> based solution rather than a traditional reactor / callback oriented one. 
The benefits of this approach are clearer, more direct code: I've certainly found in the past, especially when working with <a href="http://www.boost.org/doc/libs/1_41_0/doc/html/boost_asio.html" rel="noreferrer">boost.asio</a> in C++, that callback-based code can lead to designs that can be hard to follow and are relatively obscure to the untrained eye. Using co-routines allows you to write code that looks a little more synchronous at least. I guess now my task is to work out which one of these many libraries I like the look of and give it a go! Glad I asked now...]</p> [edit: perhaps of interest to anyone who followed or stumbled on this question or cares about this topic in any sense: I found a really great writeup of the current state of <a href="http://nichol.as/asynchronous-servers-in-python" rel="noreferrer">the available tools</a> for this job]</p>
<p>Twisted is complex, you're right about that. Twisted is <em>not</em> bloated. </p> <p>If you take a look here: <a href="http://twistedmatrix.com/trac/browser/trunk/twisted/" rel="noreferrer">http://twistedmatrix.com/trac/browser/trunk/twisted</a> you'll find an organized, comprehensive, and very well tested suite of <em>many</em> protocols of the internet, as well as helper code to write and deploy very sophisticated network applications. I wouldn't confuse bloat with comprehensiveness.</p> <p>It's well known that the Twisted documentation isn't the most user-friendly from first glance, and I believe this turns away an unfortunate number of people. But Twisted is amazing (IMHO) if you put in the time. I did and it proved to be worth it, and I'd recommend to others to try the same.</p>
python|networking|twisted|asynchronous
101
1,908,542
32,528,943
Iterate over a list and strip tags in text from nested tags with scrapy
<p>I'm testing scrapy and can't figure out how to retrieve plain text without tags in it when it is nested in tags. Here is the URL I test it on: <a href="http://www.tripadvisor.com/ShowTopic-g293915-i3686-k8824646-What_s_the_coolest_thing_you_saw_or_did_in_Thailand-Thailand.html" rel="nofollow">http://www.tripadvisor.com/ShowTopic-g293915-i3686-k8824646-What_s_the_coolest_thing_you_saw_or_did_in_Thailand-Thailand.html</a></p> <p>Desired output: <a href="http://www.awesomescreenshot.com/image/566667/ad8bba1f52835dbe8c32c575913e0c39" rel="nofollow">content of the posts as separate elements in the item[body] object</a></p> <p>My code:</p> <pre><code>import scrapy from tripadvisor.items import TripadvisorItem class TripadvisorSpider(scrapy.Spider): [...] def parse_thread_contents(self, response): url = response.url item = TripadvisorItem() for sel in response.xpath('//div[@class="balance"]'): item['body'] = sel.xpath('//div[@class="postBody"]//p').extract() yield item </code></pre>
<p>You need to get the <code>text()</code> of the <code>p</code> elements. There is also a problem in the loop - you need to iterate over posts one by one and get the post bodies and collect them in a list:</p> <pre><code>item['body'] = ["".join(post.xpath('.//div[@class="postBody"]/p/text()').extract()) for post in response.xpath('//div[@class="postcontent"]')] </code></pre> <p>Also note that the dot at the beginning of the expression is also important - it would make the search <em>context-specific</em>.</p> <p>Demo:</p> <pre><code>In [1]: for post in response.xpath('//div[@class="postcontent"]'): ...: print("".join(post.xpath('.//div[@class="postBody"]/p/text()').extract())) ...: What's that memory you'll carry forever with you? Maybe you stayed on a floating hut in Khao Sok Lake, or you washed elephants in a sanctuary, or....I have no idea. Please share if you like, I'd love to hear! The heat when you you go to for the first time, my blessing ceremony with my husband on Bottle Beach is up there, as is the first time I met him in Samui. Phang Nga Bay on the west coast is stunning and took my breath away, I overnighted on a friend's boat and watched the stars come out. Hong Island was amazing and arriving at Koh Racha before it had hotels on it. Early morning mist on the river at Amphawa whilst looking across to a beautiful temple, the Chao Praya River in Bangkok, the Reclining Buddha at Wat Pho - I could go on and on. : ) First trip to few years back. Not very informed, no smart phone, no google earth....rent a bike, with my wife and we just ride the bike "till the road ends"...ended up at their local uni, watch student going in and out of the uni gate, sat on the road side having a coke. No worries...just me and my wife.Cassnu, pls...go on and on...we dont mind. ... </code></pre>
python|python-2.7|xpath|web-scraping|scrapy
1
1,908,543
28,075,708
How do you register a python-based client with GCM (Google Cloud Messenger) to get a registration_id for the device?
<p>I have been trying to figure out how to use one of the following python packages to create a python-based client that is capable of <strong>receiving</strong> XMPP-based messages via Google Cloud Messenging.</p> <ul> <li><p><a href="https://github.com/geeknam/python-gcm" rel="nofollow">https://github.com/geeknam/python-gcm</a></p></li> <li><p><a href="https://github.com/daftshady/py-gcm" rel="nofollow">https://github.com/daftshady/py-gcm</a></p></li> <li><p><a href="https://pypi.python.org/pypi/gcm-client/" rel="nofollow">https://pypi.python.org/pypi/gcm-client/</a></p></li> <li><p><a href="https://github.com/pennersr/pulsus" rel="nofollow">https://github.com/pennersr/pulsus</a></p></li> </ul> <p>From all I can see, (<a href="http://gcm-client.readthedocs.org/en/latest/" rel="nofollow">e.g., the documentation for gcm-client</a>), these packages can send messages to other clients that are identified by <code>registration_id</code>. But how do I get a registration IDs for each client in the first place? In other words, how do I register the client-app that I am creating so that it can receive messages?</p> <p>It is starting to seem to me that these are not clients per-se, but just libraries that can be used to push messages to clients. I hope that I am wrong about that and just missing a key concept.</p>
<p>Each client application has to call the <code>getRegistrationId()</code> to get the registration id once. Then they can receive messages. A more detailed function call is <a href="https://developer.android.com/google/gcm/client.html" rel="nofollow"><strong>here</strong></a></p> <p>I hope this give you an idea on client devices. :)</p>
python|google-cloud-messaging
0
1,908,544
44,229,726
Merging corresponding elements of two lists to a new list
<p>I have two lists as shown below, car_xy and car_id. I have written sample code to merge the corresponding elements of both lists, but I am not getting the desired output.</p> <pre><code>car_xy =[(650,700),(568,231),(789,123),(968,369)] car_id =[284,12,466,89] #required_details merges the two lists required_details = list(set(car_xy+car_id)) #now if i do print required_details the ouput will be a list like; required_details = [284,12,(650,700),89,(568,231),466,(968,369),(789,123)] #the required details adds the information in list randomly. What if i want the first elements of both the list together, like required_details = [[284,(650,700)],[12,(568,231),[466,(789,123)],[89,(968,369)]] </code></pre> <p>Any suggestions will be great.</p>
<p>Actually you need <code>[list(pair) for pair in zip(car_id, car_xy)]</code></p>
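For completeness, here is that one-liner run against the data from the question:

```python
car_xy = [(650, 700), (568, 231), (789, 123), (968, 369)]
car_id = [284, 12, 466, 89]

# Pair each id with its coordinates, converting each pair to a list
required_details = [list(pair) for pair in zip(car_id, car_xy)]
print(required_details)
# [[284, (650, 700)], [12, (568, 231)], [466, (789, 123)], [89, (968, 369)]]
```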
python|list
7
1,908,545
44,336,444
using Python to find 2 related strings that are on different lines
<p>I wrote a program that outputs data in a large file after iterating through the many devices it collects information from.<br> The new information from new devices is appended onto this file, so it's basically a large file with similar (but not exactly) the same information every 10 lines or so.</p> <p>What I need to do is FIND a specific string (in this case, I've worked in a special character used for identification purposes in each iteration of the data in the large file), then obtain the text that follows that particular identification character, 2 lines down. Brownie points if it allows me to check if this is the correct data I'm looking for (i.e., contains the word 'version').</p> <p>For example, the text file could look like:</p> <pre><code>trying 1.1.1.1 connected to 1.1.1.1 username: xxxx password: xxxx &gt;&gt;2001 issue command y y = version </code></pre> <p>The above text would be repeated around 100 times, with unique identifiers being listed after '>>'. What I need to do in Python is open the file with the text, loop through it, find the '>>' and collect the version listed 2 lines below. I then need to print them on the screen in a way that shows '>>2001 y = version' looping all the way through '>>2099 y = version'. <br></p>
<p>You can read the file into a list and loop through the list looking for your identifier, then print the desired items. Using <code>enumerate</code> gives you the current index directly, which avoids <code>data.index(line)</code> (that call returns the first occurrence, so it can point at the wrong position when identical lines repeat). For example:</p> <p><strong>Code:</strong></p> <pre><code>with open('test.txt', 'r') as f: data = f.read().splitlines() for i, line in enumerate(data): if line.startswith('&gt;&gt;'): print line, data[i + 2] </code></pre> <p><strong>Input file:</strong></p> <pre><code>trying 1.1.1.1 connected to 1.1.1.1 username: xxxx password: xxxx &gt;&gt;2001 issue command y y = version &gt;&gt;2002 issue command y y = versionx &gt;&gt;2003 issue command y y = versionz </code></pre> <p><strong>Output:</strong></p> <pre><code>&gt;&gt;2001 y = version &gt;&gt;2002 y = versionx &gt;&gt;2003 y = versionz </code></pre>
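The pairing itself can be checked without touching disk; this Python 3 sketch uses an in-memory list of lines, with <code>enumerate</code> supplying the index (which also avoids <code>list.index</code> lookups):

```python
lines = [
    "trying 1.1.1.1",
    ">>2001",
    "issue command y",
    "y = version",
    ">>2002",
    "issue command y",
    "y = versionx",
]

# Pair each '>>' identifier with the line two positions below it
pairs = [(line, lines[i + 2]) for i, line in enumerate(lines)
         if line.startswith(">>")]
print(pairs)  # [('>>2001', 'y = version'), ('>>2002', 'y = versionx')]
```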
python
0
1,908,546
44,301,027
How to execute some code before flask socket.io server starts?
<p>I have an application based on flask-socketIO. I want to execute some code just before the flask server starts. My app.py file looks like this:</p> <pre><code>from flask import Flask from flask_cors import CORS from flask_socketio import SocketIO app = Flask(__name__) app.config['SECRET_KEY'] = 'secret!' socketio = SocketIO(app) CORS(app) @socketio.on('connect') def test_connect(): print("Client connected") @socketio.on('disconnect') def test_disconnect(): print('Client disconnected') if __name__ == "__main__": socketio.run(app, debug=False, host='0.0.0.0') </code></pre> <p>When I run the program: <code>FLASK_APP=app.py flask run --host=0.0.0.0</code>, I get console output as follows: <code>* Serving Flask-SocketIO app "app"</code> Then some clients connect to my app and it works as I expected, but I want to see the <code>print</code> which is above <code>socketio.run(app)</code>.</p> <p>How can I execute code before the start of the server?</p>
<p>Use <code><a href="http://flask.pocoo.org/docs/0.12/api/#flask.Flask.before_first_request" rel="nofollow noreferrer">before_first_request</a></code></p> <pre><code>@app.before_first_request def your_function(): print("execute before server starts") </code></pre> <p>Update: <a href="https://i.stack.imgur.com/7MAdO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7MAdO.png" alt="enter image description here"></a></p>
python|sockets|flask|flask-socketio
2
1,908,547
14,017,996
Is there a way to pass optional parameters to a function?
<p>Is there a way in Python to pass optional parameters to a function while calling it and in the function definition have some code based on "only if the optional parameter is passed"</p>
<p>The <a href="http://docs.python.org/2/reference/compound_stmts.html#function-definitions" rel="noreferrer">Python 2 documentation, <em>7.6. Function definitions</em></a> gives you a couple of ways to detect whether a caller supplied an optional parameter.</p> <p>First, you can use special formal parameter syntax <code>*</code>. If the function definition has a formal parameter preceded by a single <code>*</code>, then Python populates that parameter with any positional parameters that aren't matched by preceding formal parameters (as a tuple). If the function definition has a formal parameter preceded by <code>**</code>, then Python populates that parameter with any keyword parameters that aren't matched by preceding formal parameters (as a dict). The function's implementation can check the contents of these parameters for any "optional parameters" of the sort you want.</p> <p>For instance, here's a function <code>opt_fun</code> which takes two positional parameters <code>x1</code> and <code>x2</code>, and looks for another keyword parameter named "optional". </p> <pre><code>&gt;&gt;&gt; def opt_fun(x1, x2, *positional_parameters, **keyword_parameters): ... if ('optional' in keyword_parameters): ... print 'optional parameter found, it is ', keyword_parameters['optional'] ... else: ... print 'no optional parameter, sorry' ... &gt;&gt;&gt; opt_fun(1, 2) no optional parameter, sorry &gt;&gt;&gt; opt_fun(1,2, optional="yes") optional parameter found, it is yes &gt;&gt;&gt; opt_fun(1,2, another="yes") no optional parameter, sorry </code></pre> <p>Second, you can supply a default parameter value of some value like <code>None</code> which a caller would never use. If the parameter has this default value, you know the caller did not specify the parameter. If the parameter has a non-default value, you know it came from the caller.</p>
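In Python 3 the same detection is often done with a <code>None</code> default instead of inspecting <code>**kwargs</code>; a sketch (the sentinel approach only breaks down if <code>None</code> is itself a meaningful value for the caller):

```python
def opt_fun(x1, x2, optional=None):
    # None is a sentinel meaning "the caller did not pass it"
    if optional is not None:
        return 'optional parameter found, it is %s' % optional
    return 'no optional parameter, sorry'

print(opt_fun(1, 2))                  # no optional parameter, sorry
print(opt_fun(1, 2, optional="yes"))  # optional parameter found, it is yes
```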
python
116
1,908,548
34,639,623
Using BeautifulSoup to Extract CData
<p>I'm trying to use BeautifulSoup from bs4/Python 3 to extract CData. However, whenever I search for it using the following, it returns an empty result. Can anyone point out what I'm doing wrong?</p> <pre><code>from bs4 import BeautifulSoup,CData txt = '''&lt;foobar&gt;We have &lt;![CDATA[some data here]]&gt; and more. &lt;/foobar&gt;''' soup = BeautifulSoup(txt) for cd in soup.findAll(text=True): if isinstance(cd, CData): print('CData contents: %r' % cd) </code></pre>
<p>The problem appears to be that the default parser doesn't parse CDATA properly. If you specify the correct parser, the CDATA shows up:</p> <pre><code>soup = BeautifulSoup(txt,'html.parser') </code></pre> <p>For more information on parsers, see <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser" rel="noreferrer">the docs</a></p> <p>I got onto this by using <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#diagnose" rel="noreferrer">the diagnose function</a>, which <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="noreferrer">the docs</a> recommend:</p> <blockquote> <p>If you have questions about Beautiful Soup, or run into problems, send mail to the discussion group. If your problem involves parsing an HTML document, be sure to mention what the diagnose() function says about that document.</p> </blockquote> <p>Using the diagnose() function gives you output of how the different parsers see your html, which enables you to choose the right parser for your use case.</p>
python|python-3.x|beautifulsoup|cdata
12
1,908,549
12,200,323
Error When using "make" for mod_wsgi 3.3
<p>I am getting the following error when trying to run <code>make</code> on mod_wsgi 3.3 compilation on CentOS x86_64:</p> <pre><code>/usr/local/include/python2.6/pyport.h:694:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." </code></pre> <p>Setup Info:</p> <p>My configure script: </p> <pre><code>./configure --prefix=/usr/local --with-python=/usr/local/bin/python2.6 --with-apxs=/usr/sbin/apxs </code></pre> <p>make:</p> <pre><code>LD_RUN_PATH=/usr/local/lib make file /usr/local/bin/python2.6: /usr/local/bin/python2.6: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), not stripped file /usr/sbin/httpd: /usr/sbin/httpd: ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, stripped </code></pre> <p>Any ideas?</p>
<p>Your Python installation appears to not have been installed properly for both 32 bit and 64 bit.</p>
python|apache|mod-wsgi
0
1,908,550
42,057,510
Finding overlap of sessions
<p>I have an arbitrary number of sessions with start and end timestamps</p> <p>Some of these sessions overlap. Multiple sessions could overlap at the same time.</p> <p>I am trying to find an algorithm that can detect the number of seconds of overlap. IE given 3 sessions like</p> <pre><code>-ID-|-start-|-end-| --1-|-----4-|--10-| --2-|-----5-|--12-| --3-|-----8-|--13-| </code></pre> <p>have it return a number that is the number of seconds that the sessions overlap.</p> <p>I have read about <a href="https://en.wikipedia.org/wiki/Interval_tree" rel="nofollow noreferrer">interval trees</a> and looked at python packages like <a href="https://www.google.com/search?q=interval%20tree%20python&amp;oq=interval%20tree%20python&amp;aqs=chrome..69i57j0l3.4061j0j1&amp;sourceid=chrome&amp;ie=UTF-8" rel="nofollow noreferrer">this one</a>.</p> <p>However, I am unsure how to get the number of seconds of overlap for a given set of records. Do you know of an algorithm or package? Python preferred but open to other languages and I can reimplement.</p>
<p>The first idea that came to my mind runs in O(n log n) because of the sorting. If <code>starts</code> and <code>ends</code> are sorted already, the algorithm has a complexity of O(n).</p> <pre><code>int findOverlappingTimes(int[] starts, int[] ends) { // TODO: Sort starts array // TODO: Sort ends array // TODO: Assert starts.length == ends.length int currStartsIndex = 0; int currEndsIndex = 0; int currOverlaps = 0; int lastOverlapIndex = -1; int result = 0; while (currEndsIndex &lt; ends.length) { if (currStartsIndex &lt; starts.length &amp;&amp; starts[currStartsIndex] &lt; ends[currEndsIndex]) { if (++currOverlaps == 2) { // Start counting if at least two intervals overlap lastOverlapIndex = currStartsIndex; } currStartsIndex++; } else { if (--currOverlaps &lt;= 1 &amp;&amp; lastOverlapIndex != -1) { // Stop counting result += ends[currEndsIndex] - starts[lastOverlapIndex]; lastOverlapIndex = -1; } currEndsIndex++; } } return result; } </code></pre> <p>The output for your input set</p> <pre><code>findOverlappingTimes(new int[] { 4, 5, 8 }, new int[] { 10, 12, 13 }) </code></pre> <p>returns <code>7</code>.</p> <p>The basic idea behind the algorithm is to iterate over the sessions and count the number of currently overlapping sessions. 
We start counting the overlapping time if at least two sessions overlap at the current time and stop counting the overlapping time if the overlaps end.</p> <p>Here are some more test cases and their respective output:</p> <pre><code>findOverlappingTimes(new int[] { 0 }, new int[] { 0 }) = 0 findOverlappingTimes(new int[] { 10 }, new int[] { 10 }) = 0 findOverlappingTimes(new int[] { 10 }, new int[] { 20 }) = 0 findOverlappingTimes(new int[] { 10, 10 }, new int[] { 10, 10 }) = 0 findOverlappingTimes(new int[] { 10, 10 }, new int[] { 11, 11 }) = 1 findOverlappingTimes(new int[] { 10, 10, 10 }, new int[] { 11, 11, 12 }) = 1 findOverlappingTimes(new int[] { 10, 10, 10, 50, 90, 110 }, new int[] { 11, 12, 12, 100, 150, 160 }) = 52 findOverlappingTimes(new int[] { 4, 5, 8, 100, 200, 200, 300, 300 }, new int[] { 10, 12, 13, 110, 200, 200, 320, 330 }) = 27 </code></pre>
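<p>Since Python was preferred in the question, here is a direct port of the same sweep-line algorithm:</p>

```python
def find_overlapping_times(starts, ends):
    """Total time during which at least two sessions overlap."""
    starts = sorted(starts)
    ends = sorted(ends)
    assert len(starts) == len(ends)

    si = ei = 0           # sweep positions in starts/ends
    overlaps = 0          # number of sessions currently open
    overlap_start = None  # where the current >=2-deep overlap began
    total = 0
    while ei < len(ends):
        if si < len(starts) and starts[si] < ends[ei]:
            overlaps += 1
            if overlaps == 2:          # second session opens: start counting
                overlap_start = starts[si]
            si += 1
        else:
            overlaps -= 1
            if overlaps <= 1 and overlap_start is not None:  # overlap ends
                total += ends[ei] - overlap_start
                overlap_start = None
            ei += 1
    return total

print(find_overlapping_times([4, 5, 8], [10, 12, 13]))  # 7
```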
python|session|tree|intervals
1
1,908,551
41,923,015
Running setup.py installation - Relative module names not supported
<p>When trying to run <code>develop</code> or <code>install</code> task of <code>setuptools</code> I am getting the <code>Relative module names not supported</code> error.</p> <p>The command run is <code>$ python -m setup.py develop</code></p> <p>My <code>setup.py</code> script is pretty simple with one entry point:</p> <pre><code>setup( name='foo', version='1.2.3', # ... include_package_data=True, packages=find_packages(), entry_points={ 'console_scripts': [ 'foo = somepkg.somemodule:mainfunc' ] }, install_requires=['requests',], setup_requires=['pytest-runner'], tests_require=['pytest', 'betamax', 'flexmock'] ) </code></pre>
<p>The issue was solved by not running <code>setup.py</code> as a module, i.e. running</p> <pre><code>$ python setup.py develop </code></pre> <p>instead of </p> <pre><code>$ python -m setup.py develop </code></pre> <p>The <code>-m</code> switch expects a module <em>name</em> (as in <code>python -m pip</code>), not a file name. <code>setup.py</code> is therefore read as a dotted module path (a submodule <code>py</code> of a package <code>setup</code>), which is what produces the confusing <code>Relative module names not supported</code> error.</p>
python|setuptools
3
1,908,552
42,059,052
django-rest-framework: How to serialize join of database models?
<p>I am working on a simple project for selling products from a website for which the model definition is as follows:</p> <pre><code>class Product(models.Model): """ Model for Products """ price = models.FloatField() description = models.TextField() url = models.CharField(max_length=200) def __str__(self): return self.description class Order(models.Model): """ Model for Orders """ UNPAID = 0 PAID = 1 FAILED = 2 STATUS = ( (UNPAID, 'UNPAID'), (PAID, 'PAID'), (FAILED, 'FAILED'), ) user = models.ForeignKey(User) product = models.ForeignKey(Product) orderdate = models.DateTimeField() token = models.CharField(max_length=30) paymentstatus = models.IntegerField(choices=STATUS) </code></pre> <p>Correspondingly the Serializers are defined as following:</p> <pre><code>class ProductSerializer(serializers.ModelSerializer): """ Serialize Product list """ class Meta: """ Metadata for Product Serializationt to expose API """ model = Product fields = ('id', 'price', 'description', 'url') class OrderSerializer(serializers.ModelSerializer): """ Serialize Order of Product """ class Meta: """ Order metadata """ model = Order fields = ('id', 'user', 'orderdate', 'token', 'paymentstatus', 'product') class OrderDetailSerializer(serializers.ModelSerializer): """ Serialize Order Details """ product = ProductSerializer(read_only=True) class Meta: """ Order metadata """ model = Order fields = ('id', 'user', 'orderdate', 'paymentstatus', 'product') </code></pre> <p>In the above example, is it possible to combine <code>OrderSerializer</code> and <code>OrderDetailsSerializer</code> into a single serializer ?</p> <p>I use the <code>OrderSerializer</code> when the user places a new <code>Order</code> i.e writes to database and <code>OrderDetailSerializer</code> to fetch the details of an order from the database.</p>
<p>You can do this with a single Serializer by using a custom field and by making the token write_only</p> <pre><code>class ProductSerializer(serializers.ModelSerializer): """ Serialize Product list """ class Meta: """ Metadata for Product Serializationt to expose API """ model = Product fields = ('id', 'price', 'description', 'url') class ProductField(serializers.PrimaryKeyRelatedField): def to_representation(self, value): id = super(ProductField, self).to_representation(value) try: product = Product.objects.get(pk=id) serializer = ProductSerializer(product) return serializer.data except Product.DoesNotExist: return None def get_choices(self, cutoff=None): queryset = self.get_queryset() if queryset is None: return {} return OrderedDict([(item.id, self.display_value(item)) for item in queryset]) class OrderSerializer(serializers.ModelSerializer): """ Serialize Order of Product """ product = ProductField(queryset=Product.objects.all()) class Meta: """ Order metadata """ model = Order fields = ('id', 'user', 'orderdate', 'token', 'paymentstatus', 'product') extra_kwargs = {'token': {'write_only': True}} </code></pre> <p>The custom field will allow you to use the Model ID when posting, while getting the nested serializer when you are getting the item. eg.</p> <p>POST:</p> <pre><code>{ "product": 10, ... } </code></pre> <p>GET:</p> <pre><code>{ "product": { "url": "http://localhost:8000/..." "price": "$2.50", ... } ... } </code></pre>
python|django|python-2.7|django-rest-framework
3
1,908,553
41,824,854
How to parse list of lists in django template?
<pre><code>{% for repo in repo_info %} {% for branch in branch_info[forloop.counter] %} &lt;li&gt;Branch Name --&gt; {{ branch }}&lt;/li&gt; {% endfor %} {% endfor %} </code></pre> <p><code>branch_info</code> is a list of lists.</p> <p>It gives me error that could not parse the remainder on this ---> branch_info[forloop.counter]</p> <p>Is there any way to parse over the list elements which are also a list?</p>
<p>You can create a simple <a href="https://docs.djangoproject.com/en/dev/howto/custom-template-tags/" rel="nofollow noreferrer">template filter</a> that returns the data at the requested index</p> <pre><code># some file named my_template_tags.py @register.filter def at_index(data, index): return data[index] </code></pre> <p>This will throw an exception if you use an invalid index. If you don't want an exception, you will have to catch it and return some valid data.</p> <p>It can also be used with dictionaries but you pass in the key instead of the index.</p> <p>Note two details: the filter syntax (<code>|at_index:...</code>) requires <code>@register.filter</code> rather than <code>@register.simple_tag</code>, and <code>forloop.counter</code> is 1-based, so for list indexing you want the 0-based <code>forloop.counter0</code>:</p> <pre><code>{% load my_template_tags %} {% for repo in repo_info %} {% for branch in branch_info|at_index:forloop.counter0 %} &lt;li&gt;Branch Name --&gt; {{ branch }}&lt;/li&gt; {% endfor %} {% endfor %} </code></pre>
python|django|list|django-templates
0
1,908,554
47,479,980
Groupby and filtering in Pandas
<p>For a dataframe like this:</p> <pre><code> mpg yr name 0 18 70 chevrolet malibu 1 15 70 buick skylark 2 18 70 ford torino 3 16 70 chevrolet el camino 4 17 71 chevrolet chevelle </code></pre> <p>I can get mean MPG by year like this:</p> <pre><code>auto.groupby('yr')['mpg'].mean() </code></pre> <p>I tried the following to get mean MPG by year for chevrolet:</p> <pre><code>auto.groupby(['yr', auto['name'].str.contains('chevrolet')])['mpg'].mean() </code></pre> <p>However it creates an additional True/False boolean column, like so, where False is Non-Chevrolet and True is Chevrolet:</p> <pre><code>yr name 70 False 16.5 True 17.0 71 False NaN True 17.0 </code></pre> <p>What I am looking for is:</p> <pre><code>yr mpg x y </code></pre> <p>Can you please A) explain why my attempt didn't work and B) help correcting my mistake and explaining why it needs to be done that way. Thank you!</p>
<p>We should filter before the <code>groupby</code>:</p> <pre><code>auto[auto['name'].str.contains('chevrolet')].groupby('yr')['mpg'].mean() Out[226]: yr 70 17 71 17 Name: mpg, dtype: int64 </code></pre> <p>Your method creates another groupby key holding <code>[True, False]</code> values; pandas then groups by both that boolean key and the <code>yr</code> column, which is why you see the extra True/False level.</p> <p>EDIT: </p> <p>You can think of it as pandas grouping on a data frame that looks like this:</p> <pre><code>auto['yourkey']=auto['name'].str.contains('chevrolet') auto Out[228]: mpg yr name yourkey 0 18 70 chevroletmalibu True 1 15 70 buickskylark False 2 18 70 fordtorino False 3 16 70 chevroletelcamino True 4 17 71 chevroletchevelle True </code></pre>
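<p>For completeness, a self-contained version of the filter-then-group approach (the frame below is rebuilt from the question's sample rows):</p>

```python
import pandas as pd

# Sample data rebuilt from the question
auto = pd.DataFrame({
    'mpg':  [18, 15, 18, 16, 17],
    'yr':   [70, 70, 70, 70, 71],
    'name': ['chevrolet malibu', 'buick skylark', 'ford torino',
             'chevrolet el camino', 'chevrolet chevelle'],
})

# Filter first, then group: only the Chevrolet rows enter the groupby
chevy_mpg = auto[auto['name'].str.contains('chevrolet')].groupby('yr')['mpg'].mean()
print(chevy_mpg)
# yr
# 70    17.0
# 71    17.0
# Name: mpg, dtype: float64
```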
python
1
1,908,555
47,225,864
Using pytest on ranges of many parameters
<p>Let's say I have a <code>Simulation</code> object whose core attribute is a dictionary of parameters that takes something like the following form:</p> <pre><code>@pytest.fixture def param_base(): '''Dict of parameter defaults''' return { "fs" : 2e4, "sweep_length" : 1, "num_trials" : 300, ... "pool_tau" : 1.00, "quantal_size" : -10, "a_tau" : (0.001,0.005) } </code></pre> <p>I would like to write a pytest function that simply runs this simulation with a range of values for each of these parameters. A slightly differently structured dictionary can encapsulate this idea:</p> <pre><code>@pytest.fixture def param_ranges(): '''Dict of parameter ranges''' p_name = [ "cav_p_open", "num_trials", "num_stim", "num_cav", "cav_i", "num_cav_ratio", "vesicle_prox", ] p_sets = [ [0,0.01,0.99,1], #cav_p_open [1,10,300], #num_trials [1,2,5], #num_stim [1,3,10], #num_cav [0,1,5,10], #cav_i [1,2], #num_cav_ratio [0,0.01,0.25,1], #vesicle_prox ] return dict(zip(p_name,p_sets)) </code></pre> <p>Importantly, <strong>I do NOT want to run all combinations of all of these parameters</strong>, as the number of simulations grows far too quickly. 
I <strong>only want to vary one parameter at a time</strong>, while leaving the other parameters at their default values.</p> <p>My current solution is as follows (continued after the above code):</p> <pre><code>parameter_names = [ "cav_p_open", "num_trials", "num_stim", "num_cav", "cav_i", "num_cav_ratio", "vesicle_prox", ] @pytest.mark.parametrize("p_name", parameter_names) def test_runModel_range_params(p_name,param_ranges,param_base): alt_params = copy.deepcopy(param_base) p_range = param_ranges[p_name] for i in range(len(p_range)): alt_params[p_name] = p_range[i] SIM = utils.Simulation(params = alt_params) </code></pre> <p>Which works pretty well, but because I'm looping through each parameter range, I can only see if the code fails because <code>utils.Simulation</code> failed at <strong>some</strong> value of a particular parameter, without knowing <strong>which</strong> one it failed on specifically.</p> <p>So I think what I'm looking for is something like a nested version of <code>pytest.mark.parameterize</code> where I can run <code>test_runModel_range_params</code> on each of the range values for each parameter.</p> <p>Any ideas? Extra points for elegance!</p>
<p>I think what you are looking for is stacked parametrization. From the <a href="https://docs.pytest.org/en/latest/parametrize.html#pytest-mark-parametrize-parametrizing-test-functions" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>To get all combinations of multiple parametrized arguments you can stack parametrize decorators:</p> </blockquote> <pre><code>import pytest @pytest.mark.parametrize("control_var1, control_var2", [(0, 1), ('b','a')]) @pytest.mark.parametrize("default_var1, default_var2", [(2, 3), ('b','a')]) def test_foo(control_var1, control_var2, default_var1, default_var2): pass </code></pre>
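<p>Stacked decorators build the full cross product, though. If you want to keep the one-parameter-at-a-time sweep from the question while still seeing exactly which value failed, one alternative (a sketch, not part of the answer above; the names and values below are illustrative) is to flatten the sweep into <code>(name, value)</code> pairs and parametrize over those, so every single value gets its own test id and pass/fail report:</p>

```python
import pytest

# Illustrative subset of the question's parameter ranges
param_ranges = {
    "cav_p_open": [0, 0.01, 0.99, 1],
    "num_trials": [1, 10, 300],
}

# One (name, value) tuple per test case: each case varies a single
# parameter while all others would stay at their defaults.
one_at_a_time = [(name, value)
                 for name, values in param_ranges.items()
                 for value in values]

@pytest.mark.parametrize("p_name,p_value", one_at_a_time)
def test_runModel_single_param(p_name, p_value):
    # alt_params = dict(param_base, **{p_name: p_value})
    # utils.Simulation(params=alt_params)  # would run the real model here
    pass
```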
python|nested|pytest|parameterized-tests
1
1,908,556
47,104,627
Access variable that was changed from another module
<p>I'm studying Python and creating a simple chat bot. Consider I have a module with main function:</p> <pre><code># bot.py class QueueWrapper: pass wrapper = QueueWrapper() # also tried with dict def main(): wrapper.queue = init_queue() if __name__ == '__main__': main() </code></pre> <p>And consider there is an another module where i want to access <code>queue</code> from bot module, but function from this module gets invoked some time after <code>bot.py</code> module got invoked:</p> <pre><code># another_module.py from bot import wrapper def create_job(): wrapper.queue.do_smth() # &lt;- error. object has no attribute ... </code></pre> <p>And when I try to access <code>queue</code> that should be in <code>wrapper</code> object I get and error saying there is no <code>queue</code> in <code>wrapper</code>. But if I run in debug mode over <code>bot</code> module I can clearly see that <code>wrapper.queue</code> contains object. But when <code>create_job</code> function from <code>another_module.py</code> is invoked it doesn't know that there were a <code>queue</code> in <code>wrapper</code>. </p> <p>The problem here in my opinion is that var <code>queue</code> from <code>bot.py</code> gets initialized after <code>main()</code> and <code>init_queue()</code> had finished working but module itself gets imported into <code>another_module</code> before that.</p> <p>What am I doing wrong (probably missing something about variable scope) and how can I get my <code>wrapper.queue</code> initialized in when <code>create_job()</code> is invoked?</p> <p>Thanks in advance!</p>
<p>You could use a <a href="https://docs.python.org/3/library/functions.html#property" rel="nofollow noreferrer">property</a>, so that the <code>queue</code> attribute is automatically initialised when it is first accessed: </p> <pre><code>class QueueWrapper: _queue = None @property def queue(self): if self._queue is None: self._queue = init_queue() return self._queue wrapper = QueueWrapper() </code></pre>
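<p>A quick runnable sketch of the lazy behaviour (<code>init_queue</code> is stubbed out here, since the real one lives in <code>bot.py</code>):</p>

```python
def init_queue():
    # Stand-in for the real initialiser defined in bot.py
    return ['job-1']

class QueueWrapper:
    _queue = None

    @property
    def queue(self):
        if self._queue is None:  # first access triggers initialisation
            self._queue = init_queue()
        return self._queue

wrapper = QueueWrapper()
print(wrapper._queue)  # None: nothing has been initialised yet
print(wrapper.queue)   # ['job-1']: created on first access
```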
python|python-3.x|python-import
0
1,908,557
47,375,936
No module named pytesseract error
<p>I am trying to use pytesseract for OCR, on a raspberry pi using Raspbian</p> <p>I have read several questions on this topic, but can't find an answer that works, they usually say to install pytesseract with pip, and I did it.</p> <p>my code is very simple:</p> <pre><code>import pytesseract from PIL import Image print(pytesseract.image_to_string(Image.open('test.jpg'))) </code></pre> <p>But it returns error message : <em>"ImportError: No module named 'pytesseract'</em> .</p> <p>I have installed tesseracrt-ocr (the <strong>whereis tesseract-ocr</strong> command returns <em>/usr/share/tesseract-ocr</em>)</p> <p>I have installed pytesseract with <strong>pip install tesseract</strong> (which returns <em>successfully installed Pillow-4.3.0 olefile-0.44 pytesseract-0.1.7</em> ... but the <strong>whereis pytesseract</strong> command does not return anything --> a problem?).</p> <p>Do you have any idea of the problem I have ?</p>
<p>First make sure you installed the right package: the module is provided by <code>pytesseract</code>, not <code>tesseract</code>.</p> <pre><code>pip install pytesseract </code></pre> <p>Then try:</p> <pre><code>&gt;&gt;&gt; import pytesseract </code></pre> <p>If the import still fails, something went wrong with the installation: check that a <code>pytesseract</code> folder exists under your interpreter's <code>site-packages</code> directory (e.g. <code>/usr/lib/python3/dist-packages</code> on Raspbian, or <code>\Python27\Lib\site-packages</code> on Windows), and make sure you ran <code>pip</code> for the same Python version you use to run the script.</p>
python|tesseract|python-tesseract
5
1,908,558
47,194,629
super function in Multiple inheritance in python
<p>I have written this code in Python 3.4 using classes. I have implemented multiple inheritance with the <code>super()</code> function, and I want to call the <code>__init__()</code> method of the <code>library</code> class, but I am unable to. Can anyone tell me my mistake?</p> <p><b>code</b></p> <pre><code>class college: def __init__(self, rollno): print("Roll no:", rollno) class library: def __init__(self, libcardno): print("Library card no:", libcardno) class student(college, library): def __init__(self, name): print("Name:", name) super().__init__(5560) super().__init__(60) </code></pre> <p><b>output</b></p> <pre><code>&gt;&gt;&gt; obj = student("John") Name: John Roll no: 5560 Roll no: 60 </code></pre> <p>Just to be clear: this is not a duplicate of another question.</p>
<p>Inside <code>student</code>, <code>super()</code> always refers to the next class in the MRO, which is <code>college</code>; that is why both of your <code>super().__init__</code> calls printed a roll number. You can instead directly call the <code>__init__</code> method of each respective parent class:</p> <pre><code>class student(college, library): def __init__(self, name): print("Name:", name) college.__init__(self,5560) library.__init__(self,60) </code></pre>
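<p>A runnable sketch that makes the resolution order visible (same classes as in the question, plus an MRO printout):</p>

```python
class college:
    def __init__(self, rollno):
        print("Roll no:", rollno)

class library:
    def __init__(self, libcardno):
        print("Library card no:", libcardno)

class student(college, library):
    def __init__(self, name):
        print("Name:", name)
        college.__init__(self, 5560)  # explicit calls reach both bases
        library.__init__(self, 60)

# super() inside student always resolves to the next entry after student
# in this list, i.e. college, never library directly.
print([cls.__name__ for cls in student.__mro__])
# ['student', 'college', 'library', 'object']

obj = student("John")  # Name: John / Roll no: 5560 / Library card no: 60
```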
python|python-3.x|class|inheritance|multiple-inheritance
4
1,908,559
70,910,607
Web scraping: why does it return a null value? Maybe a JavaScript issue?
<p>ok, so I'm new to web scraping. I followed a tutorial I found on the internet and it works a treat for a specific website. so I tried to change it up to work for another site. I think I have figured out the headers as I get a 200 response, But when I'm targeting a div to pull its value I am just met with null. So my question is am I doing something wrong here? I have tried to follow other tuts to see if it would answer my question, But I guess because I am new I'm not really sure what to look for?!</p> <p><strong>EDIT:</strong> I should be a bit more specific. so as you can see in my code, I am trying to scrape data from Chaos cards website, I think I have the search function sorted (could be wrong?) but what I'm trying to achieve is when I inspect the page I would like to take the data from</p> <p><code>&lt;div class=&quot;product-detail__content&quot;&gt;Out of stock &lt;/div&gt;</code> Specifically the &quot;Out of stock&quot; part. as I know this div will contain &quot;in stock&quot; assuming it is. But when I target this div I am just met with null</p> <p>All I am trying to do is set up a scrapper that when a user in discord types a specific product it will search the website, if it is in stock or not, it will return saying in stock or not in stock. 
But for now I'm trying to take baby steps, and just get it to firstly print the data I'm after</p> <p><strong>CODE</strong></p> <pre><code>import os import asyncio import discord import bs4 as bs import requests r = requests.session() client = discord.Client() headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36 Edg/97.0.1072.76'} @client.event async def on_ready(): print(f'{client.user.name} - Have a good day &lt;3') result = requests.get (&quot;https://www.chaoscards.co.uk/&quot;, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36 Edg/97.0.1072.76'}) print(result.status_code) def site_search(keyword): resp = r.get(f'https://www.chaoscards.co.uk/prod/{keyword}', headers = headers) # print(resp.text) soup = bs.BeautifulSoup(resp.text, 'lxml') in_stock ='' out_of_stock ='' for x in soup.find_all('div', {'class': 'product-detail__content'}): if ' Out of stock ' in (x): in_stock = 'Out of stock bro' if ' In stock ' in str(): out_of_stock = 'Its in stock ' #current_image_url = soup.find('img', {'itemprop': 'image'}).get('src') # #current_name = soup.find('p', {'class': 'listing-title'}).get_text() return in_stock,out_of_stock @client.event async def on_message(message): if message.content.startswith('.sm'): keyword = message.content.split('.sm')[1] print(site_search(keyword)) in_stock,out_of_stock = site_search(keyword) </code></pre> <p><strong>EDIT 2:</strong> So i printed the text from <code>resp = r.get(f'https://www.chaoscards.co.uk/prod/{keyword}', headers = headers)</code> And received this in return</p> <pre><code>&lt;html lang=&quot;en-US&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot; /&gt; &lt;meta http-equiv=&quot;Content-Type&quot; content=&quot;text/html; charset=UTF-8&quot; /&gt; &lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;IE=Edge,chrome=1&quot; /&gt; 
&lt;meta name=&quot;robots&quot; content=&quot;noindex, nofollow&quot; /&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width,initial-scale=1&quot; /&gt; &lt;title&gt;Just a moment...&lt;/title&gt; &lt;style type=&quot;text/css&quot;&gt; html, body {width: 100%; height: 100%; margin: 0; padding: 0;} body {background-color: #ffffff; color: #000000; font-family:-apple-system, system-ui, BlinkMacSystemFont, &quot;Segoe UI&quot;, Roboto, Oxygen, Ubuntu, &quot;Helvetica Neue&quot;,Arial, sans-serif; font-size: 16px; line-height: 1.7em;-webkit-font-smoothing: antialiased;} h1 { text-align: center; font-weight:700; margin: 16px 0; font-size: 32px; color:#000000; line-height: 1.25;} p {font-size: 20px; font-weight: 400; margin: 8px 0;} p, .attribution, {text-align: center;} #spinner {margin: 0 auto 30px auto; display: block;} .attribution {margin-top: 32px;} @keyframes fader { 0% {opacity: 0.2;} 50% {opacity: 1.0;} 100% {opacity: 0.2;} } @-webkit-keyframes fader { 0% {opacity: 0.2;} 50% {opacity: 1.0;} 100% {opacity: 0.2;} } #cf-bubbles &gt; .bubbles { animation: fader 1.6s infinite;} #cf-bubbles &gt; .bubbles:nth-child(2) { animation-delay: .2s;} #cf-bubbles &gt; .bubbles:nth-child(3) { animation-delay: .4s;} .bubbles { background-color: #f58220; width:20px; height: 20px; margin:2px; border-radius:100%; display:inline-block; } a { color: #2c7cb0; text-decoration: none; -moz-transition: color 0.15s ease; -o-transition: color 0.15s ease; -webkit-transition: color 0.15s ease; transition: color 0.15s ease; } a:hover{color: #f4a15d} .attribution{font-size: 16px; line-height: 1.5;} .ray_id{display: block; margin-top: 8px;} #cf-wrapper #challenge-form { padding-top:25px; padding-bottom:25px; } #cf-hcaptcha-container { text-align:center;} #cf-hcaptcha-container iframe { display: inline-block;} &lt;/style&gt; &lt;meta http-equiv=&quot;refresh&quot; content=&quot;35&quot;&gt; &lt;script type=&quot;text/javascript&quot;&gt; //&lt;![CDATA[ (function(){ 
window._cf_chl_opt={ cvId: &quot;2&quot;, cType: &quot;non-interactive&quot;, cNounce: &quot;66939&quot;, cRay: &quot;6d5bfeb08acc8771&quot;, cHash: &quot;18474546270a019&quot;, cPMDTk: &quot;wjoavPcyn4sd4H8OTvY2JlyVlLXStFtB1PtHY4IbL58-1643559283-0-gaNycGzNB70&quot;, cUPMDTk: &quot;\/prod\/Pokemon-Leafeon-V-Star-Special-Collection-Box?__cf_chl_tk=wjoavPcyn4sd4H8OTvY2JlyVlLXStFtB1PtHY4IbL58-1643559283-0-gaNycGzNB70&quot;, cFPWv: &quot;b&quot;, cTTimeMs: &quot;1000&quot;, cRq: { ru: &quot;aHR0cHM6Ly93d3cuY2hhb3NjYXJkcy5jby51ay9wcm9kL1Bva2Vtb24tTGVhZmVvbi1WLVN0YXItU3BlY2lhbC1Db2xsZWN0aW9uLUJveA==&quot;, ra: &quot;TW96aWxsYS81LjAgKFdpbmRvd3MgTlQgMTAuMDsgV2luNjQ7IHg2NCkgQXBwbGVXZWJLaXQvNTM3LjM2IChLSFRNTCwgbGlrZSBHZWNrbykgQ2hyb21lLzk3LjAuNDY5Mi45OSBTYWZhcmkvNTM3LjM2IEVkZy85Ny4wLjEwNzIuNzY=&quot;, rm: &quot;R0VU&quot;, d: &quot;iWUrdApuyTqwp7Sa1s7+bi5hqVur/PkVsEkqFAgmNisGGdY/Hz93xG5mIaMzA9XizszFqLjvwVKypShAl3Lm45xvxp8eYawYXrvO505H8+ouA9KL2g+cmlQJrfXxkdmFI5QseUz1MIX/PGL/2S4A1HCLT7gmpXqr+muDiazQCUs7XUTOla+n/YWWyPERFG/uhI8+uOckDxuY+F8HdGDGE8xus50JmOBLgGMC4gELQfxSTyg7Ed7Lw1YUquPfkjSt9Q4aQ2nOWtuzYmO3zV/UTeu0qSsrMI/p7pPYi9ZDANElXlNnuUhFcMd2aDSnUF/aYdNG09p2RTiG3/Jkj5fPpGt4gm9X98Dd6X+OndUT/x01iSCq4NTgwgxjmubgZMbmuryIaU2eFKIV7o7TuJkIz1x6p4mdhapTdMMhsfVTS1iNWy0L0TwedlFeUaCNPv+lH76ely2NypA/hUtDUVYz1Eey/bwaxGZBp9McRcVwpsPbTCwddxr9Oc29obSDNCid5gpRPhu1Efs0a9zixzPEjQEjZD5tJ7SaFnmI6n7A6Hjc9YzHmvjPrNAUv++ZuWAD&quot;, t: &quot;MTY0MzU1OTI4My4yOTAwMDA=&quot;, m: &quot;HvTOqkkdUexOvObprQaK20tiA50EsMdMAUNxBs9a76U=&quot;, i1: &quot;KnbCImKzNxo3XehPmg6jWg==&quot;, i2: &quot;oGYSEcaLbEuXjAZsN7GZBg==&quot;, zh: &quot;JJbyu7T+3hg5jWQCnkKHsP/7REhUTr23SkrwnAaFfjA=&quot;, uh: &quot;l4HLyhywYXQDOYBGJBbVDnfNOSLbBOqVMJwcpsr3qjc=&quot;, hh: &quot;8JWW5AsAg62xfggeMY1P1hRpDlpOqO6xoRTKU6X/36Q=&quot;, } } window._cf_chl_enter = function(){window._cf_chl_opt.p=1}; })(); //]]&gt; &lt;/script&gt; &lt;/head&gt; &lt;body&gt; &lt;table width=&quot;100%&quot; height=&quot;100%&quot; cellpadding=&quot;20&quot;&gt; 
&lt;tr&gt; &lt;td align=&quot;center&quot; valign=&quot;middle&quot;&gt; &lt;div class=&quot;cf-browser-verification cf-im-under-attack&quot;&gt; &lt;noscript&gt; &lt;h1 data-translate=&quot;turn_on_js&quot; style=&quot;color:#bd2426;&quot;&gt;Please turn JavaScript on and reload the page.&lt;/h1&gt; &lt;/noscript&gt; &lt;div id=&quot;cf-content&quot; style=&quot;display:none&quot;&gt; &lt;div id=&quot;cf-bubbles&quot;&gt; &lt;div class=&quot;bubbles&quot;&gt;&lt;/div&gt; &lt;div class=&quot;bubbles&quot;&gt;&lt;/div&gt; &lt;div class=&quot;bubbles&quot;&gt;&lt;/div&gt; &lt;/div&gt; &lt;h1&gt;&lt;span data-translate=&quot;checking_browser&quot;&gt;Checking your browser before accessing&lt;/span&gt; www.chaoscards.co.uk.&lt;/h1&gt; &lt;div id=&quot;no-cookie-warning&quot; class=&quot;cookie-warning&quot; data-translate=&quot;turn_on_cookies&quot; style=&quot;display:none&quot;&gt; &lt;p data-translate=&quot;turn_on_cookies&quot; style=&quot;color:#bd2426;&quot;&gt;Please enable Cookies and reload the page.&lt;/p&gt; &lt;/div&gt; &lt;p data-translate=&quot;process_is_automatic&quot;&gt;This process is automatic. 
Your browser will redirect to your requested content shortly.&lt;/p&gt; &lt;p data-translate=&quot;allow_5_secs&quot; id=&quot;cf-spinner-allow-5-secs&quot; &gt;Please allow up to 5 seconds&amp;hellip;&lt;/p&gt; &lt;p data-translate=&quot;redirecting&quot; id=&quot;cf-spinner-redirecting&quot; style=&quot;display:none&quot;&gt;Redirecting&amp;hellip;&lt;/p&gt; &lt;/div&gt; &lt;form class=&quot;challenge-form&quot; id=&quot;challenge-form&quot; action=&quot;/prod/Pokemon-Leafeon-V-Star-Special-Collection-Box?__cf_chl_f_tk=wjoavPcyn4sd4H8OTvY2JlyVlLXStFtB1PtHY4IbL58-1643559283-0-gaNycGzNB70&quot; method=&quot;POST&quot; enctype=&quot;application/x-www-form-urlencoded&quot;&gt; &lt;input type=&quot;hidden&quot; name=&quot;md&quot; value=&quot;lBy7XQRIP3rCTVaX6BoLog981WTI9wl7VPUnFUhdr80-1643559283-0-AfIJze-AsdFTbXwD6zN0kNrMUN92opj5F0JV4HP_IIHIJajx_7BeYxgFsAgzPKKs7B76uy2sTy0NMNe5Lonr5nsHsVd0d8oakLrUtEc43FE_-loi5O9yohJL7zVGcrm5BD3ZjEJMgxY3VwIM0TIl4QifHX3Xiacvm9Us_1J5_OALeEt8dyCDKBUbdhJbfkAV36zEt1-iFbst-6wTI-t_LM6YSJOD9j1K_sxVqdUzAawDadHBGslCDmRO4mA2LTGMhZdNdVN_RUZkUpqWKatfeHID4Hp-w3fx3tW4lxHE6gC86Ud8f-YgeYHKUDkfA_YomWCUxk9WFwoEYlr7MqQhQgWfBgxhAJNpXEbcaIb9e71bSZvbGw8BCLipFXuSk2ZvFofI-CdPIymN17v4S2xNgL92cGpXRhcr1OwJT6iFPJ8zuxPXPGud3C9ZeHnXbntYoYRQFXRcpcYcKIbBJEG8lIhJ4aWqmVkpkmai5oGlf0tnolsiO_5-i8cCEazYlbcUCqKnVDt6UGfuQNJdQXTNmmwNusmt4kPFLztjhNjKydzWHO6AWswLkMzj7rC1759cGdsyBiQkzb632-4Yqvi4f6ZOwBOEWfE0t8ZwdQtkEWy4U84c9j6hM8MG_xgl3t_0yKWRIFANVD9vkN1pqTfJRo8bQPm9oD3KmvRrVl5y_5InKhUotZYMJVV6DhV98WvHVOvjOGqJMPs75vQ0VaqQUiPzlyJ1MQ0G4Qe-sZzoIP0cxuvkCbQE2kxhRrzN887jWQ&quot; /&gt; &lt;input type=&quot;hidden&quot; name=&quot;r&quot; 
value=&quot;IxoGI_uynuxxTjqGKlMnSQ0FLUh3S6TIZtjcFTDgzzE-1643559283-0-ASk8gczAHx3QxOhXW8WEDt3t1OSXiJ7qJHx+ppz1M0nJipXy14O9Y2KKa1Q/qTKOeLAkBCnJuHaVX7YBvcXDde6M8x8kRdlX/AS1CNXDoqegpDIwjQKyyw0/e19MMsryFGK5ltynnTh9NKTFHJFOEcTF8OKBZqgcGH0dEGH3I1e/lPhMAAsMmWkE0i7aPiwTtEPYRkL/z8gpJbyDyqF/pL+ykLEqtpq3EDfFYbdMn4Fv27XNs4YKU9z4Z1DrjECS/Nwo4hCq0ZLYafLBnFHp9ZzIVEpGrM07Teci91bqTz3COri7Y3YZ0Vyj7NsZ/DPA+ykGWKU794u7OeSpIR4iifH6AEJA5ZVjhPMr46W7cvbgEAReq8TA+QdkIo7IA4Yn/Zcu77hx2ESjMpGMbXbJE4OrjZ8Xng/GoG18lBpF0nJUA9QAeUQ4cDOcHK8OkfHObBdTN4qGtQywBGdR7Wm8ZsxDjxry1kOKx1r4wXH1/PdOB0C5wWPVz5k6UPtIJOeqDfc8q7GFQ4f1UmHIeHE3Xg5FfntTitBbAQwNEZ/ymhpO2iGeLjog/wgAtiNY/qgnpTkpJXTjYgZoENwu9VgPIAaJt9wOUPLGnSkQu9nTDDnlbo2DwLmQKdfIYtCUfSF2DNNcyrk7LzWDHc5mWsfXhG/d9J2Ns9nJ4hWHcovnqOHHGLI7QLjBNKBW8+OrFn52OkYdCfXKrcC1PiV3mybK1gYT2uPWGjzOEodQ3x4GzII4qhvonkEPlaTKFnTA3sygjmsoQmbc6GnFQxP0kBIyI5B7qtF29/g2jTSB6ymvHQR+oNtrkvfaxM0tSt0tiiUV6HiI/83jCBmWkt6552D2PskpfPLgZqf968KCL5M9YfDBEBHlBswKZMBK6TvPGtS04P4S5gmi+M1rBuaubxKLZhUIs0V2OOy+HAZsJfluf6SNJe3W9x8EPqnXWT0b3tM3ybuYy4yj31JdChBk+On5zVqAoPpaWLQTeRLinVW2ludZ7KMFJltS9LqAJ0evwNcEJAnklwuE9/4uagEJjEsuWkf3C6UIyCFB5lfKlofe4hhwxkanVjds+Eg1bIJld0xqUNjPmZdA3LIWnzAq3iL5OoWN2WOAz87k7XI4A9H/ruSiPvtHf5KIOtX3fxDVP3TziOAtvb81p+pgK+WiL3LAEbEasDMw9O3HBSaXw54Gmq+gfNkoPDGCgyP7C25WH67yeqkoVtq64Q3EOpSglfjyyEmQyXT24Gs14zta3Ul6N1jSM38CDd3tIV/XCZZg3xa5TggKjI43lKe2dflR3pllF2Bpg8LH1JVMH6NKsts3TkBAy+KWrExBPOeoHgu0BZCIxs9nh1kk0k/LFQhjC6ENDW6swlJ+4hlv9865jTuu5DA4emNvpmHXKmjQ0OlQXpJJYhMcqRAoHpsT9TSaO2MYYZpbHx4kmYJIy04N5jY9TB8vzfnimnxrTYKrrM+zSxNPVCXZDVh8LUaxQKqYbgl5LsecA2QFzIc66SQc/8waruFwstNO/f8x/6ijA9s3EWrueKmYK6yeQqWrw4iVO30xppcSLK3lvk0aUYyu1TiQOXCokDCFUDIrxG/S3PEq4UgNIpTF3aRhBtkq49XCYd7MfCteVBzkDQu28IaN+JdojGY8LrVdR4VInr6p8+fmpirQZ7WgfWWLHJhqr8pF8eHG60yt372F+c5QecYvwGtitOitHbjOXKeLDKoXmtnnguTMRw4Xwz+ICfhz/wZ96PzlgKuPydwREQ4DbrhMf+mmRCc0EWi2QTGGdt56EiR/lJmXq8FpiRTgYuTRxSTtbtwFS1BHKrgdrc+Zuqm3h7t9WRvlRj8KhZEXsDJVWJgKDVT0sjox3phvRlo68Gr016valv5Lr+JAujzr1azDMgSaQhNL4cCuxW5jzL5Q3V/k9JgjEg==&quot;/
&gt; &lt;input type=&quot;hidden&quot; value=&quot;b8506ea0b61c6bf512de56146f25f432&quot; id=&quot;jschl-vc&quot; name=&quot;jschl_vc&quot;/&gt; &lt;!-- &lt;input type=&quot;hidden&quot; value=&quot;&quot; id=&quot;jschl-vc&quot; name=&quot;jschl_vc&quot;/&gt; --&gt; &lt;input type=&quot;hidden&quot; name=&quot;pass&quot; value=&quot;1643559284.29-RM/SqTEMYf&quot;/&gt; &lt;input type=&quot;hidden&quot; id=&quot;jschl-answer&quot; name=&quot;jschl_answer&quot;/&gt; &lt;/form&gt; &lt;script type=&quot;text/javascript&quot;&gt; //&lt;![CDATA[ (function(){ var a = document.getElementById('cf-content'); a.style.display = 'block'; var isIE = /(MSIE|Trident\/|Edge\/)/i.test(window.navigator.userAgent); var trkjs = isIE ? new Image() : document.createElement('img'); trkjs.setAttribute(&quot;src&quot;, &quot;/cdn-cgi/images/trace/jschal/js/transparent.gif?ray=6d5bfeb08acc8771&quot;); trkjs.id = &quot;trk_jschal_js&quot;; trkjs.setAttribute(&quot;alt&quot;, &quot;&quot;); document.body.appendChild(trkjs); var cpo=document.createElement('script'); cpo.type='text/javascript'; cpo.src=&quot;/cdn-cgi/challenge-platform/h/b/orchestrate/jsch/v1?ray=6d5bfeb08acc8771&quot;; window._cf_chl_opt.cOgUQuery = location.search === '' &amp;&amp; location.href.indexOf('?') !== -1 ? '?' : location.search; window._cf_chl_opt.cOgUHash = location.hash === '' &amp;&amp; location.href.indexOf('#') !== -1 ? 
'#' : location.hash; if (window._cf_chl_opt.cUPMDTk &amp;&amp; window.history &amp;&amp; window.history.replaceState) { var ogU = location.pathname + window._cf_chl_opt.cOgUQuery + window._cf_chl_opt.cOgUHash; history.replaceState(null, null, &quot;\/prod\/Pokemon-Leafeon-V-Star-Special-Collection-Box?__cf_chl_rt_tk=wjoavPcyn4sd4H8OTvY2JlyVlLXStFtB1PtHY4IbL58-1643559283-0-gaNycGzNB70&quot; + window._cf_chl_opt.cOgUHash); cpo.onload = function() { history.replaceState(null, null, ogU); }; } document.getElementsByTagName('head')[0].appendChild(cpo); }()); //]]&gt; &lt;/script&gt; &lt;div id=&quot;trk_jschal_nojs&quot; style=&quot;background-image:url('/cdn-cgi/images/trace/jschal/nojs/transparent.gif?ray=6d5bfeb08acc8771')&quot;&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;attribution&quot;&gt; DDoS protection by &lt;a rel=&quot;noopener noreferrer&quot; href=&quot;https://www.cloudflare.com/5xx-error-landing/&quot; target=&quot;_blank&quot;&gt;Cloudflare&lt;/a&gt; &lt;br /&gt; &lt;span class=&quot;ray_id&quot;&gt;Ray ID: &lt;code&gt;6d5bfeb08acc8771&lt;/code&gt;&lt;/span&gt; &lt;/div&gt; &lt;/td&gt; &lt;/tr&gt; &lt;/table&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>One thing that stood out to me is this: <code>&lt;h1 data-translate=&quot;turn_on_js&quot; style=&quot;color:#bd2426;&quot;&gt;Please turn JavaScript on and reload the page.&lt;/h1&gt;</code> So I am using Beautiful Soup, and I have heard it can't handle JavaScript. Is this what's affecting my search? Has anyone got tips? Or, if you know the answer to my question but would prefer to point me in the correct direction, I would really appreciate it! Thank you!</p>
<p>So I found out my problem, as you can see from the update I made on the original post: I was being blocked from accessing the site. This is due to it being a JavaScript-loaded site, and Beautiful Soup can’t execute JavaScript. Therefore I scrapped that code and followed a new tutorial that uses Selenium, and now it works perfectly.</p> <p>For anyone who stumbles across this post with the same issue I will provide a link to the tutorial I followed in hopes it helps you!</p> <p>Link: <a href="https://replit.com/talk/learn/Python-Selenium-Tutorial-The-Basics/148030" rel="nofollow noreferrer">https://replit.com/talk/learn/Python-Selenium-Tutorial-The-Basics/148030</a></p>
python|web-scraping|discord.py
1
1,908,560
71,056,638
Python get variables from AppleScript program
<p>I have recently been working with AppleScript, and was wondering if there is a way to get variables from AppleScript, and use them in Python. For example, say I have an AppleScript program called <code>test.scpt</code> and it has the following code in it:</p> <pre><code>set var1 to &quot;hello&quot; set var2 to &quot;hi&quot; </code></pre> <p>In my Python code, I want to do something (like print) with the values of those two variables:</p> <pre><code>print(var1) print(var2) </code></pre> <p>I looked at answers like <a href="https://stackoverflow.com/questions/24445991/getting-variable-from-applescript-and-using-in-python">this</a>, however most of the ones I found are years old. Is there a newer/better way to do this, and if not, what is the best way?</p> <p>EDIT: I have also tried using <code>osascript</code> and <code>os.system</code> together, however I cannot store the output as a variable, as to the best of my knowledge osascript just runs the code, but cannot store variables.</p>
<p>Options:</p> <ul> <li><p><a href="https://pypi.org/project/py-applescript/" rel="nofollow noreferrer">NSAppleScript</a></p> </li> <li><p><a href="http://appscript.sourceforge.net/asoc.html" rel="nofollow noreferrer">AppleScript-ObjC+PyObjC</a></p> </li> </ul>
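A third option, if you only need values back in Python, is to shell out to `osascript` and capture its stdout: `osascript` prints the result of the script's final expression, so you can `return` the variables you need and parse them on the Python side. A minimal sketch (the comma-joined return format is just an illustrative convention, not anything AppleScript requires, and `run_applescript` obviously only works on macOS):

```python
import subprocess

def run_applescript(script):
    """Run an AppleScript snippet via osascript (macOS only) and return
    whatever the script's final `return` statement printed on stdout."""
    result = subprocess.run(
        ["osascript", "-e", script],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def parse_osascript_output(raw):
    """Split osascript's comma-joined output back into Python strings."""
    return [field.strip() for field in raw.strip().split(",")]

script = '''
set var1 to "hello"
set var2 to "hi"
return var1 & "," & var2
'''

# On a Mac this would print ['hello', 'hi']:
# print(parse_osascript_output(run_applescript(script)))
```

This avoids the `os.system` limitation mentioned in the question: `os.system` only returns the exit code, while `subprocess.run(..., capture_output=True)` hands you the actual output.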
python|applescript
0
1,908,561
11,680,843
How to match exact phrase using xapian and python?
<p>This is my code:</p> <pre><code>db = xapian.Database(path/to/database) enquire = xapian.Enquire(db) stemmer = xapian.Stem(&lt;supported language&gt;) query_parser = xapian.QueryParser() query_parser.set_database(db) query_parser.set_stemmer(stemmer) query_parser.set_default_op(xapian.Query.OP_OR) xapian_flags = xapian.QueryParser.FLAG_BOOLEAN | xapian.QueryParser.FLAG_SYNONYM | xapian.QueryParser.FLAG_LOVEHATE query = query_parser.parse_query('"this exact phrase"', xapian_flags) enquire.set_query(query) </code></pre> <p>This isn't matching "this exact phrase" (I am able to achieve pretty much everything but exact matches). Note that I've included the double quotes mentioned in the documentation. Is there a way of achieving this?</p>
<p>By explicitly setting the flags to the query parser you override the default of <code>FLAG_PHRASE | FLAG_LOVEHATE | FLAG_BOOLEAN</code>. What you've done therefore is to turn on synonym support but turn off phrase searching, which is what the double quotes relies on.</p> <p>Note that phrase searching isn't strictly the same as exact matching, although without more context it's difficult to advise if this is the wrong approach to take for your situation.</p>
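Concretely, only the flags line in the question's code needs to change (a sketch against the question's existing `query_parser`; `FLAG_PHRASE` is part of the parser's defaults, but passing an explicit flag set replaces the defaults entirely):

```python
# FLAG_PHRASE is what makes double-quoted "..." queries parse as
# phrase searches; it was lost when the default flags were overridden.
xapian_flags = (xapian.QueryParser.FLAG_BOOLEAN
                | xapian.QueryParser.FLAG_PHRASE
                | xapian.QueryParser.FLAG_SYNONYM
                | xapian.QueryParser.FLAG_LOVEHATE)
query = query_parser.parse_query('"this exact phrase"', xapian_flags)
```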
python|xapian
1
1,908,562
33,577,001
pygame.midi.MidiException: 'Device id invalid, out of range.'
<p>I'm on Mac OSX Yosemite 10.10.5 and I'm trying to play midi sounds with pygame.midi with this code:</p> <pre><code>import pygame.midi pygame.midi.init() print pygame.midi.get_default_output_id() # -1 print pygame.midi.get_device_info(0) # None player = pygame.midi.Output(0) </code></pre> <p>And I get this output:</p> <pre><code>-1 None Traceback (most recent call last): File "midi.py", line 11, in &lt;module&gt; player = pygame.midi.Output(0) File "/Library/Python/2.7/site-packages/pygame/midi.py", line 414, in __init__ raise MidiException("Device id invalid, out of range.") pygame.midi.MidiException: 'Device id invalid, out of range.' </code></pre> <p>I've tried a bunch of different device id's (0-128) to look where my output speakers might be found, but I can't find anything. It's like my mac doesn't have an audio output, but it should.</p>
<p>The purpose of <code>pygame.midi</code> is to access MIDI devices.</p> <p>Speakers are not MIDI devices. Neither is anything else on your computer.</p> <p>If you want to ensure that your program can output MIDI data, you have to install a software synthesizer.</p>
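Once a software synthesizer is installed, you can enumerate what `pygame.midi` actually sees instead of guessing ids. The selection logic is plain Python; here it is sketched with a hypothetical helper name (`first_output_device` is not a pygame API, just an illustration):

```python
def first_output_device(device_infos):
    """Given pygame.midi.get_device_info()-style tuples
    (interface, name, is_input, is_output, opened) in device-id order,
    return the id of the first output device, or None if there is none."""
    for device_id, (interf, name, is_input, is_output, opened) in enumerate(device_infos):
        if is_output:
            return device_id
    return None

# With pygame installed this would be used roughly like (not run here):
# pygame.midi.init()
# infos = [pygame.midi.get_device_info(i) for i in range(pygame.midi.get_count())]
# device_id = first_output_device(infos)
# if device_id is not None:
#     player = pygame.midi.Output(device_id)
```

In the question's output, `get_default_output_id()` returning `-1` and `get_device_info(0)` returning `None` already indicate that the device list is empty, which is consistent with no MIDI synthesizer being present.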
python|macos|pygame|midi
0
1,908,563
33,794,719
adding a new contact information in txt file
<p>I have this long python code and I'm having trouble finishing or fixing it and I need help.</p> <p>First I have these codes - </p> <p>This will just display the menus and i have created several def functions. One is for creating data and saving to the txt file, and the other is to use a hash function to split the name. Contact info as data is created in the txt file. Finally, in a while loop I have to somehow call up the menu codes and this is where I get stuck, or I may need to fix the whole thing. Also when I put a phone number in like 555-5555, it makes an error. How would I input a number like this value?</p> <pre><code>def menu(): print("Contact List Menu:\n") print("1. Add a Contact") print("2. Display Contacts") print("3. Exit\n") menu() choice = int(input("What would you like to do?: ")) def data(): foo = open("foo.txt", "a+") name = input("enter name: ") number = int(input("enter the number: ")) foo.write(name + " " + str(number)) foo.close() def contact(): data = open("foo.txt") file = {} for person in data: (Id, number) = person.split() file[number] = Id data.close() while choice !=3: if choice == 1: print(data()) if choice ==2: print(data()) menu() choice = int(input("What would you like to do?: ")) </code></pre> <p>It seems that the program never stops and I have to use option 3 from the menu to exit the program.</p>
<p>A phone number like <code>555-5555</code> is not a valid integer, so keep it as text (read it with <code>input()</code> and skip the <code>int()</code> conversion).</p> <p>Inside <code>menu()</code> you call <code>menu()</code>, which calls <code>menu()</code>, etc. That is recursion. When you choose <code>3</code> you leave the last <code>menu()</code> and return to the previous <code>menu()</code>.</p> <hr> <p><strong>EDIT:</strong></p> <p>btw: you have to add <code>"\n"</code> in <code>write</code> </p> <pre><code>def menu(): print("Contact List Menu:\n") print("1. Add a Contact") print("2. Display Contacts") print("3. Exit\n") def data(): foo = open("foo.txt", "a+") name = input("enter name: ") number = input("enter the number: ") # keep as text, e.g. 555-5555 foo.write(name + " " + number + "\n") # new line foo.close() def contact(): data = open("foo.txt") for person in data: name, number = person.split() print(name, number) data.close() #---------------- menu() choice = int(input("What would you like to do?: ")) while choice !=3: if choice == 1: data() if choice == 2: contact() menu() choice = int(input("What would you like to do?: ")) </code></pre>
python|hash
3
1,908,564
33,547,965
Computing AUC and ROC curve from multi-class data in scikit-learn (sklearn)?
<p>I am trying to use the <code>scikit-learn</code> module to compute AUC and plot ROC curves for the output of three different classifiers to compare their performance. I am very new to this topic, and I am struggling to understand how the data I have should input to the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve" rel="noreferrer"><code>roc_curve</code></a> and <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.auc.html#sklearn.metrics.auc" rel="noreferrer"><code>auc</code></a> functions.</p> <p>For each item within the testing set, I have the true value and the output of each of the three classifiers. The classes are <code>['N', 'L', 'W', 'T']</code>. In addition, I have a confidence score for each value output from the classifiers. How do I pass this information to the roc_curve function?</p> <p>Do I need to <code>label_binarize</code> my input data? How do I convert a list of <code>[class, confidence]</code> pairs output by the classifiers into the <code>y_score</code> expected by <code>roc_curve</code>?</p> <p>Thank you for any help! Good resources about ROC curves would also be helpful.</p>
<p><strong>You need to use <code>label_binarize</code> function and then you can plot a multi-class ROC.</strong></p> <p><strong>Example using Iris data:</strong></p> <pre><code>import matplotlib.pyplot as plt from sklearn import svm, datasets from sklearn.model_selection import train_test_split from sklearn.preprocessing import label_binarize from sklearn.metrics import roc_curve, auc from sklearn.multiclass import OneVsRestClassifier from itertools import cycle plt.style.use('ggplot') iris = datasets.load_iris() X = iris.data y = iris.target # Binarize the output y = label_binarize(y, classes=[0, 1, 2]) n_classes = y.shape[1] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5, random_state=0) classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True, random_state=0)) y_score = classifier.fit(X_train, y_train).decision_function(X_test) fpr = dict() tpr = dict() roc_auc = dict() for i in range(n_classes): fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) colors = cycle(['blue', 'red', 'green']) for i, color in zip(range(n_classes), colors): plt.plot(fpr[i], tpr[i], color=color, lw=1.5, label='ROC curve of class {0} (area = {1:0.2f})' ''.format(i, roc_auc[i])) plt.plot([0, 1], [0, 1], 'k--', lw=1.5) plt.xlim([-0.05, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic for multi-class data') plt.legend(loc="lower right") plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/ExF13.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ExF13.png" alt="enter image description here"></a></p>
python|machine-learning|scikit-learn|roc|auc
11
1,908,565
46,812,286
tensorflow - CTC loss decreasing but decoder ouput empty
<p> I am using tensorflow's <code>ctc_cost</code> and <code>ctc_greedy_decoder</code>. When I train the model minimizing <code>ctc_cost</code>, the cost whent down, but when I decode it always out put nothing. Is there any reason this might happen? My code is as following.</p> <p>I am wondering if I preprocessed the data correctly. I am predicting sequnces of phones on given frames of fbank features. There are 48 phones (48 classes), and each frame has 69 features. I set <code>num_classes</code> to 49 so logits will have dimension <code>(max_time_steps, num_samples, 49)</code>. And for my sparse tensor, I have my values range from 0 to 47 (48 reserved for blank). I never added blanks to my data, I don't think I should? (Should I do anything like that??)</p> <p>When trained, the cost decreases after each iteration and epochs, but edit distance never decreased. In fact it stays at 1 because the decoder almost always predict and empty sequence. Is there anything I'm doing wrong?</p> <pre class="lang-py prettyprint-override"><code>graph = tf.Graph() with graph.as_default(): inputs = tf.placeholder(tf.float32, [None, None, num_features]) targets = tf.sparse_placeholder(tf.int32) seq_len = tf.placeholder(tf.int32, [None]) seq_len_t = tf.placeholder(tf.int32, [None]) cell = tf.contrib.rnn.LSTMCell(num_hidden) stack = tf.contrib.rnn.MultiRNNCell([cell] * num_layers) outputs, _ = tf.nn.dynamic_rnn(stack, inputs, seq_len, dtype=tf.float32) outputs, _ = tf.nn.dynamic_rnn(stack, inputs, seq_len, dtype=tf.float32) input_shape = tf.shape(inputs) outputs = tf.reshape(outputs, [-1, num_hidden]) W = tf.Variable(tf.truncated_normal([num_hidden, num_classes], stddev=0.1)) b = tf.Variable(tf.constant(0., shape=[num_classes])) logits = tf.matmul(outputs, W) + b logits = tf.reshape(logits, [input_shape[0], -1, num_classes]) logits = tf.transpose(logits, (1, 0, 2)) loss = tf.nn.ctc_loss(targets, logits, seq_len) cost = tf.reduce_mean(loss) decoded, log_probabilities = 
tf.nn.ctc_greedy_decoder(logits, seq_len, merge_repeated=True) optimizer = tf.train.MomentumOptimizer(initial_learning_rate, 0.1).minimize(cost) err = tf.reduce_mean(tf.edit_distance(tf.cast(decoded[0],tf.int32), targets)) saver = tf.train.Saver() with tf.Session(graph=graph) as session: X, Y, ids, seq_length, label_to_int, int_to_label = get_data('train') session.run(tf.global_variables_initializer()) print(seq_length) num_batches = len(X)//batch_size + 1 for epoch in range(epochs): print ('epoch'+str(epoch)) for batch in range(num_batches): input_X, target_input, seq_length_X = get_next_batch(batch,X, Y ,seq_length,batch_size) feed = {inputs: input_X , targets: target_input, seq_len: seq_length_X} print ('epoch'+str(epoch)) _, print_cost, print_er = session.run([optimizer, cost, err], feed_dict = feed) print('epoch '+ str(epoch)+' batch '+str(batch)+ ' cost: '+str(print_cost)+' er: '+str(print_er)) save_path = saver.save(session, '/tmp/model.ckpt') print('model saved') X_t, ids_t, seq_length_t = get_data('test') feed_t = {inputs: X_t, seq_len: seq_length_t} print(X.shape) print(X_t.shape) print(type(seq_length_t[0])) de, lo = session.run([decoded[0], log_probabilities],feed_dict = feed_t) with open('predict.pickle', 'wb') as f: pickle.dump((de, lo), f) </code></pre>
<p>I had the same problem and solved it by increasing the initial learning rate.</p> <p>In addition, outputting the LER (label error rate) on the validation set is necessary to check the progress of the training process.</p>
tensorflow
0
1,908,566
37,628,639
Python 3.5 simple way to play audio
<p>Okay, so for the last week or so I've been looking around for an easy way to play sound files (wav or mp3, doesn't matter).</p> <p>But a lot of the posts I found just did not work or were really old. Does anyone have a simple way to do this which is not outdated?</p> <p>I've tried winsound and pyaudio. pymedia and pygame don't even want to be installed.</p> <p>I'm so lost, because this is the thing that holds me back from finishing my code. Also I think this would help a lot of starting Python users.</p> <p>EDIT: I tried using the code suggested here <a href="https://stackoverflow.com/questions/6951046/pyaudio-help-play-a-file">pyaudio help play a file</a>. But this gives me a RIFF id error.</p>
<p>The <a href="https://people.csail.mit.edu/hubert/pyaudio/docs/#example-blocking-mode-audio-i-o" rel="nofollow">documentation</a> provides the code to play an audio file (.wav).</p> <p>Edited:</p> <pre><code>import pyaudio import wave import sys CHUNK = 1024 wf = wave.open("Path to audio file", 'rb') # instantiate PyAudio (1) p = pyaudio.PyAudio() # open stream (2) stream = p.open(format=p.get_format_from_width(wf.getsampwidth()), channels=wf.getnchannels(), rate=wf.getframerate(), output=True) # read data data = wf.readframes(CHUNK) # play stream (3) while len(data) &gt; 0: stream.write(data) data = wf.readframes(CHUNK) # stop stream (4) stream.stop_stream() stream.close() # close PyAudio (5) p.terminate() </code></pre>
python-3.x|audio
0
1,908,567
37,701,373
How can I rename multiple columns with numbers in python?
<p>I have a data frame with ~2400 columns and I would like to rename all the columns from <code>1</code> to <code>2400</code>.<br /> My current columns names are numbers and almost all of them are duplicated.</p> <p>I was trying something like that but it doesn't work :</p> <pre><code># An example import pandas as pd # Create an example dataframe data = {'Commander': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'], 'Date': ['2012, 02, 08', '2012, 02, 08', '2012, 02, 08', '2012, 02, 08', '2012, 02, 08'],'Score': [4, 24, 31, 2, 3]} df = pd.DataFrame(data, index = ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma']) ncol = len(df.columns) for col in df.columns : for i in range(ncol) : df.rename(columns={col: str(i)}, inplace=True) </code></pre> <p>Thank you in advance.</p>
<p>IIUC you can just do</p> <pre><code>df.columns = pd.Index(np.arange(1, len(df.columns) + 1)).astype(str) </code></pre> <p>So this just overwrites the columns with a new <code>Index</code> object generated from <code>np.arange</code> and we cast the dtype to <code>str</code> using <code>astype</code></p> <p>Example:</p> <pre><code>In [244]: df = pd.DataFrame(np.random.randn(4,4)) df.columns Out[244]: RangeIndex(start=0, stop=4, step=1) In [243]: df.columns = pd.Index(np.arange(1,len(df.columns)+1)).astype(str) df.columns Out[243]: Index(['1', '2', '3', '4'], dtype='object') </code></pre> <p>On your example:</p> <pre><code>In [245]: data = {'Commander': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'], 'Date': ['2012, 02, 08', '2012, 02, 08', '2012, 02, 08', '2012, 02, 08', '2012, 02, 08'],'Score': [4, 24, 31, 2, 3]} df = pd.DataFrame(data, index = ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma']) df.columns = pd.Index(np.arange(1,len(df.columns)+1)).astype(str) df.columns Out[245]: Index(['1', '2', '3'], dtype='object') </code></pre>
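A plain list assignment works just as well, and it sidesteps the pitfall in the question's loop: each `df.rename(columns={col: str(i)})` call inside the nested loops keeps re-renaming against the same single mapping, so the final names never come out as 1..N. A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'Commander': ['Jason'],
                   'Date': ['2012, 02, 08'],
                   'Score': [4]})

# Assign all column names at once instead of renaming one at a time.
df.columns = [str(i) for i in range(1, len(df.columns) + 1)]

print(list(df.columns))  # ['1', '2', '3']
```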
python|dictionary|pandas|rename|multiple-columns
1
1,908,568
30,041,469
UnicodeDecode error
<p>When querying on string data in a SQL database using pyodbc (specifically pypyodbc), I received this error:</p> <blockquote> <p>UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 56: invalid start byte</p> </blockquote>
<p>This issue was resolved by passing <code>unicode_results=True</code> into pypyodbc.connect. Reference documentation here: <a href="https://code.google.com/p/pyodbc/wiki/Module" rel="nofollow">https://code.google.com/p/pyodbc/wiki/Module</a></p>
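For context on why that particular byte blows up: 0x92 is not valid as the start of a UTF-8 sequence, but in Windows-1252 (a common encoding for SQL Server text columns) it is the right single quotation mark. So the underlying data was likely cp1252-encoded text being decoded as UTF-8, which a quick experiment reproduces:

```python
raw = b"it\x92s"  # cp1252-encoded "it's" with a curly apostrophe

try:
    raw.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)  # 'utf-8' codec can't decode byte 0x92 ...

print(raw.decode("cp1252"))  # decodes cleanly to it's (with U+2019)
```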
python|sql
0
1,908,569
29,963,173
Calling 'make' from Python inside Vagrant VM
<p>Okay, so this question is very project-specific, but it's a problem for me nonetheless.</p> <p>I have a Python/django website, hosted on localhost from an Ubuntu VM set up by Vagrant. From this website I want to paste in C code and compile it via a series of Python functions. In one of these functions I call <code>make</code> like this:</p> <pre><code>arg2 = os.path.join(Default_SDK_PATH, "examples/peripheral/blinky") arg4 = os.path.join(Default_SDK_PATH, "examples/peripheral/blinky/makefile") args = ' '.join(['make', '-C', arg2, '-f', arg4]) p = subprocess.Popen( args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True ) output, errors = p.communicate() p.wait() </code></pre> <p>I specify <code>arg2</code> and <code>arg4</code> more for testing than anything else - It's just to be 100% sure that the correct makefile is used.</p> <h3>### ### ###</h3> <p>OK! So my problem comes when the subprocess runs. The makefile is called with <code>make</code>, but failes. When I check the build log I can see the error message <code>arm-none-eabi-gcc: error: nano.specs: No such file or directory</code>.</p> <p>When I call <code>vagrant up</code> for the first time a file named bootstrap.sh is called. I've tried adding new commands to this file</p> <pre><code>sudo apt-get remove binutils-arm-none-eabi gcc-arm-none-eabi sudo add-apt-repository ppa:terry.guo/gcc-arm-embedded sudo apt-get update sudo apt-get install gcc-arm-none-eabi=4.9.3.2015q1-0trusty13 </code></pre> <p>to uninstall Ubuntu's original GCC and install the latest GCC toolchain. No success there either. I've also tried returning the whole filestructure to file just to check if the files in question exists, and they do!</p> <p>Can anyone point me in the right direction here? Thanks in advance.</p>
<p>Whoop-di-hoo, I solved it!</p> <p>I don't exactly know <em>why</em>, but <code>sudo apt-get remove binutils-arm-none-eabi gcc-arm-none-eabi</code> doesn't seem to do anything, so the original GCC-files still exist when I try to install the new GCC.</p> <p>Also, the new GCC is installed in <code>/usr/bin</code>, while the old GCC has it's own specified folder.</p> <p>So I edited my Makefile to get <code>arm-none-eabi-gcc-4.9.3</code> from <code>/usr/bin</code> instead of the old <code>arm-none-eabi-gcc</code>. <code>nano.specs</code> is now included, and life is great!</p>
python|ubuntu|gcc|makefile|vagrant
0
1,908,570
27,447,399
scrapy export empty csv
<p>My question is the following : scrapy export empty csv.</p> <p>My code structural shape : </p> <p>items.py :</p> <pre><code>import scrapy class BomnegocioItem(scrapy.Item): title = scrapy.Field() pass </code></pre> <p>pipelines.py :</p> <pre><code>class BomnegocioPipeline(object): def process_item(self, item, spider): return item </code></pre> <p>settings.py:</p> <pre><code>BOT_NAME = 'bomnegocio' SPIDER_MODULES = ['bomnegocio.spiders'] NEWSPIDER_MODULE = 'bomnegocio.spiders' LOG_ENABLED = True </code></pre> <p>bomnegocioSpider.py :</p> <pre><code>from scrapy.contrib.spiders import CrawlSpider, Rule from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor from bomnegocio.items import BomnegocioItem from scrapy.selector import HtmlXPathSelector from scrapy.http import Request from scrapy import log import csv import urllib2 class bomnegocioSpider(CrawlSpider): name = 'bomnegocio' allowed_domains = ["http://sp.bomnegocio.com/regiao-de-bauru-e-marilia/eletrodomesticos/fogao-industrial-itajobi-4-bocas-c-forno-54183713"] start_urls = [ "http://sp.bomnegocio.com/regiao-de-bauru-e-marilia/eletrodomesticos/fogao-industrial-itajobi-4-bocas-c-forno-54183713" ] rules = (Rule (SgmlLinkExtractor(allow=r'/fogao') , callback="parse_bomnegocio", follow= True), ) print "=====&gt; Start data extract ...." def parse_bomnegocio(self,response): #hxs = HtmlXPathSelector(response) #items = [] item = BomnegocioItem() item['title'] = response.xpath("//*[@id='ad_title']/text()").extract()[0] #items.append(item) return item print "=====&gt; Finish data extract." #//*[@id="ad_title"] </code></pre> <p>terminal :</p> <pre><code>$ scrapy crawl bomnegocio -o dataextract.csv -t csv =====&gt; Start data extract .... =====&gt; Finish data extract. 
2014-12-12 13:38:45-0200 [scrapy] INFO: Scrapy 0.24.4 started (bot: bomnegocio) 2014-12-12 13:38:45-0200 [scrapy] INFO: Optional features available: ssl, http11 2014-12-12 13:38:45-0200 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'bomnegocio.spiders', 'FEED_FORMAT': 'csv', 'SPIDER_MODULES': ['bomnegocio.spiders'], 'FEED_URI': 'dataextract.csv', 'BOT_NAME': 'bomnegocio'} 2014-12-12 13:38:45-0200 [scrapy] INFO: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState 2014-12-12 13:38:45-0200 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats 2014-12-12 13:38:45-0200 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware 2014-12-12 13:38:45-0200 [scrapy] INFO: Enabled item pipelines: 2014-12-12 13:38:45-0200 [bomnegocio] INFO: Spider opened 2014-12-12 13:38:45-0200 [bomnegocio] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2014-12-12 13:38:45-0200 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023 2014-12-12 13:38:45-0200 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080 2014-12-12 13:38:45-0200 [bomnegocio] DEBUG: Crawled (200) &lt;GET http://sp.bomnegocio.com/regiao-de-bauru-e-marilia/eletrodomesticos/fogao-industrial-itajobi-4-bocas-c-forno-54183713&gt; (referer: None) 2014-12-12 13:38:45-0200 [bomnegocio] DEBUG: Filtered offsite request to 'www.facebook.com': &lt;GET http://www.facebook.com/sharer.php?t=&amp;u=http%3A%2F%2Fsp.bomnegocio.com%2Fregiao-de-bauru-e-marilia%2Feletrodomesticos%2Ffogao-industrial-itajobi-4-bocas-c-forno-54183713&gt; 2014-12-12 13:38:45-0200 [bomnegocio] INFO: Closing spider (finished) 2014-12-12 13:38:45-0200 
[bomnegocio] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 308, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 8503, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2014, 12, 12, 15, 38, 45, 538024), 'log_count/DEBUG': 4, 'log_count/INFO': 7, 'offsite/domains': 1, 'offsite/filtered': 1, 'request_depth_max': 1, 'response_received_count': 1, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'start_time': datetime.datetime(2014, 12, 12, 15, 38, 45, 119067)} 2014-12-12 13:38:45-0200 [bomnegocio] INFO: Spider closed (finished) </code></pre> <p>Why ? </p> <p>===> 2014-12-12 13:38:45-0200 [bomnegocio] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)</p> <pre><code>$ nano dataextract.csv </code></pre> <p>Look's empty. =(</p> <p>I do some hypotheses :</p> <p>i) My crawl sentence provide wrong xpath ? I go to terminal and type</p> <pre><code>$ scrapy shell "http://sp.bomnegocio.com/regiao-de-bauru-e-marilia/eletrodomesticos/fogao-industrial-itajobi-4-bocas-c-forno-54183713" &gt;&gt;&gt; response.xpath("//*[@id='ad_title']/text()").extract()[0] u'\n\t\t\t\n\t\t\t\tFog\xe3o industrial itajobi 4 bocas c/ forno \n\t\t\t\t\t\n\t\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\t- ' </code></pre> <p>Answer : No, the problem is not in the xpath sentence</p> <p>ii) Mys "import" ? On the log view do not show "import"problems.</p> <p>Thank you for your attention and I now look forward to hearing your views.</p>
<p>There are a few issues with this spider:</p> <p>1) <code>allowed_domains</code> is meant to be used for domains, so you want to use:</p> <pre><code>allowed_domains = ["bomnegocio.com"] </code></pre> <p>2) The usage of the rules are not very adequate here, because they are meant for defining how the site should be crawled -- which links to follow. In this case, you don't need to follow any links, you just want to scrape the data directly from the URLs you're listing in <code>start_urls</code>, so I suggest you just get rid of the <code>rules</code> attribute, make the spider extend <code>scrapy.Spider</code> instead and scrape the data in the default callback <code>parse</code>:</p> <pre><code>from testing.items import BomnegocioItem import scrapy class bomnegocioSpider(scrapy.Spider): name = 'bomnegocio' allowed_domains = ["bomnegocio.com"] start_urls = [ "http://sp.bomnegocio.com/regiao-de-bauru-e-marilia/eletrodomesticos/fogao-industrial-itajobi-4-bocas-c-forno-54183713" ] def parse(self,response): print "=====&gt; Start data extract ...." yield BomnegocioItem( title=response.xpath("//*[@id='ad_title']/text()").extract()[0] ) print "=====&gt; Finish data extract." </code></pre> <p>Note also how the print statements are now inside the callback and the usage of <code>yield</code> instead of <code>return</code> (which allows you to generate several items from one page).</p>
python|csv|xpath|scrapy
0
1,908,571
65,813,263
How to suppress "WARNING:tensorflow:AutoGraph could not transform <bound method Layer.__call__ of ... >>"?
<p>I get this message every time I run tf.keras.Sequential().predict_on_batch:</p> <pre><code>WARNING: AutoGraph could not transform &lt;bound method Layer.__call__ of &lt;tensorflow.python.keras.engine.sequential.Sequential object at 0x000001F927581348&gt;&gt; and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: </code></pre> <p>It completely fills my console, and I can't find a way to suppress this.</p> <p>Python version: 3.7.7 Tensorflow version: 2.1.0</p>
<p>I've come across this in custom layers with for loops in them. I've been decorating the call method with <code>@tf.autograph.experimental.do_not_convert</code></p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>class DoNothing(tf.keras.layers.Layer): def __init__(self, **kwargs): super().__init__(**kwargs) def get_config(self): config = {} base_config = super().get_config() return {**base_config, **config} @tf.autograph.experimental.do_not_convert def call(self, args, **kwargs): return args # Yes, this layer really doesn't do anything </code></pre>
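If decorating every offending `call` is impractical, the warnings can also be silenced globally. This is a sketch under the assumption that the messages are emitted through TensorFlow's standard Python logger (raising its threshold is a common blunt instrument for TF log noise):

```python
import logging

# AutoGraph's "could not transform" messages are logged as warnings on
# the "tensorflow" logger; raising its threshold hides all warnings.
logging.getLogger("tensorflow").setLevel(logging.ERROR)

# With TensorFlow imported you could instead target AutoGraph directly
# (not run here):
# import tensorflow as tf
# tf.autograph.set_verbosity(0)
```

The logger approach also suppresses other TensorFlow warnings, so prefer the per-layer decorator when you only want to mute one layer.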
python|tensorflow
2
1,908,572
43,132,422
Could somebody tell me where the heck the IndexError is? (Python)
<p>I am having trouble figuring out where the IndexError in the code is so could somebody help me out</p> <p>Here's the code</p> <pre><code>def gap(g, m, n): def is_prime(n): for i in range(2, n): if n % i == 0: return False return True result = [] r = [] primes = [p for p in range(m, n) if is_prime(p)] for i in range(len(primes) - 1): if primes[i] - primes[i + 1] == g: r.append(primes[i - 1]) r.append(primes[i]) result.append(r[0]) result.append(r[1]) if result == []: return None else: return result print (gap(2, 100, 110)) </code></pre> <p>and here's the output</p> <pre><code>Traceback (most recent call last): File "C:\Users\Harry\Documents\randompythonprograms\editable.py", line 20, in &lt;module&gt; print (gap(2, 100, 110)) File "C:\Users\Harry\Documents\randompythonprograms\editable.py", line 14, in gap result.append(r[0]) IndexError: list index out of range </code></pre>
<p>The traceback points at this line:</p> <pre><code>result.append(r[0]) </code></pre> <p>It fails because <code>r</code> is still empty when that line runs: the condition <code>primes[i] - primes[i + 1] == g</code> is never true, since consecutive primes are increasing and that difference is always negative. You want <code>primes[i + 1] - primes[i] == g</code>, and the appends to <code>result</code> should only happen after a match was actually found. (Note also that <code>r.append(primes[i - 1])</code> should be <code>r.append(primes[i])</code>, or you will pair the wrong primes.)</p>
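A corrected version of the whole function, as a sketch (it compares each prime with the next one in ascending order and returns the first pair whose gap equals `g`):

```python
def gap(g, m, n):
    def is_prime(k):
        if k < 2:
            return False
        for i in range(2, int(k ** 0.5) + 1):
            if k % i == 0:
                return False
        return True

    primes = [p for p in range(m, n) if is_prime(p)]
    for a, b in zip(primes, primes[1:]):
        if b - a == g:  # later prime minus earlier prime, always >= 0
            return [a, b]
    return None

print(gap(2, 100, 110))  # [101, 103]
```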
python|python-3.x
0
1,908,573
43,158,350
Different error when I run the same tensorflow-r1.0 program twice
<p>At the first time I run my program and I got the error:</p> <blockquote> <p>tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value noise_z/0__mnn/bias1</p> </blockquote> <p>But when I run it again, the error became: </p> <blockquote> <p>tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value noise_z/1__mnn/0weight_</p> </blockquote> <p>Please note that the variable name is different. It is very annoying to debug. I wonder why this happens and how can I fix it?</p> <p>Here is the code involved with the error:</p> <pre><code>with tf.variable_scope('noise_z'): for noise_idx in range(num_noise): noise = gaussian_sampler(mu_noise, var_noise, 1) noise_vec = multi_layer_nn(noise, [dim_noise, 64, embedding_size], name=str(noise_idx)+'_') noise_vecs.append(noise_vec) def fully_con_layer(input_, fan_in, fan_out, name, initializer=tf.orthogonal_initializer()): w = tf.get_variable(name+'_weight_', shape=[fan_in, fan_out], initializer=initializer) b = tf.get_variable('bias'+name, [fan_out], initializer=tf.random_uniform_initializer()) return tf.nn.sigmoid(tf.matmul(input_, w)+b) def multi_layer_nn(input_, num_unit_each_layer, name, initializer=tf.orthogonal_initializer()): x = input_ num_layer = len(num_unit_each_layer)-1 for layer in range(num_layer): with tf.variable_scope(name+'_'+"mnn"): x = fully_con_layer(x, num_unit_each_layer[layer], num_unit_each_layer[layer+1], str(layer)) return x </code></pre>
<p>If you run <code>tf.global_variables_initializer()</code> and <code>sess.run(init_op)</code> before you call the function (as you say you do in your comment), the variables defined in the function will not be initialized. You have to run <code>sess.run(init_op)</code> after all variables are defined.</p>
python|tensorflow
1
1,908,574
43,135,244
Convert ascii to hex in Python 3
<p>I am trying to convert ASCII to hex values, but my script sometimes works and sometimes does not, and I wonder why. My code is below:</p> <p>ascii_ch = B13515068</p> <pre><code>for i in range(50): #In excel I have 50 row ascii_ch = sheet['C%s'%(i+2)].value #after C2, convert hex ascii_to_hex= "".join("{:02x}".format(ord(c)) for c in ascii_ch ) sheet['I%s'%(i+2)] = ascii_to_hex wb.save('a.xlsx') </code></pre> <p>I want ascii_to_hex = 423133353135303638</p> <p>Sometimes the code works properly, but generally I get an error like the one below; <a href="https://i.stack.imgur.com/8ii3w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8ii3w.png" alt="enter image description here"></a></p>
<p>It looks like not all cells actually have values associated with them. When a cell has no value, <code>ascii_ch = sheet['C%s'%(i+2)].value</code> will set <code>ascii_ch</code> to <code>None</code>. In the next line, you iterate over <code>ascii_ch</code>. But it does not make any sense to iterate over <code>None</code>!</p> <p>You probably want to check for that, like this:</p> <pre><code>for i in range(50): #In excel I have 50 row
    ascii_ch = sheet['C%s'%(i+2)].value #after C2, convert hex
    if ascii_ch is None:
        # Maybe warn the user that a value is missing?
        continue # go on to the next cell
</code></pre>
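<p>A self-contained sketch of the same None-guarded conversion, using a plain list as a stand-in for the spreadsheet column so it runs without openpyxl:</p>

```python
cells = ["B13515068", None, "abc"]  # stand-in for the spreadsheet column values

hex_values = []
for ascii_ch in cells:
    if ascii_ch is None:
        continue  # skip empty cells instead of crashing on ord(None)
    hex_values.append("".join("{:02x}".format(ord(c)) for c in ascii_ch))

print(hex_values[0])  # 423133353135303638
```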
python|string|python-3.x|hex|ascii
1
1,908,575
48,649,645
QTableView model using QComboBox
<p>In this example below I have a simple QTableView which is populated using an AbstractModel. Each row in the table displays information related to a class object called Asset. It has a property called Items that contains a list of strings. I want to know how can i populate the QTableView with a combobox displaying this list of strings for each row.</p> <p>Secondly when a user changes the item selected in the dropdown, i would like to trigger an event so i can later use it to properly change the color of the colored dot to green or red depending on the object's property called 'Status'</p> <p>The status would indicate if the Current Version (meaning the latest item in the dropdown list) is the chosen item. If its the last item in the list, meaning the latest item, it would be green, otherwise it's red. </p> <p>The property 'Active' indicates which item in the dropdownlist is currently selected.</p> <p>If the status is 0 then it's out dated and if the status is 1 that means the latest version in the dropdownlist is being used.</p> <p><a href="https://i.stack.imgur.com/F9H13.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F9H13.png" alt="enter image description here"></a></p> <pre><code>import sys
from PySide import QtGui, QtCore

class Asset(object):
    def __init__(self, name, items=None, status=0, active=0):
        self._status = 0
        self._name = ''
        self._items = []
        self._active = active

        self.name = name
        self.items = items if items != None else []
        self.status = status

class AssetModel(QtCore.QAbstractTableModel):
    attr = ["Name", "Options"]

    def __init__(self, *args, **kwargs):
        QtCore.QAbstractTableModel.__init__(self, *args, **kwargs)
        self._items = []

    def clear(self):
        self._items = []
        self.reset()

    def rowCount(self, index=QtCore.QModelIndex()):
        return len(self._items)

    def columnCount(self, index=QtCore.QModelIndex()):
        return len(self.attr)

    def addItem(self, sbsFileObject):
        self.beginInsertRows(QtCore.QModelIndex(), self.rowCount(), self.rowCount())
        self._items.append(sbsFileObject)
        self.endInsertRows()

    def headerData(self, section, orientation, role=QtCore.Qt.DisplayRole):
        if orientation == QtCore.Qt.Horizontal and role == QtCore.Qt.DisplayRole:
            return AssetModel.attr[section]
        return QtCore.QAbstractTableModel.headerData(self, section, orientation, role)

    def getItem(self, index):
        row = index.row()
        if index.isValid() and 0 &lt;= row &lt; self.rowCount():
            return index.data(role=QtCore.Qt.UserRole)
        return None

    def getSelectedItems(self, selection):
        objs = []
        for i, index in enumerate(selection):
            item = self.getItem(index)
            objs.append(item)
        return objs

    def data(self, index, role=QtCore.Qt.DisplayRole):
        if not index.isValid():
            return None
        if 0 &lt;= index.row() &lt; self.rowCount():
            item = self._items[index.row()]
            col = index.column()
            if 0 &lt;= col &lt; self.columnCount():
                if role == QtCore.Qt.DisplayRole:
                    if col == 0:
                        return getattr(item, 'name', '')
                    if col == 1:
                        return (getattr(item, 'items', []))
                elif role == QtCore.Qt.UserRole:
                    if col == 0:
                        return item
                elif role == QtCore.Qt.DecorationRole:
                    if col == 0:
                        status = getattr(item, 'status', 0)
                        col = QtGui.QColor(255,0,0,255)
                        if status == 1:
                            col = QtGui.QColor(255,128,0,255)
                        elif status == 2:
                            col = QtGui.QColor(255,255,0,255)
                        px = QtGui.QPixmap(120,120)
                        px.fill(QtCore.Qt.transparent)
                        painter = QtGui.QPainter(px)
                        painter.setRenderHint(QtGui.QPainter.Antialiasing)
                        px_size = px.rect().adjusted(12,12,-12,-12)
                        painter.setBrush(col)
                        painter.setPen(QtGui.QPen(QtCore.Qt.black, 4, QtCore.Qt.SolidLine, QtCore.Qt.RoundCap, QtCore.Qt.RoundJoin))
                        painter.drawEllipse(px_size)
                        painter.end()
                        return QtGui.QIcon(px)

class Example(QtGui.QWidget):
    def __init__(self):
        super(Example, self).__init__()
        self.resize(400,300)

        # controls
        asset_model = QtGui.QSortFilterProxyModel()
        asset_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive)
        asset_model.setSourceModel(AssetModel())

        self.ui_assets = QtGui.QTableView()
        self.ui_assets.setEditTriggers(QtGui.QAbstractItemView.NoEditTriggers)
        self.ui_assets.setModel(asset_model)
        self.ui_assets.verticalHeader().hide()

        main_layout = QtGui.QVBoxLayout()
        main_layout.addWidget(self.ui_assets)
        self.setLayout(main_layout)

        self.unit_test()

    def unit_test(self):
        assets = [
            Asset('Doug', ['v01', 'v02', 'v03'], 0),
            Asset('Amy', ['v10', 'v11', 'v13'], 1),
            Asset('Kevin', ['v11', 'v22', 'v53'], 2),
            Asset('Leslie', ['v13', 'v21', 'v23'], 0)
        ]
        self.ui_assets.model().sourceModel().clear()
        for i, obj in enumerate(assets):
            self.ui_assets.model().sourceModel().addItem(obj)

def main():
    app = QtGui.QApplication(sys.argv)
    ex = Example()
    ex.show()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
</code></pre>
<p>You have 2 tasks:</p> <ul> <li>Make your model editable because when using the combobox you must edit the values, in addition you must implement new roles to access all the properties of Asset, for it modify the class Asset:</li> </ul> <hr> <pre><code>class Asset(object):
    def __init__(self, name, items=[], active=0):
        self.active = active
        self.name = name
        self.items = items

    @property
    def status(self):
        return self.active == len(self.items) - 1
</code></pre> <p>To make an editable model, you must implement the <code>setData()</code> method and enable the <code>Qt.ItemIsEditable</code> flag:</p> <pre><code>class AssetModel(QtCore.QAbstractTableModel):
    attr = ["Name", "Options"]
    ItemsRole = QtCore.Qt.UserRole + 1
    ActiveRole = QtCore.Qt.UserRole + 2

    def __init__(self, *args, **kwargs):
        QtCore.QAbstractTableModel.__init__(self, *args, **kwargs)
        self._items = []

    def flags(self, index):
        fl = QtCore.QAbstractTableModel.flags(self, index)
        if index.column() == 1:
            fl |= QtCore.Qt.ItemIsEditable
        return fl

    def clear(self):
        self.beginResetModel()
        self._items = []
        self.endResetModel()

    def rowCount(self, index=QtCore.QModelIndex()):
        return len(self._items)

    def columnCount(self, index=QtCore.QModelIndex()):
        return len(self.attr)

    def addItem(self, sbsFileObject):
        self.beginInsertRows(QtCore.QModelIndex(), self.rowCount(), self.rowCount())
        self._items.append(sbsFileObject)
        self.endInsertRows()

    def headerData(self, section, orientation, role=QtCore.Qt.DisplayRole):
        if orientation == QtCore.Qt.Horizontal and role == QtCore.Qt.DisplayRole:
            return AssetModel.attr[section]
        return QtCore.QAbstractTableModel.headerData(self, section, orientation, role)

    def data(self, index, role=QtCore.Qt.DisplayRole):
        if not index.isValid():
            return None
        if 0 &lt;= index.row() &lt; self.rowCount():
            item = self._items[index.row()]
            col = index.column()
            if role == AssetModel.ItemsRole:
                return getattr(item, 'items')
            if role == AssetModel.ActiveRole:
                return getattr(item, 'active')
            if 0 &lt;= col &lt; self.columnCount():
                if role == QtCore.Qt.DisplayRole:
                    if col == 0:
                        return getattr(item, 'name', '')
                    if col == 1:
                        return getattr(item, 'items')[getattr(item, 'active')]
                elif role == QtCore.Qt.DecorationRole:
                    if col == 0:
                        status = getattr(item, 'status')
                        col = QtGui.QColor(QtCore.Qt.red) if status else QtGui.QColor(QtCore.Qt.green)
                        px = QtGui.QPixmap(120, 120)
                        px.fill(QtCore.Qt.transparent)
                        painter = QtGui.QPainter(px)
                        painter.setRenderHint(QtGui.QPainter.Antialiasing)
                        px_size = px.rect().adjusted(12, 12, -12, -12)
                        painter.setBrush(col)
                        painter.setPen(QtGui.QPen(QtCore.Qt.black, 4, QtCore.Qt.SolidLine, QtCore.Qt.RoundCap, QtCore.Qt.RoundJoin))
                        painter.drawEllipse(px_size)
                        painter.end()
                        return QtGui.QIcon(px)

    def setData(self, index, value, role=QtCore.Qt.EditRole):
        if 0 &lt;= index.row() &lt; self.rowCount():
            item = self._items[index.row()]
            if role == AssetModel.ActiveRole:
                setattr(item, 'active', value)
                return True
        return QtCore.QAbstractTableModel.setData(self, index, value, role)
</code></pre> <ul> <li>Use a delegate, for it you must overwrite the methods <code>createEditor()</code>, <code>setEditorData()</code> and <code>setModelData()</code> where we created the <code>QComboBox</code>, updated the selection of the <code>QComboBox</code> with the information of the model, and updated the model with the selection of the <code>QComboBox</code>. We also use <code>paint()</code> to make the <code>QComboBox</code> persistent.</li> </ul> <hr> <pre><code>class AssetDelegate(QtGui.QStyledItemDelegate):
    def paint(self, painter, option, index):
        if isinstance(self.parent(), QtGui.QAbstractItemView):
            self.parent().openPersistentEditor(index)
        QtGui.QStyledItemDelegate.paint(self, painter, option, index)

    def createEditor(self, parent, option, index):
        combobox = QtGui.QComboBox(parent)
        combobox.addItems(index.data(AssetModel.ItemsRole))
        combobox.currentIndexChanged.connect(self.onCurrentIndexChanged)
        return combobox

    def onCurrentIndexChanged(self, ix):
        editor = self.sender()
        self.commitData.emit(editor)
        self.closeEditor.emit(editor, QtGui.QAbstractItemDelegate.NoHint)

    def setEditorData(self, editor, index):
        ix = index.data(AssetModel.ActiveRole)
        editor.setCurrentIndex(ix)

    def setModelData(self, editor, model, index):
        ix = editor.currentIndex()
        model.setData(index, ix, AssetModel.ActiveRole)
</code></pre> <p>Then we establish the delegate and pass it as a parent to the QTableView so that it can be persisted automatically:</p> <pre><code>self.ui_assets.setItemDelegateForColumn(1, AssetDelegate(self.ui_assets))
</code></pre> <p>The complete code can be found at the following <a href="https://gist.github.com/eyllanesc/5170fb413e38951ec1ab9f6886b77e80" rel="nofollow noreferrer">link</a>.</p> <p><a href="https://i.stack.imgur.com/smARh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/smARh.png" alt="enter image description here"></a></p>
python|pyside|qtableview|qcombobox
3
1,908,576
67,144,885
Open a .csv after using Dataframe.to_csv Python
<p>Is there a way to open a .csv file right after using Dataframe.to_csv?</p> <p>Currently, I am using os.startfile to open the .csv file in a folder (search for .csv file and open it) - but I want to open the specific .csv I just created using df.to_csv.</p> <p>Here is my current code using os.startfile:</p> <pre><code>dirName3 = r&quot;\\xx\xx\SourceFolder&quot;
fn2 = [f2 for f2 in os.listdir(dirName3)\
       if f2.endswith('.csv') and os.path.isfile(os.path.join(dirName3, f2))][0]
path3 = os.path.join(dirName3, fn2)
open1 = os.startfile(path3)
</code></pre> <p>The above code will open the .csv file I've created but only if it is top of the folder. So if there are others in the folder it may not be at the top and may open a different file.</p> <p><strong>I also can't specify an absolute path because the .csv name (using df.to_csv) will change day to day based on user input. I also won't be able to search by date because there may be multiple files from the same day in the folder.</strong></p> <p>Any help appreciated.</p>
<p>Answering my own question after discussion with others in comments above.</p> <p>Came up with this to solve the problem:</p> <pre><code>import os

dirName3 = r&quot;\\xx\xx\Source Folder&quot;
fn2 = [f2 for f2 in os.listdir(dirName3)\
       if f2.endswith(str(datetime.now().strftime('%d_%m_%y_')) + Qname1 + '.csv')
       and os.path.isfile(os.path.join(dirName3, f2))][0]
path3 = os.path.join(dirName3, fn2)
open1 = os.startfile(path3)
</code></pre> <p>Instead of matching plain '.csv' with f2.endswith as in my original above, I matched against the same information I used to write the csv (with the to_csv function, which isn't included here). This really only works because I have included a date stamp in the file names - the Qname1 (user input) can be similar on different days, so I need the date to differentiate between files.</p> <p>Cheers stackoverflow.</p>
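<p>Another option, if the goal is simply &quot;open the file I just wrote&quot;, is to take the most recently modified .csv in the folder. This is a sketch with an assumption: nothing else writes a newer .csv to the folder in between. The temp directory and fake files below only make the example self-contained:</p>

```python
import glob
import os
import tempfile
import time

folder = tempfile.mkdtemp()  # stand-in for the real source folder
now = time.time()
for offset, name in enumerate(("old.csv", "new.csv")):
    path = os.path.join(folder, name)
    with open(path, "w") as f:
        f.write("x\n")
    os.utime(path, (now + offset, now + offset))  # give each file a distinct mtime

# Pick the newest .csv by modification time
newest = max(glob.glob(os.path.join(folder, "*.csv")), key=os.path.getmtime)
```

<p>On Windows the resulting path can then be handed to <code>os.startfile()</code>.</p>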
python|pandas|csv
0
1,908,577
4,221,475
Python Numpy nan_to_num help
<p>I'm having some trouble with Numpy's nan_to_num function: I've got an array where the last column contains fewer elements than the other columns, and when I import it into Python, it places 'nan's to fill out the array. That's just fine until I need to do some other things that get tripped-up by the 'nan's.</p> <p>I'm trying to use 'nan_to_num' with no success. It's likely a small thing I'm missing, but I can't figure it out.</p> <p>Here's some simple input and output:</p> <p>input:</p> <pre><code>a = numpy.array([[1, nan, 3]])
print a
numpy.nan_to_num(a)
print a
</code></pre> <p>output</p> <pre><code>[[ 1. nan 3.]]
[[ 1. nan 3.]]
</code></pre> <p>The second 'nan' should be a zero... <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.nan_to_num.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.nan_to_num.html</a></p> <p>Thanks in advance.</p>
<p>You haven't changed the value of <code>a</code>. Try this:</p> <pre><code>a = numpy.array([[1, nan, 3]])
a = numpy.nan_to_num(a)
print a
</code></pre>
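<p>The same point in Python 3 / current NumPy — <code>nan_to_num</code> returns a new array, so the result must be assigned back (newer NumPy versions also accept a <code>copy=False</code> argument to modify the input in place):</p>

```python
import numpy as np

a = np.array([[1, np.nan, 3]])
a = np.nan_to_num(a)  # assign the result back; the input is not modified by default
print(a)  # [[1. 0. 3.]]
```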
python|function|numpy
4
1,908,578
4,109,436
Processing multiple values for one single option using getopt/optparse?
<p>Is it possible to fetch multiple values for one option using getopt or optparse, as shown in the example below:</p> <pre><code>./hello_world -c arg1 arg2 arg3 -b arg4 arg5 arg6 arg7 </code></pre> <p>Please note that the number of actual values for each option (-c, -b) could be either 1 or 100. I do not want to use: <code>./hello_world -c "arg1 arg2 arg3" -b "arg4 arg5 arg6 arg7"</code></p> <p>It seems to me that this may not be possible (and perhaps in violation of POSIX), please correct me if I'm wrong.</p> <p>I've seen examples where all the non-options at the end of the line (<code>./hello_world -c arg1 -b arg1 arg2 arg3</code>) can be gathered... but not for the first of multiple option.</p> <p>I'd like my app to work on a wide range of platforms with different Python versions, so I've not looked at argparser.</p>
<p>Yes, it can be done with optparse.</p> <p>This is an example:</p> <pre><code>./test.py --categories=aaa --categories=bbb --categories ccc arg1 arg2 arg3
</code></pre> <p>which prints:</p> <pre><code>arguments: ['arg1', 'arg2', 'arg3']
options: {'categories': ['aaa', 'bbb', 'ccc']}
</code></pre> <p>Full working example below:</p> <pre><code>#!/usr/bin/env python
import os, sys
from optparse import OptionParser
from optparse import Option, OptionValueError

VERSION = '0.9.4'

class MultipleOption(Option):
    ACTIONS = Option.ACTIONS + ("extend",)
    STORE_ACTIONS = Option.STORE_ACTIONS + ("extend",)
    TYPED_ACTIONS = Option.TYPED_ACTIONS + ("extend",)
    ALWAYS_TYPED_ACTIONS = Option.ALWAYS_TYPED_ACTIONS + ("extend",)

    def take_action(self, action, dest, opt, value, values, parser):
        if action == "extend":
            values.ensure_value(dest, []).append(value)
        else:
            Option.take_action(self, action, dest, opt, value, values, parser)

def main():
    PROG = os.path.basename(os.path.splitext(__file__)[0])
    long_commands = ('categories')
    short_commands = {'cat':'categories'}
    description = """Just a test"""

    parser = OptionParser(option_class=MultipleOption,
                          usage='usage: %prog [OPTIONS] COMMAND [BLOG_FILE]',
                          version='%s %s' % (PROG, VERSION),
                          description=description)
    parser.add_option('-c', '--categories',
                      action="extend", type="string",
                      dest='categories', metavar='CATEGORIES',
                      help='comma separated list of post categories')

    if len(sys.argv) == 1:
        parser.parse_args(['--help'])

    OPTIONS, args = parser.parse_args()
    print "arguments:", args
    print "options:", OPTIONS

if __name__ == '__main__':
    main()
</code></pre> <p>More information at <a href="http://docs.python.org/library/optparse.html#adding-new-actions">http://docs.python.org/library/optparse.html#adding-new-actions</a></p>
python|getopt
18
1,908,579
48,387,950
TypeError: 'GoodsCategory' object does not support indexing
<p>goods.py</p>

<pre><code>class Goods(models.Model):
    category = models.ForeignKey(GoodsCategory, verbose_name='xxx')
    goods_sn = models.CharField(default='', max_length=50, verbose_name='xxx')
    name = models.CharField(max_length=300, verbose_name='xxx')
    click_num = models.IntegerField(default=0, verbose_name='xxx')
    sold_num = models.IntegerField(default=0, verbose_name='xxx')
</code></pre>

<p>import_goods_data.py</p>

<pre><code>from apps.goods.models import Goods, GoodsCategory, GoodsImage
from db_tools.data.product_data import row_data

for goods_detail in row_data:
    goods = Goods()
    goods.name = goods_detail['name']
    goods.market_price = float(int(goods_detail['market_price'].replace('¥', '').replace('&amp;', '')))
    goods.shop_price = float(int(goods_detail['sale_price'].replace('&amp;', '').replace('$', '')))
    goods.goods_brief = goods_detail['desc'] if goods_detail['desc'] is not None else ''
    goods_goods_desc = goods_detail['goods_desc'] if goods_detail['goods_desc'] is not None else ''
    goods.goods_front_image = goods_detail['images'][0] if goods_detail['images'] is not None else ''

    category_name = goods_detail['categorys'][-1]
    category = GoodsCategory.objects.filter(name=category_name)
    if category:
        goods.category = category[0]
    goods.save()
</code></pre>

<p>Because I got a error.So let me try to write it this way:</p>

<pre><code>categories = GoodsCategory.objects.filter(name=category_name)
if categories.exists():
    category = categories[0]
else:
    category = GoodsCategory.objects.create(name=category_name)
goods.category = category[0]
goods.save()
</code></pre>

<p>But I had another error. <strong>TypeError: 'GoodsCategory' object does not support indexing</strong></p>
<p>No need to index the category object.</p> <p>In the if branch you already indexed the QuerySet and got the category object; in the else branch you create a new object with <code>create()</code>, which already returns an instance.</p> <p>So try the following:</p> <pre><code>categories = GoodsCategory.objects.filter(name=category_name)  # You get a QuerySet here
if categories.exists():
    category = categories[0]  # Okay to index a QuerySet when something exists in it; you get the category object when you index it
else:
    category = GoodsCategory.objects.create(name=category_name)  # You get a GoodsCategory object here; cannot index this and no need to, since you already have the object
goods.category = category
goods.save()
</code></pre>
python|django
1
1,908,580
64,560,278
trim json based on matching keys using python
<p>I have a JSON in which I am trying to trim if a certain criteria is matched. Below is my JSON:</p> <pre><code>{'Items': [ {'type': 'track', 'event': 'flag', 'properties': {'old': 'ABC11001' }, 'options': {'target': 'unflag' }, 'userId': None, 'anonymousId': 'c7ccc67e-f7d4-4198-9cef-7d6895c1bd3b', 'meta': {'timestamp': 1603043772959 }, '_': {'originalAction': 'track', 'called': 'track', 'from': 'engineEnd' }, 'traits': {'lfid': 'foobar' }, 'id': '40cfb8a0-116b-11eb-9635-df40a485d353', 'id': 'footer', 'partner_resid': '934591a0-e05d-dd62-a55a-sh1735ec3981' }, {'type': 'track', 'event': 'next', 'properties': {'old': 'ABC110023', 'new': 'ABC110026' }, 'options': {'target': 'nextTop' }, 'userId': None, 'anonymousId': 'c7ccc67e-f7d4-4198-9cef-7d6895c1bd3b', 'meta': {'timestamp': 1603043943118 }, '_': {'originalAction': 'track', 'called': 'track', 'from': 'engineEnd' }, 'traits': {'lfid': 'foobar' }, 'id': 'a63ab410-116b-11eb-83a1-99c63d9410bc', 'lfid': 'foobar', 'partner_resid': '934591a0-e05d-dd62-a55a-sh1735ec3981' }, {'type': 'track', 'event': 'flag', 'properties': {'old': 'ABC110099' }, 'options': {'target': 'unflag' }, 'userId': None, 'anonymousId': 'c7ccc67e-f7d4-4198-9cef-7d6895c1bd3b', 'meta': {'timestamp': 1603043137542 }, '_': {'originalAction': 'track', 'called': 'track', 'from': 'engineEnd' }, 'traits': {'lfid': 'foobar' }, 'id': 'c6116880-1169-11eb-9880-c39a5007644a', 'lfid': 'foobar', 'partner_resid': '934591a0-e05d-dd62-a55a-sh1735ec3981' }, {'type': 'track', 'event': 'flag', 'properties': {'new': 'ABC002234' }, 'options': {'target': 'flag' }, 'userId': None, 'anonymousId': 'c7ccc67e-f7d4-4198-9cef-7d6895c1bd3b', 'meta': {'timestamp': 1603042870105 }, '_': {'originalAction': 'track', 'called': 'track', 'from': 'engineEnd' }, 'traits': {'lfid': 'foobar' }, 'id': '26a94d80-1169-11eb-9880-c39a5007644a', 'lfid': 'foobar', 'partner_resid': '934591a0-e05d-dd62-a55a-sh1735ec3981' }, {'type': 'track', 'event': 'active', 'properties': {'new': 'ABC883322' }, 
'options': {'target': 'bottomNext' }, 'userId': None, 'anonymousId': 'c7ccc67e-f7d4-4198-9cef-7d6895c1bd3b', 'meta': {'timestamp': 1603037276643 }, '_': {'originalAction': 'track', 'called': 'track', 'from': 'engineEnd' }, 'traits': {'lfid': 'foobar' }, 'id': '20b1aab0-115c-11eb-9e18-b5713e730a08', 'lfid': 'foobar', 'partner_resid': '958791a0-e05d-4e01-a55a-da48e3ec3981' }, } </code></pre> <p>Here I am trying to filter for <code>event</code> equal to <code>active</code> and <code>properties</code> containing only <code>new</code>, but failing to produce that. Below is my code:</p> <pre><code>for idx, elem in enumerate(json_result['Items']):
    if json_result.get(&quot;Items&quot;).get(&quot;elem&quot;).properties.get(&quot;old&quot;) in elem.keys():
        del json_result['Items'][idx]
</code></pre> <p>Not sure where I am going wrong.</p> <p><code>result_json</code> should contain only:</p> <pre><code> {'type': 'track', 'event': 'active', 'properties': {'new': 'ABC883322' }, 'options': {'target': 'bottomNext' }, 'userId': None, 'anonymousId': 'c7ccc67e-f7d4-4198-9cef-7d6895c1bd3b', 'meta': {'timestamp': 1603037276643 }, '_': {'originalAction': 'track', 'called': 'track', 'from': 'engineEnd' }, 'traits': {'lfid': 'foobar' }, 'id': '20b1aab0-115c-11eb-9e18-b5713e730a08', 'lfid': 'foobar', 'partner_resid': '958791a0-e05d-4e01-a55a-da48e3ec3981'} </code></pre>
<p>You do have a mistake in your json (missing closing list bracket) --&gt; if I understand correctly, you want to filter only those objects from the json list that have <code>event=active</code> and the key <code>&quot;new&quot;</code> in <code>properties</code>. Why not create a new result list and store only the matching objects in it:</p>
<pre class="lang-py prettyprint-override"><code>items = dct[&quot;Items&quot;]
result = []
for obj in items:
    if obj[&quot;event&quot;] == &quot;active&quot; and &quot;new&quot; in obj[&quot;properties&quot;]:
        result.append(obj)
</code></pre>
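<p>The same filter can also be written as a list comprehension — a runnable sketch with a trimmed-down stand-in for <code>dct[&quot;Items&quot;]</code>:</p>

```python
items = [
    {"event": "flag", "properties": {"old": "ABC11001"}},
    {"event": "active", "properties": {"new": "ABC883322"}},
]  # trimmed stand-in for dct["Items"]

result = [
    obj for obj in items
    if obj["event"] == "active" and "new" in obj["properties"]
]
```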
python|json
1
1,908,581
72,875,063
HTMX websockets extension doesn't connect to Django channels
<p>I try to connect a Django channels Consumer to a HTMX <code>ext-ws</code> element, but I can't get a step further.</p> <pre class="lang-py prettyprint-override"><code>class MessageConsumer(WebsocketConsumer):
    def connect(self):
        self.accept()
        print(&quot;connect&quot;)
        #self.send(
        #    &quot;type&quot;: &quot;websocket.send&quot;,
        #    &quot;text&quot;: &quot;...&quot;
        #)
</code></pre> <pre class="lang-html prettyprint-override"><code>...
&lt;head&gt;
    ...
    &lt;script src=&quot;{% static 'common/js/htmx/htmx.min.js' %}&quot; defer&gt;&lt;/script&gt;
    &lt;script src=&quot;{% static 'common/js/htmx/ext/ws.js' %}&quot; defer&gt;&lt;/script&gt;
    ...
&lt;/head&gt;
...
</code></pre> <p>The HTMX.js and the ws.js gets loaded correctly at the client's browser.</p> <pre><code>&lt;div id=&quot;messages-container&quot;
     hx-ws=&quot;connect:/ws/messages/&quot;
     {# hx-ext=&quot;ws&quot; ws-connect=&quot;/ws/messages/&quot; does not work at all #}
&gt;
    &lt;div id=&quot;message&quot;&gt;&lt;/div&gt;
&lt;/div&gt;
</code></pre> <p>If I use the old HTMX-builtin <code>hx-ws</code> method, at least the websocket connects. But I can't get a message to the HTMX element (I thought the <code>#message</code> div should be replaced).</p> <p>If I use the new-style (HTMX extension) syntax (<code>hx-ext=&quot;ws&quot; ws-connect...</code>), it does not work at all.</p> <p>Can anyone point me to the right direction?</p>
<p>As per docs you need to also include <code>hx-swap-oob=&quot;true&quot;</code> attribute into the html that you send back from websocket:</p> <p>See example from htmx GH: <a href="https://github.com/bigskysoftware/htmx/blob/master/test/servers/ws/server.go#L24" rel="nofollow noreferrer">https://github.com/bigskysoftware/htmx/blob/master/test/servers/ws/server.go#L24</a></p> <p>When using the plugin version, what worked for me was removing the defer attribute from htmx related script tags. Not sure why including defer is causing the issue though.</p> <p><strong>Update:</strong></p> <p>A GH issue has been opened by OP: <a href="https://github.com/bigskysoftware/htmx/issues/957" rel="nofollow noreferrer">https://github.com/bigskysoftware/htmx/issues/957</a></p>
python|django|websocket|channels|htmx
2
1,908,582
55,865,618
how to read and compare different words in a single line of a file which is saved in utf-8 format? in python?
<p>I want to read, word by word, a specific line of a file (which is in UTF-8 encoding). I can read the entire line with this code:</p> <pre><code>read_language = open(X, &quot;r&quot;, encoding='UTF8')  # here X is a predefined file name
T = read_language.readline()
</code></pre> <p>The main problem is that the UTF-8 space is not the same as the normal space character.</p> <p>This reads lines, but I want to read each word from the line and know the index number of each word. I also want to compare it with a predefined word.</p> <p>The string in my file is <code>समीकरण ज + अ</code>. I want to read the first word (<code>समीकरण</code>), then the next word, and so on, until the line ends. I also want to check for <code>+</code>s in an if statement to perform further operations.</p>
<p>This function will read a line and print all the words. It splits the line using a regex of white spaces (\s) and adds the index using the <a href="http://book.pythontips.com/en/latest/enumerate.html" rel="nofollow noreferrer">enumerate</a> function.</p> <pre><code>import re

def read_words(file_name):
    with open(file_name, &quot;r&quot;, encoding=&quot;UTF8&quot;) as read_language:
        line = read_language.readline()
        for idx, word in enumerate(re.split(r&quot;\s&quot;, line)):
            print(idx, word)
</code></pre> <p>you can upgrade it to be a generator using yield:</p> <pre><code>import re

def read_words(file_name):
    with open(file_name, &quot;r&quot;, encoding=&quot;UTF8&quot;) as read_language:
        line = read_language.readline()
        for idx, word in enumerate(re.split(r&quot;\s&quot;, line)):
            yield (idx, word)
</code></pre> <p>You can add the compare function inside the for loop and perform any logic you want with the word.</p>
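<p>A self-contained sketch of the splitting and the <code>+</code> check, using the sample line from the question (splitting on <code>\s+</code> so that runs of whitespace count as one separator):</p>

```python
import re

line = "समीकरण ज + अ"  # the sample line from the question
words = list(enumerate(re.split(r"\s+", line.strip())))

# positions of '+' tokens, for the if-statement logic
plus_positions = [idx for idx, word in words if word == "+"]
print(words[0])  # (0, 'समीकरण')
```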
python|file-handling|python-unicode
0
1,908,583
49,942,050
define a lambda function in single line using semicolons in python 3
<p>In python3, following works:</p> <pre><code>print(3); print(5) </code></pre> <p>However following gives a syntax error due to semicolon:</p> <pre><code>(lambda key: (print(3); print(5))) </code></pre> <p>Why is that, and is there a way to write a lambda function in single line (I intend to pass it as a short argument, without defining the function elsewhere)</p>
<p>Existing answers cover the "how?" of the question, but not the "why". Which is to say, <em>why</em> doesn't <code>lambda: print(3); print(5)</code> work? The answer is in the language specification.</p> <p>From <a href="https://docs.python.org/3/reference/expressions.html#lambda" rel="nofollow noreferrer">https://docs.python.org/3/reference/expressions.html#lambda</a>:</p> <blockquote> <p>Lambda expressions (sometimes called lambda forms) are used to create anonymous functions. The expression <code>lambda arguments: expression</code> yields a function object. [...] Note that <strong>functions created with lambda expressions cannot contain statements</strong> or annotations.</p> </blockquote> <p>From <a href="https://docs.python.org/3/reference/simple_stmts.html?highlight=semicolon#simple-statements" rel="nofollow noreferrer">https://docs.python.org/3/reference/simple_stmts.html?highlight=semicolon#simple-statements</a>:</p> <blockquote> <p>A simple statement is comprised within a single logical line. Several simple <strong>statements may occur on a single line separated by semicolons</strong>.</p> </blockquote> <p><code>print(3); print(5)</code> contains a semicolon, so it is a collection of simple statements. But a lambda can't contain statements. So a lambda can't contain <code>print(3); print(5)</code>.</p> <hr> <p>So why does <code>(lambda key: (print(3), print(5)))</code> work? It's because <code>(print(3), print(5))</code> is not a statement. It's an expression: in particular, it is a tuple literal (formally, a <a href="https://docs.python.org/3/reference/expressions.html#parenthesized-forms" rel="nofollow noreferrer">parenthesized form</a> whose expression list contains at least one comma), whose first element is a <a href="https://docs.python.org/3/reference/expressions.html#grammar-token-call" rel="nofollow noreferrer">call</a> to <code>print</code> with argument 3, and whose second element is a call to <code>print</code> with argument 5. 
All of this is a single expression, so lambda accepts it without trouble.</p>
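<p>A quick way to see the distinction in action — the tuple form runs both calls and evaluates to a tuple of the two <code>print</code> return values:</p>

```python
f = lambda key: (print(3), print(5))

result = f(None)  # prints 3, then 5
print(result)     # (None, None) — print() always returns None
```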
python|python-3.x|lambda|syntax-error
4
1,908,584
64,840,507
Python: how to show timestamp other than datetime function
<p>I get below dictionary data from aws. In python, how can I get it show timestamp instead of datetime.datetime(2020, 10, 26, 10, 57, 19, 215000, tzinfo=tzlocal()) there?</p> <p>Thanks,</p> <blockquote> <p>{ &quot;ConfigRuleName&quot;: &quot;required-tags&quot;, &quot;OrderingTimestamp&quot;: datetime.datetime( 2020, 10, 26, 10, 57, 19, 215000, tzinfo=tzlocal() ), &quot;ResourceId&quot;: &quot;arn:aws:cloudformation:us-east-1:553763988947:stack/es-edge-security-headers-kells/f1924880-8311-11ea-9a26-0af77bd56d08&quot;, &quot;ResourceType&quot;: &quot;AWS::CloudFormation::Stack&quot;, }</p> </blockquote>
<p>Not sure what it is that you are looking for... Isn't it already a datetime object?</p> <pre><code>from datetime import datetime
from dateutil.tz import *

d = datetime(2020, 10, 26, 10, 57, 19, 215000, tzinfo=tzlocal())
print(d.timestamp())
print(str(d))
</code></pre> <p>Output :</p> <pre><code>1603690039.215
2020-10-26 10:57:19.215000+05:30
</code></pre>
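<p>If a stdlib-only version is wanted (no <code>dateutil</code>), the same idea works with <code>datetime.timezone</code> — here using UTC instead of the local zone, so the timestamp differs accordingly:</p>

```python
from datetime import datetime, timezone

d = datetime(2020, 10, 26, 10, 57, 19, 215000, tzinfo=timezone.utc)
print(d.timestamp())  # 1603709839.215 — seconds since the epoch, as a float
print(d.isoformat())  # 2020-10-26T10:57:19.215000+00:00
```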
python
1
1,908,585
63,764,251
Second statement in If is not getting executed
<p>I am new to python so forgive me if I have made a stupid mistake. I am trying to create a script in python3.8 to automate whether or not if a service is running and then take an action from there. So for example if the ssh service is running then I would like the script to send a command for stopping it. However my second statement in the if loop is not getting executed it just outputs the information from the command &quot;service ssh status&quot; and that is it. Can you please help with this?</p>

<pre><code>#!/usr/bin/env python3.8
import os

ssh_status = os.system('service ssh status | grep running')

if ssh_status == 'running':
    os.system('service ssh stop')
</code></pre>
<p>os.system returns the exit code of the command. It will never return a string like &quot;running&quot;, and therefore the if block will not run. Likely what you are looking for is the command <code>subprocess.check_output()</code>.</p>
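<p>A sketch of that approach with <code>subprocess.run</code> (Python 3.7+; <code>subprocess.check_output</code> works similarly). Here <code>echo</code> is only a stand-in for the real <code>service ssh status</code> command so the example is runnable anywhere:</p>

```python
import subprocess

# `echo` stands in for: ["service", "ssh", "status"]
result = subprocess.run(
    ["echo", "ssh is running"],
    capture_output=True, text=True,
)

if "running" in result.stdout:
    status = "running"   # here you would run: service ssh stop
else:
    status = "not running"
```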
python|linux
1
1,908,586
72,057,152
Check whether deployed in cloud
<p>I have a python program running in a Docker container. My authentication method depends on whether the container is deployed in GCP or not. Ideally I'd have a function like this:</p> <pre class="lang-py prettyprint-override"><code>def deployment_environment(): # return 'local' if [some test] else 'cloud' pass </code></pre> <p>What's the most idiomatic way of checking this? My instinct is to use env named <code>[APP_NAME]_DEPLOYMENT_ENVIRONMENT</code> which gets set either way -- but making sure this is set correctly has too many moving parts. Is there a GCP package/tool which can check for me?</p>
<p>There are two solutions I've arrived at:</p> <h3>With env</h3> <p>Set an <a href="https://cloud.google.com/functions/docs/configuring/env-var" rel="nofollow noreferrer">env var</a> when deploying, like so:</p> <pre class="lang-sh prettyprint-override"><code>gcloud functions deploy [function-name] --set-env-vars ENV_GCP=1
</code></pre> <p>Then, in your code:</p> <pre class="lang-py prettyprint-override"><code>import os

def deployment_environment():
    return 'cloud' if ('ENV_GCP' in os.environ) else 'local'
</code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Pros</th> <th>Cons</th> </tr> </thead> <tbody> <tr> <td>intent is clear, both setting and using env</td> <td>more involved</td> </tr> <tr> <td>idiomatic</td> <td>relies on user setting env correctly</td> </tr> </tbody> </table> </div><h3>Via Python, with Sockets</h3> <pre class="lang-py prettyprint-override"><code>import socket

def deployment_environment():
    try:
        socket.getaddrinfo('metadata.google.internal', 80)
        return 'cloud'
    except socket.gaierror:
        return 'local'
</code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Pros</th> <th>Cons</th> </tr> </thead> <tbody> <tr> <td>more succinct</td> <td>makes improper use of <code>try/catch</code></td> </tr> <tr> <td>doesn't rely on an extra step of setting env</td> <td>dependency on socket package &amp; GCP runtime contract</td> </tr> </tbody> </table> </div>
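<p>The env-var variant is easy to exercise locally (<code>ENV_GCP</code> is just the name chosen above; any name works as long as deploy and code agree):</p>

```python
import os

def deployment_environment():
    return 'cloud' if 'ENV_GCP' in os.environ else 'local'

os.environ.pop('ENV_GCP', None)   # simulate running outside GCP
outside = deployment_environment()

os.environ['ENV_GCP'] = '1'       # simulate the deployed container
inside = deployment_environment()
print(outside, inside)  # local cloud
```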
python|google-cloud-platform|cloud|google-cloud-run
0
1,908,587
68,658,171
DocuSign envelope with editable tabs that have a pre-set value
<p>I am integrating with DocuSign and need the ability to include <em>editable</em> pre-filled tabs on the documents in the envelope I create. I am accessing their API using the requests library in Python.</p> <p>DocuSign's special <code>pre-filled</code> tabs explicitly lock in the value and do not allow for recipient/signer editing. I have been able to add and fill the value for regular <code>text</code> tabs that unfortunately are also locked for editing.</p> <p>Further context:<br /> I am writing an API endpoint that will retrieve recipient information, create the envelope off of a template with tabs pre-filled, and then return the front end a URL to display for in-app signing.</p>
<p>Here is the code that worked for creating an envelope instance with pre-filled and editable tabs:</p> <pre class="lang-py prettyprint-override"><code>def make_new_envelope(template_id, account_id): data = json.dumps({ &quot;templateId&quot;: &quot;&lt;template_id&gt;&quot;, &quot;status&quot;: &quot;sent&quot;, &quot;templateRoles&quot;: [ { &quot;clientUserId&quot;: &quot;&lt;any string of your choosing&gt;&quot;, # required but determined by you &quot;email&quot;: &quot;test@test.com&quot;, &quot;name&quot;: &quot;Maggie Nelson&quot;, &quot;roleName&quot;: &quot;signer&quot;, &quot;tabs&quot;: { &quot;textTabs&quot;: [ { &quot;tabLabel&quot;: &quot;Text a494f42f-654e-49e1-9add-dd87bdc0dd04&quot;, &quot;fontSize&quot;: &quot;size9&quot;, &quot;fontColor&quot;: &quot;black&quot;, &quot;font&quot;: &quot;lucidaconsole&quot;, &quot;required&quot;: &quot;true&quot;, &quot;locked&quot;: &quot;false&quot;, &quot;value&quot;: &quot;home phone number&quot;, &quot;maxLength&quot;: 4000, &quot;width&quot;: 84, &quot;height&quot;: 27, &quot;documentId&quot;: &quot;1&quot;, &quot;recipientId&quot;: &quot;1&quot;, &quot;pageNumber&quot;: 1, &quot;tabId&quot;: &quot;d94dd078-c808-4ced-a1b1-d3d1a577bca0&quot;, &quot;tabType&quot;: &quot;text&quot;, &quot;xPosition&quot;: 232, &quot;yPosition&quot;: 343, } ] } } ] }) envelope_url = &quot;/v2.1/accounts/{account_id}/envelopes&quot;.format(account_id=account_id) x = requests.post(url=BASE_URL + envelope_url, headers=headers, data=data) return x.json()['envelopeId'] </code></pre> <p>I am not positive that all of this is absolutely necessary to include in the json object describing the tab besides the option <code>&quot;locked&quot;: &quot;false&quot;</code>.</p> <p>The <code>tabLabel</code> and <code>tabId</code> values have no relationship to the template's defined tabs. 
It appears possible that they can be set to whatever you wish.</p> <p>With regards to updating tabs on an envelope that has already been created...I have not been successful yet. I will note that the bug detailed here does still appear to exist <a href="https://stackoverflow.com/questions/59865511/calling-envelopesapiupdate-document-tabs-returns-an-error">Calling EnvelopesApi#update_document_tabs returns an error</a></p>
python|docusignapi
1
1,908,588
68,652,197
Pycharm doesn't recognise Sqoop libraries
<p>I am on Pycharm trying to use Sqoop import job to load MySQL data in to HDFS. I downloaded this package on terminal</p> <pre><code>pip install pysqoop </code></pre> <p>I tried running this package</p> <pre><code>from pysqoop.SqoopImport import Sqoop sqoop = Sqoop(help=True) code = sqoop.perform_import() </code></pre> <p>This was the error</p> <pre><code>/home/amel/PycharmProjects/pythonProject/venv/bin/python /home/amel/PycharmProjects/pythonProject/Hello.py Traceback (most recent call last): File &quot;/home/amel/PycharmProjects/pythonProject/Hello.py&quot;, line 1, in &lt;module&gt; from pysqoop.SqoopImport import Sqoop ModuleNotFoundError: No module named 'pysqoop' Process finished with exit code 1 </code></pre> <p>How can I solve this problem?</p>
<p>It's possible that your Python code is running in a different Python environment than the one where you installed the package. Go to PyCharm: file-&gt;settings-&gt;project-&gt;project-interpreter, then change your environment.</p> <p>OR</p> <p>Put your cursor on <code>pysqoop</code>, press Alt+Enter, and choose &quot;Install package pysqoop&quot;.</p>
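A quick way to confirm which interpreter is actually running the script, and whether it can see the package — a small diagnostic sketch (the package name here is just an example; substitute whatever you installed):

```python
import importlib.util
import sys

# The interpreter currently executing this script; compare it against
# the environment where you ran `pip install pysqoop`.
print(sys.executable)

def is_installed(package_name):
    # find_spec() returns None when the current interpreter cannot
    # locate the top-level package, without actually importing it.
    return importlib.util.find_spec(package_name) is not None

print(is_installed("pysqoop"))
```

If `sys.executable` points at a different environment than the one where `pip install` ran, that mismatch is the cause of the `ModuleNotFoundError`.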
python|mysql|python-3.x|hadoop|sqoop
1
1,908,589
10,715,880
How to deal with circle degrees in Numpy?
<p>I need to calculate some direction arrays in numpy. I divided 360 degrees into 16 groups, each group covers 22.5 degrees. I want the 0 degree in the middle of a group, i.e., get directions between -11.25 degrees and 11.25 degrees. But the problem is how can I get the group between 168.75 degrees and -168.75 degrees?</p> <pre><code>a[numpy.where(a&lt;0)] = a[numpy.where(a&lt;0)]+360 for m in range (0,3600,225): b = (a*10 &gt; m)-(a*10 &gt;= m+225).astype(float) c = numpy.apply_over_axes(numpy.sum,b,0) </code></pre>
<p>If you want to divide data into 16 groups, having 0 degree in the middle, why are you writing <code>for m in range (0,3600,225)</code>?</p> <pre><code>&gt;&gt;&gt; [x/10. for x in range(0,3600,225)] [0.0, 22.5, 45.0, 67.5, 90.0, 112.5, 135.0, 157.5, 180.0, 202.5, 225.0, 247.5, 270.0, 292.5, 315.0, 337.5] ## these sectors are not the ones you want! </code></pre> <p>I would say you should start with <code>for m in range (-1125,36000,2250)</code> (note that now I am using a 100 factor instead of 10), that would give you the groups you want...</p> <pre><code>wind_sectors = [x/100.0 for x in range(-1125,36000,2250)] for m in wind_sectors: #DO THINGS </code></pre> <hr> <p>I have to say I don't really understand your script and the goal of it... To deal with circle degrees, I would suggest something like:</p> <ul> <li>a condition, where you put your problematic data, i.e., the one where you have to deal with the transition around zero;</li> <li>a condition where you put all the other data.</li> </ul> <p>For example, in this case, I am printing all the elements from my array that belong to each sector:</p> <pre><code>import numpy def wind_sectors(a_array, nsect = 16): step = 360./nsect init = step/2 sectores = [x/100.0 for x in range(int(init*100),36000,int(step*100))] a_array[a_array&lt;0] = a_array[a_array&lt;0]+360 for i, m in enumerate(sectores): print 'Sector'+str(i)+'(max_threshold = '+str(m)+')' if i == 0: for b in a_array: if b &lt;= m or b &gt; sectores[-1]: print b else: for b in a_array: if b &lt;= m and b &gt; sectores[i-1]: print b return "it works!" 
# TESTING IF THE FUNCTION IS WORKING: a = numpy.array([2,67,89,3,245,359,46,342]) print wind_sectors(a, 16) # WITH NDARRAYS: b = numpy.array([[250,31,27,306], [142,54,260,179], [86,93,109,311]]) print wind_sectors(b.flat[:], 16) </code></pre> <hr> <p><strong>about</strong> <code>flat</code> <strong>and</strong> <code>reshape</code> <strong>functions:</strong></p> <pre><code>&gt;&gt;&gt; a = numpy.array([[0,1,2,3], [4,5,6,7], [8,9,10,11]]) &gt;&gt;&gt; original = a.shape &gt;&gt;&gt; b = a.flat[:] &gt;&gt;&gt; c = b.reshape(original) &gt;&gt;&gt; a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) &gt;&gt;&gt; b array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) &gt;&gt;&gt; c array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) </code></pre>
arrays|numpy|geometry|degrees
1
1,908,590
4,855,823
Get mouse deltas using Python! (in Linux)
<p>I know that Linux gives out a 9-bit <a href="https://en.wikipedia.org/wiki/Two%27s_complement" rel="nofollow noreferrer">two's complement</a> data out of the /dev/input/mice. I also know that you can get that data via /dev/hidraw0 where hidraw is your USB device giving out raw data from the HID.</p> <p>I know the data sent is the delta of the movement (displacement) rather than position. By the by I can also view gibberish data via the &quot;cat /dev/input/mice&quot;.</p> <p>By using the Python language, how can I read this data? I really rather get that data as in simple integers. But it has proven hard. The real problem is reading the damn data. Is there a way to read bits and do bit arithmetic? (Currently I'm not worrying over <em>root</em> user-related issues. Please assume the script is run as <em>root</em>.)</p> <p>(My main reference was <a href="http://www.computer-engineering.org/ps2mouse/" rel="nofollow noreferrer">http://www.computer-engineering.org/ps2mouse/</a>)</p>
<p>I'm on a basic device and not having access to X or ... so event.py doesn't works.</p> <p>So here's my simpler decode code part to interpret from "deprecated" '/dev/input/mice':</p> <pre><code>import struct file = open( "/dev/input/mice", "rb" ); def getMouseEvent(): buf = file.read(3); button = ord( buf[0] ); bLeft = button &amp; 0x1; bMiddle = ( button &amp; 0x4 ) &gt; 0; bRight = ( button &amp; 0x2 ) &gt; 0; x,y = struct.unpack( "bb", buf[1:] ); print ("L:%d, M: %d, R: %d, x: %d, y: %d\n" % (bLeft,bMiddle,bRight, x, y) ); # return stuffs while( 1 ): getMouseEvent(); file.close(); </code></pre>
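The packet decoding above can be exercised without a real device. Here is a Python 3 sketch of the same logic (assuming the usual PS/2 layout: left/right/middle buttons in bits 0–2 of the first byte, followed by signed 8-bit x and y deltas):

```python
import struct

def decode_packet(buf):
    # The first byte carries the button flags.
    button = buf[0]
    b_left = bool(button & 0x1)
    b_right = bool(button & 0x2)
    b_middle = bool(button & 0x4)
    # 'bb' unpacks two signed (two's complement) 8-bit integers.
    dx, dy = struct.unpack("bb", buf[1:3])
    return b_left, b_middle, b_right, dx, dy

# Sample packet: left button down, moved +5 in x and -3 in y
# (0xFD is -3 as a signed byte).
print(decode_packet(bytes([0x09, 0x05, 0xFD])))  # (True, False, False, 5, -3)
```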
python|linux|mouse|hid
19
1,908,591
61,856,466
Django equivalent to HAVING in MySQL and filter by model's field
<p>I have two tables:</p> <pre><code>table1 ----------------- | id | salary | | 1 | 2500 | | 2 | 500 | ----------------- table2 ------------------------------- | id | outlay | table1_fk | | 1 | 20 | 1 | | 2 | 40 | 1 | | 3 | 1000 | 2 | ------------------------------- </code></pre> <p>and I need to select all rows from table1 + <em>sum of outlays</em> where <em>salary</em> is bigger than <em>SUM of outlays</em> the MySQL query would be:</p> <pre class="lang-sql prettyprint-override"><code>SELECT t1.*, COALESCE(SUM(t2.outlay),0) AS total_outlay FROM table1 AS t1 LEFT JOIN table2 AS t2 ON t1.id = t2.table1_fk GROUP BY t1.id HAVING total_outlay &lt; t1.salary; </code></pre> <p>Is it possible with Django ORM? So far I have this:</p> <pre class="lang-py prettyprint-override"><code>Model1.objects.filter(somefilterlogic).annotate(outlay_total=Sum(&quot;model2__outlay&quot;)) </code></pre> <hr> I'm using Django 2.2.5 and MySQL
<p>You can <a href="https://docs.djangoproject.com/en/dev/ref/models/querysets/#filter" rel="nofollow noreferrer"><strong><code>.filter(&hellip;)</code></strong> [Django-oc]</a> <em>after</em> the <a href="https://docs.djangoproject.com/en/dev/ref/models/querysets/#annotate" rel="nofollow noreferrer"><strong><code>.annotate(&hellip;)</code></strong> [Django-doc]</a> clause:</p> <pre><code>from django.db.models import <b>F</b>, Sum Model1.objects.filter( somefilterlogic ).annotate( outlay_total=Sum('model2__outlay') ).filter( <b>total_outlay__lt=F('salary')</b> )</code></pre> <p>As you say, you can also make use of <a href="https://docs.djangoproject.com/en/3.0/ref/models/database-functions/#coalesce" rel="nofollow noreferrer"><strong><code>Coalesce</code></strong> [Django-doc]</a> to use zero in case there are no related <code>Model2</code> objects:</p> <pre><code>from django.db.models import F, Sum from django.db.models.functions import Coalesce Model1.objects.filter( somefilterlogic ).annotate( outlay_total=<b>Coalesce(</b>Sum('model2__outlay')<b>, 0)</b> ).filter( total_outlay__lt=F('salary') )</code></pre>
python|mysql|django|django-models|orm
2
1,908,592
67,306,268
How to select a range of NumPy values for bar chart
<p>I created a bar chart using Matplotlib from the count of unique strings in a NumPy array. Now I would like to display only the top 10 most frequent species in the bar chart. I am new to Python so I am having trouble figuring it out. This is also my first question here, so let me know if I'm missing any important information</p> <pre><code>test_indices = numpy.where((obj.year == 2014) &amp; (obj.native == &quot;Native&quot;)) SpeciesList2014 = numpy.append(SpeciesList2014, obj.species_code[test_indices]) labels, counts = numpy.unique(SpeciesList2014, return_counts=True) indexSort = numpy.argsort(counts) plt.bar(labels[indexSort][::-1], counts[indexSort][::-1], align='center') plt.xticks(rotation=45) plt.show() </code></pre>
<p>You already have the values in a sorted array but you only want to select the ten values with the most counts.</p> <p>It seems your array is sorted with larger counts as last values so you can exploit the numpy indexing as</p> <pre><code>plt.bar(labels[indexSort][-1:-11:-1], counts[indexSort][-1:-11:-1], align='center') </code></pre> <p>where <code>[a:b:c]</code> means a = start index, b = end index, c = step, and negative values represent counting from the end of the array. Or alternatively:</p> <pre><code>n=counts.shape[0] plt.bar(labels[indexSort][n-10:], counts[indexSort][n-10:], align='center') </code></pre> <p>which plots in increasing order.</p>
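The slicing rule is easy to verify on a small sorted array:

```python
import numpy as np

counts = np.array([1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23])

# Last ten elements in descending order: start at -1 (the end),
# stop just before -11, step backwards by 1.
top10_desc = counts[-1:-11:-1]
print(top10_desc)  # [23 21 19 17 15 13 11  9  7  5]

# Equivalent ascending form: simply the last ten elements.
n = counts.shape[0]
top10_asc = counts[n - 10:]
print(top10_asc)   # [ 5  7  9 11 13 15 17 19 21 23]
```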
python|matplotlib|bar-chart
2
1,908,593
60,611,320
Asynchronous python function calls, keep rescheduling functions in asyncio.gather without waiting for longest running task
<p>So currently i have asynchronous python code (using asyncio) that looks like this: </p> <pre><code>while datetime.now() &lt; time_limit: last_start_run_time = datetime.now() result = await asyncio.gather( *( get_output(output_source) ) for output_source in output_sources ) for output in res: output_dict.update(my_output_dict) if (datetime.now() - last_start_run_time).seconds &lt; upper_bound_wait: await asyncio.sleep(delay) </code></pre> <p>The problem with this code is that it always waits for the longest running <strong>get_output</strong> call to call the function again for all output sources.</p> <p>I am wondering how can i rewrite this code in a way that calls each <strong>get_output</strong> call for each <strong>output_source</strong> as soon as it had finished it's earlier run (if it is within the <strong>upper_bound_wait</strong> ), i would also like the delay to be per <strong>get_output</strong> function call rather than after it finishes all of them.</p> <p>How can this be achieved used asyncio?</p>
<p>Suggestion: move all your logic to a coroutine and create the tasks in a simple loop. Each task will decide for itself when to delay, when to repeat, and when to exit.</p> <pre><code>async def get_output_repeatable(upper_bound_wait, output_source): while datetime.now() &lt; time_limit: last_start_run_time = datetime.now() output_dict.update(await get_output(output_source)) if (datetime.now() - last_start_run_time).seconds &lt; upper_bound_wait: await asyncio.sleep(delay) def run_them_all(): for output_source in output_sources: asyncio.create_task(get_output_repeatable(upper_bound_wait, output_source)) </code></pre>
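A runnable toy version of this pattern, with hypothetical sources of different latencies each looping on its own schedule, so the fast one never waits for the slow one:

```python
import asyncio

async def get_output(source, latency):
    # Stand-in for the real fetch: just sleep, then report.
    await asyncio.sleep(latency)
    return {source: latency}

async def repeat_source(results, source, latency, repeats):
    # Each task loops independently; no task waits for its siblings.
    for _ in range(repeats):
        results.update(await get_output(source, latency))

async def main():
    results = {}
    await asyncio.gather(
        repeat_source(results, "fast", 0.01, 3),
        repeat_source(results, "slow", 0.05, 1),
    )
    return results

print(asyncio.run(main()))  # {'fast': 0.01, 'slow': 0.05}
```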
python|python-3.x|asynchronous|async-await|python-asyncio
2
1,908,594
60,613,526
Append List values to DataFrame Column data
<p>I have a CSV file as below:</p> <ul> <li><strong>name</strong></li> <li>john </li> <li>eve</li> </ul> <p>And a list: state=['India','US']</p> <p>I want to add new column to the existing csv file with the list items as data to that column</p> <p>What I want:</p> <ul> <li><strong>name</strong> - <strong>state</strong></li> <li>john - India</li> <li>eve - Us</li> </ul>
<p>You can do it by declaring a new list as a column.</p> <p>code example:</p> <pre><code>import pandas as pd data = {'name': ['john', 'eve']} df = pd.DataFrame(data) state = ['India','US'] df['state'] = state print(df) </code></pre>
python|jupyter-notebook
0
1,908,595
60,671,689
How to simply walk through directories and subdirectories and create archive if found certain files
<p>I would like to create 2 scripts. First would be responsible for traversing through all subdirectories in parent folder, looking for files with extension <code>"*.mp4", "*.txt","*.jpg"</code> and if folder (for example <code>testfolder</code>) with such three files is found, another scripts performs operation of creating archive <code>testfolder.tar</code>.</p> <p>Here is my directory tree for testing those scripts: <a href="https://imgur.com/4cX5t5N" rel="nofollow noreferrer">https://imgur.com/4cX5t5N</a></p> <p><code>rootDirectory</code> contains <code>parentDirectory1</code> and <code>parentDirectory2</code>. <code>parentDirectories</code> contain <code>childDirectories</code>.</p> <p>Here is code of <code>dirScanner.py</code> trying to print extensions of files in subdirs:</p> <pre><code>import os rootdir = r'C:\Users\user\pythonprogram\rootDirectory' for directory in os.walk(rootdir): for subdirectory in directory: extensions = [] if os.path.isfile(os.curdir): extensions.append(os.path.splitext(os.curdir)[-1].lower()) print(extensions) </code></pre> <p>However it absolutely does not work as I expect it to work. How should I traverse through <code>parentDirectories</code> and <code>childDirectiories</code> in <code>rootDirectory</code>?</p> <p>I would like to keep it simple, in the way "Okay I'm in this directory, the files of this directory are XXX, Should/Shouldn't pack them"</p> <p>Also, this is my other script that should be responsible for packing files for specified path. I'm trying to learn how to use classes however I don't know if I understand it correctly.</p> <pre><code>import tarfile class folderNeededToBePacked: def __init__(self, name, path): self.path = path self.name = name def pack(self): tar = tarfile.open(r"{0}/{1}.tar".format(self.path, self.name), "w") for file in self.path: tar.add(file) tar.close() </code></pre> <p>I'd be thankful for all tips and advices how to achieve the goal of this task.</p>
<p>It's a simple, straightforward task without many complex concepts which would call for being implemented as a class, so I would not use one for this.</p> <p>The idea is to walk through all directories (recursively) and if a <em>matching</em> directory is found, pack the three files of this directory into the archive.</p> <p>To walk through the directory tree you need to fix your usage of 'os.walk()' according to its documentation (note that the <code>dirpath</code> values it yields already include <code>root</code>, so <code>root</code> does not need to be joined in again):</p> <pre><code>tar = tarfile.open(...) for dirpath, dirnames, filenames in os.walk(root): found_files = dir_matching(dirpath) for found_file in found_files: tar.add(found_file) tar.close() </code></pre> <p>And the function <code>dir_matching()</code> should return a list of the three found files (or an empty list if the directory doesn't match, i.e. at least one of the three necessary files is missing):</p> <pre><code>def dir_matching(dirpath): jpg = glob.glob(os.path.join(dirpath, '*.jpg')) mp4 = glob.glob(os.path.join(dirpath, '*.mp4')) txt = glob.glob(os.path.join(dirpath, '*.txt')) if jpg and mp4 and txt: return [ jpg[0], mp4[0], txt[0] ] else: return [] </code></pre> <p>Of course you could add more sophisticated checks e.g. whether <em>exactly</em> one jpg etc. is found, but that depends on your concrete specifications.</p>
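The approach can be checked end-to-end with a throwaway directory tree (a self-contained sketch; directory and file names are made up for the demonstration):

```python
import glob
import os
import tarfile
import tempfile

def dir_matching(dirpath):
    """Return the three files if dirpath holds a .jpg, .mp4 and .txt, else []."""
    jpg = glob.glob(os.path.join(dirpath, '*.jpg'))
    mp4 = glob.glob(os.path.join(dirpath, '*.mp4'))
    txt = glob.glob(os.path.join(dirpath, '*.txt'))
    return [jpg[0], mp4[0], txt[0]] if (jpg and mp4 and txt) else []

# Build a tiny tree: one complete child directory, one incomplete one.
root = tempfile.mkdtemp()
complete = os.path.join(root, 'parent1', 'complete')
partial = os.path.join(root, 'parent1', 'partial')
os.makedirs(complete)
os.makedirs(partial)
for name in ('a.jpg', 'a.mp4', 'a.txt'):
    open(os.path.join(complete, name), 'w').close()
open(os.path.join(partial, 'b.txt'), 'w').close()

# Walk the tree and pack only the matching directory's files.
archive = os.path.join(root, 'backup.tar')
with tarfile.open(archive, 'w') as tar:
    for dirpath, dirnames, filenames in os.walk(root):
        for found in dir_matching(dirpath):
            tar.add(found, arcname=os.path.relpath(found, root))

with tarfile.open(archive) as tar:
    members = sorted(tar.getnames())
print(members)
```

Only the three files from the complete directory end up in the archive; the directory with just a `.txt` is skipped.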
python|directory|operating-system|tar
1
1,908,596
71,308,957
how to efficiently rename a lot of blobs in GCS
<p>Lets say that on Google Cloud Storage I have bucket: <em>bucket1</em> and inside this bucket I have thousands of blobs I want to rename in this way:</p> <p><strong>Original blob</strong>: <em>bucket1/subfolder1/subfolder2/data_filename.csv</em></p> <p><strong>to</strong>: <em>bucket1/subfolder1/subfolder2/data_filename/data_filename_backup.csv</em></p> <p><code>subfolder1</code>, <code>subfolder2</code> and <code>data_filename.csv</code> - they can have different names, however the way to change names of all blobs is as above.</p> <p>What is the most efficient way to do this? Can I use Python for that?</p>
<p>You can use whatever programming language you want where Google offers an SDK for working with Cloud Storage. There is not going to be much of an advantage to any particular language you choose.</p> <p>There is not really an &quot;efficient&quot; way of doing this. What you will end up doing in your code is pretty standard:</p> <ol> <li><a href="https://cloud.google.com/storage/docs/listing-objects" rel="nofollow noreferrer">List the objects</a> that you want to rename.</li> <li>Iterate that list.</li> <li>For each object, <a href="https://cloud.google.com/storage/docs/copying-renaming-moving-objects#rename" rel="nofollow noreferrer">change the name</a>.</li> </ol> <p>You will get better performance overall if you run the code in a Google Cloud Shell or other Google Cloud compute environment in the same region as your bucket.</p>
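A sketch in Python of what that loop could look like. The name transformation itself is plain string handling; the client calls are commented out because they need credentials and a real bucket, and the `google-cloud-storage` API usage shown is an assumption (`rename_blob` copies then deletes under the hood, so it is not free for large objects):

```python
import posixpath

def backup_name(blob_name):
    """Map 'a/b/file.csv' -> 'a/b/file/file_backup.csv'."""
    prefix, filename = posixpath.split(blob_name)
    stem, ext = posixpath.splitext(filename)
    parts = [p for p in (prefix, stem, stem + "_backup" + ext) if p]
    return "/".join(parts)

print(backup_name("subfolder1/subfolder2/data_filename.csv"))
# subfolder1/subfolder2/data_filename/data_filename_backup.csv

# Sketch of the rename loop with the google-cloud-storage client
# (assumed API; not run here):
#
# from google.cloud import storage
# client = storage.Client()
# bucket = client.bucket("bucket1")
# for blob in client.list_blobs("bucket1"):
#     bucket.rename_blob(blob, backup_name(blob.name))
```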
python|google-cloud-platform|google-cloud-storage|gsutil
1
1,908,597
64,258,338
How to improve my if else logic while appending from list of dicts?
<p>I have a list of dicts <code>x</code> with some metadata and I may/may not have another list of just strings <code>y</code> without metadata which is a subset of the previous list.</p> <p>I want to iterate over <code>x</code> and append the string with some of the metadata to another list <code>z</code>. If I have <code>y</code>, I want to just add metadata of the strings in <code>y</code> from <code>x</code> to <code>z</code>.</p> <p>I have this solution so far and I think this can be improved, if not I will remove the question.</p> <pre class="lang-py prettyprint-override"><code>x = [ {&quot;a&quot;: &quot;valuea0&quot;, &quot;b&quot;: &quot;valueb0&quot;}, {&quot;a&quot;: &quot;valuea1&quot;, &quot;b&quot;: &quot;valueb1&quot;}, {&quot;a&quot;: &quot;valuea2&quot;, &quot;b&quot;: &quot;valueb2&quot;}, ] y = [&quot;valueb0&quot;, &quot;valueb1&quot;] z = [] def so_question(x, **kwargs): test_kwarg = kwargs.get(&quot;test_kwarg&quot;, None) for item in x: test_string = item[&quot;b&quot;] if test_kwarg: if test_string in test_kwarg: z.append( { &quot;p&quot;: item[&quot;a&quot;], &quot;q&quot;: item[&quot;b&quot;], } ) else: z.append( { &quot;p&quot;: item[&quot;a&quot;], &quot;q&quot;: item[&quot;b&quot;], } ) return z print(so_question(x, test_kwarg=y)) </code></pre> <p>Expected output:</p> <pre><code>z = [ {&quot;a&quot;: &quot;valuea0&quot;, &quot;b&quot;: &quot;valueb0&quot;}, {&quot;a&quot;: &quot;valuea1&quot;, &quot;b&quot;: &quot;valueb1&quot;}, ] </code></pre> <p>How can I improve this if/else logic since I am doing the same thing in both if and else?</p>
<p>You can combine both conditions under which you append, as follows:</p> <pre><code>x = [ {&quot;a&quot;: &quot;valuea0&quot;, &quot;b&quot;: &quot;valueb0&quot;}, {&quot;a&quot;: &quot;valuea1&quot;, &quot;b&quot;: &quot;valueb1&quot;}, {&quot;a&quot;: &quot;valuea2&quot;, &quot;b&quot;: &quot;valueb2&quot;}, ] y = [&quot;valueb0&quot;, &quot;valueb1&quot;] z = [] def so_question(x, **kwargs): test_kwarg = kwargs.get(&quot;test_kwarg&quot;, None) for item in x: test_string = item[&quot;b&quot;] if not test_kwarg or test_string in test_kwarg: z.append( { &quot;p&quot;: item[&quot;a&quot;], &quot;q&quot;: item[&quot;b&quot;], } ) return z print(so_question(x, test_kwarg=y)) </code></pre> <p>Your <code>else</code> branch is captured by <code>not test_kwarg</code>: when <code>test_kwarg</code> is <code>None</code> or empty, the left side of the <code>or</code> is already true. When <code>test_kwarg</code> is set, evaluation moves on to the right side of the <code>or</code>, which holds your nested membership check. Whenever either side of the <code>or</code> is true, you append.</p>
python|python-3.x|if-statement
2
1,908,598
70,201,307
Loop through list to extract specific patterns
<p>I have a quite specific question that I'm unsure about how to go forward with.</p> <p>I have a list of numbers and I want to extract some specific patterns from them where I loop through the list and create a new one, it's easier to explain with an example.</p> <p>Say I have a list, a = [2, 9, 3, 2, 3, 5, 7, 9].</p> <p>What I want to do with this list is loop through 4 numbers at a time and give them corresponding letters, depending on when they occur in the sequence.</p> <p>i.e. First four numbers = 2932 = ABCA</p> <p>Second sequence of numbers = 9323 = ABCB</p> <p>Third sequence = 3235 = ABAC</p> <p>Fourth sequence = 2357 = ABCD</p> <p>Fifth sequence = 3579 = ABCD</p> <p>I then want to take these sequences and add them to another list which would now look like,</p> <p>b = [ABCA, ABCB, ABAC, ABCD, ABCD]</p> <p>I'm really unsure about how the format of the code should be, the length of the new list will always be 3 less than the original. Any help would be great, thanks.</p>
<p>You can use a dictionary to assign letters to numbers and read that dictionary again to access the relevant letters. It has 2 loops, which is not ideal but it does the job:</p> <pre><code>a = [2, 9, 3, 2, 3, 5, 7, 9] len_a = len(a) output = [] letters = 'ABCD' for i in range(len_a-3): d = {} k = 0 for j in a[i:i+4]: if j not in d: d[j] = letters[k] k += 1 else: continue letter_assignment = ''.join([d[j] for j in a[i:i+4]]) output.append(letter_assignment) </code></pre> <p>Output:</p> <pre><code>print(output) # ['ABCA', 'ABCB', 'ABAC', 'ABCD', 'ABCD'] </code></pre>
python|python-3.x|list|for-loop
1
1,908,599
11,338,415
Confusion with Python subprocess
<p>I'm trying to run some external executable code from within Python and then make use of the output.The code I'm using takes an external file and returns a single number (the number of images encoded in that file). When I run from the command line, I see the following:</p> <pre><code>me@ubuntu:~/nist/hsfsys/bin$ ./nummis /usr/local/hsfsys/data/by_class/4a/train_4a.mis 3962 </code></pre> <p>Where 3962 is a correct output as near as I can tell</p> <p>However, when I try to use subprocess from within Python, I get the following error:</p> <pre><code>me@ubuntu:~/nist/hsfsys/bin$ python Python 2.7.3 (default, Apr 20 2012, 22:39:59) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import subprocess &gt;&gt;&gt; subprocess.check_output(["./nummis","/usr/local/hsfsys/data/by_class/4a/train_4a.mis"]) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/lib/python2.7/subprocess.py", line 544, in check_output raise CalledProcessError(retcode, cmd, output=output) subprocess.CalledProcessError: Command '['./nummis', '/usr/local/hsfsys/data/by_class/4a/train_4a.mis']' returned non-zero exit status 32 &gt;&gt;&gt; subprocess.call(["./nummis","/usr/local/hsfsys/data/by_class/4a/train_4a.mis"]) 3962 32 </code></pre> <p>How should I interpret this "non-zero exit status 32" ? If something is wrong, why don't I see it on the command line? If nothing is wrong, why is Python complaining &amp; how can I get it to stop complaining?</p>
<p>The command line only reports the exit status when explicitly asked for it.</p> <p>After calling your program from the command line, try</p> <pre><code>echo $? </code></pre> <p>in order to show the exit status. If it shows <code>32</code> as well, it is the called program which is guilty. It doesn't properly <code>return 0;</code> or <code>return EXIT_SUCCESS;</code> in its <code>main()</code>.</p>
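The same status can be inspected from Python itself. This also explains the interactive session in the question: `subprocess.call()` does not capture stdout, so `3962` was printed by the child process and `32` was the returned exit status. A minimal sketch using a throwaway child process:

```python
import subprocess
import sys

# check_output() raises CalledProcessError whenever the child exits
# non-zero, even if it wrote useful output first.
try:
    subprocess.check_output(
        [sys.executable, "-c", "print(3962); raise SystemExit(32)"])
except subprocess.CalledProcessError as e:
    returncode, output = e.returncode, e.output

print(returncode)  # 32
print(output)      # the stdout the child produced before exiting

# call() doesn't capture stdout; it simply returns the exit status.
code = subprocess.call([sys.executable, "-c", "raise SystemExit(32)"])
print(code)  # 32
```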
python|subprocess
2