Dataset columns (dtype, min to max):
- Unnamed: 0: int64 (0 to 1.91M)
- id: int64 (337 to 73.8M)
- title: string (lengths 10 to 150)
- question: string (lengths 21 to 64.2k)
- answer: string (lengths 19 to 59.4k)
- tags: string (lengths 5 to 112)
- score: int64 (-10 to 17.3k)
1,901,200
59,942,427
Destroy token after logout
<p>I want to destroy the auth token when the user logs out. The user is logged out successfully in the view I have provided, but I also need to destroy the token on logout.</p> <pre><code>views.py </code></pre> <pre><code>class UserLoginViewSet(viewsets.ViewSet): def create(self,request): try: data=request.data email=data.get('email') password=data.get('password') date_of_birth=data.get('date_of_birth') if not all([email,password,date_of_birth]): raise Exception('all fields are mandatory') user=authenticate(username=email,password=password) if user is not None: token=generate_token() user_info=MyUser.objects.get(email=email) data=({ 'email':user_info.email, 'password':user_info.password, #'date_of_birth':user_info.date_of_birth }) return Response({"message": "You are successfully logged in", "user_info":data,"token": token, "success": True},status=status.HTTP_200_OK) else : raise Exception('not authorised') except Exception as error: traceback.print_exc() return Response({"message": str(error), "success": False}, status=status.HTTP_200_OK) def delete(self,request): logout(request) return Response({'successful':True}) # my user is logging out correctly, but I want to do this by deleting the token </code></pre>
<p>You can do it like this:</p> <pre><code>class UserLoginViewSet(viewsets.ViewSet): def create(self,request): try: data=request.data email=data.get('email') password=data.get('password') date_of_birth=data.get('date_of_birth') if not all([email,password,date_of_birth]): raise Exception('all fields are mandatory') user=authenticate(username=email,password=password) if user is not None: token=generate_token() user_info=MyUser.objects.get(email=email) data=({ 'email':user_info.email, 'password':user_info.password, #'date_of_birth':user_info.date_of_birth }) return Response({"message": "You are successfully logged in", "user_info":data,"token": token, "success": True},status=status.HTTP_200_OK) else : raise Exception('not authorised') except Exception as error: traceback.print_exc() return Response({"message": str(error), "success": False}, status=status.HTTP_200_OK) class LogoutView(APIView): permission_classes = (permissions.IsAuthenticated,) def get(self, request): request.user.auth_token.delete() logout(request) return Response({"message": "success", 'code': status.HTTP_200_OK, 'detail': "logout success"}) </code></pre> <p>In the app's urls.py, add a new URL:</p> <pre><code>path('logout/',LogoutView.as_view()), </code></pre>
django|python-3.x|django-rest-framework
1
1,901,201
2,852,912
Searching through large data set
<p>How would I search through a list of ~5 million 128-bit (or 256-bit, depending on how you look at it) strings quickly and find the duplicates (in Python)? I can turn the strings into numbers, but I don't think that's going to help much. Since I haven't learned much information theory: is there anything about this in information theory?</p> <p>And since these are hashes already, there's no point in hashing them again.</p>
<p>If it fits into memory, use <code>set()</code>. I think it will be faster than sorting; O(n log n) for 5 million items is going to cost you.</p> <p>If it does not fit into memory (say you have a lot more than 5 million records), divide and conquer: split the records at the midpoint, e.g. 1 x 2^127, and apply either of the above methods to each half. I guess information theory helps by stating that a good hash function will distribute the keys evenly, so the split-at-the-midpoint method should work well.</p> <p>You can also apply divide and conquer even if the data fits into memory: sorting 2 x 2.5 million records is faster than sorting 5 million records.</p>
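The set-based approach can be sketched as follows (the data here is a hypothetical stand-in: seeded random 128-bit values rendered as 32-character hex strings, with five duplicates planted):

```python
import random

random.seed(0)  # deterministic, so the example is reproducible

# Hypothetical stand-in for the real data: 128-bit values as 32-char hex strings.
values = [format(random.getrandbits(128), "032x") for _ in range(100_000)]
values += values[:5]  # plant five known duplicates

# Single pass with a set: O(n) expected time, versus O(n log n) for sorting.
seen = set()
duplicates = set()
for v in values:
    if v in seen:
        duplicates.add(v)
    else:
        seen.add(v)

print(len(duplicates))  # 5
```

The same idea extends to the divide-and-conquer variant: partition the values by their leading bit, then run the set pass on each partition independently.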
python|arrays|search|string|duplicates
4
1,901,202
6,167,127
How can I put multiple statements in one line?
<p>I know a little bit about comprehensions in Python, but they seem very hard to 'read'. The way I see it, a comprehension might accomplish the same as the following code:</p> <pre><code>for i in range(10): if i == 9: print('i equals 9') </code></pre> <p>This code is much easier to read than how comprehensions currently work, but I've noticed you can't have two <code>:</code>s in one line. This brings me to:</p> <p>Is there a way I can get the following example into <em>one line</em>?</p> <pre><code>try: if sam[0] != 'harry': print('hello', sam) except: pass </code></pre> <p>Something like this would be great:</p> <pre><code>try: if sam[0] != 'harry': print('hello', sam) except:pass </code></pre> <p>But again I encounter the conflicting <code>:</code>s.</p> <p>I'd also love to know if there's a way to run <code>try</code> (or something like it) without <code>except</code>. It seems entirely pointless that I need to put <code>except:pass</code> in there. It's a wasted line.</p>
<p>Unfortunately, what you want is not possible with Python (which makes Python close to useless for command-line one-liner programs). Even explicit use of parentheses does not avoid the syntax exception. You can get away with a sequence of simple statements, separated by semicolons:</p> <pre><code>for i in range(10): print "foo"; print "bar" </code></pre> <p>But as soon as you add a construct that introduces an indented block (like <code>if</code>), you need the line break. Also,</p> <pre><code>for i in range(10): print "i equals 9" if i==9 else None </code></pre> <p>is legal and might approximate what you want.</p> <p>As for the <code>try ... except</code> thing: it would be totally useless <strong>without</strong> the <code>except</code>. <code>try</code> says "I want to run this code, but it might throw an exception". If you don't care about the exception, leave out the <code>try</code>. But as soon as you put it in, you're saying "I want to handle a potential exception". The <code>pass</code> then says you wish not to handle it specifically. But that means your code will continue running, which it wouldn't otherwise.</p>
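For reference, here are the same patterns in modern Python 3; the `contextlib.suppress` context manager (available since Python 3.4) gives a compact stand-in for `try: ... except: pass`:

```python
from contextlib import suppress

# Simple statements may share one line, separated by semicolons.
x = 1; y = 2; total = x + y

# A conditional expression folds the if into a single line inside a loop.
results = ["i equals 9" if i == 9 else None for i in range(10)]

# try/if still can't collapse to one line, but "try ... except: pass"
# has a compact equivalent with contextlib.suppress:
sam = []
with suppress(IndexError):
    print("hello", sam[0])  # silently skipped when sam is empty
```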
python
178
1,901,203
67,679,137
Iteration over a list of dictionaries with for loop
<p>In my code I'm trying to iterate over a list of dictionaries and use the values of those dictionaries to create new objects. The problem is that apparently when I write the for loop, instead than iterating over the dictionaries it seems to iterate directly over the elements inside the first dictionary, and I don't really understand why!</p> <p>This is the code:</p> <pre><code>class Phase: def __init__(self, workshop, machining, operator, placings): self.workshop = workshop self.machining = machining self.operator = operator self.placings = placings class Part: def __init__(self, name, phases): self.name = name self.phases = phases class Loader: def __init__(self, cycles_file, workshops_file): self.cycles_file = cycles_file self.workshops_file = workshops_file self.workshops = [] cycles = {'part1': [{'workshop': 'ws1', 'machining': 10, 'operator': 0, 'placings': 2}, {'workshop': 'ws2', 'machining': 7, 'operator': 3, 'placings': 0}], 'part2': {'workshop': 'ws3', 'machining': 5, 'operator': 5, 'placings': 0}} parts = [] for part in cycles: print('part: ', part) name = part phases = [] print('cycle: ', cycles[part], type(cycles[part])) for phase in cycles[part]: print('phase: ', phase, type(phase)) workshop = phase.get('workshop') machining = phase.get('machining') operator = phase.get('operator') placings = phase.get('placings') phase = Phase(workshop, machining, operator, placings) phases.append(phase) part = Part(name, phases) parts.append(part) workshops = {'ws1': {'turns': 3, 'turn duration': 8}, 'ws2': {'turns': 2, 'turn duration': 7.5}, 'ws3': {'turns': 2, 'turn duration': 7.5}} loader = Loader(cycles, workshops) </code></pre> <p>And this is the result I get:</p> <pre><code>part: part1 cycle: [{'workshop': 'ws1', 'machining': 10, 'operator': 0, 'placings': 2}, {'workshop': 'ws2', 'machining': 7, 'operator': 3, 'placings': 0}] &lt;class 'list'&gt; phase: {'workshop': 'ws1', 'machining': 10, 'operator': 0, 'placings': 2} &lt;class 'dict'&gt; phase: {'workshop': 
'ws2', 'machining': 7, 'operator': 3, 'placings': 0} &lt;class 'dict'&gt; part: part2 cycle: {'workshop': 'ws3', 'machining': 5, 'operator': 5, 'placings': 0} &lt;class 'dict'&gt; phase: workshop &lt;class 'str'&gt; Traceback (most recent call last): File &quot;C:\Users\damia\PycharmProjects\logistic_management_tool\try2.py&quot;, line 42, in &lt;module&gt; workshop = phase.get('workshop') AttributeError: 'str' object has no attribute 'get' </code></pre> <p>The problem here is that phase should be a dict, as it is before entering the for loop!</p>
<p>I believe the problem is that the phases of part2 are not defined as a list, hence the second iteration of the <em>&quot;internal&quot;</em> loop is trying to iterate over the keys of the <em>&quot;internal&quot;</em> dictionary.</p> <p>Try adding a pair of square brackets around the dict, like so</p> <pre><code>cycles = {'part1': [{'workshop': 'ws1', 'machining': 10, 'operator': 0, 'placings': 2}, {'workshop': 'ws2', 'machining': 7, 'operator': 3, 'placings': 0}], 'part2': [{'workshop': 'ws3', 'machining': 5, 'operator': 5, 'placings': 0}]} # ^ ^ </code></pre>
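The behaviour is easy to see in isolation: iterating over a list of dicts yields the dicts themselves, while iterating over a bare dict yields its key strings. A defensive alternative to editing the data (an assumption on my part, not part of the original answer) is to normalise bare dicts into one-element lists before the loop:

```python
cycles = {
    'part1': [{'workshop': 'ws1', 'machining': 10}],   # list of dicts
    'part2': {'workshop': 'ws3', 'machining': 5},      # bare dict -- the bug
}

# Iterating the list yields dicts; iterating the bare dict yields key strings.
part1_items = list(cycles['part1'])
part2_items = list(cycles['part2'])

# Defensive normalisation: wrap a bare dict in a list before the inner loop.
workshop_names = []
for part, phases in cycles.items():
    if isinstance(phases, dict):
        phases = [phases]
    for phase in phases:
        workshop_names.append(phase.get('workshop'))  # phase is always a dict now
```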
python|dictionary|for-loop|types
4
1,901,204
67,904,971
Pynput ImportError - Failed to execute script
<p>I'm using the pine project and improving some things in object identification in the game, but when I try to convert to .exe using pyinstaller this message comes up.</p> <p><a href="https://i.stack.imgur.com/9lNOc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9lNOc.png" alt="enter image description here" /></a></p> <p>Link of project pine : <a href="https://github.com/petercunha/Pine" rel="nofollow noreferrer">https://github.com/petercunha/Pine</a></p>
<p>This is related to <code>pynput</code>'s recent changes. See <a href="https://github.com/moses-palmer/pynput/issues/312" rel="nofollow noreferrer">this</a> issue and please try:</p> <pre><code>pyinstaller --console --onefile --hidden-import &quot;pynput.keyboard._win32&quot; --hidden-import &quot;pynput.mouse._win32&quot; pine.py </code></pre> <p>which incorporates the needed backends that pyinstaller can't directly see into the command with hidden imports. <code>--console</code> and <code>--onefile</code> are optional; you can replace them with your previous commands.</p>
python|pyinstaller|pynput
0
1,901,205
67,624,778
Python mkdir with parent and exist_ok not creating final directory
<p>The code is looping over data in an sqlite3 database and creating directories to extract information into; however, the very final directory is never created. It should be DCIM/dir1 DCIM/dir2 DCIM...</p> <pre><code>for row in rows: localfile = row[0] fileloc = row[1] # Skip empty entries if not localfile or not fileloc: continue # Get path location of file, create it, ignore if exists path = Path(fileloc) mkpath = path.parent.absolute() targetpath = os.path.join(os.path.join(os.environ.get(&quot;HOME&quot;), mkpath)) print(f&quot;Creating {targetpath}&quot;) if not os.path.exists(targetpath): #Path(os.path.dirname(targetpath)).mkdir(parents=True, exist_ok=True) os.makedirs(os.path.dirname(targetpath), 0o755, True) </code></pre> <p>I'm sure this is not optimal yet, but the real issue is that $HOME/DCIM is created but not $HOME/DCIM/dir1 etc. The print statement is showing the correct output:</p> <pre><code>Creating /usr/home/jim/DCIM/dir1 Creating /usr/home/jim/DCIM/dir2 Creating /usr/home/jim/DCIM/dir3 Creating /usr/home/jim/DCIM/dir4 </code></pre> <p>But DCIM is empty. I thought that parent might be overwriting, but that doesn't make sense after trying this with $HOME and reading the documentation. I have a feeling it has something to do with the call to path.parent.absolute(), but if I try using os.path.dirname, I get the same results. Sorry if this was already answered; I found many &quot;how to create directories&quot; questions but nothing that covers this issue. Also sorry for any formatting issues - this is my first post to StackOverflow.</p>
<p>Since each value of <code>targetpath</code> is already the absolute path of each directory you want to create, when you call <code>os.path.dirname</code> on it, since the path doesn't end with <code>/</code>, you chop everything to the right of the last <code>/</code> on it (in your case the inner directory).</p> <p>So basically you don't need to call <code>os.path.dirname</code> on it, just do:</p> <pre><code>os.makedirs(targetpath, 0o755, True) </code></pre>
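The effect is easy to reproduce in a temporary directory: because the path does not end in `/`, `os.path.dirname` strips the final component, so only the parent gets created:

```python
import os
import tempfile

base = tempfile.mkdtemp()
target = os.path.join(base, "DCIM", "dir1")

# The buggy version: dirname() strips "dir1", so only .../DCIM is created.
os.makedirs(os.path.dirname(target), 0o755, True)
created_parent_only = (os.path.isdir(os.path.dirname(target))
                       and not os.path.isdir(target))

# The fix: pass the target path itself.
os.makedirs(target, 0o755, True)
created_target = os.path.isdir(target)
```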
python|mkdir
1
1,901,206
66,850,734
Authorization header error: No module named 'certifi', python request API
<p>I have a problem running my script on Linux using Python 2.6; I get an error:</p> <blockquote> <p>/usr/lib/python2.6/site-packages/requests-2.18.4-py2.6.egg/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.22) or chardet (2.2.1) doesn't match a supported version! RequestsDependencyWarning) Traceback (most recent call last): File &quot;mapping_prepaid.py&quot;, line 3, in from requests import post,Session File &quot;/usr/lib/python2.6/site-packages/requests-2.18.4-py2.6.egg/requests/__init__.py&quot;, line 97, in from . import utils File &quot;/usr/lib/python2.6/site-packages/requests-2.18.4-py2.6.egg/requests/utils.py&quot;, line 24, in from . import certs File &quot;/usr/lib/python2.6/site-packages/requests-2.18.4-py2.6.egg/requests/certs.py&quot;, line 15, in from certifi import where</p> </blockquote> <p>My code (running using Python 2.6 on Linux):</p> <pre><code>from __future__ import print_function from contextlib import closing from requests import post,Session #from multiprocessing import Pool from multiprocessing.dummy import Pool from datetime import datetime as dt from time import time from csv import DictReader import sys BASE_URL = 'https://xx.xxx.xxx.xx:443/siebel/v1.0/service/project/' headers = {'Content-Type': 'application/json','Authorization': 'Basic Qkxxxxxxx='} #run a file from command line list_dict = [] filename = sys.argv[1] with open(filename, 'r') as g: f_reader = DictReader(g, delimiter='|', fieldnames=['transactionId','channel','userId','intRef','serviceId','projectId']) for d in f_reader: list_dict.append(dict(d)) def resource_post(data_from_json): timestamp = dt.now().strftime('%Y%m%d %H:%M:%S:%f')[:-3] # timestamp data_from_json[&quot;timestamp&quot;] = timestamp post_data = { &quot;body&quot;: data_from_json, } response = post(BASE_URL, headers=headers, json=post_data) response.raise_for_status() return response.json() def main(): start = time() with open(&quot;output.txt&quot;, &quot;a&quot;) as outf: pool = 
Pool() with closing(pool) as p: # change for resp in p.imap_unordered(resource_post, list_dict): print(resp) print(resp[&quot;responseCode&quot;], resp[&quot;ResponseMessage&quot;], resp[&quot;transactionId&quot;], sep=&quot;|&quot;, file=outf) elapsed = (time() - start) print(&quot;\n&quot;, &quot;time elapsed is :&quot;, elapsed) with open(&quot;complete.txt&quot;, &quot;a&quot;) as outf: print(filename,&quot;completed&quot;,file=outf) if __name__ == '__main__': main() </code></pre> <p>Please help me solve this problem; the error seems to involve my library and my headers.</p>
<p>I recommend you use Python 3, because most libraries don't support Python 2 these days. The traceback says certifi is not installed; try <code>pip install certifi</code>. I don't know much about the certifi library, but it is installed automatically with pip on my Mac.</p>
python|api|curl|request|multiprocessing
0
1,901,207
63,892,064
Data from txt input to list - Python
<p>I have data in a text file:</p> <pre><code>2,58 1,23 0,14 6,58 4,2 1,3 </code></pre> <p>I want to have this data from my text file in a list written in this format:</p> <pre><code>[[2, 58, 1, 23, 0, 14] [6, 58, 4, 2, 1, 3]] </code></pre> <p>I tried this:</p> <pre><code>folder = open('text.txt', encoding = 'utf-8') data = [numbers.strip().replace(',',' ').split(' ') for numbers in folder] folder.close() print(data) </code></pre> <p>But I received a result like this: <code>[['2', '58', '1', '23', '0', '14']['6', '58', '4', '2', '1', '3']]</code></p> <p>If I try to apply <code>int()</code> to the numbers in various places in the list, I receive this error: <em>int() argument must be a string, a bytes-like object or a number, not <code>list</code></em></p> <p>So I just need to change all the strings in this list from <code>str</code> to <code>int</code>; can you help me, please?</p>
<p>Try the <a href="https://stackoverflow.com/questions/13638898/how-to-use-filter-map-and-reduce-in-python-3?r=SearchResults&amp;s=3%7C87.2793"><code>map()</code></a> function. It takes two arguments: the first is a function and the second is an iterable (a list, set, etc.), so in your case the call will look like <code>map(int, yourlist)</code>. Then cast the result to a list, like <code>list(map(int, yourlist))</code>; see <a href="https://stackoverflow.com/questions/1303347/getting-a-map-to-return-a-list-in-python-3-x">this question</a> for why you should cast it in Python 3.</p> <h2>Example</h2> <pre><code>folder = open('text.txt', encoding = 'utf-8') data = [list(map(int, numbers.strip().replace(',',' ').split(' '))) for numbers in folder] folder.close() print(data) </code></pre>
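A self-contained sketch of the fixed version, using `io.StringIO` as a stand-in for the file so it runs without `text.txt`:

```python
from io import StringIO

# Stand-in for open('text.txt'): the two lines from the question.
folder = StringIO("2,58 1,23 0,14\n6,58 4,2 1,3\n")

data = [list(map(int, line.strip().replace(',', ' ').split(' ')))
        for line in folder]

print(data)  # [[2, 58, 1, 23, 0, 14], [6, 58, 4, 2, 1, 3]]
```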
python|list
0
1,901,208
42,778,913
assign values to an array in a loop in tensorflow
<p>I have an array of ones in tensorflow and I want to update its values based on another array in a for loop. Here is the code:</p> <pre><code>def get_weights(labels, class_ratio=0.5): weights = tf.ones_like(labels, dtype=tf.float64) pos_num = class_ratio * 100 neg_num = 100 - class_ratio * 100 for i in range(labels.shape[0]): if labels[i] == 0: weights[i].assign(pos_num/neg_num) else: weights[i].assign(neg_num) return weights </code></pre> <p>and then I have this code to call the above function:</p> <pre><code>with tf.Graph().as_default(): labels = tf.placeholder(tf.int32, (5,)) example_weights = get_weights(labels, class_ratio=0.1) with tf.Session() as sess: np_labels = np.random.randint(0, 2, 5) np_weights = sess.run(example_weights, feed_dict={labels: np_labels}) print("Labels: %r" % (np_labels,)) print("Weights: %r" % (np_weights,)) </code></pre> <p>But when I run it, it gives me this error:</p> <p><code>ValueError: Sliced assignment is only supported for variables</code></p> <p>How can I assign/update values of an array in tensorflow?</p>
<p>A <a href="https://www.tensorflow.org/api_docs/python/tf/Tensor" rel="nofollow noreferrer"><code>tf.Tensor</code></a> in TensorFlow is a read-only value&mdash;in fact, a symbolic expression for computing a read-only value&mdash;so you cannot in general assign values to it. (The main exceptions are <a href="https://www.tensorflow.org/api_docs/python/tf/Variable" rel="nofollow noreferrer"><code>tf.Variable</code></a> objects.) This means that you are encouraged to use "functional" operations to define your tensor. For example, there are several ways to generate the <code>weights</code> tensor functionally:</p> <ul> <li><p>Since <code>weights</code> is defined as an element-wise transformation of <code>labels</code>, you can use <a href="https://www.tensorflow.org/api_docs/python/tf/map_fn" rel="nofollow noreferrer"><code>tf.map_fn()</code></a> to create a new tensor (containing a <a href="https://www.tensorflow.org/api_docs/python/tf/cond" rel="nofollow noreferrer"><code>tf.cond()</code></a> to <a href="https://stackoverflow.com/a/35833133/3574081">replace the <code>if</code> statement</a>) by mapping a function across it:</p> <pre><code>def get_weights(labels, class_ratio=0.5): pos_num = tf.constant(class_ratio * 100) neg_num = tf.constant(100 - class_ratio * 100) def compute_weight(x): return tf.cond(tf.equal(x, 0), lambda: pos_num / neg_num, lambda: neg_num) return tf.map_fn(compute_weight, labels, dtype=tf.float32) </code></pre> <p>This version allows you to apply an arbitrarily complicated function to each element of <code>labels</code>.</p></li> <li><p>However, since the function is simple, cheap to compute, and representable using simple TensorFlow ops, you can avoid using <code>tf.map_fn()</code> and instead use <a href="https://www.tensorflow.org/api_docs/python/tf/where" rel="nofollow noreferrer"><code>tf.where()</code></a>:</p> <pre><code>def get_weights(labels, class_ratio=0.5): pos_num = tf.fill(tf.shape(labels), class_ratio * 100) neg_num = 
tf.fill(tf.shape(labels), 100 - class_ratio * 100) return tf.where(tf.equal(labels, 0), pos_num / neg_num, neg_num) </code></pre> <p>(You could also use <code>tf.where()</code> instead of <code>tf.cond()</code> in the <code>tf.map_fn()</code> version.)</p></li> </ul>
python|tensorflow
3
1,901,209
42,754,967
TypeError: write() takes at least 5 arguments (2 given) - inherited module from v8 to v10 community
<p>I'm migrating some modules from v8 to v10 community</p> <p>This time I have this error when trying to create an invoice:</p> <pre><code>Traceback (most recent call last): File "/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/http.py", line 638, in _handle_exception return super(JsonRequest, self)._handle_exception(exception) File "/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/http.py", line 675, in dispatch result = self._call_function(**self.params) File "/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/http.py", line 331, in _call_function return checked_call(self.db, *args, **kwargs) File "/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/service/model.py", line 119, in wrapper return f(dbname, *args, **kwargs) File "/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/http.py", line 324, in checked_call result = self.endpoint(*a, **kw) File "/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/http.py", line 933, in __call__ return self.method(*args, **kw) File "/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/http.py", line 504, in response_wrap response = f(*args, **kw) File "/home/kristian/odoov10/odoo-10.0rc1c-20161005/odoo/addons/web/controllers/main.py", line 862, in call_kw return self._call_kw(model, method, args, kwargs) File "/home/kristian/odoov10/odoo-10.0rc1c-20161005/odoo/addons/web/controllers/main.py", line 854, in _call_kw return call_kw(request.env[model], method, args, kwargs) File "/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/api.py", line 679, in call_kw return call_kw_model(method, model, args, kwargs) File 
"/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/api.py", line 664, in call_kw_model result = method(recs, *args, **kwargs) File "/home/kristian/odoov10/odoo-10.0rc1c-20161005/odoo/addons/account/models/account_invoice.py", line 342, in create invoice = super(AccountInvoice, self.with_context(mail_create_nolog=True)).create(vals) File "/home/kristian/odoov10/odoo-10.0rc1c-20161005/odoo/addons/mail/models/mail_thread.py", line 227, in create thread = super(MailThread, self).create(values) File "/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/models.py", line 3798, in create record = self.browse(self._create(old_vals)) File "/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/models.py", line 3958, in _create self.recompute() File "/home/kristian/.virtualenvs/odoov10/lib/python2.7/site-packages/odoo-10.0rc1c_20161005-py2.7.egg/odoo/models.py", line 5277, in recompute recs.browse(ids)._write(dict(vals)) File "/home/kristian/odoov10/odoo-10.0rc1c-20161005/odoo/addons/account/models/account_invoice.py", line 356, in _write (reconciled &amp; pre_reconciled).filtered(lambda invoice: invoice.state == 'open').action_invoice_paid() File "/home/kristian/odoov10/odoo-10.0rc1c-20161005/odoo/addons/account/models/account_invoice.py", line 573, in action_invoice_paid return to_pay_invoices.write({'state': 'paid'}) TypeError: write() takes at least 5 arguments (2 given) </code></pre> <p>It doesn't says where the actual error comes in, it must be from the module I'm migrating, this is the method which has <code>'state' : 'paid'</code> in my module:</p> <pre><code>@api.multi def action_invoice_create(self, cr, uid, ids, wizard_brw, inv_brw, context=None): """ If the invoice has control number, this function is responsible for passing the bill to damaged paper @param wizard_brw: nothing for now @param inv_brw: damaged paper """ 
invoice_line_obj = self.env('account.invoice.line') invoice_obj = self.env('account.invoice') acc_mv_obj = self.env('account.move') acc_mv_l_obj = self.env('account.move.line') tax_obj = self.env('account.invoice.tax') invoice = {} if inv_brw.nro_ctrl: invoice.update({ 'name': 'PAPELANULADO_NRO_CTRL_%s' % ( inv_brw.nro_ctrl and inv_brw.nro_ctrl or ''), 'state': 'paid', 'tax_line': [], }) else: raise osv.except_osv( _('Validation error!'), _("You can run this process just if the invoice have Control" " Number, please verify the invoice and try again.")) invoice_obj.write(cr, uid, [inv_brw.id], invoice, context=context) for line in inv_brw.invoice_line: invoice_line_obj.write( cr, uid, [line.id], {'quantity': 0.0, 'invoice_line_tax_id': [], 'price_unit': 0.0}, context=context) tax_ids = self.env('account.tax').search(cr, uid, [], context=context) tax = tax_obj.search(cr, uid, [('invoice_id', '=', inv_brw and inv_brw.id)], context=context) if tax: tax_obj.write(cr, uid, tax[0], {'invoice_id': []}, context=context) tax_obj.create(cr, uid, { 'name': 'SDCF', 'tax_id': tax_ids and tax_ids[0], 'amount': 0.00, 'tax_amount': 0.00, 'base': 0.00, 'account_id': inv_brw.company_id.acc_id.id, 'invoice_id': inv_brw and inv_brw.id}, {}) move_id = inv_brw.move_id and inv_brw.move_id.id if move_id: acc_mv_obj.button_cancel(cr, uid, [inv_brw.move_id.id], context=context) acc_mv_obj.write(cr, uid, [inv_brw.move_id.id], {'ref': 'Damanged Paper'}, context=context) acc_mv_l_obj.unlink(cr, uid, [i.id for i in inv_brw.move_id.line_id]) return inv_brw.id </code></pre> <p>Any ideas about this?</p> <p>Since this is a code which operates on records, I've added the <code>@api.multi</code> decorator, but I'm not sure if this is the problem.</p> <p><strong>EDIT</strong></p> <p>This is the method on <code>account</code> module:</p> <pre><code>@api.multi def action_invoice_paid(self): # lots of duplicate calls to action_invoice_paid, so we remove those already paid to_pay_invoices = 
self.filtered(lambda inv: inv.state != 'paid') if to_pay_invoices.filtered(lambda inv: inv.state != 'open'): raise UserError(_('Invoice must be validated in order to set it to register payemnt.')) if to_pay_invoices.filtered(lambda inv: not inv.reconciled): raise UserError(_('You cannot pay an invoice which is partially paid. You need to reconcile payment entries first.')) return to_pay_invoices.write({'state': 'paid'}) </code></pre> <p>But I'm not sure whether this is a bug or whether the method in my module, which inherits from the <code>account.invoice</code> class, is causing it; I think it is the latter.</p>
<p>You're mixing up old API with new API here:</p> <pre class="lang-py prettyprint-override"><code>@api.multi def action_invoice_create( self, cr, uid, ids, wizard_brw, inv_brw, context=None): </code></pre> <p><code>cr</code>, <code>uid</code>, <code>ids</code> and <code>context</code> are handled by <code>self.env</code> so you don't need to declare them anymore. Odoo will wrap the method to old or new API style automatically, if it's needed.</p> <p>New API style should be:</p> <pre class="lang-py prettyprint-override"><code>@api.multi def action_invoice_create(self, wizard_brw, inv_brw): </code></pre> <p>And one more hint for Odoo 10: It's <code>invoice_line_ids</code> now (finally) ;-)</p>
python|openerp|odoo-10
1
1,901,210
50,322,660
Custom Data Generator for Keras LSTM with TimeSeriesGenerator
<p>So I'm trying to use Keras' <a href="https://keras.io/models/sequential/" rel="noreferrer">fit_generator</a> with a custom data generator to feed into an LSTM network.</p> <h1>What works</h1> <p>To illustrate the problem, I have created a toy example trying to predict the next number in a simple ascending sequence, and I use the Keras <a href="https://keras.io/preprocessing/sequence/#timeseriesgenerator" rel="noreferrer">TimeseriesGenerator</a> to create a Sequence instance:</p> <pre><code>WINDOW_LENGTH = 4 data = np.arange(0,100).reshape(-1,1) data_gen = TimeseriesGenerator(data, data, length=WINDOW_LENGTH, sampling_rate=1, batch_size=1) </code></pre> <p>I use a simple LSTM network:</p> <pre><code>data_dim = 1 input1 = Input(shape=(WINDOW_LENGTH, data_dim)) lstm1 = LSTM(100)(input1) hidden = Dense(20, activation='relu')(lstm1) output = Dense(data_dim, activation='linear')(hidden) model = Model(inputs=input1, outputs=output) model.compile(loss='mse', optimizer='rmsprop', metrics=['accuracy']) </code></pre> <p>and train it using the <code>fit_generator</code> function:</p> <pre><code>model.fit_generator(generator=data_gen, steps_per_epoch=32, epochs=10) </code></pre> <p>And this trains perfectly, and the model makes predictions as expected.</p> <h1>The problem</h1> <p>Now the problem is, in my non-toy situation I want to process the data coming out from the TimeseriesGenerator before feeding the data into the <code>fit_generator</code>. 
As a step towards this, I create a generator function which just wraps the TimeseriesGenerator used previously.</p> <pre><code>def get_generator(data, targets, window_length = 5, batch_size = 32): while True: data_gen = TimeseriesGenerator(data, targets, length=window_length, sampling_rate=1, batch_size=batch_size) for i in range(len(data_gen)): x, y = data_gen[i] yield x, y data_gen_custom = get_generator(data, data, window_length=WINDOW_LENGTH, batch_size=1) </code></pre> <p>But now the strange thing is that when I train the model as before, but using this generator as the input,</p> <pre><code>model.fit_generator(generator=data_gen_custom, steps_per_epoch=32, epochs=10) </code></pre> <p>There is no error but the training error is all over the place (jumping up and down instead of consistently going down like it did with the other approach), and the model doesn't learn to make good predictions.</p> <p>Any ideas what I'm doing wrong with my custom generator approach?</p>
<p>It could be because the object type is changed from <code>Sequence</code> (which is what a <code>TimeseriesGenerator</code> is) to a generic generator, and the <code>fit_generator</code> function treats these differently. A cleaner solution would be to inherit from the class and override the processing bit:</p> <pre><code>class CustomGen(TimeseriesGenerator): def __getitem__(self, idx): x, y = super().__getitem__(idx) # do processing here return x, y </code></pre> <p>And use this class like before, as the rest of the internal logic will remain the same.</p>
python|keras|lstm
10
1,901,211
50,633,189
Why is Python treating lists like this when defining a function?
<p>I was making a code, and variables started to behave strangely and get assigned to things which I thought they shouldn't. So, I decided to reduce the situation to minimal complexity in order to solve my doubts, and this is what happened:</p> <p>The following code:</p> <pre><code>a = [2] def changeA(c): d = c d[0] = 10 return True changeA(a) print(a) </code></pre> <p>prints '[10]'. This doesn't make sense to me, since I never assigned the list "a" to be anything after the first assignment. Inside the function changeA, the local variable d is assigned to be the input of the function, <em>and it seems to me that this assignment is happening both ways, and even changing the "outside"</em>. If so, why? If not, why is this happening?</p> <p>I've also noticed that the code </p> <pre><code>a = [2] def changeA(c): d = list(c) d[0] = 10 return True changeA(a) print(a) </code></pre> <p>behaves normally (i.e., as I would expect).</p> <p>EDIT: This question is being considered a duplicate of <a href="https://stackoverflow.com/questions/240178/list-of-lists-changes-reflected-across-sublists-unexpectedly">this one</a>. I don't think this is true, since it is also relevant here that the locality character of procedures inside a function is being violated.</p>
<p>Python variables are references to objects, and some objects are mutable. Numbers are not, neither are strings nor tuples, but lists, sets and dicts are.</p> <p>Let us look at the following Python code</p> <pre><code>a = [2] # ok a is a reference to a mutable list b = a # b is a reference to the exact same list b[0] = 12 # changes the value of first element of the unique list print(a) # will display [12] </code></pre>
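To keep the outer list unchanged, copy it before mutating. A short demonstration of both behaviours side by side:

```python
a = [2]

def change_alias(c):
    d = c          # d refers to the same list object as the argument
    d[0] = 10

def change_copy(c):
    d = list(c)    # d is a new, independent list
    d[0] = 10

change_copy(a)
print(a)  # [2]  -- only the copy was modified

change_alias(a)
print(a)  # [10] -- both names referred to one list
```

`d = list(c)` (or `c[:]`, or `c.copy()`) makes a shallow copy, which is enough here because the elements are immutable numbers.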
python|python-3.x|list|variables
2
1,901,212
26,773,458
module not setting main files variable content
<p>Using pygame, I have made a main menu with a few buttons on it. I can detect when a button is pressed and do something about it.</p> <p>However, the code that controls what happens when a button is pressed is in another file (using getattr) and this seems to be causing some issues.</p> <p>I am using the variable <code>menu_open</code> to control when things relating to the menu should be done. When the game starts up and after the one-off dev warning shows (works fine), it is set to True. Everything works as intended until I click on my <code>new game</code> button. This should just create a blank screen. Nothing happens.</p> <p>I have discovered that <code>menu_open</code> is still <code>True</code>. What appears to be happening is that the code that controls the <code>new game</code> button is in another file and for reasons I cannot understand seems to be working with a different version of <code>menu_open</code> than my main file is. (It's not setting my main file's <code>menu_open</code> to <code>False</code>, although its testing print statement prints <code>False</code>.)</p> <p>Code that controls what happens when the button is pressed:</p> <pre><code>def new_game(): print('starting a new game') import main main.menu_open=False print(2,main.menu_open) </code></pre> <p>Start of my program:</p> <pre><code>import pygame,commands #line 1 done = False menu_open= False #declaration of menu_open at start of program game_playing = False </code></pre> <p>Code that updates the menu (should create a white screen when <code>menu_open</code> is False):</p> <pre><code>def display_frame(self,screen): global menu_open print(1,menu_open) screen.fill(WHITE) if menu_open: screen.blit(menu_image,[0,0]) for button in button_list: button.draw() pygame.display.flip() </code></pre> <p>Code that causes the button control:</p> <pre><code>def run_logic(self): #worth noting this is called right before display_frame() global mouse_pos,mouse_press mouse_pos = pygame.mouse.get_pos() 
mouse_press = pygame.mouse.get_pressed() for button in button_list: button.check() #runs the following: def check(self): if self.hovered: if mouse_press[0] == True: try: command_to_call = getattr(commands,self.command) command_to_call() except: print('[DEV]: invalid command') </code></pre> <p>Result of the print statements:</p> <pre><code>1 True # button not pressed 1 True # True here is my main files 'menu_open' 1 True 1 True 1 True 1 True starting a new game #button pressed 2 False #false is the other files 'menu open' 1 True # True here is my main files 'menu_open' starting a new game 2 False 1 True starting a new game 2 False 1 True starting a new game 2 False 1 True #button released, menu still normal 1 True 1 True 1 True </code></pre> <p>I'm not very experienced with multi-file programming, so any help is appreciated. It may also be worth noting that my IDE (PyScripter) bugs out a lot with pygame. The button control has worked fine so far; I have made a quit button using it.</p> <p>If you need any more code from my program feel free to ask :) Also, if my code is fine and this is just a bug with Python/PyScripter/pygame, please say so.</p>
<p>The correct way to fix this is to move the variable to a separate module. But I'll explain what you're doing wrong regardless.</p> <p>Presumably the "start of [...] program" is in a file called <code>main.py</code>. And normally when you want access to a module you import it by path/filename, in this case <code>main</code>. However, the first script invoked by the interpreter is <em>not</em> named after its path/filename, but instead is <strong><em>always</em></strong> named <code>__main__</code> regardless of anything else. So the correct way to import it, <em>not that you should ever do it</em>, is to use <code>import __main__</code> and then access it via that name.</p>
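One way to see why a separate state module works: Python caches modules in `sys.modules`, so every `import` of a given name returns the same module object, and attributes set through one importer are visible to all others. A runnable sketch (the dynamically created module stands in for a real `state.py` file that both the main file and `commands.py` would import):

```python
import sys
import types

# In a real project this would be a file named state.py containing:
#     menu_open = False
# Here it is created dynamically so the sketch runs as one script.
state_module = types.ModuleType("state")
state_module.menu_open = True
sys.modules["state"] = state_module

import state as state_from_main      # what the main file would see
import state as state_from_commands  # what commands.py would see

state_from_commands.menu_open = False  # the button handler flips the flag
print(state_from_main.menu_open)       # False: both names share one module
```

This avoids the `__main__` vs `main` duality entirely, because neither file ever imports the entry-point script.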
python|module|pygame
0
1,901,213
26,592,540
Delegating command line arguments to another commands
<p>I need some Python module to support forwarding command line arguments to other commands.</p> <p><code>argparse</code> allows parsing arguments easily, but doesn't provide any "deparsing" tool.</p> <p>I could just forward <code>os.sys.argv</code> if I didn't need to delete or change the values of any of them, but I do.</p> <p>I can imagine a class that just operates on an array of strings the whole time, without losing any information, but I failed to find one.</p> <p>Does somebody know of such a tool, or has anyone met a similar problem and found another nice way to handle it?</p> <p>(Sorry for my English :()</p>
<p>If you use the <code>subprocess</code> module to run the commands with delegated arguments you can specify your command as a list of strings that won't be subject to shell parsing (as long as you don't use <code>shell=True</code>). You therefore don't need to bother about quoting concerns the same way you would if you were reconstructing a command line. See <a href="https://docs.python.org/2/library/subprocess.html#frequently-used-arguments" rel="nofollow">https://docs.python.org/2/library/subprocess.html#frequently-used-arguments</a> for further details. </p>
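A sketch of that approach: parse only the options this wrapper cares about with `parse_known_args`, and forward the untouched remainder to `subprocess` as a list, with no quoting or "deparsing" needed. The echo command run via `sys.executable -c` is just a placeholder for the real target program:

```python
import argparse
import subprocess
import sys

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true")
# parse_known_args consumes recognised options and returns the rest
# as an untouched list of strings
known, rest = parser.parse_known_args(["--verbose", "-n", "3", "input.txt"])

# forward `rest` unchanged; shell=False means no quoting concerns
cmd = [sys.executable, "-c", "import sys; print(sys.argv[1:])"] + rest
out = subprocess.run(cmd, capture_output=True, text=True).stdout
print(out.strip())  # ['-n', '3', 'input.txt']
```

Arguments to drop or rewrite can simply be filtered out of `rest` before building the command list.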
python|argparse|python-module
0
1,901,214
45,241,733
Python problems with division
<p>I've been attempting to make a calculator where you enter a number then the calculator divides it by 10 then times it by 4 to work out 40%, then prints the final number.</p> <pre><code>a = input("Enter amount: ") b = a / 10 c = b * 4 print(c) </code></pre> <p>When I run the code I receive this error message: TypeError: unsupported operand type(s) for /: 'str' and 'int'</p>
<pre><code>a = int(input("Enter amount: ")) b = a / 10 c = b * 4 print(c) </code></pre> <p><code>input()</code> returns a <code>string</code>, so you need to convert it to an <code>int</code> before doing arithmetic with it.</p>
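As a side note, dividing by 10 and then multiplying by 4 is the same as multiplying by 0.4, so the percentage step can be collapsed into one expression (a small sketch, with the amount passed in instead of read from `input()`):

```python
def forty_percent(amount):
    # amount / 10 * 4 is equivalent to amount * 0.4
    return amount * 0.4

print(forty_percent(50))  # 20.0
```

The same conversion rule applies either way: convert the user's string to a number first, then do the arithmetic.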
python|calculator
3
1,901,215
61,449,314
Python version problem/ Unable to import modules
<p>I'm having trouble importing <code>keras</code>, <code>tensorflow</code> and <code>pyspark</code> even though I have used pip3 to install them. The version that I installed it with is Python 3.8.2. However, when I checked the Python version that Anaconda is running on, it is 3.7.7. Is there anyway I can install and import these packages properly given this problem? </p> <p><img src="https://i.stack.imgur.com/3zXsA.png" alt="enter image description here"></p> <p><img src="https://i.stack.imgur.com/rXZoi.png" alt="enter image description here"></p>
<p>It seems you are installing keras and tensorflow into a different Python version. If you use <code>pip3 install</code>, it installs into Python 3.8.2, which is in a different environment from Anaconda.</p> <p>You can install these packages into your Anaconda environment (which runs Python 3.7.7) like this:</p> <pre><code>conda install tensorflow conda install keras conda install pyspark </code></pre> <p>You can restart and import the packages after the installation.</p>
python|tensorflow|keras|conda
1
1,901,216
60,745,334
If text is contained in another dataframe then flag row with a binary designation
<p>I'm working on mining survey data. I was able to flag the rows for certain keywords:</p> <pre><code>survey['Rude'] = survey['Comment Text'].str.contains('rude', na=False, regex=True).astype(int) </code></pre> <p>Now, I want to flag any rows containing names. I have another dataframe that contains common US names. Here's what I thought would work, but it is not flagging any rows, and I have validated that names do exist in the 'Comment Text'</p> <pre><code>for row in survey: for word in survey['Comment Text']: survey['Name'] = 0 if word in names['Name']: survey['Name'] = 1 </code></pre>
<p>You are not looping through the series correctly. <code>for row in survey:</code> loops through the column names in <code>survey</code>. <code>for word in survey['Comment Text']:</code> loops though the comment strings. <code>survey['Name'] = 0</code> creates a column of all <code>0s</code>.</p> <p>You could use <a href="https://stackoverflow.com/questions/18079563/finding-the-intersection-between-two-series-in-pandas">set intersections</a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">apply()</a>, to avoid all the looping through rows:</p> <pre><code> survey = pd.DataFrame({'Comment_Text':['Hi rcriii', 'Hi yourself stranger', 'say hi to Justin for me']}) names = pd.DataFrame({'Name':['rcriii', 'Justin', 'Susan', 'murgatroyd']}) s2 = set(names['Name']) def is_there_a_name(s): s1 = set(s.split()) if len(s1.intersection(s2))&gt;0: return 1 else: return 0 survey['Name'] = survey['Comment_Text'].apply(is_there_a_name) print(names) print(survey) Name 0 rcriii 1 Justin 2 Susan 3 murgatroyd Comment_Text Name 0 Hi rcriii 1 1 Hi yourself stranger 0 2 say hi to Justin for me 1 </code></pre> <p>As a bonus, return <code>len(s1.intersection(s2))</code> to get the number of matches per line.</p>
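The bonus suggestion (returning the size of the intersection instead of a 0/1 flag) can be sketched in plain Python first, to show what the per-comment counts look like before wiring it into `apply()`; the names and comments below mirror the example data above:

```python
names = {"rcriii", "Justin", "Susan", "murgatroyd"}

def count_names(comment, known_names=names):
    # number of words in the comment that are known names
    return len(set(comment.split()) & known_names)

comments = ["Hi rcriii", "Hi yourself stranger", "say hi to Justin for me"]
print([count_names(c) for c in comments])  # [1, 0, 1]
```

Note this matches whole whitespace-separated words only; punctuation attached to a name (e.g. `"Justin,"`) would need stripping first.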
python|pandas
0
1,901,217
57,868,833
Splitting a column and extracting from it
<p>I have a dataframe all_data, where the address column's head has been pasted below.</p> <pre><code>all_data['Address'].head() 0 Brocklebank Ground, Torver, LA21 8BS 1 23 Leigh Street, Aspull, WN2 1QQ 2 Dewsland, Ponthenry Road, Pontyates, SA15 5TY 3 1 Croft Close, Wainfleet, PE24 4DT 4 3 Landor Avenue, Killay, SA2 7BP Name: Address, dtype: object </code></pre> <p>I am attempting to extract just the postcode to put it into a new column:</p> <pre><code>all_data['Postcode'] = all_data['Address'].str.split(',')[-1] </code></pre> <p>I am receiving the following error message:</p> <pre><code>ValueError: Length of values does not match length of index </code></pre> <p>What should I be doing instead?</p>
<p>Note that most <code>Series</code> vectorised string operations must be preceded by the <a href="https://pandas.pydata.org/pandas-docs/version/0.25/reference/api/pandas.Series.str.html" rel="nofollow noreferrer"><code>str</code></a> accessor, which is also the case when taking slices of strings. So you're missing a <code>str</code> after the <code>str.split</code> to be able to slice the lists. Also keep the <code>','</code> separator: the default whitespace split would cut a two-part postcode like <code>LA21 8BS</code> in half. A final <code>str.strip</code> removes the leading space left over after the comma.</p> <pre><code>df['Address'].str.split(',').str[-1].str.strip() 0 LA21 8BS 1 WN2 1QQ 2 SA15 5TY 3 PE24 4DT 4 SA2 7BP Name: Address, dtype: object </code></pre>
python|python-3.x|pandas
1
1,901,218
56,362,642
Python: Why does Python TCP-client receive data so slowly on different compared to same PC?
<p>I have two Python scripts, one TCP-server sending data (at a rate of 1/256 times a second) and a TCP-client receiving data. In the client script, I print the length of the received data. I sent the string "5.8" from the server (thus data of length 3).</p> <p>When client and server are on the same machine: The length of data received is always 3. When client and server are on different machines in the same local network: The length of data differs but is around 39 (13 times the data sent).</p> <p>Is there a possible explanation for this discrepancy?</p> <p>I think the network adding this much latency is unlikely, because the command line "ping" prints at most 2 ms latency with the largest amount of data.</p> <p>IMPORTANT: I'm using Python 2.7.</p> <pre class="lang-py prettyprint-override"><code>import socket def server(): host = 'localhost' # replace with IP address in case client is on another machine port = 5051 s = socket.socket() s.bind((host, port)) s.listen(1) client_socket, adress = s.accept() while True: client_socket.send('a'.encode()) client_socket.close() if __name__ == '__main__': server() </code></pre> <pre class="lang-py prettyprint-override"><code>import socket, random, time def client(): host = 'localhost' # replace with IP address in case client is on another machine port = 5051 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s_err = s.connect_ex((host, port)) print(s_err) while True: data = s.recv(2048) print(len(data)) # returns different values depending on client location s.close() if __name__ == '__main__': client() </code></pre>
<blockquote> <p>Is there a possible explanation for this discrepancy?</p> </blockquote> <p>TCP doesn't have a concept of a message. Data sent using multiple <code>send</code> calls can be received with one <code>recv</code> call and vice versa.</p> <p>TCP is a stream where you need to delimit the messages yourself, so that the reader can determine message boundaries. Most common ways:</p> <ol> <li>Prefix messages with fixed message length.</li> <li>Read until a message delimiter is encountered, e.g. <code>\n</code>.</li> </ol>
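A sketch of option 2 (newline-delimited messages): the reader buffers bytes until it sees `\n`, so each logical message is recovered intact no matter how TCP splits or merges the segments on the wire. `socket.socketpair()` stands in here for a real client/server connection:

```python
import socket

def read_messages(sock):
    """Yield newline-delimited messages from a stream socket."""
    buffer = b""
    while True:
        chunk = sock.recv(2048)
        if not chunk:          # peer closed the connection
            break
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            yield line.decode()

left, right = socket.socketpair()
left.sendall(b"5.8\n")
left.sendall(b"5.9\n5.10\n")   # two messages in one send call
left.close()

print(list(read_messages(right)))  # ['5.8', '5.9', '5.10']
right.close()
```

With this in place, the sender just has to append `\n` to every message; the count seen by the reader then always matches the number of messages sent, on the same machine or across the network.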
python|networking|tcp
0
1,901,219
56,027,512
What does this line of code read in plain English?
<p>I'm new to python and Django. I've been playing around with the Django polls tutorial and all is going well but I'm still getting used to the syntax.</p> <p>What does this line read in plain English?</p> <pre><code>return now - datetime.timedelta(days=1) &lt;= self.pub_date &lt;= now </code></pre> <p>The part I'm having trouble with is the &lt;= operator. I'm aware this usually means less than or equal to but I've never seen them being used in succession such as above.</p>
<p><strong>In short</strong>: it checks if <code>self.pub_date</code> is between 24 hours before <code>now</code> and <code>now</code>.</p> <p>Python allows <a href="https://docs.python.org/3/reference/expressions.html#comparisons" rel="noreferrer"><em>operator chaining</em> [Python-doc]</a>. It means that if you write <code>x &lt;= y &lt;= z</code>, that is short for <code>x &lt;= y and y &lt;= z</code>, except that <code>y</code> is evaluated only once.</p> <p>You thus can read this as:</p> <pre><code>return (now - datetime.timedelta(days=1)) &lt;= self.pub_date and self.pub_date &lt;= now</code></pre> <p>Now <code>now</code> is likely the current timestamp, so that means that <code>now - datetime.timedelta(days=1)</code> is 24 hours before <code>now</code>. So in short it checks if <code>self.pub_date</code> is between 24 hours before <code>now</code> and <code>now</code> (both inclusive). If that holds it returns <code>True</code>, otherwise it returns <code>False</code>.</p> <p>Likely - although we can not check that - <code>now</code> is the current timestamp, so it means if <code>self.pub_date</code> is between yesterday (same time) and the current timestamp.</p>
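The "evaluated only once" detail can be made visible with a function that records its calls (a small illustrative sketch):

```python
calls = []

def middle():
    # stands in for self.pub_date; logs each evaluation
    calls.append("called")
    return 5

result = 1 <= middle() <= 10
print(result, calls)  # True ['called']  -- middle() ran once, not twice
```

If the chain were literally expanded to `1 <= middle() and middle() <= 10`, the list would contain two entries instead.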
python|django
6
1,901,220
18,672,824
error in a python script to convert Youtube XML timed text into srt
<p>it is possible to get XML code of Youtube closed caption in this URL: </p> <pre><code>http://video.google.com/timedtext?hl=en&amp;lang=en&amp;v=VIDEO_ID </code></pre> <p>which VIDEO_ID is youtube video ID. to convert that code into srt file I used this script: </p> <p><a href="https://gist.github.com/golive/129171" rel="nofollow noreferrer">https://gist.github.com/golive/129171</a></p> <p>which is a python code. for running python code I used </p> <pre><code>C:\Python27\python youtube2srt.py </code></pre> <p>according to this: </p> <p><a href="https://stackoverflow.com/questions/9493086/python-how-do-you-run-a-py-file">Python - How do you run a .py file?</a></p> <p>I copied that code into a file named youtube2srt.py. I saved XML codes of that page to a file as youtube_xml.xml.</p> <p>when I run it I receive this error: </p> <p><img src="https://i.stack.imgur.com/dM1Uq.jpg" alt="enter image description here"></p> <p>when I delete two first lines &amp; run it, I get this error: </p> <p><img src="https://i.stack.imgur.com/oj7ng.jpg" alt="enter image description here"></p> <p>I have almost the same problem with this code: </p> <p><a href="https://gist.github.com/gorlum0/1290835" rel="nofollow noreferrer">https://gist.github.com/gorlum0/1290835</a></p> <p>what's the problem?!</p>
<p>The first file you try to run, youtube2srt.py, is actually youtube2srt.rb, a Ruby file, not Python.</p> <p>The second probably requires you to install the package BeautifulSoup, which is not included in the standard Python library.</p>
python|xml|subtitle
2
1,901,221
18,689,810
What is the intrinsic name of a function?
<p>I understand that intrinsic names are assigned to refer to functions when these functions refer to other functions. E.g., with <code>f = max</code>, is <code>f</code> the intrinsic name, or <code>max</code>?</p>
<p>If you mean the <code>__name__</code> property, it's the name that was used in the <code>def</code> statement that created the function.</p> <pre><code>Python 3.3.1 (v3.3.1:d9893d13c628, Apr 6 2013, 20:25:12) [MSC v.1600 32 bit (In tel)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; def f (): ... return 0 ... &gt;&gt;&gt; f.__name__ 'f' &gt;&gt;&gt; g = f &gt;&gt;&gt; g.__name__ 'f' &gt;&gt;&gt; </code></pre> <p>Built-in functions have <code>__name__</code> properties matching their preset names.</p> <pre><code>&gt;&gt;&gt; max.__name__ 'max' &gt;&gt;&gt; h = max &gt;&gt;&gt; h.__name__ 'max' &gt;&gt;&gt; </code></pre> <p>Functions that were created by some other means than a <code>def</code> statement may have default values for the <code>__name__</code> property.</p> <pre><code>&gt;&gt;&gt; (lambda: 0).__name__ '&lt;lambda&gt;' &gt;&gt;&gt; </code></pre>
python|function|python-3.x
6
1,901,222
69,425,010
pyinstaller error: OSError: [WinError 6] The handle is invalid
<p>This File gets the wifi passwords using the terminal command <code>netsh wlan show profiles</code> I used pyinstaller to create a few .exe before and they worked jut fine.</p> <p>The Code:</p> <pre><code>import subprocess import time import sys import re command_output = subprocess.run([&quot;netsh&quot;, &quot;wlan&quot;, &quot;show&quot;, &quot;profiles&quot;], capture_output = True).stdout.decode() profile_names = (re.findall(&quot;All User Profile : (.*)\r&quot;, command_output)) wifi_list = [] if len(profile_names) != 0: for name in profile_names: wifi_profile = {} profile_info = subprocess.run([&quot;netsh&quot;, &quot;wlan&quot;, &quot;show&quot;, &quot;profile&quot;, name], capture_output = True).stdout.decode() if re.search(&quot;Security key : Absent&quot;, profile_info): continue else: wifi_profile[&quot;ssid&quot;] = name profile_info_pass = subprocess.run([&quot;netsh&quot;, &quot;wlan&quot;, &quot;show&quot;, &quot;profile&quot;, name, &quot;key=clear&quot;], capture_output = True).stdout.decode() password = re.search(&quot;Key Content : (.*)\r&quot;, profile_info_pass) if password == None: wifi_profile[&quot;password&quot;] = None else: wifi_profile[&quot;password&quot;] = password[1] wifi_list.append(wifi_profile) for x in range(len(wifi_list)): print(wifi_list[x]) time.sleep(5) print(&quot;No more WiFi Profiles Found&quot;) time.sleep(3) sys.exit() </code></pre> <p>This is the Error I get when Running the .exe:</p> <pre><code>Traceback (most recent call last): File &quot;GetWiFiPassWord.py&quot;, line 6, in &lt;module&gt; File &quot;subprocess.py&quot;, line 453, in run File &quot;subprocess.py&quot;, line 709, in __init__ File &quot;subprocess.py&quot;, line 1006, in _get_handles OSError: [WinError 6] The handle is invalid </code></pre>
<p>This error apparently is thrown, because of this:</p> <blockquote> <p>Line 1117 in subprocess.py is: <code>p2cread = _winapi.GetStdHandle(_winapi.STD_INPUT_HANDLE)</code></p> <p>The service processes do not have a STDIN associated with them (TBC).</p> </blockquote> <p>This problem can be avoided by supplying a file or null device as the stdin argument to <code>popen</code>.</p> <blockquote> <p>In <strong>Python 3.x</strong>, you can simply pass <code>stdin=subprocess.DEVNULL</code>. E.g.</p> <pre><code>subprocess.Popen( args=[self.exec_path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, stdin=subprocess.DEVNULL) </code></pre> <p>In <strong>Python 2.x</strong>, you need to get a filehandler to null, then pass that to popen:</p> <pre><code>devnull = open(os.devnull, 'wb') subprocess.Popen( args=[self.exec_path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, stdin=devnull) </code></pre> </blockquote> <p>Reference: <a href="https://stackoverflow.com/questions/40108816/python-running-as-windows-service-oserror-winerror-6-the-handle-is-invalid/40108817#40108817">OSError: (WinError 6) The handle is Invalid</a></p> <hr /> <p>In your problem:</p> <pre class="lang-py prettyprint-override"><code>subprocess.run([&quot;netsh&quot;, &quot;wlan&quot;, &quot;show&quot;, &quot;profiles&quot;], capture_output = True, stdin=subprocess.DEVNULL).stdout.decode() </code></pre>
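A portable sketch of the fix (the child command run via `sys.executable -c` is just a placeholder for the `netsh` call, which only exists on Windows): the process is launched with `stdin=subprocess.DEVNULL`, so `Popen` never has to fetch a standard-input handle that may be invalid under a windowed pyinstaller build:

```python
import subprocess
import sys

out = subprocess.run(
    [sys.executable, "-c", "print('profiles')"],
    capture_output=True,
    stdin=subprocess.DEVNULL,  # avoids WinError 6 when no console is attached
).stdout.decode()
print(out.strip())  # profiles
```

In the original script, adding `stdin=subprocess.DEVNULL` to each of the three `subprocess.run` calls should be enough.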
python|windows|subprocess|pyinstaller
1
1,901,223
69,403,190
Mypy returns Error "Unexpected keyword argument" for subclass of a decorated class with attrs package
<p>I have two decorated classes using <a href="https://pypi.org/project/attrs/" rel="nofollow noreferrer">attrs package</a> as follows:</p> <pre class="lang-py prettyprint-override"><code>@attr.s(kw_only=True) class Entity: &quot;&quot;&quot; base class of all entities &quot;&quot;&quot; entity_id = attr.ib(type=str) # ... @attr.s(kw_only=True) class Customer(Entity): customer_name = attr.ib(type=Name) # ... </code></pre> <p>I get <code>Unexpected keyword argument &quot;entity_id&quot; for &quot;Customer&quot;</code> for code like this:</p> <pre class="lang-py prettyprint-override"><code>def register_customer(customer_name: str): return Customer( entity_id=unique_id_generator(), customer_name=Name(full_name=customer_name), ) </code></pre> <p>So how can I make <strong>Mypy</strong> aware of the <code>__init__</code> method of my parent class. I should mention that the code works perfectly and there is (at least it seems) no runtime error.</p>
<p>Your code is correct and should work. If I run the following simplified version:</p> <pre class="lang-py prettyprint-override"><code>import attr @attr.s(kw_only=True) class Entity: &quot;&quot;&quot; base class of all entities &quot;&quot;&quot; entity_id = attr.ib(type=str) # ... @attr.s(kw_only=True) class Customer(Entity): customer_name = attr.ib(type=str) def register_customer(customer_name: str) -&gt; Customer: return Customer( entity_id=&quot;abc&quot;, customer_name=customer_name, ) # ... </code></pre> <p>through Mypy 0.910 with attrs 21.2.0 on Python 3.9.7 I get:</p> <pre><code>Success: no issues found in 1 source file </code></pre> <hr /> <p>My theories:</p> <ul> <li>Old Mypy (there's a lot of changes all times, sometimes it takes time for the attrs plugin to be updated with new features).</li> <li>Old attrs (we try to keep up with the changes in attrs and the features provided by Mypy).</li> <li>Python 2 (since you're using the old syntax). <code>kw_only</code> used to be Python 3-only and I wouldn't be surprised if mypy has some resident logic around it?</li> </ul>
python|type-hinting|mypy|python-typing|python-attrs
3
1,901,224
69,619,481
How to remove duplicate values from a list of Python dictionaries where order is preserved?
<p>I am trying to turn a list of max 30 Python dictionaries with duplicate values into a summarised list. A further complication is that the order of the list is by date/time oldest on top and I need the summarised list to be the newest occurrence of the dictionary.</p> <pre><code>data = [ { &quot;client&quot;: { &quot;id&quot;: &quot;12345&quot; }, &quot;name&quot;: &quot;John&quot;, &quot;date&quot;: &quot;18-10-2021 12:31:08&quot; }, { &quot;client&quot;: { &quot;id&quot;: &quot;12345&quot; }, &quot;name&quot;: &quot;John&quot;, &quot;date&quot;: &quot;18-10-2021 12:31:19&quot; }, { &quot;client&quot;: { &quot;id&quot;: &quot;12345&quot; }, &quot;name&quot;: &quot;John&quot;, &quot;date&quot;: &quot;18-10-2021 12:31:25&quot; }, { &quot;client&quot;: { &quot;id&quot;: &quot;23456&quot; }, &quot;name&quot;: &quot;Simon&quot;, &quot;date&quot;: &quot;18-10-2021 12:32:48&quot; }, { &quot;client&quot;: { &quot;id&quot;: &quot;23456&quot; }, &quot;name&quot;: &quot;Simon&quot;, &quot;date&quot;: &quot;18-10-2021 12:33:12&quot; }, { &quot;client&quot;: { &quot;id&quot;: &quot;34567&quot; }, &quot;name&quot;: &quot;Bob&quot;, &quot;date&quot;: &quot;18-10-2021 12:34:15&quot; }, { &quot;client&quot;: { &quot;id&quot;: &quot;34567&quot; }, &quot;name&quot;: &quot;Bob&quot;, &quot;date&quot;: &quot;18-10-2021 12:34:34&quot; } ] summarised_ids = [] summarised_messages = [] for message in data[::-1]: if message['client']['id'] not in summarised_ids: summarised_ids.append(message['client']['id']) for message in data[::-1]: if message['client']['id'] in summarised_ids: summarised_messages.append(message) summarised_ids.remove(message['client']['id']) for message in summarised_messages: print(message) {'client': {'id': '34567'}, 'name': 'Bob', 'date': '18-10-2021 12:34:34'} {'client': {'id': '23456'}, 'name': 'Simon', 'date': '18-10-2021 12:33:12'} {'client': {'id': '12345'}, 'name': 'John', 'date': '18-10-2021 12:31:25'} </code></pre> <p>Currently it's very verbose and I don't 
know how I can better reduce these steps:</p> <ol> <li><p>Reverse iterate through the original list and add the ID to new summarised_ids list if it's not there</p> </li> <li><p>Reverse iterate through the original list again and append the message if the ID is in the summarised_ids list</p> </li> <li><p>Ignore message if the ID is already there</p> </li> <li><p>Print the summarised_messages list</p> </li> </ol>
<p>Try using a <a href="https://docs.python.org/3/library/stdtypes.html#typesmapping" rel="nofollow noreferrer">dictionary</a> to deduplicate the list:</p> <pre><code>result = list({ d[&quot;client&quot;][&quot;id&quot;] : d for d in data}.values()) for row in result: print(row) </code></pre> <p><strong>Output</strong></p> <pre><code>{'client': {'id': '12345'}, 'name': 'John', 'date': '18-10-2021 12:31:25'} {'client': {'id': '23456'}, 'name': 'Simon', 'date': '18-10-2021 12:33:12'} {'client': {'id': '34567'}, 'name': 'Bob', 'date': '18-10-2021 12:34:34'} </code></pre> <p>To match your exact output, you could do:</p> <pre><code>result = list({d[&quot;client&quot;][&quot;id&quot;]: d for d in data}.values())[::-1] </code></pre>
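The reason the newest occurrence wins: when a dict comprehension sees the same key more than once, the later value overwrites the earlier one, while the key keeps its original insertion position. A minimal illustration with id/time pairs shaped like the data above:

```python
pairs = [("12345", "12:31:08"), ("12345", "12:31:19"),
         ("23456", "12:32:48"), ("12345", "12:31:25")]

# later duplicates overwrite earlier values; key order is first-seen order
deduped = {key: value for key, value in pairs}
print(deduped)  # {'12345': '12:31:25', '23456': '12:32:48'}
```

Since the input list is ordered oldest-first, "last write wins" is exactly "keep the newest occurrence per id".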
python
1
1,901,225
69,514,648
Check if message is number
<pre><code>@client.event async def on_message(message): counter = 1 if message.channel.id == 895739649425825803 and not message.author.bot: if message.content == str(counter): await message.add_reaction(&quot;✅&quot;) counter += 1 print(counter) else: # Here I want to check if message.content is a number. await message.add_reaction(&quot;❌&quot;) counter = 1 </code></pre> <p>I want to create a game in a text channel. counter starts with 1 and the users have to count. If it's not right the counter resets to 1. But the bot also resets if someone sends a message which is not a number. How can I check if it's a number?</p>
<p>Use <code>try-except</code> to convert <code>message.content</code> to an <code>int</code>.</p> <p>If <code>message.content</code> can't be converted to an <code>int</code>, the counter will be set to <code>1</code>:</p> <pre><code>@client.event async def on_message(message): counter = 1 if message.channel.id == 895739649425825803 and not message.author.bot: if message.content == str(counter): await message.add_reaction(&quot;✅&quot;) counter += 1 print(counter) else: # Here I want to check if message.content is a number. try: counter = int(message.content) except ValueError: counter = 1 await message.add_reaction(&quot;❌&quot;) </code></pre>
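The try/except check can also be factored into a small helper function, which keeps the event handler readable (a sketch with a hypothetical helper name):

```python
def is_count_number(text):
    """True if text parses as an int, i.e. a valid counting message."""
    try:
        int(text)
        return True
    except ValueError:
        return False

print(is_count_number("42"), is_count_number("hello"))  # True False
```

Inside `on_message` you could then write `if not is_count_number(message.content): return` to ignore non-number chatter before deciding whether to reset the counter.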
python|discord.py
0
1,901,226
57,707,887
Keras CNN's input dimension error: expected 4-dim, but found 3-dim
<p>I have a question about coding CNN using Keras.</p> <p>The shape of input data(adj) is (20000, 50, 50); 20000 is the number of samples, 50 x 50 are 2-D data (like images). Batch size is 100. (actually, there are two inputs: adj=(20000, 50, 50), features=(20000, 50, 52). </p> <p>The issued part is like below:</p> <pre><code>from keras.layers import Conv2D, MaxPool2D, Flatten adj_visible1 = Input(shape=(50, 50, 1)) conv11 = Conv2D(16, kernel_size=5, activation='relu')(adj_visible1) pool11 = MaxPool2D(pool_size=(2, 2))(conv11) conv12 = Conv2D(8, kernel_size=5, activation='relu')(pool11) pool12 = MaxPool2D(pool_size=(2, 2))(conv12) flat1 = Flatten()(pool12) </code></pre> <p>But an error message occurred like below:</p> <pre><code>ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=3 </code></pre> <p>I found similar cases that print the same message, however, most of the reason is that they didn't consider the filter like (50, 50), not (50, 50, "1") for input shape.</p> <p>In my case, I used the shape (50, 50, 1) not (50, 50). 
However, it still prints the same error message.</p> <p>What should I do?</p> <p>I'm attaching the full code as follows:</p> <pre><code>from sklearn.cross_validation import train_test_split from keras.models import Sequential from keras.layers.core import Dense, Dropout from keras.optimizers import RMSprop, Adam, Adadelta from keras.utils import plot_model from keras.models import Model from keras.layers import Input, Flatten, MaxPool2D from keras.layers.convolutional import Conv2D from keras.layers.merge import concatenate from keras.callbacks import CSVLogger #Settings epoch = 100 batch_size = 100 test_size = 10000 # Load data adj = np.load('adj.npy') #(20000, 50, 50) features = np.load('features.npy') #(20000, 50, 52) Prop = np.load('Properties.npy') #(20000, 1) database = np.dstack((adj, features)) #(20000, 50, 102) #Train/Test split X_tr, X_te, Y_tr, Y_te = train_test_split(database, Prop, test_size=test_size) X_tr_adj, X_tr_features = X_tr[:, :, 0:50], X_tr[:, :, 50:] X_te_adj, X_te_features = X_te[:, :, 0:50], X_te[:, :, 50:] def create_model(): # first input model adj_visible1 = Input(shape=(50, 50, 1)) conv11 = Conv2D(16, kernel_size=5, activation='relu')(adj_visible1) pool11 = MaxPool2D(pool_size=(2, 2))(conv11) conv12 = Conv2D(8, kernel_size=5, activation='relu')(pool11) pool12 = MaxPool2D(pool_size=(2, 2))(conv12) flat1 = Flatten()(pool12) # second input model features_visible2 = Input(shape=(50, 52, 1)) conv21 = Conv2D(16, kernel_size=5, activation='relu')(features_visible2) pool21 = MaxPool2D(pool_size=(2, 2))(conv21) conv22 = Conv2D(8, kernel_size=5, activation='relu')(pool21) pool22 = MaxPool2D(pool_size=(2, 2))(conv22) flat2 = Flatten()(pool22) # merge input models merge = concatenate([flat1, flat2]) # interpretation model hidden1 = Dense(128, activation='relu')(merge) hidden2 = Dense(32, activation='relu')(hidden1) output = Dense(1, activation='linear')(hidden2) model = Model(inputs=[adj_visible1, features_visible2], outputs=output) 
model.compile(loss='mean_squared_error', optimizer=Adam()) # summarize layers print(model.summary()) return model def train_model(batch_size = 100, nb_epoch = 20): model = create_model() csv_logger = CSVLogger('CNN trial.csv') history = model.fit([X_tr_adj, X_tr_features], Y_tr, batch_size=batch_size, epochs=nb_epoch, verbose=1, validation_data=([X_te_adj, X_te_features], Y_te), callbacks=[csv_logger]) predictions_valid = model.predict(X_te_adj, X_te_features, batch_size=batch_size, verbose=1) return model train_model(nb_epoch = epoch) </code></pre> <p>I wrote the code with reference to the following material: <a href="https://machinelearningmastery.com/keras-functional-api-deep-learning/" rel="nofollow noreferrer">https://machinelearningmastery.com/keras-functional-api-deep-learning/</a></p>
<p>You have to use <strong>Conv1D</strong> and <strong>MaxPool1D</strong> instead of <em>Conv2D</em> and <em>MaxPool2D</em>, because your dataset is single-channel rather than a 3-channel image. A Conv1D layer expects 3-dimensional input in the format Batch x Steps x Channels, while Conv2D expects 4-dimensional input, i.e. Batch x Height x Width x Channels.</p> <pre><code>from sklearn.model_selection import train_test_split from keras.models import Sequential from keras.layers.core import Dense, Dropout from keras.optimizers import RMSprop, Adam, Adadelta from keras.utils import plot_model from keras.models import Model from keras.layers import Input, Flatten, MaxPool1D from keras.layers.convolutional import Conv1D from keras.layers.merge import concatenate from keras.callbacks import CSVLogger import numpy as np epoch = 100 batch_size = 100 test_size = 10000 adj = np.random.randint(0,high=100, size=(20000, 50, 50)) #(20000, 50, 50) features = np.random.randint(0,high=100, size=(20000, 50, 52)) #(20000, 50, 52) Prop = np.random.randint(0,high=100, size=(20000,)) #(20000, 1) database = np.dstack((adj, features)) #(20000, 50, 102) print("shape of database:", database.shape) X_tr, X_te, Y_tr, Y_te = train_test_split(database, Prop, test_size=test_size) X_tr_adj, X_tr_features = X_tr[:, :, 0:50], X_tr[:, :, 50:] X_te_adj, X_te_features = X_te[:, :, 0:50], X_te[:, :, 50:] def create_model(): # first input model adj_visible1 = Input(shape=(50, 50)) conv11 = Conv1D(16, kernel_size=5, activation='relu')(adj_visible1) pool11 = MaxPool1D(pool_size=2)(conv11) conv12 = Conv1D(8, kernel_size=5, activation='relu')(pool11) pool12 = MaxPool1D(pool_size=2)(conv12) flat1 = Flatten()(pool12) # second input model features_visible2 = Input(shape=(50, 52)) conv21 = Conv1D(16, kernel_size=5, activation='relu')(features_visible2) pool21 = MaxPool1D(pool_size=2)(conv21) conv22 = Conv1D(8, kernel_size=5, activation='relu')(pool21) pool22 = MaxPool1D(pool_size=2)(conv22) flat2 = 
Flatten()(pool22) # merge input models merge = concatenate([flat1, flat2]) # interpretation model hidden1 = Dense(128, activation='relu')(merge) hidden2 = Dense(32, activation='relu')(hidden1) output = Dense(1, activation='linear')(hidden2) model = Model(inputs=[adj_visible1, features_visible2], outputs=output) model.compile(loss='mean_squared_error', optimizer=Adam()) # summarize layers print(model.summary()) return model def train_model(batch_size = 100, nb_epoch = 20): model = create_model() csv_logger = CSVLogger('CNN trial.csv') history = model.fit([X_tr_adj, X_tr_features], Y_tr, batch_size=batch_size, epochs=nb_epoch, verbose=1, validation_data=([X_te_adj, X_te_features], Y_te), callbacks=[csv_logger]) return model train_model(nb_epoch = 10) </code></pre>
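As a quick, Keras-free sanity check of the rank-3 shapes that Conv1D expects (batch x steps x channels), the answer's dstack/split preprocessing can be reproduced with NumPy alone. The array sizes below are shrunk from the question's for speed, and the variable names are just illustrative:

```python
import numpy as np

# Shrunken stand-ins for the arrays in the answer above
adj = np.random.randint(0, high=100, size=(20, 5, 5))       # (batch, 5, 5)
features = np.random.randint(0, high=100, size=(20, 5, 7))  # (batch, 5, 7)

# Stack along the last axis, as in the answer's preprocessing
database = np.dstack((adj, features))                       # (batch, 5, 12)

# Split back into the two Conv1D inputs
adj_in, feat_in = database[:, :, :5], database[:, :, 5:]

# Each input is rank 3: exactly what Conv1D wants
print(database.shape, adj_in.shape, feat_in.shape)
```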
python-3.x|keras|deep-learning|conv-neural-network|dimension
1
1,901,227
57,597,972
Python web-scraping and downloading specific zip files in Windows
<p>I'm trying to download and stream the contents of specific zip files on a web page.</p> <p>The web page has labels and links to zip files that use a table structure and appear like this:</p> <pre><code>Filename Flag Link testfile_20190725_csv.zip Y zip testfile_20190725_xml.zip Y zip testfile_20190724_csv.zip Y zip testfile_20190724_xml.zip Y zip testfile_20190723_csv.zip Y zip testfile_20190723_xml.zip Y zip (etc.) </code></pre> <p>The word 'zip' above is the link to the zip file. I'd like to download ONLY the CSV zip files and only the first x (say 7) that appear on the page - but none of the XML zip files.</p> <p>A sample of the webpage code is here:</p> <pre><code>&lt;tr&gt; &lt;td class="labelOptional_ind"&gt; testfile_20190725_csv.zip &lt;/td&gt; &lt;/td&gt; &lt;td class="labelOptional" width="15%"&gt; &lt;div align="center"&gt; Y &lt;/div&gt; &lt;/td&gt; &lt;td class="labelOptional" width="15%"&gt; &lt;div align="center"&gt; &lt;a href="/test1/servlets/mbDownload?doclookupId=671334586"&gt; zip &lt;/a&gt; &lt;/div&gt; &lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td class="labelOptional_ind"&gt; testfile_20190725_xml.zip &lt;/td&gt; &lt;td class="labelOptional" width="15%"&gt; &lt;div align="center"&gt; N &lt;/div&gt; &lt;/td&gt; &lt;td class="labelOptional" width="15%"&gt; &lt;div align="center"&gt; &lt;a href="/test1/servlets/mbDownload?doclookupId=671190392"&gt; zip &lt;/a&gt; &lt;/div&gt; &lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td class="labelOptional_ind"&gt; testfile_20190724_csv.zip &lt;/td&gt; &lt;td class="labelOptional" width="15%"&gt; &lt;div align="center"&gt; </code></pre> <p>I think I'm almost there, but need a bit of help. What I've been able to do so far is: 1. Check for existence of a local download folder and create it if not there 2. Setup BeautifulSoup, read from the webpage all of the main labels (the first column of the table), and read all the zip links - i.e. the 'a hrefs' 3. 
For testing, manually set a variable to one of the labels and another to its corresponding zip file link, download the file and stream the CSV contents of the zip file</p> <p>What I need help with is: Downloading all main labels AND their corresponding links, then loop through each, skipping any XML labels/links, and downloading/streaming only the CSV labels/links</p> <p>Here's the code of I have:</p> <pre><code># Read zip files from page, download file, extract and stream output from io import BytesIO from zipfile import ZipFile import urllib.request import os,sys,requests,csv from bs4 import BeautifulSoup # check for download directory existence; create if not there if not os.path.isdir('f:\\temp\\downloaded'): os.makedirs('f:\\temp\\downloaded') # Get labels and zip file download links mainurl = "http://www.test.com/" url = "http://www.test.com/thisapp/GetReports.do?Id=12331" # get page and setup BeautifulSoup r = requests.get(url) soup = BeautifulSoup(r.content, "html.parser") # Get all file labels and filter so only use CSVs mainlabel = soup.find_all("td", {"class": "labelOptional_ind"}) for td in mainlabel: if "_csv" in td.text: print(td.text) # Get all &lt;a href&gt; urls for link in soup.find_all('a'): print(mainurl + link.get('href')) # QUESTION: HOW CAN I LOOP THROUGH ALL FILE LABELS AND FIND ONLY THE # CSV LABELS AND THEIR CORRESPONDING ZIP DOWNLOAD LINK, SKIPPING ANY # XML LABELS/LINKS, THEN LOOP AND EXECUTE THE CODE BELOW FOR EACH, # REPLACING zipfilename WITH THE MAIN LABEL AND zipurl WITH THE ZIP # DOWNLOAD LINK? 
# Test downloading and streaming zipfilename = 'testfile_20190725_xml.zip' zipurl = 'http://www.test.com/thisdownload/servlets/thisDownload?doclookupId=674992379' outputFilename = "f:\\temp\\downloaded\\" + zipfilename # Unzip and stream CSV file url = urllib.request.urlopen(zipurl) zippedData = url.read() # Save zip file to disk print ("Saving to ",outputFilename) output = open(outputFilename,'wb') output.write(zippedData) output.close() # Unzip and stream CSV file with ZipFile(BytesIO(zippedData)) as my_zip_file: for contained_file in my_zip_file.namelist(): with open(("unzipped_and_read_" + contained_file + ".file"), "wb") as output: for line in my_zip_file.open(contained_file).readlines(): print(line) </code></pre>
<p>To get all the required links you can use the <code>find_all()</code> method with a custom function. The function searches for <code>&lt;td&gt;</code> tags whose text ends with <code>"csv.zip"</code>.</p> <p><code>data</code> is the HTML snippet from the question:</p> <pre><code>from bs4 import BeautifulSoup soup = BeautifulSoup(data, 'html.parser') for td in soup.find_all(lambda tag: tag.name=='td' and tag.text.strip().endswith('csv.zip')): link = td.find_next('a') print(td.get_text(strip=True), link['href'] if link else '') </code></pre> <p>Prints:</p> <pre><code>testfile_20190725_csv.zip /test1/servlets/mbDownload?doclookupId=671334586 testfile_20190724_csv.zip </code></pre>
python|web-scraping|beautifulsoup
3
1,901,228
42,346,912
Why the downloaded file numbers is not equal numbers of url's line in my log file?
<p>Platform: debian8 + python3.6 + scrapy 1.3.2.<br> Here is a simple scrapy script to download all the US stock quotes.<br> Please download the 7z file from the webpage.</p> <p><a href="https://drive.google.com/open?id=0B9BpilWzmmMCRGQ0RF8xWWx3ZVU" rel="nofollow noreferrer">all urls to be downloaded</a></p> <p>Extract it with 7z:</p> <pre><code>7z x urls.7z -o/home </code></pre> <p>The sample data /home/urls.csv can be used for testing. Save the scrapy script below as /home/quote.py.</p> <pre><code>import scrapy import csv CONCURRENT_REQUESTS = 3 CONCURRENT_REQUESTS_PER_SPIDER = 3 CLOSESPIDER_PAGECOUNT = 100000 CLOSESPIDER_TIMEOUT = 36000 DOWNLOAD_DELAY = 10 RETRY_ENABLED = False COOKIES_ENABLED = False RETRY_ENABLED = True RETRY_TIMES = 1 COOKIES_ENABLED = False downloaded = open('/home/downloaded.csv','w') class TestSpider(scrapy.Spider): def __init__(self, *args, **kw): self.timeout = 10 name = "quote" allowed_domains = ["chart.yahoo.com"] csvfile = open('/home/urls.csv') reader = csv.reader(csvfile) rows = [row[0] for row in reader] start_urls = rows def parse(self, response): content = response.body target = response.url filename = target.split("=")[1] open('/home/data/'+filename+'.csv', 'wb').write(content) downloaded.write(target+"\n") </code></pre> <p>The last two lines in /home/quote.py are important.<br> <strong>open('/home/data/'+filename+'.csv', 'wb').write(content)</strong> opens a file and saves the data into it.<br> The following line, <strong>downloaded.write(target+"\n")</strong>, writes a log entry recording which URL was just downloaded. </p> <p>Execute the spider with:</p> <pre><code>scrapy runspider /home/quote.py </code></pre> <p>I expected the number of downloaded files to equal the number of URL lines in /home/downloaded.csv.</p> <pre><code>ls /home/data |wc -l 6012 wc /home/downloaded.csv 6124 </code></pre> <p>Why aren't the two numbers equal?<br> Please test on your platform and tell me the two numbers. </p>
<p>In your file 'urls.csv' there are repeated URLs, for example <a href="https://chart.yahoo.com/table.csv?s=JOBS" rel="nofollow noreferrer">https://chart.yahoo.com/table.csv?s=JOBS</a> or <a href="https://chart.yahoo.com/table.csv?s=JRJC" rel="nofollow noreferrer">https://chart.yahoo.com/table.csv?s=JRJC</a>. The function open() with mode 'w' (or 'wb') truncates the file if it already exists and then rewrites it, so every duplicate URL overwrites the same output file. You can check it with something like this (add <code>import os</code> at the top of the script):</p> <pre><code> if not os.path.exists('/home/data/'+filename+'.csv'): open('/home/data/'+filename+'.csv', 'wb').write(content) downloaded.write(target+"\n") else: downloaded.write(target+" already written \n") </code></pre>
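If you want the two counts to agree up front, another option is to de-duplicate the URL list before assigning it to start_urls. A minimal, order-preserving sketch (the sample URLs are just examples):

```python
rows = [
    "https://chart.yahoo.com/table.csv?s=JOBS",
    "https://chart.yahoo.com/table.csv?s=JRJC",
    "https://chart.yahoo.com/table.csv?s=JOBS",  # duplicate
]

# dict preserves insertion order (Python 3.7+), so this keeps
# the first occurrence of each URL and drops later repeats
unique_rows = list(dict.fromkeys(rows))
print(len(rows), len(unique_rows))  # 3 2
```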
python|scrapy
1
1,901,229
42,355,202
Multiprocess, various process reading the same file
<p>I am trying to simulate some DNA-sequencing reads and, in order to speed up the code, I need to run it in parallel.</p> <p>Basically, what I am trying to do is the following: I am sampling reads from the human genome, and I think that when two of the processes from the multiprocessing module try to get data from the same file (the human genome) at once, the reads get corrupted and I am not able to get the desired DNA sequence. I have tried different things, but I am very new to parallel programming and I cannot solve my problem.</p> <p>When I run the script with one core it works fine.</p> <p>This is the way I am calling the function:</p> <pre><code>if __name__ == '__main__': jobs = [] # init the processes for i in range(number_of_cores): length= 100 lock = mp.Manager().Lock() p = mp.Process(target=simulations.sim_reads,args=(lock,FastaFile, "/home/inigo/msc_thesis/genome_data/hg38.fa",length,paired,results_dir,spawn_reads[i],temp_file_names[i])) jobs.append(p) p.start() for p in jobs: p.join() </code></pre> <p>And this is the function I am using to get the reads, where each process writes the data to a different file.</p> <pre><code>def sim_single_end(lc,fastafile,chr,chr_pos_start,chr_pos_end,read_length, unique_id): lc.acquire() left_split_read = fastafile.fetch(chr, chr_pos_end - (read_length / 2), chr_pos_end) right_split_read = fastafile.fetch(chr, chr_pos_start, chr_pos_start + (read_length / 2)) reversed_left_split_read = left_split_read[::-1] total_read = reversed_left_split_read + right_split_read seq_id = "id:%s-%s|left_pos:%s-%s|right:%s-%s " % (unique_id,chr, int(chr_pos_end - (read_length / 2)), int(chr_pos_end), int(chr_pos_start),int(chr_pos_start + (read_length / 2))) quality = "I" * read_length fastq_string = "@%s\n%s\n+\n%s\n" % (seq_id, total_read, quality) lc.release() new_record = SeqIO.read(StringIO(fastq_string), "fastq") return(new_record) </code></pre> <p>Here is the traceback:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most 
recent call last): File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap self.run() File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/home/inigo/Dropbox/PycharmProjects/circ_dna/simulations.py", line 107, in sim_ecc_reads new_read = sim_single_end(lc,fastafile, chr, chr_pos_start, chr_pos_end, read_length, read_id) File "/home/inigo/Dropbox/PycharmProjects/circ_dna/simulations.py", line 132, in sim_single_end new_record = SeqIO.read(StringIO(fastq_string), "fastq") File "/usr/local/lib/python3.5/dist-packages/Bio/SeqIO/__init__.py", line 664, in read first = next(iterator) File "/usr/local/lib/python3.5/dist-packages/Bio/SeqIO/__init__.py", line 600, in parse for r in i: File "/usr/local/lib/python3.5/dist-packages/Bio/SeqIO/QualityIO.py", line 1031, in FastqPhredIterator for title_line, seq_string, quality_string in FastqGeneralIterator(handle): File "/usr/local/lib/python3.5/dist-packages/Bio/SeqIO/QualityIO.py", line 951, in FastqGeneralIterator % (title_line, seq_len, len(quality_string))) ValueError: Lengths of sequence and quality values differs for id:6-chr1_KI270707v1_random|left_pos:50511537-50511587|right:50511214-50511264 (0 and 100). </code></pre>
<p>I am the OP of this question, answering it almost a year later. The problem was that the package I was using to read the human genome file (pysam) was failing. The issue was a typo when calling multiprocessing.</p> <p>From the author's response, this should work:</p> <pre><code> p = mp.Process(target=get_fasta, args=(genome_fa,)) </code></pre> <p>Note the ',': it ensures you pass a tuple.</p> <p>See <a href="https://github.com/pysam-developers/pysam/issues/409" rel="nofollow noreferrer">https://github.com/pysam-developers/pysam/issues/409</a> for more details.</p>
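The trailing comma matters because parentheses alone do not create a tuple in Python. A quick illustration of the difference (no multiprocessing required; the filename is just a placeholder):

```python
genome_fa = "hg38.fa"

with_comma = (genome_fa,)    # a 1-tuple: one positional argument for args=
without_comma = (genome_fa)  # just a parenthesized string

print(type(with_comma).__name__)     # tuple
print(type(without_comma).__name__)  # str

# mp.Process iterates over args, so passing the bare string would make
# each character look like a separate argument instead of one filename
```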
python|multiprocessing|biopython|pysam
1
1,901,230
54,193,955
CSV to Pythonic List
<p>I'm trying to convert a CSV file into a Python list. I have strings organized in columns and need an automated way to turn them into a list. My code works with Pandas, but I only see them again as plain text.</p> <pre><code>import pandas as pd data = pd.read_csv("Random.csv", low_memory=False) dicts = data.to_dict().values() print(data) </code></pre> <p>So the final result should be something like this: ('Dan', 'Zac', 'David')</p>
<p>You can simply do this using the csv module in Python. Note that in Python 3 (which the question is tagged with) map() returns a lazy iterator, so wrap it in list(), and print is a function:</p> <pre><code>import csv with open('random.csv', 'r') as f: reader = csv.reader(f) your_list = list(map(list, reader)) print(your_list) </code></pre> <p>You can also refer <a href="https://stackoverflow.com/a/24662707/8677188">here</a>.</p>
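To get the exact result shown in the question, a tuple of the values from a single column, you can pull that column out of the parsed rows. A self-contained sketch (the sample data here is made up to match the question's expected output):

```python
import csv
import io

# Stand-in for the contents of Random.csv
data = "name,age\nDan,30\nZac,25\nDavid,40\n"

reader = csv.reader(io.StringIO(data))
next(reader)                               # skip the header row
names = tuple(row[0] for row in reader)    # first column only
print(names)  # ('Dan', 'Zac', 'David')
```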
python-3.x
0
1,901,231
54,055,124
Odd numbers only on OpenCV trackbar for Python?
<p>I am learning how to use OpenCV on Python for skin segmentation and right now I am mostly in the experimental phase, where I am playing with Gaussian blur to reduce the sharp contrasts which I am getting with Otsu's Binarization. </p> <p>One strategy that I found very useful in my experimentation was to use the trackbar functionality on the display window to change various parameters such as the kernel size and the standard deviation of the Gaussian function. The trackbar works great when I change the std, but my program crashes when I do the same for kernel size.</p> <p>The reason for this is that the kernel size takes only odd numbers &gt; 1 as a tuple of two values. Since the trackbar is continuous, when I move it and the trackbar reads an even number, the Gaussian function throws an error.</p> <p>I was hoping that you could provide me with a solution to create a trackbar with only odd numbers or even only numbers from an array, if possible. Thanks!</p> <pre><code># applying otsu binerization to video stream feed = cv2.VideoCapture(0) # create trackbars to control the amount of blur cv2.namedWindow('blur') # callback function for trackbar def blur_callback(trackbarPos): pass # create the trackbar cv2.createTrackbar('Blur Value', 'blur', 1, 300, blur_callback) # cv2.createTrackbar('Kernel Size', 'blur', 3, 51, blur_callback) while True: vid_ret, frame = feed.read() # flip the frames frame = cv2.flip(frame, flipCode=1) # convert the feed to grayscale frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # get blur value from trackbar and apply gaussian blur to frame_gray blurVal = cv2.getTrackbarPos('Blur Value', 'blur') # kernelSize = cv2.getTrackbarPos('Kernel Size', 'blur') frame_blur = cv2.GaussianBlur(frame_gray, (11, 11), blurVal) # apply Otsu binerization on vanilla grayscale otsu_ret, otsu = cv2.threshold(frame_gray, 0, 255, cv2.THRESH_OTSU) # apply Otsu binerization on blurred grayscale otsu_blue_ret, otsu_blur = cv2.threshold(frame_blur, 0, 
255, cv2.THRESH_OTSU) # show the differnt images cv2.imshow('color', frame) # cv2.imshow('gray', frame_gray) cv2.imshow('blur', frame_blur) cv2.imshow('otsu', otsu) cv2.imshow('otsu_blur', otsu_blur) # exit key if cv2.waitKey(10) &amp; 0xFF == ord('q'): break # release the feed and close all windows feed.release() cv2.destroyAllWindows() </code></pre>
<p>I know this is an old post, but here is my solution:</p> <pre class="lang-py prettyprint-override"><code>def on_blockSize_trackbar(val): global blockSize if (val%2)==0: blockSize = val+1 cv2.setTrackbarPos(blockSize_name, window_detection_name, blockSize) else: blockSize=val cv2.setTrackbarPos(blockSize_name, window_detection_name, blockSize) blockSize = max(blockSize, 1) </code></pre> <p>together with the line that creates the trackbar:</p> <pre><code>cv2.createTrackbar(blockSize_name, window_detection_name , blockSize, 100, on_blockSize_trackbar) </code></pre> <p>The trackbar itself still steps through both odd and even positions, but the value actually used in the code will always be odd.</p>
python|opencv
1
1,901,232
65,352,078
What is the most efficient way to sort an array with the first item?
<p>I want to sort the array [[x₁, y₁], [x₂, y₂], [x₃, y₃],...] by the first term. I know that it is doable with bubble sorting, but is there more concise and efficient way to sort the array? Here is a working code for bubble sorting.</p> <pre><code>def bubble_sort(n, array): for i in range(n): swap = False for j in range(n-i-1): if array[j][0] &gt; array[j+1][0]: array[j][0], array[j+1][0] = array[j+1][0], array[j][0] swap = True if not swap: break return array </code></pre>
<p>Use the built-in sort method from Python.</p> <pre><code>import random test_list = [[random.randint(0, 10), random.randint(0, 10)] for i in range(10)] print(test_list) # sort using the first element test_list.sort(key=lambda x: x[0]) # or test_list = sorted(test_list, key=lambda x: x[0]) print(test_list) </code></pre> <p>The sorting algorithm used by <code>sorted</code> is <code>Timsort</code>, which achieves <code>O(nlogn)</code> in the worst case and <code>O(n)</code> in the best case. That is faster than the <code>O(n^2)</code> of bubble sort.</p> <p>reference: <a href="https://www.programiz.com/python-programming/methods/list/sort" rel="nofollow noreferrer">sorted</a>, <a href="https://en.wikipedia.org/wiki/Timsort" rel="nofollow noreferrer">timsort</a></p>
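As a small variation on the answer above, `operator.itemgetter` from the standard library avoids the lambda and is typically a bit faster for this kind of key:

```python
from operator import itemgetter

pairs = [[3, 'c'], [1, 'a'], [2, 'b']]
pairs.sort(key=itemgetter(0))  # sort in place by the first element
print(pairs)  # [[1, 'a'], [2, 'b'], [3, 'c']]
```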
python|list
2
1,901,233
45,454,564
Influxdb request on time
<p>I have a problem with my request to InfluxDB: when I query with now() time parameters it works fine, but when I try to query with variable parameters it doesn't work.</p> <p>This works:</p> <pre><code>"SELECT * FROM \"%s\" WHERE session_id = '%s' AND time &gt; now() - 10s AND time &lt; now() - 9s ORDER BY time DESC LIMIT 25" %(table, session_id) </code></pre> <p>This does not:</p> <pre><code>"SELECT * FROM \"%s\" WHERE session_id = '%s' AND time &gt; \'%U\' AND time &lt; \'%U\' ORDER BY time DESC LIMIT 25" %(table, session_id, date_inf, date_sup) </code></pre> <p>date_inf and date_sup are nanosecond timestamps.</p> <p>Here is the doc; example 3 shows what I want to do: <a href="https://docs.influxdata.com/influxdb/v1.3/query_language/data_exploration/#time-syntax" rel="nofollow noreferrer">https://docs.influxdata.com/influxdb/v1.3/query_language/data_exploration/#time-syntax</a></p> <p>Any help is welcome.</p>
<p>Try this:</p> <pre><code>"SELECT * FROM \"%s\" WHERE session_id = '%s' AND time &gt; '%s' AND time &lt; '%s' ORDER BY time DESC LIMIT 25" % (table, session_id, begin, end) </code></pre> <p>In this case "begin" and "end" are time strings in UTC (RFC3339) format ('2018-02-20T04:05:25Z', etc.), passed as the third and fourth format arguments. To prototype requests for InfluxDB you can use the Data Explorer of Chronograf.</p>
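It is worth noting why the original query string failed at all: `%U` is not a valid Python string-format code, so the second string never even builds. If you prefer to keep the nanosecond timestamps from the question instead of RFC3339 strings, InfluxQL accepts bare epoch integers in the WHERE clause, so `%d` without quotes should also work. A sketch of the string construction only (the table name and values here are made up):

```python
table = "measurements"
session_id = "abc123"
date_inf = 1501862400000000000  # nanosecond epoch bounds (example values)
date_sup = 1501862410000000000

# Nanosecond epochs go into InfluxQL unquoted, as plain integers
query = (
    "SELECT * FROM \"%s\" WHERE session_id = '%s' "
    "AND time > %d AND time < %d "
    "ORDER BY time DESC LIMIT 25"
) % (table, session_id, date_inf, date_sup)

print(query)
```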
python|sql|sql-like|influxdb
0
1,901,234
45,363,419
How would I code a hangman game with multiple outcomes possible?
<p>I am trying to code a hangman game that prints different words when the player gets the word with a certain amount of lives left, but I can't figure out how to do that. I have everything else put in place expect this concept. I have tried elif statements, but don't know what else to try except for this:</p> <pre><code>lives_remaining = 14 guessed_letters = '' def play(): word = pick_a_word() while True: guess = get_guess(word) if first_try(guess, word): print('Excellent!') break elif second_try(guess, word): print('Great!') break elif third_try(guess, word): print('Ok!') break elif fourth_try(guess, word): print('Close one!') break elif lives_remaining == 0: print('Nope!') print('The word was: ' + (word)) break def first_try(guess, word): if lives_remaining &gt; 12: def second_try(guess, word): if lives_remaining &gt; 8: def third_try(guess, word): if lives_remaining &gt; 4: def fourth_try(guess, word): if lives_remaining &gt; 0: def pick_a_word(): return random.choice(words) def get_guess(word): print_word_with_blanks(word) print('Lives Remaining: ' + str(lives_remaining)) guess = raw_input(' Guess a letter or whole word?') return guess def print_word_with_blanks(word): display_word = '' for letter in word: if guessed_letters.find(letter) &gt; -1: # letter found display_word = display_word + letter else: # letter not found display_word = display_word + '-' print(display_word) def process_guess(guess, word): if len(guess) &gt; 1: return whole_word_guess(guess, word) else: return single_letter_guess(guess, word) def whole_word_guess(guess, word): global lives_remaining if guess == word: return True else: lives_remaining = lives_remaining - 1 return False def single_letter_guess(guess, word): global guessed_letters global lives_remaining if word.find(guess) == -1: # letter guess was incorrect lives_remaining = lives_remaining - 1 guessed_letters = guessed_letters + guess if all_letters_guessed(word): return True return False def all_letters_guessed(word): for 
letter in word: if guessed_letters.find(letter) == -1: return False return True play() </code></pre> <p>I just don't know how I could possibly make those functions work. Any input would be greatly appreciated :)</p>
<pre><code>if numberOfLives == 1: print 'response1' elif numberOfLives == 2: print 'response2' elif ... </code></pre> <p>You could also <a href="https://www.pydanny.com/why-doesnt-python-have-switch-case.html" rel="nofollow noreferrer">use a dictionary</a>.</p>
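The dictionary alternative mentioned above could look like the sketch below (written in Python 3 syntax; the messages and thresholds are illustrative, not taken from the question):

```python
responses = {
    1: 'Nope!',
    2: 'Close one!',
    3: 'Ok!',
    4: 'Great!',
    5: 'Excellent!',
}

def message_for(lives_remaining):
    # .get() supplies a fallback for any count not in the table
    return responses.get(lives_remaining, 'Keep going!')

print(message_for(5))  # Excellent!
print(message_for(9))  # Keep going!
```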
python|python-2.7
0
1,901,235
45,674,311
setting up environment in virtaulenv using python3 stuck on setuptools, pip, wheel
<p>Running the following: </p> <p><code>virtualenv -p python3 venv</code></p> <p>gives: </p> <pre><code>Running virtualenv with interpreter /usr/bin/python3 Using base prefix '/usr' New python executable in /specific/a/home/cc/students/csguests/taivanbatb/venv/bin/python3 Also creating executable in /specific/a/home/cc/students/csguests/taivanbatb/venv/bin/python Installing setuptools, pip, wheel... </code></pre> <p>which is where it gets stuck. </p> <p>Calling CTRL-C gives: </p> <pre><code> File "/usr/local/bin/virtualenv", line 11, in &lt;module&gt; sys.exit(main()) File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 671, in main Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 2328, in &lt;module&gt; raise SystemExit(popen.wait()) File "/usr/lib/python2.7/subprocess.py", line 1376, in wait pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) File "/usr/lib/python2.7/subprocess.py", line 476, in _eintr_retry_call return func(*args) KeyboardInterrupt main() File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 713, in main symlink=options.symlink) File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 945, in create_environment download=download, File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 901, in install_wheel call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=SCRIPT) File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 769, in call_subprocess line = stdout.readline() KeyboardInterrupt </code></pre> <p>Similar to <a href="https://stackoverflow.com/questions/43599428/virtualenv-hung-up-on-installing-setuptools">this</a>. </p> <p>As suggested in the linked question, I tried installing with <code>--no-wheel</code> but to no avail. And I am sure it is not a network connectivity problem because setting up an environment using python2 using <code>virtualenv env</code> gives no errors. 
</p> <p>The specific versions of all the packages I am using are as follows: </p> <p>python 3.4.0 python 2.7.6 virtualenv 15.1.0</p>
<p>1. Check your internet connection. </p> <p>2. Set python3 as your default Python interpreter, since python2.7 is currently your default. Try creating the environment without wheel:</p> <pre><code>virtualenv venv --no-wheel </code></pre> <p>Then activate the virtualenv and run: </p> <pre><code>pip install --upgrade pip pip install setuptools --no-use-wheel --upgrade pip install wheel --no-cache </code></pre> <p>If you are behind a proxy, use:<br> <code>sudo pip download setuptools pip wheel --proxy http://&lt;yourproxyhere&gt;</code> </p> <p>After all this, <code>virtualenv -p python3 venv</code> works <strong><em>perfectly</em></strong> for me.<br> <strong><em>NOTE</em></strong>: this assumes a virtual environment is already set up on your system and python3 is your default interpreter.</p> <blockquote> <p>Alternatively, you don't need to run <code>virtualenv -p python3 venv</code>. You can specify the Python interpreter (present in the /usr/bin/* folder) that you want virtualenv to use, like this:<br> <strong>virtualenv --python=/usr/bin/pythonX.Y /home/username/path/to/virtualenv_name</strong> </p> <p>If you want to create it in the current working directory, use:<br> <strong>virtualenv --python=/usr/bin/pythonX.Y virtualenv_name</strong><br> <a href="https://help.pythonanywhere.com/pages/RebuildingVirtualenvs/" rel="nofollow noreferrer">REFERENCE</a> </p> </blockquote>
python|virtualenv|python-3.4
15
1,901,236
68,792,897
How can repetitive rows of data be collected in a single row in pandas?
<p>I have a dataset that contains NBA players' average statistics per game. Some players' statistics are repeated because they've been on different teams during the season.</p> <p>For example:</p> <pre><code> Player Pos Age Tm G GS MP FG 8 Jarrett Allen C 22 TOT 28 10 26.2 4.4 9 Jarrett Allen C 22 BRK 12 5 26.7 3.7 10 Jarrett Allen C 22 CLE 16 5 25.9 4.9 </code></pre> <p>I want to average Jarrett Allen's stats and put them into a single row. How can I do this?</p>
<p>You can <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="noreferrer"><code>groupby</code></a> and use <a href="https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="noreferrer"><code>agg</code></a> to get the mean. For the non numeric columns, let's take the first value:</p> <pre class="lang-py prettyprint-override"><code>df.groupby('Player').agg({k: 'mean' if v in ('int64', 'float64') else 'first' for k,v in df.dtypes[1:].items()}) </code></pre> <p>output:</p> <pre><code> Pos Age Tm G GS MP FG Player Jarrett Allen C 22 TOT 18.666667 6.666667 26.266667 4.333333 </code></pre> <p>NB. content of the dictionary comprehension:</p> <pre><code>{'Pos': 'first', 'Age': 'mean', 'Tm': 'first', 'G': 'mean', 'GS': 'mean', 'MP': 'mean', 'FG': 'mean'} </code></pre>
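On recent pandas versions, an alternative sketch is to take the numeric means with `numeric_only=True` and join back the first non-numeric values per player; the frame below is a made-up miniature of the question's data:

```python
import pandas as pd

df = pd.DataFrame({
    'Player': ['Jarrett Allen'] * 3,
    'Pos': ['C'] * 3,
    'Age': [22, 22, 22],
    'Tm': ['TOT', 'BRK', 'CLE'],
    'G': [28, 12, 16],
})

num = df.groupby('Player').mean(numeric_only=True)  # means of Age, G
txt = df.groupby('Player')[['Pos', 'Tm']].first()   # first Pos/Tm per player
out = txt.join(num)
print(out)
```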
python|pandas|dataframe|data-science
28
1,901,237
68,685,210
AWS not letting me ssh into instance and cannot connect to my website due to timeout error
<p>An hour ago I could connect to my website, stringapi.net, but now it is taking too long to respond, and I also cannot SSH into my instance, as that is not loading either (I assume it is also a timeout error). Does anybody have any suggestions? I just set up certbot a few days ago; could that be the problem?</p> <p>I have checked, and EC2 itself does not appear to be down.</p>
<p>Rebooting the instance worked!</p>
python|amazon-web-services|http|amazon-ec2|certbot
0
1,901,238
56,937,274
Triggering the external dag using another dag in Airflow
<p>I have a list of tasks that trigger different DAGs from a master DAG. I'm using the TriggerDagRunOperator to accomplish this, but I'm facing a few issues.</p> <ul> <li><p>TriggerDagRunOperator doesn't wait for the external DAG to complete; it moves straight on to the next task. I want it to wait until completion, and the next task should trigger based on the status. I came across ExternalTaskSensor, but it makes the process complicated. Is there any other solution?</p></li> <li><p>If I trigger the master DAG again, I want the task to restart from where it failed. Right now it doesn't restart, although with a time-based schedule it does.</p></li> </ul>
<blockquote> <p>.. I want that to wait until completion .. Came across ExternalTaskSensor. It is making the process complicated ..</p> </blockquote> <p>I'm unaware of any other way to achieve this. I myself did it <a href="https://stackoverflow.com/a/51359972/3679900">the same way</a>.</p> <hr> <blockquote> <p>If I trigger the master dag again, I want the task to restart from where it is failed...</p> </blockquote> <p>This requirement of yours goes against the <a href="https://gtoonstra.github.io/etl-with-airflow/principles.html#etl-principles" rel="nofollow noreferrer">principle of idempotency</a> that <code>Airflow</code> demands. I'd suggest you try to rework your jobs to incorporate idempotency (for instance, retries only make sense when tasks are idempotent). Meanwhile, you can take inspiration from <a href="https://medium.com/@plieningerweb/use-apache-airflow-to-run-task-exactly-once-6fb70ca5e7ec" rel="nofollow noreferrer">some people</a> and try to achieve something similar (but it will be pretty complicated).</p>
python|airflow|airflow-scheduler
2
1,901,239
57,151,926
How do I turn an image (200x200 Black and White photos) into a single list of 40,000 values using numpy?
<p>As the title says I have an image (well a bunch of images) and I want to turn it from a 200x200 image into a 1-D list of 40,000.</p>
<p>Try flattening the image and converting it to a list:</p> <pre><code> img.ravel().tolist() </code></pre> <p>An ndarray of shape Nx200x200 can be converted by reshaping:</p> <pre><code>bunch_of_images.reshape(N, 40000) </code></pre>
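A quick check of both lines above with NumPy, using tiny 4x4 "images" in place of 200x200 ones:

```python
import numpy as np

img = np.arange(16).reshape(4, 4)   # one 4x4 grayscale "image"
flat = img.ravel().tolist()         # plain 1-D Python list, length 16
print(len(flat))  # 16

batch = np.zeros((3, 4, 4))         # N images at once (N = 3 here)
flat_batch = batch.reshape(3, 16)   # or batch.reshape(3, -1)
print(flat_batch.shape)  # (3, 16)
```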
numpy|image-processing
1
1,901,240
57,198,115
How to create averaged rgb vectors from image pixel array?
<p>I have an image that I'd like to break up into 30px by 30px blocks. Then, I'd like to average each of the r,g, and b values of the pixels. So for example, a 900px by 900px image would be broken up into 900 blocks (each 30px by 30px). Then I'd like to take the average of the r,g, and b values in each block. In the end, I'd like an array of 900 three-dimensional vectors, each representing the average r,g,b value of their respective block.</p> <p>I've tried using numpy and pillow to break up the blocks, but I don't seem to be slicing my pixel array correctly.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from PIL import Image item_image = Image.open("1.jpg") pixel_array = np.asarray(item_image) width, height = item_image.size blocks = [] BLOCK_WIDTH = 30 BLOCK_HEIGHT = 30 row_start = 0 row_end = BLOCK_HEIGHT column_start = 0 column_end = BLOCK_WIDTH print(len(pixel_array)) while row_end &lt; height: while column_end &lt; width: row = slice(row_start, row_end) col = slice(column_start, column_end) blocks.append(pixel_array[ row , col ]) column_start += BLOCK_WIDTH column_end += BLOCK_WIDTH row_start += BLOCK_HEIGHT row_end += BLOCK_HEIGHT averaged_blocks = [] for i in range(BLOCK_HEIGHT): for j in range(BLOCK_WIDTH): averaged_blocks.append(np.mean(blocks[i][j], axis = 0)) </code></pre> <p>I'm pretty new at working with images, so if anyone has any recommendations or suggestions I'd greatly appreciate it!</p>
<p>As you say you are open to suggestions, you could achieve what you ask in the shell with one line of <strong>ImageMagick</strong> which is installed on most Linux distros and is available for macOS and Windows.</p> <p>So, in shell or Command Prompt, take an image, split it into 30x30 tiles, average contents of each tile by resizing to 1x1, append all 1x1 means to a new single row image and print each pixel of that image when represented as 8-bit in text format:</p> <pre><code>magick image.png -crop 30x30 -resize 1x1\! +append -depth 8 txt: </code></pre> <p><strong>Sample Output</strong></p> <pre><code># ImageMagick pixel enumeration: 900,1,65535,rgb 0,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 1,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 2,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 3,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 4,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 5,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 6,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 7,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 8,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 9,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 10,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 11,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 12,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 13,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 14,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 15,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 16,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 17,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 18,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 19,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 20,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 21,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 22,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 23,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 24,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 25,0: 
(82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 26,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 27,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 28,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 29,0: (82.2696,63142.9,82.2696) #00F600 rgb(0,246,0) 30,0: (254.322,58412.4,254.322) #01E301 rgb(1,227,1) 31,0: (254.322,58412.4,254.322) #01E301 rgb(1,227,1) 32,0: (254.322,58412.4,254.322) #01E301 rgb(1,227,1) </code></pre>
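If you'd rather stay in Python, the same tile-averaging can be done with a single numpy reshape instead of nested slicing loops. This is a sketch (not the asker's original code) that assumes the image height and width are exact multiples of the block size:

```python
import numpy as np

def average_blocks(pixels, block=30):
    # pixels: (H, W, 3) array; assumes H and W are multiples of `block`
    h, w, c = pixels.shape
    # carve the image into (rows, block, cols, block, channels) tiles,
    # then average over each tile's two pixel axes
    tiles = pixels.reshape(h // block, block, w // block, block, c)
    return tiles.mean(axis=(1, 3)).reshape(-1, c)

# small demo: a 90x90 image whose top-left 30x30 block is one solid colour
img = np.zeros((90, 90, 3))
img[:30, :30] = [10, 20, 30]
means = average_blocks(img)   # shape (9, 3): one RGB mean per block
```

A 900x900 image gives a (900, 3) result, i.e. one three-component mean per 30x30 block, in row-major block order.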
python|numpy|python-imaging-library
1
1,901,241
25,437,773
Statistics of the ordering of columns
<p>Say I have a dataframe with <code>N</code> columns (e.g. <code>N=3</code>). Every row represents a sample:</p> <pre><code> A B C sample_1 64 46 69 sample_2 55 33 40 sample_3 67 51 78 sample_4 97 32 62 sample_5 50 36 39 </code></pre> <p>I would like to know what is the <strong>most common ordering</strong> of the columns <code>A</code>, <code>B</code>, <code>C</code> across rows. </p> <p>In the case above, one could sort every row manually:</p> <pre><code>sample_1: [B, A, C] sample_2: [B, C, A] sample_3: [B, A, C] sample_4: [B, C, A] sample_5: [B, C, A] </code></pre> <p>and then find out that the most common ordering is <code>[B, C, A]</code>, while <code>[B, A, C]</code> is the second most common. </p> <p>Are there any functions in Pandas, scipy or statsmodels that facilitate this analysis? For example, what if I want to find out <strong>how often</strong> each ordering happens?</p>
<p>Maybe:</p> <pre><code>&gt;&gt;&gt; from collections import Counter &gt;&gt;&gt; f = lambda ts: df.columns[np.argsort(ts).values] &gt;&gt;&gt; Counter(map(tuple, df.apply(f, axis=1).values)) Counter({('B', 'C', 'A'): 3, ('B', 'A', 'C'): 2}) </code></pre> <p>So the most common ordering is:</p> <pre><code>&gt;&gt;&gt; _.most_common(1) [(('B', 'C', 'A'), 3)] </code></pre> <p>Alternatively:</p> <pre><code>&gt;&gt;&gt; f = lambda ts: tuple(df.columns[np.argsort(ts)]) &gt;&gt;&gt; df.apply(f, axis=1, raw=True).value_counts() (B, C, A) 3 (B, A, C) 2 dtype: int64 </code></pre>
pandas|scipy|statsmodels
4
1,901,242
25,748,556
How to define a method which splits the given URL (In python webapp2 for Google app engine)
<p>I want to set up a Google App Enging (GAE) app which provides a login functionality with OAuth2 and OAuth1 for Twitter, Facebook, ...., Therefore I chose the authomatic module (<a href="http://peterhudec.github.io/authomatic/" rel="nofollow">http://peterhudec.github.io/authomatic/</a>) which seemd easy to use. But now I have a couple of problems (I am very new to that whole web service programming stuff).</p> <p>So what I have is:</p> <pre><code>import os import sys import webapp2 from authomatic import Authomatic from authomatic.adapters import Webapp2Adapter from config import CONFIG authomatic_dir = os.path.join(os.path.dirname(__file__), 'authomatic') sys.path.append(authomatic_dir) # Instantiate Authomatic. authomatic = Authomatic(config=CONFIG, secret='some random secret string') # Create a simple request handler for the login procedure. class Login(webapp2.RequestHandler): # The handler must accept GET and POST http methods and # Accept any HTTP method and catch the "provider_name" URL variable. def any(self, provider_name):#HERE IS THE PROBLEM ... class Home(webapp2.RequestHandler): def get(self): # Create links to the Login handler. self.response.write('Login with &lt;a href="login/gl"&gt;Google&lt;/a&gt;.&lt;br /&gt;') # Create routes. ROUTES = [webapp2.Route(r'/login/gl', Login, handler_method='any'), webapp2.Route(r'/', Home)] # Instantiate the webapp2 WSGI application. 
application = webapp2.WSGIApplication(ROUTES, debug=True) </code></pre> <p>And the error I get is:</p> <pre><code>"any() takes exactly 2 arguments (1 given)" </code></pre> <p>I tried to substitute any with get() or post() because I already had an app where I did an <code>redirect('blog/42')</code> and the <code>get(self, post_id)</code> automatically split the <code>42</code> to <code>post_id</code> (example can be found here <a href="http://udacity-cs253.appspot.com/static/hw5.tgz" rel="nofollow">http://udacity-cs253.appspot.com/static/hw5.tgz</a> (look at the PostPage class in blog.py))</p> <p>So I really do not understand all the magic which happens here; could someone please explain me how to solve this error, so that the get()-parameter <code>provider_name</code> is assigned the value <code>gl</code>.</p>
<p>Instead of </p> <pre><code>webapp2.Route(r'/login/gl', Login, handler_method='any') </code></pre> <p>Use </p> <pre><code>webapp2.Route(r'/login/&lt;provider_name&gt;', Login, handler_method='any') </code></pre> <p>And now the path after <code>/login/</code> will be passed to <code>def any</code> in the <code>provider_name</code> parameter.</p> <p>I.e. requesting <code>/login/gl</code> will pass "<code>gl</code>" as the <code>provider_name</code> to <code>def any</code>.</p>
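For intuition, webapp2 compiles the `<provider_name>` template into a regular expression with a named capture group, and the captured groups are passed to the handler as keyword arguments. This is a simplified sketch of that idea, not webapp2's actual implementation:

```python
import re

# roughly how a webapp2 route template becomes a regex:
# <provider_name> turns into a named capture group
template = '/login/<provider_name>'
pattern = re.sub(r'<(\w+)>', r'(?P<\1>[^/]+)', template)

match = re.match(pattern, '/login/gl')
# the named groups become keyword arguments of the handler method,
# which is why `def any(self, provider_name)` expects a second argument
kwargs = match.groupdict()
```

With the literal route `/login/gl` there is no group to capture, so nothing is passed and `any()` only receives `self`, producing the "takes exactly 2 arguments (1 given)" error.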
python|google-app-engine|webapp2
2
1,901,243
61,619,201
TypeError: _bulk_create() got an unexpected keyword argument 'ignore_conflicts'
<p>While adding groups with permission from Django Admin Panel and adding other M2M relationships too. I got this error!!</p> <p>It says : <strong>TypeError: _bulk_create() got an unexpected keyword argument 'ignore_conflicts'</strong></p> <p>I can't find the error, Probably a noob mistake.</p> <pre><code>class GroupSerializer(serializers.ModelSerializer): permissions = PermissionSerializerGroup(many=True, required=False) class Meta: model = Group fields = ('id', 'name', 'permissions') extra_kwargs = { 'name': {'validators': []}, } def create(self, validated_data): print(validated_data) permissions_data = validated_data.pop("permissions") obj, group = Group.objects.update_or_create(name=validated_data["name"]) obj.permissions.clear() for permission in permissions_data: per = Permission.objects.get(codename=permission["codename"]) obj.permissions.add(per) obj.save() return obj </code></pre> <p>Here is the Traceback:</p> <pre><code> File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner response = get_response(request) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response response = self.process_exception_by_middleware(e, request) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/options.py", line 607, in wrapper return self.admin_site.admin_view(view)(*args, **kwargs) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/utils/decorators.py", line 130, in _wrapped_view response = view_func(request, *args, **kwargs) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/views/decorators/cache.py", line 44, in _wrapped_view_func response = 
view_func(request, *args, **kwargs) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/sites.py", line 231, in inner return view(request, *args, **kwargs) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/options.py", line 1638, in add_view return self.changeform_view(request, None, form_url, extra_context) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/utils/decorators.py", line 43, in _wrapper return bound_method(*args, **kwargs) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/utils/decorators.py", line 130, in _wrapped_view response = view_func(request, *args, **kwargs) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/options.py", line 1522, in changeform_view return self._changeform_view(request, object_id, form_url, extra_context) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/options.py", line 1566, in _changeform_view self.save_related(request, form, formsets, not add) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/contrib/admin/options.py", line 1107, in save_related form.save_m2m() File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/forms/models.py", line 442, in _save_m2m f.save_form_data(self.instance, cleaned_data[f.name]) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/db/models/fields/related.py", line 1618, in save_form_data getattr(instance, self.attname).set(data) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/db/models/fields/related_descriptors.py", line 1008, in set self.add(*new_objs, through_defaults=through_defaults) File "/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/db/models/fields/related_descriptors.py", line 946, in add through_defaults=through_defaults, File 
"/home/suman/Desktop/suman1234/myvenv/lib/python3.6/site-packages/django/db/models/fields/related_descriptors.py", line 1129, in _add_items ], ignore_conflicts=True) TypeError: _bulk_create() got an unexpected keyword argument 'ignore_conflicts' </code></pre>
<p>I solved this issue by downgrading the Django version to 2.2.12. It seems that Django 3.0+ has this issue.</p>
python-3.x|django-rest-framework
0
1,901,244
23,603,973
Pynest ImportError: no module named nest
<p>Today I got PyNest working, after I followed the instructions about installation etc. from their <a href="http://www.nest-initiative.org/index.php/PyNEST" rel="nofollow noreferrer">official site</a>. My problem is that I have to run the following command before I can successfully import nest, otherwise I get an &quot;ImportError: No module named nest&quot; :</p> <pre><code>export PYTHONPATH=/opt/nest/lib/python2.7/site-packages:$PYTHONPATH </code></pre> <p>I found out about this command on the official link I gave you above, but I don't understand why this happens. What I can guess is that this command &quot;shows&quot; where my nest/python files are, but how can I make this command permanent, so I won't have to run it before every trial?</p> <p>EDIT1: I tried @SumitGupta 's answer and I can now import it when I run python from a terminal, but I get the same error when I try to import nest from Geany or iPython.</p> <p>(I use Ubuntu 12.04 through VMware virtualization from win8.1 if it matters)</p>
<p>Try adding it to <i>.profile</i>, <i>.bashrc</i> or <i>.bash_profile</i>, whichever of these your shell actually reads; on Ubuntu it is usually <i>.bashrc</i>. These files live in the user's home directory.</p>
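For programs that don't read shell startup files (the Geany/IPython case from the edit), the path can also be appended at runtime before importing. This sketch reuses the install path from the question; adjust it to your own NEST install:

```python
import sys

# make the package importable without relying on PYTHONPATH;
# the path below is the one from the question
nest_path = '/opt/nest/lib/python2.7/site-packages'
if nest_path not in sys.path:
    sys.path.append(nest_path)
# import nest  # would now resolve if NEST is installed at that path
```

Putting those two lines at the top of a script makes it independent of whichever environment the IDE happened to launch with.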
python-2.7|python-import|importerror|nest-simulator
1
1,901,245
23,549,836
Multiprocessing + Requests Hangs with exception AttributeError: 'file' object has no attribute 'out'
<p>I'm trying to build a class that uses multiprocessing + requests to make several requests in parallel. I'm running into an issue where it just hangs and gives me a cryptic error message and I'm not sure way. </p> <p>Below is my code, it basically just uses a Pool with a callback to put results into a list. I have the requirement that I need a "hard timeout" for each URL, i.e. if a URL is taking more than a few seconds to get its content downloaded I just want to skip it. So I use a Pool timeout and do a diff on URLs attempted vs. URL content returned, the ones that were attempted but not returned are assumed to have failed. Here is my code:</p> <pre><code>import time import json import requests import sys from urlparse import parse_qs from urlparse import urlparse from urlparse import urlunparse from urllib import urlencode from multiprocessing import Process, Pool, Queue, current_process from multiprocessing.pool import ThreadPool from multiprocessing import TimeoutError import traceback from sets import Set from massweb.pnk_net.pnk_request import pnk_request_raw from massweb.targets.fuzzy_target import FuzzyTarget from massweb.payloads.payload import Payload class MassRequest(object): def __init__(self, num_threads = 10, time_per_url = 10, request_timeout = 10, proxy_list = [{}]): self.num_threads = num_threads self.time_per_url = time_per_url self.request_timeout = request_timeout self.proxy_list = proxy_list self.results = [] self.urls_finished = [] self.urls_attempted = [] self.targets_results = [] self.targets_finished = [] self.targets_attempted = [] def add_to_finished(self, x): self.urls_finished.append(x[0]) self.results.append(x) def add_to_finished_targets(self, x): self.targets_finished.append(x[0]) self.targets_results.append(x) def get_urls(self, urls): timeout = float(self.time_per_url * len(urls)) pool = Pool(processes = self.num_threads) proc_results = [] for url in urls: self.urls_attempted.append(url) proc_result = pool.apply_async(func = 
pnk_request_raw, args = (url, self.request_timeout, self.proxy_list), callback = self.add_to_finished) proc_results.append(proc_result) for pr in proc_results: try: pr.get(timeout = timeout) except: pool.terminate() pool.join() pool.terminate() pool.join() list_diff = Set(self.urls_attempted).difference(Set(self.urls_finished)) for url in list_diff: sys.stderr.write("URL %s got timeout" % url) self.results.append((url, "__PNK_GET_THREAD_TIMEOUT")) if __name__ == "__main__": f = open("out_urls_to_fuzz_1mil") urls_to_request = [] for line in f: url = line.strip() urls_to_request.append(url) mr = MassRequest() mr.get_urls(urls_to_request) </code></pre> <p>Here is the function being called by the threads:</p> <pre><code>def pnk_request_raw(url_or_target, req_timeout = 5, proxy_list = [{}]): if proxy_list[0]: proxy = get_random_proxy(proxy_list) else: proxy = {} try: if isinstance(url_or_target, str): sys.stderr.write("Requesting: %s with proxy %s\n" % (str(url_or_target), str(proxy))) r = requests.get(url_or_target, proxies = proxy, timeout = req_timeout) return (url_or_target, r.text) if isinstance(url_or_target, FuzzyTarget): sys.stderr.write("Requesting: %s with proxy %s\n" % (str(url_or_target), str(proxy))) r = requests.get(url_or_target.url, proxies = proxy, timeout = req_timeout) return (url_or_target, r.text) except: #use this to mark failure on exception traceback.print_exc() #edit: this is the line that was breaking it all sys.stderr.out("A request failed to URL %s\n" % url_or_target) return (url_or_target, "__PNK_REQ_FAILED") </code></pre> <p>This seems to work well for smaller sets of URLs, but here is the output:</p> <pre class="lang-none prettyprint-override"><code>Requesting: http://www.sportspix.co.za/ with proxy {} Requesting: http://www.sportspool.co.za/ with proxy {} Requesting: http://www.sportspredict.co.za/ with proxy {} Requesting: http://www.sportspro.co.za/ with proxy {} Requesting: http://www.sportsrun.co.za/ with proxy {} Requesting: 
http://www.sportsstuff.co.za/ with proxy {} Requesting: http://sportsstuff.co.za/2011-rugby-world-cup with proxy {} Requesting: http://www.sportstar.co.za/4-stroke-racing with proxy {} Requesting: http://www.sportstats.co.za/ with proxy {} Requesting: http://www.sportsteam.co.za/ with proxy {} Requesting: http://www.sportstec.co.za/ with proxy {} Requesting: http://www.sportstours.co.za/ with proxy {} Requesting: http://www.sportstrader.co.za/ with proxy {} Requesting: http://www.sportstravel.co.za/ with proxy {} Requesting: http://www.sportsturf.co.za/ with proxy {} Requesting: http://reimo.sportsvans.co.za/ with proxy {} Requesting: http://www.sportsvans.co.za/4x4andmoreWindhoek.html with proxy {} Handled exception:Traceback (most recent call last): File "mass_request.py", line 87, in get_fuzzy_targets pr.get(timeout = timeout) File "/usr/lib/python2.7/multiprocessing/pool.py", line 528, in get raise self._value AttributeError: 'file' object has no attribute 'out' </code></pre> <p>On that last exception, the program hangs and I have to completely kill it. AFAIK I'm never trying to access a file object with the attribute "out". My question is... how to fix!? Am I doing something obviously wrong here? Why isn't there a clearer exception?</p>
<p>I think that <code>sys.stderr.out("A request failed to URL %s\n" % url_or_target)</code> should be <code>sys.stderr.write("A request failed to URL %s\n" % url_or_target)</code></p>
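The AttributeError in the traceback is exactly what that typo produces: file-like objects expose <code>write()</code> but no <code>out()</code>, and because the typo sits inside the worker's <code>except</code> block, the new exception replaces the original one and only surfaces later from <code>pr.get()</code>. A quick check:

```python
import sys

# file-like objects implement write(), not out(); calling the missing
# attribute raises AttributeError, e.g. "'file' object has no attribute 'out'"
has_write = hasattr(sys.stderr, 'write')
has_out = hasattr(sys.stderr, 'out')
```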
python|python-2.7|multiprocessing|python-requests
3
1,901,246
23,485,673
Apply resampling to each group in a groupby object
<p>I've created a convenience method to perform resampling on an arbitrary dataframe:</p> <pre><code>def resample_data_to_hourly(df): df = df.resample('1H',how='mean',fill_method='ffill', closed='left',label='left') return df </code></pre> <p>And I would like to apply this function to every dataframe in a groupby object with something like the following:</p> <pre><code>df.transform(resample_data_to_hourly) df.aggregate(resample_data_to_hourly) dfapply(resample_data_to_hourly) </code></pre> <p>I've tried them all with no success. No matter what I do, no effect is had on the dataframe, even if I set the resulting value of the above to a new dataframe (which, to my understanding, I shouldn't have to do).</p> <p>I'm sure there is something straightforward and idiomatic about handling groupby objects with time series data that I am missing here, but I haven't been able to correct my program.</p> <p>How do I create functions like the above and have them properly apply to a groupby object? I can get my code to work if I iterate through each group as in a dictionary and add the results to a new dictionary which I can then convert back into a groupby object, but this is terribly hacky and I feel like I'm missing out on a lot of what Pandas can do because I'm forced into these hacky methods.</p> <p>EDIT ADDING BASE EXAMPLE:</p> <pre><code>rng = pd.date_range('1/1/2000', periods=10, freq='10m') df = pd.DataFrame({'a':pd.Series(randn(len(rng)), index=rng), 'b':pd.Series(randn(len(rng)), index=rng)}) </code></pre> <p>yields:</p> <pre><code> a b 2000-01-31 0.168622 0.539533 2000-11-30 -0.283783 0.687311 2001-09-30 -0.266917 -1.511838 2002-07-31 -0.759782 -0.447325 2003-05-31 -0.110677 0.061783 2004-03-31 0.217771 1.785207 2005-01-31 0.450280 1.759651 2005-11-30 0.070834 0.184432 2006-09-30 0.254020 -0.895782 2007-07-31 -0.211647 -0.072757 df.groupby('a').transform(hour_resample) // should yield resampled data with both a and b columns // instead yields only column b // df.apply 
yields both columns but in this case no changes will be made to the actual matrix // (though in this case no change would be made, sample data could be generated such that a change should be made) // if someone could supply a reliable way to generate data that can be resampled, that would be wonderful </code></pre>
<pre><code>data.groupby(level=0).apply(
    lambda d: d.reset_index(level=0, drop=True).resample("M", how="mean"))
</code></pre>
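In more recent pandas versions the `how=` keyword is gone (you chain an aggregation such as `.mean()` instead), and `groupby(...).resample(...)` does the per-group resampling in one step. A small sketch with made-up column and group names:

```python
import pandas as pd

idx = pd.date_range('2000-01-01', periods=6, freq='D')
df = pd.DataFrame({'grp': ['a', 'a', 'a', 'b', 'b', 'b'],
                   'val': [1, 2, 3, 4, 5, 6]}, index=idx)

# resample each group's time series into 2-day bins and average each bin;
# the result is a Series with a (group, timestamp) MultiIndex
out = df.groupby('grp')['val'].resample('2D').mean()
```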
python|numpy|pandas|dataframe
3
1,901,247
24,157,598
checking input against tuple of responses
<p>I have code that checks if the input matches a tuple of inputs:</p> <pre><code>if name1 in confirms: </code></pre> <p>And here's the tuple:</p> <pre><code>confirms = ('yes', 'yeah', 'yea') </code></pre> <p>But how do I make it so that if something like 'yes I do' or 'yeah of course' is entered, it understands that 'yeah' is in the input and deals with it the same as just saying 'yeah'?</p>
<p>I would do something like this, lower-casing each word of the input before testing it:</p> <pre><code>[x.lower() in confirms for x in name1.split()] </code></pre> <p>then just test if True is in this new list by using the <a href="https://docs.python.org/2/library/functions.html#any" rel="nofollow">any()</a> function:</p> <pre><code>any(x.lower() in confirms for x in name1.split()) </code></pre> <p>This has several drawbacks; for instance, if the user wrote both a positive and a negative, this would find the positive. I would maybe look into a different approach to your initial problem.</p>
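Putting the same idea together as a small helper (a sketch; the generator form avoids building the intermediate list):

```python
confirms = ('yes', 'yeah', 'yea')

def is_affirmative(reply):
    # True if any word of the reply, lower-cased, is a known confirmation
    return any(word.lower() in confirms for word in reply.split())

ok = is_affirmative('Yeah of course')   # a confirmation word appears
plain = is_affirmative('yes')           # the single-word case still works
nope = is_affirmative('no thanks')      # no confirmation word at all
```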
python|python-2.7
0
1,901,248
20,415,661
GNU Radio--raw data from uhd_fft.py
<p>I would like to do spectrum sensing with GNU radio. Is there a good way to get the raw output from uhd_fft.py (the value for each frequency)? I would like to do this programatically (with code), rather than through a GUI.</p> <p>I have tried doing spectrum sensing with usrp_spectrum_sense.py, and this script has questionable accuracy and seems to be much slower than uhd_fft.py.</p> <p>Thanks!</p>
<p>You should really direct your question to the GNURadio mailing list. This is a very application-specific question, which isn't necessarily appropriate for SO.</p> <p><a href="https://lists.gnu.org/mailman/listinfo/discuss-gnuradio" rel="nofollow">https://lists.gnu.org/mailman/listinfo/discuss-gnuradio</a></p> <p>To answer your question a bit, uhd_fft.py is just a Python program that is doing a transform on your data. You can do the same thing in C++ with GNURadio. Just edit the Python code to dump the bin data instead of plotting it and you should get what you want.</p>
python|gnuradio
0
1,901,249
20,524,146
String formatting without index in python2.6
<p>I've got many thousands of lines of python code that has python2.7+ style string formatting (e.g. without indices in the <code>{}</code>s)</p> <pre><code>&quot;{} {}&quot;.format('foo', 'bar') </code></pre> <p>I need to run this code under python2.6 which <em>requires</em> the indices.</p> <p>I'm wondering if anyone knows of a painless way allow python2.6 to run this code. It'd be great if there was a <code>from __future__ import blah</code> solution to the problem. I don't see one. Something along those lines would be my first choice.</p> <p>A distant second would be some script that can automate the process of adding the indices, at least in the obvious cases:</p> <pre><code>&quot;{0} {1}&quot;.format('foo', 'bar') </code></pre>
<p>It doesn't quite preserve the whitespacing and could probably be made a bit smarter, but it will at least identify Python strings (apostrophes/quotes/multi line) correctly without resorting to a regex or external parser:</p> <pre><code>import tokenize from itertools import count import re with open('your_file') as fin: output = [] tokens = tokenize.generate_tokens(fin.readline) for num, val in (token[:2] for token in tokens): if num == tokenize.STRING: val = re.sub('{}', lambda L, c=count(): '{{{0}}}'.format(next(c)), val) output.append((num, val)) print tokenize.untokenize(output) # write to file instead... </code></pre> <p>Example input:</p> <pre><code>s = "{} {}".format('foo', 'bar') if something: do_something('{} {} {}'.format(1, 2, 3)) </code></pre> <p>Example output (note slightly iffy whitespacing):</p> <pre><code>s ="{0} {1}".format ('foo','bar') if something : do_something ('{0} {1} {2}'.format (1 ,2 ,3 )) </code></pre>
python|string-formatting|python-2.6|backport
7
1,901,250
71,955,291
creating environment variables for jupyter notebook in vscode
<p>In vscode <code>settings.json</code> file I can use the following option to define environment variables:</p> <pre class="lang-json prettyprint-override"><code>&quot;terminal.integrated.env.osx&quot; : { &quot;MY_ENV&quot;: &quot;test&quot;, &quot;MY_ENVTYPE&quot;: &quot;qa&quot; } </code></pre> <p>Now whenever I start a new shell in the workspace, the shell loads with the above environment variables, and I can access them typically with <code>os.environ[&quot;MY_ENV&quot;]</code> in my Python scripts.</p> <p>But with the same <code>settings.json</code>, if I try to access the environment variables in a Jupyter notebook I get <code>None</code>. So my question is, is there a way to define environment variables in vscode's <code>settings.json</code> file, so that whenever I start a new notebook, the environment variables are loaded by default?</p> <p>Currently the workaround I have found is to add the following code snippet in a top code cell:</p> <pre class="lang-py prettyprint-override"><code> import os os.environ[&quot;MY_ENV&quot;] = &quot;test&quot; os.environ[&quot;MY_ENVTYPE&quot;] = &quot;qa&quot; </code></pre> <p>I am hoping there is a better way to do the same.</p>
<p>We could use <a href="https://github.com/theskumar/python-dotenv" rel="nofollow noreferrer">python-dotenv</a> to solve this problem. Use <code>pip install python-dotenv</code> to install the package. To configure the development environment, add a <code>.env</code> file in the root directory of the project:</p> <pre><code>. ├── .env └── test.py </code></pre> <p>Then we can use the following code to load the environment:</p> <pre><code>%load_ext dotenv %dotenv </code></pre>
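For context, what <code>python-dotenv</code> essentially does is parse <code>KEY=VALUE</code> lines into <code>os.environ</code>. A minimal stand-alone sketch of that idea (a simplification, not the library's actual implementation):

```python
import os
import tempfile

def load_env(path):
    # simplified version of what python-dotenv does: parse KEY=VALUE lines
    # (skipping blanks and comments) into the process environment
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('#') and '=' in line:
                key, _, value = line.partition('=')
                os.environ.setdefault(key.strip(), value.strip())

# demo: write a throwaway .env file and load it
with tempfile.NamedTemporaryFile('w', suffix='.env', delete=False) as tmp:
    tmp.write('# comment\nMY_ENV=test\nMY_ENVTYPE=qa\n')

load_env(tmp.name)
```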
python|visual-studio-code|jupyter-notebook
1
1,901,251
36,011,046
How to specify path in os.system() to execute a shell script in Flask
<p>I am using Flask to execute a shell script and here is my actual code:</p> <pre><code>def execute(cmd, files): os.system(cmd) back =dict() for file in files: with open(file, 'r') as f: info = f.read() back[file] = info return back @app.route('/executeScript', methods = ['POST']) def executeScript(): output = execute('./script.sh', ['file1.txt', 'file2.txt']) return render_template('template.html', output=output) </code></pre> <p>But I want to put my script (script.sh) in a particular folder. For that I need to add the path in my code, but when add it, it doesn't work anymore. I've tried something like:</p> <pre><code>output = execute(['sh', 'path/to/myscript/script.sh'], ['path/to/myscript/file1.txt', 'path/to/myscript/file2.txt']) </code></pre> <p>But this is not working, the script is not executed at all. Any idea how to make it work?</p>
<p>According to the description of <a href="https://docs.python.org/2/library/os.html#os.system" rel="nofollow"><code>os.system</code></a> (emphasis mine):</p> <blockquote> <p>Execute the command <strong>(a string)</strong> in a subshell.</p> </blockquote> <p>When you try to run </p> <pre><code>execute(['sh', 'path/to/myscript/script.sh'], ...) </code></pre> <p>...you end up passing a list to <code>os.system</code>. Try</p> <pre><code>execute('sh path/to/myscript/script.sh', ...) </code></pre>
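If you do want to pass the command as a list, use the <code>subprocess</code> module, whose <code>call()</code> accepts one; the list form also sidesteps shell quoting of paths with spaces. A sketch, using the Python interpreter as a stand-in so it runs anywhere (for the question it would be <code>subprocess.call(['sh', 'path/to/myscript/script.sh'])</code>):

```python
import subprocess
import sys

# each list element becomes one argument of the child process,
# so no shell quoting of the script path is needed
rc = subprocess.call([sys.executable, '-c', 'print("script ran")'])
```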
python|shell|flask
2
1,901,252
15,472,714
For a checkbox with multiple values in my html template how to access it through python code?
<p>I'm a <code>python</code> and <code>django</code> newbie. </p> <p>This is in my <code>html</code> template </p> <pre><code>&lt;input type ="checkbox" value={{ item.id }} name="ck1[]"&gt; </code></pre> <p>In views.py when I do a <code>checked = request.POST.get(['ck1'])</code> I get an unhashable list error. Kindly guide me. </p>
<p>Please don't use PHP syntax when you're writing Django. <code>name="ck1[]"</code> is a PHP-ism that's completely unnecessary.</p> <p>If you want the field to be called <code>ck1</code>, just use <code>name="ck1"</code>, and use <code>request.POST.getlist('ck1')</code> in your view.</p> <p>If you really have to use that horrible bracket syntax, you'll need to use <code>request.POST.getlist('ck1[]')</code>, because Django quite sensibly believes that the name you use in the HTML is the name you should get in the POST data.</p>
python|html|django|templates
1
1,901,253
14,944,117
launching something within python
<p>Ok so as you can see to my question I'm a total newb at python. I'm building a python script and basically I want it to execute this line<br></p> <pre> <code> /Library/Frameworks/GDAL.framework/Programs/ogr2ogr -f "GeoJSON" output.json input.shp </code> </pre> <p>How do I get python to execute this as if I was typing it in my terminal? <br> Thanks</p>
<pre><code>import os os.system('/Library/Frameworks/GDAL.framework/Programs/ogr2ogr -f "GeoJSON" output.json input.shp') </code></pre> <p>More recently, it is recommended to use the <a href="http://docs.python.org/2/library/subprocess.html" rel="noreferrer">subprocess</a> package:</p> <pre><code>subprocess.call(['/Library/Frameworks/GDAL.framework/Programs/ogr2ogr', '-f', '"GeoJSON"', 'output.json', 'input.shp']) </code></pre>
python
6
1,901,254
46,531,861
Error compiling using stdlibc++ - symbol(s) not found for architecture x86_64
<p>When I try to run the following shell script at (<a href="https://github.com/renmengye/rec-attend-public" rel="nofollow noreferrer">https://github.com/renmengye/rec-attend-public</a>):</p> <pre><code>TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())') g++ -std=c++11 -stdlib=libc++ -shared hungarian.cc -o hungarian.so -fPIC -I $TF_INC -D_GLIBCXX_USE_CXX11_ABI=0 </code></pre> <p>I get the following errors:</p> <pre><code>11 warnings generated. Undefined symbols for architecture x86_64: "tensorflow::DEVICE_CPU", referenced from: ___cxx_global_var_init.7 in hungarian-8050bd.o "tensorflow::TensorShape::DestructorOutOfLine()", referenced from: tensorflow::TensorShape::~TensorShape() in hungarian-8050bd.o "tensorflow::TensorShape::AddDim(long long)", referenced from: HungarianOp::Compute(tensorflow::OpKernelContext*) in hungarian-8050bd.o "tensorflow::TensorShape::TensorShape()", referenced from: HungarianOp::Compute(tensorflow::OpKernelContext*) in hungarian-8050bd.o "tensorflow::register_op::OpDefBuilderReceiver::OpDefBuilderReceiver(tensorflow::register_op::OpDefBuilderWrapper&lt;true&gt; const&amp;)", referenced from: ___cxx_global_var_init in hungarian-8050bd.o "tensorflow::OpDefBuilder::Input(tensorflow::StringPiece)", referenced from: tensorflow::register_op::OpDefBuilderWrapper&lt;true&gt;::Input(tensorflow::StringPiece) in hungarian-8050bd.o "tensorflow::OpDefBuilder::Output(tensorflow::StringPiece)", referenced from: tensorflow::register_op::OpDefBuilderWrapper&lt;true&gt;::Output(tensorflow::StringPiece) in hungarian-8050bd.o "tensorflow::OpDefBuilder::OpDefBuilder(tensorflow::StringPiece)", referenced from: tensorflow::register_op::OpDefBuilderWrapper&lt;true&gt;::OpDefBuilderWrapper(char const*) in hungarian-8050bd.o "tensorflow::kernel_factory::OpKernelRegistrar::InitInternal(tensorflow::KernelDef const*, tensorflow::StringPiece, tensorflow::OpKernel* (*)(tensorflow::OpKernelConstruction*))", referenced from: 
tensorflow::kernel_factory::OpKernelRegistrar::OpKernelRegistrar(tensorflow::KernelDef const*, tensorflow::StringPiece, tensorflow::OpKernel* (*)(tensorflow::OpKernelConstruction*)) in hungarian-8050bd.o "tensorflow::OpKernelContext::allocate_output(int, tensorflow::TensorShape const&amp;, tensorflow::Tensor**)", referenced from: HungarianOp::Compute(tensorflow::OpKernelContext*) in hungarian-8050bd.o "tensorflow::OpKernelContext::CtxFailureWithWarning(tensorflow::Status)", referenced from: HungarianOp::Compute(tensorflow::OpKernelContext*) in hungarian-8050bd.o "tensorflow::OpKernelContext::input(int)", referenced from: HungarianOp::Compute(tensorflow::OpKernelContext*) in hungarian-8050bd.o "tensorflow::KernelDefBuilder::Device(char const*)", referenced from: ___cxx_global_var_init.7 in hungarian-8050bd.o "tensorflow::KernelDefBuilder::KernelDefBuilder(char const*)", referenced from: tensorflow::register_kernel::Name::Name(char const*) in hungarian-8050bd.o "tensorflow::OpDef::~OpDef()", referenced from: tensorflow::OpRegistrationData::~OpRegistrationData() in hungarian-8050bd.o "tensorflow::OpKernel::OpKernel(tensorflow::OpKernelConstruction*)", referenced from: HungarianOp::HungarianOp(tensorflow::OpKernelConstruction*) in hungarian-8050bd.o "tensorflow::OpKernel::~OpKernel()", referenced from: HungarianOp::~HungarianOp() in hungarian-8050bd.o "tensorflow::internal::LogMessage::MinVLogLevel()", referenced from: HungarianOp::MinWeightedBipartiteCover(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*) in hungarian-8050bd.o HungarianOp::GetEqualityGraph(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;) in hungarian-8050bd.o HungarianOp::MaxBipartiteMatching(Eigen::Matrix&lt;float, -1, -1, 1, 
-1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*) in hungarian-8050bd.o HungarianOp::Augment(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;&amp;) in hungarian-8050bd.o "tensorflow::internal::LogMessage::LogMessage(char const*, int, int)", referenced from: HungarianOp::MinWeightedBipartiteCover(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*) in hungarian-8050bd.o HungarianOp::GetEqualityGraph(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;) in hungarian-8050bd.o HungarianOp::MaxBipartiteMatching(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*) in hungarian-8050bd.o HungarianOp::Augment(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;&amp;) in hungarian-8050bd.o "tensorflow::internal::LogMessage::~LogMessage()", referenced from: HungarianOp::MinWeightedBipartiteCover(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*) in hungarian-8050bd.o HungarianOp::GetEqualityGraph(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;) in hungarian-8050bd.o HungarianOp::MaxBipartiteMatching(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*) in hungarian-8050bd.o 
HungarianOp::Augment(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;&amp;) in hungarian-8050bd.o "tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)", referenced from: tensorflow::core::RefCounted::~RefCounted() in hungarian-8050bd.o HungarianOp::Compute(tensorflow::OpKernelContext*) in hungarian-8050bd.o tensorflow::TensorShape::dims() const in hungarian-8050bd.o HungarianOp::MinWeightedBipartiteCover(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*) in hungarian-8050bd.o HungarianOp::MaxFlow(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;) in hungarian-8050bd.o HungarianOp::Augment(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;&amp;) in hungarian-8050bd.o tensorflow::KernelDefBuilder::~KernelDefBuilder() in hungarian-8050bd.o ... 
"tensorflow::internal::LogMessageFatal::~LogMessageFatal()", referenced from: tensorflow::core::RefCounted::~RefCounted() in hungarian-8050bd.o HungarianOp::Compute(tensorflow::OpKernelContext*) in hungarian-8050bd.o tensorflow::TensorShape::dims() const in hungarian-8050bd.o HungarianOp::MinWeightedBipartiteCover(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;*) in hungarian-8050bd.o HungarianOp::MaxFlow(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;) in hungarian-8050bd.o HungarianOp::Augment(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;&amp;, Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt;&amp;) in hungarian-8050bd.o tensorflow::KernelDefBuilder::~KernelDefBuilder() in hungarian-8050bd.o ... "tensorflow::internal::CheckOpMessageBuilder::ForVar2()", referenced from: std::__1::basic_string&lt;char, std::__1::char_traits&lt;char&gt;, std::__1::allocator&lt;char&gt; &gt;* tensorflow::internal::MakeCheckOpString&lt;int, int&gt;(int const&amp;, int const&amp;, char const*) in hungarian-8050bd.o "tensorflow::internal::CheckOpMessageBuilder::NewString()", referenced from: std::__1::basic_string&lt;char, std::__1::char_traits&lt;char&gt;, std::__1::allocator&lt;char&gt; &gt;* tensorflow::internal::MakeCheckOpString&lt;int, int&gt;(int const&amp;, int const&amp;, char const*) in hungarian-8050bd.o "tensorflow::internal::CheckOpMessageBuilder::CheckOpMessageBuilder(char const*)", referenced from: std::__1::basic_string&lt;char, std::__1::char_traits&lt;char&gt;, std::__1::allocator&lt;char&gt; &gt;* tensorflow::internal::MakeCheckOpString&lt;int, int&gt;(int const&amp;, int const&amp;, char const*) in hungarian-8050bd.o "tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()", referenced from: std::__1::basic_string&lt;char, 
std::__1::char_traits&lt;char&gt;, std::__1::allocator&lt;char&gt; &gt;* tensorflow::internal::MakeCheckOpString&lt;int, int&gt;(int const&amp;, int const&amp;, char const*) in hungarian-8050bd.o "tensorflow::TensorShape::CheckDimsEqual(int) const", referenced from: Eigen::DSizes&lt;long, 3&gt; tensorflow::TensorShape::AsEigenDSizes&lt;3&gt;() const in hungarian-8050bd.o Eigen::DSizes&lt;long, 2&gt; tensorflow::TensorShape::AsEigenDSizes&lt;2&gt;() const in hungarian-8050bd.o "tensorflow::TensorShape::CheckDimsAtLeast(int) const", referenced from: Eigen::DSizes&lt;long, 3&gt; tensorflow::TensorShape::AsEigenDSizesWithPadding&lt;3&gt;() const in hungarian-8050bd.o Eigen::DSizes&lt;long, 2&gt; tensorflow::TensorShape::AsEigenDSizesWithPadding&lt;2&gt;() const in hungarian-8050bd.o "tensorflow::TensorShape::dim_size(int) const", referenced from: HungarianOp::Compute(tensorflow::OpKernelContext*) in hungarian-8050bd.o HungarianOp::ComputeHungarianBatch(tensorflow::Tensor const&amp;, tensorflow::Tensor*, tensorflow::Tensor*, tensorflow::Tensor*) in hungarian-8050bd.o HungarianOp::ComputeHungarian(tensorflow::Tensor const&amp;, tensorflow::Tensor*, tensorflow::Tensor*, tensorflow::Tensor*) in hungarian-8050bd.o Eigen::DSizes&lt;long, 3&gt; tensorflow::TensorShape::AsEigenDSizesWithPadding&lt;3&gt;() const in hungarian-8050bd.o HungarianOp::CopyInput(tensorflow::Tensor const&amp;) in hungarian-8050bd.o HungarianOp::CopyOutput(Eigen::Matrix&lt;float, -1, -1, 1, -1, -1&gt; const&amp;, tensorflow::Tensor*) in hungarian-8050bd.o Eigen::DSizes&lt;long, 2&gt; tensorflow::TensorShape::AsEigenDSizesWithPadding&lt;2&gt;() const in hungarian-8050bd.o ... 
"tensorflow::Tensor::tensor_data() const", referenced from: HungarianOp::CopyInput(tensorflow::Tensor const&amp;) in hungarian-8050bd.o "tensorflow::Tensor::CheckTypeAndIsAligned(tensorflow::DataType) const", referenced from: tensorflow::TTypes&lt;float, 3ul, long&gt;::ConstTensor tensorflow::Tensor::tensor&lt;float, 3ul&gt;() const in hungarian-8050bd.o tensorflow::TTypes&lt;float, 3ul, long&gt;::Tensor tensorflow::Tensor::tensor&lt;float, 3ul&gt;() in hungarian-8050bd.o tensorflow::TTypes&lt;float, 2ul, long&gt;::Tensor tensorflow::Tensor::tensor&lt;float, 2ul&gt;() in hungarian-8050bd.o "typeinfo for tensorflow::OpKernel", referenced from: typeinfo for HungarianOp in hungarian-8050bd.o ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) </code></pre> <p>I have looked into the following questions, nothing seems to work:</p> <ul> <li><a href="https://stackoverflow.com/questions/19774778/when-is-it-necessary-to-use-use-the-flag-stdlib-libstdc/19774902">When is it necessary to use use the flag -stdlib=libstdc++?</a></li> <li><a href="https://mathematica.stackexchange.com/questions/34692/mathlink-linking-error-after-os-x-10-9-mavericks-upgrade">https://mathematica.stackexchange.com/questions/34692/mathlink-linking-error-after-os-x-10-9-mavericks-upgrade</a></li> <li><a href="https://stackoverflow.com/questions/19637164/c-linking-error-after-upgrading-to-mac-os-x-10-9-xcode-5-0-1">C++ linking error after upgrading to Mac OS X 10.9 / Xcode 5.0.1</a></li> </ul> <p>If I change it to <code>libstdc++</code> :</p> <pre><code>TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())') g++ -std=c++11 -stdlib=libstdc++ -shared hungarian.cc -o hungarian.so -fPIC -I $TF_INC -D_GLIBCXX_USE_CXX11_ABI=0 </code></pre> <p>I get the following error:</p> <pre><code>clang: warning: libstdc++ is deprecated; move to libc++ [-Wdeprecated] In file included from hungarian.cc:15: 
/Users/xyz/tensorflow012/lib/python2.7/site-packages/tensorflow/include/tensorflow/core/framework/op.h:20:10: fatal error: 'unordered_map' file not found #include &lt;unordered_map&gt; ^ 1 error generated. </code></pre> <p><strong>Before downvoting/closing, please read the question in its entirety.</strong></p>
<p>I figured it out. Since I am running on Mac, g++ needs this flag: <code>-undefined dynamic_lookup</code>.</p>
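For reference, a sketch of the full build command with <code>-undefined dynamic_lookup</code> added, based on the compile line quoted in the question (paths and the ABI define may need adjusting for your setup):

```sh
TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
g++ -std=c++11 -shared hungarian.cc -o hungarian.so -fPIC -I $TF_INC \
    -undefined dynamic_lookup -D_GLIBCXX_USE_CXX11_ABI=0
```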
macos|tensorflow|g++|clang|libc++
0
1,901,255
60,967,950
Selenium Can't Find Element Returning None or []
<p>im having trouble accessing element, here is my code:</p> <pre><code>driver.get(url) desc = driver.find_elements_by_xpath('//p[@class="somethingcss xxx"]') </code></pre> <p>and im trying to use another method like this</p> <pre><code>desc = driver.find_elements_by_class_name('somethingcss xxx') </code></pre> <p>the element i try to find like this</p> <pre><code>&lt;div data-testid="descContainer"&gt; &lt;div class="abc1123"&gt; &lt;h2 class="xxx"&gt;The Description&lt;span data-tid="prodTitle"&gt;The Description&lt;/span&gt;&lt;/h2&gt; &lt;p data-id="paragraphxx" class="somethingcss xxx"&gt;sometext here &lt;br&gt;text &lt;br&gt; &lt;br&gt;text &lt;br&gt; and several text with &lt;br&gt; tag below &lt;/p&gt; &lt;/div&gt; &lt;!--and another div tag below--&gt; </code></pre> <p>i want to extract tag p inside div class="abc1123", but it doesn't return any result, only return [] when i try to get_attribute or extract it to text. </p> <p>When i try extract another element using this method with another class, it works perfectly.</p> <p>Does anyone know why I can't access these elements?</p>
<p>Try the following CSS selector to locate the <code>p</code> tag.</p> <pre><code>print(driver.find_element_by_css_selector("p[data-id^='paragraph'][class^='somethingcss']").text) </code></pre> <p>OR use <code>get_attribute("textContent")</code>:</p> <pre><code>print(driver.find_element_by_css_selector("p[data-id^='paragraph'][class^='somethingcss']").get_attribute("textContent")) </code></pre>
python-3.x|selenium|xpath|selenium-chromedriver
0
1,901,256
49,529,447
How to use preprocessing_function in Keras 2.1.5
<p>I am trying to use Transfer learning on VGG16 pretrained model for image classification task with 13 classes by retraining last 4 layers of the pretrained netowrk. </p> <p>I am also using ImageDataGenerator from keras as mentioned <a href="https://keras.io/preprocessing/image/" rel="nofollow noreferrer">here</a>. </p> <p>In this method, I am not able to figure out how should i use vgg16's <code>preprocess_input</code> method imported from <code>from keras.applications.vgg16 import preprocess_input</code> in ImageDataGenerator.</p> <p>Whenever i run the code i get an error saying <strong>JpegImageFile’ object is not subscriptable</strong></p> <pre><code>from keras.applications import VGG16 from keras import layers from keras import optimizers from keras.models import Sequential from keras.layers import Conv2D from keras.layers import MaxPooling2D from keras.layers import Flatten from keras.layers import Dense from keras.applications.vgg16 import preprocess_input from keras.preprocessing.image import ImageDataGenerator train_dir = '' validation_dir = '' vgg_conv = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3)) for layer in vgg_conv.layers[:-4]: layer.trainable = False for layer in vgg_conv.layers: print(layer, layer.trainable) model = Sequential() # Add the vgg convolutional base model model.add(vgg_conv) model.add(layers.Flatten()) model.add(layers.Dense(1024, activation='relu')) model.add(layers.Dropout(0.5)) model.add(layers.Dense(13, activation='softmax')) model.summary() train_datagen = ImageDataGenerator( rescale=1. / 255, rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True, fill_mode='nearest', preprocessing_function = preprocess_input ) validation_datagen = ImageDataGenerator(rescale=1. 
/ 255) train_batchsize = 100 val_batchsize = 20 train_generator = train_datagen.flow_from_directory( train_dir, target_size=(224, 224), batch_size=train_batchsize, class_mode='categorical' ) validation_generator = validation_datagen.flow_from_directory( validation_dir, target_size=(224, 224), batch_size=val_batchsize, class_mode='categorical', shuffle=False) model.compile(loss='categorical_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['acc']) history = model.fit_generator( train_generator, steps_per_epoch=550, epochs=30, validation_data=validation_generator, validation_steps=430) model.save('small_last4.h5') </code></pre> <p>As suggested at various places, I also tried custom preprocess func. This also doesn't work.</p> <pre><code>vgg_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32).reshape((3,1,1)) def vgg_preprocess(x): """ Subtracts the mean RGB value, and transposes RGB to BGR. The mean RGB was computed on the image set used to train the VGG model. Args: x: Image array (height x width x channels) Returns: Image array (height x width x transposed_channels) """ x = x - vgg_mean return x[:, ::-1] # reverse axis rgb-&gt;bgr </code></pre> <p>Interestingly this problem is only in Keras 2.1.5. In 2.1.4 it works fine. The drawback that I am facing in downgrading keras is that my training time has drastically increased.</p>
<p>You can add a <code>Lambda</code> layer before adding <code>vgg_conv</code> like this:</p> <pre><code>from keras.layers import Lambda
from keras.applications.inception_v3 import preprocess_input

model = Sequential()
model.add(Lambda(preprocess_input, name='preprocessing', input_shape=(224, 224, 3)))
model.add(vgg_conv)
...
</code></pre> <p>Unfortunately, using <code>preprocess_input</code> from <code>keras.applications.vgg16</code> doesn't seem to work for me, but you could try importing it from <code>inception_v3</code>. Hopefully we are talking about the same preprocessing, but I am not entirely sure.</p>
python|tensorflow|keras
0
1,901,257
49,783,659
Combining multiple regex capturing groups and get first match
<p>I have multiple groups and I want to split the string on the first match, so the code looks like this:</p> <pre><code>regex_patterns = ( r"(?P&lt;group1&gt;345)", r"(?P&lt;group2&gt;123)", ) p = re.compile("|".join(regex_patterns)) p.split("012345", maxsplit=1) </code></pre> <p>this will output <code>["0", "123", None, "45"]</code>, so it will show <code>None</code> for <code>&lt;group2&gt;</code>. Is there a way to make this only output <code>["0", "123", "45"]</code> (i.e. ignore unmatched groups), and which group was matched?</p>
<p>re.split can keep what's split on, if it's in a capture group.</p> <p>But your combined regex contains multiple named capture groups.<br> So simplify it and create a combined regex that has only 1 capture group.</p> <p><strong>Example snippet:</strong> </p> <pre><code>import re

regex_patterns = (
    '345',
    '123'
)
regex_combined_pattern = '(' + '|'.join(regex_patterns) + ')'
print(regex_combined_pattern)

s = '012345603456'  # renamed from `str` to avoid shadowing the built-in
result = re.split(regex_combined_pattern, s)
print(result)
</code></pre> <p><strong>Output:</strong> </p> <pre><code>(345|123)
['0', '123', '4560', '345', '6']
</code></pre>
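The question also asked which group was matched. Keeping the original multi-group pattern, one stdlib-only sketch (not the only approach) is to drop the None placeholders that unmatched named groups leave in the split result, and read the group name from a separate Match object:

```python
import re

regex_patterns = (
    r"(?P<group1>345)",
    r"(?P<group2>123)",
)
p = re.compile("|".join(regex_patterns))

# Unmatched named groups show up as None in the split result; filter them out
parts = [piece for piece in p.split("012345", maxsplit=1) if piece is not None]
print(parts)  # ['0', '123', '45']

# The name of the group that matched can be read from a Match object
m = p.search("012345")
print(m.lastgroup)  # 'group2'
```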
regex|python-3.x|regex-group
1
1,901,258
49,457,930
JSON parsing with multiple nested levels
<p>We have source JSON with multiple nested levels that need to be flattened and then inserted into a relational table.</p> <p>The problem here is that we have multiple objects being returned with varying nested levels. We are looking to build a generic JSON parser that flattens any JSON and inserts into a table.</p> <p>For example, Type 1:</p> <pre><code>{ "a": 1, "b": 2, "c": 3, "d": [ { "a1": "i_1", "b1": "i_2" }, { "a1": "j_1", "b1": "j_2" } ] } </code></pre> <p>Type 2:</p> <pre><code>{ "a": 1, "b": 2, "d": [ { "a1": 1, "b1": 2, "c1": [ { "a2": 1 } ] } ] } </code></pre> <p>I want to design a blackbox where I just input the JSON and may be few parameters to flatten it out and then insert into corresponding tables for Type 1 and Type 2 Jsons. Is it possible to handle all possible cases within a python function</p> <p>This is sample output I need for Type 1 - </p> <pre><code>col a | col b | col c| col d_a1 | col d_b1 1 2 3 i_1 i_2 1 2 3 j_1 j_2 </code></pre>
<p>You need to make a recursive function.</p> <pre><code>table = {}  # collects the flattened columns; initialize before the first call

def recursive_object_to_table(obj, prefix=''):
    for key in obj:
        new_key = prefix + key
        if not isinstance(obj[key], dict):
            if new_key not in table:
                table[new_key] = []
            table[new_key].append(obj[key])
        else:
            recursive_object_to_table(obj[key], new_key + '_')
</code></pre>
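As a self-contained, runnable sketch of how this could be driven: note that `table` must be initialized as a module-level dict, and that list values (such as the "d" array in the question's Type 1 JSON) would need an extra branch to recurse into — this dict-only driver is for illustration.

```python
# `table` is a module-level dict; list values are appended as-is rather than
# recursed into (handling them would need an extra isinstance(list) branch).
table = {}

def recursive_object_to_table(obj, prefix=''):
    for key in obj:
        new_key = prefix + key
        if not isinstance(obj[key], dict):
            table.setdefault(new_key, []).append(obj[key])
        else:
            recursive_object_to_table(obj[key], new_key + '_')

recursive_object_to_table({"a": 1, "b": 2, "d": {"a1": "i_1", "b1": "i_2"}})
print(table)  # {'a': [1], 'b': [2], 'd_a1': ['i_1'], 'd_b1': ['i_2']}
```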
python|json
0
1,901,259
62,740,845
Finding the longest path in a network based on vector edge weights
<p>I have this road network with elevation data for every <code>POINT</code> and a calculated grade value for every <code>LINESTRING</code>: <a href="https://i.stack.imgur.com/MPLHi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MPLHi.png" alt="network" /></a></p> <blockquote> <p><strong>Note:</strong> points plotted on graph are my own they do not represent the nodes on the graph which include all the missing endpoints</p> </blockquote> <p>I have converted it to a networkx <code>MultiGraph</code> which I have generated from a <code>GeoPandasDataframe</code>:</p> <pre><code>seg_df = pd.DataFrame( {'grade': grades}) seg_grade = gpd.GeoDataFrame( seg_df, geometry=new_seg_list) network = momepy.gdf_to_nx(seg_grade, approach='primal') </code></pre> <p>in the code above <code>grades</code> is a list of integer grade values and new_seg_list is a list of <code>LINESTRING</code> objects which match with the indices of the grade list ex:</p> <blockquote> <p>the grade of <code>new_seg_list[0]</code> = <code>grades[0]</code></p> </blockquote> <p>this grade value is the <code>elevation_change</code> from <code>LINESTRING.coords[0]</code> to <code>LINESTRING.coords[-1]</code> divided by the length of the <code>LINESTRING</code>.</p> <p>My network object has the correct node and edge values so that functions such as</p> <pre><code>nx.shortest_path(G=Gr, source=start_node, target=end_node, weight='length') </code></pre> <p>works correctly. How do I find the longest path (only use each edge once) that is entirely downhill (negative grade from <code>LINESTRING.coords[0]</code> to <code>LINESTRING.coords[-1]</code>)?</p> <p>The main difficulties that I'm having are the fact that the <code>grade</code> values are from the start vertex to the end vertex of each <code>LINESTRING</code> which makes it hard to translate into the <code>networkx</code> graph. 
I still have the elevation data for each node, so if there is some way to calculate this grade as paths are tested, that might be the best way.</p>
<p>So, in general <a href="https://en.wikipedia.org/wiki/Longest_path_problem" rel="nofollow noreferrer">longest path problem is NP-hard</a>. However, it is solvable in linear time when the graph is a directed acyclic graph (DAG).</p> <p>For your case, I don't fully understand the problem, but perhaps you can build a DAG like this to produce your desired output:</p> <ul> <li>Only have an edge between nodes that are on different elevations, and make the direction from the one higher to lower.</li> </ul> <p>So then you'd have a DAG, where there are edges from higher <code>POINT</code>s to lower <code>POINT</code>s, and there you can find the longest path using <a href="https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.dag.dag_longest_path.html" rel="nofollow noreferrer">networkx.algorithms.dag.dag_longest_path</a> readily available in <code>networkx</code>.</p>
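To make the DAG idea concrete without the road data, here is a minimal stdlib-only sketch of longest path in a DAG via a topological order plus dynamic programming — essentially what networkx's <code>dag_longest_path</code> computes (networkx additionally supports edge weights, which you would want for lengths or grades):

```python
from collections import defaultdict, deque

def dag_longest_path(edges):
    """Longest path in a DAG by edge count: topological order + DP sketch."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))

    # Kahn's algorithm for a topological order
    queue = deque(n for n in nodes if indeg[n] == 0)
    topo = []
    while queue:
        n = queue.popleft()
        topo.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)

    # best[n] = length of the longest path ending at n; pred reconstructs it
    best = {n: 0 for n in nodes}
    pred = {}
    for n in topo:
        for m in succ[n]:
            if best[n] + 1 > best[m]:
                best[m] = best[n] + 1
                pred[m] = n

    end = max(best, key=best.get)
    path = [end]
    while path[-1] in pred:
        path.append(pred[path[-1]])
    return path[::-1]

print(dag_longest_path([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]))
# ['a', 'b', 'c', 'd']
```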
algorithm|networkx|graph-theory|geopandas|shapely
0
1,901,260
70,084,936
avoid writing df['column'] twice when doing df['column'] = df['column']
<p>I don't even know how to phrase this but is there a way in Python to reference the text before the equals without having to actually write it again?</p> <p>** EDIT - I'm using python3 in Jupyter</p> <p>I seem to spend half my life writing:</p> <pre><code>df['column'] = df['column'].some_changes </code></pre> <p>Is there a way to tell Python that I'm referencing the part before the equals sign?</p> <p>For example, I would write the following, where <code>&lt;%</code> is just to represent the reference to the text before the <code>=</code> (<code>df['column']</code>)</p> <pre><code>df['column'] = &lt;%.replace(np.nan) </code></pre>
<p>You are looking for <strong>in place</strong> methods. I believe you can pass <code>inplace=True</code> as an argument to most methods in pandas.</p> <p>So it would be something like:</p> <pre><code>df['column'].replace(np.nan, new_value, inplace=True)  # new_value: whatever should replace the NaNs
</code></pre> <p><strong>Edit</strong></p> <p>You could also do</p> <p><code>df[&quot;computed_column&quot;] = df[&quot;original_column&quot;].many_operations</code></p> <p>so you still have access to the original data down the line, and can do all the needed operations at once instead of saving each step.</p> <p>One of the advantages of <code>inplace</code> not being the default is that if a batch of operations fails midway, your data is not mangled.</p>
python|pandas|jupyter
1
1,901,261
33,464,973
Python equivalent of ArrayPlot from Mathematica?
<p>After much googling, I decided I should just ask this question.</p> <p>I would like the following functionality, but I can't figure out how to do it at all.</p> <p><a href="https://reference.wolfram.com/language/ref/ArrayPlot.html" rel="nofollow">https://reference.wolfram.com/language/ref/ArrayPlot.html</a></p> <p>Basically, I want to generate a grid map where the color of each pixel is specified by me.</p> <p>Thank you!</p>
<p>In the <code>matplotlib</code> package, there is a function <a href="https://matplotlib.org/api/pyplot_api.html?highlight=matplotlib%20pyplot%20imshow#matplotlib.pyplot.imshow" rel="nofollow noreferrer"><code>imshow</code></a> that displays an array as a grid of colored cells.</p>
python|numpy|matplotlib|plot|wolfram-mathematica
1
1,901,262
33,468,149
find max, min from a .txt-file which has multiple columns
<p>I have a lot of data in a <code>.txt</code> file. It looks something like this:</p> <pre><code># T1 T2 T3 T4 T5 T6 T7 T8 1 20.67 20.70 20.73 20.76 20.69 20.73 20.66 20.72 2 20.68 20.70 20.74 20.75 20.69 20.73 20.66 20.72 </code></pre> <p>I want to find the max/min. values using a Python script.</p> <p>First I tried to find what the maximum of the <code>T1</code> column is. This is my (very, very simple) code:</p> <pre><code>import numpy as np T1 = np.genfromtxt('data.txt', unpack=True) T1_max=np.maximum(T1) print("T1_max") </code></pre> <p>When I try to run it I receive error messages like these:</p> <pre><code>Line #7816 (got 10 columns instead of 1) Line #7817 (got 10 columns instead of 1) Line #7818 (got 10 columns instead of 1) Line #7819 (got 10 columns instead of 1) Line #7820 (got 10 columns instead of 1) Line #7821 (got 2 columns instead of 1) </code></pre> <p>(it starts with Line #2 (got 10..). It has to be the 'np.genfromtxt' function. What argument do I have to add to make it work? Or do you have any idea how to start an alternative script which puts out the max./min. values?</p> <p>Can anyone help me?</p>
<p>The problem here was not unpacking into 8 variables, one for each of the columns. Also, Python's built-in <code>max</code> function will do the job:</p> <pre><code>T1, T2, T3, T4, T5, T6, T7, T8 = np.genfromtxt('data.txt', unpack=True)
T1max = max(T1)
print(T1max)
</code></pre>
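For illustration only, the same per-column max/min logic can be sketched in pure Python on the two sample rows from the question — this just makes the column handling explicit (the header line starting with <code>#</code> is skipped, column 0 holds the row index, so T1 is column 1):

```python
data = """\
# T1 T2 T3 T4 T5 T6 T7 T8
1 20.67 20.70 20.73 20.76 20.69 20.73 20.66 20.72
2 20.68 20.70 20.74 20.75 20.69 20.73 20.66 20.72
"""

rows = [
    [float(v) for v in line.split()]
    for line in data.splitlines()
    if line.strip() and not line.startswith('#')
]
t1 = [row[1] for row in rows]  # column 1 is T1
print(max(t1), min(t1))  # 20.68 20.67
```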
numpy|max|minimum
1
1,901,263
73,614,632
Is there a way to reduce the resolution of xarray facetgrid plots?
<p>When plotting a single xarray I can use something like: <code>ds['variable'][::5,::5].plot()</code> to reduce the lat and lon resolution by a factor of 5. Is there a similar way to reduce the resolution when using facet grids?</p> <p>e.g., I'm currently plotting: <code>WL_monthly.plot(x='lon',y='lat',col='time',col_wrap=4)</code> but due to the high resoultion of the data it takes a couple of minutes to plot.</p> <p>I tried: <code>WL_monthly[::5,::5].plot(x='lon',y='lat',col='time',col_wrap=4)</code> but this slices the array's time entries.</p>
<p>When slicing positionally like this, the slice arguments are interpreted in the order of the array's dimensions. So you can inspect <code>WL_monthly.dims</code> to see the dimension ordering and then slice accordingly.</p> <p>For example, if your array has dimensions <code>(time, lat, lon)</code>, you could plot using:</p> <pre class="lang-py prettyprint-override"><code>WL_monthly[:, ::5, ::5].plot(x='lon', y='lat', col='time', col_wrap=4)
</code></pre> <p>You can also slice using named dimensions using <code>.sel</code> or using <code>.loc</code>, which allows you to specify dims using a dictionary and slice with a python <code>slice</code> object, e.g.:</p> <pre class="lang-py prettyprint-override"><code># using .sel()
WL_monthly.sel(lat=slice(None, None, 5), lon=slice(None, None, 5))

# using .loc[]
WL_monthly.loc[{'lat': slice(None, None, 5), 'lon': slice(None, None, 5)}]
</code></pre> <p>This is admittedly a bit clunky. The xarray docs on <a href="https://docs.xarray.dev/en/stable/user-guide/indexing.html#indexing-with-dimension-names" rel="nofollow noreferrer">indexing and selecting data</a> summarize the situation well:</p> <blockquote> <p>We would love to be able to do indexing with labeled dimension names inside brackets, but unfortunately, Python <a href="https://legacy.python.org/dev/peps/pep-0472/" rel="nofollow noreferrer">does yet not support</a> indexing with keyword arguments like <code>da[space=0]</code></p> </blockquote>
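As an aside, <code>slice(None, None, 5)</code> is an ordinary Python slice object, so its every-fifth-element meaning can be checked on a plain list:

```python
s = slice(None, None, 5)  # equivalent to the ::5 notation
print(list(range(12))[s])  # [0, 5, 10]
```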
python|python-xarray|facet-grid
1
1,901,264
73,690,107
Separating column string values with varying delimiters
<p>I have a column in a dataframe that I want to split into two columns. The values in the column are strings with a players' name followed by their position. Because players have different numbers of names, this becomes a bigger issue.</p> <p>For example:</p> <ul> <li>1 name: <code>Jorginho Defensive Midfield</code></li> <li>2 names: <code>Heung-min Son Left Winger</code></li> <li>3 names: <code>Bilal El Khannouss Attacking Midfield</code></li> </ul> <p>The desired output would be:</p> <pre><code>Player Position Jorginho Defensive Midfield Heung-min Son Left Winger Bilal El Khannouss Attacking Midfield </code></pre> <p>I believe this can be done by listing the player positions, however I don't know how to approach that problem. I tried separating using <code>split()</code> with a space character as the delimiter, but that doesn't work unfortunately.</p> <pre><code>import pandas as pd df = pd.DataFrame({'Player': ['Richarlison Centre-Forward', 'Heung-min Son Left Winger', 'Harry Wilson Right Winger', 'Bilal El Khannouss Attacking Midfield', 'Eduardo Camavinga Central Midfield', 'Jorginho Defensive Midfield', 'Lewis Patterson Centre-Back', 'Layvin Kurzawa Left-Back', 'Kyle Walker Right-Back', 'Jordan Pickford Goalkeeper']}) positions = ['Centre-Forward', 'Left Winger', 'Right Winger', 'Attacking Midfield', 'Central Midfield', 'Defensive Midfield', 'Centre-Back', 'Left-Back', 'Right-Back', 'Goalkeeper'] </code></pre> <p>Is this possible to do?</p>
<p>You can craft a regex.</p> <pre><code>import re

regex = '|'.join(map(re.escape, positions))
df['Player'].str.extract(fr'(.*)\s*({regex})')
</code></pre> <p><em>NB. changed <code>'Central Midfielder'</code> to <code>'Central Midfield'</code> in the list of positions.</em></p> <p>Another approach <strong>that does not require any list</strong> would be to extract the last 2 words (either separated by spaces, or a dash):</p> <pre><code>df['Player'].str.extract(r'(.*)\s(\w+(?:-|\s+)\w+)')
</code></pre> <p>output:</p> <pre><code>                    0                   1
0         Richarlison      Centre-Forward
1       Heung-min Son         Left Winger
2        Harry Wilson        Right Winger
3  Bilal El Khannouss  Attacking Midfield
4   Eduardo Camavinga    Central Midfield
5            Jorginho  Defensive Midfield
6     Lewis Patterson         Centre-Back
7      Layvin Kurzawa           Left-Back
8         Kyle Walker          Right-Back
9     Jordan Pickford          Goalkeeper
</code></pre>
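The second, list-free pattern can be checked with the stdlib re module alone (same regex as in the <code>.str.extract</code> call above):

```python
import re

pattern = re.compile(r'(.*)\s(\w+(?:-|\s+)\w+)')
for s in ['Heung-min Son Left Winger',
          'Bilal El Khannouss Attacking Midfield',
          'Richarlison Centre-Forward']:
    player, position = pattern.match(s).groups()
    print(player, '|', position)
# Heung-min Son | Left Winger
# Bilal El Khannouss | Attacking Midfield
# Richarlison | Centre-Forward
```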
python|pandas|string|dataframe|split
4
1,901,265
73,545,537
How to set matplotlib parameters using a file
<p>I am making a series of plots from several different scripts. I want to use the 'seaborn-bright' style with some minor changes.</p> <p>How can I efficiently apply the style and changes to all scripts, before creating the plots, without having to copy/paste the template to every script? Something where I can import the style+changes and automatically apply to every plot generated in the script.</p> <p>I guess I could create the plots and the apply a function to the fig, ax to clean them up, but I'd rather define things at the start.</p> <p>I also could save the style sheet for seaborn-bright and edit the definitions, but that seems tedious and it seems like there should be a better way.</p> <p>Example:</p> <pre><code>import matplotlib.pyplot as plt ### Template for all plots ### How do I have this as a separate file and just call/execute? plt.style.use(&quot;seaborn-bright&quot;) plt.rcParams[&quot;figure.figsize&quot;] = (3,2) plt.rcParams[&quot;figure.dpi&quot;] = 120 plt.rcParams[&quot;xtick.direction&quot;] = &quot;in&quot; # lots of other little things ### Example plot in a stand-alone script fig, ax = plt.subplots(1, 1) ax.plot([0, 1], [0, 1]) ax.plot([0.2, 0.8], [0.2, 0.2]) ax.plot([0.2, 0.8], [0.4, 0.4]) </code></pre>
<p>You can place all these features in a separate <code>py</code> file that is located in the same directory as your main code file (<code>ipynb</code>) and then call it to run with <code>%run -i Parameters.py</code> or whatever you want to call it. You can even put the <code>py</code> file in another folder; you just have to make the current working directory the folder where that file is located, e.g. with <code>os.chdir('Path/to/')</code> (note that <code>os.chdir</code> takes a directory, not the file itself).</p> <p>My <code>Parameters.py</code> file:</p> <pre class="lang-py prettyprint-override"><code>plt.style.use(&quot;seaborn-bright&quot;)
plt.rcParams[&quot;figure.figsize&quot;] = (3,2)
plt.rcParams[&quot;figure.dpi&quot;] = 120
plt.rcParams[&quot;xtick.direction&quot;] = &quot;in&quot;
</code></pre> <p>My <code>Main.ipynb</code> code:</p> <pre><code>import matplotlib.pyplot as plt
%run -i Parameters.py

plt.plot([1,2,3])
</code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/wMuR5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wMuR5.png" alt="enter image description here" /></a></p> <h2>~~ EDIT for Main.py instead of Main.ipynb ~~</h2> <p>If you are working with two <code>py</code> files, the best thing to do would be to just use <code>import YourParametersFileName</code> like any other module. Unfortunately, you will have to import your other modules in both <code>py</code> files (e.g. <code>import matplotlib.pyplot as plt</code> will have to be in both files). But Python spends no extra resources on the duplicate import: it sees that the module is already loaded, though each file still needs its own <code>plt</code> name definition.
So, name the parameters file something unique (so as not to mess with your Python environment by naming it something like <code>Numpy.py</code>) and you should be good to go:</p> <p><code>Parameters.py</code> code:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

plt.style.use(&quot;seaborn-bright&quot;)
plt.rcParams[&quot;figure.figsize&quot;] = (3,2)
plt.rcParams[&quot;figure.dpi&quot;] = 120
plt.rcParams[&quot;xtick.direction&quot;] = &quot;in&quot;
</code></pre> <p><code>Main.py</code> code:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import Parameters

plt.plot([1,2,3])
plt.show()
</code></pre>
python|matplotlib
1
1,901,266
24,882,720
Why SymPy can't solve quadratic equation with complicated coefficients
<p>SymPy can easily solve quadratic equations with short simple coefficients. For example:</p> <pre><code>from pprint import pprint from sympy import * x,b,f,Lb,z = symbols('x b f Lb z') eq31 = Eq((x*b + f)**2, 4*Lb**2*z**2*(1 - x**2)) pprint(eq31) sol = solve(eq31, x) pprint(sol) </code></pre> <p>But with a little bit larger coefficients - it's can't:</p> <pre><code>from pprint import pprint from sympy import * c3,b,f,Lb,z = symbols('c3 b f Lb z') phi,Lf,r = symbols('phi Lf r') eq23 = Eq( ( c3 * (2*Lb*b - 2*Lb*f + 2*Lb*r*cos(phi + pi/6)) + (Lb**2 - Lf**2 + b**2 - 2*b*f + 2*b*r*cos(phi + pi/6) + f**2 - 2*f*r*cos(phi + pi/6) + r**2 + z**2) )**2, 4*Lb**2*z**2*(1 - c3**2) ) pprint(eq23) print("\n\nSolve (23) for c3:") solutions_23 = solve(eq23, c3) pprint(solutions_23) </code></pre> <p>Why?</p>
<p>This is not specific to Sympy - other programs like Maple or Mathematica suffer from the same problem: when solving an equation, <code>solve</code> needs to choose a proper solution strategy (see e.g. <a href="http://docs.sympy.org/dev/modules/solvers/solvers.html" rel="nofollow">Sympy's Solvers</a>) based on assumptions about the variables and the structure of the equation. These choices are normally heuristic and often incorrect (hence no solution, or the wrong strategies are tried first). Furthermore, the assumptions on the variables are often too broad (e.g., complex instead of real).</p> <p>Thus, for complicated equations the solution strategy often has to be given by the user. For your example, you could use:</p> <pre><code>sol23 = roots(eq23.lhs - eq23.rhs, c3)
</code></pre>
python|sympy|solver|quadratic
1
1,901,267
24,726,469
Graphics on Linux without full-blown window manager?
<p>Is it possible to do graphics on Linux without installing and using a full blown window manager like Gnome/KDE etc?</p> <p>I am working on an embedded system with a touch screen and I just need to generate Python plots and perhaps have a few buttons to select which plots are displayed. There is no mouse or keyboard.</p> <p>I do not want to use a windowing system because that would be total overkill for this project. Is there any way I can just display my plots and buttons in fixed locations on the screen and be done with it? Platform is Debian Linux.</p>
<p>Yes, there are libs available. Years ago I used svgalib. Games like Quake used it as well.</p> <p><a href="http://www.svgalib.org" rel="nofollow">http://www.svgalib.org</a></p> <p>I may be behind the times, however, so I am not sure how current this alternative is. It seems a bit out of date.</p>
python|linux|user-interface|graphics|touchscreen
2
1,901,268
38,422,643
Call AWS lambda function from an existing lambda function on Python 2.7
<p>I'm trying to call another lambda function from an existing lambda function as below (python 2.7)</p> <pre><code>from __future__ import print_function import boto3 import json lambda_client = boto3.client('lambda') def lambda_handler(event, context): invoke_response = lambda_client.invoke(FunctionName="teststack", InvocationType='RequestResponse' ) print(invoke_response) return str(invoke_response) </code></pre> <p>I'm getting the response below instead of an actual result. When I run the teststack lambda individually it works fine, but I get the response below instead of the "test" returned by the <code>teststack</code> Lambda function.</p> <pre><code>{u'Payload': &lt;botocore.response.StreamingBody object at ****&gt;, 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': '******', 'HTTPHeaders': {'x-amzn-requestid': '******', 'content-length': '155', 'x-amzn-remapped-content-length': '0', 'connection': 'keep-alive', 'date': 'Sun, 17 Jul 2016 21:02:01 GMT', 'content-type': 'application/json'}}, u'StatusCode': 200} </code></pre>
<p>The response data you're looking for is there, it's just inside the <code>Payload</code> as a <a href="http://botocore.readthedocs.io/en/latest/reference/response.html#botocore.response.StreamingBody" rel="noreferrer">StreamingBody</a> object.</p> <p>According to the Boto docs, you can read the object using the <code>read</code> method:</p> <pre><code>invoke_response['Payload'].read() </code></pre>
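A sketch of unpacking such a payload (the streaming body is simulated here with <code>io.BytesIO</code>, since the real object comes from boto3; both expose a file-like <code>.read()</code>):

```python
import io
import json

# Stand-in for invoke_response['Payload']: botocore's StreamingBody
# also exposes a file-like .read() method, simulated here.
payload = io.BytesIO(b'{"result": "test"}')

# Read the raw bytes once, then parse them as JSON.
body = json.loads(payload.read())
print(body['result'])  # test
```

Note that a streaming body can only be read once; keep the parsed result if you need it in several places.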
python|python-2.7|aws-lambda
20
1,901,269
40,156,212
Class decorator not being called everytime
<p>A seemingly easy thing which I can't get around.</p> <pre><code>registry = {} def register(cls): registry[cls.__clsid__] = cls print cls return cls @register class Foo(object): __clsid__ = "123-456" def bar(self): pass c=Foo() d=Foo() e=Foo() </code></pre> <p>Output:</p> <pre><code>&lt;class '__main__.Foo'&gt; </code></pre> <p>Now I expect the decorator to be called <code>3</code> times. Why has it been called only <code>once</code>?</p>
<p>A class decorator is applied <em>when the class is created</em>, not each time an instance is created.</p> <p>The <code>@register</code> line applies to the <code>class Foo(object):</code> statement only. This is run just <em>once</em>, when the module is imported.</p> <p>Creating an instance does not need to re-run the class statement because instances are just objects that keep a reference to the class (<code>type(c)</code> returns the <code>Foo</code> class object); instances are not 'copies' of a class object.</p> <p>If you want to register <em>instances</em> you'll either have to do so in the <code>__init__</code> or the <code>__new__</code> method of a class (which can be decorated too). <code>__new__</code> is responsible for creating the instance, <code>__init__</code> is the hook called to initialise that instance.</p>
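If the goal really is to track every instance, the registration has to happen at instantiation time. One possible sketch (the list-per-clsid layout here is an assumption, not taken from the question):

```python
instance_registry = {}

class Foo(object):
    __clsid__ = "123-456"

    def __init__(self):
        # __init__ runs once per Foo() call, unlike a class
        # decorator, which runs once per class statement.
        instance_registry.setdefault(self.__clsid__, []).append(self)

c = Foo()
d = Foo()
e = Foo()
print(len(instance_registry["123-456"]))  # 3
```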
python|python-2.7|decorator|python-decorators
4
1,901,270
40,281,490
How to connect to a linux server only support xterm terminal
<p>I want to ssh to a server with the <code>paramiko</code> module, but when I do this, I get a server response like: Error: only support xterm terminal <img src="https://i.stack.imgur.com/mql4z.png" alt="enter image description here"></p> <p>Connect code:</p> <pre><code>ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect(hostname=blip,username=bluser,password=blpasswd) channel = ssh.invoke_shell() </code></pre> <p>How can I set the terminal type? My env: OS: windows 7, IDE: pycharm, python version: 3.4</p>
<p>According to <a href="http://jingege.me/2013/09/25/only-support-xterm-terminal/" rel="nofollow">this website</a>, you'd need to set the <code>TERM</code> environment variable to 'xterm'.</p> <p>However, according to the <a href="http://docs.paramiko.org/en/2.0/api/client.html#paramiko.client.SSHClient.invoke_shell" rel="nofollow">paramiko documentation</a>, you can tell <code>invoke_shell</code> to emulate a terminal type like so:</p> <pre><code>ssh.invoke_shell(term='xterm') </code></pre>
python|linux|ssh|paramiko|xterm
2
1,901,271
29,018,373
how can i compare string within a list to an integer number?
<p>My task today is to create a function which takes a list of strings and an integer number. If a string within the list is longer than the integer value, it is discarded and deleted from the list. This is what I have so far: </p> <pre><code>def main(L,n): i=0 while i&lt;(len(L)): if L[i]&gt;n: L.pop(i) else: i=i+1 return L #MAIN PROGRAM L = ["bob", "dave", "buddy", "tujour"] n = int (input("enter an integer value")) main(L,n) </code></pre> <p>So really what I'm trying to do here is to let the user enter a number to then be compared to the list of string values. For example, if the user enters the number 3, then dave, buddy, and tujour will be deleted from the list, leaving only bob to be printed at the end. </p> <p>Thanks a million!</p>
<p>Looks like you are doing too much here. Just return a list comprehension that makes use of the appropriate conditional.</p> <pre><code>def main(L,n): return([x for x in L if len(x) &lt;= n]) </code></pre>
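With the inputs from the question, the comprehension behaves like this:

```python
def main(L, n):
    # keep only the strings whose length does not exceed n
    return [x for x in L if len(x) <= n]

names = ["bob", "dave", "buddy", "tujour"]
print(main(names, 3))  # ['bob']
print(main(names, 5))  # ['bob', 'dave', 'buddy']
```

Note this returns a new list rather than mutating `L` in place, which also sidesteps the pop-while-iterating index bookkeeping in the original loop.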
python
1
1,901,272
8,546,870
Why does this socket connection only allow 1 send and receive?
<p><strong>Background</strong><br> I have a simple socket server setup that I am trying to allow simultaneous connections to and echo back the data. The client side launches several threads each making its own connection to the server. This works fine for the socket.send() call, but all subsequent calls cause either a "Connection reset by peer" or a "Broken pipe". Note that I have not found the change that toggles the reset and broken pipe. I have looked here on SO for a solution, but I'm afraid I may not know what to search for.</p> <p>Am I going about this in the wrong manner, or am I overlooking something in my setup?</p> <p><strong>Server</strong> </p> <pre><code>import SocketServer class MyTCPHandler(SocketServer.BaseRequestHandler): def handle(self): self.data = self.request.recv(1024).strip() print "{} wrote: {}\n".format(self.client_address[0], self.data) self.request.send(self.data.upper()) if __name__ == "__main__": HOST, PORT = "localhost", 9999 server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler) server.serve_forever() </code></pre> <p><strong>Client</strong> </p> <pre><code>import socket import sys import threading import time HOST, PORT = "localhost", 9999 def create_client(): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: cur_thread = threading.current_thread() sock.connect((HOST, PORT)) for x in range(55): msg = "{}: {}\n".format(cur_thread.name, str(x)) # Connect to server and send data print cur_thread.name + ": sending message\n" sock.send(msg) # Receive data from the server and shut down received = sock.recv(2048) print "RX:" + received finally: cur_thread = threading.current_thread() response = "{}: Closing!\n".format(cur_thread.name) print response sock.close() if __name__ == "__main__": print "testing single thread" #create_client() print "starting threads" client_1 = threading.Thread(target=create_client) client_1.daemon = True client_1.start() client_2 = threading.Thread(target=create_client) client_2.daemon = True 
client_2.start() time.sleep(20) </code></pre>
<p>When you return from <code>handle</code> the socket is closed. Use a while loop and return from <code>handle</code> only when <code>self.data == ''</code>. <code>recv</code> returns zero bytes when the client closes the connection. Also don't <code>strip()</code> the result until after testing the return value or you could get a false close. Finally, use <code>ThreadingTCPServer</code> or the server can only handle one connection at a time.</p> <p>Example:</p> <pre><code>import SocketServer class MyTCPHandler(SocketServer.BaseRequestHandler): def handle(self): while True: self.data = self.request.recv(1024) if self.data == '': break self.data = self.data.strip() print "{} wrote: {}\n".format(self.client_address[0], self.data) self.request.send(self.data.upper()) if __name__ == "__main__": HOST, PORT = "localhost", 9999 server = SocketServer.ThreadingTCPServer((HOST, PORT), MyTCPHandler) server.serve_forever() </code></pre> <p>Also note the <code>send()</code> is not guaranteed to send all bytes of message, so use <code>sendall()</code> or check the return value. <code>recv()</code> can also be tricky. TCP/IP is a streaming protocol and has no concept of message boundaries, so it is up to you to implement a protocol to check that you have received a complete message. It is possible to send 10000 bytes and receive less than that, requiring multiple receives to get the whole message. It is also possible to make two sends and receive both in one receive, or even all of one send and part of another. For your example simply buffering all receives until there is a <code>\n</code> in the message would do for a simple protocol.</p>
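The buffering idea from the last paragraph can be sketched without any sockets: the helper below accumulates received chunks and splits off complete newline-terminated messages (a simplified illustrative protocol, shown in Python 3 syntax rather than the question's Python 2):

```python
def extract_messages(buffer, chunk):
    """Append a received chunk and split off complete
    newline-terminated messages; returns (messages, leftover)."""
    buffer += chunk
    # everything before the last '\n' is complete; the tail is kept
    *messages, leftover = buffer.split('\n')
    return messages, leftover

buf = ''
msgs, buf = extract_messages(buf, 'Thread-1: 0\nThread-1: 1\nThr')
print(msgs)  # ['Thread-1: 0', 'Thread-1: 1']
msgs, buf = extract_messages(buf, 'ead-1: 2\n')
print(msgs)  # ['Thread-1: 2']
```

Each `recv()` result would be fed through this helper, so a message split across two receives (or two messages arriving in one receive) is handled correctly.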
python|sockets|tcp|socketserver|broken-pipe
8
1,901,273
8,436,212
mpylayer, PyQt4 and lineEdit
<p>Consider the minimal example below. It works perfectly until I uncomment the following lines: </p> <pre><code># self.mainwi = QtGui.QWidget(self) # self.lineEdit1 = QtGui.QLineEdit(self.mainwi) # self.setCentralWidget(self.lineEdit1) </code></pre> <p>If those lines are uncommented, I can write text in the LineEdit-field, but the buttons don't react. Any idea what's wrong with it, how to fix this?</p> <p>I should add that I am an absolute beginner in programming python.</p> <pre><code>#!/usr/bin/python import mpylayer import sys from PyQt4 import QtCore from PyQt4 import QtGui class DmplayerGUI(QtGui.QMainWindow): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.dirty = False self.mp = mpylayer.MPlayerControl() #Toolbar ## items ### Play self.play = QtGui.QAction(QtGui.QIcon('icons/play_32.png'), 'Play', self) self.play.setShortcut('Ctrl+A') self.connect(self.play, QtCore.SIGNAL('triggered()'), self.DPlay) ### Pause self.pause = QtGui.QAction(QtGui.QIcon('icons/pause_32.png'), 'Pause', self) self.pause.setShortcut('Ctrl+P') self.connect(self.pause, QtCore.SIGNAL('triggered()'), self.DPause) ## toolbar self.toolbar = self.addToolBar('Toolbar') self.toolbar.addAction(self.play) self.toolbar.addAction(self.pause) # self.mainwi = QtGui.QWidget(self) # self.lineEdit1 = QtGui.QLineEdit(self.mainwi) # self.setCentralWidget(self.lineEdit1) # play def DPlay(self): self.mp.loadfile('video.mp4') # pause def DPause(self): self.mp.pause(self) if __name__ == "__main__": app = QtGui.QApplication(sys.argv) dp = DmplayerGUI() dp.show() sys.exit(app.exec_()) </code></pre>
<p>You do not need the mainwi at all in this simple example. Just do</p> <pre><code>self.lineEdit1 = QtGui.QLineEdit(self) self.setCentralWidget(self.lineEdit1) </code></pre> <p>In case you really want it, you need to set the mainwi as the central widget</p> <pre><code>self.mainwi = QtGui.QWidget(self) self.lineEdit1 = QtGui.QLineEdit(self.mainwi) self.setCentralWidget(self.mainwi) </code></pre> <p>Do not forget to add some layout for mainwi, since this looks ugly :-)</p> <p>Anyway, I have to admit that I do not know exactly why it "disables" the buttons. But the central widget has to be a child of the window as far as I know.</p>
python|pyqt4|mplayer
2
1,901,274
52,144,364
Draw a line between points to compare, plot and analyze using python
<p><img src="https://i.stack.imgur.com/FjOll.png" alt="Sample image to be processed"></p> <p>I want to draw a line from each white spot to the green rectangle-like marker beside it (the green ones serve as the reference and the spots are the distorted ones); I would like to get the difference between them and plot it (surface plot, etc.)</p> <p>Also, I used this code to detect the spots:</p> <pre><code>if circles is not None: circles = np.round(circles[0,:].astype("int")) for (x,y,r) in circles: cv2.circle(output2, (x,y),r,(0,255,0),2) </code></pre> <p><img src="https://i.stack.imgur.com/ctFNI.png" alt="Sample image with detected spots"></p> <p>How can I know the center of each spot, and how can I use the centers as an array/list? Just to begin with, because based on what I read, these are necessary to plot the shape. Thanks</p>
<p>Here is a solution using <code>scikit-image</code> Hough-transform. Using the following code you can detect the circles, find the centers and radii (you can use <code>cv2</code>'s corresponding functions in the same way):</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from skimage import data, color, io from skimage.transform import hough_circle, hough_circle_peaks from skimage.feature import canny from skimage.draw import circle_perimeter from skimage.util import img_as_ubyte image = color.rgb2gray(img_as_ubyte(io.imread('new images/FjOll.png'))) edges = canny(image, sigma=1) hough_radii = [6] # detect circles of radius 6 hough_res = hough_circle(edges, hough_radii) # select most prominent 25 circles accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii, total_num_peaks=20) # Draw circles fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4)) image = color.gray2rgb(image) for center_y, center_x, radius in zip(cy, cx, radii): circy, circx = circle_perimeter(center_y, center_x, radius) print(center_y, center_x, radius) image[circy, circx] = (255, 0, 0) ax.imshow(image) plt.show() ## detected circles: (center_y, center_x, radius) # (171, 103, 6) # (56, 38, 6) # (16, 99, 6) # (141, 128, 6) # (126, 32, 6) # (95, 159, 6) # (120, 90, 6) # (56, 96, 6) # (57, 157, 6) # (120, 158, 6) # (140, 62, 6) # (108, 64, 6) # (77, 64, 6) # (42, 68, 6) # (106, 130, 6) # (73, 128, 6) # (38, 127, 6) # (75, 130, 6) # (88, 38, 6) # (86, 93, 6) </code></pre> <p><a href="https://i.stack.imgur.com/qWEcR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qWEcR.png" alt="enter image description here"></a></p>
python|image-processing|scikit-image|cv2
1
1,901,275
51,899,612
Mock the result of accessing public GCS bucket
<p>I have the following code:</p> <pre><code>bucket = get_bucket('bucket-name') blob = bucket.blob(os.path.join(*pieces)) blob.upload_from_string('test') blob.make_public() result = blob.public_url # result is `&lt;Mock name='mock().get_bucket().blob().public_url` </code></pre> <p>And I would do like to mock the result of <em>public_url</em>, my unit test code is something like this</p> <pre><code>with ExitStack() as st: from google.cloud import storage blob_mock = mock.Mock(spec=storage.Blob) blob_mock.public_url.return_value = 'http://' bucket_mock = mock.Mock(spec=storage.Bucket) bucket_mock.blob.return_value = blob_mock storage_client_mock = mock.Mock(spec=storage.Client) storage_client_mock.get_bucket.return_value = bucket_mock st.enter_context( mock.patch('google.cloud.storage.Client', storage_client_mock)) my_function() </code></pre> <p>Is there something like <a href="https://github.com/guilleiguaran/fakeredis" rel="noreferrer">FakeRedis</a> or <a href="https://github.com/spulec/moto" rel="noreferrer">moto</a> for Google Storage, so I can mock <code>google.cloud.storage.Blob.public_url</code>?</p>
<p>I found this <a href="https://github.com/fsouza/fake-gcs-server/tree/main/examples/python" rel="nofollow noreferrer">fake gcs server</a> written in Go which can be run within a Docker container and consumed by the Python library. See <a href="https://github.com/fsouza/fake-gcs-server/tree/main/examples/python" rel="nofollow noreferrer">Python examples</a>.</p>
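If a pure in-process mock is enough, note that <code>public_url</code> is read as a plain attribute (a property on the real <code>Blob</code>), so configuring it with <code>.return_value</code> as in the question will not work; set the attribute directly. A minimal stdlib-only sketch of that idea (the URL value is made up):

```python
from unittest import mock

blob_mock = mock.Mock()
# public_url is accessed as an attribute, so assign it directly;
# blob_mock.public_url.return_value would only affect a
# blob_mock.public_url() *call*, which never happens.
blob_mock.public_url = 'https://storage.googleapis.com/bucket-name/test'

bucket_mock = mock.Mock()
bucket_mock.blob.return_value = blob_mock

print(bucket_mock.blob('some/path').public_url)
```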
google-api|mocking|google-cloud-storage|python-unittest|google-cloud-python
2
1,901,276
59,685,140
Python: perform blur only within a mask of image
<p>I have a greyscale image and a binary mask of an ROI in that image. I would like to perform a blur operation on the greyscale image but only within the confines of the mask. Right now I'm blurring the whole image and than just removing items outside the mask, but I don't want pixels outside of the mask affecting my ROI. Is there a way to do this without building a custom blur function?</p> <p>hoping for something like:</p> <pre><code>import scipy blurredImage = scipy.ndimage.filters.gaussian_filter(img, sigma = 3, weight = myMask) </code></pre> <p>@stefan:</p> <pre><code>blur = 3 invmask = np.logical_not(mask).astype(int) masked = img * mask remaining = img * invmask blurred = scipy.ndimage.filters.gaussian_filter(masked, sigma = blur) blurred = blurred+remaining </code></pre> <p>Dilate approach:</p> <pre><code>blur = 3 invmask = np.logical_not(mask).astype(int) masked = img * mask masked2 = scipy.ndimage.morphology.grey_dilation(masked,size=(5,5)) masked2 = masked2 *invmask masked2 = masked + masked2 blurred = scipy.ndimage.filters.gaussian_filter(masked2, sigma = blur) </code></pre>
<p>The right approach to apply a linear filter to a limited domain is to use <a href="http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/PIRODDI1/NormConv/NormConv.html" rel="noreferrer">Normalized Convolution</a>. This method computes (weighted) means within each neighborhood, then normalizes by the (weighted) number of pixels present in that neighborhood. It does so using only two applications of the filter and some trivial per-pixel operations:</p> <pre class="lang-py prettyprint-override"><code># normalized convolution of image with mask filter = scipy.ndimage.filters.gaussian_filter(img * mask, sigma = blur) weights = scipy.ndimage.filters.gaussian_filter(mask, sigma = blur) filter /= weights # after normalized convolution, you can choose to delete any data outside the mask: filter *= mask </code></pre> <p>Note that <code>mask</code> doesn't need to be just 0 and 1, it can contain intermediate values indicating how "certain" you are of the correctness of that pixel's value. But typically it's just 0 for "missing data" and 1 for available data.</p> <p><code>gaussian_filter</code> must do its computations in a floating-point format and return an floating-point-valued image. Integer operations will not do the correct thing here.</p> <hr> <p>Here's an example:</p> <p><a href="https://i.stack.imgur.com/pZ5lx.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pZ5lx.png" alt="enter image description here"></a></p> <ul> <li><p>2nd image: Plain filtering, then removing the stuff outside the mask. This shows that the data outside the mask influences the result of the filtering.</p></li> <li><p>3rd image: Plain filtering, but setting stuff outside the mask to zero first. This shows that the zeros outside the mask influence the result of the filtering.</p></li> <li><p>4th image: Using normalized convolution: the data outside the masked area does not affect the filtering at all.</p></li> </ul>
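The same normalization trick is easy to inspect in one dimension with a plain box kernel (a toy illustration of the principle, not the Gaussian case above):

```python
import numpy as np

data = np.array([1.0, 2.0, 0.0, 4.0])  # third sample is missing
mask = np.array([1.0, 1.0, 0.0, 1.0])  # 0 marks missing data
kernel = np.ones(3)                    # simple box filter

# normalized convolution: filter the masked data, then divide by
# the filtered mask (the effective number of samples per window)
num = np.convolve(data * mask, kernel, mode='same')
den = np.convolve(mask, kernel, mode='same')
result = num / den

print(result)  # the gap is filled with the mean of its neighbours, 3.0
```

The zero stored at the missing sample never biases the output, because the denominator drops by exactly the weight that sample would have contributed.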
python|image-processing|filter|scipy|gaussian
6
1,901,277
19,283,067
Selenium Webdriver with Python - driver.title parameter
<p>I'm new to Python and Selenium. How is the driver.title parameter derived? Below is a simple webdriver script. <strong>How do you find what other driver.x parameters</strong> there are to use with the <a href="http://docs.python.org/2/library/unittest.html" rel="nofollow noreferrer">various asserts in the unittest module</a>?</p> <pre><code>import unittest from selenium import webdriver from selenium.webdriver.common.keys import Keys class PythonOrgSearch(unittest.TestCase): def setUp(self): self.driver = webdriver.Firefox() def test_search_in_python_org(self): driver = self.driver driver.get(&quot;http://www.python.org&quot;) self.assertIn(&quot;Python&quot;, driver.title) elem = driver.find_element_by_name(&quot;q&quot;) elem.send_keys(&quot;selenium&quot;) elem.send_keys(Keys.RETURN) self.assertIn(&quot;Google&quot;, driver.title) def tearDown(self): self.driver.close() if __name__ == &quot;__main__&quot;: unittest.main() </code></pre>
<p>I'm not sure what you are asking here.</p> <p>Other driver.x parameters can be found in <a href="http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.remote.webdriver" rel="noreferrer">documentation</a> or <a href="https://github.com/SeleniumHQ/selenium/blob/master/py/selenium/webdriver/remote/webdriver.py" rel="noreferrer">source code</a>. </p> <pre><code># Generally I found the following might be useful for verifying the page: driver.current_url driver.title # The following might be useful for verifying the driver instance: driver.name driver.orientation driver.page_source driver.window_handles driver.current_window_handle driver.desired_capabilities </code></pre>
python-2.7|selenium|selenium-webdriver|parameters
28
1,901,278
19,216,549
Python: Converting a string to an integer
<p>I need to convert a string from a file that I have into an integer. The string in question is just one number.</p> <pre><code>L= linecache.getline('data.txt', 1) L=int(L) print L </code></pre> <p>I receive the error: </p> <pre><code>ValueError: invalid literal for int() with base 10: '\xef\xbb\xbf3\n' </code></pre> <p>How do I convert this string into an integer?</p>
<p>The file contains an UTF-8 BOM.</p> <pre><code>&gt;&gt;&gt; import codecs &gt;&gt;&gt; codecs.BOM_UTF8 '\xef\xbb\xbf' </code></pre> <p><a href="http://docs.python.org/2/library/linecache.html#linecache.getline" rel="noreferrer"><code>linecache.getline</code></a> does not support encoding.</p> <p>Use <a href="http://docs.python.org/2/library/codecs.html#codecs.open" rel="noreferrer"><code>codecs.open</code></a>:</p> <pre><code>with codecs.open('data.txt', encoding='utf-8-sig') as f: L = next(f) L = int(L) print L </code></pre>
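In Python 3 the same fix is the <code>utf-8-sig</code> codec, which strips the BOM during decoding (the byte string below is the exact value from the traceback):

```python
raw = b'\xef\xbb\xbf3\n'  # the bytes from the traceback

# plain utf-8 keeps the BOM as '\ufeff'; utf-8-sig strips it
text = raw.decode('utf-8-sig')
print(int(text))  # 3  (int() tolerates the trailing newline)
```

`int()` accepts surrounding whitespace, so stripping the newline explicitly is optional here.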
python|string|int
5
1,901,279
18,834,393
Python XML File Open
<p>I am trying to open an xml file and parse it, but when I try to open it, the file never seems to open at all; it just keeps running. Any ideas?</p> <pre><code>from xml.dom import minidom Test_file = open('C::/test_file.xml','r') xmldoc = minidom.parse(Test_file) Test_file.close() for i in xmldoc: print('test') </code></pre> <p>The file is 180.288 KB, why does it never make it to the print portion?</p>
<p><strong>Running your Python code with a few adjustments:</strong></p> <pre><code>from xml.dom import minidom Test_file = open('C:/test_file.xml','r') xmldoc = minidom.parse(Test_file) Test_file.close() def printNode(node): print node for child in node.childNodes: printNode(child) printNode(xmldoc.documentElement) </code></pre> <p><strong>With this sample input as test_file.xml:</strong></p> <pre><code>&lt;a&gt; &lt;b&gt;testing 1&lt;/b&gt; &lt;c&gt;testing 2&lt;/c&gt; &lt;/a&gt; </code></pre> <p><strong>Yields this output:</strong></p> <pre><code>&lt;DOM Element: a at 0xbc56e8&gt; &lt;DOM Text node "u'\n '"&gt; &lt;DOM Element: b at 0xbc5788&gt; &lt;DOM Text node "u'testing 1'"&gt; &lt;DOM Text node "u'\n '"&gt; &lt;DOM Element: c at 0xbc5828&gt; &lt;DOM Text node "u'testing 2'"&gt; &lt;DOM Text node "u'\n'"&gt; </code></pre> <p><strong>Notes:</strong></p> <ul> <li>As @LukeWoodward mentioned, avoid DOM-based libraries for large inputs, however 180K should be fine. For 180M, control may never return from <code>minidom.parse()</code> without running out of memory first (MemoryError).</li> <li>As @alecxe mentioned, you should eliminate the extraneous ':' in the file spec. You should have seen error output along the lines of <code>IOError: [Errno 22] invalid mode ('r') or filename: 'C::/test_file.xml'</code>.</li> <li>As @mzjn mentioned, <code>xml.dom.minidom.Document</code> is not iterable. You should have seen error output along the lines of <code>TypeError: iteration over non-sequence</code>.</li> </ul>
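The same traversal works without a file on disk via <code>parseString</code>, which makes the fix easy to try out (Python 3 syntax; the whitespace-free sample input means there are no intervening text nodes):

```python
from xml.dom import minidom

# parseString avoids the need for a file on disk
xmldoc = minidom.parseString("<a><b>testing 1</b><c>testing 2</c></a>")

# the Document itself is not iterable; walk its element tree instead
texts = [child.firstChild.data
         for child in xmldoc.documentElement.childNodes]
print(texts)  # ['testing 1', 'testing 2']
```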
python|xml
12
1,901,280
62,128,814
Indentation error in VS Code for Python script
<p>I'm trying to run the following script in VS Code; however, I keep getting an indentation error, which is kinda self-explanatory, but I haven't been able to find the error. Is there a way to auto-format in VS Code? </p> <pre><code>import logging import uuid import json import azure.functions as func def main(msg: func.QueueMessage, message: func.Out[func.Document]) -&gt; None: logging.info('Python queue trigger function processed a queue item: %s', msg_body) data = json.dumps({ 'id': msg.id, 'body': msg.get_body().decode('utf-8'), 'expiration_time': (msg.expiration_time.isoformat() if msg.expiration_time else None), 'insertion_time': (msg.insertion_time.isoformat() if msg.insertion_time else None), 'time_next_visible': (msg.time_next_visible.isoformat() if msg.time_next_visible else None), 'pop_receipt': msg.pop_receipt, 'dequeue_count': msg.dequeue_count }) message.set(func.Document.from_json(json.dumps(data))) </code></pre> <p>My error message when I run the script:</p> <pre><code>[Running] python -u "c:\Users\artem\Desktop\function\inspariqueuestore\__init__.py" File "c:\Users\artem\Desktop\function\inspariqueuestore\__init__.py", line 12 data = json.dumps({ ^ IndentationError: unexpected indent [Done] exited with code=1 in 0.093 seconds </code></pre> <p><strong>UPDATE</strong></p> <p>I mixed tabs and spaces apparently. Issue resolved.</p>
<p>I don't see any indentation errors in your file.</p> <p>Yes, you can try to format your file using VS Code. There is a shortcut named <code>editor.action.formatDocument</code>. You can find the exact keys for your setup in settings.</p> <p>In my case it is <code>shift + option (alt) + F</code></p>
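Since the update says the cause was mixed tabs and spaces, a small helper can flag such lines before Python complains (a hypothetical checker for illustration, not a VS Code feature):

```python
def mixed_indent_lines(source):
    """Return the 1-based numbers of lines whose leading
    whitespace mixes tabs and spaces."""
    bad = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # leading whitespace is whatever lstrip() would remove
        indent = line[:len(line) - len(line.lstrip())]
        if ' ' in indent and '\t' in indent:
            bad.append(lineno)
    return bad

snippet = "def f():\n    x = 1\n\t y = 2\n"
print(mixed_indent_lines(snippet))  # [3]
```

Enabling "Render Whitespace" in VS Code makes the same problem visible directly in the editor.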
python|visual-studio-code
0
1,901,281
67,297,105
Separating a column into multiple columns using Pandas
<p>I am trying to use Pandas to load datasets and display them in tabular form. But I'm not sure why it can't be separated using delimiters. Does anyone know?</p> <p>This is the output I got: <a href="https://i.stack.imgur.com/Yc8iT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yc8iT.png" alt="This is the output got by me." /></a></p> <p>My expected output is something like this: <a href="https://i.stack.imgur.com/xgl1I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xgl1I.png" alt="enter image description here" /></a></p> <p>The dataset that I used: <a href="https://www.kaggle.com/tunguz/big-five-personality-test" rel="nofollow noreferrer">https://www.kaggle.com/tunguz/big-five-personality-test</a></p>
<p>As per comments this works for me. One way to avoid manual downloading issues is to automate download</p> <ol> <li><code>pip3 install kaggle</code></li> <li>place <strong>kaggle.json</strong> as directed by CLI</li> <li>can then use following code to download Kaggle data in jupyter</li> </ol> <pre><code>import kaggle.cli import sys from pathlib import Path if not Path.cwd().joinpath(&quot;IPIP-FFM-data-8Nov2018/data-final.csv&quot;).exists(): sys.argv = [sys.argv[0]] + &quot;datasets download tunguz/big-five-personality-test --unzip&quot;.split(&quot; &quot;) kaggle.cli.main() pd.read_csv(Path.cwd().joinpath(&quot;IPIP-FFM-data-8Nov2018/data-final.csv&quot;), sep=&quot;\t&quot;) </code></pre>
python|pandas|dataframe
1
1,901,282
63,530,645
How to Index A Search Based Off More Than One Column Using Pandas
<p>I am having an issue with indexing a user input to search multiple columns. Here is my code</p> <pre><code>Searched_Multicast_Row_Location = excel_data_df_Sheet_1[excel_data_df_Sheet_1['Zixi Multicast'] == Group.get()].index print(Searched_Multicast_Row_Location) </code></pre> <p>This works great, but the problem is, the user may input a value that is in a different column and I would like to index that as well. I tried the following</p> <pre><code>Searched_Multicast_Row_Location = excel_data_df_Sheet_1[excel_data_df_Sheet_1['Zixi Multicast','Gateway Card Multicast'] == Group.get()].index print(Searched_Multicast_Row_Location) </code></pre> <p>I am hoping I can store the index of either into a single var</p> <p>I receive the following error:</p> <pre><code>Exception in Tkinter callback Traceback (most recent call last): File &quot;C:\Users\206415779\Anaconda3\envs\FINDIT\lib\site-packages\pandas\core\indexes\base.py&quot;, line 2889, in get_loc return self._engine.get_loc(casted_key) File &quot;pandas\_libs\index.pyx&quot;, line 70, in pandas._libs.index.IndexEngine.get_loc File &quot;pandas\_libs\index.pyx&quot;, line 97, in pandas._libs.index.IndexEngine.get_loc File &quot;pandas\_libs\hashtable_class_helper.pxi&quot;, line 1675, in pandas._libs.hashtable.PyObjectHashTable.get_item File &quot;pandas\_libs\hashtable_class_helper.pxi&quot;, line 1683, in pandas._libs.hashtable.PyObjectHashTable.get_item KeyError: ('Zixi Multicast', 'Gateway Card Multicast') </code></pre> <p>The above exception was the direct cause of the following exception:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\206415779\Anaconda3\envs\FINDIT\lib\tkinter\__init__.py&quot;, line 1883, in __call__ return self.func(*args) File &quot;C:/Users/206415779/Python/FINDIT/FINDIT START&quot;, line 221, in Okay Searched_Multicast_Row_Location = excel_data_df_Sheet_1[excel_data_df_Sheet_1['Zixi Multicast','Gateway Card Multicast'] == Group.get()].index File 
&quot;C:\Users\206415779\Anaconda3\envs\FINDIT\lib\site-packages\pandas\core\frame.py&quot;, line 2899, in __getitem__ indexer = self.columns.get_loc(key) File &quot;C:\Users\206415779\Anaconda3\envs\FINDIT\lib\site-packages\pandas\core\indexes\base.py&quot;, line 2891, in get_loc raise KeyError(key) from err **KeyError: ('Zixi Multicast', 'Gateway Card Multicast')** </code></pre>
<p>Here is what got me what I needed. This searches the user input from &quot;Group.get()&quot; and queries multiple columns, then it indexes the row # so I can grab data from that specific row. Hope this helps someone down the road. Note that the Python <code>or</code> operator cannot be used on pandas Index objects (it would need a single truth value), so use the set-union operator <code>|</code> instead.</p> <pre><code>Searched_Multicast_Row_Location = excel_data_df_Sheet_1[excel_data_df_Sheet_1['Zixi Multicast'] == Group.get()].index | excel_data_df_Sheet_1[excel_data_df_Sheet_1['Gateway Card Multicast'] == Group.get()].index print(Searched_Multicast_Row_Location) </code></pre>
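A small reproducible sketch of the same idea (the column names match the question, but the values and the target are made up). Note that on recent pandas versions the explicit <code>.union()</code> method is the safe spelling, since <code>|</code> on an integer Index was repurposed for element-wise bitwise or:

```python
import pandas as pd

df = pd.DataFrame({
    'Zixi Multicast':         ['239.0.0.1', '239.0.0.2', '239.0.0.3'],
    'Gateway Card Multicast': ['239.0.0.9', '239.0.0.1', '239.0.0.8'],
})
target = '239.0.0.1'

# Index.union merges the matching row labels from both columns
rows = (df[df['Zixi Multicast'] == target].index
        .union(df[df['Gateway Card Multicast'] == target].index))
print(sorted(rows))  # [0, 1]
```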
python|excel|pandas|tkinter
0
1,901,283
13,575,622
AttributeError: 'str' object has no attribute 'slice'
<p>I'm writing a Sublime2 plugin and fighting a bit.</p> <p>Code is:</p> <pre><code> def run(self, edit): self.edit = edit self.view.window().show_input_panel("New Controller and View Path (ex: client_area/index )", "", self.trigger, None, None) def trigger(self, user_entry): formatted_entry = user_entry.encode('utf-8') print formatted_entry.__class__ print formatted_entry if formatted_entry.slice('/')[0] == '': #some code </code></pre> <p>Output is:</p> <pre><code>&lt;type 'str'&gt; client_area/index Traceback (most recent call last): File "./PluginName.py", line 27, in trigger AttributeError: 'str' object has no attribute 'slice' </code></pre> <p>How is it I get <code>'str' object has no attribute 'slice'</code> ? (Python version is 2.6)</p>
<p>Strings don't have a <code>slice</code> method in Python - did you mean <a href="http://docs.python.org/2/library/stdtypes.html#str.split" rel="noreferrer"><code>split</code></a> (or some variation thereof, such as <a href="http://docs.python.org/2/library/string.html#string.rsplit" rel="noreferrer"><code>rsplit</code></a>)?</p>
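For the check in the question, <code>split</code> does exactly what was intended:

```python
formatted_entry = 'client_area/index'

# split, not slice: break the string on '/'
parts = formatted_entry.split('/')
print(parts)            # ['client_area', 'index']
print(parts[0] == '')   # False: no leading '/' in the input
```

An input such as `'/client_area/index'` would yield an empty first element, which is what the `slice('/')[0] == ''` test was presumably trying to detect.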
python
7
1,901,284
54,540,452
Problems with downloading data from a FTP website using python
<p>I am trying to download data from the FTP site &quot;<a href="ftp://nais.ec.gc.ca" rel="nofollow noreferrer">ftp://nais.ec.gc.ca</a>&quot; with Python 2.7. I have tried other FTP sites like &quot;<a href="ftp://test.rebex.net/" rel="nofollow noreferrer">ftp://test.rebex.net/</a>&quot; and <a href="ftp://speedtest.tele2.net" rel="nofollow noreferrer">ftp://speedtest.tele2.net</a> and they come up with the same error.</p> <p>I have the password and username and I know they work.</p> <pre><code>from ftplib import FTP
ftp = FTP("ftp://nais.ec.gc.ca")
ftp.login("Username","password")
</code></pre> <p>The error I get is below:</p> <blockquote> <p>[Errno 11001] getaddrinfo failed</p> </blockquote> <p>I have also tried the <code>urllib</code> functions, and it seems like they can log in, but I can't download anything or access the correct directories. </p>
<p>Your code looks like this:</p> <blockquote> <p>ftp = FTP("<a href="ftp://nais.ec.gc.ca" rel="nofollow noreferrer">ftp://nais.ec.gc.ca</a>")</p> </blockquote> <p>But in <a href="https://docs.python.org/3/library/ftplib.html#ftplib.FTP" rel="nofollow noreferrer">the documentation</a> you'll find:</p> <blockquote> <p>class ftplib.FTP(host='', ... <br> ... When host is given, the method call connect(host) is made. </p> </blockquote> <p>Thus, the first argument is a host name, not a URL. It should be just <code>nais.ec.gc.ca</code>, not <code>ftp://nais.ec.gc.ca</code>.</p>
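If you start from a URL string, one way to get the bare host name `ftplib` expects is to parse it first. A Python 3 sketch (the actual connection lines are commented out since they need network access and real credentials):

```python
from urllib.parse import urlparse
from ftplib import FTP

url = "ftp://nais.ec.gc.ca"
host = urlparse(url).hostname
print(host)  # nais.ec.gc.ca

# With the bare host name, the original login code would work:
# ftp = FTP(host)
# ftp.login("Username", "password")
```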
python|ftp|urllib|ftplib
1
1,901,285
71,099,952
How to decode morse code in a more pythonic way
<p>I have made a morse_code decoder in python as an assignment as shown below. It handles all characters available in morse_code. Although my approach works, it feels like a very amateurish way of doing things in python. The format in which morse code is sent:</p> <ol> <li>Characters are encoded by substituting them with combinations of '.' and '-'</li> <li>Characters are separated with &quot; &quot; (whitespace)</li> <li>Words are separated with &quot; &quot; (triple whitespace)</li> </ol> <p>In my code below I create an empty list, which get filled with a list in which each item represents a single morse code character, that is then replaced with the actual character. Finally, the lists within the list are joined, and the resulting list is joined as well so it can be returned as a string value. The reason I work with lists is because strings are immutable in python. I can't create an empty string, go into a for loop, and append to the string. I can not create a new string within the for loop either, since the variable will be lost upon leaving the for loop scope.</p> <p>I have tried to do it with the replace method first, but I ran into trouble because replace doesn't have the flexibility needed to decode morse code.</p> <pre><code>decode_morse(morse_code): morse_code_q = &quot; &quot;+morse_code+&quot; &quot; morse_code_r = morse_code_q.replace(&quot; .- &quot;, ' A ').replace(&quot; -... &quot;, ' B ').replace(' -.-. ', ' C ').replace(' -.. ', 'D ').replace(' . ', ' E ').replace(' ..-. ', ' F ').replace(' --. ', ' G ').replace(' .... ', ' H ').replace(' .. ', ' I ').replace(' .--- ', ' J ').replace(' .-. ', ' K ').replace(' .-.. ', ' L ').replace(' -- ', ' M ').replace(' -. ', ' N ').replace(' --- ', ' O ').replace(' .--. ', ' P ').replace(' --.- ', ' Q ').replace(' .-. ', ' R ').replace(' ... ', ' S ').replace(' - ', ' T ').replace(' ..- ', ' U ').replace(' ...- ', ' V ').replace(' .--', ' W ').replace(' -..- ', ' X ').replace(' -.-- ', ' Y ').replace(' --.. 
', ' Z ').replace(' ----- ', '0').replace(' .---- ', '1').replace(' ..--- ', '2').replace(' ...-- ', '3').replace(' ....- ', '4').replace(' ..... ', '5').replace(' -.... ', '6').replace(' --... ', '7').replace(' ---.. ', '8').replace(' ----. ', '9').replace(' .-.-.- ', '.').replace(' --..-- ', ',').replace(' ..--.. ', '?') return morse_code_r.strip() print(decode_morse('.... . -.-- .--- ..- -.. .')) </code></pre> <p>This returns H E Y J U D E, rather than HEY JUDE. Leaving out the spaces won't do much good. After replacing a character, the next replace function won't be able to find a character, because it needs the spaces to determine the start and the end of a character (else .... for H would resolve to EEEE, since . = E)</p> <p>So here is my very ugly but working approach:</p> <pre><code>def decode_morse(morse_code): result=[] words = morse_code.split(&quot; &quot;) for j in range(0,len(words)): reverser = words[j].split(&quot; &quot;) for i in range(0,len(reverser)): if reverser[i]==&quot;.-&quot;: reverser[i]='A' elif reverser[i]==&quot;-...&quot;: reverser[i]='B' elif reverser[i]==&quot;-.-.&quot;: reverser[i]='C' elif reverser[i]==&quot;-..&quot;: reverser[i]='D' elif reverser[i]==&quot;.&quot;: reverser[i]='E' elif reverser[i]==&quot;..-.&quot;: reverser[i]='F' elif reverser[i]==&quot;--.&quot;: reverser[i]='G' elif reverser[i]==&quot;....&quot;: reverser[i]='H' elif reverser[i]==&quot;..&quot;: reverser[i]='I' elif reverser[i]==&quot;.---&quot;: reverser[i]='J' elif reverser[i]==&quot;.-.&quot;: reverser[i]='K' elif reverser[i]==&quot;.-..&quot;: reverser[i]='L' elif reverser[i]==&quot;--&quot;: reverser[i]='M' elif reverser[i]==&quot;-.&quot;: reverser[i]='N' elif reverser[i]==&quot;---&quot;: reverser[i]='O' elif reverser[i]==&quot;.--.&quot;: reverser[i]='P' elif reverser[i]==&quot;--.-&quot;: reverser[i]='Q' elif reverser[i]==&quot;.-.&quot;: reverser[i]='R' elif reverser[i]==&quot;...&quot;: reverser[i]='S' elif reverser[i]==&quot;-&quot;: 
reverser[i]='T' elif reverser[i]==&quot;..-&quot;: reverser[i]='U' elif reverser[i]==&quot;...-&quot;: reverser[i]='V' elif reverser[i]==&quot;.--&quot;: reverser[i]='W' elif reverser[i]==&quot;-..-&quot;: reverser[i]='X' elif reverser[i]==&quot;-.--&quot;: reverser[i]='Y' elif reverser[i]==&quot;--..&quot;: reverser[i]='Z' elif reverser[i]==&quot;-----&quot;: reverser[i]='0' elif reverser[i]==&quot;.----&quot;: reverser[i]='1' elif reverser[i]==&quot;..---&quot;: reverser[i]='2' elif reverser[i]==&quot;...--&quot;: reverser[i]='3' elif reverser[i]==&quot;....-&quot;: reverser[i]='4' elif reverser[i]==&quot;.....&quot;: reverser[i]='5' elif reverser[i]==&quot;-....&quot;: reverser[i]='6' elif reverser[i]==&quot;--...&quot;: reverser[i]='7' elif reverser[i]==&quot;---..&quot;: reverser[i]='8' elif reverser[i]==&quot;----.&quot;: reverser[i]='9' elif reverser[i]==&quot;.-.-.-&quot;: reverser[i]='.' elif reverser[i]==&quot;--..--&quot;: reverser[i]=',' elif reverser[i]==&quot;..--..&quot;: reverser[i]='?' result.append(reverser) final =[] for h in range(0,len(result)): final.append(&quot;&quot;.join(result[h])+&quot; &quot;) return &quot;&quot;.join(final) print(decode_morse('.... . -.-- .--- ..- -.. .')) #returns HEY JUDE </code></pre> <p>Anyone with a solution that makes this more pythonic? For this exercise, we are not allowed to use regexp library. Thanks in advance.</p>
<p>Always turn code into data whenever possible.</p> <pre><code># Full table at bottom of answer.
encode_table = {
    &quot;A&quot;: &quot;.-&quot;,
    &quot;B&quot;: &quot;-...&quot;,
    &quot;C&quot;: &quot;-.-.&quot;,
    ...
    &quot; &quot;: &quot;SPACE&quot;,  # Special &quot;sentinel&quot; value to simplify decoder.
}

# Reverse of encode_table.
decode_table = {v: k for k, v in encode_table.items()}
</code></pre> <p>Now, simply:</p> <pre><code>def encode(s):
    enc = &quot; &quot;.join(encode_table[x] for x in s)
    return enc.replace(&quot; SPACE &quot;, &quot;   &quot;)

def decode(encoded):
    symbols = encoded.replace(&quot;   &quot;, &quot; SPACE &quot;).split(&quot; &quot;)
    return &quot;&quot;.join(decode_table[x] for x in symbols)
</code></pre> <p>Test:</p> <pre><code>&gt;&gt;&gt; encode(&quot;HEY JUDE&quot;)
'.... . -.--   .--- ..- -.. .'
&gt;&gt;&gt; decode(&quot;.... . -.--   .--- ..- -.. .&quot;)
'HEY JUDE'
</code></pre> <hr /> <p>Full table:</p> <pre><code>encode_table = {
    &quot;A&quot;: &quot;.-&quot;,
    &quot;B&quot;: &quot;-...&quot;,
    &quot;C&quot;: &quot;-.-.&quot;,
    &quot;D&quot;: &quot;-..&quot;,
    &quot;E&quot;: &quot;.&quot;,
    &quot;F&quot;: &quot;..-.&quot;,
    &quot;G&quot;: &quot;--.&quot;,
    &quot;H&quot;: &quot;....&quot;,
    &quot;I&quot;: &quot;..&quot;,
    &quot;J&quot;: &quot;.---&quot;,
    &quot;K&quot;: &quot;-.-&quot;,
    &quot;L&quot;: &quot;.-..&quot;,
    &quot;M&quot;: &quot;--&quot;,
    &quot;N&quot;: &quot;-.&quot;,
    &quot;O&quot;: &quot;---&quot;,
    &quot;P&quot;: &quot;.--.&quot;,
    &quot;Q&quot;: &quot;--.-&quot;,
    &quot;R&quot;: &quot;.-.&quot;,
    &quot;S&quot;: &quot;...&quot;,
    &quot;T&quot;: &quot;-&quot;,
    &quot;U&quot;: &quot;..-&quot;,
    &quot;V&quot;: &quot;...-&quot;,
    &quot;W&quot;: &quot;.--&quot;,
    &quot;X&quot;: &quot;-..-&quot;,
    &quot;Y&quot;: &quot;-.--&quot;,
    &quot;Z&quot;: &quot;--..&quot;,
    &quot;0&quot;: &quot;-----&quot;,
    &quot;1&quot;: &quot;.----&quot;,
    &quot;2&quot;: &quot;..---&quot;,
    &quot;3&quot;: &quot;...--&quot;,
    &quot;4&quot;: &quot;....-&quot;,
    &quot;5&quot;: &quot;.....&quot;,
    &quot;6&quot;: &quot;-....&quot;,
    &quot;7&quot;: &quot;--...&quot;,
    &quot;8&quot;: &quot;---..&quot;,
    &quot;9&quot;: &quot;----.&quot;,
    &quot;.&quot;: &quot;.-.-.-&quot;,
    &quot;,&quot;: &quot;--..--&quot;,
    &quot;?&quot;: &quot;..--..&quot;,
    &quot; &quot;: &quot;SPACE&quot;,
}
</code></pre>
python|string|list
3
1,901,286
9,113,016
How to add onfocus to a z3c.form input
<p>I am developing a site in Plone and am currently writing a form using Python and z3c.form.</p> <p>Currently I am using an interface to define form fields like...</p> <pre><code>class IMyInterface(Interface):
    name = schema.TextLine(
        title=_(u"Name"),
        default=_(u"Name")
    )
</code></pre> <p>and then assigning to fields like...</p> <pre><code>fields = field.Fields(IMyInterface)
</code></pre> <p>this is then rendered in a template using TAL like...</p> <pre><code>&lt;div tal:replace="structure python: view.contents" /&gt;
</code></pre> <p>I would like to render an onfocus attribute within the markup of the input. Is there a way to do this?</p>
<p>You can give arbitrary HTML attribute parameters to z3c.form widgets in <code>updateWidgets()</code> phase of your form.</p> <p><a href="http://collective-docs.readthedocs.org/en/latest/forms/z3c.form.html#modifying-a-widget" rel="nofollow">http://collective-docs.readthedocs.org/en/latest/forms/z3c.form.html#modifying-a-widget</a></p> <p>HTML attributes for widgets:</p> <p><a href="http://svn.zope.org/z3c.form/trunk/src/z3c/form/browser/widget.py?rev=103729&amp;view=auto" rel="nofollow">http://svn.zope.org/z3c.form/trunk/src/z3c/form/browser/widget.py?rev=103729&amp;view=auto</a></p>
python|plone|z3c.form
4
1,901,287
9,529,025
how do I have matplotlib change line markers automatically?
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://stackoverflow.com/questions/7799156/can-i-cycle-through-line-styles-in-matplotlib">Can i cycle through line styles in matplotlib</a><br> <a href="https://stackoverflow.com/questions/7358118/matplotlib-black-white-colormap-with-dashes-dots-etc">matplotlib - black &amp; white colormap (with dashes, dots etc)</a></p> </blockquote> <p>I'm using matplotlib (python) and I am plotting several lines on a single plot. </p> <p>By default, python is assigning a different color to each line, but I want it to assign different line types and just use black for all of them. </p> <p>I know I could make a list of different line types and use them, but that involves grabbing all the line types and adding them to each script I want to plot multiple lines with. I figure there has to be an automatic way.</p>
<p>I don't think this is possible in quite the automatic way you'd want, but it is certainly doable with very little effort. The way I do it in my plots: I do all the plotting I want, then I change the markers. However, in my experience finding the right marker cycle depends on the graph you want to show and on the context the graph appears in. I would really encourage you to opt for this manual selection of markers and find out what looks best on your graphs. Below is a little sketch showing the way I do it (but you've already mentioned something similar in your question):</p> <pre><code>import matplotlib.pyplot as plt

f = plt.figure(1); f.clf()
ax = f.add_subplot(111)
ax.plot([1,2,3,4,5])
ax.plot([5,4,3,2,1])
ax.plot([2,3,2,3,2])

import itertools
for l, ms in zip(ax.lines, itertools.cycle('&gt;^+*')):
    l.set_marker(ms)
    l.set_color('black')

plt.show()
</code></pre>
python|matplotlib
7
1,901,288
9,289,071
Permissions, Python Script
<p>I am learning to write Python Scripts for work and I have run into some problems. The script is supposed to read a file, and print the permissions to an email. My problem is I am getting an error when it tries to call the permission() method, and I don't know how to fix it. </p> <p><strong>Python Code</strong></p> <pre><code>import smtplib import os import stat result = "" def permission(file): s = os.stat(file) mode = s.st_mode if(stat.S_IRUSR &amp; mode): ownerRead = 1 result += ownerRead else: ownerRead = 0 result += ownerRead if(stat.S_IWUSR &amp; mode): ownerWrite = 1 result += ownerWrite else: ownerWrite = 0 result += ownerWrite if(stat.S_IXUSR &amp; mode): ownerExecute = 1 result += ownerExecute else: ownerExecute = 0 result += ownerExecute if(stat.S_IRGRP &amp; mode): groupRead = 1 result += groupRead else: groupRead = 0 result += groupRead if(stat.S_IWGRP &amp; mode): groupWrite = 1 result += groupWrite else: groupWrite = 0 result += groupWrite if(stat.S_IXGRP &amp; mode): groupExecute = 1 result += groupExecute else: groupExecute = 0 result += groupExecute if(stat.S_IROTH &amp; mode): otherRead = 1 result += otherRead else: otherRead = 0 result += otherRead if(stat.S_IWOTH &amp; mode): otherWrite = 1 result += otherWrite else: otherWrite = 0 result += otherWrite if(stat.S_IXOTH &amp; mode): otherExecute = 1 result += otherExecute else: otherExecute = 0 result += otherExecute return result to = 'email@yahoo.com' gmail_user = 'email@gmail.com' gmail_pwd = 'pwd' smtpserver = smtplib.SMTP("smtp.gmail.com",587) smtpserver.ehlo() smtpserver.starttls() smtpserver.ehlo smtpserver.login(gmail_user, gmail_pwd) header = 'To:' + to + '\n' + 'From: ' + gmail_user + '\n' + 'Subject:permissions \n' print header values = permission(file) print values msg = header + values smtpserver.sendmail(gmail_user, to, msg) print 'done!' 
smtpserver.close() </code></pre> <p><strong>Error Output</strong></p> <pre><code>Traceback (most recent call last): File "lastpart.py", line 83, in &lt;module&gt; values = permission(file) File "lastpart.py", line 15, in permission s = os.stat(file) TypeError: coercing to Unicode: need string or buffer, type found </code></pre>
<p>You fix it by passing the actual filename to the function, not the <code>file</code> built-in type.</p>
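Beyond passing a real path, the standard library can build the whole permission string for you, which sidesteps the long if/elif chain entirely. A sketch using `stat.filemode` (Python 3.3+); the temporary file is just for demonstration:

```python
import os
import stat
import tempfile

def permission(path):
    """Return an ls-style permission string for *path*, e.g. '-rw-r--r--'."""
    return stat.filemode(os.stat(path).st_mode)

# Demonstration on a throwaway file with known permissions
with tempfile.NamedTemporaryFile() as f:
    os.chmod(f.name, 0o644)
    print(permission(f.name))  # -rw-r--r--
```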
python
3
1,901,289
9,088,403
Distributing integers using weights? How to calculate?
<p>I need to distribute a value based on some weights. For example, if my weights are 1 and 2, then I would expect the column weighted as 2 to have twice the value as the column weighted 1.</p> <p>I have some Python code to demonstrate what I'm trying to do, and the problem:</p> <pre><code>def distribute(total, distribution):
    distributed_total = []
    for weight in distribution:
        weight = float(weight)
        p = weight/sum(distribution)
        weighted_value = round(p*total)
        distributed_total.append(weighted_value)
    return distributed_total

for x in xrange(100):
    d = distribute(x, (1,2,3))
    if x != sum(d):
        print x, sum(d), d
</code></pre> <p>There are many cases shown by the code above where distributing a value results in the sum of the distribution being different than the original value. For example, distributing 3 with weights of (1,2,3) results in (1,1,2), which totals 4.</p> <p>What is the simplest way to fix this distribution algorithm?</p> <p>UPDATE:</p> <p>I expect the distributed values to be integer values. It doesn't matter exactly how the integers are distributed as long as they total to the correct value, and they are "as close as possible" to the correct distribution.</p> <p>(By correct distribution I mean the non-integer distribution, and I haven't fully defined what I mean by "as close as possible." There are perhaps several valid outputs, so long as they total the original value.)</p>
<p>Distribute the first share as expected. Now you have a simpler problem, with one fewer participant and a reduced amount available for distribution. Repeat until there are no more participants.</p> <pre><code>&gt;&gt;&gt; def distribute2(available, weights):
...     distributed_amounts = []
...     total_weights = sum(weights)
...     for weight in weights:
...         weight = float(weight)
...         p = weight / total_weights
...         distributed_amount = round(p * available)
...         distributed_amounts.append(distributed_amount)
...         total_weights -= weight
...         available -= distributed_amount
...     return distributed_amounts
...
&gt;&gt;&gt; for x in xrange(100):
...     d = distribute2(x, (1,2,3))
...     if x != sum(d):
...         print x, sum(d), d
...
&gt;&gt;&gt;
</code></pre>
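The answer above is Python 2 (`xrange`, `print` statement). A Python 3 sketch of the same idea — allocate each share, then shrink both the remaining amount and the remaining weight, so the shares always sum exactly to the amount distributed:

```python
def distribute(available, weights):
    """Split `available` into integer shares proportional to `weights`."""
    amounts = []
    remaining_weight = sum(weights)
    for w in weights:
        share = round(available * w / remaining_weight)
        amounts.append(share)
        remaining_weight -= w
        available -= share
    return amounts

print(distribute(3, (1, 2, 3)))  # shares sum to 3
assert all(x == sum(distribute(x, (1, 2, 3))) for x in range(100))
```

The last iteration always hands out whatever is left (`w == remaining_weight` there), which is why the total can never drift.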
python|algorithm
7
1,901,290
39,279,810
How to define and instantiate a derived class at once in python?
<p>I have a base class that I want to derive and instantiate together. I can do that in Java like: </p> <pre><code>BaseClass derivedClassInstance = new BaseClass() {
    @Override
    void someBaseClassMethod() { // my statements }
};
</code></pre> <p>In Python I can derive and instantiate a base class like: </p> <pre><code>class DerivedClass(BaseClass):
    def some_base_class_method():
        # my statements

derived_class_instance = DerivedClass()
</code></pre> <p>I need to sub-class single instances of some objects with minor changes. Deriving and assigning them separately seems like overkill. </p> <p>Is there a Java-like <em>one-liner</em> way to derive and instantiate a class on the fly? Or is there a more concise way to do what I did in Python?</p>
<p>In general you won't see this kind of code, because it is difficult to read and understand. I really suggest you find some alternative and avoid what comes next. Having said that, you can create a class and an instance in one single line, like this:</p> <pre><code>&gt;&gt;&gt; class BaseClass(object):
...     def f1(self, x):
...         return 2
...     def f2(self, y):
...         return self.f1(y) + y
...
&gt;&gt;&gt;
&gt;&gt;&gt; W = BaseClass()
&gt;&gt;&gt; W.f2(2)
4
&gt;&gt;&gt; X = type('DerivedClass', (BaseClass,), {'f1': (lambda self, x: (x + x))})()
&gt;&gt;&gt; X.f2(2)
6
</code></pre>
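A checkable variant of the same `type(name, bases, namespace)` trick — the class and method names here are made up for illustration, the point is the define-and-instantiate-in-one-expression shape:

```python
class BaseClass:
    def greet(self):
        return "base"

# Define an anonymous-style subclass and instantiate it in one expression,
# roughly like Java's `new BaseClass() { ... }`
obj = type("AnonDerived", (BaseClass,), {"greet": lambda self: "derived"})()

print(obj.greet())            # derived
print(isinstance(obj, BaseClass))  # True
print(BaseClass().greet())    # base -- the base class is untouched
```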
python|inheritance
2
1,901,291
55,507,759
Connecting Python to Google Analytics
<p>I am up to creating my credentials but I noticed this: </p> <p>"The consent screen tells your users who is requesting access to their data and what kind of data you're asking to access."</p> <p>Does this mean the customers on my webstore will be prompted with a consent screen? </p> <p>I have searched for an answer to this online, but haven't come across anything specific and have been following this tutorial: </p> <p><a href="https://developers.google.com/analytics/devguides/reporting/core/v3/quickstart/service-py" rel="nofollow noreferrer">https://developers.google.com/analytics/devguides/reporting/core/v3/quickstart/service-py</a></p> <p>I want to make sure my webstore isn't affected at all by making this connection. </p> <p>All I am looking to do with this is make reporting easier. </p> <p>Any advice on the topic would be greatly appreciated. </p> <p>Thank you</p>
<blockquote> <p>&quot;The consent screen tells your users who is requesting access to their data and what kind of data you're asking to access.&quot;</p> <p>Does this mean the customers on my webstore will be prompted with a consent screen?</p> </blockquote> <p>Yes Google analytics data is private user data. If you want to access a users data then they will have to login to their Google account and consent to you accessing that data.</p> <p>The user consent is an integral part of <a href="https://tools.ietf.org/id/draft-hunt-oauth-v2-user-a4c-01.html" rel="nofollow noreferrer">Oauth2</a></p> <p><a href="https://i.stack.imgur.com/Xp1LE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xp1LE.png" alt="enter image description here" /></a></p> <p>In the above image Google Analytics windows is requesting access to the users Google analytics data. If the user accepts then the application will be allowed to access the data.</p> <h2>service account</h2> <p>If you are trying to share your own google analytics data with your users. Then you should be looking into using a service account instead.</p>
python|google-analytics|google-oauth
0
1,901,292
52,567,601
"Runtime error event loop already running" during asyncio
<p>I am trying out some asyncio examples found on the web: <a href="https://proxybroker.readthedocs.io/en/latest/examples.html#proxybroker-examples-grab" rel="nofollow noreferrer">Proxybroker example</a></p> <p>When I run this first example:</p> <pre><code>"""Find and show 10 working HTTP(S) proxies."""
import asyncio
from proxybroker import Broker

async def show(proxies):
    while True:
        proxy = await proxies.get()
        if proxy is None:
            break
        print('Found proxy: %s' % proxy)

proxies = asyncio.Queue()
broker = Broker(proxies)
tasks = asyncio.gather(
    broker.find(types=['HTTP', 'HTTPS'], limit=10),
    show(proxies))

loop = asyncio.get_event_loop()
loop.run_until_complete(tasks)
</code></pre> <p>I get the error:</p> <pre><code>RuntimeError: This event loop is already running
</code></pre> <p>But the loop completes as expected. I'm new to concurrent code so any explanation / pseudo code of what is occurring would be appreciated. </p>
<p>I installed this package and ran it without any errors. Are you using an IDE? Try running the script from the command line, or move it to another directory.</p>
python|asynchronous|python-asyncio|proxybroker
1
1,901,293
52,685,928
In Python, if I type a=1 b=2 c=a c=b, what is the value of c? What does c point to?
<p>Python variables are for the most part really easy to understand, but there is one case I have been struggling with. If I want to point my variable to a new memory address, how do I do this? Or, if Python does this by default (treating variables like pointers), then how do I literally assign the value from a new variable to the memory address of the old variable?</p> <p>For example, if I type</p> <pre><code>a=1
b=2
c=a
c=b
</code></pre> <p>What is the value of <code>c</code>? And what does it point to? Is the statement replacing the pointer <code>c -&gt; a</code> with pointer <code>c -&gt; b</code> or grabbing the value from <code>b</code> and overwriting <code>a</code> with <code>b</code>'s value? <code>c=b</code> is ambiguous.</p> <p>In other words, if you start with this:</p> <pre><code>a -&gt; 1 &lt;- c
b -&gt; 2
</code></pre> <p>is it re-pointing <code>c</code> like this:</p> <pre><code>a -&gt; 1    _c
b -&gt; 2 &lt;-/
</code></pre> <p>or copying <code>b</code> like this?</p> <pre><code>a -&gt; 2 &lt;- c
b -&gt; 2
</code></pre>
<p>There are no pointers to variables in Python. In particular, when you say this:</p> <blockquote> <p>Is the statement replacing the pointer <code>c -&gt; a</code> with pointer <code>c -&gt; b</code>...</p> </blockquote> <p>Python does not have any such thing as "the pointer <code>c -&gt; a</code>", so it is not doing that.</p> <blockquote> <p>...or grabbing the value from b and overwriting a with b's value</p> </blockquote> <p>but there is no assignment to <code>a</code>, so it's not doing that either.</p> <p>Instead, Python keeps a symbol table<sup>1</sup> that maps each name (<code>a</code>, <code>b</code>, <code>c</code>, etc.) to a pointer to an object. In your code sample, after you assign to <code>a</code> and <code>b</code>, it would look like this (obviously I have made up the memory addresses):</p> <pre><code>a -&gt; 0xfffa9600 -&gt; 1
b -&gt; 0xfffa9608 -&gt; 2
</code></pre> <p>and then after you assign <code>c = a</code>, it would look like this:</p> <pre><code>a -&gt; 0xfffa9600 -&gt; 1
b -&gt; 0xfffa9608 -&gt; 2
c -&gt; 0xfffa9600 -&gt; 1
</code></pre> <p>Note that <code>c</code> is entirely independent of <code>a</code>. When you run <code>c = b</code>, it replaces the pointer associated with <code>c</code> in the symbol table with the pointer that was associated with <code>b</code>, but <code>a</code> is not affected:</p> <pre><code>a -&gt; 0xfffa9600 -&gt; 1
b -&gt; 0xfffa9608 -&gt; 2
c -&gt; 0xfffa9608 -&gt; 2
</code></pre> <p>In this case that's pretty much all there is to it because the objects in question, namely the integer constants <code>1</code> and <code>2</code>, are immutable. However, if you use mutable objects, they do start to act a bit more like pointers in the sense that changes to the object when it's stored in one variable are reflected in other variables that refer to the same object. For example, consider this sample of code:</p> <pre><code>x = {'a': 1, 'b': 2}
y = x
</code></pre> <p>Here, the symbol table might look something like this:</p> <pre><code>x -&gt; 0xffdc1040 -&gt; {'a': 1, 'b': 2}
y -&gt; 0xffdc1040 -&gt; {'a': 1, 'b': 2}
</code></pre> <p>If you now run</p> <pre><code>y['b'] = y['a']
</code></pre> <p>then it doesn't actually change the pointer associated with <code>y</code> in the symbol table, but it does change the object pointed to by that pointer, so you wind up with</p> <pre><code>x -&gt; 0xffdc1040 -&gt; {'a': 1, 'b': 1}
y -&gt; 0xffdc1040 -&gt; {'a': 1, 'b': 1}
</code></pre> <p>and you'll see that your assignment to <code>y['b']</code> has affected <code>x</code> as well. Contrast this with</p> <pre><code>y = {'a': 1, 'b': 2}
</code></pre> <p>which actually makes <code>y</code> point at an entirely different object, and is more akin to what you were doing before with <code>a</code>, <code>b</code>, and <code>c</code>.</p> <hr> <p><sup>1</sup>Actually there are several symbol tables, corresponding to different scopes, and Python has an order in which it checks them, but that detail isn't particularly relevant here.</p>
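The rebinding-vs-mutation distinction described above can be checked directly with `id()`; a small sketch mirroring the answer's examples:

```python
a = 1
b = 2
c = a
c = b
assert (a, b, c) == (1, 2, 2)   # rebinding c never touched a

x = {'a': 1, 'b': 2}
y = x
assert id(x) == id(y)           # two names, one object

y['b'] = y['a']                 # mutate the shared object
assert x == {'a': 1, 'b': 1}    # visible through both names

y = {'a': 1, 'b': 2}            # rebind y to a brand-new dict
assert id(x) != id(y)
assert x == {'a': 1, 'b': 1}    # x is unaffected by the rebinding
print("all checks passed")
```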
python
15
1,901,294
47,954,010
numpy vectorized way to change multiple rows of array(rows can be repeated)
<p>I run into this problem when implementing the vectorized svm gradient for cs231n assignment1. here is an example:</p> <pre><code>ary = np.array([[1,-9,0], [1,2,3], [0,0,0]]) ary[[0,1]] += np.ones((2,2),dtype='int') </code></pre> <p>and it outputs:</p> <pre><code>array([[ 2, -8, 1], [ 2, 3, 4], [ 0, 0, 0]]) </code></pre> <p>everything is fine until rows is not unique:</p> <pre><code>ary[[0,1,1]] += np.ones((3,3),dtype='int') </code></pre> <p>although it didn't throw an error,the output was really strange:</p> <pre><code>array([[ 2, -8, 1], [ 2, 3, 4], [ 0, 0, 0]]) </code></pre> <p>and I expect the second row should be [3,4,5] rather than [2,3,4], the naive way I used to solve this problem is using a for loop like this:</p> <pre><code>ary = np.array([[ 2, -8, 1], [ 2, 3, 4], [ 0, 0, 0]]) # the rows I want to change rows = [0,1,2,1,0,1] # the change matrix change = np.random.randn((6,3)) for i,row in enumerate(rows): ary[row] += change[i] </code></pre> <p>so I really don't know how to vectorize this for loop, is there a better way to do this in NumPy? and why it's wrong to do something like this?:</p> <pre><code>ary[rows] += change </code></pre> <p>In case anyone is curious why I want to do so, here is my implementation of svm_loss_vectorized function, I need to compute the gradients of weights based on labels y:</p> <pre><code>def svm_loss_vectorized(W, X, y, reg): """ Structured SVM loss function, vectorized implementation. Inputs and outputs are the same as svm_loss_naive. 
""" loss = 0.0 dW = np.zeros(W.shape) # initialize the gradient as zero # transpose X and W # D means input dimensions, N means number of train example # C means number of classes # X.shape will be (D,N) # W.shape will be (C,D) X = X.T W = W.T dW = dW.T num_train = X.shape[1] # transpose W_y shape to (D,N) W_y = W[y].T S_y = np.sum(W_y*X ,axis=0) margins = np.dot(W,X) + 1 - S_y mask = np.array(margins&gt;0) # get the impact of num_train examples made on W's gradient # that is,only when the mask is positive # the train example has impact on W's gradient dW_j = np.dot(mask, X.T) dW += dW_j mul_mask = np.sum(mask, axis=0, keepdims=True).T # dW[y] -= mul_mask * X.T dW_y = mul_mask * X.T for i,label in enumerate(y): dW[label] -= dW_y[i] loss = np.sum(margins*mask) - num_train loss /= num_train dW /= num_train # add regularization term loss += reg * np.sum(W*W) dW += reg * 2 * W dW = dW.T return loss, dW </code></pre>
<p><strong>Using built-in <code>np.add.at</code></strong></p> <p>The built-in is <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ufunc.at.html" rel="nofollow noreferrer"><code>np.add.at</code></a> for such tasks, i.e.</p> <pre><code>np.add.at(ary, rows, change)
</code></pre> <p>But, since we are working with a <code>2D</code> array, that might not be the most performant one.</p> <p><strong>Leveraging fast <code>matrix-multiplication</code></strong></p> <p>As it turns out, we can leverage the very efficient <code>matrix-multiplication</code> for such a case as well and, given enough repeated rows for summation, it could be really good. Here's how we can use it -</p> <pre><code>mask = rows == np.arange(len(ary))[:,None]
ary += mask.dot(change)
</code></pre> <hr> <p><strong>Benchmarking</strong></p> <p>Let's time the <code>np.add.at</code> method against the <code>matrix-multiplication</code> based one for bigger arrays -</p> <pre><code>In [681]: ary = np.random.rand(1000,1000)

In [682]: rows = np.random.randint(0,len(ary),(10000))

In [683]: change = np.random.rand(10000,1000)

In [684]: %timeit np.add.at(ary, rows, change)
1 loop, best of 3: 604 ms per loop

In [687]: def matmul_addat(ary, rows, change):
     ...:     mask = rows == np.arange(len(ary))[:,None]
     ...:     ary += mask.dot(change)

In [688]: %timeit matmul_addat(ary, rows, change)
10 loops, best of 3: 158 ms per loop
</code></pre>
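A tiny check of the buffering issue and the `np.add.at` fix, using the small array from the question (the shapes here are deliberately tiny, just for illustration):

```python
import numpy as np

ary = np.array([[1, -9, 0],
                [1,  2, 3],
                [0,  0, 0]])

# Fancy-index += buffers the updates, so the repeated row 1 is updated once
buggy = ary.copy()
buggy[[0, 1, 1]] += np.ones((3, 3), dtype='int')
print(buggy[1])   # [2 3 4] -- the second update was lost

# np.add.at applies the updates unbuffered: row 1 accumulates both
fixed = ary.copy()
np.add.at(fixed, [0, 1, 1], np.ones((3, 3), dtype='int'))
print(fixed[1])   # [3 4 5]
```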
python|arrays|numpy|vectorization|svm
4
1,901,295
47,740,542
TypeError: 'generator' object is not callable.
<pre><code>def lines(file):
    for line in file:
        yield line
    yield '\n'

def blocks(file):
    block = []
    for line in lines(file):
        if line.strip():
            block.append(line)
        elif block:
            yield ''.join(block).strip()
            block = []

with open(r'test_input.txt', 'r') as f:
    lines = lines(f)
    file = blocks(lines)
    for line in file:
        print(line)
</code></pre> <p>I got this error message:</p> <pre><code>TypeError: 'generator' object is not callable
</code></pre> <p>I don't know what happened. Is it because generators in Python 3.6 are different from 2.X?</p>
<p>Your issue is caused by this line:</p> <pre><code>lines = lines(f) </code></pre> <p>With this assignment, you're overwriting the <code>lines</code> generator function with its own return value. That means that when <code>blocks</code> tries to call <code>lines</code> again (which seems a little buggy to me, but not the main issue), it gets the generator object instead of the function it expected.</p> <p>Pick a different name for the assignment, or just pass <code>f</code> to <code>blocks</code>, since it will call <code>lines</code> itself.</p>
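The shadowing problem in miniature — a sketch that reproduces the exact error, then shows the fix of keeping the function and its result under different names:

```python
def lines():
    yield "first"
    yield "second"

lines = lines()          # the name now points at a generator object
try:
    lines()              # the original function is gone
except TypeError as e:
    print(e)             # 'generator' object is not callable

# Fix: don't reuse the function's name for its return value
def lines2():
    yield "first"
    yield "second"

gen = lines2()
print(list(gen))         # ['first', 'second']
print(list(lines2()))    # the function is still callable
```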
python-3.x
2
1,901,296
47,693,631
Python 3.6 startup error
<p>I am very new to Python. I have installed Python 3.6 on my Windows 10 machine (and I believe it has version 2.7 installed). The installation was OK, but when I try to start it up it gives me the error shown below:</p> <pre><code>Fatal Python error: Py_Initialize: unable to load the file system codec
  File "C:\csvn\Python25\\lib\encodings\__init__.py", line 123
    raise CodecRegistryError,\
                             ^
SyntaxError: invalid syntax

Current thread 0x00002c78 (most recent call first):
</code></pre> <p>Could someone please help me identify this error and how to fix it? Thank you so much in advance for any help.</p>
<p>The error can be resolved by adding an environment variable "PYTHONPATH" which point to the installation location of Python.</p> <p>Refer to the following link,</p> <p><a href="https://stackoverflow.com/questions/5694706/py-initialize-fails-unable-to-load-the-file-system-codec">Py_Initialize fails - unable to load the file system codec</a></p>
python|windows
1
1,901,297
47,617,158
Apache-Beam add sequence number to a PCollection
<p>I'm trying to build an ETL to load a dimension table. I'm using Apache Beam, with Python and Dataflow, and BigQuery.</p> <p>I need to assign a sequence number to each element of a PCollection in order to load it into BigQuery, but I can't find any way to do this.</p> <p>I think I need Dataflow to do the previous aggregations and joins to get my final PCollection, and then add the sequence number. At that point I would need to stop parallel processing, cast my PCollection to a list (as in Spark when you use .collect()) and then make an easy loop to assign the sequence numbers. Is that right?</p> <p>This is the pipeline I've coded:</p> <pre><code>p | ReadFromAvro(known_args.input)
  | beam.Map(adapt)
  | beam.GroupByKey()
  | beam.Map(adaptGroupBy)
</code></pre> <p>I've read there is no way to get a list from a PCollection: <a href="https://stackoverflow.com/questions/41440634/how-to-get-a-list-of-elements-out-of-a-pcollection-in-google-dataflow-and-use-it">How to get a list of elements out of a PCollection in Google Dataflow and use it in the pipeline to loop Write Transforms?</a></p> <p>How can I achieve it? Any help?</p>
<p>If what you want is to get a list with each of the elements in a <code>PCollection</code>, you can use a side input. Keep in mind that this will remove all parallelism from your results, and your pipeline may become slow.</p> <p>If you still want to do this, then:</p> <pre><code>side_input_coll = beam.pvalue.AsIterable(my_collection)

(p | beam.Create([0])
   | beam.FlatMap(lambda _, my_seq: [(elem, i) for i, elem in enumerate(my_seq)],
                  my_seq=side_input_coll))
</code></pre> <p>But don't forget that to preserve parallelism, it may be best to simply generate a random ID. Remember that <code>PCollections</code> are intrinsically unordered.</p> <p>To learn more about side inputs, see the <a href="https://beam.apache.org/documentation/programming-guide/#side-inputs" rel="nofollow noreferrer">Beam Programming Guide on Side Inputs</a></p>
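<p>The random-ID alternative can be sketched in plain Python (the <code>add_row_id</code> helper is hypothetical; in a pipeline you would apply it with <code>my_collection | beam.Map(add_row_id)</code>):</p> <pre><code>import uuid

def add_row_id(element):
    # Tag each element with a random, globally unique id instead of a
    # dense sequence number; unlike a side-input pass, this keeps the
    # transform fully parallelizable.
    return dict(element, row_id=str(uuid.uuid4()))

rows = [{"name": "alice"}, {"name": "bob"}]
tagged = [add_row_id(r) for r in rows]
print(tagged)
</code></pre>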
python|google-bigquery|google-cloud-dataflow|apache-beam|dataflow
3
1,901,298
34,379,847
Python Flask: Get database rows as dictionary
<p>I'm building an application using Flask-MySQLdb and wondering how to return database rows as dictionaries (like PHP's FETCH_ASSOC). The default <code>fetchall()</code> method of the cursor class returns a tuple, and there's nothing about returning a dict in the <a href="https://mysqlclient.readthedocs.org/en/latest/index.html" rel="nofollow">docs</a> of the underlying library.</p> <p>So far I've been executing code in the following format, but getting a dict with column names as keys would really help:</p> <pre><code>g.cursor.execute('SELECT email, password FROM users WHERE email = %s',
                 [request.form['email']])
row = g.cursor.fetchall()
</code></pre>
<p>Simply add the line below to your Flask configuration. Flask-MySQLdb will then use MySQLdb's <code>DictCursor</code>, so <code>fetchall()</code> returns rows as dictionaries keyed by column name:</p> <pre><code>app.config['MYSQL_CURSORCLASS'] = 'DictCursor'
</code></pre>
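<p>If you'd rather not change the global cursor class, a portable DB-API fallback is to build the dicts yourself from <code>cursor.description</code>. A sketch, demonstrated with <code>sqlite3</code> since it exposes the same cursor interface as MySQLdb:</p> <pre><code>import sqlite3

def rows_as_dicts(cursor):
    # Generic DB-API fallback: works with any cursor that exposes
    # .description (a sequence whose first item per column is its name).
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE users (email TEXT, password TEXT)")
cur.execute("INSERT INTO users VALUES ('a@b.c', 'secret')")
cur.execute("SELECT email, password FROM users")
result = rows_as_dicts(cur)
print(result)  # [{'email': 'a@b.c', 'password': 'secret'}]
</code></pre>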
python|flask
11
1,901,299
39,643,609
How can programs make Lua(or python) interpreter execute statements provided by the program?
<p>I am using CentOS 7.</p> <p>I want to write a program (in Java or another language) that can interact with a Lua interpreter. I hope that my program can feed statements to the Lua interpreter and have them executed in real time, while previous variables are still available.</p> <p>For example, my program feeds <code>a = 4; print(a);</code> to the Lua interpreter and <code>4</code> is printed on the screen. Then the program does other work. Later it feeds <code>n = 0; for i=1,4 do n = n + i; end; print(n);</code> to the interpreter, and <code>10</code> is printed on the screen.</p> <p>Note: All I want is that the Lua interpreter executes the statements as my program feeds them in, while keeping its previous state. My program does not need to access variables in the Lua interpreter.</p> <p>I tried calling the Lua interpreter separately, but it doesn't work as expected. Another solution is to record all previous statements and re-run them before each new statement. But this is obviously not efficient.</p> <p>Is there an easy way to do this? Such as just creating a sub-process and making system calls?</p>
<p>As your question is very broad (interpret Lua or Python, in any language), I can only give you some hints.</p> <p>If you write in C or C++, you can use the Lua library directly. It allows you to execute Lua statements, make C values visible to the Lua code, make C functions available to the Lua code, and access values of the Lua code.</p> <p>If you write in Java, you may either write a JNI wrapper for the Lua library, or use another Lua implementation. See <a href="https://stackoverflow.com/q/2113432/1314743">how can I embed lua in java?</a>.</p> <p>For other languages, you essentially have the same options: either use (if available) some other implementation in your favourite language, or find a way to access the C library functions from your language. The latter is possible for most relevant programming languages.</p> <p>For Python, the situation is similar. See, for example, <a href="https://stackoverflow.com/q/1119696/1314743">Java Python Integration</a> and <a href="https://wiki.python.org/moin/IntegratingPythonWithOtherLanguages" rel="nofollow noreferrer">Integrating Python With Other Languages</a>.</p>
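<p>The sub-process approach from the question also works: keep one interpreter process alive and write statements to its stdin as they arrive, so state persists between batches. A Python sketch (driving Python's own REPL here, since a <code>lua</code> binary may not be installed; the same pattern applies with <code>["lua", "-i"]</code>):</p> <pre><code>import subprocess
import sys

# Keep one interpreter process alive and feed it statements over stdin.
# -i keeps the REPL reading from stdin, -q hides the banner, -u unbuffers.
proc = subprocess.Popen(
    [sys.executable, "-u", "-i", "-q"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.DEVNULL,  # the &gt;&gt;&gt; prompts are written to stderr
    text=True,
)

# First batch: define a variable and print it.
proc.stdin.write("a = 4\nprint(a)\n")
proc.stdin.flush()

# ... the host program can do other work here ...

# Later batch: the interpreter still remembers `a` from before.
proc.stdin.write("n = sum(range(1, 5))\nprint(n)\n")
proc.stdin.close()  # EOF makes the REPL exit cleanly

output = proc.stdout.read()
proc.wait()
print(output)  # contains "4" and "10", printed by the child interpreter
</code></pre> <p>For tighter integration than line-oriented stdin (e.g. reading results back as values rather than text), the embedding-library route above is the better fit.</p>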
java|python|linux|lua
1