1,904,900
35,436,844
move offset text position in polar plot
<p>I want to change the position of the offset text (see the attached picture).</p> <p><img src="https://i.stack.imgur.com/8dQoz.png" alt="Change Position of &#39;Scientific scale&#39;-tick"></p> <p>Is there a way to do this in matplotlib?</p> <p>My code:</p> <pre><code>""" Demo of a line plot on a polar axis. """ import numpy as np import matplotlib.pyplot as plt r = np.arange(0, 3.0, 0.01) theta = 2 * np.pi * r r = r*1000000000 ax = plt.subplot(111, projection='polar') ax.plot(theta, r, color='b', linewidth=3) ax.grid(True) ax.set_title("A line plot on a polar axis", va='bottom') plt.show() </code></pre>
<p>The text object is called the <code>offset_text</code>. On a polar plot, it is considered the offset text of the <code>yaxis</code>. You could move it using the <code>ax.yaxis.set_offset_position()</code> function. This only takes <code>left</code> or <code>right</code> as options. So, adding <code>ax.yaxis.set_offset_position('right')</code> would move it closer to where you want:</p> <p><a href="https://i.stack.imgur.com/zlGo4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zlGo4.png" alt="enter image description here"></a></p> <hr> <p>By request, here's that line in the full script:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt r = np.arange(0, 3.0, 0.01) theta = 2 * np.pi * r r = r*1000000000 ax = plt.subplot(111, projection='polar') ax.plot(theta, r, color='b', linewidth=3) ax.grid(True) ax.yaxis.set_offset_position('right') plt.show() </code></pre>
python|python-3.x|matplotlib
1
1,904,901
58,621,903
Use csvkit in a bash script to convert CSV to desired format?
<p>I need to convert a large csv file into the static content format for Kirby CMS.</p> <p>Say I have a csv file:</p> <pre><code>id,name,age,bio 0,bob,25,"Example bio, with a comma" 1,sam,37,"Hello World" ... </code></pre> <p>That I would like to restructure into separate folders/files like so:</p> <p>1_bob/person.txt</p> <pre><code>ID: 0 ---- Name: bob ---- Age: 25 ---- Bio: Example bio, with a comma </code></pre> <p>2_sam/person.txt</p> <pre><code>ID: 1 ---- Name: sam ---- Age: 37 ---- Bio: Hello World </code></pre> <p>etc...</p> <p>This is obviously a far more simplified version of my data, thus I had considered using <code>csvkit</code> because of its ability to properly parse commas in quoted fields etc.</p> <p>I had found this script: <a href="https://forum.getkirby.com/t/import-from-csv/6038/15" rel="nofollow noreferrer">https://forum.getkirby.com/t/import-from-csv/6038/15</a> which fails as a result of the above issue (the inability of basic bash IFS to read more complex CSV data):</p> <pre><code>#!/bin/bash OLDIFS=$IFS IFS=";" while read number year title website slug do if [ ! -d "$number-$slug" ]; then mkdir ./$number-$slug fi echo -e "Year: $year\n----\nTitle: $title\n----\nWebsite: $website" &gt; $number-$slug/project.txt done &lt; projects.csv IFS=$OLDIFS </code></pre> <p>I know I could write a python script to do this fairly easily, but was wondering if there is indeed a way to combine any of the tooling of csvkit to do this in a bash script. My assumption was to use <code>csvcut</code> to pull lines of data out of the csv, but of course I am still stuck at the same problem of how to parse this data and output it in the desired format.</p>
<p>It is usually much easier to process TSV files than CSV files with bash, awk, and many other utilities, since TSV avoids the need for quoting. csvformat will handle the conversion:</p> <p>Using your current script:</p> <pre><code>csvformat -T projects.csv | while IFS=$'\t' read number year title website slug do if [ ! -d "$number-$slug" ]; then mkdir ./$number-$slug fi echo -e "Year: $year\n----\nTitle: $title\n----\nWebsite: $website" &gt; $number-$slug/project.txt done </code></pre> <p>The code expects a 'slug' column for each record, which is not in the sample input. I'm assuming the actual input will have this in the 5th column.</p>
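<p>To illustrate (a sketch, assuming csvkit is installed, run against the sample file from the question): <code>csvformat -T</code> rewrites the file as tab-separated values, which drops the quoting around fields that contain commas, so a plain <code>read</code> with a tab IFS can split it safely:</p> <pre><code>$ csvformat -T example.csv
id      name    age     bio
0       bob     25      Example bio, with a comma
1       sam     37      Hello World
</code></pre>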
python|bash|csv
0
1,904,902
42,625,093
Django: Block access to specific users in one app
<p>I'm building a Django project, <code>demonstration</code>, that consists of 3 apps (app, blog, frontend) <a href="https://i.stack.imgur.com/1C9Cm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1C9Cm.png" alt="enter image description here"></a></p> <p><strong><em>The challenge I'm facing is the following:</em></strong></p> <p>I want to <strong>limit access to the app <code>app</code> to allow only registered users</strong>.</p> <p>In other words, restrict access to all the pages in the <code>app</code> django app.</p> <p>After doing some research, I stumbled across the following links:</p> <ol> <li><a href="https://stackoverflow.com/questions/2164069/best-way-to-make-djangos-login-required-the-default">Link 1</a></li> <li><a href="https://stackoverflow.com/questions/12597864/how-to-restrict-access-to-pages-based-on-user-type-in-django">Link 2</a></li> <li><a href="https://stackoverflow.com/questions/6017542/restrict-access-to-all-the-pages-in-a-django-app">Link 3</a></li> </ol> <p>The answer in <a href="https://stackoverflow.com/questions/2164069/best-way-to-make-djangos-login-required-the-default">Link 1</a> seems the easiest to implement.</p> <p>Still, I'm having some problems doing it, as I have little experience working with middleware in Django.</p> <p><em>I asked there in the comments</em>:</p> <p>'I want to limit the access of one app, called app, if the user doesn't have login. The middleware RequireLoginMiddleware class should be placed where?'</p> <p>but there has been no reply so far, and I can't seem to find a way past this.</p> <p>Can anyone explain to me what I need to do to <strong>restrict access to all the pages in a Django app to allow only registered users</strong>?</p>
<p><strong>How to fix:</strong></p> <p>Inside <code>views.py</code> in the <code>app</code> app directory, add the following:</p> <pre><code>from django.contrib.auth.decorators import login_required </code></pre> <p>and, right before the view definition:</p> <pre><code>@login_required(login_url="/admin/") # location where users will be able to log in def profile(request): #view </code></pre> <p><strong>This means</strong>: if a user who is not logged in tries to access the view, they will be redirected to the login screen.</p>
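<p>If you don't want to repeat <code>login_url</code> on every view, a common alternative (a sketch of standard Django configuration, not part of the original answer) is to set <code>LOGIN_URL</code> once in <code>settings.py</code>; a bare <code>@login_required</code> then redirects there by default:</p> <pre><code># settings.py
LOGIN_URL = "/admin/"  # where unauthenticated users get redirected

# views.py
from django.contrib.auth.decorators import login_required

@login_required  # uses settings.LOGIN_URL by default
def profile(request):
    ...  # view body
</code></pre>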
python|django|django-views|django-apps|django-middleware
2
1,904,903
50,971,213
Is there a scope for (numpy) random seeds?
<p>My question is related to <a href="https://stackoverflow.com/questions/12368996/what-is-the-scope-of-a-random-seed-in-python">What is the scope of a random seed in Python?</a>. In the case of the above question, it is clarified that there is a (hidden) global <code>Random()</code> instance in the module for <code>random</code>.</p> <p>1) I would like to clarify whether setting the random seed in one module will cause this to be the random seed in <strong>other</strong> modules, and whether there are certain things to be aware of.</p> <p>For instance: Given: moduleA.py, moduleB.py</p> <p>moduleA.py:</p> <pre><code>import random import moduleB random.seed(my_seed) moduleB.randomfct() </code></pre> <p>moduleB.py:</p> <pre><code>import random def randomfct(): #do_things_using_random </code></pre> <p>Does moduleB also use <code>my_seed</code>, or do I have to pass the seed to moduleB.py and set it again?</p> <p>2) Does the order of setting the random seed / importing play any role?</p> <p>For example in <code>moduleA.py</code>:</p> <pre><code>import random random.seed(my_seed) import moduleB </code></pre> <p>3) Is this also the case for setting numpy random seeds, e.g. <code>np.random.seed(42)</code>?</p>
<p>The CPython <code>random.py</code> implementation is very readable. I recommend having a look: <a href="https://github.com/python/cpython/blob/3.6/Lib/random.py" rel="noreferrer">https://github.com/python/cpython/blob/3.6/Lib/random.py</a></p> <p>Anyway, that version of python creates a global <code>random.Random()</code> object and assigns it directly to the <code>random</code> module. This object contains a <code>seed(a)</code> method which <a href="https://github.com/python/cpython/blob/3.6/Lib/random.py#L746" rel="noreferrer">acts as a module function</a> when you call <code>random.seed(a)</code>. Thus the seed state is shared across your entire program.</p> <p>1) Yes. <code>moduleA</code> and <code>moduleB</code> use the same seed. Importing <code>random</code> in <code>moduleA</code> creates the global <code>random.Random()</code> object. Reimporting it in <code>moduleB</code> just gives you the same module and maintains the originally created <code>random.Random()</code> object.</p> <p>2) No. Not in the example you gave, but in general yes it can matter. You might use <code>moduleB</code> before you set the seed in <code>moduleA</code>, in which case your seed wasn't set yet.</p> <p>3) Hard to tell. Much more complicated code base. That said, I would think it works the same way. The authors of numpy would really have to <em>try</em> to make it work in a different way than how it works in the python implementation.</p> <p>In general, if you are worried about seed state, I recommend creating your own random objects and passing them around for generating random numbers.</p>
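<p>As a minimal sketch of that last suggestion (the names here are illustrative): each <code>random.Random</code> instance carries its own state, so seeding one has no effect on the module-level functions or on other instances. numpy offers the same pattern via <code>numpy.random.RandomState</code> (and <code>numpy.random.default_rng</code> in newer versions):</p> <pre><code>import random

rng_a = random.Random(42)  # independent generator with its own state
rng_b = random.Random(42)  # same seed, so it yields the same sequence
print(rng_a.random() == rng_b.random())  # True

random.seed(0)  # only reseeds the hidden module-level instance
print(rng_a.random() == rng_b.random())  # still True, unaffected
</code></pre>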
python|numpy|random|scope
10
1,904,904
56,108,631
Temporarily enable exhaustive pylint messages in vscode
<p>Visual Studio Code by default has sensible pylint settings that limit the number of pylint messages that are output.</p> <p>Is there any way to easily trigger a "pylint run" including the checks that are disabled by vscode by default, either on all modules or on an individual module, without messing with the vscode settings every time?</p>
<p>Unfortunately there isn't a way to temporarily lift the linting cap for a single run from within VS Code itself.</p>
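<p>One workaround (not a VS Code feature, just the pylint CLI; this assumes pylint is on your PATH) is to run an exhaustive pass from the integrated terminal, which leaves your VS Code settings untouched:</p> <pre><code>pylint --enable=all mymodule.py   # run every check on a single module
pylint --enable=all mypackage     # or on a whole package
</code></pre>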
python|visual-studio-code|pylint
1
1,904,905
45,336,670
Searching for a substring in Python
<p>I am learning the basics of Python programming. The program should print all the strings containing substring "ba" from a defined list. The program doesn't provide the expected output. I have analyzed the code and tried every possible change without success. Please check it and help me out with the solution.</p> <p>Thanks in advance!!</p> <pre><code>"""list""" ls = ["black", "back", "bag", "bleach", "biba", "adabas"] el = len(ls) ind1 = 0 sublen = len(ls[ind1]) ind2 = 0 cha1 = "b" cha2 = "a" """printing vals to see if they r not giving any error""" print(sublen) print(ls[ind1]) print(ls[ind1][ind2]) print(ls[ind1][ind2 + 1]) """while loop to get the strings having 'ba' in them""" while ind1 &lt; el: while ind2 &lt; sublen: if cha1 == ls[ind1][ind2] and cha2 == ls[ind1][ind2 + 1]: print(ls[ind1]) ind2 += 1 ind1 += 1 print("finished!!") </code></pre>
<p>The original loop fails because <code>ind2</code> is never reset to 0 after the first word, <code>sublen</code> is only computed for the first word, and <code>ls[ind1][ind2 + 1]</code> can run past the end of a string. The idiomatic way is to use the <code>in</code> operator:</p> <pre><code>ls = ["black", "back", "bag", "bleach", "biba", "adabas"] for item in ls: if 'ba' in item: print(item) </code></pre> <p>The output is:</p> <pre><code>back bag biba adabas </code></pre>
python-3.x|pycharm
0
1,904,906
49,354,606
Difference between 'any' with generator-comprehension and comprehension without parentheses?
<p>Reviewing some of my code, I realized I had written what is essentially:</p> <pre><code>if (any(predicate for predicate in list_of_predicates)): # do something </code></pre> <p>I had expected this to be a syntax error since it was missing '()' or '[]'. So I tried it in ipython:</p> <p>Without bracketing:</p> <pre><code>In [33]: timeit.repeat('any(True for x in xrange(10))', repeat=10) Out[33]: [0.502741813659668, 0.49950194358825684, 0.6626348495483398, 0.5485308170318604, 0.5268769264221191, 0.6033108234405518, 0.4647831916809082, 0.45836901664733887, 0.46139097213745117, 0.4977281093597412] </code></pre> <p>Generator comprehension:</p> <pre><code>In [34]: timeit.repeat('any((True for x in xrange(10)))', repeat=10) Out[34]: [0.7183680534362793, 0.6293261051177979, 0.5045809745788574, 0.4723200798034668, 0.4649538993835449, 0.5164840221405029, 0.5919051170349121, 0.5790350437164307, 0.594775915145874, 0.5718569755554199] </code></pre> <p>Ramping up:</p> <pre><code>In [52]: reg = timeit.repeat('any(True for x in xrange(10))', repeat=100) In [53]: comp = timeit.repeat('any((True for x in xrange(10)))', repeat=100) In [55]: avg(reg) Out[55]: 0.5245428466796875 In [56]: avg(comp) Out[56]: 0.5283565306663514 In [57]: stddev(reg) Out[57]: 0.05609485659272963 In [58]: stddev(comp) Out[58]: 0.058506353663056954 In [59]: reg[50] Out[59]: 0.46748805046081543 In [60]: comp[50] Out[60]: 0.5147180557250977 </code></pre> <p>There seems to be a marginal (possibly noise) performance advantage to not having the parentheses; ramping up the test, it appears more like noise. <strong>Is there a fundamental difference between how these are processed</strong>?</p>
<p>These expressions are equivalent. The performance difference is noise.</p> <p>From the <a href="http://legacy.python.org/dev/peps/pep-0289/" rel="nofollow noreferrer">original genexp PEP</a>:</p> <blockquote> <p>if a function call has a single positional argument, it can be a generator expression without extra parentheses, but in all other cases you have to parenthesize it.</p> </blockquote> <p>And viewing the disassembly, you can see that they compile to the same bytecode:</p> <pre><code>&gt;&gt;&gt; def f(): ... any(True for x in xrange(10)) ... &gt;&gt;&gt; def g(): ... any((True for x in xrange(10))) ... &gt;&gt;&gt; dis.dis(f) 2 0 LOAD_GLOBAL 0 (any) 3 LOAD_CONST 1 (&lt;code object &lt;genexpr&gt; at 0000000002 B46A30, file "&lt;stdin&gt;", line 2&gt;) 6 MAKE_FUNCTION 0 9 LOAD_GLOBAL 1 (xrange) 12 LOAD_CONST 2 (10) 15 CALL_FUNCTION 1 18 GET_ITER 19 CALL_FUNCTION 1 22 CALL_FUNCTION 1 25 POP_TOP 26 LOAD_CONST 0 (None) 29 RETURN_VALUE &gt;&gt;&gt; dis.dis(g) 2 0 LOAD_GLOBAL 0 (any) 3 LOAD_CONST 1 (&lt;code object &lt;genexpr&gt; at 0000000002 BE0DB0, file "&lt;stdin&gt;", line 2&gt;) 6 MAKE_FUNCTION 0 9 LOAD_GLOBAL 1 (xrange) 12 LOAD_CONST 2 (10) 15 CALL_FUNCTION 1 18 GET_ITER 19 CALL_FUNCTION 1 22 CALL_FUNCTION 1 25 POP_TOP 26 LOAD_CONST 0 (None) 29 RETURN_VALUE &gt;&gt;&gt; f.__code__.co_consts (None, &lt;code object &lt;genexpr&gt; at 0000000002B46A30, file "&lt;stdin&gt;", line 2&gt;, 10) &gt;&gt;&gt; dis.dis(f.__code__.co_consts[1]) # the genexp's code object in f 2 0 LOAD_FAST 0 (.0) &gt;&gt; 3 FOR_ITER 11 (to 17) 6 STORE_FAST 1 (x) 9 LOAD_GLOBAL 0 (True) 12 YIELD_VALUE 13 POP_TOP 14 JUMP_ABSOLUTE 3 &gt;&gt; 17 LOAD_CONST 0 (None) 20 RETURN_VALUE &gt;&gt;&gt; dis.dis(g.__code__.co_consts[1]) # the genexp's code object in g 2 0 LOAD_FAST 0 (.0) &gt;&gt; 3 FOR_ITER 11 (to 17) 6 STORE_FAST 1 (x) 9 LOAD_GLOBAL 0 (True) 12 YIELD_VALUE 13 POP_TOP 14 JUMP_ABSOLUTE 3 &gt;&gt; 17 LOAD_CONST 0 (None) 20 RETURN_VALUE </code></pre>
python-2.7
2
1,904,907
53,510,190
Displaying different Django forms at different template locations
<p>I have a model like this:</p> <p><strong>models.py</strong></p> <pre><code>from django.db import models class Foo(models.Model): text = models.TextField() </code></pre> <p>Example instances of this model are:</p> <pre><code>Foo.objects.create(text="My first text [[@shorttext_1@]] random text.") Foo.objects.create(text="Select something from below [[@multipleselect_1@]]. text.") Foo.objects.create(text="A different form [[@shorttext_1@]] and another" "form [[@shorttext_2@]] random texts.") Foo.objects.create(text="Mixed form [[@shorttext_1@]] and another" "form [[@multipleselect_1@]] random text.") </code></pre> <p>The values <code>[[@shorttext_1@]]</code>, <code>[[@multipleselect_1@]]</code> represent the location and the type of the forms to be placed in the template below. <code>[[@ @]]</code> is a randomly chosen markdown style placeholder. </p> <p><strong>forms.py</strong></p> <pre><code>from django import forms class ShortTextForm(forms.Form): # [[@shorttext_1@]] form short_text = forms.CharField(max_length=300) class MultipleSelectionForm(forms.Form): # [[@multipleselect_1@]] form selection = forms.ChoiceField( choices=[('A', 'A text'), ('B', 'B text')], widget=forms.RadioSelect()) </code></pre> <p><strong>views.py</strong></p> <pre><code>from django.shortcuts import render def text_view(request): if request.method == 'POST': # get the info for each form else: foo = Foo.objects.order_by('?').first() return render( request=request, template_name='templates/index.html', context={'text': foo.text}) </code></pre> <p><strong>templates/index.html</strong></p> <pre><code>{% extends "base.html" %} {% block body %} {{ text }} {% endblock %} </code></pre> <p>Is it possible to show the <code>foo.text</code> in the template and render the desired forms?</p> <p>Currently, I have a <code>type</code> variable in my <code>Foo</code> class to designate the type of the form, and my view can render only one desired type of form, which can only be placed at the end of the text. I want to render multiple forms at any location in the text using only one template. </p> <p>EDIT:</p> <p>To give an example of the output I want to achieve:</p> <pre><code>Foo.objects.create(text="The age of the person is [[@shorttext_1@]] and " "another attribute is [[@shorttext_2@]]. " "Additionally select one:&lt;br&gt;[[@multipleselect_1@]]") </code></pre> <p>This object should be rendered in the template such that the output looks like this:</p> <p><a href="https://i.stack.imgur.com/pqIEw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pqIEw.png" alt="enter image description here"></a></p>
<p>Just pass <code>foo</code>:</p> <pre><code>context={'foo': foo} </code></pre> <p>instead of <code>foo.text</code>, and use <code>if...else</code> statements inside your template to decide what to render.</p> <p>This will let you access the <code>type</code> variable too:</p> <pre><code>{% if foo.type == some_type %} {{ form }} {% endif %} </code></pre>
python|django|django-forms|django-templates
-1
1,904,908
54,941,886
Calculate between dates under certain conditions
<p>I want to count days between two or more identical codename cells. What I need is shown right below in the <code>daysBetween</code> column:</p> <pre><code>codename date daysBetween AAA 20-oct-2011 NaN AAB 20-oct-2011 NaN AAB 21-oct-2011 1 AAB 29-oct-2011 9 AAB 21-oct-2012 365 </code></pre> <hr> <p>Below is my raw data: </p> <pre><code>codename date daysBetween AAB 21-oct-2011 NaN AAO 20-oct-2011 NaN AAB 21-oct-2012 NaN AAB 20-oct-2011 NaN AAB 29-oct-2011 NaN </code></pre> <p>I managed to first sort the data by <code>codename</code> and <code>date</code> using </p> <pre><code>file.sort_values(by=['codename', 'date']) </code></pre> <p>Result:</p> <pre><code>codename date daysBetween AAA 20-oct-2011 NaN AAB 20-oct-2011 NaN AAB 21-oct-2011 NaN AAB 29-oct-2011 NaN AAB 21-oct-2012 NaN </code></pre> <p>Here is my problem: when cells in <code>codename</code> are identical, I need to calculate the days between the first date and each of the other dates. </p> <p>I think I need to use pandas <code>Timedelta(date1 - date2).days</code>, but exactly how I find identical cells in <code>codename</code> and then compare the first date to the rest of the dates, I'm not sure. </p>
<p>Use:</p> <pre><code>df['date'] = pd.to_datetime(df['date']) df = df.sort_values(by=['codename', 'date']) df['new'] = (df['date'] - df.groupby('codename')['date'].transform('first')).dt.days print (df) codename date daysBetween new 0 AAA 2011-10-20 NaN 0 1 AAB 2011-10-20 NaN 0 2 AAB 2011-10-21 1.0 1 3 AAB 2011-10-29 9.0 9 4 AAB 2012-10-21 365.0 367 </code></pre> <p><strong>Explanation</strong>:</p> <p>After converting to datetimes and sorting, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.first.html" rel="nofollow noreferrer"><code>first</code></a> to get a <code>Series</code> the same size as the original DataFrame, so it is possible to subtract; finally, convert the timedeltas to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.days.html" rel="nofollow noreferrer"><code>days</code></a>.</p> <p><strong>Detail</strong>:</p> <pre><code>print (df.groupby('codename')['date'].transform('first')) 0 2011-10-20 1 2011-10-20 2 2011-10-20 3 2011-10-20 4 2011-10-20 Name: date, dtype: datetime64[ns] </code></pre>
python|pandas
0
1,904,909
39,904,587
Setting up 'encoding' in Python's gzip.open() doesn't seem to work
<p>Even though I tried to specify the encoding in Python's gzip.open(), it seems to always use cp1252.py to decode the file's content. My code:</p> <pre><code>with gzip.open('file.gz', 'rt', 'cp1250') as f: content = f.read() </code></pre> <p>Response:</p> <blockquote> <p>File "C:\Python34\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 52893: character maps to undefined</p> </blockquote>
<h1>Python 3.x</h1> <p><code>gzip.open</code> is <a href="https://docs.python.org/3.4/library/gzip.html#gzip.open" rel="nofollow">defined</a> as:</p> <blockquote> <p>gzip.open(filename, mode='rb', compresslevel=9, encoding=None, errors=None, newline=None)</p> </blockquote> <p>Therefore, <code>gzip.open('file.gz', 'rt', 'cp1250')</code> sends it these arguments:</p> <ul> <li>filename = 'file.gz'</li> <li>mode = 'rt'</li> <li>compresslevel = 'cp1250'</li> </ul> <p>This is clearly wrong, because the intention is to use 'cp1250' encoding. The <code>encoding</code> argument can either be sent as the fourth positional argument or as a keyword argument:</p> <pre><code>gzip.open('file.gz', 'rt', 5, 'cp1250') # 4th positional argument gzip.open('file.gz', 'rt', encoding='cp1250') # keyword argument </code></pre> <h1>Python 2.x</h1> <p>The <a href="https://docs.python.org/2/library/gzip.html#gzip.open" rel="nofollow">Python 2 version of <code>gzip.open</code></a> does not take the <code>encoding</code> argument and it does not accept text modes, so the decoding has to be done explicitly after reading the data:</p> <pre><code>with gzip.open('file.gz', 'rb') as f: data = f.read() decoded_data = data.decode('cp1250') </code></pre>
python|python-3.x|character-encoding|gzip
0
1,904,910
52,404,175
Restructure word list into two columns
<p>I have a column of words. Is there a pandas function that lets me pair each word with every other word, returning two new columns: one column holds the first word and the second column holds the word it is connected to. In general, the idea is to create a word table of the different words and compare them against each other in two columns. The following table should help interpret the problem.</p> <pre><code>import pandas as pd r1=['tag1','tag2', 'tag3', 'tag4'] df=pd.DataFrame(r1,columns=['text']) </code></pre> <p>Desired outcome (the first column shows the first word, the second column the connected word; the process also runs vice versa, for each of the other words in the list): </p> <pre><code> col1 | col2 -------------- tag1 | tag2 tag1 | tag3 tag1 | tag4 tag2 | tag1 tag2 | tag3 tag2 | tag4 tag3 | tag1 tag3 | tag2 tag3 | tag4 tag4 | tag1 tag4 | tag2 tag4 | tag3 </code></pre>
<p>Using <code>itertools.permutations</code></p> <p><strong>Demo:</strong></p> <pre><code>from itertools import permutations import pandas as pd r1=['tag1','tag2', 'tag3', 'tag4'] df = pd.DataFrame(list(permutations(r1,2)), columns=['col1','col2']) #df = pd.DataFrame([i for i in permutations(r1,2)], columns=['col1','col2']) print(df) </code></pre> <p><strong>Output:</strong></p> <pre><code> col1 col2 0 tag1 tag2 1 tag1 tag3 2 tag1 tag4 3 tag2 tag1 4 tag2 tag3 5 tag2 tag4 6 tag3 tag1 7 tag3 tag2 8 tag3 tag4 9 tag4 tag1 10 tag4 tag2 11 tag4 tag3 </code></pre>
python-3.x|pandas|dataframe|grouping|pandas-groupby
2
1,904,911
34,274,505
How to use a variable as a dictionary key in Python
<p>Hi, I'm very new at Python, so please forgive me if I'm asking a very stupid question. I have this dictionary called array with some values; now I want to input the key as a variable and use that to print the value assigned to that key.</p> <pre><code>array = {'color':'blue' , 'size':'small'} print array ['color'] </code></pre> <p>This works just fine, outputting the value blue. But if I try this it doesn't work:</p> <pre><code>array = {'color':'blue' , 'size':'small'} var = input ('input a key') #input would be " color " or " size " print array[var] </code></pre> <p>I think there is a very easy solution to this. Thanks for helping in advance :)</p>
<p>To avoid a <code>KeyError</code> exception, you can use <code>get</code>, stripping the surrounding whitespace from the input first:</p> <pre><code>&gt;&gt;&gt; mydict = {'color':'blue' , 'size':'small'} &gt;&gt;&gt; var = input('input a key: ').strip() input a key:  color  &gt;&gt;&gt; mydict.get(var, 'Not Found') 'blue' </code></pre>
python|dictionary
2
1,904,912
38,907,220
Graphviz executables not found
<p>I'm familiar with the various threads that already exist regarding this problem.</p> <p>I'm on a Windows 7 machine. I'm just trying to run the example code to draw a decision tree:</p> <pre><code>from sklearn.datasets import load_iris from sklearn import tree clf = tree.DecisionTreeClassifier() iris = load_iris() clf = clf.fit(iris.data, iris.target) from sklearn.externals.six import StringIO import pydotplus dot_data = StringIO() tree.export_graphviz(clf, out_file=dot_data) graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) graph.write_pdf("iris.pdf") </code></pre> <p>I installed graphviz and added it as a PATH variable. I installed pydot (now pydotplus) after installing the python's graphviz library. I still get the error:</p> <pre><code>InvocationException: GraphViz's executables not found </code></pre>
<p><a href="http://www.graphviz.org/Download_windows.php" rel="nofollow">It looks like the installer isn't setting the PATH variable for you</a>, you'll need to add the installation folder of Graphviz to your PATH manually.</p>
python-2.7|scikit-learn|graphviz
1
1,904,913
40,570,092
How to deploy a python client script on heroku?
<p>Basically, I have a python script which, using the python-twitter API, fetches tweets for a particular hashtag and stores them in a database. The script does this every 30 seconds. How do I deploy the script to run on Heroku?</p>
<p>Define a "worker" process type in your Procfile that invokes your script.</p>
python|heroku
0
1,904,914
64,225,210
Python ast: decide whether a FunctionDef is inside a ClassDef or not
<p>I'd like to build an AST from Python source code, then get specific information from the AST. I'm facing the following problem: while walking through a ClassDef's body is viable, how do I decide whether a method is inside a class or not?</p> <p>The code I build the AST from:</p> <pre class="lang-py prettyprint-override"><code>class A: def foo(self): pass def foo(self): pass </code></pre> <p>In this example I will hit both <code>foo</code>s, but I will not be able to tell whether each one is from the class or not (they have the same set of parameters; badly named, but the code can be interpreted).</p> <pre class="lang-py prettyprint-override"><code> def build_ast(self): with open(self.path, 'r', encoding='utf-8') as fp: tree = ast.parse(fp.read()) for node in ast.walk(tree): if isinstance(node, ast.FunctionDef): print(ast.dump(node)) # access the parent if it has one </code></pre>
<p>I'm not totally satisfied with my final solution, but apparently it works for Python 3.8.3:</p> <p>As far as I experienced, <code>ast.walk</code> traverses ClassDef nodes before FunctionDef nodes.</p> <pre class="lang-py prettyprint-override"><code>def build_ast(self): with open(self.path, 'r', encoding='utf-8') as fp: tree = ast.parse(fp.read()) for node in ast.walk(tree): if isinstance(node, ast.FunctionDef): if hasattr(node, &quot;parent&quot;): print(node.parent.name, node.name) else: print(node.name, &quot;is not in a class.&quot;) if isinstance(node, ast.ClassDef): for child in node.body: if isinstance(child, ast.FunctionDef): child.parent = node </code></pre>
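<p>A more general variant of the same idea (a sketch, not part of the original answer): instead of relying on traversal order, link every child to its parent in a first pass with <code>ast.iter_child_nodes</code>, which also handles nested classes and functions:</p> <pre class="lang-py prettyprint-override"><code>import ast

def report_methods(path):
    with open(path, 'r', encoding='utf-8') as fp:
        tree = ast.parse(fp.read())
    # First pass: attach a .parent attribute to every node.
    for node in ast.walk(tree):
        for child in ast.iter_child_nodes(node):
            child.parent = node
    # Second pass: a FunctionDef is a method iff its parent is a ClassDef.
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            if isinstance(node.parent, ast.ClassDef):
                print(node.parent.name, node.name)
            else:
                print(node.name, "is not in a class.")
</code></pre>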
python|python-3.x|abstract-syntax-tree
1
1,904,915
70,535,586
Is pythonw exe an "external" application
<p>Using Python 3.10:</p> <pre><code>import os os.system('notepad.exe') </code></pre> <p>Notepad launches, but if I try that with pythonw.exe, IDLE doesn't launch, though the exit code is also 0. Not sure why. Is it because pythonw is not an external application? How can I launch IDLE from the interpreter?</p> <p>I apologize, not sure how to include code properly....</p> <p>Many thanks</p>
<p><code>pythonw</code> is not IDLE. <code>pythonw</code> is just another copy of <code>python</code> that is marked as a Windows GUI application, so it doesn't attach to your terminal session. IDLE is a separate command. Depending on where your Python installation is, you can run:</p> <pre><code>C:\Python310\Lib\idlelib\idle.bat </code></pre> <p>Or, even easier:</p> <pre><code>pythonw -m idlelib </code></pre>
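<p>To answer the last part of the question, a minimal sketch for launching IDLE from inside a running interpreter (this assumes pythonw is on your PATH):</p> <pre><code>import subprocess

# Start IDLE without blocking the current interpreter.
subprocess.Popen(['pythonw', '-m', 'idlelib'])
</code></pre>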
python|external
1
1,904,916
69,784,101
I have all my CSV data on one row; how can I get it on separate rows?
<pre><code>car_name,pic_url,plate &quot;Porsche 911,Toyota Coaster,Toyota Fortuner,Toyota Fortuner,Toyota Yaris,Toyota Camry,Toyota Camry,Maserati Ghibli,Nissan Altima,Mercedes-Benz AMG GT&quot;,&quot;http://img03.platesmania.com/211021/m/17801541.jpg,http://img03.platesmania.com/211021/m/17795740.jpg,http://img03.platesmania.com/211018/m/17775860.jpg,http://img03.platesmania.com/211018/m/17775844.jpg,http://img03.platesmania.com/211014/m/17747911.jpg,http://img03.platesmania.com/211014/m/17747842.jpg,http://img03.platesmania.com/211013/m/17740349.jpg,http://img03.platesmania.com/211012/m/17739094.jpg,http://img03.platesmania.com/211012/m/17733851.jpg,http://img03.platesmania.com/211008/m/17702219.jpg&quot;,&quot;7 32900,6 63196,17 44571,17 44571,5 72738,8 52101,8 52101,9 71531,12 54194,7 2494&quot; </code></pre>
<p>If all the rows have no missing values, expand on the comment from @QuentinLerebours above.</p> <pre><code>import pandas as pd filename = &lt;csv file name here&gt; with open(filename,'r') as data: res = data.read().split('&quot;') header = res[0].strip().split(&quot;,&quot;) # strip() drops the trailing newline car_names = res[1].split(&quot;,&quot;) pic_urls = res[3].split(&quot;,&quot;) #res[2],res[4] are commas plates = res[5].split(&quot;,&quot;) out = zip(car_names,pic_urls,plates) df = pd.DataFrame(out, columns=header) df.to_csv(&quot;out.csv&quot;, index=False) </code></pre>
python|csv
1
1,904,917
55,776,177
Find the file name using python
<p>I have a file, filename.txt, with the following format:</p> <pre><code>h:\abc\abc_Foldhr_1\hhhhhhhhhh8db h:\abc\abc_Foldhr_1\hhhhhhhhhh8dc h:\abc\abc_Foldhr_1\hhhhhhhhhh8dx h:\abc\abc_Foldhr_1\hhhhhhhhhh8du h:\abc\abc_Foldhr_1\hhhhhhhhhh8d4 h:\abc\abc_Foldhr_1\hhhhhhhhhh8d5 h:\abc\abc_Foldhr_1\hhhhhhhhhh8d6 h:\abc\abc_Foldhr_1\hhhhhhhhhh8d7 h:\abc\abc_Foldhr_1\hhhhhhhhhh8d8 </code></pre> <p>I was able to read it but was unable to store it in a pandas data frame, a list, or a dictionary.</p> <pre><code>import pandas as pd #data = pd.read_excel ('/home/home/Documents/pythontestfiles/HON-Lib.xlsx') data = pd.read_table('/home/home/Documents/pythontestfiles/filename.txt', delim_whitespace=True, names=['A']) df = pd.DataFrame(data, columns= ['A']) print(df) </code></pre> <p>and I would like to list out only the file names, as:</p> <pre><code>hhhhhhhhhh8db . . . hhhhhhhhhh8d6 hhhhhhhhhh8d7 hhhhhhhhhh8d8 </code></pre> <p>The purpose of storing them in a data frame or dictionary is to compare them against the Excel file result.</p>
<p>Using <code>split()</code>:</p> <pre><code>res = [] with open('filename.txt', 'r') as file: content = file.readlines() for line in content: # print(line.strip().split('\\')[-1]) # to print each name res.append(line.strip().split('\\')[-1]) # strip the newline, then append the name to the list print(res) </code></pre> <p><strong>EDIT</strong>:</p> <p>Elaborating on the answer given: the <code>split()</code> method splits the string on each <code>\\</code>. Consider the following example (note the raw string, which keeps the backslashes literal):</p> <pre><code>s = r'h:\abc\abc_Foldhr_1\hhhhhhhhhh8db' print(s.split('\\')) </code></pre> <p>Which gives the output:</p> <pre><code>['h:', 'abc', 'abc_Foldhr_1', 'hhhhhhhhhh8db'] </code></pre> <p>The <code>[-1]</code> index grabs the last element, hence:</p> <pre><code>print(s.split('\\')[-1]) </code></pre> <p>Would give:</p> <pre><code>hhhhhhhhhh8db </code></pre>
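<p>Since the question also asks about storing the result for comparison, a short sketch (the column name 'A' just mirrors the question's code):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'A': res})  # res is the list built above
# df['A'] can now be compared against the column read from the Excel file.
</code></pre>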
python|string|pandas|filenames
2
1,904,918
50,165,148
Google StackDriver Logging on Flask App - Difference between default Flask logger?
<p>I'm trying to see the difference between the default Flask logger and the Stackdriver logger in GAE's sample application: <a href="https://cloud.google.com/python/getting-started/using-pub-sub" rel="nofollow noreferrer">https://cloud.google.com/python/getting-started/using-pub-sub</a></p> <p>Code without StackDriver logger:</p> <pre><code>def create_app(config, debug=False, testing=False, config_overrides=None): app = Flask(__name__) app.config.from_object(config) app.debug = debug app.testing = testing if config_overrides: app.config.update(config_overrides) # Configure logging if not app.testing: logging.basicConfig(level=logging.INFO) </code></pre> <p>Code with StackDriver logger:</p> <pre><code>def create_app(config, debug=False, testing=False, config_overrides=None): app = Flask(__name__) app.config.from_object(config) app.debug = debug app.testing = testing if config_overrides: app.config.update(config_overrides) # [START setup_logging] if not app.testing: client = google.cloud.logging.Client(app.config['PROJECT_ID']) # Attaches a Google Stackdriver logging handler to the root logger client.setup_logging(logging.INFO) </code></pre> <p>There's some difference in the StackDriver code, where a logger client is imported from google cloud. However, the output of the logs seems similar:</p> <p>Output Log without StackDriver: <a href="https://i.stack.imgur.com/3cT1H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3cT1H.png" alt="enter image description here"></a></p> <p>Output Log with StackDriver:</p> <p><a href="https://i.stack.imgur.com/2ORNm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2ORNm.png" alt="enter image description here"></a></p> <p>These logs do not look that different with or without StackDriver.</p> <p>When I go to the StackDriver logs I get redirected to the default logs in GAE. Is there anything special about StackDriver loggers that the normal Flask logger cannot do?</p>
<p>Taking a look at the two functions that you are using for logger configuration, <a href="https://docs.python.org/2/library/logging.html#logging.basicConfig" rel="nofollow noreferrer">basicConfig</a> and <a href="https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/logging/google/cloud/logging/handlers/handlers.py#L114" rel="nofollow noreferrer">setup_logging</a>, your loggers have similar settings, so it makes sense that you get similar log output.</p> <p>I'm not sure what you expected to see in the Stackdriver Logging viewer, since the two pictures that you attached look right to me: they are normal <a href="https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry" rel="nofollow noreferrer">log entries for Stackdriver Logging</a>. Notice that, by default, App Engine logging is provided by Stackdriver Logging, as explained in <a href="https://cloud.google.com/appengine/docs/standard/python/logs/#Python_Logs_storage" rel="nofollow noreferrer">this document</a>.</p> <p>The advantage of <a href="https://cloud.google.com/logging/" rel="nofollow noreferrer">Stackdriver Logging</a> is better management of the logs and the ability to analyze them. You can have a look at <a href="https://cloud.google.com/logging/docs/view/overview" rel="nofollow noreferrer">this tutorial</a> to get an idea of it.</p>
python|google-app-engine|google-cloud-stackdriver
1
1,904,919
63,809,334
Internal server error while deploying with docker
<p>I'm trying to dockerize a webservice. I configured the following files:</p> <p><strong>dockerfile</strong></p> <pre><code>FROM python:3 ENV PYTHONUNBUFFERED 1 RUN mkdir /code WORKDIR /code COPY requirements.txt /code/ RUN pip install -r requirements.txt COPY . /code/ </code></pre> <p><strong>docker-compose.yml</strong></p> <pre><code>version: '3' services: db: image: mysql container_name: DB_test environment: MYSQL_DATABASE: 'test' MYSQL_USER: 'user' MYSQL_PASSWORD: 'password' MYSQL_ROOT_PASSWORD: 'password' ports: - &quot;3306:3306&quot; volumes: - ./sql:/docker-entrypoint-initdb.d web: build: . command: gunicorn --bind 0.0.0.0:8000 manage:app ports: - &quot;8000:8000&quot; depends_on: - db </code></pre> <p><strong>sql/init.sql</strong></p> <pre><code>GRANT ALL PRIVILEGES ON test TO 'user'@'%'; FLUSH PRIVILEGES; </code></pre> <p><strong>sqlalchemy_conf</strong></p> <pre><code>{ &quot;HR_API_ID&quot; : &quot;api_id&quot;, &quot;HR_API_CODE&quot; : &quot;api_code&quot;, &quot;DB_HOST&quot; : &quot;0.0.0.0&quot;, &quot;DB_PORT&quot; : &quot;3306&quot;, &quot;DB_USERNAME&quot; : &quot;user&quot;, &quot;DB_PASSWORD&quot; : &quot;password&quot;, &quot;DB_NAME&quot; : &quot;test&quot;, &quot;DB_CONNECTION&quot; : &quot;mysql&quot; } </code></pre> <p>After that I run:</p> <pre><code>docker-compose build docker-compose up </code></pre> <p>And the error that appears in the console when I try to access <a href="http://0.0.0.0:8000/" rel="nofollow noreferrer">http://0.0.0.0:8000/</a> is the following:</p> <pre><code>sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, &quot;Can't connect to MySQL server on '0.0.0.0' ([Errno 111] Connection refused)&quot;) (Background on this error at: http://sqlalche.me/e/e3q8) </code></pre> <p>And in <a href="http://0.0.0.0:8000/" rel="nofollow noreferrer">http://0.0.0.0:8000/</a>:</p> <pre><code>Internal Server Error </code></pre> <p>What am I doing wrong?</p> <p>UPDATE:</p> <pre><code>DB_HOST&quot; : &quot;db&quot; </code></pre> <p>new error:</p> <pre><code>sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1045, &quot;Access denied for user 'user'@'172.19.0.3' (using password: YES)&quot;) (Background on this error at: http://sqlalche.me/e/e3q8) </code></pre> <p>I'm not sure, but running <strong>init.sql</strong> at the beginning should avoid this error.</p>
<p>According to <a href="https://docs.docker.com/compose/networking/" rel="nofollow noreferrer">https://docs.docker.com/compose/networking/</a>, your DB_HOST should be <code>db</code>, which is your db service name, not 0.0.0.0.</p>
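<p>Regarding the updated 'Access denied' error, two things are worth checking (suggestions, not verified against your exact setup). First, a database-level grant in MySQL needs the <code>test.*</code> form:</p> <pre><code>-- sql/init.sql
GRANT ALL PRIVILEGES ON test.* TO 'user'@'%';
FLUSH PRIVILEGES;
</code></pre> <p>Second, the official mysql image only runs the scripts in /docker-entrypoint-initdb.d on the very first initialization, so a stale database volume can keep old credentials; running <code>docker-compose down -v</code> before the next <code>docker-compose up</code> forces the init scripts to run again.</p>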
docker|docker-compose|dockerfile|mysql-python
1
1,904,920
65,449,280
Tensorflow giving Unknown Dtype Policy
<p>I am trying to train a model in Colab and then transfer it to Kaggle. The model seems to work fine in Colab as a .h5 model. The problem seems to occur with EfficientNet B4 and later in Kaggle. There is no documentation about this. I am training this model on a TPU and doing the inference on a GPU, but even if I train the model on a GPU this problem is there.</p> <p>My error log:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-2-2ca8d6accabd&gt; in &lt;module&gt; 10 policy = mixed_precision.Policy('mixed_bfloat16') 11 mixed_precision.set_policy(policy) ---&gt; 12 model = tf.keras.models.load_model(r&quot;../input/model-for-training/effieceintnettpurandomcrop.h5&quot;) 13 14 model.summary() /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py in load_model(filepath, custom_objects, compile) 182 if (h5py is not None and ( 183 isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))): --&gt; 184 return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile) 185 186 if sys.version_info &gt;= (3, 4) and isinstance(filepath, pathlib.Path): /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile) 176 model_config = json.loads(model_config.decode('utf-8')) 177 model = model_config_lib.model_from_config(model_config, --&gt; 178 custom_objects=custom_objects) 179 180 # set weights /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/model_config.py in model_from_config(config, custom_objects) 53 '`Sequential.from_config(config)`?') 54 from tensorflow.python.keras.layers import deserialize # pylint: disable=g-import-not-at-top ---&gt; 55 return deserialize(config, custom_objects=custom_objects) 56 57 /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects) 107 module_objects=globs, 108 custom_objects=custom_objects, --&gt; 109 printable_module_name='layer') /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 371 custom_objects=dict( 372 list(_GLOBAL_CUSTOM_OBJECTS.items()) + --&gt; 373 list(custom_objects.items()))) 374 with CustomObjectScope(custom_objects): 375 return cls.from_config(cls_config) /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/sequential.py in from_config(cls, config, custom_objects) 396 for layer_config in layer_configs: 397 layer = layer_module.deserialize(layer_config, --&gt; 398 custom_objects=custom_objects) 399 model.add(layer) 400 if (not model.inputs and build_input_shape and /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects) 107 module_objects=globs, 108 custom_objects=custom_objects, --&gt; 109 printable_module_name='layer') /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 373 list(custom_objects.items()))) 374 with CustomObjectScope(custom_objects): --&gt; 375 return cls.from_config(cls_config) 376 else: 377 # Then `cls` may be a function returning a class. /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in from_config(cls, config) 653 A layer instance.
654 &quot;&quot;&quot; --&gt; 655 return cls(**config) 656 657 def compute_output_shape(self, input_shape): /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/layers/normalization.py in __init__(self, axis, momentum, epsilon, center, scale, beta_initializer, gamma_initializer, moving_mean_initializer, moving_variance_initializer, beta_regularizer, gamma_regularizer, beta_constraint, gamma_constraint, renorm, renorm_clipping, renorm_momentum, fused, trainable, virtual_batch_size, adjustment, name, **kwargs) 198 **kwargs): 199 super(BatchNormalizationBase, self).__init__( --&gt; 200 name=name, **kwargs) 201 if isinstance(axis, (list, tuple)): 202 self.axis = axis[:] /opt/conda/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs) 454 self._self_setattr_tracking = False # pylint: disable=protected-access 455 try: --&gt; 456 result = method(self, *args, **kwargs) 457 finally: 458 self._self_setattr_tracking = previous_value # pylint: disable=protected-access /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __init__(self, trainable, name, dtype, dynamic, **kwargs) 336 # fields, like the loss scale, are used by Models. For subclassed networks, 337 # the compute and variable dtypes are used as like any ordinary layer. --&gt; 338 self._set_dtype_policy(dtype) 339 # Boolean indicating whether the layer automatically casts its inputs to the 340 # layer's compute_dtype. /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _set_dtype_policy(self, dtype) 1986 self._dtype_policy = dtype 1987 elif isinstance(dtype, dict): -&gt; 1988 self._dtype_policy = policy.deserialize(dtype) 1989 elif dtype: 1990 self._dtype_policy = policy.Policy(dtypes.as_dtype(dtype).name) /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/mixed_precision/experimental/policy.py in deserialize(config, custom_objects) 628 module_objects=module_objects, 629 custom_objects=custom_objects, --&gt; 630 printable_module_name='dtype policy') /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 360 config = identifier 361 (cls, cls_config) = class_and_config_for_serialized_keras_object( --&gt; 362 config, module_objects, custom_objects, printable_module_name) 363 364 if hasattr(cls, 'from_config'): /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name) 319 cls = get_registered_object(class_name, custom_objects, module_objects) 320 if cls is None: --&gt; 321 raise ValueError('Unknown ' + printable_module_name + ': ' + class_name) 322 323 cls_config = config['config'] ValueError: Unknown dtype policy: PolicyV1 </code></pre> <p>My model Training Code:</p> <pre><code>import tensorflow as tf from tensorflow.keras.mixed_precision import experimental as mixed_precision from tensorflow.keras.layers import BatchNormalization from tensorflow.keras.regularizers import l1 policy = mixed_precision.Policy('mixed_bfloat16') mixed_precision.set_policy(policy) reg = l1(0.001) with strategy.scope(): base_model = tf.keras.applications.EfficientNetB3(weights=&quot;imagenet&quot;, include_top=False) base_model.trainable = True model = tf.keras.Sequential([ tf.keras.layers.BatchNormalization(), base_model, BatchNormalization(), 
tf.keras.layers.LeakyReLU(), tf.keras.layers.GlobalAveragePooling2D(), tf.keras.layers.Dense(256), BatchNormalization(), tf.keras.layers.LeakyReLU(), BatchNormalization(), tf.keras.layers.Dense(128), BatchNormalization(), tf.keras.layers.LeakyReLU(), BatchNormalization(), tf.keras.layers.Dropout(0.4), BatchNormalization(), tf.keras.layers.Dense(64), BatchNormalization(), tf.keras.layers.LeakyReLU(), tf.keras.layers.Dense(32), BatchNormalization(), tf.keras.layers.Dropout(0.4), tf.keras.layers.LeakyReLU(), tf.keras.layers.Dense(16), tf.keras.layers.LeakyReLU(), tf.keras.layers.Dense(8), tf.keras.layers.LeakyReLU(), tf.keras.layers.Dense(len(CLASSES), activation='softmax') ]) model.compile( optimizer=tf.keras.optimizers.SGD(lr=0.04), loss='sparse_categorical_crossentropy', metrics=['sparse_categorical_accuracy']) from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping early = EarlyStopping(monitor='val_loss', mode='min', patience=5) STEPS_PER_EPOCH = 17118 // BATCH_SIZE VALID_STEPS = 4279 // BATCH_SIZE checkpoint_filepath = 'gs://mithil/tmp/checkpoint_temp' model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_filepath, save_weights_only=True, monitor='val_sparse_categorical_accuracy', mode='max', save_best_only=True) history = model.fit(train_dataset, steps_per_epoch=STEPS_PER_EPOCH, epochs=25, validation_data=valid_dataset, validation_steps=VALID_STEPS, callbacks=[early,model_checkpoint_callback]) model.load_weights(checkpoint_filepath) </code></pre> <p>Inference Code</p> <pre><code>import tensorflow as tf import numpy as np import os import pandas as pd import cv2 from collections import Counter from tensorflow.keras.mixed_precision import experimental as mixed_precision from tensorflow.keras.regularizers import l1 policy = mixed_precision.Policy('mixed_bfloat16') mixed_precision.set_policy(policy) model = tf.keras.models.load_model(r&quot;../input/model-for-training/effieceintnettpurandomcrop.h5&quot;) model.summary() path = &quot;../input/cassava-leaf-disease-classification/test_images&quot; </code></pre>
<p>I had the same problem when I saved the model with TF version 2.4 and loaded it with TF version 2.3. I fixed it by upgrading to 2.4.</p> <p>Maybe helpful to someone who stumbles upon this question from Google.</p>
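<p>A quick way to confirm that kind of version mismatch (a sketch, not specific to this model) is to print the runtime version in each environment and upgrade the loading side if it is older than the saving side:</p> <pre><code>import tensorflow as tf
print(tf.__version__)  # should be &gt;= the version that saved the .h5 file
</code></pre> <p>If the loading side is older, <code>pip install "tensorflow&gt;=2.4"</code> (or pinning the exact version used for training) should resolve the PolicyV1 error.</p>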
python|python-3.x|tensorflow|keras|deep-learning
0
1,904,921
65,322,142
How to convert an XML to CSV dynamically
<p>I want to build a tool that converts XMLs to CSV. I am using Python, but can move to a different tool if better.</p> <p>These XMLs do not always follow the same schema, so I need to convert the structure to CSV automatically, without always knowing the tree structure. The main tags are known and always the same; some XMLs might use all tags and some only a few. I tried using xml.etree and managed to work with the XML, but not with a dynamic XML input. Is that even possible?</p> <p>Here is a sample of my XML input file content:</p> <pre><code>&lt;Process&gt; &lt;ProcessName&gt;Vault-2-A&lt;/ProcessName&gt; &lt;ProcessEnabled&gt;True&lt;/ProcessEnabled&gt; &lt;ProcessType&gt;N2N&lt;/ProcessType&gt; &lt;NonDuplicationMethod&gt;Delete&lt;/NonDuplicationMethod&gt; &lt;OnFileExistsInDest&gt;Overwrite&lt;/OnFileExistsInDest&gt; &lt;ProcessScheduling&gt;ExternalActivation&lt;/ProcessScheduling&gt; &lt;ExternalActivationLevel&gt;Process&lt;/ExternalActivationLevel&gt; &lt;ProcessRecursive&gt;True&lt;/ProcessRecursive&gt; &lt;FileSelectionPattern&gt;*&lt;/FileSelectionPattern&gt; &lt;Rules&gt; &lt;Rule1&gt; &lt;RuleName&gt;V2A&lt;/RuleName&gt; &lt;SourcePort&gt; &lt;Name&gt;xxx&lt;/Name&gt; &lt;Type&gt;Vault&lt;/Type&gt; &lt;VaultName&gt;yyy&lt;/VaultName&gt; &lt;UserName&gt;user&lt;/UserName&gt; &lt;FolderName&gt;Root\&lt;/FolderName&gt; &lt;/SourcePort&gt; &lt;DestPort&gt; &lt;Name&gt;MyFileSystem&lt;/Name&gt; &lt;Type&gt;FileSystem&lt;/Type&gt; &lt;FolderName&gt;D:\xxx\&lt;/FolderName&gt; &lt;/DestPort&gt; &lt;/Rule1&gt; &lt;Rule2&gt; &lt;RuleName&gt;A2V&lt;/RuleName&gt; &lt;SourcePort&gt; &lt;Name&gt;xxx&lt;/Name&gt; &lt;Type&gt;Vault&lt;/Type&gt; &lt;VaultName&gt;yyyn&lt;/VaultName&gt; &lt;UserName&gt;user&lt;/UserName&gt; &lt;SafeName&gt;userTest&lt;/SafeName&gt; &lt;FolderName&gt;Root\&lt;/FolderName&gt; &lt;/SourcePort&gt; &lt;DestPort&gt; &lt;Name&gt;sftp&lt;/Name&gt; &lt;Type&gt;sftp&lt;/Type&gt; &lt;FolderName&gt;D:\Accellion Tests\DCA-IN&lt;/FolderName&gt; &lt;ArchiveFolder&gt;\arc&lt;/ArchiveFolder&gt; &lt;/DestPort&gt; &lt;/Rule2&gt; &lt;Rule3&gt; &lt;RuleName&gt;Vault-2-Accellion&lt;/RuleName&gt; &lt;NOND&gt;true&lt;/NOND&gt; &lt;SourcePort&gt; &lt;Name&gt;A&lt;/Name&gt; &lt;Type&gt;Vault&lt;/Type&gt; &lt;VaultName&gt;Am&lt;/VaultName&gt; &lt;UserName&gt;g&lt;/UserName&gt; &lt;SafeName&gt;test&lt;/SafeName&gt; &lt;FolderName&gt;Root\&lt;/FolderName&gt; &lt;/SourcePort&gt; &lt;DestPort&gt; &lt;Name&gt;MyFileSystem&lt;/Name&gt; &lt;Type&gt;FileSystem&lt;/Type&gt; &lt;FolderName&gt;D:\Tests\DCA-IN&lt;/FolderName&gt; &lt;/DestPort&gt; &lt;/Rule3&gt; &lt;/Rules&gt; &lt;UserExits&gt; &lt;/UserExits&gt; &lt;/Process&gt; </code></pre> <p>thanks david</p>
<p>As per the comment, you can use <code>xmltodict</code> to convert XML to a dictionary. Then you can use the <code>csv</code> module to output the results, using <code>DictWriter()</code>.</p> <p>You'll need to think about how to display the data in CSV. By default, the <code>&lt;Rules&gt;</code> data will be output in one cell as an OrderedDict. You may want to flatten the dictionary or allow for repeating data.</p> <p><strong>As an example:</strong></p> <pre><code>import csv import xmltodict def save_dict_to_csv(filename, dict): with open(filename, 'w') as csvfile: w = csv.DictWriter(csvfile, dict.keys()) w.writeheader() w.writerow(dict) xml = r&quot;&quot;&quot; &lt;Process&gt; &lt;ProcessName&gt;Vault-2-A&lt;/ProcessName&gt; &lt;ProcessEnabled&gt;True&lt;/ProcessEnabled&gt; &lt;ProcessType&gt;N2N&lt;/ProcessType&gt; &lt;NonDuplicationMethod&gt;Delete&lt;/NonDuplicationMethod&gt; &lt;OnFileExistsInDest&gt;Overwrite&lt;/OnFileExistsInDest&gt; &lt;ProcessScheduling&gt;ExternalActivation&lt;/ProcessScheduling&gt; &lt;ExternalActivationLevel&gt;Process&lt;/ExternalActivationLevel&gt; &lt;ProcessRecursive&gt;True&lt;/ProcessRecursive&gt; &lt;FileSelectionPattern&gt;*&lt;/FileSelectionPattern&gt; &lt;Rules&gt; &lt;Rule1&gt; &lt;RuleName&gt;V2A&lt;/RuleName&gt; &lt;SourcePort&gt; &lt;Name&gt;xxx&lt;/Name&gt; &lt;Type&gt;Vault&lt;/Type&gt; &lt;VaultName&gt;yyy&lt;/VaultName&gt; &lt;UserName&gt;user&lt;/UserName&gt; &lt;FolderName&gt;Root\&lt;/FolderName&gt; &lt;/SourcePort&gt; &lt;DestPort&gt; &lt;Name&gt;MyFileSystem&lt;/Name&gt; &lt;Type&gt;FileSystem&lt;/Type&gt; &lt;FolderName&gt;D:\xxx\&lt;/FolderName&gt; &lt;/DestPort&gt; &lt;/Rule1&gt; &lt;Rule2&gt; &lt;RuleName&gt;A2V&lt;/RuleName&gt; &lt;SourcePort&gt; &lt;Name&gt;xxx&lt;/Name&gt; &lt;Type&gt;Vault&lt;/Type&gt; &lt;VaultName&gt;yyyn&lt;/VaultName&gt; &lt;UserName&gt;user&lt;/UserName&gt; &lt;SafeName&gt;userTest&lt;/SafeName&gt; &lt;FolderName&gt;Root\&lt;/FolderName&gt; &lt;/SourcePort&gt; &lt;DestPort&gt; &lt;Name&gt;sftp&lt;/Name&gt; &lt;Type&gt;sftp&lt;/Type&gt; &lt;FolderName&gt;D:\Accellion Tests\DCA-IN&lt;/FolderName&gt; &lt;ArchiveFolder&gt;\arc&lt;/ArchiveFolder&gt; &lt;/DestPort&gt; &lt;/Rule2&gt; &lt;Rule3&gt; &lt;RuleName&gt;Vault-2-Accellion&lt;/RuleName&gt; &lt;NOND&gt;true&lt;/NOND&gt; &lt;SourcePort&gt; &lt;Name&gt;A&lt;/Name&gt; &lt;Type&gt;Vault&lt;/Type&gt; &lt;VaultName&gt;Am&lt;/VaultName&gt; &lt;UserName&gt;g&lt;/UserName&gt; &lt;SafeName&gt;test&lt;/SafeName&gt; &lt;FolderName&gt;Root\&lt;/FolderName&gt; &lt;/SourcePort&gt; &lt;DestPort&gt; &lt;Name&gt;MyFileSystem&lt;/Name&gt; &lt;Type&gt;FileSystem&lt;/Type&gt; &lt;FolderName&gt;D:\Tests\DCA-IN&lt;/FolderName&gt; &lt;/DestPort&gt; &lt;/Rule3&gt; &lt;/Rules&gt; &lt;UserExits&gt; &lt;/UserExits&gt; &lt;/Process&gt;&quot;&quot;&quot; my_dict = xmltodict.parse(xml) save_dict_to_csv('test.csv', next(iter(my_dict.values()))) # pass value for Process </code></pre>
python|xml|csv|xml-parsing|xml.etree
0
1,904,922
65,359,299
Failed to execute script docker-compose when I try docker-compose up
<p><strong>This is my first setup of Docker-compose. I have followed the steps shown here: <a href="https://docs.docker.com/compose/gettingstarted/" rel="nofollow noreferrer">https://docs.docker.com/compose/gettingstarted/</a>. Here are the errors I'm facing; I am not sure how to fix them.</strong></p> <pre><code>Traceback (most recent call last): File &quot;urllib3/connectionpool.py&quot;, line 677, in urlopen File &quot;urllib3/connectionpool.py&quot;, line 392, in _make_request File &quot;http/client.py&quot;, line 1252, in request File &quot;http/client.py&quot;, line 1298, in _send_request File &quot;http/client.py&quot;, line 1247, in endheaders File &quot;http/client.py&quot;, line 1026, in _send_output File &quot;http/client.py&quot;, line 966, in send File &quot;docker/transport/unixconn.py&quot;, line 43, in connect PermissionError: [Errno 13] Permission denied During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;requests/adapters.py&quot;, line 449, in send File &quot;urllib3/connectionpool.py&quot;, line 727, in urlopen File &quot;urllib3/util/retry.py&quot;, line 403, in increment File &quot;urllib3/packages/six.py&quot;, line 734, in reraise File &quot;urllib3/connectionpool.py&quot;, line 677, in urlopen File &quot;urllib3/connectionpool.py&quot;, line 392, in _make_request File &quot;http/client.py&quot;, line 1252, in request File &quot;http/client.py&quot;, line 1298, in _send_request File &quot;http/client.py&quot;, line 1247, in endheaders File &quot;http/client.py&quot;, line 1026, in _send_output File &quot;http/client.py&quot;, line 966, in send File &quot;docker/transport/unixconn.py&quot;, line 43, in connect urllib3.exceptions.ProtocolError: ('Connection aborted.', PermissionError(13, 'Permission denied')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;docker/api/client.py&quot;, line 205, in _retrieve_server_version File &quot;docker/api/daemon.py&quot;, line 181, in version File &quot;docker/utils/decorators.py&quot;, line 46, in inner File &quot;docker/api/client.py&quot;, line 228, in _get File &quot;requests/sessions.py&quot;, line 543, in get File &quot;requests/sessions.py&quot;, line 530, in request File &quot;requests/sessions.py&quot;, line 643, in send File &quot;requests/adapters.py&quot;, line 498, in send requests.exceptions.ConnectionError: ('Connection aborted.', PermissionError(13, 'Permission denied')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;bin/docker-compose&quot;, line 3, in &lt;module&gt; File &quot;compose/cli/main.py&quot;, line 67, in main File &quot;compose/cli/main.py&quot;, line 123, in perform_command File &quot;compose/cli/command.py&quot;, line 69, in project_from_options File &quot;compose/cli/command.py&quot;, line 132, in get_project File &quot;compose/cli/docker_client.py&quot;, line 43, in get_client File &quot;compose/cli/docker_client.py&quot;, line 170, in docker_client File &quot;docker/api/client.py&quot;, line 188, in __init__ File &quot;docker/api/client.py&quot;, line 213, in _retrieve_server_version docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied')) [13492] Failed to execute script docker-compose </code></pre> <p><em>If anyone could give me some suggestions on what to do, it would be quite helpful.</em></p>
<p>It looks like <code>docker-compose</code> does not have permission to connect to the Docker socket (<code>/var/run/docker.sock</code>). Restart the Docker daemon and run Compose with <code>sudo</code>:</p> <pre><code>sudo systemctl restart docker sudo docker-compose up -d </code></pre> <p>Alternatively, add your user to the <code>docker</code> group so that <code>sudo</code> is no longer needed (you will have to log out and back in for the group change to take effect).</p>
python|linux|docker|docker-compose
0
1,904,923
68,810,981
Hyperparameter tuning with multiprocessing pool
<p>I have this function for tuning the model's hyperparameters:</p> <pre><code>def modelHPTuning(X_train, y_train, hyperparams): classifier = ExtraTreesClassifier() grid = GSCV(classifier, hyperparams) grid_result = grid.fit(X_train, y_train) best_hyperparameters = grid_result.best_params_ return best_hyperparameters best_hyperparameters = modelHPTuning(X_train, y_train, XRT_Hyperparams) </code></pre> <p>For multiprocessing, I tried</p> <pre><code>pool = Pool(4) best_hyperparameters = pool.starmap(modelHPTuning, X_train, y_train, XRT_Hyperparams) pool.close() </code></pre> <p>and get this error:</p> <pre><code>TypeError: starmap() takes from 3 to 4 positional arguments but 5 were given </code></pre> <p>I then tried</p> <pre><code>zip(X_train, y_train, XRT_Hyperparams) </code></pre> <p>and get</p> <pre><code>AttributeError: 'str' object has no attribute 'keys' </code></pre> <p>How do I fix the arguments? The GSCV function just creates a grid space for GridSearchCV.</p>
<p>You didn't really say what types of parameters <code>modelHPTuning</code> was expecting. Its parameters <em>X_train</em>, <em>y_train</em>, <em>hyperparams</em> cannot be receiving the right values from the actual arguments being passed to <code>starmap</code>: <code>X_train</code>, <code>y_train</code>, <code>XRT_Hyperparams</code>.</p> <p>The arguments to <code>starmap</code> are a function and a <em>single iterable</em>. If that function takes, for example, two arguments (we'll call them <em>x</em> and <em>y</em>), then that single iterable will typically be a <code>list</code> or <code>tuple</code> containing one or more lists or tuples <em>each containing two elements</em> for the two arguments that the function takes. For example:</p> <pre class="lang-py prettyprint-override"><code>from multiprocessing.pool import Pool def f(x, y): print(x, y) if __name__ == '__main__': with Pool(2) as pool: pool.starmap(f, [[9, 90], [10, 100]]) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code>9 90 10 100 </code></pre> <p><strong>In other words, you must be passing to <code>starmap</code> a function and an iterable of argument tuples matching what the function expects.</strong></p> <p>If you have the <code>x</code> and <code>y</code> values to be passed to function <code>f</code> in individual iterables, then you would use builtin function <code>zip</code> to create the single iterable required by <code>starmap</code>:</p> <pre class="lang-py prettyprint-override"><code>from multiprocessing.pool import Pool def f(x, y): print(x, y) if __name__ == '__main__': x = [9, 10] y = [90, 100] with Pool(2) as pool: pool.starmap(f, zip(x, y)) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code>9 90 10 100 </code></pre> <p>If your function <code>f</code> has one or more arguments for which you need to always pass the same value (in the following example argument <em>caption</em>), the easiest way to do that is to use function <code>functools.partial</code>:</p> <pre class="lang-py prettyprint-override"><code>from multiprocessing.pool import Pool from functools import partial def f(x, y, caption): print(caption, x, y) if __name__ == '__main__': x = [9, 10] y = [90, 100] with Pool(2) as pool: pool.starmap(partial(f, caption='The values are:'), zip(x, y)) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code>The values are: 9 90 The values are: 10 100 </code></pre> <p>The other way to accomplish this if you are using <code>zip</code> to build your parameter list for <code>starmap</code> is to use <code>itertools.repeat</code>, although this can be more costly:</p> <pre class="lang-py prettyprint-override"><code>from multiprocessing.pool import Pool from itertools import repeat def f(x, y, caption): print(caption, x, y) if __name__ == '__main__': x = [9, 10] y = [90, 100] with Pool(2) as pool: pool.starmap(f, zip(x, y, repeat('The values are:'))) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code>The values are: 9 90 The values are: 10 100 </code></pre> <p><strong>With the above examples, you should now be able to figure out what you need to be using as values for <code>starmap</code> and arguments to <code>modelHPTuning</code>.</strong></p>
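<p>Applied to the question, each element of the iterable must be one <code>(X_train, y_train, hyperparams)</code> triple. Here is a minimal self-contained sketch; the tuning function and the dummy data are stand-ins for the real <code>GridSearchCV</code> call and datasets. (If you only have a single dataset and a single grid, note that a pool buys you little, since <code>GridSearchCV</code> can already parallelize internally via <code>n_jobs</code>.)</p> <pre class="lang-py prettyprint-override"><code>from multiprocessing.pool import Pool

def model_hp_tuning(x_train, y_train, hyperparams):
    # stand-in for the question's GridSearchCV call
    return {'n_rows': len(x_train), 'params': hyperparams}

if __name__ == '__main__':
    # one (X_train, y_train, hyperparams) triple per task; dummy data here
    tasks = [([[1], [2]], [0, 1], {'n_estimators': [10]}),
             ([[3], [4]], [1, 0], {'n_estimators': [50]})]
    with Pool(2) as pool:
        results = pool.starmap(model_hp_tuning, tasks)
    print(results)
</code></pre>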
python|multiprocessing
0
1,904,924
71,557,697
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 162: invalid start byte
<p>I am fetching data from s3 and I need to extract the text from a pdf file.</p> <pre><code>import boto3 from io import StringIO from pdfminer.converter import TextConverter from pdfminer.layout import LAParams from pdfminer.pdfdocument import PDFDocument from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter from pdfminer.pdfpage import PDFPage from pdfminer.pdfparser import PDFParser s3_client = boto3.client('s3') s3_bucket_name = 'XXXXXX' s3 = boto3.resource('s3', aws_access_key_id = 'XXXXXXXX', aws_secret_access_key='XXXXXXX') obj = s3.Object(s3_bucket_name, 'XXXXXX.pdf').get() data = obj['Body'].read() output_string = StringIO() with open(data, 'rb') as in_file: parser = PDFParser(in_file) doc = PDFDocument(parser) rsrcmgr = PDFResourceManager() device = TextConverter(rsrcmgr, output_string, laparams=LAParams()) interpreter = PDFPageInterpreter(rsrcmgr, device) for page in PDFPage.create_pages(doc): interpreter.process_page(page) print(output_string.getvalue()) </code></pre> <p>I'm getting this error:</p> <pre><code>with open(data, 'rb') as in_file: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 162: invalid start byte </code></pre>
<p>The <code>open()</code> function can only open a file from disk, and here you are passing it a bytes object instead of a path. Try replacing</p> <pre><code>with open(data, 'rb') as in_file: </code></pre> <p>with a <code>BytesIO</code> object, which accepts a bytes array and creates an in-memory stream out of it (add <code>import io</code> at the top):</p> <pre><code>with io.BytesIO(data) as in_file: </code></pre> <p>More info here <a href="https://docs.python.org/3/library/io.html#binary-i-o" rel="nofollow noreferrer">https://docs.python.org/3/library/io.html#binary-i-o</a></p>
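<p>To see why that works, here is a tiny self-contained sketch; the byte string stands in for the S3 payload:</p> <pre><code>import io

data = b'hello world'        # stands in for obj['Body'].read()
with io.BytesIO(data) as in_file:
    print(in_file.read(5))   # b'hello': BytesIO gives a file-like view of the bytes
</code></pre>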
python|amazon-s3|pdfminer
1
1,904,925
4,918,317
How to change the look of an image according to the season we like
<p>We are working with ImageMagick and using the Python programming language. We have been able to make some changes to the colours, but it does not work for every image, and at other times the entire image changes colour, not just the required portions. We only want to change the colours and don't want to add any effects. These are the 2 pieces of code we have used. First:</p> <pre><code>import Image import ImageEnhance img = Image.open( 'image.jpg') img = img.convert('RGBA') r, g, b, alpha = img.split() selection = r.point(lambda i: i &gt; 100 and 300) selection.save( "autmask.png") r.paste(g, None, selection) img = Image.merge( "RGBA", (r, b, g, alpha)) img.save( "newclr.png") img.show() </code></pre> <p><strong>and the second:</strong></p> <pre><code>import Image # split the image into individual bands im = Image.open('image.jpg') im.convert("RGB") source = im.split() R, G, B = 0, 1, 2 # select regions where red is less than 100 mask = source[B].point(lambda i: i &lt; 100 and 300) # process the green band out = source[G].point(lambda i: i * 2.5) # paste the processed band back, but only where red was &lt; 100 source[G].paste(out, None, mask) # build a new multiband image im = Image.merge(im.mode, source) im.save( "newimage.png") im.show() </code></pre>
<p>For the expression <code>lambda i: i &gt; 100 and 300</code>: it will return 300 if i is greater than 100; otherwise it will return <code>False</code>.</p> <p>For the expression <code>lambda i: i &lt; 100 and 300</code>: it will return 300 if i is less than 100; otherwise it will return <code>False</code>.</p> <p>Is this what you intended?</p> <p>From what I understand of your code requirement, you may want to replace the first one with <code>lambda i: i &gt; 100 and i or 300</code> and the second one with <code>lambda i: i &lt; 100 and i or 300</code></p>
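<p>For what it's worth, the same masks can also be written with conditional expressions, which many find clearer than the and/or idiom (in the original lambdas, <code>False</code> simply coerces to 0). A small sketch:</p> <pre><code># the same masks as conditional expressions
mask_gt = lambda i: 300 if i &gt; 100 else 0   # white where the band value exceeds 100
mask_lt = lambda i: 300 if i &lt; 100 else 0   # white where the band value is below 100
print(mask_gt(150), mask_gt(50))  # 300 0
</code></pre>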
python
0
1,904,926
5,431,613
The Pythonic way of validating a long chain of conditions in Python
<p>So I have a long chain of conditions that should be validated to be true. Instead of chaining a long <code>if</code> condition, I tried to be "innovative" and did it this way, which I reckon is more readable. But my question is, is this the optimal way of doing it?</p> <p>Or is there a pythonic way of doing it? PS: Please respond with an alternative instead of answering "No", thanks!</p> <p>Here's the code chunk:</p> <pre><code>def site_exists(site): """ returns the sitebean if it exists, else returns false """ vpadmin_service = _get_vpadmin_service(site) all_sites = VpAdminServiceUtil.getSites(vpadmin_service) for site_listing in all_sites: if site.getId(): #condition check try: assert site.getId() == site_listing.getId() assert site.getName() == site_listing.getName() assert site.getCustomer().getId() == site_listing.getCustomer().getId() except AssertionError: continue #pass conditions return site_listing #no id, so just check for name and customer else: #condition check try: assert site.getName() == site_listing.getName() assert site.getCustomer().getId() == site_listing.getCustomer().getId() except AssertionError: continue #pass conditions site.setId(site_listing.getId()) return site_listing return False </code></pre>
<p>A simpler approach is to build a tuple of the conditions and compare the tuples:</p> <pre><code>def site_info(s): return s.getId(), s.getName(), s.getCustomer().getId() if site_info(site) == site_info(site_listing): return site_listing else: continue </code></pre> <p>If you have a lot of conditions, or the conditions are expensive, you can instead create a generator for the conditions, and compare with <code>any</code> or <code>all</code>:</p> <pre><code>import itertools def iter_site_info(s): yield s.getId() yield s.getName() yield s.getCustomer().getId() if all(x == y for (x, y) in itertools.izip(iter_site_info(site), iter_site_info(site_listing))): return site_listing else: continue </code></pre> <p>I'm not sure whether Jython has <code>any</code> and <code>all</code>, but they're trivial functions to write.</p> <p>EDIT - <code>any</code> and <code>all</code> appeared in Python 2.5, so Jython should have them.</p>
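<p>Here is a runnable toy version of the tuple idea, with a minimal stand-in class in place of the real site bean:</p> <pre><code>class Site:
    def __init__(self, id_, name):
        self._id, self._name = id_, name
    def getId(self):
        return self._id
    def getName(self):
        return self._name

def site_info(s):
    # one tuple gathers every condition; tuples compare element-wise
    return s.getId(), s.getName()

print(site_info(Site(1, 'x')) == site_info(Site(1, 'x')))  # True
print(site_info(Site(1, 'x')) == site_info(Site(2, 'x')))  # False
</code></pre>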
python
10
1,904,927
62,746,180
ImportError: cannot import name 'hf_bucket_url' in HuggingFace Transformers
<p>So I installed the latest version of transformers on Google Colab</p> <pre><code>!pip install transformers </code></pre> <p>When trying to invoke the conversion file using</p> <pre><code>!python /usr/local/lib/python3.6/dist-packages/transformers/convert_pytorch_checkpoint_to_tf2.py .py --help </code></pre> <p>Or trying to use</p> <pre><code>from transformers.file_utils import hf_bucket_url. // works from transformers.convert_pytorch_checkpoint_to_tf2 import *. // fails convert_pytorch_checkpoint_to_tf(&quot;gpt2&quot;, pytorch_file, config_file, tf_file). </code></pre> <p>I get this error</p> <pre><code> ImportError Traceback (most recent call last) &lt;ipython-input-3-dadaf83ecea0&gt; in &lt;module&gt;() 1 from transformers.file_utils import hf_bucket_url ----&gt; 2 from transformers.convert_pytorch_checkpoint_to_tf2 import * 3 4 convert_pytorch_checkpoint_to_tf(&quot;gpt2&quot;, pytorch_file, config_file, tf_file) /usr/local/lib/python3.6/dist-packages/transformers/convert_pytorch_checkpoint_to_tf2.py in &lt;module&gt;() 20 import os 21 ---&gt; 22 from transformers import ( 23 ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, 24 BERT_PRETRAINED_CONFIG_ARCHIVE_MAP, ImportError: cannot import name 'hf_bucket_url' </code></pre> <p>What's going on?</p>
<p>It turns out to be a bug. <a href="https://github.com/huggingface/transformers/pull/5531" rel="nofollow noreferrer">This PR solves the issue</a> by importing the function <code>hf_bucket_url</code> properly.</p>
tensorflow|pytorch|huggingface-transformers
1
1,904,928
60,347,532
Incorrect integer value: '%s' for column `materialdatabase`.`tensilesummary`.`batchnumber` at row 1
<pre><code>batchnumbers = [1,2,3,4,5,6,7,8,9,10] def update_batchnum(): try: for num in range(len(batchnumbers)): query = ("INSERT INTO tensilesummary(batchnumber) VALUES ('%s');") cursor.execute(query,batchnumbers[num]) mariadb_connection.commit() print("Batchnumber successfullt inserted into tensilesummary table") except mysql.connector.Error as error: print("Failed using updatebatchnum to insert into tensilesummary table:{}".format(error)) update_batchnum() </code></pre> <p>Returned error: Failed using updatebatchnum to insert into tensilesummary table:1366 (22007): Incorrect integer value: '%s' for column <code>materialdatabase</code>.<code>tensilesummary</code>.<code>batchnumber</code> at row 1. </p> <p>I tried to change sql_mode = "" in my.ini but it doesn't work. </p>
<ol> <li>When using placeholders, don't put them in quotes; it's also unnecessary to append a semicolon.</li> </ol> <p><code>query = "INSERT INTO tensilesummary (batchnumber) VALUES (%s)"</code></p> <ol start="2"> <li>According to DBAPI 2.0 (PEP 249), the second parameter of <code>cursor.execute</code> must be a tuple, so it should be <code>(batchnumbers[num],)</code></li> </ol>
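<p>A small sketch of the tuple point; the commented line shows how, with a real cursor, the whole loop can collapse into a single <code>executemany</code> call (cursor and connection as in the question):</p> <pre><code>batchnumbers = [1, 2, 3]
params = [(n,) for n in batchnumbers]   # one-element tuples: note the comma
print(params)                           # [(1,), (2,), (3,)]
# with a real cursor:
# cursor.executemany('INSERT INTO tensilesummary (batchnumber) VALUES (%s)', params)
# mariadb_connection.commit()
</code></pre>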
python|sql|sql-server|mariadb
0
1,904,929
64,248,629
Fourier transform Sympy does nothing?
<p>Is SymPy able to compute Fourier transforms quickly, or do I need other software? I have tried many things, but I am pretty inexperienced in Python and SymPy. Is there a quicker way to do these Fourier transforms with Python?</p> <ol> <li>(a + j\omega) / (4a^2 + (a + j\omega)^2). I tried it as follows:</li> </ol> <pre><code> from sympy import poly, pi, I, fourier_transform from sympy.abc import a, f, t n = poly(a**2 + t**2) k = t / n fourier_transform(k, t, 2*pi*f) </code></pre> <p>And I get this message: <a href="https://i.stack.imgur.com/UzmeY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UzmeY.png" alt="enter image description here" /></a></p> <p>And another example from the Python 3.9 shell: <a href="https://i.stack.imgur.com/tGYO6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tGYO6.png" alt="enter image description here" /></a> So what about the actual result?</p>
<h2>1</h2> <p>Currently, if the Fourier transform returns a <code>Piecewise</code> expression, it throws a fuss and just returns an unevaluated expression instead. So SymPy can do the integral, it just isn't in a very nice form. Here is the result after manual integration.</p> <p><a href="https://i.stack.imgur.com/BIzrv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BIzrv.png" alt="enter image description here" /></a></p> <p>You can use this function on your own to attain this result if you want this kind of answer.</p> <pre class="lang-py prettyprint-override"><code>def my_fourier_transform(f, t, s): return integrate(f*exp(-2*S.Pi*I*t*s), (t, -oo, oo)).simplify() </code></pre> <p>One can see that the long condition has <code>arg(a)</code> and <code>arg(s)</code>, which should be <code>sign(a)</code> and <code>sign(s)</code> respectively (assuming they are real-valued). After doing a similar thing in Wolfram Alpha, I think this is a correct result. It is just not a very nice one.</p> <p>But I found a trick that helps. If SymPy struggles to simplify something, usually giving it stronger assumptions is the way to go if your main goal is just to get an answer. A lot of the time, the answer is still correct even if the assumptions don't hold. So we make the variables positive.</p> <p>Note that SymPy does the transform differently to Wolfram Alpha as noted in the comment below. This was why my third argument is different.</p> <pre class="lang-py prettyprint-override"><code>from sympy import * a, s, t = symbols('a s t', positive=True) f = t / (a**2 + t**2) # Wolfram computes integral(f(t)*exp(I*t*w)) # SymPy computes integral(f(t)*exp(-2*pi*I*s*t)) # So w = -2*pi*s print(fourier_transform(f, t, -s)) # I*pi*exp(-2*pi*a*s) </code></pre> <h2>2</h2> <p>Correct me if I'm wrong, but according to Wikipedia:</p> <p><a href="https://i.stack.imgur.com/L76ym.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L76ym.png" alt="enter image description here" /></a></p> <p>So the Fourier transform of the Dirac Delta is 1.</p> <h2>3</h2> <p>Using the <code>my_fourier_transform</code> defined above, we get</p> <p><a href="https://i.stack.imgur.com/fpqCC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fpqCC.png" alt="enter image description here" /></a></p> <p>The first condition is always false. This is probably a failure on SymPy's part, since this integral diverges. I assume it's because it can't decide whether it is <code>oo</code>, <code>-oo</code> or <code>zoo</code>.</p>
python|python-3.x|fft|sympy
1
1,904,930
64,215,722
How to get a list of people with most common traits for each person in Python?
<p>I have a dataframe which looks like the following (1 represents having a trait and 0 represents not having it):</p> <pre><code>Person Trait_1 Trait_2 Trait_3 Trait_4 A 1 1 1 1 B 0 1 1 0 C 0 1 0 0 D 1 1 0 1 E 0 0 0 1 </code></pre> <p>I want a function which returns, for each person, the top 10 people with the most number of common traits.</p> <p>So for person A the output can be:</p> <pre><code>D (3 traits), B (2 traits), C(1 trait), E(1 trait) </code></pre> <p>I thought that a matrix encoding how many traits each person has in common with the others, like the one below, would be a good start:</p> <pre><code> A B C D E A 4 2 1 3 1 B 2 4 1 1 0 C 1 1 4 1 0 D 3 1 1 4 1 E 1 0 0 1 4 </code></pre> <p>But I am not sure how to achieve this or what this is called.</p>
<ol> <li>Create a DataFrame</li> </ol> <pre><code>import pandas as pd df = pd.DataFrame(data=[[1, 1, 1, 1], [0, 1, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1], [0, 0, 0, 1]], index=['A', 'B', 'C', 'D', 'E',], columns=['Trait_1', 'Trait_2', 'Trait_3', 'Trait_4']) </code></pre> <ol start="2"> <li>Using matrix multiplication create the matrix of common traits (the one you described)</li> </ol> <pre><code>common_traits = df @ df.T </code></pre> <ol start="3"> <li>For each person print the string you want (top10 people with most traits in common)</li> </ol> <pre><code>n = 10 for index, row in common_traits.iterrows(): top10 = row.drop(index).nlargest(n) top10 = top10[top10 &gt; 0] string = ', '.join(top10.index + top10.map(lambda x: f' ({x} trait{&quot;s&quot; if x != 1 else &quot;&quot;})')) print(f'{index}: {string}') </code></pre> <h3>Output</h3> <pre><code>A: D (3 traits), B (2 traits), C (1 trait), E (1 trait) B: A (2 traits), C (1 trait), D (1 trait) C: A (1 trait), B (1 trait), D (1 trait) D: A (3 traits), B (1 trait), C (1 trait), E (1 trait) E: A (1 trait), D (1 trait) </code></pre>
python
2
1,904,931
70,607,437
Reverse dictionary attributes set to ids?
<p>I have the following translation dictionary:</p> <pre><code>{0: {'a', 'b', 'c'}, 1: {'a', 'b', 'c', 'd'}, 2: {'k', 'b', 'e', 'a', 'n'}} </code></pre> <p>And I want to 'reverse' it to be attributes to keys (keys here are a form of id). Given a set of attributes, give me the relevant id (key).</p> <p>For example, given <code>{'a', 'b', 'c'}</code> return <code>0</code>.</p> <p>What is the best practice to do this? The attributes can come in different order that's why I am using sets. Should I insert it into a dataframe (translation table)? Or there is another solution?</p>
<p>You can use a <code>Series</code> to achieve this in <code>pandas</code>; comparing its values against a set gives a boolean mask you can index with:</p> <pre><code>import pandas as pd x = {0: {'a', 'b', 'c'}, 1: {'a', 'b', 'c', 'd'}, 2: {'k', 'b', 'e', 'a', 'n'}} lookup = pd.Series(x) print(lookup[lookup.values == {'a', 'b', 'c'}]) # 0 {c, b, a} # dtype: object </code></pre>
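<p>If pandas isn't a requirement, a plain reverse dictionary keyed on <code>frozenset</code> (sets themselves aren't hashable) is arguably more direct. A minimal sketch:</p> <pre><code>x = {0: {'a', 'b', 'c'}, 1: {'a', 'b', 'c', 'd'}, 2: {'k', 'b', 'e', 'a', 'n'}}
reverse = {frozenset(v): k for k, v in x.items()}   # frozenset is hashable
print(reverse[frozenset({'c', 'a', 'b'})])          # 0, order-insensitive
</code></pre>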
python|pandas|dictionary|set
1
1,904,932
63,715,045
How to catch the stop button in PyCharm on Windows?
<p>I want my program to do something when someone terminates the script by clicking the stop button in PyCharm. I tried</p> <pre><code>from sys import exit from signal import signal, SIGINT def handler(signal_received, frame): # Handle any cleanup here print('SIGINT or CTRL-C detected. Exiting gracefully') exit(0) if __name__ == '__main__': signal(SIGINT, handler) print('Running. Press CTRL-C to exit.') while True: # Do nothing and hog CPU forever until SIGINT received. pass </code></pre> <p>from <a href="https://www.devdungeon.com/content/python-catch-sigint-ctrl-c" rel="nofollow noreferrer">https://www.devdungeon.com/content/python-catch-sigint-ctrl-c</a>.</p> <p>I tried this on both Mac and Windows. On the Mac, PyCharm behaved as expected: when I click the stop button it catches the SIGINT. But on Windows, doing exactly the same thing, it just immediately returns <code>Process finished with exit code -1</code>. Is there something I can change to make Windows behave like the Mac?</p> <p>Any help is appreciated!</p>
<p>I don't think it's a strange question at all. On Unix systems, PyCharm sends a SIGTERM, waits one second, then sends a SIGKILL. On Windows, it does something else to end the process, something that seems untrappable. Even during development you need a way to cleanly shut down a process that uses native resources. In my case, there is a CAN controller that, if not shut down properly, can't ever be opened again. My workaround was to build a simple UI with a stop button that shuts the process down cleanly. The problem is that, out of habit from using PyCharm, GoLand, and IntelliJ, I just hit the red square button. Every time I do that I have to reboot the development system. So I think it is clearly also a development-time question.</p>
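<p>For what it's worth, here is one minimal sketch of such a stop button, using tkinter and a flag that the worker loop checks; the sleep stands in for the real work and the print for the real resource cleanup:</p> <pre><code>import threading
import time
import tkinter as tk

stop = threading.Event()

def worker():
    while not stop.is_set():
        time.sleep(0.1)          # stand-in for the real work
    print('resources released')  # e.g. shut the CAN controller down here

t = threading.Thread(target=worker)
t.start()
root = tk.Tk()
tk.Button(root, text='Stop', command=lambda: (stop.set(), root.destroy())).pack()
root.mainloop()
stop.set()   # also covers closing the window with the titlebar X
t.join()     # wait until the cleanup has actually run
</code></pre>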
python|pycharm|signals|keyboardinterrupt
4
1,904,933
63,338,551
Multiple condition in 'if' loop to compare values of column
<p>I have 2 columns at index 12 and 13 of dataframe <code>df_2</code>. I want to compare row 'i+1' of both columns with row 'i' of the same columns. <strong>Only</strong> if both rows don't match do I want to increment the value I assign. But the code I have written is failing somewhere. What is the problem?</p> <pre><code>h_count = 0 current = &quot;G&quot; status = [] for i in range(len(df_2)): if (i &lt; len(df_2)-1) and ((df_2.iloc[i+1, 12] and df_2.iloc[i+1, 13]) == (df_2.iloc[i, 12] and df_2.iloc[i, 13])): status.append(f&quot;G{h_count}&quot;) else: status.append(f&quot;G{h_count}&quot;) h_count += 1 </code></pre> <p><a href="https://i.stack.imgur.com/pDL49.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pDL49.png" alt="enter image description here" /></a></p> <p>I want it to be G4 here because one of the values doesn't match above. I think it is comparing only the second column.</p> <p>PS: I know I can modify the loop by comparing one column first and then comparing the second column inside that loop, but how can I do it in one condition?</p>
<p>I think your issue is in how the conditions are declared. What you should do looks like the following example:</p> <pre><code>if x == y and z == y and b == y: do_something() </code></pre> <p>What you did is:</p> <pre><code>if x and z and b == y: do_something() # here Python only truth-tests x and z; the single real comparison is b == y, # so the condition is not restricted to all three comparisons you intended </code></pre>
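<p>A small runnable demo of why the combined form misfires: <code>x</code> is only truth-tested, and <code>x and z</code> simply evaluates to <code>z</code> whenever <code>x</code> is truthy.</p> <pre><code>x, z, y = 5, 5, 5
print(x == y and z == y)   # True: both comparisons are explicit
x = 3
print(x == y and z == y)   # False, as intended
print((x and z) == y)      # True! x is only truth-tested; `x and z` yields z (5)
</code></pre>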
python|pandas|dataframe|if-statement
1
1,904,934
69,719,930
Matplotlib fails with `TypeError` when plotting with mpmath
<p>I have 2D data with very small values (of order <code>e-500</code>), so I cannot use numpy; I would like to draw it as a pcolormesh. For instance,</p> <pre><code>import mpmath as mp import matplotlib.pyplot as plt import numpy as np from mpmath import e as e from mpmath import mpf, mpc, mp mp.dps = 1000 y, x = np.meshgrid(np.linspace(-5, 5, 1000), np.linspace(-5, 5, 1000)) z = e ** (-x**2 + y) z = z[:-1, :-1] z_min, z_max = -np.abs(z).max(), np.abs(z).max() </code></pre> <p>This runs fine, but when I want to do a <code>pcolormesh</code> things go south:</p> <pre><code>fig, ax = plt.subplots() c = ax.pcolormesh(x, y, z, cmap='RdBu', vmin=z_min, vmax=z_max) ax.set_title('Titles are overall a positive feature') ax.axis([x.min(), x.max(), y.min(), y.max()]) fig.colorbar(c, ax=ax) plt.show() </code></pre> <p>gives the error <code>TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''</code></p> <p>Why does this happen? Do you have any idea how to solve this? Maybe plotting without mpmath can be helpful.</p> <p><strong>EDIT:</strong> Full traceback</p> <pre><code>TypeError Traceback (most recent call last) ~/.local/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj) 339 pass 340 else: --&gt; 341 return printer(obj) 342 # Finally look for special method names 343 method = get_real_method(obj, self.print_method) ~/.local/lib/python3.8/site-packages/IPython/core/pylabtools.py in &lt;lambda&gt;(fig) 246 247 if 'png' in formats: --&gt; 248 png_formatter.for_type(Figure, lambda fig: print_figure(fig, 'png', **kwargs)) 249 if 'retina' in formats or 'png2x' in formats: 250 png_formatter.for_type(Figure, lambda fig: retina_figure(fig, **kwargs)) ~/.local/lib/python3.8/site-packages/IPython/core/pylabtools.py in print_figure(fig, fmt, bbox_inches, **kwargs) 130 FigureCanvasBase(fig) 131 --&gt; 132 fig.canvas.print_figure(bytes_io, **kw) 133 data = bytes_io.getvalue() 134 if fmt == 'svg': ~/.local/lib/python3.8/site-packages/matplotlib/backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, **kwargs) 2077 print_method, dpi=dpi, orientation=orientation), 2078 draw_disabled=True) -&gt; 2079 self.figure.draw(renderer) 2080 bbox_artists = kwargs.pop(&quot;bbox_extra_artists&quot;, None) 2081 bbox_inches = self.figure.get_tightbbox(renderer, ~/.local/lib/python3.8/site-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs) 36 renderer.start_filter() 37 ---&gt; 38 return draw(artist, renderer, *args, **kwargs) 39 finally: 40 if artist.get_agg_filter() is not None: ~/.local/lib/python3.8/site-packages/matplotlib/figure.py in draw(self, renderer) 1733 1734 self.patch.draw(renderer) -&gt; 1735 mimage._draw_list_compositing_images( 1736 renderer, self, artists, self.suppressComposite) 1737 ~/.local/lib/python3.8/site-packages/matplotlib/image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite) 135 if not_composite or not has_images: 136 for a in artists: --&gt; 137 a.draw(renderer) 138 else: 139 # Composite any adjacent images together ~/.local/lib/python3.8/site-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs) 36 renderer.start_filter() 37 ---&gt; 38 return draw(artist, renderer, *args, **kwargs) 39 finally: 40 if artist.get_agg_filter() is not None: ~/.local/lib/python3.8/site-packages/matplotlib/axes/_base.py in draw(self, renderer,
inframe) 2628 renderer.stop_rasterizing() 2629 -&gt; 2630 mimage._draw_list_compositing_images(renderer, self, artists) 2631 2632 renderer.close_group('axes') ~/.local/lib/python3.8/site-packages/matplotlib/image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite) 135 if not_composite or not has_images: 136 for a in artists: --&gt; 137 a.draw(renderer) 138 else: 139 # Composite any adjacent images together ~/.local/lib/python3.8/site-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs) 36 renderer.start_filter() 37 ---&gt; 38 return draw(artist, renderer, *args, **kwargs) 39 finally: 40 if artist.get_agg_filter() is not None: ~/.local/lib/python3.8/site-packages/matplotlib/collections.py in draw(self, renderer) 2045 offsets = np.column_stack([xs, ys]) 2046 -&gt; 2047 self.update_scalarmappable() 2048 2049 if not transform.is_affine: ~/.local/lib/python3.8/site-packages/matplotlib/collections.py in update_scalarmappable(self) 790 return 791 if self._is_filled: --&gt; 792 self._facecolors = self.to_rgba(self._A, self._alpha) 793 elif self._is_stroked: 794 self._edgecolors = self.to_rgba(self._A, self._alpha) ~/.local/lib/python3.8/site-packages/matplotlib/cm.py in to_rgba(self, x, alpha, bytes, norm) 243 if norm: 244 x = self.norm(x) --&gt; 245 rgba = self.cmap(x, alpha=alpha, bytes=bytes) 246 return rgba 247 ~/.local/lib/python3.8/site-packages/matplotlib/colors.py in __call__(self, X, alpha, bytes) 559 if np.ma.is_masked(X): 560 mask_bad = X.mask --&gt; 561 elif np.any(np.isnan(X)): 562 # mask nan's 563 mask_bad = np.isnan(X) TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' &lt;Figure size 432x288 with 2 Axes&gt; </code></pre>
<p>With your code, edited to run in reasonable testing time:</p> <pre><code> ...: mp.dps = 10 ...: y, x = np.meshgrid(np.linspace(-5, 5, 10), np.linspace(-5, 5, 10)) ...: z = e ** (-x**2 + y) ...: z = z[:-1, :-1] ...: z_min, z_max = -np.abs(z).max(), np.abs(z).max() ...: In [11]: z Out[11]: array([[mpf('9.357622969874e-14'), mpf('2.842594865759e-13'), mpf('8.635040754285e-13'), mpf('2.623093769921e-12'), mpf('7.968255300282e-12'), mpf('2.420542233701e-11'), mpf('7.352958062098e-11'), mpf('2.233631436379e-10'), mpf('6.785173193566e-10')], ...., dtype=object) In [12]: z.dtype Out[12]: dtype('O') In [13]: np.isnan(z) Traceback (most recent call last): File &quot;&lt;ipython-input-13-72670087bbfa&gt;&quot;, line 1, in &lt;module&gt; np.isnan(z) TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' </code></pre> <p>By using <code>mpmath</code> you have created a object dtype array. While that may have greater precision, it also runs much slower, and in some cases does not run at all.</p> <p><code>isnan</code> expects a numeric dtype array:</p> <pre><code>In [16]: np.any(np.isnan(z.astype(float))) Out[16]: False </code></pre> <p>The traceback indicates that <code>plot</code> uses the <code>isnan</code> to &quot;blank out&quot; <code>nan</code> values in the plot.</p> <p>This works:</p> <pre><code>plt.pcolormesh(x, y, z.astype('float128'), cmap='RdBu', vmin=z_min, vmax=z_max) </code></pre>
python|numpy|matplotlib|plot|mpmath
0
1,904,935
69,789,130
shutil.copy2() file size doesn't match
<p>Setup: Synology with Docker running Home-Assistant with HACS integration and pyscript.</p> <p>I have made the following two functions:</p> <pre><code>@service def getListOfFiles(dirName): import os # create a list of file and sub directories # names in the given directory listOfFile = os.listdir(dirName) allFiles = list() # Iterate over all the entries for entry in listOfFile: # Create full path fullPath = os.path.join(dirName, entry) # If entry is a directory then get the list of files in this directory if os.path.isdir(fullPath): allFiles = allFiles + getListOfFiles(fullPath) else: if fullPath.endswith('jpg'): allFiles.append(fullPath) elif fullPath.endswith('jpeg'): allFiles.append(fullPath) elif fullPath.endswith('png'): allFiles.append(fullPath) return allFiles @service def slideshow(): import random import os import shutil path = '/Slideshow' listOfFiles = getListOfFiles(path) random_image = random.choice([x for x in listOfFiles]) image_path = '{}'.format(random_image) shutil.copy2(image_path, '/config/www/slide.jpg') </code></pre> <p>Now everything works, BUT the destination file (slide.jpg) is never the correct size. It varies between 10kB - 1000kB, while the original image is often between 7-10 MB.</p> <p>Any suggestions?</p> <p>Running the same code (with different source and destination, of course) on Mac works perfectly.</p> <p>Same results using .copyfile and .copy</p>
<p>So after a lot of digging around, the issue was found: Synology creates an <code>/@eaDir/</code> directory alongside every file, containing S, M, and L thumbnails, and this turned out to be the root cause. Sometimes one of these thumbnails, rather than the intended image, was the file being copied over, hence the smaller size.</p>
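<p>One way to avoid picking those thumbnails up in the first place is to prune the folder while walking the share, assuming it is always named <code>@eaDir</code>. A sketch:</p> <pre><code>import os

def list_images(dir_name):
    images = []
    for root, dirs, files in os.walk(dir_name):
        dirs[:] = [d for d in dirs if d != '@eaDir']   # prune thumbnail dirs in place
        for name in files:
            if name.lower().endswith(('.jpg', '.jpeg', '.png')):
                images.append(os.path.join(root, name))
    return images
</code></pre>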
python|docker|synology|home-assistant
0
1,904,936
18,141,698
How can I set a code for users when they enter a valid URL or not with Python/Flask?
<p>I would like to know what kind of approach I need to adopt with Python or Flask to do the following task:</p> <ul> <li>check to see if the URL is valid</li> <li>if valid, return a list of all links on that page and its sub-pages</li> </ul> <p>My editor is Sublime and I run it under Windows PowerShell.</p> <p>Now my code shows this:</p> <p><img src="https://i.stack.imgur.com/Nn1AY.png" alt="enter image description here"></p> <p>So when you input a search it goes to a new page and shows the result (for example: ddddd)</p> <p><img src="https://i.stack.imgur.com/01r4V.png" alt="enter image description here"></p> <p><strong>BUT I want to check if the URL is valid or not, and if valid return a list of all links on that page and its sub-pages</strong>, like this:</p> <p><img src="https://i.stack.imgur.com/p15EY.png" alt="enter image description here"></p> <p>Any idea for a newbie in the world of programming? (Not that new now, but I still have a lot to learn.)</p> <p>Thanks for the help.</p> <p>Here is my code, which produces this result (it works):</p> <p>So: a project folder with my .py set up with Flask inside, and a templates folder with the .html.</p> <hr> <h2>Python file</h2> <pre><code># -*- coding: utf-8 -*- from flask import Flask, render_template, request import re app = Flask (__name__) @app.route("/") def index(): return render_template('index.html') @app.route('/search', methods=['POST', 'GET']) def search(): error = True if request.method == 'POST': return request.form['urlsearch'] else: return request.args.get('urlsearch') if __name__ == "__main__": app.run() </code></pre> <hr> <h2>HTML file</h2> <pre><code>&lt;!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"&gt; &lt;html lang="en"&gt; &lt;head&gt; &lt;title&gt;URL TEST&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;ul id="navigation"&gt; {% for item in navigation %} &lt;li&gt;&lt;a href="{{ item.href }}"&gt;{{ item.caption }}&lt;/a&gt;&lt;/li&gt; {% endfor %} &lt;/ul&gt; &lt;h1 style="color:orange;"&gt;You can put your URL here :&lt;/h1&gt; {{ a_variable }} &lt;form method="get" action="/search"&gt; &lt;p&gt;Please Input an URL below : &lt;/p&gt; &lt;input type="text" name="urlsearch" /&gt; &lt;input type="submit" value="Search" /&gt; &lt;/form&gt; &lt;/body&gt; &lt;/html&gt; </code></pre>
<p>You can use <a href="http://wwwsearch.sourceforge.net/mechanize/" rel="nofollow">mechanize</a>:</p> <pre><code>from mechanize import Browser br = Browser() r = br.open("http://www.example.com/") if r.code == 200: for link in br.links(): print link else: print "Error loading page" </code></pre> <p>Or <a href="http://docs.python.org/2/library/urllib2.html" rel="nofollow">urllib2</a> and <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">BeautifulSoup</a>:</p> <pre><code>from bs4 import BeautifulSoup import urllib2 html_page = urllib2.urlopen("http://www.example.com") if html_page.getcode() == 200: soup = BeautifulSoup(html_page) for link in soup.findAll('a'): print link.get('href') else: print "Error loading page" </code></pre> <hr> <p>I haven't worked much with Flask before, but try this:</p> <p>As I understand it, <code>urlsearch</code> is the URL you are getting from the form, so add a check for it:</p> <pre><code>@app.route('/search', methods=['POST', 'GET']) def search(): error = True if request.method == 'POST': return request.form['urlsearch'] else: br = Browser() r = br.open(request.args.get('urlsearch')) if r.code == 200: return '\n'.join(link.url for link in br.links()) else: return "Error loading page" </code></pre>
python|html|flask
5
1,904,937
17,875,775
What is Pythonic way to test size of a generator, then display it?
<p>Yesterday I implemented a small Python script that checks the difference between two files (using difflib), printing the result if there is any and exiting with code 0 otherwise.</p> <p>The method in question, <code>difflib.unified_diff()</code>, returns a generator over the diffs found. How can I test this generator to see if it needs to be printed? I tried using <code>len()</code> and <code>sum()</code> to get the size of this generator, but then it is impossible to print it afterwards.</p> <p>Sorry to ask such a silly question, but I really don't see what the good practice is on this topic.</p> <p>So far this is what I am doing:</p> <pre><code>import difflib import sys fromlines = open("A.csv").readlines() tolines = open("B.csv").readlines() diff = difflib.unified_diff(fromlines, tolines, n=0) if (len(list(diff))): print("Differences found!") # Recomputing the generator again: how stupid is that! diff = difflib.unified_diff(fromlines, tolines, n=0) sys.stdout.writelines(diff) else: print("OK!") </code></pre>
<p>You're already converting your generator to a list, so you don't need to rebuild it.</p> <pre><code>diff = list(difflib.unified_diff(fromlines, tolines, n=0)) if diff: ... sys.stdout.writelines(diff) else: ... </code></pre> <p>You don't even need to convert the generator to a list if you don't want to; instead, use a simple flag:</p> <pre><code>diff = difflib.unified_diff(fromlines, tolines, n=0) f = False for line in diff: if not f: print("Differences found!") f = True sys.stdout.write(line) if not f: print("OK!") </code></pre>
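<p>A third option, if you want to avoid both the list and the flag, is to peek at the first element and chain it back in front of the rest. A sketch, with a one-element list standing in for the <code>unified_diff</code> generator:</p> <pre><code>import itertools

diff = iter(['--- A.csv\n'])   # stands in for the unified_diff generator
first = next(diff, None)       # peek: None means the generator was empty
if first is None:
    print('OK!')
else:
    print('Differences found!')
    for line in itertools.chain([first], diff):
        print(line, end='')
</code></pre>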
python|python-3.x
1
1,904,938
61,005,805
Need help in Code for Writing View Functions in Flask - Python Web Framework
<p>We were given the task below and tried to write the code in the best way we could, but we are not able to pass the test, as it seems there is some issue in the code. We need help correcting it; if someone can help us here, it will be great for us.</p> <pre><code>from flask import Flask ## Define a flask application name 'app' below app = Flask(__name__) ## Define below a view function 'hello', which displays the message ## "Hello World!!! I've run my first Flask application." ## The view function 'hello' should be mapped to URL '/' . @app.route("/") def hello(): return "Hello World!!! I've run my first Flask application." ## Define below a view function 'hello_user', which takes 'username' as an argument ## and returns the html string containing a 'h2' header "Hello &lt;username&gt;" ## After displaying the hello message, the html string must also display one quote, ## randomly chosen from the provided list `quotes` # Before displaying the quote, the html string must contain the 'h3' header 'Quote of the Day for You' ## The view function 'hello_user' should be mapped to URL '/hello/&lt;username&gt;/' . ## Use the below list 'quotes' in 'hello_user' function ## quotes = [ ## "Only two things are infinite, the universe and human stupidity, and I am not sure about the former.", ## "Give me six hours to chop down a tree and I will spend the first four sharpening the axe.", ## "Tell me and I forget. Teach me and I remember. Involve me and I learn.", ## "Listen to many, speak to a few.", ## "Only when the tide goes out do you discover who has been swimming naked." ## ] @app.route("/hello/&lt;username&gt;/") def hello_user(username): return "Hello " + username + "Quote of the Day for You" ## Define below a view function 'display_quotes', which returns an html string ## that displays all the quotes present in 'quotes' list in a unordered list. ## Before displaying 'quotes' as an unordered list, the html string must also include a 'h1' header "Famous Quotes". ## The view function 'display_quotes' should be mapped to URL '/quotes/' . ## Use the below list 'quotes' in 'display_quotes' function ## quotes = [ ## "Only two things are infinite, the universe and human stupidity, and I am not sure about the former.", ## "Give me six hours to chop down a tree and I will spend the first four sharpening the axe.", ## "Tell me and I forget. Teach me and I remember. Involve me and I learn.", ## "Listen to many, speak to a few.", ## "Only when the tide goes out do you discover who has been swimming naked." ## ] @app.route("/quotes/") def display_quotes(): return render_template( 'test.html',name=display_quotes) quotes = [ "Only two things are infinite, the universe and human stupidity, and I am not sure about the former.", "Give me six hours to chop down a tree and I will spend the first four sharpening the axe.", "Tell me and I forget. Teach me and I remember. Involve me and I learn.", "Listen to many, speak to a few.", "Only when the tide goes out do you discover who has been swimming naked."] randomNumber = randint(0,len(quotes)-1) quote = quotes[randomNumber] ## Write the required code below which runs flask application 'app' defined above ## on host 0.0.0.0 and port 8000 if __name__ == '__main__': app.run(host='0.0.0.0', port=8000) </code></pre> <p>Please let us know where the mistake is and help us correct the code so it passes the required test.</p>
<pre><code>from flask import Flask import random </code></pre> <p>Define a flask application named 'app' below</p> <pre><code>app = Flask(__name__) </code></pre> <p>Define below a view function 'hello', which displays the message &quot;Hello World!!! I've run my first Flask application.&quot; The view function 'hello' should be mapped to URL '/' .</p> <pre><code>@app.route(&quot;/&quot;) def hello(): return &quot;Hello World!!! I've run my first Flask application.&quot; </code></pre> <p>Define below a view function 'hello_user', which takes 'username' as an argument and returns the html string containing a 'h2' header &quot;Hello &lt;username&gt;&quot;. After displaying the hello message, the html string must also display one quote, randomly chosen from the provided list <code>quotes</code>. Before displaying the quote, the html string must contain the 'h3' header 'Quote of the Day for You'. The view function 'hello_user' should be mapped to URL '/hello/&lt;username&gt;/' . Use the below list 'quotes' in 'hello_user' function quotes = [ &quot;Only two things are infinite, the universe and human stupidity, and I am not sure about the former.&quot;, &quot;Give me six hours to chop down a tree and I will spend the first four sharpening the axe.&quot;, &quot;Tell me and I forget. Teach me and I remember. Involve me and I learn.&quot;, &quot;Listen to many, speak to a few.&quot;, &quot;Only when the tide goes out do you discover who has been swimming naked.&quot; ]</p> <pre><code>@app.route(&quot;/hello/&lt;username&gt;/&quot;) def hello_user(username): quotes = [ &quot;Only two things are infinite, the universe and human stupidity, and I am not sure about the former.&quot;, &quot;Give me six hours to chop down a tree and I will spend the first four sharpening the axe.&quot;, &quot;Tell me and I forget. Teach me and I remember. Involve me and I learn.&quot;, &quot;Listen to many, speak to a few.&quot;, &quot;Only when the tide goes out do you discover who has been swimming naked.&quot; ] return &quot;&lt;h2&gt;Hello &quot; + username + &quot;&lt;/h2&gt;&lt;h3&gt;Quote of the Day for You&lt;/h3&gt;&quot; + random.choice(quotes) </code></pre> <p>Define below a view function 'display_quotes', which returns an html string that displays all the quotes present in 'quotes' list in an unordered list. Before displaying 'quotes' as an unordered list, the html string must also include a 'h1' header &quot;Famous Quotes&quot;. The view function 'display_quotes' should be mapped to URL '/quotes/' . Use the below list 'quotes' in 'display_quotes' function quotes = [ &quot;Only two things are infinite, the universe and human stupidity, and I am not sure about the former.&quot;, &quot;Give me six hours to chop down a tree and I will spend the first four sharpening the axe.&quot;, &quot;Tell me and I forget. Teach me and I remember. Involve me and I learn.&quot;, &quot;Listen to many, speak to a few.&quot;, &quot;Only when the tide goes out do you discover who has been swimming naked.&quot; ]</p> <pre><code>@app.route(&quot;/quotes/&quot;) def display_quotes(): quotes = [ &quot;Only two things are infinite, the universe and human stupidity, and I am not sure about the former.&quot;, &quot;Give me six hours to chop down a tree and I will spend the first four sharpening the axe.&quot;, &quot;Tell me and I forget. Teach me and I remember.
Involve me and I learn.&quot;, &quot;Listen to many, speak to a few.&quot;, &quot;Only when the tide goes out do you discover who has been swimming naked.&quot; ] return &quot;&lt;h1&gt;Famous Quotes&lt;/h1&gt;&lt;ul&gt;&lt;li&gt;&quot;+ quotes[0] +&quot;&lt;/li&gt;&lt;li&gt;&quot;+ quotes[1] +&quot;&lt;/li&gt;&lt;li&gt;&quot;+ quotes[2] +&quot;&lt;/li&gt;&lt;li&gt;&quot;+ quotes[3] +&quot;&lt;/li&gt;&lt;li&gt;&quot;+ quotes[4] +&quot;&lt;/li&gt;&lt;/ul&gt;&quot; </code></pre> <p>Write the required code below which runs the Flask application 'app' defined above on host 0.0.0.0 and port 8000</p> <pre><code>if __name__ == '__main__': app.run(host='0.0.0.0', port=8000) </code></pre>
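<p>As an aside, <code>display_quotes</code> can be written more tersely with <code>join</code>; this is a drop-in variant producing the same output, assuming the same <code>app</code> object as above:</p> <pre><code>@app.route('/quotes/')
def display_quotes():
    quotes = [
        'Only two things are infinite, the universe and human stupidity, and I am not sure about the former.',
        'Give me six hours to chop down a tree and I will spend the first four sharpening the axe.',
        'Tell me and I forget. Teach me and I remember. Involve me and I learn.',
        'Listen to many, speak to a few.',
        'Only when the tide goes out do you discover who has been swimming naked.',
    ]
    # build one &lt;li&gt; per quote instead of indexing each element by hand
    items = ''.join('&lt;li&gt;' + q + '&lt;/li&gt;' for q in quotes)
    return '&lt;h1&gt;Famous Quotes&lt;/h1&gt;&lt;ul&gt;' + items + '&lt;/ul&gt;'
</code></pre>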
python|web|flask
5
1,904,939
66,294,450
How to display a scale or vscale in GTK-3 with Python
<p>I have code that displays a VScale, sort of. The widget shows with a number indicating the current value. It responds to page up events by changing the value. However, there is no slider; there is just a small circle. How can I get it to display properly so that the user can adjust the value by dragging the slider?</p> <p><a href="https://i.stack.imgur.com/zs20R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zs20R.png" alt="enter image description here" /></a></p> <p>Here is my code:</p> <pre><code>class ThresholdingWindow(SuperClass): def __init__(self, parent, controller): super(ThresholdingWindow, self).__init__(parent, controller) self.set_title(&quot;Thresholding&quot;) self.controller = controller button = Gtk.Button.new_with_label(&quot;Apply&quot;) self.box.pack_start(button, True, True, 0) button2 = Gtk.Button.new_with_label(&quot;Back to Main&quot;) button2.connect(&quot;clicked&quot;, self.on_back_clicked) self.box.pack_start(button2, True, True, 0) adjustment = Gtk.Adjustment(0.0,0.0,1.0,.01,.02) # scale = Gtk.Scale.new(Gtk.Orientation.VERTICAL, adjustment) scale = Gtk.VScale.new(adjustment) scale.set_digits(3) # scale = Gtk.VScale(adjustment) self.box.pack_start(scale, False, False,0) </code></pre>
<p>Found on Stack Overflow:</p> <pre><code> scale.set_size_request(30,100) </code></pre> <p>It seems that there wasn't enough &quot;real estate&quot; on the screen to accommodate the slider.</p>
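<p>For context, the call slots in right after the scale is created in the question's <code>__init__</code>; a sketch, assuming the rest of the class is unchanged:</p> <pre><code>adjustment = Gtk.Adjustment(0.0, 0.0, 1.0, .01, .02)
scale = Gtk.VScale.new(adjustment)
scale.set_digits(3)
scale.set_size_request(30, 100)   # reserve enough room for the trough and slider
self.box.pack_start(scale, False, False, 0)
</code></pre>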
python|gtk3
0
1,904,940
59,201,188
Set different node colors in a random networkx graph
<p>I have created an Erdős-Rényi random graph with 100 nodes, but I want to draw this graph with different node colors. For example, I want to have 25 nodes colored red, 25 nodes colored blue, and so on. Can you help me? How can I achieve this in my code?</p> <pre><code>&gt;&gt;&gt; import networkx as nx &gt;&gt;&gt; import matplotlib.pyplot as plt &gt;&gt;&gt; G = nx.erdos_renyi_graph(100, 0.02) &gt;&gt;&gt; nx.draw(G, node_color=range(100), node_size=800, cmap=plt.cm.Blues) </code></pre>
<p>This can be achieved using colors from the <a href="https://matplotlib.org/2.0.2/api/colors_api.html" rel="nofollow noreferrer">matplotlib colors API</a>. From the <a href="https://networkx.github.io/documentation/stable/reference/generated/networkx.drawing.nx_pylab.draw_networkx.html#networkx.drawing.nx_pylab.draw_networkx" rel="nofollow noreferrer">networkx.draw</a> docs, we can see the description of <code>node_color</code> is as follows:</p> <blockquote> <p><strong>node_color</strong> (color or array of colors (default=’#1f78b4’))</p> <p>Node color. Can be a single color or a sequence of colors with the same length as nodelist. Color can be string, or rgb (or rgba) tuple of floats from 0-1. If numeric values are specified they will be mapped to colors using the cmap and vmin,vmax parameters. See matplotlib.scatter for more details.</p> </blockquote> <p>So in your case, you want 25 nodes to be one color, 25 to be another, etc. For this, we can use <code>colors = ['r','b','y','c']*25</code> to define the array of colors, and then pass this to <code>nx.draw</code> as in the following code (the <code>dpi</code> setting belongs to the figure, not to <code>nx.draw</code>).</p> <h2>Code:</h2> <pre><code>import networkx as nx import matplotlib.pyplot as plt G = nx.erdos_renyi_graph(100, 0.02) colors = ['r','b','y','c']*25 plt.figure(figsize=(10,10), dpi=500) nx.draw(G, node_size=400, node_color=colors) </code></pre> <h2>Output:</h2> <p><a href="https://i.stack.imgur.com/58uBj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/58uBj.png" alt="Output graph" /></a></p>
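<p>If you would rather have 25 consecutive node ids per color instead of the cycling r, b, y, c pattern above, a small variant:</p> <pre><code># blocks of 25 consecutive node ids per color; still 25 nodes of each color
colors = ['r'] * 25 + ['b'] * 25 + ['y'] * 25 + ['c'] * 25
print(len(colors), colors[:3], colors[24:27])  # 100 ['r', 'r', 'r'] ['r', 'b', 'b']
</code></pre>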
python|python-3.x|list|python-2.7|networkx
3
1,904,941
72,880,715
Hosting a server-side websocket and a gui loop simultaneously
<p>I'm trying to make a simple multiplayer LAN Catan game, using a Python host and website clients in JS. The clients are fine, but I'm having issues with hosting the websocket (with the <code>websockets</code> library) and running a GUI loop (using PySimpleGUI) in parallel, at the same time, with the <code>threading</code> library. Here's a (hopefully) minimal example:</p> <pre class="lang-py prettyprint-override"><code>import asyncio import json import websockets import PySimpleGUI as sg import threading async def gameLogicHandler(event, ws): &lt;handle client input&gt; async def handler(websocket): while True: try: message = await websocket.recv() except websockets.ConnectionClosedOK: break await gameLogicHandler(json.loads(message), websocket) layout = [[sg.Text('Catan Server', size=(20, 1), justification='center', font='Helvetica 20')], [sg.Text('Players:')], [sg.Multiline(size=(20, 10), key='players')]] window = sg.Window('Catan Server', layout) def updateGui(): while True: event, values = window.read() window[&quot;players&quot;].update('\n'.join([player.name for player in players])) # player object I'm using that isn't included here if event == sg.WIN_CLOSED or event == 'Exit': exit() async def main_socket(): async with websockets.serve(handler, &quot;&quot;, 8001): await asyncio.Future() def main(): t = threading.Thread(target=updateGui) t.start() asyncio.run(main_socket()) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>I'm sure there are numerous mistakes, and I'm rather new to all of these libraries, as well as asking questions on Stack Overflow, so please don't be too harsh.</p> <p>However, the updateGui function just runs once, instead of constantly. (If I put a print statement in it, it's only in the console once). If you have any insights, it's very much appreciated. Thank you :)</p> <p>Edit: I have discovered that the function is pausing on the line <code>event, values = window.read()</code> for some reason; if I put another print after that line, it does not run until the window is closed.</p>
<p>A couple of things are wrong:</p> <ul> <li>The GUI should run in the main thread.</li> <li><code>window.read()</code> returns an <code>(event, values)</code> tuple, so unpack it: <code>event, values = window.read(100)</code>. Also avoid passing <code>None</code> as the <code>timeout_key</code>, because <code>None</code> is the same value as <code>sg.WIN_CLOSED</code>, so every timeout would look like a window close.</li> </ul> <pre class="lang-py prettyprint-override"><code>def updateGui(): while True: event, values = window.read(timeout=100) if event == sg.WIN_CLOSED or event == 'Exit': break elif event == sg.TIMEOUT_EVENT: window[&quot;players&quot;].update('\n'.join([player.name for player in players])) window.close() </code></pre>
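<p>To honor the first point, one minimal sketch (assuming the question's <code>main_socket</code> and the fixed <code>updateGui</code> above) is to push the websocket server onto a background thread and let the GUI own the main thread:</p> <pre class="lang-py prettyprint-override"><code>import asyncio
import threading

def serve_in_background():
    asyncio.run(main_socket())   # main_socket as defined in the question

threading.Thread(target=serve_in_background, daemon=True).start()
updateGui()                      # the GUI loop runs on the main thread
</code></pre>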
python|websocket|python-multithreading|pysimplegui
1
1,904,942
62,949,488
AMLS Experiment run stuck in status "Running"
<p>I made an Azure Machine Learning Service Experiment run and logged neural network losses with Jupyter Notebook. Logging worked fine and NN training completed as it should. However, the experiment is stuck in the running status. Shutting down the compute resources does not shut down the Experiment run and I cannot cancel it from the Experiment panel. In addition, the run does not have any log-files.</p> <p>Has anyone had the same behavior? Run has now lasted for over 24 hours.</p> <p><a href="https://i.stack.imgur.com/KzAoS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KzAoS.jpg" alt="AMLS Experiment run" /></a></p>
<p>This totally happens from time to time. It is certainly frustrating, especially because the &quot;Cancel&quot; button is grayed out. You can use either the CLI or Python SDK to cancel the run.</p> <h2>SDK</h2> <h3>&gt;= 1.16.0</h3> <p>As of version <code>1.16.0</code>, an <code>Experiment</code> object is no longer needed. Instead you can access the run using the <a href="https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py#get-workspace--run-id-&amp;WT.mc_id=AI-MVP-5003930" rel="nofollow noreferrer"><code>Run</code></a> or <a href="https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class)?view=azure-ml-py#get-run-run-id-&amp;WT.mc_id=AI-MVP-5003930" rel="nofollow noreferrer"><code>Workspace</code></a> objects directly:</p> <pre class="lang-py prettyprint-override"><code>from azureml.core import Workspace, Experiment, Run, VERSION print(&quot;SDK version:&quot;, VERSION) ws = Workspace.from_config() run = ws.get_run('YOUR_RUN_ID') run = Run.get(ws, 'YOUR_RUN_ID') # also works run.cancel() </code></pre> <h3>&lt; 1.16.0</h3> <pre class="lang-py prettyprint-override"><code>from azureml.core import Workspace, Experiment, Run, VERSION print(&quot;SDK version:&quot;, VERSION) ws = Workspace.from_config() exp = Experiment(workspace = ws, name = 'YOUR_EXP_NAME') run = Run(exp, run_id='YOUR STEP RUN ID') run.cancel() # or run.fail() </code></pre> <h1>CLI</h1> <p><a href="https://docs.microsoft.com/en-us/azure/machine-learning/reference-azure-machine-learning-cli#install-the-extension" rel="nofollow noreferrer">More CLI details here</a></p> <pre class="lang-sh prettyprint-override"><code>az login az ml run cancel --run YOUR_RUN_ID </code></pre>
python|azure|neural-network|jupyter-notebook|azure-machine-learning-service
5
1,904,943
63,045,469
Clicking rects in Pygame
<p>I am currently getting started with Python 3 (I began in the past few days) and have started developing minor projects, but I'm having some trouble, so sorry if I can't use top-notch professional coder language.</p> <p>How can I make a pygame.draw.rect rectangle become clickable? I know about the pygame.mouse. ones, but there might be something wrong in the code. I want it so that when I press the red rect it will decrease health by 1 and will add a &quot;burn&quot; stat (it's just text for now).</p> <p>Here's the code:</p> <pre><code>import pygame import random import sys pygame.init() #Screen Size screen_width = 600 screen_height = 600 #Screen Settings screen = pygame.display.set_mode((screen_width, screen_height)) br_color = (0, 0, 0) pygame.display.set_caption(&quot;Type Effect Beta 0.0.1&quot;) #Game Over Bullian game_over = False #Other Defenitions clock = pygame.time.Clock() myFont = pygame.font.SysFont(&quot;arial&quot;, 20) #Basic Recources health = 50 score = 0 status = &quot;none&quot; #Colors for the Text white = (255, 255, 255) red = (255, 0, 0) #Mouse Things mouse_location = pygame.mouse.get_pos() print(mouse_location) #Status Text Helpers burning = &quot;Burning&quot; #Cards card_size_x = 45 card_size_y = 60 fire_car_color = (255, 0 ,0) fire_card_posx = 300 fire_card_posy = 300 card_button_fire = pygame.Rect(fire_card_posx, fire_card_posy, card_size_x, card_size_y) #Functions def health_decrease_burn(health, status): health -= 1 status = &quot;burning&quot; return health and status while not game_over: for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() if pygame.mouse.get_pressed()[0] and card_button_fire.collidepoint(mouse_location): health_decrease_burn() if health_decrease_burn(health, status) and health &lt;= 0: game_over = True text = &quot;Score:&quot; + str(score) lable = myFont.render(text, 1, white) screen.blit(lable, (10, 10)) text = &quot;Health:&quot; + str(health) lable = myFont.render(text, 1, red) screen.blit(lable, (10, 30)) text = &quot;Status:&quot; + str(status) lable = myFont.render(text, 1, white) screen.blit(lable, (10, 50)) pygame.draw.rect(screen, fire_car_color, (fire_card_posx, fire_card_posy, card_size_x, card_size_y)) clock.tick(30) pygame.display.update() </code></pre>
<p>You need to grab the <code>mouse_location</code> every iteration of your main loop, as the mouse position/state is constantly changing. The current code is only fetching the mouse position once, on start.</p> <pre><code>while not game_over: for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() elif event.type == pygame.MOUSEBUTTONUP: # if mouse button clicked mouse_location = pygame.mouse.get_pos() # &lt;-- HERE if pygame.mouse.get_pressed()[0] and card_button_fire.collidepoint(mouse_location): health_decrease_burn() #[...etc ] </code></pre>
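<p>As an aside, an event-driven variant fires exactly once per click and avoids mixing <code>MOUSEBUTTONUP</code> events with <code>pygame.mouse.get_pressed()</code> (which reports the live state, already released by the time the up-event arrives). It also sidesteps the fact that the question's <code>health_decrease_burn</code> only returns new values and never changes the outer variables. A sketch for the body of the main loop, assuming the question's variables:</p> <pre><code>while not game_over:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            sys.exit()
        elif event.type == pygame.MOUSEBUTTONDOWN and event.button == 1:
            if card_button_fire.collidepoint(event.pos):
                health -= 1           # mutate the state directly
                status = 'burning'    # add the burn stat text
    #[...etc ]
</code></pre>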
python|pygame
1
1,904,944
62,349,983
Parsing addresses from a blob of text in dataframe column
<p>I am trying to use a library called pyap to parse addresses from text in a dataframe column.</p> <p>My dataframe df has data in the following format:</p> <pre><code>MID TEXT_BODY 1 I live at 4998 Stairstep Lane Toronto ON 2 Let us catch up at the Ruby Restaurant. Here is the address 1234 Food Court Dr, Atlanta, GA 30030 </code></pre> <p>The package website gives the following sample:</p> <pre><code>import pyap test_address = """ I live at 4998 Stairstep Lane Toronto ON """ addresses = pyap.parse(test_address, country='CA') for address in addresses: # shows found address print(address) </code></pre> <p>THe sample return it as a list but I would like to keep it in the dataframe as a new column</p> <p>The output I am expecting is a data frame like this:</p> <pre><code>MID ADDRESS TEXT_BODY 1 4998 Stairstep Lane Toronto ON I live at 4998 Stairstep Lane Toronto ON 2 1234 Food Court Dr, Atlanta, GA 30030 Let us catch up at the Ruby Restaurant. Here is the address 1234 Food Court Dr, Atlanta, GA 30030 </code></pre> <p>I tried this:</p> <pre><code> df["ADDRESS"] = df['TEXT_BODY'].apply(lambda row: pyap.parse(row, country='US')) </code></pre> <p>But this does not work. I get an error:</p> <pre><code>TypeError: expected string or bytes-like object </code></pre> <p>How do I do this?</p>
<p><code>Apply</code> is indeed the right direction. </p> <pre><code>def parse_address(addr):
    address = pyap.parse(addr, country = "US")
    if not address:
        address = pyap.parse(addr, country = "CA")
    return address[0] if address else None  # guard: rows with no detectable address would otherwise raise IndexError


df["addr"] = df.TEXT_BODY.apply(parse_address)
</code></pre> <p>The result is: </p> <pre><code>   MID                                          TEXT_BODY                                   addr
0    1          I live at 4998 Stairstep Lane Toronto ON         4998 Stairstep Lane Toronto ON
1    2  Let us catch up at the Ruby Restaurant. Here i...  1234 Food Court Dr, Atlanta, GA 30030
</code></pre>
python|python-3.x|pandas
0
1,904,945
62,376,219
How to get data from USDA API into Postgres database plus add one new column
<p>I am using an API to access the USDA food database and pull some nutrient data for a particular food, do some calculations (with another function partially shown here). I would like to find a way to run my function on every food in the database that contains that particular nutrient, "tryptophan", and then add the result to Postgres along with the food name, foodID, and amino acids amounts for each item. Also, if there is a way to narrow it down to standard reference items that would be ideal. Does anyone know how to do this?</p> <pre><code>import requests import json import pandas as pd apiKey = '' foodID = '' def nutrient_API(apiKey, foodID): #calls get api and json load api_resp = json.loads(requests.get('https://api.nal.usda.gov/fdc/v1/' + foodID + '?api_key=' + apiKey).text) #only return nutrition information api_nutrients = api_resp['foodNutrients'] #first entry is its description, foodID, and database entry type nutrientDict = {"FoodID": [api_resp['description'],foodID, api_resp['dataType']]} for items in api_nutrients: if 'amount' in items: #each entry includes nutrient name, nutrient id, amount, and its respective unit nutrientDict.update({(items['nutrient']['name']): [(items['nutrient']['id']), (items['amount']),(items['nutrient']['unitName'])]}) #print(nutrientDict) return(nutrientDict) def trypfunc(foodID): dataframe = pd.DataFrame(nutrient_API(apiKey, foodID)) tryp_g=(dataframe['Tryptophan'][1]) #does some more stuff return trypfunc # I call the above function for one food at a time with the foodID print("Sesame seeds: ") trypfunc(foodID='170150') </code></pre>
<p>I don't have an API key, so I can't verify at this time; still, here is how I would start. From here:</p> <p><a href="https://github.com/USDA/USDA-APIs/issues/64" rel="nofollow noreferrer">https://github.com/USDA/USDA-APIs/issues/64</a></p> <pre><code>params = {'api_key': key}
data = {'generalSearchInput': 'chia'}
response = requests.post(
    r'https://api.nal.usda.gov/fdc/v1/search',
    params=params,
    json=data
)
</code></pre> <p>Then I went here: </p> <p><a href="https://fdc.nal.usda.gov/fdc-app.html#/?query=Tryptophan" rel="nofollow noreferrer">https://fdc.nal.usda.gov/fdc-app.html#/?query=Tryptophan</a></p> <p>and snooped the request payload:</p> <pre><code>{"includeDataTypes":{"Survey (FNDDS)":true,"Foundation":true,"Branded":true,"SR Legacy":true},"referenceFoodsCheckBox":true,"sortCriteria":{"sortColumn":"description","sortDirection":"asc"},"generalSearchInput":"Tryptophan","pageNumber":1,"exactBrandOwner":null,"currentPage":1}
</code></pre> <p>So I figure a minimal example would be:</p> <pre><code>params = {'api_key': key}
data = {'generalSearchInput': 'Tryptophan'}
response = requests.post(
    r'https://api.nal.usda.gov/fdc/v1/search',
    params=params,
    json=data
)
</code></pre> <p>[Edited] Got an API key. So when you run the above you get:</p> <pre><code>tryptophan_dict = json.loads(response.text)

tryptophan_dict.keys()
dict_keys(['foodSearchCriteria', 'totalHits', 'currentPage', 'totalPages', 'foods'])

tryptophan_dict['totalHits']
5124

tryptophan_dict['totalPages']
103
</code></pre> <p>You only get one page of results at a page size of 50. That leaves you with the options of grabbing the totalPages number and iterating over it with responses that have:</p> <pre><code>data = {'generalSearchInput': 'Tryptophan', 'pageNumber': page_number}
</code></pre> <p>or just grabbing everything using the totalHits number:</p> <pre><code>data = {'generalSearchInput': 'Tryptophan', 'pageSize': 5124}
</code></pre> <p>To get description:</p> <pre><code>for food in tryptophan_dict["foods"]:
    print(food["description"])
</code></pre>
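<p>To actually collect all the pages, a sketch of the paging loop, assuming <code>pageNumber</code> and <code>totalPages</code> behave as shown above:</p> <pre><code>import requests

params = {'api_key': key}
all_foods = []
page = 1
while True:
    data = {'generalSearchInput': 'Tryptophan', 'pageNumber': page}
    resp = requests.post(r'https://api.nal.usda.gov/fdc/v1/search',
                         params=params, json=data).json()
    all_foods.extend(resp['foods'])
    if page &gt;= resp['totalPages']:
        break
    page += 1
</code></pre>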
python|pandas|postgresql|api|python-requests
0
1,904,946
62,137,528
How to create a new string list from a list of strings excluding one variable?
<p>I have a list of strings and I'm trying to iterate through it and, on each iteration, create a new list without one of the strings. I tried the following:</p> <pre><code>tx_list = ['9540a4ff214d6368cc557803e357f8acebf105faad677eb06ab10d1711d3db46', 'dd92415446692593a4768e3604ab1350c0d81135be42fd9581e2e712f11d82ed',....]

for txid in tx_list:
    tx_list_copy = tx_list
    tx_list_without_txid = tx_list_copy.remove(txid)
</code></pre> <p>But on each iteration the new list is empty.</p>
<p>The original loop fails because <code>tx_list_copy = tx_list</code> only creates a second reference to the same list (not a copy), and <code>list.remove()</code> mutates the list in place and returns <code>None</code>, so <code>tx_list_without_txid</code> ends up as <code>None</code> while the original list keeps shrinking. You may try this instead:</p> <pre><code>for i in range(len(tx_list)):
    tx_list_without_txid = tx_list[:i] + tx_list[i+1:]
    # do something with the new list...
</code></pre>
python|python-3.x|string|list
1
1,904,947
35,561,625
storing SQL queries in a dictionary?
<p>What is the best way to do a stored procedure for MySQL queries in python? </p> <p>I really need to be able to access the key name, which is why I went down the route of using a dictionary. Any advice would be greatly appreciated! </p> <pre><code>reports = {
    'z_report': """
        SELECT *
        FROM calculation1
        WHERE owner = "z"
        AND calculation1.`Last Seen` &gt; CURDATE() - INTERVAL 7 DAY ;
        """,
    'y_report': """
        SELECT *
        FROM calculation1
        WHERE owner = "y"
        AND calculation1.`Last Seen` &gt; CURDATE() - INTERVAL 7 DAY ;
        """,
    'x_report': """
        SELECT *
        FROM calculation1
        WHERE owner = "x"
        AND calculation1.`Last Seen` &gt; CURDATE() - INTERVAL 7 DAY ;
        """,
    'master_report': """
        SELECT *
        FROM calculation1
        WHERE calculation1.`Last Seen` &gt; CURDATE() - INTERVAL 7 DAY ;
        """,
}
</code></pre> <p>P.S. I already know this is vulnerable to SQLi; I'm just trying to get a prototype up first.</p>
<p>I'm not saying your approach is good or bad. I'll just address your specific question: you could put each query in its own function e.g.</p> <pre><code>def do_z_report(): # call MySQL with specific z_report query def do_y_report(): # call MySQL with specific y_report query </code></pre> <p>Add these to your dictionary:</p> <pre><code>queries = {'z_report': do_z_report, 'y_report' : do_y_report} </code></pre> <p>Call the functions:</p> <pre><code>queries['z_report']() queries['y_report']() </code></pre>
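<p>Since the four queries differ only in the <code>WHERE</code> clause, another option, which also removes the SQLi worry mentioned in the question, is a single parameterized template. A sketch, assuming a DB-API cursor (e.g. from <code>mysql.connector</code> or PyMySQL):</p> <pre><code>REPORT_SQL = """
    SELECT *
    FROM calculation1
    WHERE (%s IS NULL OR owner = %s)
    AND calculation1.`Last Seen` &gt; CURDATE() - INTERVAL 7 DAY ;
    """

def run_report(cursor, owner=None):
    # owner=None yields the master report; otherwise filter by that owner
    cursor.execute(REPORT_SQL, (owner, owner))
    return cursor.fetchall()
</code></pre>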
python|mysql|dictionary
0
1,904,948
73,517,443
Need to set a default value for multiple input for list in Python
<p>I'm trying to fall back to a default value if the input is blank. How can I do that?</p> <p>The output I'm getting is that the variable is blank if there is no user input.</p> <pre><code>Brand_default = 'ABC','XYZ'
cate_default = 'GKK','KKL','MKK','UKK'
Brand = list(input('Please enter Brand? (ABC/XYZ)= ').split(',') or Brand_default)
Cate = list(input('Please enter Category?GKK,KKL,MKK,UKK = ').split(',') or cate_default)
</code></pre>
<p>Your logic needs to decide whether the value is empty, and then split if not. Unfortunately, this means your otherwise rather elegant <code>or</code> formulation won't work.</p> <pre><code>def split_or_default(prompt, default): response = input(prompt + &quot; (&quot; + &quot;/&quot;.join(default) + &quot;) &quot;) if response == &quot;&quot;: return default return response.split(&quot;,&quot;) Brand = split_or_default( 'Please enter Brand?', ['ABC','XYZ']) Cate = split_or_default( 'Please enter Category?', ['GKK','KKL','MKK','UKK']) </code></pre> <p>Notice also how the defaults are also lists now. I suppose your code could work with lists or tuples, but that seems like an unwelcome complication.</p> <p>Tangentially, you should probably remove the &quot;Please enter&quot; part from the prompt. It's nice to be polite, but in user interfaces, it's actually more friendly to be concise.</p> <p>Requiring commas between the values is also somewhat cumbersome. If some values could contain spaces, you need for the input to be unambiguous; but if these examples are representative, you could just as well split on whitespace.</p> <p>In keeping with the <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="nofollow noreferrer">DRY Principle</a> I refactored the repeated code into a function.</p>
python|python-3.x|list|user-input
0
1,904,949
49,279,387
Looping through a pandas Dataframe to get values from another Dataframe
<p>I have two pandas dataframes. The first dataframe contains the location of different circles over time. Example: df1 = </p> <pre><code> x y time circle 1.0 235 133 1.0 2.0 236 133 1.0 3.0 245 425 1.0 4.0 215 325 2.0 5.0 287 203 4.0 6.0 394 394 5.0 </code></pre> <p>The second dataframe is organised exactly like the first, but contains the locations of squares at different times. Example: df2 = </p> <pre><code> x y time square 1.0 243 233 1.0 1.0 293 436 2.0 2.0 189 230 3.0 2.0 189 233 4.0 3.0 176 203 4.0 3.0 374 394 5.0 </code></pre> <p>I would like to figure out how to loop through the dataframe df1 to access all the present squares in df2 at each time point, to find out which is closest.</p> <p>Example output: </p> <pre><code> x y time closest_sq sq_x sq_y circle 1.0 235 133 1.0 1.0 243 233 2.0 236 133 1.0 1.0 243 233 3.0 245 425 1.0 1.0 243 233 4.0 215 325 2.0 1.0 243 233 5.0 287 203 4.0 2.0 189 233 6.0 394 394 5.0 3.0 374 394 </code></pre> <p>I'm guessing I have to use either iterrows() or itertuples() in a for loop to get at this but I'm not sure, and scipy cdist to get the distance.</p>
<p>You can use <code>pd.merge()</code> to join the dataframes and then use <code>df.loc</code>. Here's how you can do this with your dataframes:</p> <pre><code>df3 = pd.merge(df1,df2,on='time',how='inner')
df4 = df3.loc[df3['time'] == 1.0]
df4[['circle','square','time']].head()
</code></pre> <p>You can optimize the above code by using <code>inplace=True</code>. </p>
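<p>If you need the geometrically <em>closest</em> square per circle (the question mentions <code>cdist</code>), a sketch is to merge on <code>time</code> as above and keep the minimum-distance row per circle. This assumes <code>circle</code> and <code>square</code> have been turned into ordinary columns with <code>reset_index()</code>:</p> <pre><code>import numpy as np

m = pd.merge(df1, df2, on='time', suffixes=('', '_sq'))
m['dist'] = np.hypot(m['x'] - m['x_sq'], m['y'] - m['y_sq'])
closest = m.loc[m.groupby('circle')['dist'].idxmin()]
</code></pre>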
python|pandas|for-loop|dataframe
3
1,904,950
70,886,543
Python: resample dataframe and sum
<p>I have the following dataframe:</p> <pre><code>df=pd.DataFrame(index=[0,1]) df['timestamp'] = ['2022-01-01 20:10:00', '2022-01-01 20:50:00'] df['currency'] = ['USD', 'USD'] df['operation'] = ['deposit', 'deposit'] df['amount'] = [0.1, 0.4] df: timestamp currency operation amount 0 2022-01-01 20:10:00 USD deposit 0.1 1 2022-01-01 20:50:00 USD deposit 0.4 </code></pre> <p>How can I resample the data on an hourly basis and sum the &quot;amount&quot; to get the following dataframe:</p> <pre><code>df: timestamp currency operation amount 0 2022-01-01 20:00:00 USD deposit 0.5 </code></pre> <p>Using <code>.resample('H')</code> eliminates the currency and operation columns. How can I do this so that Sum the &quot;amount&quot; column?</p>
<p>Do it with <code>pd.Grouper</code>, followed by <code>agg</code>:</p> <pre><code>out = df.groupby(pd.Grouper(key='timestamp',freq='1h')).\
         agg(lambda x : x.sum() if x.dtypes == float else x.iloc[0]).reset_index()
Out[122]:
            timestamp currency operation  amount
0 2022-01-01 20:00:00      USD   deposit     0.5
</code></pre>
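<p>One caveat: the <code>agg</code> above keeps only the first <code>currency</code>/<code>operation</code> value per hour. If those can vary within an hour, a safer sketch makes them part of the grouping key (this assumes <code>timestamp</code> has been parsed to datetime):</p> <pre><code>df['timestamp'] = pd.to_datetime(df['timestamp'])
out = (df.groupby(['currency', 'operation',
                   pd.Grouper(key='timestamp', freq='1h')])['amount']
         .sum()
         .reset_index())
</code></pre>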
python|pandas|dataframe|datetime|pandas-resample
1
1,904,951
70,770,752
I can't install paho-mqtt
<p>I tried to install paho-mqtt. I typed <code>pip install paho-mqtt</code> to install it and it was successful! But when I type <code>import paho.mqtt.client as mqtt</code> in my .py file, the following error is shown:</p> <pre><code>Import &quot;paho.mqtt.client&quot; could not be resolved.
</code></pre> <p>What's wrong? Please help me if you know how to fix it, thanks.</p>
<p>Try this; it installs the package for the exact interpreter you invoke, which avoids the common case where the <code>pip</code> on your PATH belongs to a different Python installation:</p> <pre><code>python -m pip install paho-mqtt
</code></pre>
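<p>If the import still fails after that, it usually means the package went into a different interpreter than the one running your script; printing <code>sys.executable</code> shows which one to install into:</p> <pre><code>import sys
print(sys.executable)  # install with this exact interpreter, e.g. /usr/bin/python3 -m pip install paho-mqtt
</code></pre>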
python|pip|paho
0
1,904,952
59,930,581
Writing a dictionary to file with biopython
<p>I'm new to using Biopython... I'm trying to write a dictionary to a file using Biopython. Here is my code:</p> <pre><code>with open("file_in.fasta") as original, open("file_out.fasta", "w") as corrected:
    for seq_record in SeqIO.parse(original,'fasta'):
        desc=seq_record.description
        seq_dict={seq_record.id + '_1':seq_record.seq}
        SeqIO.write(seq_dict.values(),corrected,'fasta')
</code></pre> <p>But I get this error: AttributeError: 'Seq' object has no attribute 'id'</p>
<p>Given your aim of wanting to add <code>_1</code> to the end of each <code>&gt;</code> line, you don't need a dictionary; you can just modify the sequence record directly:</p> <pre><code>from Bio import SeqIO

with open("file_in.fasta") as original, open("file_out.fasta", "w") as corrected:
    for seq_record in SeqIO.parse(original,'fasta'):
        seq_record.description += '_1'
        seq_record.id = seq_record.description.split()[0]
        SeqIO.write(seq_record, corrected, 'fasta')
</code></pre> <p>Modifying both the <code>.description</code> and <code>.id</code> like this is important, so the two stay consistent when the record is written back out.</p> <p>Note this would also be a simple task with unix tools like <code>sed</code>; you don't really need Biopython unless you're doing something else too.</p>
python|python-3.x|windows|bioinformatics|biopython
2
1,904,953
59,977,637
How to iterate rows in a csr matrix?
<p>I use this code to iterate elements in a csr matrix.</p> <pre><code>import numpy as np from scipy import sparse A = [[0,0,0,0],[5,8,0,0],[0,0,3,0],[0,6,0,0]] M = sparse.csr_matrix(A) print(type(M)) zip2 = lambda x: zip(x[0], x[1]) for i1, i2 in zip2(M.tocsr().nonzero()): print(i1, i2, M[i1, i2]) </code></pre> <p>But I'd like to iterate rows in this way. Is there a way to do so?</p> <pre><code>for i1 in ...: # Do something with i1 for i2 in ...: # Do something with (i1, i2) </code></pre> <p>One way to achieve this is to use <code>.indices</code> <code>.indptr</code> <code>.data</code>. But probably there is something more readable that this way?</p>
<p>One way is to convert the csr matrix to a dense array, but that is not a good idea if the matrix is huge, because of limited memory.</p> <p>The other solution is to keep it as CSR and iterate only over the rows, like this:</p> <pre><code>for i in range(0, M.shape[0]):
    M.getrow(i).toarray()[0]
</code></pre>
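<p>For completeness, the <code>.indptr</code>/<code>.indices</code>/<code>.data</code> route mentioned in the question looks like this; it avoids the per-row <code>getrow()</code> calls and touches only the stored non-zeros:</p> <pre><code>for i in range(M.shape[0]):
    row_start, row_end = M.indptr[i], M.indptr[i + 1]
    for j, v in zip(M.indices[row_start:row_end], M.data[row_start:row_end]):
        print(i, j, v)  # row index, column index, value of each non-zero entry
</code></pre>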
python|scipy|sparse-matrix
0
1,904,954
60,151,952
How to suggest a plot type from a csv file data python
<p>How can I build a recommender for data visualization in Python? For example, suppose a csv file contains multiple columns with different data. I need to treat each column as the x axis and its rows as the y axis. Then I need to read each column and, based on the data, recommend a line graph, scatter plot, pie chart, or another plot type using matplotlib or any other visualization tool.</p> <p>How can I determine which chart to use for each column's data?</p>
<p>The type of plots largely depends on whether your data is categorical or continuous. </p> <p>Assuming that your data is relatively clean, you can call <code>df.dtypes</code> to determine the data types of the columns. If they are continuous (<code>float</code>), you can use scatter plots, distribution plots, etc, depending on what you want to do. </p> <p>If they are <code>object</code>, you can use <code>df['col'].value_counts()</code> to get the most frequent values, select those that are important enough, and create a pie chart, etc. </p> <p>If they are <code>int</code> (integers), you can use a bar chart with <code>value_counts()</code>, or pie charts, etc. You get the idea.</p> <pre><code>Out[18]: 
col1      int32
col2      int32
col3     object
col4    float64
dtype: object
</code></pre>
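<p>A minimal sketch of such a recommender; the thresholds and chart names here are arbitrary choices for illustration, not a standard:</p> <pre><code>import pandas as pd

def suggest_chart(series):
    if pd.api.types.is_float_dtype(series):
        return 'scatter or line plot'
    n_unique = series.nunique()
    if series.dtype == object:
        return 'pie chart' if n_unique &lt;= 6 else 'bar chart of value_counts()'
    if pd.api.types.is_integer_dtype(series):
        return 'bar chart' if n_unique &lt;= 20 else 'histogram'
    return 'inspect manually'

df = pd.read_csv('data.csv')  # placeholder file name
for col in df.columns:
    print(col, suggest_chart(df[col]))
</code></pre>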
python|machine-learning|data-visualization
0
1,904,955
2,974,124
reading floating-point numbers with 1.#QNAN values in python
<p>Does anyone know of a python string-to-float parser that can cope with MSVC nan numbers (1.#QNAN)? Currently I'm just using <code>float(str)</code> which at least copes with "nan".</p> <p>I'm using a python script to read the output of a C++ program (runs under linux/mac/win platforms) and the script barfs up when reading these values. (I did already find a C++ library to output the values consistently across platforms, but sometimes have to compare past results, so this still occasionally pops up.)</p>
<p>Since you have to deal with legacy output files, I see no other possibility but writing a <code>robust_float</code> function:</p> <pre><code>def robust_float(s): try: return float(s) except ValueError: if 'nan' in s.lower(): return float('nan') else: raise </code></pre>
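<p>Note that MSVC emits a small family of these strings (<code>1.#INF</code>, <code>-1.#IND</code>, <code>1.#QNAN</code>), so an extended sketch that also maps the infinities could look like this:</p> <pre><code>def robust_float(s):
    try:
        return float(s)
    except ValueError:
        low = s.lower()
        if 'nan' in low or 'ind' in low:   # 1.#QNAN, -1.#IND
            return float('nan')
        if 'inf' in low:                   # 1.#INF / -1.#INF
            return float('-inf') if low.lstrip().startswith('-') else float('inf')
        raise
</code></pre>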
python|cross-platform|visual-c++|nan
2
1,904,956
3,031,817
How do I make a defaultdict safe for unexpecting clients?
<p>Several times (even several in a row) I've been bitten by the defaultdict bug: forgetting that something is actually a defaultdict and treating it like a regular dictionary.</p> <pre><code>d = defaultdict(list) ... try: v = d["key"] except KeyError: print "Sorry, no dice!" </code></pre> <p>For those who have been bitten too, the problem is evident: when d has no key 'key', the <code>v = d["key"]</code> magically creates an empty list and assigns it to both <code>d["key"]</code> and <code>v</code> instead of raising an exception. Which can be quite a pain to track down if d comes from some module whose details one doesn't remember very well.</p> <p>I'm looking for a way to take the sting out of this bug. For me, the best solution would be to somehow disable a defaultdict's magic before returning it to the client.</p>
<p>You may still convert it to a normal dict.</p> <pre><code>d = collections.defaultdict(list)
d = dict(d)
</code></pre>
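<p>Alternatively, you can disable the magic in place, without copying: a <code>defaultdict</code> whose <code>default_factory</code> is set to <code>None</code> raises <code>KeyError</code> just like a plain dict.</p> <pre><code>import collections

d = collections.defaultdict(list)
d['key'].append(1)        # magic still on: creates the empty list
d.default_factory = None  # disable the magic before returning d to clients
d['missing']              # now raises KeyError
</code></pre>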
python|default-value
14
1,904,957
5,720,376
PyCUDA Memory Addressing: Memory offset?
<p>I've got a large chunk of generated data (A[i,j,k]) on the device, but I only need one 'slice' of A[i,:,:], and in regular CUDA this could be easily accomplished with some pointer arithmetic. </p> <p>Can the same thing be done within pycuda? i.e. </p> <pre><code>cuda.memcpy_dtoh(h_iA,d_A+(i*stride))
</code></pre> <p>Obviously this is completely wrong since there's no size information (unless inferred from the dest shape), but hopefully you get the idea?</p>
<p>The pyCUDA gpuArray class supports slicing of 1D arrays, but not higher dimensions that require a stride (although it is coming). You can, however, get access to the underlying pointer in a multidimensional gpuArray from the gpuarray member, which is a pycuda.driver.DeviceAllocation type, and the size information from the gpuArray.dtype.itemsize member. You can then do the same sort of pointer arithmetic you had in mind to get something that the driver memcpy functions will accept.</p> <p>It isn't very pythonic, but it does work (or at least it did when I was doing a lot of pyCUDA + MPI hacking last year).</p>
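<p>For the pointer arithmetic itself, something like the following sketch should work for copying slice <code>i</code> back to the host. This assumes <code>d_A</code> is a C-contiguous 3-D gpuarray, that <code>int(d_A.gpudata)</code> yields the raw device address, and that <code>memcpy_dtoh</code> accepts an integer device pointer; worth verifying against your pyCUDA version:</p> <pre><code>import numpy as np
import pycuda.driver as cuda

# d_A has shape (I, J, K); copy the 2D slice d_A[i, :, :] to the host
slice_bytes = d_A.shape[1] * d_A.shape[2] * d_A.dtype.itemsize
h_iA = np.empty((d_A.shape[1], d_A.shape[2]), dtype=d_A.dtype)
src_ptr = int(d_A.gpudata) + i * slice_bytes  # byte offset of slice i
cuda.memcpy_dtoh(h_iA, src_ptr)
</code></pre>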
python|cuda|addressing|relative-addressing|pycuda
2
1,904,958
67,682,893
Pyspark Dataframe Ordering Issue
<p>I am having the data shown in the image:</p> <p><a href="https://i.stack.imgur.com/4QNVt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4QNVt.png" alt="Data" /></a></p> <p>I have been trying to partition and order this data frame in such a way that we get the output shown in the image below:</p> <p><a href="https://i.stack.imgur.com/mRIEk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mRIEk.png" alt="Data" /></a></p> <p>I have tried partitioning and sorting with different columns (for example partitioning by id and date and ordering by id, date, and column3, then partitioning and ordering by id and date), but in all cases the output differs from what I expect.</p> <p>Can anyone help with this? I have been struggling with it for the past week.</p>
<p>I am not sure if I understand correctly, but this should work for your example: <br></p> <pre><code>from pyspark.sql import functions as f  # needed for f.col

df = df.orderBy(f.col('id').asc(), f.col('date').asc(), f.col('column3').desc())
</code></pre>
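<p>If a global sort is more than you need (the question mentions partitioning), a cheaper variant is to repartition by <code>id</code> and sort only within partitions. A sketch:</p> <pre><code>out = (df.repartition('id')
         .sortWithinPartitions(f.col('id').asc(),
                               f.col('date').asc(),
                               f.col('column3').desc()))
</code></pre>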
python|apache-spark|pyspark|apache-spark-sql
1
1,904,959
67,823,079
Is there any way to improve the performance of the denoising autoencoder?
<p>I am trying to train a denoising autoencoder to denoise with an image composed of two simple lines as input. However, even when using a simple image like this, it does not output a good output.What's more mysterious is that it perfectly removes noise when inputting a single linear image data as an input.</p> <p>I used a general convolutional autoencoder structure using the leakyReLU function, and the code is down below.</p> <p>Even if I increase the number of training data given as input or increase the training epoch, the result is always the same.</p> <p>Any suggestions on how to increase the performance of the denoising autoencoder would be appreciated. Thank you.</p> <p>Convolutional Auto Encoder code:</p> <pre><code> from tensorflow.keras.layers import BatchNormalization from tensorflow.keras.layers import Conv2D from tensorflow.keras.layers import Conv2DTranspose from tensorflow.keras.layers import LeakyReLU from tensorflow.keras.layers import Activation from tensorflow.keras.layers import Flatten from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Reshape from tensorflow.keras.layers import Input from tensorflow.keras.models import Model from tensorflow.keras import backend as K import numpy as np class ConvAutoencoder: @staticmethod def build(width, height, depth, filters=(32, 64), latentDim = 16): # initialize the input shape to be &quot;channels last&quot; along with # the channels dimension itself # channels dimension itself inputShape = (height, width, depth) chanDim = -1 # define the input to the encoder inputs = Input(shape=inputShape) x = inputs # loop over the number of filters for f in filters: # apply a CONV =&gt; RELU =&gt; BN operation x = Conv2D(f, (3, 3), strides=2, padding=&quot;same&quot;)(x) x = LeakyReLU(alpha=0.2)(x) x = BatchNormalization(axis=chanDim)(x) # flatten the network and then construct our latent vector volumeSize = K.int_shape(x) x = Flatten()(x) latent = Dense(latentDim)(x) # build the encoder model encoder = Model(inputs, latent, name=&quot;encoder&quot;) # start building the decoder model which will accept the # output of the encoder as its inputs latentInputs = Input(shape=(latentDim,)) x = Dense(np.prod(volumeSize[1:]))(latentInputs) x = Reshape((volumeSize[1], volumeSize[2], volumeSize[3]))(x) # loop over our number of filters again, but this time in # reverse order for f in filters[::-1]: # apply a CONV_TRANSPOSE =&gt; RELU =&gt; BN operation x = Conv2DTranspose(f, (3, 3), strides=2, padding=&quot;same&quot;)(x) x = LeakyReLU(alpha=0.2)(x) x = BatchNormalization(axis=chanDim)(x) # apply a single CONV_TRANSPOSE layer used to recover the # original depth of the image x = Conv2DTranspose(depth, (3, 3), padding=&quot;same&quot;)(x) outputs = Activation(&quot;sigmoid&quot;)(x) # build the decoder model decoder = Model(latentInputs, outputs, name=&quot;decoder&quot;) # our autoencoder is the encoder + decoder autoencoder = Model(inputs, decoder(encoder(inputs)), name=&quot;autoencoder&quot;) # return a 3-tuple of the encoder, decoder, and autoencoder return (encoder, decoder, autoencoder) </code></pre> <p>input and output images : <a href="https://i.stack.imgur.com/XFnhY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XFnhY.png" alt="The upper figure is the input image and the lower figure is the output image." /></a></p>
<p>There is a way to preserve local information in an autoencoder: use a convolution as the last layer of the encoder. That means you shouldn't use Flatten and Dense layers; instead, create the encoder latent space from the output volumes of the convolutions.</p> <p>Code:</p> <pre><code>from tensorflow.keras import layers
from tensorflow.keras.models import Model

input_shape = ...  # your shape
input = layers.Input(shape=input_shape)

# Encoder
x = layers.Conv2D(32, (3, 3), activation=&quot;relu&quot;, padding=&quot;same&quot;)(input)
x = layers.MaxPooling2D((2, 2), padding=&quot;same&quot;)(x)
x = layers.Conv2D(32, (3, 3), activation=&quot;relu&quot;, padding=&quot;same&quot;)(x)
x = layers.MaxPooling2D((2, 2), padding=&quot;same&quot;)(x)

# Decoder
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation=&quot;relu&quot;, padding=&quot;same&quot;)(x)
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation=&quot;relu&quot;, padding=&quot;same&quot;)(x)
x = layers.Conv2D(1, (3, 3), activation=&quot;sigmoid&quot;, padding=&quot;same&quot;)(x)

# Autoencoder
autoencoder = Model(input, x)
autoencoder.compile(optimizer=&quot;adam&quot;, loss=&quot;binary_crossentropy&quot;)
autoencoder.summary()
</code></pre>
python|keras|tensorflow2.0|autoencoder
1
1,904,960
66,967,692
Getting a field from another django model?
<p>How can I get the field pto from the employees app and use it in the permission app?</p> <p>employee/models.py</p> <pre><code>class Employee(AbstractUser):
    department = models.ForeignKey(Department, on_delete=models.CASCADE,blank=True, null=True)
    pto = models.IntegerField(default=20)
    is_deleted = models.BooleanField(default=False)
    is_superuser = models.BooleanField(default=False)
    roles = models.ManyToManyField(Role, related_name='+')

    def __str__(self):
        return self.username
</code></pre> <p>permission/models.py</p> <pre><code>class Permission(models.Model):
    STATUS = (
        ('PENDING', 'PENDING'),
        ('DENIED', 'DENIED'),
        ('ACCEPTED', 'ACCEPTED')
    )

    user = models.ForeignKey(Employee, on_delete=models.CASCADE, related_name='lorem')
    description = models.CharField(max_length=255)
    date_created = models.DateTimeField(auto_now=True, blank=True, null=True)
    date = models.DateField()
    status = models.CharField(max_length=200, choices=STATUS, default=STATUS[0][0])
    is_deleted = models.BooleanField(default=False)

    def __str__(self):
        return self.description
</code></pre> <p>Sorry if I was not clear; thanks in advance.</p>
<p>Add this to the <code>Permission</code> model:</p> <pre><code>from yourproject.apps.employee.models import Employee

class Permission(models.Model):
    ..........

    def get_pto(self):
        return int(self.user.pto)  # if it's a floating point number then change int to float
</code></pre> <p>Now you can use <code>get_pto</code> in your html, <code>{{ form.get_pto }}</code>.</p> <p>Sorry if this is not what you are looking for; let me know clearly and I will try to answer if I can.</p>
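<p>If you'd rather have <code>pto</code> read like an attribute of <code>Permission</code>, a <code>@property</code> that follows the ForeignKey is a common sketch as well:</p> <pre><code>class Permission(models.Model):
    # ... fields as above ...

    @property
    def pto(self):
        return self.user.pto  # follows the ForeignKey to Employee
</code></pre>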
python|django|django-models|django-rest-framework|django-views
0
1,904,961
63,995,418
Save all python variables in file via shelve module raising KeyError
<p>I followed this topic <a href="https://stackoverflow.com/questions/2960864/how-to-save-all-the-variables-in-the-current-python-session">How to save all the variables in the current python session?</a> to save all my python variables in a file.</p> <p>I did the following code:</p> <pre><code>import shelve def saveWorkspaceVariables(pathSavedVariables): # This functions saves all the variables in a file. my_shelf = shelve.open(pathSavedVariables,'n') # 'n' for new for key in dir(): try: my_shelf[key] = globals()[key] except TypeError: # # __builtins__, my_shelf, and imported modules can not be shelved. # print('ERROR shelving: {0}'.format(key)) my_shelf.close() T=&quot;test&quot; saveWorkspaceVariables(&quot;file.out&quot;) </code></pre> <p>However, it raises: <code>KeyError: 'my_shelf'</code>.</p> <p>Why so? How to solve this issue?</p>
<p>May not be the answer you're looking for, but depending on what IDE you're using you may be able to save your session from there. I know Spyder has this functionality.</p> <p>(As for the <code>KeyError</code> itself: <code>dir()</code> called with no arguments inside a function lists the function's <em>local</em> names, such as <code>my_shelf</code>, and those are not present in <code>globals()</code>, so the lookup fails.)</p>
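<p>If you do want the shelve approach from the question to work, a sketch of a fix is to pass the module-level namespace in explicitly instead of calling <code>dir()</code> inside the function:</p> <pre><code>import shelve

def save_workspace_variables(path, variables):
    # variables: a dict of name -&gt; value, e.g. built from globals() at module level
    with shelve.open(path, 'n') as shelf:
        for key, value in variables.items():
            try:
                shelf[key] = value
            except Exception:
                # modules, open files, etc. cannot be pickled
                print('ERROR shelving: {0}'.format(key))

T = &quot;test&quot;
save_workspace_variables(&quot;file.out&quot;,
                         {k: v for k, v in globals().items() if not k.startswith('_')})
</code></pre>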
python|save
0
1,904,962
42,877,662
How to assign unicode json string
<p>I'm work with lxml and try to put the parsed data to json string. But my data is unicode string, and it converts automatically.</p> <p>Here is my code:</p> <pre><code>from lxml import html,etree import pprint import requests url="http://thuvienphapluat.vn" page = requests.get(url) tree=html.fromstring(page.content) vbplm=tree.xpath('//div[@id="VBPLMOI"]//div[@class="left-col"]') rlst={} # print etree.tostring(tree.find('./a'),pretty_print=True) import re for vb in vbplm: id = re.sub(r"\n*\s",'',vb.xpath('.//*[@class="number"]/text()')[0]) rlst[id]={} tmp=vb.xpath('.//a') for tpm_part in tmp: rlst[id][ (tpm_part.xpath('.//text()'))[0].encode(encoding='utf-8') ]=((tpm_part.get("href"))) print (tpm_part.xpath('.//text()'))[0].encode(encoding='utf-8') print "&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;" break break pprint.pprint(rlst) </code></pre> <p>Here is my result:</p> <pre><code>Văn bản hợp nhất 02/VBHN-BGDĐT năm 2017 hướng dẫn Quyết định 152/2007/QĐ-TTg về học bổng chính sách đối với học sinh, sinh viên học tại cơ sở giáo dục thuộc hệ thống giáo dục quốc dân do Bộ Giáo dục và Đào tạo ban hành &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt; {'1': {'V\xc4\x83n b\xe1\xba\xa3n h\xe1\xbb\xa3p nh\xe1\xba\xa5t 02/VBHN-BGD\xc4\x90T n\xc4\x83m 2017 h\xc6\xb0\xe1\xbb\x9bng d\xe1\xba\xabn Quy\xe1\xba\xbft \xc4\x91\xe1\xbb\x8bnh 152/2007/Q\xc4\x90-TTg v\xe1\xbb\x81 h\xe1\xbb\x8dc b\xe1\xbb\x95ng ch\xc3\xadnh s\xc3\xa1ch \xc4\x91\xe1\xbb\x91i v\xe1\xbb\x9bi h\xe1\xbb\x8dc sinh, sinh vi\xc3\xaan h\xe1\xbb\x8dc t\xe1\xba\xa1i c\xc6\xa1 s\xe1\xbb\x9f gi\xc3\xa1o d\xe1\xbb\xa5c thu\xe1\xbb\x99c h\xe1\xbb\x87 th\xe1\xbb\x91ng gi\xc3\xa1o d\xe1\xbb\xa5c qu\xe1\xbb\x91c d\xc3\xa2n do B\xe1\xbb\x99 Gi\xc3\xa1o d\xe1\xbb\xa5c v\xc3\xa0 \xc4\x90\xc3\xa0o t\xe1\xba\xa1o ban h\xc3\xa0nh': 'http://thuvienphapluat.vn/van-ban/Giao-duc/Van-ban-hop-nhat-02-VBHN-BGDDT-huong-dan-152-2007-QD-TTg-hoc-bong-chinh-sach-hoc-sinh-sinh-vien-342726.aspx'}} </code></pre> <p>It's not save as format "Văn bản hợp nhất 02/VBHN-BGDĐT năm 2017 hướng dẫn Quyết định 152/2007/QĐ-TTg về học bổng chính sách đối với học sinh, sinh viên học tại cơ sở giáo dục thuộc hệ thống giáo dục quốc dân do Bộ Giáo dục và Đào tạo ban hành".</p> <p>Please help me create this unicode json string.</p> <p>Thanks</p>
<p>You just have a Python dict. You need to use the <code>json</code> module to produce a JSON string. By default <code>json.dumps</code> escapes non-ASCII characters as <code>\uXXXX</code> sequences; pass <code>ensure_ascii=False</code> to keep the Vietnamese characters readable.</p> <pre><code>import json

print(json.dumps(rlst, ensure_ascii=False))
</code></pre>
python|json|unicode|lxml|python-unicode
0
1,904,963
66,404,582
Ray object store running out of memory using out of core. How can I configure an external object store like s3 bucket?
<pre><code>import ray
import numpy as np

ray.init()

@ray.remote
def f():
    return np.zeros(10000000)

results = []
for i in range(100):
    print(i)
    results += ray.get([f.remote() for _ in range(50)])
</code></pre> <p>Normally, when the object store fills up, it begins evicting objects that are not in use (in a least-recently used fashion). However, because all of the objects are numpy arrays that are being held in the results list, they are all still in use, and the memory that those numpy arrays live in is actually in the object store, so they are taking up space in the object store. The object store can't evict them until those objects go out of scope.</p> <p>Question: How can I specify an external object store like redis without exceeding memory on a single machine? I don't want to use /dev/shm or /tmp as the object store, as only limited memory is available and it quickly fills up.</p>
<p>As of ray 1.2.0, object spilling to support out-of-core data processing is available. From 1.3+ (which will be released in 3 weeks), this feature will be turned on by default.</p> <p><a href="https://docs.ray.io/en/latest/ray-core/objects/object-spilling.html" rel="nofollow noreferrer">https://docs.ray.io/en/latest/ray-core/objects/object-spilling.html</a></p> <p>But your example won't work with this feature. Let me explain why here.</p> <p>There are two things you need to know.</p> <ol> <li>When you call a ray task (f.remote) or ray.put, it returns an object reference. Try</li> </ol> <pre class="lang-py prettyprint-override"><code>ref = f.remote()
print(ref)
</code></pre> <ol start="2"> <li>When you run <code>ray.get</code> on this reference, the python variable accesses the memory directly (in Ray, it will be in shared memory, which is managed by a distributed object store of ray called the plasma store, if your object size is &gt;= 100KB). So,</li> </ol> <pre class="lang-py prettyprint-override"><code>obj = ray.get(ref)
# Now, obj is pointing to the shared memory directly.
</code></pre> <p>Currently, the object spilling feature supports disk spilling for case 1, but not for case 2 (case 2 is much trickier to support, as you can imagine).</p> <p>So there are 2 solutions here:</p> <ol> <li>Use a file directory for your plasma store. For example, start ray with</li> </ol> <pre class="lang-py prettyprint-override"><code>ray.init(_plasma_directory=&quot;/tmp&quot;)
</code></pre> <p>This will allow you to use the <code>/tmp</code> folder as a plasma store (meaning ray objects are stored in the tmp file system). Note you can possibly see performance degradation when you use this option.</p> <ol start="2"> <li>Use object spilling with backpressure. Instead of getting all the ray objects using <code>ray.get</code>, use <code>ray.wait</code>.</li> </ol> <pre class="lang-py prettyprint-override"><code>import ray
import numpy as np
import json  # needed for json.dumps below

# Note: You don't need to specify this if you use the latest master.
ray.init(
    _system_config={
        &quot;automatic_object_spilling_enabled&quot;: True,
        &quot;object_spilling_config&quot;: json.dumps(
            {&quot;type&quot;: &quot;filesystem&quot;, &quot;params&quot;: {&quot;directory_path&quot;: &quot;/tmp/spill&quot;}},
        )
    },
)

@ray.remote
def f():
    return np.zeros(10000000)

result_refs = []
for i in range(100):
    print(i)
    result_refs += [f.remote() for _ in range(50)]

while result_refs:
    [ready], result_refs = ray.wait(result_refs)
    result = ray.get(ready)
</code></pre>
python|ray|modin
2
1,904,964
72,139,966
count function in python list
<p><strong>Hello comrades, I want to take a string from the input and convert it to a list of characters, and then show the user the number of repetitions of each character, but it gives an error.</strong></p> <blockquote> <p>my code:</p> </blockquote> <pre><code>list = list(input(&quot;plase enter keyword&quot;))

for item in list:
    print(f&quot;value({item})&quot;+list.count(item))
</code></pre> <blockquote> <p>my error</p> </blockquote> <pre><code>TypeError                                 Traceback (most recent call last)
c:\Users\emanull\Desktop\test py\main.py in &lt;cell line: 3&gt;()
      2 list = list(input(&quot;plase enter keyword&quot;))
      4 for item in list:
----&gt; 5     print(f&quot;value({item})&quot;+list.count(item))

TypeError: can only concatenate str (not &quot;int&quot;) to str
</code></pre>
<p>Firstly, overshadowing the <code>list</code> built-in is a bad idea; secondly, you need to convert the number into a <code>str</code> if you want to concatenate it with another <code>str</code>. After applying these changes:</p> <pre><code>lst = list(input(&quot;plase enter keyword&quot;))

for item in lst:
    print(f&quot;value({item})&quot;+str(lst.count(item)))
</code></pre> <p>But be warned that it will print more than once for repeated items.</p>
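<p>If you only want each character reported once, <code>collections.Counter</code> does the counting in a single pass:</p> <pre><code>from collections import Counter

word = input(&quot;please enter keyword&quot;)
for ch, n in Counter(word).items():
    print(f&quot;value({ch}) {n}&quot;)
</code></pre>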
python|compiler-errors
0
1,904,965
65,642,745
ValueError: Data cardinality is ambiguous: x sizes: 10 y sizes: 1 Please provide data which shares the same first dimension
<p>I'm trying to create a Keras model. Here is my code:</p> <pre><code>init_data = np.array([1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0])
init_data = np.array(init_data,dtype=&quot;float&quot;).reshape(-1,1)
result_data = np.array([11.0])
result_data = np.array(result_data,dtype=&quot;float&quot;).reshape(-1,1)

stock_model = Sequential()
stock_model.add(LSTM(10, input_shape=(10,1), return_sequences=True))
stock_model.add(LSTM(5, activation=&quot;relu&quot;))
return_sequences = True
stock_model.add(Dense(1))
sgd = SGD(lr=0.01)
stock_model.summary()
stock_model.compile(loss=&quot;mean_squared_error&quot;, optimizer=sgd, metrics=[tf.keras.metrics.mse])
stock_model.fit(init_data, result_data, epochs=100, verbose=1)
</code></pre> <p>When I run it I get the following error:</p> <pre><code>ValueError: Data cardinality is ambiguous:
  x sizes: 10
  y sizes: 1
Please provide data which shares the same first dimension.
</code></pre> <p>I've tried a lot, but unfortunately haven't solved the problem. I've read the other questions referencing the same error, but I'm not really understanding what I need to change.</p>
<p>Add batch dimension:</p> <pre><code>init_data = np.array([1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0]) init_data = np.array(init_data,dtype=&quot;float&quot;).reshape(1,-1,1) # &lt;= add dimension here result_data = np.array([11.0]) result_data = np.array(result_data,dtype=&quot;float&quot;).reshape(1,-1,1) # &lt;= add dimension here </code></pre> <p>EDIT:</p> <p>For 2-dim data:</p> <pre><code>init_data = np.array([[1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0],[1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0]]) init_data = init_data[..., tf.newaxis] </code></pre>
python|tensorflow|keras
0
1,904,966
65,815,820
How to restore matrix from its CSR representation?
<p>I don't get the algorithm behind restoring a matrix from its csr representation. For example i've got these 3 arrays:</p> <pre class="lang-py prettyprint-override"><code>data = [1, 2, 3, 4, 1, 11] ind = [0, 1, 3, 2, 1, 3] indptr = [0, 3, 4, 6] </code></pre> <p>the matrix we are representing:</p> <pre class="lang-py prettyprint-override"><code>[[1, 2, 0, 3], [0, 0, 4, 0], [0, 1, 0, 11]] </code></pre> <p>what do i have to do to restore second matrix from these 3 arrays without any external libraries?</p>
<p><code>indptr</code> is <em>saying</em> that</p> <ul> <li>the first row is <em>comprised of</em> items 0,1,2 of <code>data</code> and <code>ind</code> - slice(<strong>0</strong>,<strong>3</strong>)</li> <li>the second row is <em>comprised of</em> item 3 of <code>data</code> and <code>ind</code> - slice(<strong>3</strong>,<strong>4</strong>)</li> <li>the third row is <em>comprised of</em> items 4,5 of <code>data</code> and <code>ind</code> - slice(<strong>4</strong>,<strong>6</strong>)</li> </ul> <p>This <em>csr</em> is only telling you where non-zero values are - <code>ind</code> is the index of a non-zero value and <code>data</code> is the value at that index.</p> <p>The first row has</p> <ul> <li>non-zero data at indices 0,1,3 - <code>ind[slice(0,3)]</code></li> <li>the data at those indices is 1,2,3 - <code>data[slice(0,3)]</code></li> <li>notice index 2 is missing - it must be zero</li> </ul> <p>There is no indication of the size/dimensions of the matrix except that there are only three rows with non-zero data (gleaned from <code>indptr</code>) and that the largest index for any row is three so there must be four columns.</p>
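<p>Putting that together, a small pure-Python reconstruction; note the number of columns has to be assumed (here <code>max(ind) + 1</code>), since CSR itself doesn't store it:</p> <pre><code>data = [1, 2, 3, 4, 1, 11]
ind = [0, 1, 3, 2, 1, 3]
indptr = [0, 3, 4, 6]

n_rows = len(indptr) - 1
n_cols = max(ind) + 1
matrix = [[0] * n_cols for _ in range(n_rows)]

for row in range(n_rows):
    for k in range(indptr[row], indptr[row + 1]):
        matrix[row][ind[k]] = data[k]

print(matrix)  # [[1, 2, 0, 3], [0, 0, 4, 0], [0, 1, 0, 11]]
</code></pre>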
python|math|matrix|linear-algebra
1
1,904,967
50,901,094
VTK rendering 2D mesh in python
<p>so i'm trying to render a 2D mesh using vtk (in python). I have a list of tuples containing all the points and also a list of tuples containing the points of each cell. Just to experiment, I tried to create a polydata object of a square with 4 elements and render it, but i ended up with this: </p> <p><a href="https://i.stack.imgur.com/jFs6U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jFs6U.png" alt="square"></a></p> <p>I would like it to show the lines connecting the nodes (like a wireframe) instead of solid square.. This is the code to produce the image above:</p> <pre><code>def main2(): #Array of vectors containing the coordinates of each point nodes = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [2, 1, 0], [2, 2, 0], [1, 2, 0], [0, 2, 0], [0, 1, 0], [1, 1, 0]]) #Array of tuples containing the nodes correspondent of each element elements = np.array([(0, 1, 8, 7), (7, 8, 5, 6), (1, 2, 3, 8), (8, 3, 4, 5)]) #Make the building blocks of polyData attributes Mesh = vtk.vtkPolyData() Points = vtk.vtkPoints() Cells = vtk.vtkCellArray() #Load the point and cell's attributes for i in range(len(nodes)): Points.InsertPoint(i, nodes[i]) for i in range(len(elements)): Cells.InsertNextCell(mkVtkIdList(elements[i])) #Assign pieces to vtkPolyData Mesh.SetPoints(Points) Mesh.SetPolys(Cells) #Mapping the whole thing MeshMapper = vtk.vtkPolyDataMapper() if vtk.VTK_MAJOR_VERSION &lt;= 5: MeshMapper.SetInput(Mesh) else: MeshMapper.SetInputData(Mesh) #Create an actor MeshActor = vtk.vtkActor() MeshActor.SetMapper(MeshMapper) #Rendering Stuff camera = vtk.vtkCamera() camera.SetPosition(1,1,1) camera.SetFocalPoint(0,0,0) renderer = vtk.vtkRenderer() renWin = vtk.vtkRenderWindow() renWin.AddRenderer(renderer) iren = vtk.vtkRenderWindowInteractor() iren.SetRenderWindow(renWin) renderer.AddActor(MeshActor) renderer.SetActiveCamera(camera) renderer.ResetCamera() renderer.SetBackground(1,1,1) renWin.SetSize(300,300) #Interact with data renWin.Render() iren.Start() main2() </code></pre> <p>I would also like to know if it's possible to have a gridline as the background of the render window, instead of a black color, just like this: </p> <p><a href="https://i.stack.imgur.com/Jczlx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jczlx.png" alt="gridline"></a></p> <p>Thanks in advance!</p>
<p>You can use MeshActor.GetProperty().SetRepresentationToWireframe() (<a href="https://www.vtk.org/doc/nightly/html/classvtkProperty.html#a2a4bdf2f46dc499ead4011024eddde5c" rel="nofollow noreferrer">https://www.vtk.org/doc/nightly/html/classvtkProperty.html#a2a4bdf2f46dc499ead4011024eddde5c</a>) to render the actor as wireframe, or MeshActor.GetProperty().SetEdgeVisibility(True) to render it as solid with edges rendered as lines.</p> <p>Regarding the render window background, I don't know.</p>
python|mesh|vtk
1
1,904,968
50,812,980
What am I doing wrong? Django objects filter
<p>I am trying to show only posts from the last 30 days. What am I doing wrong? </p> <pre><code>@login_required
def dashboard(request):
    days = 30
    posts = Post.objects.filter(Post.publish &lt; timezone.now() - timedelta(days=days))
    #posts = Post.objects.all()
    return render(request, 'account/dashboard.html', {'section': 'dashboard', 'posts': posts})
</code></pre> <p>The error:</p> <pre><code>TypeError at /account/
unorderable types: DeferredAttribute() &lt; datetime.datetime()
</code></pre>
<p>Instead of the <code>&lt;</code> sign inside the filter method you should use the <code>__lt</code> lookup attached to the field name:</p> <pre><code>posts = Post.objects.filter(publish__lt=timezone.now() - timedelta(days=days))
</code></pre> <p>Note that <code>__lt</code> selects posts <em>older</em> than 30 days; if you actually want posts from the last 30 days, use <code>publish__gte=timezone.now() - timedelta(days=days)</code> instead.</p>
python|django
4
1,904,969
44,856,376
File handling in Python
<p>Im a python noob and I'm stuck on a problem. </p> <pre><code>filehandler = open("data.txt", "r") alist = filehandler.readlines() def insertionSort(alist): for line in alist: line = list(map(int, line.split())) print(line) for index in range(2, len(line)): currentvalue = line[index] position = index while position&gt;1 and line[position-1]&gt;currentvalue: line[position]=line[position-1] position = position-1 line[position]=currentvalue print(line) insertionSort(alist) for line in alist: print line </code></pre> <p>Output: </p> <pre><code>[4, 19, 2, 5, 11] [4, 2, 5, 11, 19] [8, 1, 2, 3, 4, 5, 6, 1, 2] [8, 1, 1, 2, 2, 3, 4, 5, 6] 4 19 2 5 11 8 1 2 3 4 5 6 1 2 </code></pre> <p>I am supposed to sort lines of values from a file. The first value in the line represents the number of values to be sorted. I am supposed to display the values in the file in sorted order. </p> <p>The print calls in insertionSort are just for debugging purposes. </p> <p>The top four lines of output show that the insertion sort seems to be working. I can't figure out why when I print the lists after calling insertionSort the values are not sorted. </p> <p>I am new to Stack Overflow and Python so please let me know if this question is misplaced. </p>
<pre><code>for line in alist: line = list(map(int, line.split())) </code></pre> <p><code>line</code> starts out as eg <code>"4 19 2 5 11"</code>. You split it and convert to int, ie <code>[4, 19, 2, 5, 11]</code>.</p> <p>You then assign this new value to <code>list</code> - but <code>list</code> is a local variable, the new value never gets stored back into <code>alist</code>.</p> <p>Also, <code>list</code> is a terrible variable name because there is already a <code>list</code> data-type (and the variable name will keep you from being able to use the data-type).</p> <p>Let's reorganize your program:</p> <pre><code>def load_file(fname): with open(fname) as inf: # -&gt; list of list of int data = [[int(i) for i in line.split()] for line in inf] return data def insertion_sort(row): # `row` is a list of int # # your sorting code goes here # return row def save_file(fname, data): with open(fname, "w") as outf: # list of list of int -&gt; list of str lines = [" ".join(str(i) for i in row) for row in data] outf.write("\n".join(lines)) def main(): data = load_file("data.txt") data = [insertion_sort(row) for row in data] save_file("sorted_data.txt", data) if __name__ == "__main__": main() </code></pre> <p>Actually, with your data - where the first number in each row isn't actually data to sort - you would be better to do</p> <pre><code> data = [row[:1] + insertion_sort(row[1:]) for row in data] </code></pre> <p>so that the logic of <code>insertion_sort</code> is cleaner.</p>
python
0
1,904,970
64,694,163
How to convert this while loop to for loop
<ul> <li>Hello, I am fairly new to python and wondering how to convert this while loop to a for loop. I am not sure how to keep looping the inputs so it keeps asking Enter a line until GO and then say Next until STOP then display the count of lines.</li> </ul> <pre><code>def keyboard(): &quot;&quot;&quot;Repeatedly reads lines from standard input until a line is read that begins with the 2 characters &quot;GO&quot;. Then a prompt Next: , until a line is read that begins with the 4 characters &quot;STOP&quot;. The program then prints a count of the number of lines between the GO line and the STOP line (but not including them in the count) and exits.&quot;&quot;&quot; line = input(&quot;Enter a line: &quot;) flag = False count = 0 while(line[:4] != &quot;STOP&quot;): if (line[:2] == &quot;GO&quot;): flag = True if flag: line = input(&quot;Next: &quot;) count += 1 else: line = input(&quot;Enter a line: &quot;) print(f&quot;Counted {count-1} lines&quot;) keyboard() </code></pre> <p>With these inputs:</p> <pre><code>ignore me and me GO GO GO! I'm an important line So am I Me too! STOP now please I shouldn't even be read let alone printed Nor me </code></pre> <p>Should display/result in:</p> <pre><code>Enter a line: ignore me Enter a line: and me Enter a line: GO GO GO! Next: I'm an important line Next: So am I Next: Me too! Next: STOP now please Counted 3 lines </code></pre>
<p>You just need an infinite <code>for</code> loop, which is what <code>while True</code> essentially is. This answer has it: <a href="https://stackoverflow.com/questions/34253996/infinite-for-loops-possible-in-python">Infinite for loops possible in Python?</a></p> <pre class="lang-py prettyprint-override"><code>#int will never get to 1
for _ in iter(int, 1):
    pass
</code></pre> <p>So just replace your while loop with the above and add a break condition.</p> <pre class="lang-py prettyprint-override"><code>def keyboard():
    &quot;&quot;&quot;Repeatedly reads lines from standard input until a line is read that
    begins with the 2 characters &quot;GO&quot;. Then a prompt Next: , until a line is read
    that begins with the 4 characters &quot;STOP&quot;. The program then prints a count of the
    number of lines between the GO line and the STOP line (but not including them
    in the count) and exits.&quot;&quot;&quot;
    line = input(&quot;Enter a line: &quot;)
    flag = False
    count = 0
    for _ in iter(int, 1):
        if line[:4] == &quot;STOP&quot;:
            break
        if (line[:2] == &quot;GO&quot;):
            flag = True
        if flag:
            line = input(&quot;Next: &quot;)
            count += 1
        else:
            line = input(&quot;Enter a line: &quot;)
    print(f&quot;Counted {count-1} lines&quot;)

keyboard()
</code></pre>
python
1
1,904,971
61,248,565
Causal Inference in observational data
<p>I am using the python package <code>DoWhy</code> to see if I have a causal relationship between tenure and churn, based on <a href="https://medium.com/@akelleh/introducing-the-do-sampler-for-causal-inference-a3296ea9e78d" rel="nofollow noreferrer">this site</a>.</p> <pre><code># TREATMENT = TENURE
causal_df = df.causal.do('tenure',
                         method = 'weighting',
                         variable_types = {'Churn': 'd', 'tenure': 'c', 'nr_login': 'c', 'avg_movies': 'c'},
                         outcome='Churn',
                         common_causes=['nr_login', 'avg_movies'])
</code></pre> <p>I have a number of other variables as well.</p> <ol> <li><p>Is this the right way to do the analysis?</p> </li> <li><p>What does common causes mean, and how to choose them?</p> </li> <li><p>How can I interpret the results, and with what certainty?</p> </li> </ol>
<p>Let's take your questions one by one.</p> <h2>1. Is this the right way?</h2> <p>Yes, your code snippet is correct, assuming that you want to estimate the causal effect of <code>tenure</code> and <code>Churn</code>, by conditioning on <code>nr_login</code> and <code>avg_movies</code>.</p> <p>However this method will output a dataframe containing the <em>interventional</em> values of the outcome <code>Churn</code>. That is, the values of the churn variable as if tenure had been changed independent of the specified common causes. If the treatment <code>tenure</code> was discrete, you could have done a simple plot to visualize the effect of different values of <code>tenure</code>. Something like: </p> <pre><code>causal_df = df.causal.do('tenure',
                         method = 'weighting',
                         variable_types = {'Churn': 'd', 'tenure': 'd', 'nr_login': 'c', 'avg_movies': 'c'},
                         outcome='Churn',
                         common_causes=['nr_login', 'avg_movies']).groupby('tenure').mean()
</code></pre> <p>However, to compute the average causal effect, a more direct procedure is to run the <code>do</code> method twice for the two values of treatment over which the effect is to be computed (the typical value is comparing treatment=1 versus 0). The resultant code will look like below as described in the <a href="https://microsoft.github.io/dowhy/example_notebooks/dowhy_causal_api.html" rel="noreferrer">example notebook</a> (also see the <a href="https://microsoft.github.io/dowhy/dowhy.api.html#dowhy.api.causal_data_frame.CausalAccessor.do" rel="noreferrer">docs</a> for the <code>do</code> method): </p> <pre><code>df_treatment1 = df.causal.do({'tenure': 1},
                             method = 'weighting',
                             variable_types = {'Churn': 'd', 'tenure': 'd', 'nr_login': 'c', 'avg_movies': 'c'},
                             outcome='Churn',
                             common_causes=['nr_login', 'avg_movies'])
df_treatment0 = df.causal.do({'tenure': 0},
                             method = 'weighting',
                             variable_types = {'Churn': 'd', 'tenure': 'd', 'nr_login': 'c', 'avg_movies': 'c'},
                             outcome='Churn',
                             common_causes=['nr_login', 'avg_movies'])
causal_effect = (df_treatment1['churn'] - df_treatment0['churn']).mean()
</code></pre> <p>There's also an equivalent way of achieving the same result using the main DoWhy API.</p> <pre><code>model = CausalModel(
        data=df,
        treatment='tenure',
        outcome='churn',
        common_causes=['nr_login', 'avg_movies'])
identified_estimand = model.identify_effect()
model.estimate_effect(identified_estimand,
                      method_name="backdoor.propensity_score_weighting")
</code></pre> <p>That said, based on your dataset, there may be other estimation methods that are better suited. For example, the "weighting" method is expected to have high variance if one of the treatment values is unlikely given the possible values of common causes. Also, if you have limited data, this method may not work well for continuous treatments since it is a non-parametric method that will have high variance in general. In those cases, you can use other estimator methods like double-ML that use parametric assumptions to reduce variance in the estimation (at the cost of possible bias).
You can call double-ML or other advanced EconML estimators like this (full example in <a href="https://microsoft.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html" rel="noreferrer">this notebook</a>):</p> <pre><code>model.estimate_effect(identified_estimand,
                      method_name="backdoor.econml.dml.DMLCateEstimator",
                      control_value = 0,
                      treatment_value = 1,
                      confidence_intervals=False,
                      method_params={"init_params":
                                         {'model_y': GradientBoostingRegressor(),
                                          'model_t': GradientBoostingRegressor(),
                                          "model_final": LassoCV(),
                                          'featurizer': PolynomialFeatures(degree=1, include_bias=True)
                                         },
                                     "fit_params": {}
                                    })
</code></pre> <h2>2. How to choose common_causes?</h2> <p>Common causes are the variables that cause both treatment and outcome. Therefore, a correlation between treatment and outcome can be due to the causal effect of treatment, or simply due to the effect of common causes (classic example is that ice-cream sales are correlated with swimming pool memberships, but one does not cause the other; hot weather is the common cause here). The goal of causal inference is to somehow disentangle the effect of common causes and only return the effect of treatment. Formally, causal effect is the effect of treatment on outcome when all common causes are held constant. (For more, check out this <a href="https://causalinference.gitlab.io/kdd-tutorial/" rel="noreferrer">tutorial</a> on causal inference.)</p> <p>So, in your example, you'd want to include all variables that both lead to a customer having high tenure and reduce their chances of churn (e.g., their monthly usage, trust in the platform, etc.). These are the common causes or confounders that need to be included in the model.</p> <h2>3. How to interpret the results and their uncertainty?</h2> <p>As mentioned above, the standard interpretation of a causal effect is the change in outcome (<code>churn</code>) when the treatment is changed by 1 unit. For a continuous variable though, this is simply a convention: you can define the causal effect as the change in outcome over any two values of the treatment.</p> <p>For estimating uncertainty, you can estimate confidence intervals and/or do refutation tests. Confidence intervals will tell you about the statistical uncertainty (roughly, how much will your estimate change if you are given a fresh i.i.d. sample of the data?). Refutation tests will quantify the uncertainty due to causal assumptions (if you missed specifying an important common cause, how much would the estimate change?). </p> <p>Here's an example. You can find more on refutation methods <a href="https://microsoft.github.io/dowhy/dowhy.causal_refuters.html?highlight=dummy_outcome" rel="noreferrer">here</a>.</p> <pre><code># Confidence intervals
est = model.estimate_effect(identified_estimand,
                            method_name="backdoor.propensity_score_weighting",
                            confidence_intervals=True)

# Refutation test by adding a random common cause
model.refute_estimate(identified_estimand, est,
                      method_name="random_common_cause")
</code></pre>
python|inference|causality
6
1,904,972
61,523,713
How to run a Tensorflow-Lite inference in (Android Studio) NDK (C / C++ API)?
<h1>Info</h1> <ul> <li>I built a Tensorflow (TF) model from Keras and converted it to Tensorflow-Lite (TFL)</li> <li>I built an Android app in Android Studio and used the Java API to run the TFL model</li> <li>In the Java app, I used the TFL Support Library (see <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/support/java/README.md" rel="noreferrer">here</a>), and the TensorFlow Lite AAR from JCenter by including <code>implementation 'org.tensorflow:tensorflow-lite:+'</code> under my <code>build.gradle</code> dependencies</li> </ul> <p>Inference times are not so great, so now I want to use TFL in Android's NDK.</p> <p>So I built an exact copy of the Java app in Android Studio's NDK, and now I'm trying to include the TFL libs in the project. I followed <a href="https://www.tensorflow.org/lite/guide/android#build_tensorflow_lite_locally" rel="noreferrer">TensorFlow-Lite's Android guide</a> and built the TFL library locally (and got an AAR file), and included the library in my NDK project in Android Studio.</p> <p>Now I'm trying to use the TFL library in my C++ file, by trying to <code>#include</code> it in code, but I get an error message: <code>cannot find tensorflow</code> (or any other name I'm trying to use, according to the name I give it in my <code>CMakeLists.txt</code> file).</p> <h1>Files</h1> <p>App <em>build.gradle</em>:</p> <pre><code>apply plugin: 'com.android.application'

android {
    compileSdkVersion 29
    buildToolsVersion "29.0.3"

    defaultConfig {
        applicationId "com.ndk.tflite"
        minSdkVersion 28
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"

        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"

        externalNativeBuild {
            cmake {
                cppFlags ""
            }
        }

        ndk {
            abiFilters 'arm64-v8a'
        }
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }

    // tf lite
    aaptOptions {
        noCompress "tflite"
    }

    externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
            version "3.10.2"
        }
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'

    // tflite build
    compile(name:'tensorflow-lite', ext:'aar')
}
</code></pre> <p>Project <em>build.gradle</em>:</p> <pre><code>buildscript {
    repositories {
        google()
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.6.2'
    }
}

allprojects {
    repositories {
        google()
        jcenter()

        // native tflite
        flatDir {
            dirs 'libs'
        }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
</code></pre> <p><em>CMakeLists.txt</em>:</p> <pre><code>cmake_minimum_required(VERSION 3.4.1)

add_library( # Sets the name of the library.
        native-lib

        # Sets the library as a shared library.
        SHARED

        # Provides a relative path to your source file(s).
        native-lib.cpp )

add_library( # Sets the name of the library.
        tensorflow-lite

        # Sets the library as a shared library.
        SHARED

        # Provides a relative path to your source file(s).
        native-lib.cpp )

find_library( # Sets the name of the path variable.
        log-lib

        # Specifies the name of the NDK library that
        # you want CMake to locate.
        log )

target_link_libraries( # Specifies the target library.
        native-lib
        tensorflow-lite

        # Links the target library to the log library
        # included in the NDK.
        ${log-lib} )
</code></pre> <p><em>native-lib.cpp</em>:</p> <pre><code>#include &lt;jni.h&gt;
#include &lt;string&gt;

#include "tensorflow"

extern "C" JNIEXPORT jstring JNICALL
Java_com_xvu_f32c_1jni_MainActivity_stringFromJNI(
        JNIEnv* env,
        jobject /* this */) {
    std::string hello = "Hello from C++";
    return env-&gt;NewStringUTF(hello.c_str());
}

class FlatBufferModel {
  // Build a model based on a file. Return a nullptr in case of failure.
  static std::unique_ptr&lt;FlatBufferModel&gt; BuildFromFile(
      const char* filename,
      ErrorReporter* error_reporter);

  // Build a model based on a pre-loaded flatbuffer. The caller retains
  // ownership of the buffer and should keep it alive until the returned object
  // is destroyed. Return a nullptr in case of failure.
  static std::unique_ptr&lt;FlatBufferModel&gt; BuildFromBuffer(
      const char* buffer,
      size_t buffer_size,
      ErrorReporter* error_reporter);
};
</code></pre> <h1>Progress</h1> <p>I also tried to follow these:</p> <ul> <li><a href="https://stackoverflow.com/questions/49834875/problems-with-using-tensorflow-lite-c-api-in-android-studio-project/50332808#50332808">Problems with using tensorflow lite C++ API in Android Studio Project</a></li> <li><a href="https://stackoverflow.com/questions/60925493/android-c-ndk-some-shared-libraries-refuses-to-link-in-runtime">Android C++ NDK : some shared libraries refuses to link in runtime</a></li> <li><a href="https://stackoverflow.com/questions/55125977/how-to-build-tensorflow-lite-as-a-static-library-and-link-to-it-from-a-separate">How to build TensorFlow Lite as a static library and link to it from a separate (CMake) project?</a></li> <li><a href="https://stackoverflow.com/questions/50150701/how-to-set-input-of-tensorflow-lite-c">how to set input of Tensorflow Lite C++</a></li> <li><a href="https://stackoverflow.com/questions/57151987/how-can-i-build-only-tensorflow-lite-and-not-all-tensorflow-from-source">How can I build only TensorFlow lite and not all TensorFlow from source?</a></li> </ul> <p>but in my case I used Bazel to build the TFL libs.</p> <p>Trying to build the classification demo of (<a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/examples/label_image" rel="noreferrer">label_image</a>), I managed to build it and <code>adb push</code> to my device, but when trying to run I got the following error:</p> <pre><code>ERROR: Could not open './mobilenet_quant_v1_224.tflite'.
Failed to mmap model ./mobilenet_quant_v1_224.tflite
</code></pre> <ul> <li>I followed <a href="https://github.com/zimenglyu/zimenglyu/blob/master/_posts/2018-11-27-tflite-android-ndk-eng.markdown" rel="noreferrer">zimenglyu's post</a>: trying to set <code>android_sdk_repository</code> / <code>android_ndk_repository</code> in <code>WORKSPACE</code> got me an error: <code>WORKSPACE:149:1: Cannot redefine repository after any load statement in the WORKSPACE file (for repository 'androidsdk')</code>, and locating these statements at different places resulted in the same error.</li> <li>I deleted these changes to <code>WORKSPACE</code> and continued with zimenglyu's post: I've compiled <code>libtensorflowLite.so</code>, and edited <code>CMakeLists.txt</code> so that the <code>libtensorflowLite.so</code> file was referenced, but left the <code>FlatBuffer</code> part out. The Android project compiled successfully, but there was no evident change; I still couldn't include any TFLite libraries.</li> </ul> <p>Trying to compile TFL, I added a <code>cc_binary</code> to <code>tensorflow/tensorflow/lite/BUILD</code> (following the <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/examples/label_image" rel="noreferrer">label_image example</a>):</p> <pre><code>cc_binary(
    name = "native-lib",
    srcs = [
        "native-lib.cpp",
    ],
    linkopts = tflite_experimental_runtime_linkopts() + select({
        "//tensorflow:android": [
            "-pie",
            "-lm",
        ],
        "//conditions:default": [],
    }),
    deps = [
        "//tensorflow/lite/c:common",
        "//tensorflow/lite:framework",
        "//tensorflow/lite:string_util",
        "//tensorflow/lite/delegates/nnapi:nnapi_delegate",
        "//tensorflow/lite/kernels:builtin_ops",
        "//tensorflow/lite/profiling:profiler",
        "//tensorflow/lite/tools/evaluation:utils",
    ] + select({
        "//tensorflow:android": [
            "//tensorflow/lite/delegates/gpu:delegate",
        ],
        "//tensorflow:android_arm64": [
            "//tensorflow/lite/delegates/gpu:delegate",
        ],
        "//conditions:default": [],
    }),
)
</code></pre> <p>and trying to build it for <code>x86_64</code> and <code>arm64-v8a</code>, I get an error: <code>cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'x86_64'</code>.</p> <p>Checking <code>external/local_config_cc/BUILD</code> (which provided the error) in line 47:</p> <pre><code>cc_toolchain_suite(
    name = "toolchain",
    toolchains = {
        "k8|compiler": ":cc-compiler-k8",
        "k8": ":cc-compiler-k8",
        "armeabi-v7a|compiler": ":cc-compiler-armeabi-v7a",
        "armeabi-v7a": ":cc-compiler-armeabi-v7a",
    },
)
</code></pre> <p>and these are the only 2 <code>cc_toolchain</code>s found. Searching the repository for "cc-compiler-" I only found "<strong>aarch64</strong>", which I assumed is for the 64-bit ARM, but nothing with "x86_64". There are "x64_windows", though - and I'm on Linux.</p> <p>Trying to build with aarch64 like so:</p> <pre><code>bazel build -c opt --fat_apk_cpu=aarch64 --cpu=aarch64 --host_crosstool_top=@bazel_tools//tools/cpp:toolchain //tensorflow/lite/java:tensorflow-lite
</code></pre> <p>results in an error:</p> <pre><code>ERROR: /.../external/local_config_cc/BUILD:47:1: in cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'aarch64'
</code></pre> <h3>Using the libraries in Android Studio:</h3> <p>I was able to build the library for <code>x86_64</code> architecture by changing the <code>soname</code> in build config and using full paths in <code>CMakeLists.txt</code>. This resulted in a <code>.so</code> shared library. Also - I was able to build the library for <code>arm64-v8a</code> using the TFLite Docker container, by adjusting the <code>aarch64_makefile.inc</code> file, but I did not change any build options, and let <code>build_aarch64_lib.sh</code> build whatever it builds. This resulted in a <code>.a</code> static library.</p> <p>So now I have two TFLite libs, but I'm still unable to use them (I can't <code>#include "..."</code> anything, for example).</p> <p>When trying to build the project, using only <code>x86_64</code> works fine, but trying to include the <code>arm64-v8a</code> library results in a ninja error: <code>'.../libtensorflow-lite.a', needed by '.../app/build/intermediates/cmake/debug/obj/armeabi-v7a/libnative-lib.so', missing and no known rule to make it</code>.</p> <h3>Different approach - build/compile source files with Gradle:</h3> <ol> <li>I created a Native C++ project in Android Studio</li> <li>I took the basic C/C++ source files and headers from Tensorflow's <code>lite</code> directory, and created a similar structure in <code>app/src/main/cpp</code>, in which I include the (A) tensorflow, (B) absl and (C) flatbuffers files</li> <li>I changed the <code>#include "tensorflow/...</code> lines in all of tensorflow's header files to relative paths so the compiler can find them.</li> <li>In the app's <code>build.gradle</code> I added a no-compression line for the <code>.tflite</code> file: <code>aaptOptions { noCompress "tflite" }</code></li> <li>I added an <code>assets</code> directory to the app</li> <li>In <code>native-lib.cpp</code> I added <a href="https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_c" rel="noreferrer">some example code from the TFLite website</a></li> <li>Tried to build the project with the source files included (build target is <code>arm64-v8a</code>).</li> </ol> <p>I get an error:</p> <pre><code>/path/to/Android/Sdk/ndk/20.0.5594570/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/c++/v1/memory:2339: error: undefined reference to 'tflite::impl::Interpreter::~Interpreter()'
</code></pre> <p>in <code>&lt;memory&gt;</code>, line 2339 is the <code>"delete __ptr;"</code> line:</p> <pre><code>_LIBCPP_INLINE_VISIBILITY void operator()(_Tp* __ptr) const _NOEXCEPT {
    static_assert(sizeof(_Tp) &gt; 0,
                  "default_delete can not delete incomplete type");
    static_assert(!is_void&lt;_Tp&gt;::value,
                  "default_delete can not delete incomplete type");
    delete __ptr;
}
</code></pre> <h1>Question</h1> <p>How can I include the TFLite libraries in Android Studio, so I can run a TFL inference from the NDK?</p> <p>Alternatively - how can I use gradle (currently with <strong>cmake</strong>) to build and compile the source files?</p>
<p>I use Native TFL with the C-API in the following way:</p> <h3>SETUP:</h3> <ol> <li>Download the latest version of the <a href="https://bintray.com/google/tensorflow/tensorflow-lite" rel="noreferrer">TensorFlow Lite AAR file</a></li> <li>Change the file type of the downloaded <code>.aar</code> file to <code>.zip</code> and unzip the file to get the shared library (<code>.so</code> file)</li> <li>Download all header files from the <code>c</code> directory in the <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/c" rel="noreferrer">TFL repository</a></li> <li>Create an Android C++ app in Android Studio</li> <li>Create a <code>jni</code> directory (<code>New</code> -&gt; <code>Folder</code> -&gt; <code>JNI Folder</code>) in <code>app/src/main</code> and also create architecture sub-directories in it (<code>arm64-v8a</code> or <code>x86_64</code> for example)</li> <li>Put all header files in the <code>jni</code> directory (next to the architecture directories), and put the shared library inside the corresponding architecture directory (or directories)</li> <li>Open the <code>CMakeLists.txt</code> file and include an <code>add_library</code> stanza for the TFL library, the path to the shared library in a <code>set_target_properties</code> stanza, and the headers in an <code>include_directories</code> stanza (see below, in the NOTES section)</li> <li>Sync Gradle</li> </ol> <h3>USAGE:</h3> <p>In <code>native-lib.cpp</code> include the headers, for example:</p> <pre><code>#include &quot;../jni/c_api.h&quot;
#include &quot;../jni/common.h&quot;
#include &quot;../jni/builtin_ops.h&quot;
</code></pre> <p>TFL functions can then be called directly. Note that <code>TfLiteInterpreterCreate</code> takes a second <code>TfLiteInterpreterOptions*</code> argument, which may be null:</p> <pre><code>TfLiteModel * model = TfLiteModelCreateFromFile(full_path);
TfLiteInterpreter * interpreter = TfLiteInterpreterCreate(model, /*optional_options=*/nullptr);
TfLiteInterpreterAllocateTensors(interpreter);
TfLiteTensor * input_tensor = TfLiteInterpreterGetInputTensor(interpreter, 0);
const TfLiteTensor * output_tensor = TfLiteInterpreterGetOutputTensor(interpreter, 0);
TfLiteStatus from_status = TfLiteTensorCopyFromBuffer(
    input_tensor,
    input_data,
    TfLiteTensorByteSize(input_tensor));
TfLiteStatus interpreter_invoke_status = TfLiteInterpreterInvoke(interpreter);
TfLiteStatus to_status = TfLiteTensorCopyToBuffer(
    output_tensor,
    output_data,
    TfLiteTensorByteSize(output_tensor));
</code></pre> <h3>NOTES:</h3> <ul> <li>In this setup SDK version 29 was used</li> <li>The <code>cmake</code> environment also included <code>cppFlags &quot;-frtti -fexceptions&quot;</code></li> </ul> <p><code>CMakeLists.txt</code> example:</p> <pre><code>set(JNI_DIR ${CMAKE_CURRENT_SOURCE_DIR}/../jni)
add_library(tflite-lib SHARED IMPORTED)
set_target_properties(tflite-lib
        PROPERTIES IMPORTED_LOCATION
        ${JNI_DIR}/${ANDROID_ABI}/libtfl.so)
include_directories( ${JNI_DIR} )
target_link_libraries(
        native-lib
        tflite-lib
        ...)
</code></pre>
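<p>One detail the usage snippet leaves out is teardown. The C-API objects are heap-allocated and must be freed manually; a minimal cleanup sketch, using only functions that exist in <code>c_api.h</code>, in reverse order of creation:</p> <pre><code>// Sketch: release the objects created above once inference is done.
TfLiteInterpreterDelete(interpreter);  // frees the interpreter and its tensors
TfLiteModelDelete(model);              // delete the model only after the interpreter
</code></pre>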
c++|android-studio|gradle|android-ndk|tensorflow-lite
6
1,904,973
61,592,545
Plotting Fourier Transform Of A Sinusoid In Python
<p>The following python program plots a sinusoid:</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np

# Canvas
plt.style.use("ggplot")

# Frequency, Oscillations &amp; Range
f = int(input("Enter frequency: "))
n_o = int(input("Enter number of oscillations: "))
t_max = n_o/f
t = np.linspace(0, t_max, 1000)

# Sine
y_sin = np.sin(2*np.pi*f*t)

# Setting subplots on separate axes
fig, axs = plt.subplots(2, 1, constrained_layout = True)

# Sine axis
axs[0].plot(t, y_sin, color = "firebrick", label = "sin({}Hz)".format(f))
axs[0].axhline(y = 0, color = "grey", linestyle = "dashed", label = "y = 0")
axs[0].legend(loc = "lower left", frameon = True, fancybox = True, shadow = True, facecolor = "white")

# Title
axs[0].set_title("Sine")
axs[0].set_xlabel("Time(s)")
axs[0].set_ylabel("Amplitude")

# Axis Limits
axs[0].axis([-0.05*t_max, t_max+0.05*t_max, -1.5, 1.5])

plt.show()
</code></pre> <p>How can I plot the Fourier transform of this signal in the second subplot? I have seen various examples but they only work with small frequencies, whereas I'm working with frequencies above 100 Hz. Thanks.</p>
<p>By correctly applying the <a href="https://numpy.org/doc/1.18/reference/routines.fft.html" rel="nofollow noreferrer">FFT</a> on your signal you should be just fine:</p> <pre><code># FFT
# number of samples
N = len(t)
# time step
dt = t[1]-t[0]
# max number of harmonic to display
H_max = 5

xf = np.linspace(0.0, 1.0/(2.0*dt), N//2)
yf = np.fft.fft(y_sin)

axs[1].plot(xf, (2/N)*np.abs(yf[:N//2]))
axs[1].set_xlim([0, H_max*f])
axs[1].set_xlabel('f (Hz)')
axs[1].set_ylabel('$||H_i||_2$')
</code></pre> <p>which gives for inputs <code>f=100</code> and <code>n_o=3</code>:</p> <p><a href="https://i.stack.imgur.com/dJy8d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dJy8d.png" alt="output"></a></p> <p>Hope this helps.</p>
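<p>A side note, not part of the original answer: because <code>t_max = n_o/f</code> contains a whole number of oscillations, the peak above is clean. If the time window ever stops being an exact multiple of the period, spectral leakage will smear the peak; applying a window before the FFT reduces it (a sketch, assuming the same <code>N</code> and <code>y_sin</code>):</p> <pre><code>yf = np.fft.fft(y_sin * np.hanning(N))  # Hann window tapers the edges of the record
</code></pre>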
python|numpy|matplotlib|graph|fft
2
1,904,974
61,384,362
Python Selenium Solution
<p>I would like a solution using the Python Selenium WebDriver, please.</p> <p>I am trying to refresh a page until dates become available in a calendar. I want to stop refreshing when the dates are available, and then choose any date automatically.</p>
<p>The easiest way to check if an element exists is to simply call <code>find_element</code> inside a <code>try/except</code>. For example: </p> <pre><code>from selenium.common.exceptions import NoSuchElementException

while True:
    try:
        driver.find_element_by_id("&lt;the id&gt;")
        break
    except NoSuchElementException:
        driver.refresh()
</code></pre>
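<p>To also pick a date automatically once the loop ends, keep a reference to the matched element and click it. A sketch; <code>td.available</code> is a hypothetical selector, so substitute whatever marks available dates in your calendar's markup:</p> <pre><code>from selenium.common.exceptions import NoSuchElementException

while True:
    try:
        # hypothetical selector: replace with the real one for your calendar
        date_cell = driver.find_element_by_css_selector("td.available")
        break
    except NoSuchElementException:
        driver.refresh()

date_cell.click()  # choose the first available date
</code></pre>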
python|selenium|datepicker
0
1,904,975
60,457,383
Data Deletion in File Using Python
<p>As a new programmer in Python, I thought to create a student Database Management System. But while deleting data from the file I got stuck. I thought to apply the steps below to the file to delete the record, but how shall I implement them? I have developed my code, but it's not working.</p> <p>The algorithm:</p> <p>STEP 1: Create an additional file, open the current file in reading mode and open the new file in writing mode</p> <p>STEP 2: Read and copy the data to the newly created file, except for the line we want to delete</p> <p>STEP 3: Close both files, then remove the old file and rename the newly created file to the deleted file's name</p> <p>But while implementing it I got stuck, because the file does not remain the same.</p> <p>Here is the code which I wrote:</p> <pre><code>def delete():
    rollno = int(input('\n Enter The Roll number : '))
    f = open('BCAstudents3.txt','r')
    f1 = open('temp.txt','a+')
    for line in f:
        fo = line.split()
        if fo:
            if fo[3] != rollno:
                f1.write(str(str(fo).replace('[','').replace(']','').replace("'","").replace(",","")))
    f.close()
    f1.close()
    os.remove('BCAstudents3.txt')
    os.rename('temp.txt','BCAstudents3.txt')
</code></pre> <p>The data from the original file looks like this:</p> <pre><code>Roll Number = 1 Name : Alex Section = C Optimisation Technique = 99 Maths III = 99 Operating System = 99 Software Engneering = 99 Computer Graphics = 99
{Here Line change is present but it is not showing while typing on to stackoverflow }
Roll Number = 2 Name : Shay Section = C Optimisation Technique = 99 Maths III = 99 Operating System = 99 Software Engneering = 99 Computer Graphics = 99
</code></pre> <p>and the result after the deletion is this:</p> <pre><code>Roll Number = 1 Name : Alex Section = C Optimisation Technique = 99 Maths III = 99 Operating System = 99 Software Engneering = 99 Computer Graphics = 99Roll Number = 2 Name : Shay Section = C Optimisation Technique = 99 Maths III = 99 Operating System = 99 Software Engneering = 99 Computer Graphics = 99
</code></pre> <p>I also want to add a comma at the end of each record, but I don't have any idea how to do this.</p>
<p>I modified your code and it should work how you wanted. A couple of things to consider:</p> <ol> <li>Your original text file seems to indicate that there are line breaks for each Roll Number. I assumed that with my answer.</li> <li>Because you are reading a text file, there are no integers, so <code>fo[3]</code> would never match <code>rollno</code> if you are converting the input to an <code>int</code>.</li> <li>I wasn't sure exactly where you wanted the comma. After each line? Or just at the very end.</li> </ol> <p>I wasn't sure if you wanted new lines for each Roll Number.</p> <pre><code>import os

def delete():
    rollno = input('\n Enter The Roll number : ')
    f = open('BCAstudents3.txt','r')
    f1 = open('temp.txt','a+')
    for line in f:
        fo = line.split()
        if fo:
            if fo[3] != rollno:
                newline = " ".join(fo) + ","
                #print(newline)
                f1.write(newline)
    f.close()
    f1.close()
    os.remove('BCAstudents3.txt')
    os.rename('temp.txt','BCAstudents3.txt')
</code></pre>
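<p>As a design note, the same logic reads a little safer with context managers, which close the files even if something fails midway, and with <code>'w'</code> instead of <code>'a+'</code> so a leftover <code>temp.txt</code> doesn't accumulate stale rows. A sketch, assuming the one-record-per-line layout shown in the question:</p> <pre><code>import os

def delete():
    rollno = input('\n Enter The Roll number : ')
    with open('BCAstudents3.txt', 'r') as f, open('temp.txt', 'w') as f1:
        for line in f:
            fo = line.split()
            if fo and fo[3] != rollno:      # fo[3] is the roll number in this layout
                f1.write(' '.join(fo) + ',\n')
    os.remove('BCAstudents3.txt')
    os.rename('temp.txt', 'BCAstudents3.txt')
</code></pre>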
python|python-3.x|file
1
1,904,976
57,750,437
What do these conda package resolution warnings mean and can they be safely ignored?
<p>I ran the command <code>conda update anaconda</code>. Here is the output:</p> <pre><code>Collecting package metadata (current_repodata.json): done
Solving environment: /
Warning: 4 possible package resolutions (only showing differing packages):
  - anaconda::_py-xgboost-mutex-2.0-cpu_0, anaconda::libxgboost-0.90-0
  - anaconda::_py-xgboost-mutex-2.0-cpu_0, defaults::libxgboost-0.90-0
  - anaconda::libxgboost-0.90-0, defaults::_py-xgboost-mutex-2.0-cpu_0
  - defaults::_py-xgboost-mutex-2.0-cpu_0, defaults::libxgboost-0.90done

# All requested packages already installed.
</code></pre> <p>What do the warnings mean exactly? I am unable to fix the warnings. Can these warnings be ignored safely, without side effects? </p> <p>I am running python 3.7 on anaconda.</p>
<p>Try this:</p> <pre><code>conda update conda
</code></pre> <p>If that doesn't work, try the following; sometimes the error is due to a version issue:</p> <pre><code>conda install conda=4.6
</code></pre> <p>As for the meaning: the warning just says the solver found several equally valid combinations of the same <code>libxgboost</code> / <code>_py-xgboost-mutex</code> builds across the <code>anaconda</code> and <code>defaults</code> channels, and picked one of them.</p> <p>Yes, you can ignore the warnings; as long as you can use the library, they won't cause an issue.</p>
python|anaconda|conda
1
1,904,977
56,190,740
how to find how many times the values of a row hit max consecutively
<p>I want to find how many times the values of a row hit max consecutively.</p> <ul> <li><p>Ps1: My data has 500K rows, so I am concerned about calculation speed</p></li> <li><p>Ps2: In this example, startDay=1 and endDay=7, but some rows have a different start or end day (such as startDay=2, endDay=5 or startDay=4, endDay=3; arr_bool controls these conditions)</p></li> </ul> <p>My data:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np

idx = ['id1', 'id2', 'id3', 'id4', 'id5', 'id6', 'id7', 'id8', 'id9', 'id10']

data = {'Day1':[0,0,1,0,1,1,0,0,1,1],
        'Day2':[0,1,1,1,2,1,0,1,1,2],
        'Day3':[1,3,1,1,1,0,0,1,3,2],
        'Day4':[1,2,0,1,1,0,0,2,1,1],
        'Day5':[0,2,1,1,1,1,0,2,1,1],
        'Day6':[1,0,1,1,2,1,0,2,1,1],
        'Day7':[0,0,0,1,1,1,0,0,3,1]}

startday = pd.DataFrame([1,1,1,1,1,1,1,1,1,1],columns=['start'], index=idx)
endday = pd.DataFrame([7,7,7,7,7,7,7,7,7,7],columns=['end'], index=idx)

df = pd.DataFrame(data, index=idx)

Neg99 = -999
Neg90 = -900
</code></pre> <p>I should search the time interval for every row (like a loop from startday to endday). I can find the max values in the time interval, but I couldn't find the count of the times the values of a row hit that max consecutively.</p> <pre class="lang-py prettyprint-override"><code>arr_bool = (np.less_equal.outer(startday.start, range(1,8))
            &amp; np.greater_equal.outer(endday.end, range(1,8)))

df_result = pd.DataFrame(df.mask(~arr_bool).max(axis=1), index=idx, columns=['result'])
</code></pre> <p>Last conditions:</p> <pre class="lang-py prettyprint-override"><code>df_result.result = np.select(condlist = [startday.start &gt; endday.end,
                                         ~arr_bool.any(axis=1)],
                             choicelist = [Neg99, Neg90],
                             default = df_result.result)
</code></pre> <p>The result I want:</p> <pre class="lang-py prettyprint-override"><code>result_i_want = pd.DataFrame([2,1,3,6,1,3,0,3,1,2],columns=['result'], index=idx)
</code></pre> <p>Here is @WeNYoBen's solution, but it runs slowly:</p> <pre class="lang-py prettyprint-override"><code>s=((df.eq(df.max(1),0))&amp;(df.ne(0)))
s.apply(lambda x : x[x].groupby((~x).cumsum()).count().max(),1).fillna(0)
</code></pre>
<h3>Pure Numpy slicing and stuff</h3> <p>The point of this effort is that OP asked for speed. This should help. If you have access to a JIT library like <code>numba</code>, you should use that and just loop over each row.</p> <pre><code>sd = startday.start.values
ed = endday.end.values
dr = ed - sd + 1

i = np.arange(len(df)).repeat(dr)
j = np.concatenate([np.arange(s - 1, e) for s, e in zip(sd, ed)])

v = df.values

mx = np.empty(len(v), dtype=v.dtype)
mx.fill(v.min())

np.maximum.at(mx, i, v[i, j])

b = np.ones((v.shape[0], v.shape[1] + 2), bool)
b[i, j + 1] = (v[i, j] != mx[i]) | (mx[i] == 0)

x, y = np.where(b)
y_ = np.diff(y)
mask = y_ &gt; 0
y__ = y_[mask]
x__ = x[1:][mask]

c = np.empty(len(v), int)
c.fill(y__.min())
np.maximum.at(c, x__, y__)

c - 1

array([2, 1, 3, 6, 1, 3, 0, 3, 1, 2])
</code></pre> <hr> <h3>Explanation</h3> <p>I'll leave the obvious alone.</p> <p>This represents the number of days in each interval</p> <pre><code>dr = ed - sd + 1
</code></pre> <p><code>i</code> is the flattened relevant row indices for the corresponding flattened column indices in <code>j</code></p> <pre><code>i = np.arange(len(df)).repeat(dr)
j = np.concatenate([np.arange(s - 1, e) for s, e in zip(sd, ed)])
</code></pre> <p><code>mx</code> will be the maximum value for each interval.</p> <p><code>b</code> will be a boolean array two columns wider than <code>v</code>. For this case it looks like:</p> <pre><code># Buffer                                                        Buffer
#  /--\                                                          /--\
array([[ True,  True,  True, False, False,  True, False,  True,  True],
       [ True,  True,  True, False,  True,  True,  True,  True,  True],
       [ True, False, False, False,  True, False, False,  True,  True],
       [ True,  True, False, False, False, False, False, False,  True],
       [ True,  True, False,  True,  True,  True, False,  True,  True],
       [ True, False, False,  True,  True, False, False, False,  True],
       [ True, False, False, False, False, False, False, False,  True],
       [ True,  True,  True,  True, False, False, False,  True,  True],
       [ True,  True,  True, False,  True,  True,  True, False,  True],
       [ True,  True, False, False,  True,  True,  True,  True,  True]])
</code></pre> <p>The reason for the buffer columns is that I can calculate differences of positions after using <code>np.where</code></p> <p>Now I populate <code>b</code> where the <code>v</code> values are not equal to the maximum values in <code>mx</code></p> <pre><code># not equal to max is equal to zero
b[i, j + 1] = (v[i, j] != mx[i]) | (mx[i] == 0)
</code></pre> <p>Then I find where those positions are in <code>y</code>.</p> <p>By taking the <code>diff</code>, I find the number of positions from one instance of not equal to max to the next position of not equal to max. This will always be one greater than the number we're looking for, but we'll correct that later.</p> <p>Also, the <code>diff</code> will reduce the length by one, but in reality there's a bunch of stuff we don't need, because I don't need to take the difference from one row relative to a previous row. Fortunately, I can get rid of all zero or negative differences because they don't make sense.</p> <p>I use <code>np.maximum.at</code> (again) but this time on the differences to find the largest difference, and that will be the longest length of consecutive max values for each row.</p> <p>Mind that it's actually one more than that.</p> <p>Phew. I'm tired of typing...</p>
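<p>For reference, a rough sketch of the <code>numba</code> route mentioned at the top, assuming <code>numba</code> is installed, integer day values, and the same 1-based start/end days; this is an illustration, not part of the benchmarked answer above:</p> <pre><code>from numba import njit
import numpy as np

@njit
def max_consecutive(v, sd, ed):
    # v: 2D array of daily values; sd/ed: 1-based start/end days per row
    out = np.zeros(v.shape[0], dtype=np.int64)
    for r in range(v.shape[0]):
        mx = v[r, sd[r] - 1]
        for c in range(sd[r] - 1, ed[r]):   # find the row max in the window
            if v[r, c] &gt; mx:
                mx = v[r, c]
        best = 0
        run = 0
        for c in range(sd[r] - 1, ed[r]):   # longest run of that max (0 rows give 0)
            if mx != 0 and v[r, c] == mx:
                run += 1
                best = max(best, run)
            else:
                run = 0
        out[r] = best
    return out

# max_consecutive(df.values, startday.start.values, endday.end.values)
# returns array([2, 1, 3, 6, 1, 3, 0, 3, 1, 2])
</code></pre>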
python|pandas|numpy|dataframe
4
1,904,978
55,197,222
Generating new List Pair from List Elements
<p>I have a list of keys and a single key. I want to create a new pair list, increasing the number of elements in the new pair list each time. I wrote a short Python code for this, but it's not doing what I expected, and I can't find where I am going wrong.</p> <pre><code>keys = ['83eb48aa3c770a55eb194b3e8c8207e3', 'cc657723152be15805bb53894486653c', 'cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cbbcfce733b1ae42c044131aab3e9439']
k = 'cbbcfce733b1ae42c044131aab3e9439'
freq = []
pair = [k]
for key in keys:
    pair.append(key)
    freq.append(pair)
print(freq)
</code></pre> <p>Expected Result:</p> <pre><code>[['cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3'],
 ['cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cc657723152be15805bb53894486653c'],
 ['cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cc657723152be15805bb53894486653c', 'cbbcfce733b1ae42c044131aab3e9439'],
 ['cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cc657723152be15805bb53894486653c', 'cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3'],
 ['cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cc657723152be15805bb53894486653c', 'cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cbbcfce733b1ae42c044131aab3e9439']]
</code></pre> <p>But I got the following result:</p> <pre><code>[['cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cc657723152be15805bb53894486653c', 'cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cbbcfce733b1ae42c044131aab3e9439'],
 ['cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cc657723152be15805bb53894486653c', 'cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cbbcfce733b1ae42c044131aab3e9439'],
 ['cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cc657723152be15805bb53894486653c', 'cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cbbcfce733b1ae42c044131aab3e9439'],
 ['cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cc657723152be15805bb53894486653c', 'cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cbbcfce733b1ae42c044131aab3e9439'],
 ['cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cc657723152be15805bb53894486653c', 'cbbcfce733b1ae42c044131aab3e9439', '83eb48aa3c770a55eb194b3e8c8207e3', 'cbbcfce733b1ae42c044131aab3e9439']]
</code></pre> <p>I spent more than an hour but couldn't find where I am going wrong.</p>
<p>You need to use the <code>.copy()</code> method to create a copy of the <code>pair</code> list each time. When you append <code>pair</code> to <code>freq</code>, Python stores a reference to the same list object rather than a snapshot, so every entry in <code>freq</code> ends up showing the final state of <code>pair</code>. To store a copy of the current list, use <code>.copy()</code> like:</p> <pre><code>for key in keys:
    pair.append(key)
    freq.append(pair.copy())
</code></pre>
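<p>A quick demonstration of the difference, in case the reference behaviour is surprising:</p> <pre><code>pair, freq = [], []
for key in ['a', 'b']:
    pair.append(key)
    freq.append(pair)         # stores a reference to the same list
print(freq)                   # [['a', 'b'], ['a', 'b']]

pair, freq = [], []
for key in ['a', 'b']:
    pair.append(key)
    freq.append(pair.copy())  # stores a snapshot of the current state
print(freq)                   # [['a'], ['a', 'b']]
</code></pre>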
python-3.x|list
0
1,904,979
42,565,699
How to save contents of filtered list in text document
<p>I am trying to save a list of users to a file on a network drive and want the name to be removed from the text file when the user exits the program. Whenever I run it, only the last name in the list is saved to the file. Here is what I have tried:</p> <pre><code>def move_offline(self):
    with open("usercheck.txt", "r") as self.text, open("user.txt", "r") as exc:
        exclusions = [line.rstrip('\n') for line in exc]
        for line in self.text:
            if not any(exclusion in line for exclusion in exclusions):
                #print (line)
                self.gg = [line.strip("\n")]
                print (self.gg)
</code></pre> <p>This function is just to determine what the username is and whether it is present in the file on the network drive:</p> <pre><code>def actmov(self):
    try:
        mmm = open("usercheck.txt","w")
        mmm.writelines(["%s\n"%item for item in self.gg])
    except AttributeError:
        print ("Oops, something didnt save correctly!")
</code></pre> <p>If you have a more elegant solution which is completely different from the approach shown here, I would be more than happy to see it!</p>
<pre><code>self.gg = [line.strip("\n")] </code></pre> <p>This is your problem. Every iteration you're setting <code>self.gg</code> to the one-element list consisting of <code>line.strip("\n")</code>. Instead you should be setting <code>self.gg</code> to an empty list somewhere during initialization (or perhaps at the beginning of your function?) then doing <code>self.gg.append(line.strip("\n"))</code> instead.</p>
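<p>Put together, a sketch of the corrected method, same logic as the question with the initialization moved to the start of the function:</p> <pre><code>def move_offline(self):
    self.gg = []  # fresh list each call, instead of overwriting it per line
    with open("usercheck.txt", "r") as text, open("user.txt", "r") as exc:
        exclusions = [line.rstrip('\n') for line in exc]
        for line in text:
            if not any(exclusion in line for exclusion in exclusions):
                self.gg.append(line.strip("\n"))
    print(self.gg)
</code></pre>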
python|python-3.x
1
1,904,980
54,126,608
defaultdict slower in comparison to normal dictionary
<p>I have a very simple piece of code and was testing it with normal dictionaries as well as <code>defaultdict</code>, and surprisingly <code>defaultdict</code> is slower when compared with a normal dictionary.</p> <pre><code>from collections import defaultdict
from timeit import timeit
import time

text = "hello this is python python is a great language, hello again"

d = defaultdict(int)
s = {}

def defdict():
    global text, d
    for word in text.split():
        d[word] += 1

def nordict():
    global text, s
    for word in text.split():
        if word not in s:
            s[word] = 1
        else:
            s[word] += 1

print(timeit(stmt='defdict', setup='from __main__ import defdict', number=3))
print(timeit(stmt='nordict', setup='from __main__ import nordict', number=3))

st = time.time()
defdict()
print(time.time() - st)

st = time.time()
nordict()
print(time.time() - st)
</code></pre> <p><strong><em>Output</em></strong></p> <pre><code>5.799811333417892e-07
3.5099219530820847e-07
6.198883056640625e-06
3.0994415283203125e-06
</code></pre> <p>This is a very simple example, and for this particular case I can surely use <code>Counter</code>, which would be the fastest of all. But I am looking at it from an overall perspective, for cases where we need to do more than count the occurrences of a key and where we obviously cannot use <code>Counter</code>.</p> <p>So why am I seeing this behavior? Am I missing something here, or doing something the wrong way?</p>
<p>Your test is flawed because of the small size of the string. Thus fixed costs can outweigh the performance of your iteration logic. A good hint is that your timings are measured in microseconds, negligible for benchmarking purposes.</p> <p>Here's a more reasonable test:</p> <pre><code>n = 10**5
text = "hello this is python python is a great language, hello again"*n

%timeit defdict()  # 445 ms per loop
%timeit nordict()  # 520 ms per loop
</code></pre>
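<p>One more flaw worth flagging: <code>timeit(stmt='defdict', ...)</code> only times evaluating the <em>name</em> <code>defdict</code>, not calling the function, which is why the first two numbers are sub-microsecond. To time the actual call, include the parentheses:</p> <pre><code>print(timeit(stmt='defdict()', setup='from __main__ import defdict', number=3))
print(timeit(stmt='nordict()', setup='from __main__ import nordict', number=3))
</code></pre>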
python
4
1,904,981
53,831,964
how do I concatenate string with list in list items?
<pre><code>a = [apple,[green,red,yellow]]
print(a[0]+ " available in these colours " + a[1[]])
</code></pre> <p>How do I concatenate a string with a list inside list items?</p> <p><strong>Expected result:</strong></p> <pre><code>apple available in these colours green red yellow
</code></pre>
<p>Assuming you start with</p> <pre><code>a = ['apple',['green', 'red', 'yellow']] </code></pre> <p>Then <code>a[0]</code> is a string, and <code>a[1]</code> is a list of strings. You can change <code>a[1]</code> into a string using <code>', '.join(a[1])</code>, which will concatenate them using a comma and a space. </p> <p>So</p> <pre><code>a[0] + ' available in ' + ', '.join(a[1]) </code></pre> <p>should work, as you can concatenate strings with <code>+</code>.</p>
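<p>Putting it together:</p> <pre><code>a = ['apple', ['green', 'red', 'yellow']]
print(a[0] + ' available in these colours ' + ', '.join(a[1]))
# apple available in these colours green, red, yellow
</code></pre>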
python|list
3
1,904,982
53,892,151
How to write part of a list to a csv?
<p>I have data in these lists. I need to use certain elements at the beginning of the row, and then add 30 data points after. I understand how to slice a list, but I want to output those individual items from the list.</p> <pre><code>w.writerow([sub_sub_header_list[0], data_list[0:29]])
w.writerow([sub_sub_header_list[1], data_list[30:59]])
w.writerow([sub_sub_header_list[2], data_list[60:89]])
w.writerow([sub_sub_header_list[3], data_list[90:119]])
</code></pre> <p>I get something like this:</p> <p><code>Team Stats, [u'310', u'5291', u'1018', u'5.2', u'27', u'11', u'289', u'377', u'598', u'3879', u'26', u'16', u'6.3', u'190', u'398', u'1412', u'6', u'3.5', u'73', u'88', u'857', u'26', u'193', u'27.5', u'13.0', u'Own 27.6', u'2:21', u'5.40', u'27.4']</code></p> <p>When I want:</p> <p><code>Team Stats, [310, 5291, 1018</code>,...] and so forth.</p>
<p>Do keep in mind that CSVs are structured in a tabular fashion (like Excel). You have the header first, then the data for each column of the header on separate rows. When you do a <code>writerow</code> you must provide it with the actual values, for specific columns, for the current row that's being written. By doing <code>w.writerow([sub_sub_header_list[0], data_list[0:29]])</code>, which is essentially <code>w.writerow([header, [1, 2, ...]])</code>, you've written a whole list into a single cell, so you got data like this in the CSV:</p> <pre><code>u'[1,2,..]', u'[3,4,...]'
</code></pre> <p>It was basically treating each list as an individual cell, and converting it to a string so that it could store it in the CSV (that's where the <code>u''</code> comes from).</p> <p>You basically have to keep a running index through the vector, since it's a 1-dimensional data structure that has the series appended one after another.</p> <pre><code>import csv

pf = open("out.csv", "w")
csv_writer = csv.DictWriter(pf, fieldnames=["A", "B", "C"])
csv_writer.writeheader()

LENGTH = 3  # number of elements per column
data_list = [1, 1, 2, 2, 3, 3]
for i in range(LENGTH):
    csv_writer.writerow({
        'A': data_list[i],
        'B': data_list[i+LENGTH],
        'C': data_list[i+LENGTH*2],
    })

pf.close()
</code></pre> <p>and the output would be something like:</p> <pre><code>A,B,C
1,2,3
1,2,3
</code></pre>
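<p>If you actually want the row layout from the question, the header cell followed by 30 individual value cells, concatenate the slice into the row list instead of nesting it. Note that <code>data_list[0:30]</code>, not <code>[0:29]</code>, yields 30 items, since the stop index is exclusive:</p> <pre><code>w.writerow([sub_sub_header_list[0]] + data_list[0:30])
w.writerow([sub_sub_header_list[1]] + data_list[30:60])
w.writerow([sub_sub_header_list[2]] + data_list[60:90])
w.writerow([sub_sub_header_list[3]] + data_list[90:120])
</code></pre>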
python|export-to-csv
0
1,904,983
58,223,060
Pythonic way to count keys of multiple dicts
<p>Let's say there are 3 dictionaries <code>first</code>, <code>second</code>, <code>third</code> with the following values:</p> <pre class="lang-py prettyprint-override"><code>first = {'a': 0.2, 'b': 0.001}
second = {'a': 0.99, 'c': 0.78}
third = {'c': 1, 'd': 0.1}
total = {'_first': first, '_second': second, '_third':third}
</code></pre> <p>Is there a way to <strong>quickly</strong> get a data structure which can hold the count of each key (<code>a</code>, <code>b</code>, <code>c</code>, <code>d</code>) using <em><code>total</code></em> instead of multiple dictionaries? For example, the answer should return something like <code>{'a':2, 'b':1, 'c':2, 'd':1}</code>, since the keys <code>a</code> and <code>c</code> occur twice while <code>b</code> and <code>d</code> occur only once in these dictionaries. </p>
<pre><code>from collections import Counter
from itertools import chain

first = {'a': 0.2, 'b': 0.001}
second = {'a': 0.99, 'c': 0.78}
third = {'c': 1, 'd': 0.1}

print(Counter(chain(first, second, third)))
</code></pre> <p>To account for the edited question, with a variable number of dicts stored in a dict <code>total</code>:</p> <pre><code>total = {'_first': first, '_second': second, '_third':third}
print(Counter(chain.from_iterable(total.values())))
</code></pre>
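<p>For the sample data this prints <code>Counter({'a': 2, 'c': 2, 'b': 1, 'd': 1})</code> (ties keep first-seen order). Since <code>Counter</code> is a <code>dict</code> subclass you can use it directly, or wrap it for a plain dictionary, reusing the imports above:</p> <pre><code>counts = dict(Counter(chain.from_iterable(total.values())))
</code></pre>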
python|counter
2
1,904,984
65,318,175
Discord.py random syntax error when loading bot.run
<p>I was following a tutorial to learn some new discord.py techniques, but I came across this error which I can't fix. This is the code:</p> <pre><code>import discord
import asyncio
from discord.ext import commands
import os
from discord.utils import get
from dotenv import load_dotenv

load_dotenv()
DISCORD_TOKEN = os.getenv(&quot;Nzg4NjQxMTQzNDExNTcyNzk2.X9mdTA.3NrZ87u3cn8-i5icp7AQD1xdmbQ&quot;)

bot = commands.Bot(command_prefix=&quot;/&quot;

bot.run('Nzg4NjQxMTQzNDExNTcyNzk2.X9mdTA.3NrZ87u3cn8-i5icp7AQD1xdmbQ')
</code></pre> <p>My problem is that when I run the code I get a syntax error, specifically on the <code>&quot;b&quot;</code> at the beginning of <code>bot.run()</code>. Any help is appreciated.</p>
<p>If your code is exactly what you've posted, you're just missing a <code>)</code> at the line before the last.</p> <pre><code>bot = commands.Bot(command_prefix=&quot;/&quot; # &lt;-- MISSING PARENTHESIS HERE

bot.run('Nzg4NjQxMTQzNDExNTcyNzk2.X9mdTA.3NrZ87u3cn8-i5icp7AQD1xdmbQ')
</code></pre> <p>In cases like this Python throws an error at the line below because it can't understand that the previous line has ended.</p>
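<p>With the parenthesis in place, the line becomes:</p> <pre><code>bot = commands.Bot(command_prefix=&quot;/&quot;)
</code></pre>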
python|discord|discord.py
1
1,904,985
22,596,975
Terminate the Thread by using button in Tkinter
<p>In my GUI code, I tried to run loop1 and loop2 at the same time by clicking one button. Thus, I used <code>Thread</code> to achieve this. But I also tried to stop them by clicking another button, and I failed. After searching on stackoverflow, I found out that there is no direct way to kill a <code>Thread</code>. Here is part of the code:</p> <pre><code>def loop1():
    while True:
        call (["raspivid -n -op 150 -w 640 -h 480 -b 2666666.67 -t 5000 -o test.mp4"],shell=True)
        call (["raspivid -n -op 150 -w 640 -h 480 -b 2666666.67 -t 5000 -o test1.mp4"],shell=True)

def loop2():
    while True:
        call (["arecord -D plughw:1 --duration=5 -f cd -vv rectest.wav"],shell=True)
        call (["arecord -D plughw:1 --duration=5 -f cd -vv rectest1.wav"],shell=True)

def combine():
    Thread(target = loop1).start()
    Thread(target = loop2).start()

def stop():
    Thread(target = loop1).terminate()
    Thread(target = loop2).terminate()
</code></pre> <p>I tried to use these two buttons to control it:</p> <pre><code>btn1 = Button(tk, text="Start Recording", width=16, height=5, command=combine)
btn1.grid(row=2,column=0)
btn2 = Button(tk, text="Stop Recording", width=16, height=5, command=stop)
btn2.grid(row=3,column=0)
</code></pre> <p>I want loop1 and loop2 to be stopped by button2. Apparently there is no <code>terminate</code> in <code>Thread</code>, so I tried another method, <code>Process</code>. Here is the code:</p> <pre><code>from subprocess import call
from multiprocessing import Process

def loop1():
    while True:
        call (["raspivid -n -op 150 -w 640 -h 480 -b 2666666.67 -t 5000 -o test.mp4"],shell=True)
        call (["raspivid -n -op 150 -w 640 -h 480 -b 2666666.67 -t 5000 -o test1.mp4"],shell=True)

def loop2():
    while True:
        call (["arecord -D plughw:1 --duration=5 -f cd -vv rectest.wav"],shell=True)
        call (["arecord -D plughw:1 --duration=5 -f cd -vv rectest1.wav"],shell=True)

if __name__ == '__main__':
    Process(target = loop1).start()
    Process(target = loop2).start()
</code></pre> <p>But this program finished immediately after I ran it. I know there is a <code>terminate</code> function in <code>Process</code>, but I don't know how to use it.</p>
<p>A potential solution would use <code>Event</code>s. Also, a good rule of thumb when making GUIs is to use objects.</p> <pre><code>from threading import Thread,Event
from subprocess import call

class Controller(object):
    def __init__(self):
        self.thread1 = None
        self.thread2 = None
        self.stop_threads = Event()

    def loop1(self):
        while not self.stop_threads.is_set():
            call (["raspivid -n -op 150 -w 640 -h 480 -b 2666666.67 -t 5000 -o test.mp4"],shell=True)
            call (["raspivid -n -op 150 -w 640 -h 480 -b 2666666.67 -t 5000 -o test1.mp4"],shell=True)

    def loop2(self):
        while not self.stop_threads.is_set():
            call (["arecord -D plughw:1 --duration=5 -f cd -vv rectest.wav"],shell=True)
            call (["arecord -D plughw:1 --duration=5 -f cd -vv rectest1.wav"],shell=True)

    def combine(self):
        self.stop_threads.clear()
        self.thread1 = Thread(target = self.loop1)
        self.thread2 = Thread(target = self.loop2)
        self.thread1.start()
        self.thread2.start()

    def stop(self):
        self.stop_threads.set()
        self.thread1.join()
        self.thread2.join()
        self.thread1 = None
        self.thread2 = None
</code></pre> <p>This way your button calls would become something like:</p> <pre><code>control = Controller()
btn1 = Button(tk, text="Start Recording", width=16, height=5, command=control.combine)
btn1.grid(row=2,column=0)
btn2 = Button(tk, text="Stop Recording", width=16, height=5, command=control.stop)
btn2.grid(row=3,column=0)
</code></pre>
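<p>One caveat worth noting: the <code>Event</code> is only checked between loop iterations, so <code>stop()</code> will block the GUI until the current <code>call(...)</code> finishes (up to about 5 seconds here). Also, if the window can be closed mid-recording, non-daemon threads will keep the interpreter alive; a small tweak in <code>combine</code> avoids that:</p> <pre><code>self.thread1 = Thread(target = self.loop1)
self.thread1.daemon = True  # don't keep the process alive after the GUI exits
self.thread2 = Thread(target = self.loop2)
self.thread2.daemon = True
</code></pre>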
python|multithreading|tkinter|multiprocessing|raspberry-pi
5
1,904,986
22,547,298
Python - numpy 2D array too long and prints weird?
<p>When I want to print a 2D array (shape of a square) to see it in my python panel, the format of the array appears to show up weird. It seems that 19 elements won't make a square shape, and it puts that last element onto the next line. When I try with 18 elements, it's fine. I'm not sure if this is a numpy issue or my platform (I use Enthought Canopy). Is there anything I could do to have all 19 elements in one line when I print it out?</p> <pre><code>import numpy
a = numpy.zeros(361)
b = a.reshape(19,19)
print b
</code></pre> <p>Output (each row of 19 wraps its last element onto the next line):</p> <pre><code>[[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
   0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
   0.]
 ...
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
   0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
   0.]]
</code></pre>
<p>You can change the default line-wrapping width with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html" rel="nofollow">np.set_printoptions</a>. The default is 75.</p> <pre><code>numpy.set_printoptions(linewidth=200) </code></pre>
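<p>On NumPy 1.15+ you can also scope the change with a context manager, so the wider line width only applies to this one print:</p> <pre><code>with numpy.printoptions(linewidth=200):
    print(b)
</code></pre>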
python|arrays|numpy
4
1,904,987
45,688,871
Implementing an efficient queue in Python
<p>I have been trying to implement a queue in Python, and I've been running into a problem.</p> <p>I am attempting to use lists to implement the Queue data structure, however I can't quite figure out how to make <code>enqueue</code> and <code>dequeue</code> O(1) operations.</p> <p>Every example I have seen online seems to just use <code>append</code> for the <code>enqueue</code> operation and remove the first element from the list for the <code>dequeue</code> operation. But this would make the <code>dequeue</code> operation O(n) (where n is the size of the list), correct?</p> <p>Is there something basic I have missed? Or do you have to use LinkedLists to implement a Queue efficiently?</p> <pre><code>import unittest

class Queue:
    def __init__(self):
        self._queue = []
        self.size = 0
        self.maxSize = 10

    def enqueue(self, item):
        if self.size &lt; self.maxSize:
            self._queue.append(item)

    def dequeue(self):
        '''
        Removes an item from the front of the list.
        Remove first element of the array
        '''
        first = self._queue[0]
        del self._queue[0]
        return first
</code></pre>
<p>As <a href="https://stackoverflow.com/users/1097347/uri-goren">Uri Goren</a> astutely <a href="https://stackoverflow.com/questions/45688871/implementing-an-efficient-queue-in-python#comment78335497_45688871">noted above</a>, the Python stdlib already implemented an efficient queue on your fortunate behalf: <a href="https://docs.python.org/3/library/collections.html#deque-objects" rel="noreferrer"><code>collections.deque</code></a>.</p> <h2>What Not to Do</h2> <p>Avoid reinventing the wheel by hand-rolling your own:</p> <ul> <li><a href="https://stackoverflow.com/a/48459059/2809027">Linked list implementation</a>. While doing so reduces the worst-case time complexity of your <code>dequeue()</code> and <code>enqueue()</code> methods to O(1), the <code>collections.deque</code> type already does so. It's also thread-safe and presumably more space and time efficient, given its C-based heritage.</li> <li><a href="https://stackoverflow.com/a/48577248/2809027">Python list implementation</a>. As I <a href="https://stackoverflow.com/questions/45688871/implementing-an-efficient-queue-in-python#comment85545411_48577248">note below</a>, implementing the <code>enqueue()</code> methods in terms of a Python list increases its worst-case time complexity to <strong>O(n).</strong> Since removing the last item from a C-based array and hence Python list is a constant-time operation, implementing the <code>dequeue()</code> method in terms of a Python list retains the same worst-case time complexity of O(1). But who cares? <code>enqueue()</code> remains pitifully slow.</li> </ul> <p>To quote the <a href="https://docs.python.org/3/library/collections.html#deque-objects" rel="noreferrer">official <code>deque</code> documentation</a>:</p> <blockquote> <p>Though <code>list</code> objects support similar operations, they are optimized for fast fixed-length operations and incur O(n) memory movement costs for <code>pop(0)</code> and <code>insert(0, v)</code> operations which change both the size and position of the underlying data representation.</p> </blockquote> <p>More critically, <code>deque</code> <em>also</em> provides out-of-the-box support for a maximum length via the <code>maxlen</code> parameter passed at initialization time, obviating the need for manual attempts to limit the queue size (which inevitably breaks thread safety due to race conditions implicit in if conditionals).</p> <h2>What to Do</h2> <p>Instead, implement your <code>Queue</code> class in terms of the standard <code>collections.deque</code> type as follows. Note that <code>dequeue()</code> must use <code>popleft()</code> rather than <code>pop()</code>, so that the oldest item is returned first (FIFO, as a queue should behave):</p> <pre><code>from collections import deque

class Queue:
    '''
    Thread-safe, memory-efficient, maximally-sized queue supporting queueing and
    dequeueing in worst-case O(1) time.
    '''

    def __init__(self, max_size = 10):
        '''
        Initialize this queue to the empty queue.

        Parameters
        ----------
        max_size : int
            Maximum number of items contained in this queue. Defaults to 10.
        '''
        self._queue = deque(maxlen=max_size)

    def enqueue(self, item):
        '''
        Queues the passed item (i.e., pushes this item onto the tail of this
        queue).

        If this queue is already full, the item at the head of this queue is
        silently removed from this queue *before* the passed item is queued.
        '''
        self._queue.append(item)

    def dequeue(self):
        '''
        Dequeues (i.e., removes) the item at the head of this queue *and*
        returns this item.

        Raises
        ----------
        IndexError
            If this queue is empty.
        '''
        return self._queue.popleft()
</code></pre> <p>The proof is in the hellish pudding:</p> <pre><code>&gt;&gt;&gt; queue = Queue()
&gt;&gt;&gt; queue.enqueue('Maiden in Black')
&gt;&gt;&gt; queue.enqueue('Maneater')
&gt;&gt;&gt; queue.enqueue('Maiden Astraea')
&gt;&gt;&gt; queue.enqueue('Flamelurker')
&gt;&gt;&gt; print(queue.dequeue())
Maiden in Black
&gt;&gt;&gt; print(queue.dequeue())
Maneater
&gt;&gt;&gt; print(queue.dequeue())
Maiden Astraea
&gt;&gt;&gt; print(queue.dequeue())
Flamelurker
</code></pre> <h2>It Is Dangerous to Go Alone</h2> <p>Actually, <strong>don't do that either.</strong></p> <p>You're better off just using a raw <code>deque</code> object rather than attempting to manually encapsulate that object in a <code>Queue</code> wrapper. The <code>Queue</code> class defined above is given <em>only</em> as a trivial demonstration of the general-purpose utility of the <code>deque</code> API.</p> <p>The <code>deque</code> class provides <a href="https://docs.python.org/3/library/collections.html#deque-objects" rel="noreferrer">significantly more features</a>, including:</p> <blockquote> <p>...iteration, pickling, <code>len(d)</code>, <code>reversed(d)</code>, <code>copy.copy(d)</code>, <code>copy.deepcopy(d)</code>, membership testing with the in operator, and subscript references such as <code>d[-1]</code>.</p> </blockquote> <p>Just use <code>deque</code> anywhere a single- or double-ended queue is required. That is all.</p>
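<p>In other words, if all you need is the queue behaviour, the raw type is enough; a usage sketch:</p> <pre><code>from collections import deque

q = deque(maxlen=10)
q.append('task')       # enqueue on the right
item = q.popleft()     # dequeue from the left (FIFO)
</code></pre>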
python|queue
30
1,904,988
28,833,776
C Function in Python: Return an Array and Variables
<p>I just transitioned into Python from Matlab, and I know that calling a C function is different from a Matlab mex. Lacking proper documentation, I have scoured the web for days now and couldn't find the solution to my simple problem.</p> <p>Basically I want to call a C function in Python, input 2 integers and a 2D array, do some calculations, and return a 2D array. I would also like to output some other variables too (this might require the use of a structure). I know these are very basic things, but if someone can help me, I will greatly appreciate it.</p> <p>So what I am hoping for is the equivalent of this in matlab! Thx!!!</p> <blockquote> <blockquote> <p>[Nxy,outArray] = Function(Nx,Ny,inArray)</p> </blockquote> </blockquote> <p><strong>Code for setup.py</strong></p> <pre><code>from distutils.core import setup, Extension
import numpy.distutils.misc_util

setup(
    ext_modules=[Extension("myfunc", ["myfunc.c"])],
    include_dirs=numpy.distutils.misc_util.get_numpy_include_dirs(),
)
</code></pre> <p><strong>Code for myfunc.c</strong></p> <pre><code>static char module_docstring[] = "This function does some calculations...";
static char Run_docstring[] = "Run what ever algorithm there is!";

static PyObject *Run(PyObject *self, PyObject *args)
{
    int i, j, Nx, Ny;
    PyObject *Data;

    /* Parse the input tuple */
    if (!PyArg_ParseTuple(args, "iiO", &amp;Nx, &amp;Ny, &amp;Data)) // Data is a 2D array
        return NULL;

    PyObject *array = PyArray_FROM_OTF(Data, NPY_DOUBLE, NPY_IN_ARRAY); // Interpret as numpy array
    double *newData = (double*)PyArray_DATA(array); // Pointers to the data as C-types

    double outData[Ny][Nx]; // Creating output 2D Array
    int outCount;

    // Calculations
    outCount = Nx*Ny;
    for (i=0; i&lt;Nx; i++){
        for (j=0; i&lt;Ny; j++){
            outData[j][i] = sqrt(Data[j][i]) + sqrt(outCount);
        }
    }

    // Free memory used in PyObject
    Py_DECREF(array);

    // Return output Data
    PyObject *ret = Py_BuildValue("i", outCount);
    return ret, PyArray_Return(outData);
}

static PyMethodDef module_methods[] = {
    {"Run", Run, METH_VARARGS, Run_docstring},
    {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC initmyfunc(void)
{
    PyObject *m = Py_InitModule3("myfunc", module_methods, module_docstring);
    if (m == NULL)
        return;
    import_array();
}
</code></pre>
<p>It is possible to call optimised C functions from Python using <a href="http://cython.org/" rel="nofollow">Cython</a>.</p> <p>In this particular case, we can for instance create a <code>myfunc.pyx</code> file,</p> <pre><code>import numpy as np
cimport numpy as np
from libc.math cimport sqrt

cpdef tuple myfunc(int Nx, int Ny, double[:,::1] inArray):
    cdef double [:,::1] outData = np.zeros((Nx, Ny))
    cdef int i,j, res
    with nogil:
        for i in range(Nx):
            for j in range(Ny):
                outData[i, j] = sqrt(inArray[i,j]) + sqrt(&lt;double&gt; Nx*Ny)
    res = 0 # not sure how res is computed
    return res, outData.base
</code></pre> <p>that can be compiled with the following <code>setup.py</code>,</p> <pre><code>from distutils.core import setup, Extension
import numpy as np
from Cython.Distutils import build_ext

setup(
    ext_modules=[Extension("myfunc", ["myfunc.pyx"])],
    cmdclass = {'build_ext': build_ext},
    include_dirs=[np.get_include()])
</code></pre> <p>using</p> <pre><code>$ python setup.py build_ext --inplace
</code></pre> <p>This generates and compiles the <code>myfunc.c</code>. The resulting Python module can then be used as follows,</p> <pre><code>from myfunc import myfunc
import numpy as np

Nx, Ny = 2, 2
inArray = np.ones((Nx,Ny))

res, outArray = myfunc(Ny,Ny, inArray)
print(outArray)
# which would return
[[ 3.  3.]
 [ 3.  3.]]
</code></pre> <p>Note that in this case it is not necessary to pass the array dimensions <code>Nx</code>, <code>Ny</code> to the function, as they can be accessed through <code>inArray.shape</code> in Cython.</p> <p>Please refer to the <a href="http://docs.cython.org/src/tutorial/numpy.html" rel="nofollow">Cython documentation</a> regarding Numpy for further optimisation details.</p>
python|c|arrays|function|return
1
1,904,989
14,769,443
Why Popen need pressing Enter to return back to shell
<p>I want to execute shell commands with bash in my Python script, getting real-time printed messages on the screen. I use the following line to do this:</p> <pre><code>subprocess.Popen(my_commands, shell=True, stdout=sys.stdout, stderr=sys.stderr, executable='/bin/bash')
</code></pre> <p>Everything looks good except that after the shell commands are finished, the input cursor is still invisible. I have to press the Enter key to activate the shell again. So what is the error?</p>
<p>It is because you have two shells: <code>shell=True</code> and <code>executable='/bin/bash'</code>.</p> <p>If you set <code>shell=False</code> you will not see the output on the console. You will have to use <code>PIPE</code> and/or <code>subprocess.communicate()</code> to get the output (depending on what you want).</p>
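<p>Independently of the shell question: <code>Popen</code> returns immediately, so the script can finish (and the interactive shell can print its prompt) while the child is still writing to the terminal, which looks exactly like a missing cursor until you press Enter. If you want control back only after the command finishes, wait for the child:</p> <pre><code>p = subprocess.Popen(my_commands, shell=True, stdout=sys.stdout, stderr=sys.stderr, executable='/bin/bash')
p.wait()  # block until the command completes
</code></pre>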
python|popen
0
1,904,990
14,856,526
Parsing Twitter JSON object in Python
<p>I am trying to download tweets from twitter.</p> <p>I have used Python and Tweepy for this, though I am new to both Python and the Twitter API.</p> <p>My Python script is as follows:</p> <pre><code>#!usr/bin/python

#import modules
import sys
import tweepy
import json

#global variables
consumer_key = ''
consumer_secret = ''
token_key = ''
token_secret = ''

#Main function
def main():
    print sys.argv[0],'starts'
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(token_key, token_secret)
    print 'Connected to Twitter'
    api = tweepy.API(auth)
    if not api.test():
        print 'Twitter API test failed'
    print 'Experiment with cursor'
    print 'Get search method returns json objects'
    json_search = api.search(q="football")
    #json.loads(json_search())
    print json_search

#Standard boilerplate to call main function if this file runs
if __name__ == '__main__':
    main()
</code></pre> <p>I am getting a result as follows:</p> <pre><code>[&lt;tweepy.models.SearchResult object at 0x9a0934c&gt;, &lt;tweepy.models.SearchResult object at 0x9a0986c&gt;, &lt;tweepy.models.SearchResult object at 0x9a096ec&gt;, &lt;tweepy.models.SearchResult object at 0xb76d8ccc&gt;, &lt;tweepy.models.SearchResult object at 0x9a09ccc&gt;, &lt;tweepy.models.SearchResult object at 0x9a0974c&gt;, &lt;tweepy.models.SearchResult object at 0x9a0940c&gt;, &lt;tweepy.models.SearchResult object at 0x99fdfcc&gt;, &lt;tweepy.models.SearchResult object at 0x99fdfec&gt;, &lt;tweepy.models.SearchResult object at 0x9a08cec&gt;, &lt;tweepy.models.SearchResult object at 0x9a08f4c&gt;, &lt;tweepy.models.SearchResult object at 0x9a08eec&gt;, &lt;tweepy.models.SearchResult object at 0x9a08a4c&gt;, &lt;tweepy.models.SearchResult object at 0x9a08c0c&gt;, &lt;tweepy.models.SearchResult object at 0x9a08dcc&gt;]
</code></pre> <p>Now I am confused about how to extract the tweets from this information. I tried to use the <code>json.loads</code> method on this data, but it gives me an error because JSON expects a string or buffer. Example code would be highly appreciated. Thanks in advance.</p>
<p>Tweepy gives you richer objects; it parsed the JSON for you.</p> <p>The <code>SearchResult</code> objects have the same attributes as the JSON structures that Twitter sent; just look up the <a href="https://dev.twitter.com/docs/platform-objects/tweets" rel="nofollow">Tweet documentation</a> to see what is available:</p> <pre><code>for result in api.search(q="football"):
    print result.text
</code></pre> <p>Demo:</p> <pre><code>&gt;&gt;&gt; import tweepy
&gt;&gt;&gt; tweepy.__version__
'3.3.0'
&gt;&gt;&gt; consumer_key = '&lt;consumer_key&gt;'
&gt;&gt;&gt; consumer_secret = '&lt;consumer_secret&gt;'
&gt;&gt;&gt; access_token = '&lt;access_token&gt;'
&gt;&gt;&gt; access_token_secret = '&lt;access_token_secret&gt;'
&gt;&gt;&gt; auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
&gt;&gt;&gt; auth.set_access_token(access_token, access_token_secret)
&gt;&gt;&gt; api = tweepy.API(auth)
&gt;&gt;&gt; for result in api.search(q="football"):
...     print result.text
...
Great moments from the Women's FA Cup http://t.co/Y4C0LFJed9
RT @freebets: 6 YEARS AGO TODAY: Football lost one of its great managers. RIP Sir Bobby Robson. http://t.co/NCo90ZIUPY
RT @Oddschanger: COMPETITION CLOSES TODAY! Win a Premier League or Football League shirt of YOUR choice! RETWEET &amp;amp; FOLLOW to enter. http…
Berita Transfer: Transfer rumours and paper review – Friday, July 31 http://t.co/qRrDIEP2zh [TS] #nobar #gosip
@ajperry18 im sorry I don't know this football shit
@risu_football おれモロ誕生日で北辰なんすよ笑
NFF Unveils Oliseh As Super Eagles Coach - SUNDAY Oliseh has been unveiled by the Nigeria Football... http://t.co/IOYajD9bi2 #Sports
RT @BilelGhazi: RT @lequipe : Gourcuff, au tour de Guingamp http://t.co/Dkio8v9LZq
@EDS_Amy HP SAUCE ?
RT @fsntweet: マンCの塩対応に怒りの炎!ベトナム人ファン、チケットを燃やして猛抗議 - http://t.co/yg5iuABy3K なめるなよ、プレミアリーグ!マンチェスターCのプレシーズンツアーの行き先でベトナム人男性が、衝撃的な行
RT @peterMwendo: Le football cest un sport collectif ou on doit se faire des passe http://t.co/61hy138yo8
RT @TSBible: 6 years ago today, football lost a true gentleman. Rest in Peace Sir Bobby Robson. http://t.co/6eHTI6UxaC
6 years ago today the greatest football manger of all time passed away SIR Bobby Robson a true Ipswich and footballing legend
The Guardian: PSG close to sealing £40m deal for Manchester United’s Ángel Di María. http://t.co/gAQEucRLZa
Sir Bobby Robson, the #football #legend passed away 6 years ago. #Barcelona #newcastle #Porto http://t.co/4UXpnvrHhS
</code></pre>
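<p>And if you ever do need the raw JSON, for logging or storage, each tweepy model keeps the parsed payload in its <code>_json</code> attribute (an implementation detail, but a widely used one):</p> <pre><code>import json

for result in api.search(q="football"):
    print json.dumps(result._json, indent=2)
</code></pre>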
python|json|twitter|tweepy
8
1,904,991
41,405,030
XLSX Writer Format Row Borders for First n Columns
<p>I want to add bottom borders to the first 4 rows of a workbook, but only up to the 7th column. How can I restrict the range of columns to which this format will apply? Here's what I tried so far:</p> <pre><code>import xlsxwriter
import numpy as np

format = workbook.add_format()
format.set_bottom(7)

for r in np.arange(4):
    worksheet.set_row(r, 15, format)
</code></pre> <p>This works great for formatting all columns for those rows, but I need to either delete all columns after the 7th column or find a way to restrict the range of columns to which the row format is applied.</p> <p>Thanks in advance!</p>
<p>Currently, you <a href="http://xlsxwriter.readthedocs.io/faq.html#q-can-i-apply-a-format-to-a-range-of-cells-in-one-go" rel="nofollow noreferrer">cannot format a specified range of cells</a>, but, as a workaround, you can apply <em>conditional formatting</em> to a range based on a condition - you just need a condition that always evaluates to true:</p> <pre><code>format = workbook.add_format()
format.set_bottom(7)

worksheet.conditional_format('A1:G4', {'type': 'cell',
                                       'criteria': '!=',
                                       'value': 'None',
                                       'format': format})
</code></pre> <p>The <code>A1:G4</code> range covers the "first 4 rows, first 7 columns only" requirement.</p> <hr> <p>The complete working code I've used for testing the solution (note it formats <code>A1:D5</code>, i.e. the first 5 rows and first 4 columns, purely to demonstrate the technique; swap in <code>A1:G4</code> for the range from the question):</p> <pre><code>import xlsxwriter

workbook = xlsxwriter.Workbook('hello.xlsx')
worksheet = workbook.add_worksheet()

for row in range(11):
    for col in range(11):
        worksheet.write(row, col, row + col)

format = workbook.add_format()
format.set_bottom(7)

worksheet.conditional_format('A1:D5', {'type': 'cell',
                                       'criteria': '!=',
                                       'value': 'None',
                                       'format': format})

workbook.close()
</code></pre> <p>Here is what I get in <code>hello.xlsx</code>:</p> <p><a href="https://i.stack.imgur.com/Tylle.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tylle.png" alt="enter image description here"></a></p>
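<p>If you are writing the cell values yourself anyway, a sketch of an alternative (with <code>some_value</code> as a hypothetical stand-in for whatever you actually write) is to pass the format cell-by-cell, which sidesteps the range limitation entirely:</p> <pre><code>format = workbook.add_format()
format.set_bottom(7)

for row in range(4):        # first 4 rows
    for col in range(7):    # first 7 columns only
        worksheet.write(row, col, some_value, format)
</code></pre>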
python|xlsxwriter
2
1,904,992
41,446,153
Python: How to get string from item without [u'']
<p>I'm using Python 2.7 here. I've got a bit of code to extract certain mp3 tags:</p> <pre><code>mp3info = EasyID3(fileName)
print mp3info
print mp3info['genre']
print mp3info.get('genre', default=None)
print str(mp3info['genre'])
print repr(mp3info['genre'])
genre = unicode(mp3info['genre'])
print genre
</code></pre> <p>I have to use the name ['genre'] instead of [2] as the order can vary between tracks. It produces output like this:</p> <pre><code>{'artist': [u'Really Cool Band'], 'title': [u'Really Cool Song'], 'genre': [u'Rock'], 'date': [u'2005']}
[u'Rock']
[u'Rock']
[u'Rock']
[u'Rock']
[u'Rock']
</code></pre> <p>At first I was like, "Why thank you, I do rock", but then I got on with trying to debug the code. As you can see, I've tried a few different approaches, but none of them work. All I want is for it to output</p> <pre><code>Rock
</code></pre> <p>I reckon I could possibly use <code>split</code>, but that could get very messy very quickly as there's a distinct possibility that the artist or title could contain a <code>'</code> character.</p> <p>Any suggestions?</p>
<p>It's not a string that you can use <code>split</code> on; it's a list. That list usually (always?) contains one item, so you can just take the first element:</p> <pre><code>genre = mp3info['genre'][0]
</code></pre>
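<p>If there is any chance the tag is missing, a hedged defensive variant (EasyID3 is dict-like, so <code>.get()</code> with a list default should work; <code>u'Unknown'</code> is just an arbitrary fallback of mine):</p> <pre><code>genre = mp3info.get('genre', [u'Unknown'])[0]
</code></pre>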
python|dictionary|unicode
1
1,904,993
57,226,032
Why does pymysql hang on a select statement that works fine in MySQL workbench?
<p>In MySQL Workbench I run a query that returns results immediately. When running the same query through pymysql, the program seems to hang while fetching the results.</p> <p>I have tried killing additional processes running in MySQL using the KILL command in MySQL Workbench. As I said before, running the query in MySQL Workbench returns results immediately.</p> <p>The query itself is shown below:</p> <pre><code>SELECT at.instrument, at.timestamp, at.account, at.in_out_flag,
       SUM(at.value) AS total_value
FROM accounting.transactions AS at
WHERE at.instrument="AAPL"
  AND at.account="Ned"
  AND at.in_out_flag="OUT"
GROUP BY at.instrument, at.timestamp, at.account, at.in_out_flag
ORDER BY at.timestamp
</code></pre> <p>The Python code used to execute the query, which works nicely with other queries, is shown below. My example gets stuck on the <code>cursor.execute</code> line.</p> <pre><code>def get_list_of_dictionaries_with_select(select_statement):
    conn = get_new_mysql_connection()
    cursor = conn.cursor(pymysql.cursors.DictCursor)
    cursor.execute(select_statement)
    return_value = cursor.fetchall()
    cursor.close()
    conn.close()
    return return_value
</code></pre> <p>The expected result is that this function (<code>get_list_of_dictionaries_with_select</code>) returns a list of dictionaries representing the results of the query. What actually happens is the program just hangs.</p>
<p>MySQL Workbench automatically limits SELECT statements to 1000 rows: its default "Limit Rows" setting effectively appends <code>LIMIT 1000</code>, so the query stops as soon as 1000 matching rows have been found. The same statement run outside Workbench, e.g. through pymysql, has no such cap, so if it matches a large number of rows it has to retrieve all of them. The program isn't hung; the query is simply doing far more work.</p>
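<p>One quick way to test this theory (a sketch; appending the clause mimics what Workbench's row limit effectively does) is to cap the pymysql query the same way and see whether the "hang" disappears:</p> <pre><code>cursor.execute(select_statement + " LIMIT 1000")
</code></pre>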
mysql|python-3.x|macos|mysql-workbench|pymysql
0
1,904,994
57,142,199
Is it possible with Python `coverage` library get coverage report with Total summary if I test only one file?
<p>I am using the coverage library (or the pytest-cov plugin) to produce a coverage report. My program consists of only one file.</p> <p>For a single file, the coverage library does not produce a TOTAL summary line. It also doesn't produce one when I list the files explicitly with <code>pytest-cov</code>, like this:</p> <pre><code>... --cov=a.py --cov=b.py ...
</code></pre> <p>Is it possible to always get a line with the total summary?</p>
<p>You can pass <code>-a</code> (append), so that successive runs accumulate into a single coverage data file; the combined report then includes a TOTAL line:</p> <pre><code>coverage run a.py
coverage run -a b.py
coverage run -a c.py
</code></pre> <p>Then print the report:</p> <pre><code>coverage report -m
</code></pre> <p>Output: report (for example)</p> <pre><code>Name    Stmts   Miss  Cover   Missing
----------------------------------------------
a.py       97      1    99%   95
b.py        1      0   100%
c.py       10      0   100%
----------------------------------------------
TOTAL     108      1    99%
</code></pre>
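<p>If you are going through pytest-cov instead, a hedged equivalent is to point <code>--cov</code> at a directory or package rather than at individual files (the <code>.</code> below assumes your modules live in the current directory):</p> <pre><code>pytest --cov=. --cov-report=term-missing
</code></pre>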
python|code-coverage|pytest
0
1,904,995
44,594,759
spacy adding special case tokenization rules by regular expression or pattern
<p>I want to add a special case for tokenization in spaCy, following the <a href="https://spacy.io/docs/usage/customizing-tokenizer" rel="nofollow noreferrer">documentation</a>. The documentation shows how specific words can be treated as special cases, but I want to be able to specify a pattern (e.g. a suffix). For example, I have a string like this</p> <p><code>text = "A sample string with &lt;word-1&gt; and &lt;word-2&gt;"</code></p> <p>where <code>&lt;word-i&gt;</code> specifies a single word.</p> <p>I know I can handle one special case at a time with the following code. But how can I specify a pattern for that?</p> <pre><code>import spacy
from spacy.symbols import ORTH

nlp = spacy.load('en', vectors=False, parser=False, entity=False)
nlp.tokenizer.add_special_case(u'&lt;WORD&gt;', [{ORTH: u'&lt;WORD&gt;'}])
</code></pre>
<p>You can use regex matches to find the bounds of your special-case strings, and then use <a href="https://spacy.io/docs/api/span#merge" rel="noreferrer">spacy's merge method</a> to merge each match into a single token. <code>add_special_case</code> only works for exact, predefined strings, so it cannot express a pattern. Here is an example:</p> <pre><code>&gt;&gt;&gt; import spacy
&gt;&gt;&gt; import re
&gt;&gt;&gt; nlp = spacy.load('en')
&gt;&gt;&gt; my_str = u'Tweet hashtags #MyHashOne #MyHashTwo'
&gt;&gt;&gt; parsed = nlp(my_str)
&gt;&gt;&gt; [(x.text, x.pos_) for x in parsed]
[(u'Tweet', u'PROPN'), (u'hashtags', u'NOUN'), (u'#', u'NOUN'), (u'MyHashOne', u'NOUN'), (u'#', u'NOUN'), (u'MyHashTwo', u'PROPN')]
&gt;&gt;&gt; indexes = [m.span() for m in re.finditer('#\w+', my_str, flags=re.IGNORECASE)]
&gt;&gt;&gt; indexes
[(15, 25), (26, 36)]
&gt;&gt;&gt; for start, end in indexes:
...     parsed.merge(start_idx=start, end_idx=end)
... 
#MyHashOne
#MyHashTwo
&gt;&gt;&gt; [(x.text, x.pos_) for x in parsed]
[(u'Tweet', u'PROPN'), (u'hashtags', u'NOUN'), (u'#MyHashOne', u'NOUN'), (u'#MyHashTwo', u'PROPN')]
&gt;&gt;&gt;
</code></pre>
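<p>For what it's worth, newer spaCy releases deprecate <code>Doc.merge</code> in favour of the retokenizer context manager; a sketch of the same idea under that API (assuming spaCy 2.x+, where <code>doc.char_span</code> and <code>doc.retokenize</code> exist):</p> <pre><code>import re

indexes = [m.span() for m in re.finditer(r'#\w+', my_str)]
with parsed.retokenize() as retokenizer:
    for start, end in indexes:
        span = parsed.char_span(start, end)
        if span is not None:  # char_span returns None if bounds don't align with tokens
            retokenizer.merge(span)
</code></pre>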
python|spacy
13
1,904,996
44,434,412
beautifulsoup unable to extract href link
<p>So I am using Selenium with PhantomJS as my webdriver, plus BeautifulSoup. Currently I want to extract all the links which sit underneath the title column. <a href="http://www.bursamalaysia.com/market/listed-companies/company-announcements/#/?category=FA&amp;sub_category=FA1&amp;alphabetical=All&amp;company=9695" rel="nofollow noreferrer">The site I want to extract from</a>.</p> <p>However, it seems not to be picking up these links at all! What is going on?</p> <pre><code># The standard library modules
import os
import sys
import re

# The wget module
import wget

# The BeautifulSoup module
from bs4 import BeautifulSoup

# The selenium module
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By


def getListLinks(link):
    #setup drivers
    driver = webdriver.PhantomJS(service_args=['--ignore-ssl-errors=true'])
    driver.get(link)  # load the web page
    src = driver.page_source
    #Get text and split it
    soup = BeautifulSoup(src, 'html5lib')

    print soup

    links = soup.find_all('a')
    print links

    driver.close()

getListLinks("http://www.bursamalaysia.com/market/listed-companies/company-announcements/#/?category=FA&amp;sub_category=FA1&amp;alphabetical=All&amp;company=9695&amp;date_from=01/01/2012&amp;date_to=31/12/2016")
</code></pre> <p>Here is an example of a link I want to extract:</p> <pre><code>&lt;a href="/market/listed-companies/company-announcements/5455245"&gt;Quarterly rpt on consolidated results for the financial period ended 31/03/2017&lt;/a&gt;
</code></pre>
<p>What I really don't understand is why you are mixing BeautifulSoup with Selenium. Selenium has its own API for extracting DOM elements, so you don't need to bring BS4 into the picture. Besides, BS4 only sees the HTML string you hand it; on a page like this one, where the content is generated dynamically, you need that content to have finished loading before you grab <code>page_source</code>, whereas Selenium can locate (and wait for) the elements directly.</p> <p>Just do:</p> <pre><code>driver.find_elements_by_tag_name('a')
</code></pre> <p>(Note the plural <code>find_elements</code>, since you want all of the links.)</p>
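<p>A minimal end-to-end sketch, assuming PhantomJS as in your script (the 10-second timeout is an arbitrary choice, and this presumes the announcement links have rendered by then):</p> <pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.PhantomJS(service_args=['--ignore-ssl-errors=true'])
driver.get(link)

# wait until at least one anchor is present, then collect the hrefs
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.TAG_NAME, 'a')))
for a in driver.find_elements_by_tag_name('a'):
    print a.get_attribute('href')

driver.quit()
</code></pre>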
python|selenium|web-scraping|beautifulsoup|phantomjs
2
1,904,997
23,496,639
python sorted and chain - django querysets
<p>I have this</p> <pre><code>b_articles = tc.tc_buch_articles.all()
j_articles = tc.tc_journal_articles.all()

joined = itertools.chain(b_articles, j_articles)
sorter = lambda x: x.article_in_collection__year if hasattr(x, 'article_in_collection__year') else x.article_in_journal__year
articles = sorted(joined, key=sorter, reverse=True)
</code></pre> <p>below are my models</p> <p>I am getting the error:</p> <pre><code>'EntryArticleInCollection' object has no attribute 'article_in_journal__year'
</code></pre> <p>I want to achieve this kind of result:</p> <pre><code>a = 1, 3, 8, 10
b = 2, 3, 5, 12
res = 1, 2, 3, 3, 5, 8, 10, 12
</code></pre> <p>models:</p> <pre><code>class TopicCenter(models.Model):
    user = models.ForeignKey(Person, related_name="user_tcs")
    title = models.TextField()
    subtitle = models.TextField()

class ArticleInCollection(models.Model):
    user = models.ForeignKey(Person, related_name="user_buchaufsatz")
    title = models.TextField()
    book = models.ForeignKey(Book, related_name="book_aufsatze")
    added = models.DateTimeField(auto_now_add=True, blank=True)
    year = models.IntegerField(default=0000)

class ArticleInJournal(models.Model):
    user = models.ForeignKey(Person, related_name="user_journalaufsatz")
    title = models.TextField()
    journal = models.ForeignKey(Journal, related_name="journal_aufsatze")
    added = models.DateTimeField(auto_now_add=True, blank=True)
    year = models.IntegerField(default=0000)

class EntryArticleInCollection(models.Model):
    added = models.DateTimeField(auto_now_add=True, blank=True)
    lastmodified = models.DateTimeField(auto_now=True, blank=True)
    topiccenter = models.ForeignKey(TopicCenter, related_name="tc_buch_articles")
    article_in_collection = models.ForeignKey(ArticleInCollection, related_name="bucharticle_entries")

class EntryArticleInJournal(models.Model):
    added = models.DateTimeField(auto_now_add=True, blank=True)
    lastmodified = models.DateTimeField(auto_now=True, blank=True)
    topiccenter = models.ForeignKey(TopicCenter, related_name="tc_journal_articles")
    article_in_journal = models.ForeignKey(ArticleInJournal, related_name="journal_article_entries")
</code></pre>
<p>How about a <code>getattr()</code> approach? One caveat: the default argument of <code>getattr()</code> is evaluated eagerly, so passing <code>x.article_in_journal</code> as the default would itself raise <code>AttributeError</code> on collection entries. Using <code>None</code> as the default avoids that:</p> <pre><code>sorter = lambda x: (getattr(x, 'article_in_collection', None) or x.article_in_journal).year
</code></pre> <p>or:</p> <pre><code>def sorter(x):
    try:
        return x.article_in_collection.year
    except AttributeError:
        return x.article_in_journal.year
</code></pre> <p>The problem with your solution is that you first need to get the <code>ForeignKey</code> field and then get the field on the related model via <code>dot notation</code>. The double-underscore syntax only works inside queryset lookups, not on model instances.</p>
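<p>If you'd rather avoid attribute probing entirely, a hypothetical decorate-sort-undecorate sketch (tagging each entry with its year while chaining, using the querysets from your question):</p> <pre><code>import itertools

joined = itertools.chain(
    ((e.article_in_collection.year, e) for e in b_articles),
    ((e.article_in_journal.year, e) for e in j_articles),
)
articles = [e for year, e in sorted(joined, key=lambda pair: pair[0], reverse=True)]
</code></pre>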
python|django
2
1,904,998
23,983,150
How can I log a functions arguments in a reusable way in Python?
<p>I've found myself writing code like this several times:</p> <pre><code>def my_func(a, b, *args, **kwargs):
    saved_args = locals()  # Learned about this from http://stackoverflow.com/a/3137022/2829764
    local_var = "This is some other local var that I don't want to log"
    try:
        a/b
    except Exception as e:
        logging.exception("Oh no! My args were: " + str(saved_args))
        raise
</code></pre> <p>Running <code>my_func(1, 0, "spam", "ham", my_kwarg="eggs")</code> gives this output on stderr:</p> <pre><code>ERROR:root:Oh no! My args were: {'a': 1, 'args': (u'spam', u'ham'), 'b': 0, 'kwargs': {'my_kwarg': u'eggs'}}
Traceback (most recent call last):
  File "/Users/kuzzooroo/Desktop/question.py", line 17, in my_func
    a/b
ZeroDivisionError: division by zero
</code></pre> <p>My question is, can I write something reusable so that I don't have to save <code>locals()</code> at the top of the function? And can it be done in a nice Pythonic way?</p> <p>EDIT: one more request in response to @mtik00: ideally I'd like some way to access <code>saved_args</code> or the like from within <code>my_func</code> so that I can do something other than log uncaught exceptions (maybe I want to catch the exception in <code>my_func</code>, log an error, and keep going).</p>
<p><strong>Decorators</strong> are what you are looking for. Here's an example:</p> <pre><code>import logging
from functools import wraps


def arg_logger(func):
    @wraps(func)
    def new_func(*args, **kwargs):
        saved_args = locals()
        try:
            return func(*args, **kwargs)
        except Exception:
            # log the arguments the wrapped call received, then re-raise
            logging.exception("Oh no! My args were: " + str(saved_args))
            raise

    return new_func


@arg_logger
def func(arg1, arg2):
    return 1 / 0

if __name__ == '__main__':
    func(1, 2)
</code></pre> <p>Here, I'm using arg_logger() as a <em>decorator</em>. Apply the decorator to any function you want to have this new behavior.</p> <p>There's a good discussion about decorators <a href="http://en.wikipedia.org/wiki/Python_syntax_and_semantics#Decorators" rel="nofollow">here</a>.</p>
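<p>Regarding the edit: if you also want the saved arguments available <em>inside</em> the wrapped function, a hedged sketch using <code>inspect.getcallargs()</code> (available since Python 2.7) maps them to parameter names without relying on <code>locals()</code> being the first statement:</p> <pre><code>import inspect
import logging

def my_func(a, b, *args, **kwargs):
    # dict of parameter names -&gt; values, usable anywhere in the body
    saved_args = inspect.getcallargs(my_func, a, b, *args, **kwargs)
    try:
        a / b
    except ZeroDivisionError:
        logging.error("Caught it; args were: %s", saved_args)
        # keep going instead of re-raising
</code></pre>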
python|function|logging|arguments
7
1,904,999
20,786,895
python populate a shelve object/dictionary with multiple keys
<p>I have a list of 4-grams that I want to use to populate a dictionary/shelve object:</p> <pre><code>['I','go','to','work']
['I','go','there','often']
['it','is','nice','being']
['I','live','in','NY']
['I','go','to','work']
</code></pre> <p>So that we have something like:</p> <pre><code>four_grams['I']['go']['to']['work']=1
</code></pre> <p>Any newly encountered 4-gram is populated with its four keys and the value 1, and its value is incremented if it is encountered again.</p>
<p>You could do something like this:</p> <pre><code>import shelve
from collections import defaultdict

db = shelve.open('/tmp/db')

grams = [
    ['I','go','to','work'],
    ['I','go','there','often'],
    ['it','is','nice','being'],
    ['I','live','in','NY'],
    ['I','go','to','work'],
]

for gram in grams:
    # fetch (or create) the nested dict rooted at the first word
    path = db.get(gram[0], defaultdict(int))

    def f(path, word):
        # walk one level deeper, creating the level if it's missing
        if not word in path:
            path[word] = defaultdict(int)
        return path[word]

    # descend through the middle words, then count the final word
    reduce(f, gram[1:-1], path)[gram[-1]] += 1
    # shelve doesn't track nested mutations, so write the tree back
    db[gram[0]] = path

print db

db.close()
</code></pre> <p>(In Python 3 you would import <code>reduce</code> from <code>functools</code>; alternatively, opening the shelf with <code>writeback=True</code> lets you skip the explicit reassignment.)</p>
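<p>A quick usage sketch for reading the counts back out (same <code>/tmp/db</code> path as above):</p> <pre><code>import shelve

db = shelve.open('/tmp/db')
print db['I']['go']['to']['work']   # -&gt; 2, since that 4-gram appeared twice
db.close()
</code></pre>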
python|dictionary|n-gram|shelve
1