Dataset columns: `Unnamed: 0` (int64, 0 to 1.91M), `id` (int64, 337 to 73.8M), `title` (string, 10 to 150 chars), `question` (string, 21 to 64.2k chars), `answer` (string, 19 to 59.4k chars), `tags` (string, 5 to 112 chars), `score` (int64, -10 to 17.3k).
1,903,000
59,509,550
How to get the Fourier series coefficients back after applying fft to a set of data points?
Assume that I have a series of points and I applied `np.fft.fft` to the data. How can I recognize the coefficients of the original Fourier series? I know that a_0 can be extracted directly from the maximum value of the FFT output, but what about a_n and b_n? I know they should also appear in the spectrum, but is there a method here?

```python
X = np.array([917, 918, 919, 918, 917, 916, 915, 913, 912, 910, 906, 903,
              901, 899, 897, 896, 896, 896, 896, 896, 896, 897, 898, 900,
              903, 905, 908, 911, 914, 916, 919, 919, 918, 918, 917, 916,
              914, 913, 911, 913, 905, 901, 899, 898])
f = np.fft.fft(X)
```

But if we plot X against time we find it is periodic and can be written in terms of a Fourier series:

```
f(x) = a_0/2 + Sigma(a_n*np.cos(2*n*np.pi/L) + b_n*np.sin(2*n*np.pi/L))
```

How can I get back a_0, a_n, and b_n?
OK, so you have 7 input elements, starting with 1215+0i, 1219+0i and so on (complex values, but in your case they happen to be purely real). It's normal to pad to a power of 2, but OK. The result will start with the DC coefficient a_0 (also complex, but the imaginary component will be 0), and then the next element of the result will be the lowest non-DC frequency (in your notation a_1 + b_1*i), and so on. From the doc at https://docs.scipy.org/doc/numpy/reference/routines.fft.html#module-numpy.fft:

> The values in the result follow so-called "standard" order: If A = fft(a, n), then A[0] contains the zero-frequency term (the sum of the signal), which is always purely real for real inputs. Then A[1:n/2] contains the positive-frequency terms, and A[n/2+1:] contains the negative-frequency terms, in order of decreasingly negative frequency. For an even number of input points, A[n/2] represents both positive and negative Nyquist frequency, and is also purely real for real input. For an odd number of input points, A[(n-1)/2] contains the largest positive frequency, while A[(n+1)/2] contains the largest negative frequency.

Now since your input array is all reals, you might look into `np.fft.rfft`, which will be faster. The result will be symmetric (because of the real input), so it'll contain only half the actual coefficients; the other half can be derived from the given result. See the same documentation page for more.
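For completeness, a minimal sketch of how the coefficients fall out of the FFT output, assuming only the question's convention f(x) = a_0/2 + Sigma(a_n*cos + b_n*sin):

```python
import numpy as np

X = np.array([917, 918, 919, 918, 917])  # substitute the full data from the question
N = len(X)
F = np.fft.rfft(X)         # rfft suffices because the input is purely real

a0 = 2 * F[0].real / N     # so a0/2 is simply the mean of X
an = 2 * F[1:].real / N    # cosine coefficients a_1, a_2, ...
bn = -2 * F[1:].imag / N   # sine coefficients b_1, b_2, ...
```

These formulas follow from pairing each FFT bin with its conjugate bin for a real signal; they are a sketch of the standard identities, not code from the original answer.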
python|python-3.x|numpy
0
1,903,001
25,329,012
Py.test collection phase taking very long
I am really quite new to development in Python in general, let alone testing with pytest. My problem is that the pytest collection phase runs unusually slowly. I am specifying the test directory, which contains only a handful of files, with only one file containing three tests. The collection takes pretty much a whole minute, after which the actual tests run in under a few seconds. I have looked at similar questions but couldn't find a solution. I don't think it matters (as py.test is slow even from the command line) but I am using the PyCharm IDE. The OS is Ubuntu.

This may be relevant: if I terminate the process after a few seconds I usually end up with a stack trace ending as follows:

```
<A FEW LINES OMITTED...>
  File "/usr/local/lib/python2.7/dist-packages/_pytest/core.py", line 413, in __call__
    return self._docall(methods, kwargs)
  File "/usr/local/lib/python2.7/dist-packages/_pytest/core.py", line 424, in _docall
    res = mc.execute()
  File "/usr/local/lib/python2.7/dist-packages/_pytest/core.py", line 315, in execute
    res = method(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/_pytest/helpconfig.py", line 27, in pytest_cmdline_parse
    config = __multicall__.execute()
  File "/usr/local/lib/python2.7/dist-packages/_pytest/core.py", line 315, in execute
    res = method(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/_pytest/config.py", line 636, in pytest_cmdline_parse
    self.parse(args)
  File "/usr/local/lib/python2.7/dist-packages/_pytest/config.py", line 747, in parse
    self._preparse(args)
  File "/usr/local/lib/python2.7/dist-packages/_pytest/config.py", line 709, in _preparse
    self._initini(args)
  File "/usr/local/lib/python2.7/dist-packages/_pytest/config.py", line 704, in _initini
    self.inicfg = getcfg(args, ["pytest.ini", "tox.ini", "setup.cfg"])
  File "/usr/local/lib/python2.7/dist-packages/_pytest/config.py", line 861, in getcfg
    if exists(p):
  File "/usr/local/lib/python2.7/dist-packages/_pytest/config.py", line 848, in exists
    return path.check()
  File "/usr/local/lib/python2.7/dist-packages/py/_path/local.py", line 352, in check
    return exists(self.strpath)
  File "/usr/lib/python2.7/genericpath.py", line 18, in exists
    os.stat(path)
KeyboardInterrupt
```

Or sometimes...

```
<STACK TRACE...>
  File "/usr/local/lib/python2.7/dist-packages/py/_iniconfig.py", line 50, in __init__
    f = open(self.path)
KeyboardInterrupt
```

Maybe one of the two last calls before the KeyboardInterrupt is very slow?

Please do ask for more detail should you require it!

Cheers!
Add `PYTHONDONTWRITEBYTECODE=1` to your environment variables!

- Windows batch: `set PYTHONDONTWRITEBYTECODE=1`
- Unix: `export PYTHONDONTWRITEBYTECODE=1`
- `subprocess.run`: add the keyword argument `env={'PYTHONDONTWRITEBYTECODE': '1'}`

Note that the first two options are only valid for your current terminal session.

---

Here is how I found this out: `pytest` **was being unusably slow from the command line, but working fine from within PyCharm**. Copying the PyCharm command into cmd.exe (it executes a small helper script) was also unusably slow. So I printed out the environment variables at `os.environ`, tried running with those, and it was fast! Then I eliminated them one by one.
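A small sketch of the `subprocess` variant, as an illustration rather than part of the original answer: passing a bare `env` dict replaces the whole environment, so it is usually safer to copy the current one and add the flag on top.

```python
import os
import subprocess

# Inherit the current environment, then add the flag on top of it
env = {**os.environ, "PYTHONDONTWRITEBYTECODE": "1"}
subprocess.run(["pytest", "tests/"], env=env)
```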
python|ubuntu|pycharm|pytest
2
1,903,002
70,788,062
Adding dates for unique EIDs row-wise to get Years of Service
I have the following dataset:

| EID | Company | Start_Date | End_Date | T_F |
|------|---------|------------|------------|-------|
| A111 | ABC | 2015-07-20 | NaT | True |
| B111 | DEF | 1983-06-01 | NaT | False |
| B111 | ABC | 2017-01-01 | NaT | True |
| C111 | GHI | 1980-10-01 | 1981-08-31 | True |
| D111 | JKL | 1973-05-01 | 1977-11-30 | True |
| E111 | ABC | 2006-04-24 | NaT | True |
| F111 | ABC | 1991-06-10 | 1994-12-15 | False |
| F111 | MNO | 1994-12-01 | 2002-08-31 | False |
| F111 | ABC | 2002-08-01 | NaT | True |
| G111 | ABC | 1979-01-01 | NaT | True |
| H111 | ABC | 2002-02-01 | NaT | True |

The expected output is as follows:

| EID | Company | Start_Date | End_Date | T_F | YoS |
|------|---------|------------|------------|-------|------|
| A111 | ABC | 2015-07-20 | NaT | True | NaN |
| B111 | DEF | 1983-06-01 | NaT | False | (2017-01-01) - (1983-06-01) |
| B111 | ABC | 2017-01-01 | NaT | True | NaN |
| C111 | GHI | 1980-10-01 | 1981-08-31 | True | (1981-08-31) - (1980-10-01) |
| D111 | JKL | 1973-05-01 | 1977-11-30 | True | (1977-11-30) - (1973-05-01) |
| E111 | ABC | 2006-04-24 | NaT | True | NaN |
| F111 | ABC | 1991-06-10 | 1994-12-15 | False | (2002-08-01) - (1991-06-10) |
| F111 | MNO | 1994-12-01 | 2002-08-31 | False | NaN |
| F111 | ABC | 2002-08-01 | NaT | True | NaN |
| G111 | ABC | 1979-01-01 | NaT | True | NaN |
| H111 | ABC | 2002-02-01 | NaT | True | NaN |

This is what I am trying to do:

1. Where an EID has only one record and the company is ABC, the YoS column should be NULL. End_Date is always blank in these cases.
2. Where an EID has multiple records and his/her last record is with company ABC, then YoS is the start date of the ABC record minus the start date of the first company.
3. Where an EID has only one record and the company is not ABC, then YoS is calculated as End_Date - Start_Date.
4. Only the first record will have a YoS value; the other records will contain NaN.
5. If an employee has multiple records, 99% of the time the employee's last record will be with company ABC.

I tried the following code, but I believe it is only half the solution (or incorrect):

```python
result.loc[~(result.CLEAN_NAME == 'HONEYWELL / HON') & (result.T_F == False), 'Hon_StartDate'] = result['Start_Date']
```

Any leads would be appreciated. Thanks!
Not the best solution, but it gets the job done. Considering the input is a CSV file stored in `company.csv` and using `groupby` on `EID`:

```python
from itertools import chain

import pandas as pd


def compute_yos(record):
    if len(record) == 1 and record.iloc[0]["Company"] == "ABC":
        return [pd.NaT]
    elif len(record) > 1 and record.iloc[-1]["Company"] == "ABC":
        yos = [record.iloc[-1]["Start_Date"] - record.iloc[0]["Start_Date"]]
        return yos + [pd.NaT] * (len(record) - 1)
    elif len(record) == 1 and record.iloc[0]["Company"] != "ABC":
        return [record.iloc[0]["End_Date"] - record.iloc[0]["Start_Date"]]
    else:
        return [pd.NaT] * len(record)


input_df = pd.read_csv("company.csv")
print(input_df)

input_df[["Start_Date", "End_Date"]] = input_df[["Start_Date", "End_Date"]].apply(
    pd.to_datetime
)
grouping = input_df.groupby(["EID"]).apply(compute_yos)
concat_grouping = chain.from_iterable(grouping)
input_df["YoS"] = list(concat_grouping)
print(input_df)
```

input:

```
     EID Company  Start_Date    End_Date    T_F
0   A111     ABC  2015-07-20         NaT   True
1   B111     DEF  1983-06-01         NaT  False
2   B111     ABC  2017-01-01         NaT   True
3   C111     GHI  1980-10-01  1981-08-31   True
4   D111     JKL  1973-05-01  1977-11-30   True
5   E111     ABC  2006-04-24         NaT   True
6   F111     ABC  1991-06-10  1994-12-15  False
7   F111     MNO  1994-12-01  2002-08-31  False
8   F111     ABC  2002-08-01         NaT   True
9   G111     ABC  1979-01-01         NaT   True
10  H111     ABC  2002-02-01         NaT   True
```

output:

```
     EID Company  Start_Date    End_Date    T_F         YoS
0   A111     ABC  2015-07-20         NaT   True         NaT
1   B111     DEF  1983-06-01         NaT  False  12268 days
2   B111     ABC  2017-01-01         NaT   True         NaT
3   C111     GHI  1980-10-01  1981-08-31   True    334 days
4   D111     JKL  1973-05-01  1977-11-30   True   1674 days
5   E111     ABC  2006-04-24         NaT   True         NaT
6   F111     ABC  1991-06-10  1994-12-15  False   4070 days
7   F111     MNO  1994-12-01  2002-08-31  False         NaT
8   F111     ABC  2002-08-01         NaT   True         NaT
9   G111     ABC  1979-01-01         NaT   True         NaT
10  H111     ABC  2002-02-01         NaT   True         NaT
```
python|python-3.x|pandas|date
1
1,903,003
2,348,282
What is the pysvn command equivalent to "svn info file:///path/to/svn/repo"?
I'm looking for a good Python library to manipulate Subversion repositories. I'm trying out [PySvn](http://pysvn.tigris.org/), but I'm finding that it can't handle something like

```python
pysvn.Client().info("/path/to/svn/repo")
```

because the path is not a working copy. Does anyone know of a good library that can handle this kind of thing?

**Update**: I'll try to simplify it. I want to get info about the repository itself, the same kind of info I get when I run `svn info file:///path/to/svn/repo`.
Did you try `info2` instead of `info`? The documentation says it can access the URL of a repository.
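An untested sketch of what that might look like; the exact keys on the returned info entries are assumptions from the pysvn docs rather than verified here:

```python
import pysvn

client = pysvn.Client()
# Unlike info(), info2() accepts a repository URL, not only a working copy path
for path, info in client.info2("file:///path/to/svn/repo", recurse=False):
    print(path, info["rev"].number, info["last_changed_author"])
```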
python|svn
5
1,903,004
3,013,270
Validating and filling default values in XML based on XSD in Python
How do I fill in default values in my XML during validation against an XSD? If an attribute is not defined with `use="required"` and has `default="1"`, it should be possible to fill these default values from the XSD into the XML.

Example. Original XML:

```xml
<a>
    <b/>
    <b c="2"/>
</a>
```

XSD schema:

```xml
<xs:element name="a">
    <xs:complexType>
        <xs:sequence>
            <xs:element name="b" maxOccurs="unbounded">
                <xs:attribute name="c" default="1"/>
            </xs:element>
        </xs:sequence>
    </xs:complexType>
</xs:element>
```

I want to validate the original XML against the XSD and fill in all default values:

```xml
<a>
    <b c="1"/>
    <b c="2"/>
</a>
```

How do I do this in Python? Validation itself is no problem (e.g. with XMLSchema); the problem is the default values.
To follow up on my comment, here's some code:

```python
from lxml import etree

schema_root = etree.XML('''\
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="a">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="b" maxOccurs="unbounded">
          <xs:complexType>
            <xs:attribute name="c" default="1" type="xs:string"/>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>''')

xmls = '''<a>
<b/>
<b c="2"/>
</a>'''

schema = etree.XMLSchema(schema_root)
parser = etree.XMLParser(schema=schema, attribute_defaults=True)

root = etree.fromstring(xmls, parser)
result = etree.tostring(root, pretty_print=True, method="xml")
print result
```

will give you

```xml
<a>
<b c="1"/>
<b c="2"/>
</a>
```

I've modified your XSD slightly: wrapped `xs:attribute` in `xs:complexType` and added the schema namespace. To have your defaults filled in, you need to pass `attribute_defaults=True` to `etree.XMLParser()` and it should work.
python|xml|xsd
3
1,903,005
2,580,374
Syntax highlighting: rich text box control for .NET
I'm looking for a free control/component/library, something like a rich text box, for editing Python code (or other languages). I'd like it to have some features:

- Code highlighting
- Auto indent
- Line numbering
- Defining new styles or rules of highlighting (for OpenType keywords)

Is there such a control, or do I have to write my own?
Have a look at [ScintillaNET](http://scintillanet.codeplex.com/).

> ScintillaNET is a powerful text editing control for Windows Forms applications and a managed wrapper around the versatile Scintilla Windows control. Created with the developer in mind, the ScintillaNET API makes it simple to add advanced text editing and syntax highlighting to your application or IDE (Integrated Development Environment).
.net|python|controls|ironpython|components
4
1,903,006
2,596,929
Python Mindstorms RCX
I've got 30 unopened Lego Mindstorms kits that I'd love to use in my intro programming class to do some simple robotics stuff at the end of the year. We're using Python in the class, so I'd prefer there to be a way for the kids to write the programs in Python. Unfortunately, these are old kits with RCX bricks, not the newer NXT ones, so most of the projects like NXT_Python can't help me. Is there any way to make that happen?
Running Python on the brick itself is probably hard (for the reasons others already stated: the size of the interpreter and the available RAM on the brick, for example), but this might be of interest:

[According to this thread](http://mail.python.org/pipermail/python-list/2001-November/thread.html#691919) you should be able to use [pylnp](http://www.hare.demon.co.uk/lego/pylnp.html) (remote) combined with [BrickOS](http://brickos.sourceforge.net/) (on the brick; formerly legOS).
python|lego-mindstorms
3
1,903,007
67,072,814
Pandas: selecting multiple columns in dataframe by integer
Assume I have this dataframe:

```python
df = pd.DataFrame({'a': (1, 2, 3),
                   'b': (1, 2, 3),
                   'c': ("one", "two", "three"),
                   'd': (4, 5, 6),
                   'e': (4, 5, 6),
                   'f': (7, 8, 9),
                   'g': (7, 8, 9),
                   'h': (7, 8, 9)})
```

I am trying to select the first, third, and fifth through last columns. The desired output would be:

```
   a      c  e  f  g  h
0  1    one  4  7  7  7
1  2    two  5  8  8  8
2  3  three  6  9  9  9
```

How do I select multiple columns that are not consecutive, using integers? I have tried the following:

```python
df.iloc[,[0, 3, 5:]]
df.loc[,[0, 3, 5:]]
df.iloc[,[0, 3, 5:len(df.columns)]]
df.loc[,[0, 3, 5:len(df.columns)]]
df.iloc[,[0 + 3 + 5:]]
df.loc[,[0 + 3 + 5:]]
df.iloc[,[0 + 3 + 5:len(df.columns)]]
df.loc[,[0 + 3 + 5:len(df.columns)]]
```

None worked.

Please advise.
Use [`np.r_`](https://numpy.org/doc/stable/reference/generated/numpy.r_.html) for joining slicers. Python counts from `0`, so the third column needs index `2` and "from the 5th column on" needs `4:`:

```python
df = df.iloc[:, np.r_[0, 2, 4:len(df.columns)]]
print(df)

   a      c  e  f  g  h
0  1    one  4  7  7  7
1  2    two  5  8  8  8
2  3  three  6  9  9  9
```
pandas|dataframe|subset
1
1,903,008
66,848,740
How to pass updates into pd.read_sql? Update
I have this query that I use on a weekly basis to create a report using `pd.read_sql`. I want to be able to update the store numbers in the CASE statement, the DATEADD, and the `Store IN` list at the end of the statement without having to change the store numbers and dates manually. Is there any way I can edit the query to make these updates?

This is the query:

```python
dataframe = pd.read_sql("""
SELECT Top(10)
    CAST(Store as VARCHAR) + 'þ' as Store,
    CONVERT(VARCHAR, Tran_Dt2, 101) + 'þ' as Tran_Dt,
    CONVERT(char(5), Start_Time, 108) + 'þ' as Start_Time,
    [Count]
FROM (
    SELECT
        CASE
            WHEN [Store] = 313 THEN 3174
            WHEN [Store] = 126 THEN 3191
        END AS Store,
        DATEADD(YEAR, +2, DATEADD(DAY, +4, Tran_Dt2)) as Tran_Dt2,
        [Start_Time],
        [Count],
        Store as Sister_Store
    FROM (
        SELECT Store, CONVERT(datetime, Tran_Dt) as Tran_Dt2, Start_Time, Count
        FROM [VolumeDrivers].[dbo].[SALES_DRIVERS_ITC_Signup_65wks]
        WHERE CONVERT(datetime, Tran_Dt) between CONVERT(datetime,'2/8/2019') and CONVERT(datetime,'3/15/2019')
            AND Store IN (313, 126)  --Single Store: Store = Store #
    ) AS A
) AS B
ORDER BY Tran_Dt2, Store
""", con=conn)
```

I would want to be able to do something like declaring variables and having them populate in the code, such as:

```python
oldstore1 = 313
newstore1 = 3174
oldstore2 = 126
newstore2 = 3191

dataframe = pd.read_sql("""...
    ...
    SELECT
        CASE
            WHEN [Store] = oldstore1 THEN newstore1
            WHEN [Store] = oldstore2 THEN newstore2
    ...
```

UPDATE----

I am currently at this point, and had the query working until my kernel restarted and I lost my code. Any tips on why it isn't working anymore?

```python
# Declare variables for queries
old_store1 = 313
new_store1 = 3157
old_store2 = 126
new_store2 = 3196
datefrom = '2/8/2019'
dateto = '3/15/2019'
yearadd = '+2'
dayadd = '+4'

ITC = pd.read_sql("""SELECT
    CAST(Store as VARCHAR) + 'þ' as Store,
    CONVERT(VARCHAR, Tran_Dt2, 101) + 'þ' as Tran_Dt,
    CONVERT(char(5), Start_Time, 108) + 'þ' as Start_Time,
    [Count]
FROM (
    SELECT
        CASE
            WHEN [Store] = {old_store1} THEN {new_store1}
            WHEN [Store] = {old_store2} THEN {new_store2}
        END AS Store,
        DATEADD(YEAR, {yearadd}, DATEADD(DAY, {dayadd}, Tran_Dt2)) as Tran_Dt2,
        [Start_Time],
        [Count],
        Store as Sister_Store
    FROM (
        SELECT Store, CONVERT(datetime, Tran_Dt) as Tran_Dt2, Start_Time, Count
        FROM [VolumeDrivers].[dbo].[SALES_DRIVERS_ITC_Signup_65wks]
        WHERE CONVERT(datetime, Tran_Dt) between CONVERT(datetime,{datefrom}) and CONVERT(datetime,{dateto})
            AND Store IN ({old_store1}, {old_store2})  --Single Store: Store = Store #
    ) AS A
) AS B
ORDER BY Tran_Dt2, Store
""", con=conn)
```
I was able to figure out why it wasn't working. Python 3.6 and later has f-strings (formatted string literals): putting `f` in front of the query string lets you interpolate the variables you created. I know this isn't the most secure way of executing the script, and I'll look into a more secure approach in the future. Thanks for all the help!

```python
# Declare variables for queries
old_store1 = 313
new_store1 = 3157
old_store2 = 126
new_store2 = 3196
datefrom = '2/15/2019'
dateto = '3/22/2019'
yearadd = '+2'
dayadd = '+4'

ITC = pd.read_sql(f"""SELECT
    CAST(Store as VARCHAR) + 'þ' as Store,
    CONVERT(VARCHAR, Tran_Dt2, 101) + 'þ' as Tran_Dt,
    CONVERT(char(5), Start_Time, 108) + 'þ' as Start_Time,
    [Count]
FROM (
    SELECT
        CASE
            WHEN [Store] = {old_store1} THEN {new_store1}
            WHEN [Store] = {old_store2} THEN {new_store2}
.....

# run code and verify it works
Sales_Drivers_ITCSignup = pd.read_sql(ITCQuery, con=conn, index_col='Store')
Sales_Drivers_ITCSignup.head()
```
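For reference, a hedged sketch of the parameter-binding alternative hinted at above, as an illustration rather than part of the original answer. `pd.read_sql` accepts a `params` argument; the placeholder style (`?` here) depends on the DB-API driver behind `conn` (pyodbc, for example, uses `?`):

```python
# Bound parameters keep the values out of the SQL text entirely
query = """
SELECT Store, CONVERT(datetime, Tran_Dt) as Tran_Dt2, Start_Time, Count
FROM [VolumeDrivers].[dbo].[SALES_DRIVERS_ITC_Signup_65wks]
WHERE CONVERT(datetime, Tran_Dt) BETWEEN CONVERT(datetime, ?) AND CONVERT(datetime, ?)
  AND Store IN (?, ?)
"""
ITC = pd.read_sql(query, con=conn, params=[datefrom, dateto, old_store1, old_store2])
```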
python|python-3.x|anaconda
0
1,903,009
42,927,488
How to extract words from a list of lists that begin with x
I want to extract all words that begin with a given letter from a list of lists. I have the following code, but it does not work for a list of lists.

```python
my_list = [['ARON', '0.1'], ['BEY', '0.2'], ['ABI', '0.05'], ['ZBBY', '0.9'], ['KB', '0.4']]

result = []
for i in sorted_firstnames:
    if i[0] == 'a'.upper():
        result.append(i)
result
```
You can try this:

```python
>>> x = 'a'
>>> print([i for i in my_list if i[0][0].casefold() == x])
[['ARON', '0.1'], ['ABI', '0.05']]
```

And if you want only the words:

```python
>>> print([i[0] for i in my_list if i[0][0].casefold() == x])
['ARON', 'ABI']
```
python|python-2.7|python-3.x|nested-lists
0
1,903,010
72,200,654
How to get client secret in keycloak with admin user in different realm using python-keycloak
I have this workflow in place, and it works:

1. I get a token from Keycloak with the admin username/password from the endpoint `auth/realms/master/protocol/openid-connect/token`.
2. With this token I request the client secret of a specific client that lives in another realm, not master. So I request the endpoint `auth/admin/realms/realm_name/clients/client_name/client-secret`, providing my brand new token as a bearer header, and I get the client secret.
3. With this client secret I get a client token, requesting with client credentials from the endpoint `auth/realms/realm_name/protocol/openid-connect/token`.
4. Finally, I use this client token for my own stuff.

I can use python-keycloak to get an admin token:

```python
keycloak_admin = KeycloakAdmin(server_url=url,
                               username='user',
                               password='password',
                               verify=True)
```

But once I'm here, I cannot get the client secret, because my client is not in the admin realm. Using the browser with my regular admin I can switch between realms and access the other realm's clients, and as I said, using a series of requests to specific endpoints I can get what I want working, but I don't know how to do it using python-keycloak.

Thanks a lot for your help. I hope I've made myself clear enough.

Regards
I have fixed this issue. In case someone is facing something similar, it is [quite similar to the requests strategy](https://stackoverflow.com/questions/53538100/how-to-get-client-secret-via-keycloak-api).

The key is paying attention to the required endpoint, whether it's admin or openid:

```python
serv_url = "https://{keycloak_server}/auth/"

# Getting an admin token with the admin user in the master realm
keycloak_admin = KeycloakAdmin(server_url=serv_url,
                               username='user',
                               password='pass',
                               verify=True)

# Use that admin token to connect to the other realm where the client is located
insights_admin = KeycloakAdmin(server_url=serv_url,
                               realm_name='{realm_name}',
                               client_id='{client_name}',
                               verify=True,
                               custom_headers={
                                   'Authorization': 'Bearer ' + keycloak_admin.token.get('access_token'),
                                   'Content-Type': 'application/json'
                               })

# In order to get the secret key of the acknowledged client, we need to request
# its Keycloak id, not its string name
client_id = insights_admin.get_client_id("{client_name}")

# With that id, we can get the client secret
secret = insights_admin.get_client_secrets(client_id)

# Finally, with that client secret, we create an OpenID object as a regular
# "user" (client_name, secret_key). This is a little misleading: here we need
# client_name again, not the Keycloak id, which was only required to get the
# secret_key. In both cases the variable is named client_id for both methods,
# but they refer to different concepts
insights_client = KeycloakOpenID(server_url=serv_url,
                                 realm_name='{realm_name}',
                                 client_id="{client_name}",
                                 client_secret_key=secret["value"],
                                 verify=True)

# And finally we ask for a token to identify that user. Since KeycloakOpenID
# already has the (client_name, secret_key) tuple to identify itself, we just
# need to say that we are going to use "client_credentials" instead of the
# default password authentication
return insights_client.token(grant_type="client_credentials")
```

Summarized like this it looks pretty easy, but it took me quite a while to work out. I hope someone finds it interesting and it saves others some time.
python|keycloak
1
1,903,011
50,804,826
Explain python Singleton class
```python
class Singleton:
    instance = None

    def __new__(cls):
        if cls.instance is None:
            cls.instance = super().__new__(cls)
        return cls.instance

singleton_obj1 = Singleton()
singleton_obj2 = Singleton()
print(singleton_obj1)
print(singleton_obj2)
```

**output**

```
<__main__.Singleton object at 0x10dbc0f60>
<__main__.Singleton object at 0x10dbc0f60>
```

Can someone explain what exactly is happening at the line `cls.instance = super().__new__(cls)`? Which lines of code make this class a `Singleton`?
The `__new__` method says:

```
if there is no instance recorded:
    create an instance and record it
return the recorded instance
```

This is the standard singleton design pattern for most languages: it ensures that only one instance of the class is ever created. The line `cls.instance = super().__new__(cls)` delegates the actual object creation to the base class (`object`) and caches the result on the class attribute `instance`, so every later call returns that same object.
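A quick check, beyond what the original answer shows, that both names really point at one object:

```python
a = Singleton()
b = Singleton()
print(a is b)  # True: __new__ handed back the cached instance both times
```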
python|singleton
8
1,903,012
3,954,467
How to serve up dynamic content via django and php on same domain?
I just finished rewriting a significant portion of my web site using Python's Django, but I also have some legacy code in PHP that I haven't finished migrating over yet. Is it possible to get these two working on the same domain, and if so, how do I go about doing it?

I'm running this site on a virtual Ubuntu instance and serving content via Apache. I'm also using mod_wsgi to hook into the Python content.

One simplification is that the whole site, except for one sub-directory, runs off Python. Let's say for argument's sake that my PHP code is at http://www.mysite.com/myphpapp/.
Basically you have to configure Apache to use the default handler for the path hosting your legacy code, usually by adding a `<Directory>` or `<Location>` section to your Apache site config.

Something like:

```
<Location "/legacy">
    SetHandler None
</Location>
```
php|python|django|apache
2
1,903,013
3,947,191
Getting the entire output from subprocess.Popen
I'm getting a slightly weird result from calling `subprocess.Popen` that I suspect has a lot to do with me being brand-new to Python.

```python
args = ['cscript', '%USERPROFILE%\\tools\\jslint.js', '%USERPROFILE%\\tools\\jslint.js']
p = Popen(args, stdout=PIPE, shell=True).communicate()[0]
```

This results in output like the following (the trailing double `\r\n` is there in case it's important):

```
Microsoft (R) Windows Script Host Version 5.8
Copyright (C) Microsoft Corporation. All rights reserved.\r\n\r\n
```

If I run that command from an interactive Python shell it looks like this:

```python
>>> args = ['cscript', '%USERPROFILE%\\tools\\jslint.js', '%USERPROFILE%\\tools\jslint.js']
>>> p = subprocess.Popen(args, stdout=subprocess.PIPE, shell=True).communicate()[0]
Lint at line 5631 character 17: Unexpected /*member 'OpenTextFile'.
    f = fso.OpenTextFile(WScript.Arguments(0), 1),
...
Lint at line 5649 character 17: Unexpected /*member 'Quit'.
    WScript.Quit(1);
```

So there's all the output I really care about, but if I dump the value of the `p` variable I just set up...

```python
>>> p
'Microsoft (R) Windows Script Host Version 5.8\r\nCopyright (C) Microsoft Corporation. All rights reserved.\r\n\r\n'
>>>
```

Where did all the data I want end up going? It definitely didn't end up in `p`. It looks like it's going to stdout, but didn't I explicitly tell it not to do that?

I'm running this on Windows 7 x64 with Python 2.6.6.
Is it going to stderr? Try redirecting:

```python
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True).communicate()[0]
```
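If you'd rather keep the two streams apart, a small sketch of the alternative (an illustration, not from the original answer):

```python
from subprocess import Popen, PIPE

# Capture stdout and stderr separately instead of merging them
proc = Popen(args, stdout=PIPE, stderr=PIPE, shell=True)
out, err = proc.communicate()  # err will hold the lint messages if they went to stderr
```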
python|subprocess|stdout
7
1,903,014
50,612,518
Error in upgrading tensorflow
I ran into a problem while upgrading `tensorflow`. I currently use version 0.12.1. Here is the message I got:

```
Exception:
Traceback (most recent call last):
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/commands/install.py", line 335, in run
    wb.build(autobuilding=True)
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/wheel.py", line 749, in build
    self.requirement_set.prepare_files(self.finder)
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/req/req_set.py", line 380, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/req/req_set.py", line 620, in _prepare_file
    session=self.session, hashes=hashes)
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/download.py", line 821, in unpack_url
    hashes=hashes
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/download.py", line 659, in unpack_http_url
    hashes)
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/download.py", line 853, in _download_http_url
    stream=True,
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 488, in get
    return self.request('GET', url, **kwargs)
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/download.py", line 386, in request
    return super(PipSession, self).request(method, url, *args, **kwargs)
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 475, in request
    resp = self.send(prep, **send_kwargs)
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 596, in send
    r = adapter.send(request, **kwargs)
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/_vendor/cachecontrol/adapter.py", line 47, in send
    resp = super(CacheControlAdapter, self).send(request, **kw)
  File "/Users/peymanghahremani/tensorflow/lib/python2.7/site-packages/pip/_vendor/requests/adapters.py", line 497, in send
    raise SSLError(e, request=request)
SSLError: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590)
```
This is not an issue with TensorFlow but rather with your version of OpenSSL: PyPI requires TLS 1.2, which old OpenSSL builds don't support, hence the `TLSV1_ALERT_PROTOCOL_VERSION` error. You can check your current version on OSX with:

```
$ python3 -c "import ssl; print(ssl.OPENSSL_VERSION)"
```

Then upgrade your version with:

```
$ brew update
$ brew install openssl
```
python|python-2.7|tensorflow
0
1,903,015
35,159,846
How to remove the first four lines and the last 12 lines into a file in Python?
```python
h = httplib.HTTPSConnection(host, port)
h.set_debuglevel(0)
headers = {
    "Content-Type": "multipart/form-data; boundary=%s" % (boundary,),
    "Connection": "Keep-Alive",
}
h.request('POST', uri, body, headers)
res = h.getresponse()
#print res.read()

data = """MIME-Version: 1.0
Content-Type: multipart/mixed; boundary=--Nuance_NMSP_vutc5w1XobDdefsYG3wq
""" + res.read()

msg = email.message_from_string(data)
#print msg

for index, part in enumerate(msg.walk(), start=1):
    content_type = part.get_content_type()
    #print content_type
    payload = part.get_payload()
    print res.getheaders()
    if content_type == "audio/x-wav" and len(payload):
        with open('output.pcm'.format(index), 'wb') as f_pcm:
            print f_pcm.write(payload)
```

I am sending a request to the server, and the server sends a response back to the client as above, in the form of `.txt`. The `.txt` contains an information header at the top and a header at the bottom, both of which are text, and the rest is binary.

How do I parse the text and write it into a separate `.txt` file, and the binary into a `.pcm` file?
The following kind of approach is recommended, using Python's [`email`](https://docs.python.org/2/library/email.html?highlight=email#module-email) library to try and decode the MIME:

```python
import ssl
import os
import json
import email
import uuid
from io import BytesIO
import httplib

input_folder = os.path.dirname(os.path.abspath(__file__))
output_folder = os.path.join(input_folder, 'output')

def get_filename(ext, base, sub_folder):
    filename = '{}.{}'.format(base, ext)
    return os.path.join(output_folder, sub_folder, filename)

def compare_files(file1, file2):
    with open(file1, 'rb') as f_file1, open(file2, 'rb') as f_file2:
        if f_file1.read() == f_file2.read():
            print 'Same:\n  {}\n  {}'.format(file1, file2)
        else:
            print 'Different:\n  {}\n  {}'.format(file1, file2)

class Part(object):
    """Represent a part in a multipart message"""
    def __init__(self, name, contentType, data, paramName=None):
        super(Part, self).__init__()
        self.name = name
        self.paramName = paramName
        self.contentType = contentType
        self.data = data

    def encode(self):
        body = BytesIO()
        if self.paramName:
            body.write('Content-Disposition: form-data; name="%s"; paramName="%s"\r\n' % (self.name, self.paramName))
        else:
            body.write('Content-Disposition: form-data; name="%s"\r\n' % (self.name,))
        body.write("Content-Type: %s\r\n" % (self.contentType,))
        body.write("\r\n")
        body.write(self.data)
        return body.getvalue()

class Request(object):
    """A handy class for creating a request"""
    def __init__(self):
        super(Request, self).__init__()
        self.parameters = []

    def add_json_parameter(self, name, paramName, data):
        self.parameters.append(Part(name=name, paramName=paramName, contentType="application/json; charset=utf-8", data=data))

    def add_audio_parameter(self, name, paramName, data):
        self.parameters.append(Part(name=name, paramName=paramName, contentType="audio/x-wav;codec=pcm;bit=16;rate=16000", data=data))

    def encode(self):
        boundary = uuid.uuid4().hex
        body = BytesIO()
        for parameter in self.parameters:
            body.write("--%s\r\n" % (boundary,))
            body.write(parameter.encode())
            body.write("\r\n")
        body.write("--%s--\r\n" % (boundary,))
        return body.getvalue(), boundary

def get_tts(required_text, LNG):
    required_text = required_text.strip()
    output_filename = "".join([x if x.isalnum() else "_" for x in required_text[:80]])
    host = "mtldev08.nuance.com"
    port = 443
    uri = "/NmspServlet/"

    if LNG == "ENG":
        parameters = {'lang' : 'eng_GBR', 'location' : '47.4925, 19.0513'}
    if LNG == "GED":
        parameters = {'lang' : 'deu-DEU', 'location' : '48.396231, 9.972909'}

    RequestData = """{
    "appKey": "9c9fa7201e90d3d96718bc3f36ce4cfe1781f2e82f4e5792996623b3b474fee2c77699eb5354f2136063e1ff19c378f0f6dd984471a38ca5c393801bffb062d6",
    "appId": "NMDPTRIAL_AutomotiveTesting_NCS61HTTP",
    "uId": "Alexander",
    "inCodec": "PCM_16_8K",
    "outCodec": "PCM_16_8K",
    "cmdName": "NVC_TTS_CMD",
    "appName": "Python",
    "appVersion": "1",
    "language": "%(lang)s",
    "carrier": "carrier",
    "deviceModel": "deviceModel",
    "cmdDict": {
        "tts_voice": "Serena",
        "tts_language": "%(lang)s",
        "locale": "canada",
        "application_name": "Testing Python Script",
        "organization_id": "NUANCE",
        "phone_OS": "4.0",
        "phone_network": "wifi",
        "audio_source": "SpeakerAndMicrophone",
        "location": "%(location)s",
        "application_session_id": "1234567890",
        "utterance_number": "5",
        "ui_langugage": "en",
        "phone_submodel": "nmPhone2,1",
        "application_state_id": "45"
    }
}""" % (parameters)

    TEXT_TO_READ = """{
    "tts_type": "text"
}"""

    TEXT_TO_READ = json.loads(TEXT_TO_READ)
    TEXT_TO_READ["tts_input"] = required_text
    TEXT_TO_READ = json.dumps(TEXT_TO_READ)

    request = Request()
    request.add_json_parameter("RequestData", None, RequestData)
    request.add_json_parameter("TtsParameter", "TEXT_TO_READ", TEXT_TO_READ)
    #ssl._create_default_https_context = ssl._create_unverified_context

    body, boundary = request.encode()

    h = httplib.HTTPSConnection(host, port)
    #h.set_debuglevel(1)
    headers = {
        "Content-Type": "multipart/form-data; boundary=%s" % (boundary,),
        "Connection": "Keep-Alive",
    }
    h.request('POST', uri, body, headers)
    res = h.getresponse()

    data = """MIME-Version: 1.0
Content-Type: multipart/mixed; boundary=--Nuance_NMSP_vutc5w1XobDdefsYG3wq
""" + res.read()

    msg = email.message_from_string(data)

    for part in msg.walk():
        content_type = part.get_content_type()
        payload = part.get_payload()

        if content_type == "audio/x-wav" and len(payload):
            ref_filename = get_filename('pcm', output_filename + '_ref', LNG)
            if not os.path.exists(ref_filename):
                with open(ref_filename, 'wb') as f_pcm:
                    f_pcm.write(payload)
            cur_filename = get_filename('pcm', output_filename, LNG)
            with open(cur_filename, 'wb') as f_pcm:
                f_pcm.write(payload)
            compare_files(ref_filename, cur_filename)
        elif content_type == "application/json":
            with open(get_filename('json', output_filename, LNG), 'w') as f_json:
                f_json.write(payload)

filename = r'input.txt'

with open(filename) as f_input:
    for line in f_input:
        LNG, text = line.strip().split('|')
        print "Getting {}: {}".format(LNG, text)
        get_tts(text, LNG)
```

This assumes your `input.txt` file has the following format:

```
ENG|I am tired
GED|Ich gehe nach hause
```

This will produce an output pcm and json file per line of text. It works with multiple files/languages.
python|regex|python-2.7|mime
1
1,903,016
44,911,266
Run Python codes on subdirectories
How do I run Python code (`.py`) in subdirectories of the main folder? What is the easiest way to do this?

I tried:

```python
os.chdir("path")  # path = path to subdirectory
import abc        # abc = module in subdirectory
```

Error:

```
ImportError: No module named abc
```
I believe you want to import `abc` into your current module, even though they're located in different folders. Depending on your Python version, there are different ways to do this:

Python 2.x:

```python
import imp
abc = imp.load_source('abc', '/path/to/abc.py')
```

Python 3.4:

```python
from importlib.machinery import SourceFileLoader
abc = SourceFileLoader('abc', '/path/to/abc.py').load_module()
```

In either case, `abc` will be imported for use as usual:

```python
>>> abc
<module 'abc' from '/path/to/abc.py'>
```

This is cleaner because it does not involve polluting your `sys.path`.
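Since newer Pythons deprecate `load_module()`, here is the equivalent `importlib.util` recipe for Python 3.5+, using the same assumed path as above:

```python
import importlib.util

# Build a module spec from the file path, then execute it into a fresh module
spec = importlib.util.spec_from_file_location('abc', '/path/to/abc.py')
abc = importlib.util.module_from_spec(spec)
spec.loader.exec_module(abc)
```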
python
1
1,903,017
65,013,425
replacing precise string in python
Given this seemingly straightforward string replace in Python:

```python
sample_str = 'fl'
print(sample_str.replace('fl', 'florida'))

result: 'florida'
```

How can I avoid this result, however:

```python
sample_lst = ['fl', 'florida']

for word in sample_lst:
    new_word = word.replace('fl', 'florida')
    print(new_word)

'florida'
'floridaorida'
```

The point being, I have a huge pandas df and am trying to replace things like 'fl', but only where 'fl' occurs in a string by itself, not when it occurs as part of some other string like 'florida' or 'nfl' etc.

I tried using a regex string like `r'fd(?![0|_| ])'`. That didn't work. This seems like a basic question, so I presume I am overlooking some Python fundamental long lost to my memory. Any ideas, Pythonistas out there?
Just check if the word is equivalent to `'fl'`; if it is, make it into `'florida'`:

```python
sample_lst = ['fl', 'florida']

for word in sample_lst:
    new_word = word
    if word == 'fl':
        new_word = 'florida'
    print(new_word)
```

or...

```python
sample_lst = ['fl', 'florida']

for word in sample_lst:
    new_word = 'florida' if word == 'fl' else word
    print(new_word)
```

If you want to store the result in a new list, you could even use a list comprehension!!

```python
sample_lst = ['fl', 'florida']
result = ['florida' if word == 'fl' else word for word in sample_lst]
```

On the other hand, if you want to check if there is a word (it can be surrounded by spaces), you can use regex:

```python
import re

sample_lst = ['fl', 'florida']

for word in sample_lst:
    new_word = re.sub(r'\bfl\b', 'florida', word)
    print(new_word)
```

and with a list comprehension (of course we need a list comprehension):

```python
import re

sample_lst = ['fl', 'florida']
result = [re.sub(r'\bfl\b', 'florida', word) for word in sample_lst]
```
python|string|replace
2
1,903,018
61,228,115
Python executable (pyinstaller) throws an error when I close it
I created a basic Pong game with Turtle on Windows, which works pretty well, but every time I close it, it throws an error: 'Fatal error detected: Failed to execute script pong'. I looked for a solution before asking for help, but I couldn't find a proper answer.

In order to create the .exe file I used pyinstaller. This was the exact command I wrote:

> pyinstaller --onefile --windowed --add-data "bounce.wav;." --add-data "collision.wav;." --add-data "score.wav;." pong.py

This is my code:

```python
"""
Basic pong game
"""
import turtle
import winsound
import os
import sys

win = turtle.Screen()
win.title("Pong")
win.bgcolor("black")
win.setup(width=800, height=600)
win.tracer(0)

# Score
score_a = 0
score_b = 0

# Paddle A
paddle_a = turtle.Turtle()
name_a = "Verde"
paddle_a.speed(0)
paddle_a.shape("square")
paddle_a.color("green")
paddle_a.shapesize(stretch_wid=5, stretch_len=1)
paddle_a.penup()
paddle_a.goto(-350, 0)

# Paddle B
paddle_b = turtle.Turtle()
name_b = "Amarillo"
paddle_b.speed(0)
paddle_b.shape("square")
paddle_b.color("yellow")
paddle_b.shapesize(stretch_wid=5, stretch_len=1)
paddle_b.penup()
paddle_b.goto(350, 0)

# Ball
ball = turtle.Turtle()
ball.speed(0)
ball.shape("circle")
ball.color("white")
ball.penup()
ball.goto(0, 0)
ball.dx = 0.15
ball.dy = -0.15

# Pen
pen = turtle.Turtle()
pen.speed(0)
pen.color("white")
pen.penup()
pen.hideturtle()
pen.goto(0, 260)
pen.write("{} 0 | 0 {}".format(name_a, name_b), align="center", font=("Courier", 24, "normal"))

# Functions
def resource_path(relative_path):
    if hasattr(sys, '_MEIPASS'):
        return os.path.join(sys._MEIPASS, relative_path)
    return os.path.join(os.path.abspath('.'), relative_path)

def paddle_a_up():
    if paddle_a.ycor() < 240:
        y = paddle_a.ycor()
        y += 20
        paddle_a.sety(y)

def paddle_a_down():
    if paddle_a.ycor() > -240:
        y = paddle_a.ycor()
        y -= 20
        paddle_a.sety(y)

def paddle_b_up():
    if paddle_b.ycor() < 240:
        y = paddle_b.ycor()
        y += 20
        paddle_b.sety(y)

def paddle_b_down():
    if paddle_b.ycor() > -240:
        y = paddle_b.ycor()
        y -= 20
        paddle_b.sety(y)

# Keyboard binding
win.listen()
win.onkeypress(paddle_a_up, "w")
win.onkeypress(paddle_a_down, "s")
win.onkeypress(paddle_b_up, "Up")
win.onkeypress(paddle_b_down, "Down")

# Main game loop
while True:
    win.update()

    # Move the ball
    ball.setx(ball.xcor() + ball.dx)
    ball.sety(ball.ycor() + ball.dy)

    # Border checking
    if ball.ycor() > 290:
        winsound.PlaySound(resource_path("bounce.wav"), winsound.SND_ASYNC)
        ball.sety(290)
        ball.dy *= -1

    if ball.ycor() < -290:
        winsound.PlaySound(resource_path("bounce.wav"), winsound.SND_ASYNC)
        ball.sety(-290)
        ball.dy *= -1

    if ball.xcor() > 390:
        winsound.PlaySound(resource_path("score.wav"), winsound.SND_ASYNC)
        ball.goto(0, 0)
        ball.dx *= -1
        score_a += 1
        pen.clear()
        pen.write("{} {} | {} {}".format(name_a, score_a, score_b, name_b), align="center", font=("Courier", 24, "normal"))

    if ball.xcor() < -390:
        winsound.PlaySound(resource_path("score.wav"), winsound.SND_ASYNC)
        ball.goto(0, 0)
        ball.dx *= -1
        score_b += 1
        pen.clear()
        pen.write("{} {} | {} {}".format(name_a, score_a, score_b, name_b), align="center", font=("Courier", 24, "normal"))

    # Paddle and ball collisions
    if (ball.xcor() > 340 and ball.xcor() < 350) and (ball.ycor() < paddle_b.ycor() + 50 and ball.ycor() > paddle_b.ycor() - 50):
        winsound.PlaySound(resource_path("collision.wav"), winsound.SND_ASYNC)
        ball.setx(340)
        ball.dx *= -1

    if (ball.xcor() < -340 and ball.xcor() > -350) and (ball.ycor() < paddle_a.ycor() + 50 and ball.ycor() > paddle_a.ycor() - 50):
        winsound.PlaySound(resource_path("collision.wav"), winsound.SND_ASYNC)
        ball.setx(-340)
        ball.dx *= -1

"""
End code
"""
```

I have no clue what may cause that error, since the program runs fine and all the sound effects work correctly.

Thank you all for your answers!
Hey, not sure if you still need help on this, but line 103 seems to be causing issues:

```python
ball.setx(ball.xcor() + ball.dx)
```

It only occurs when closing the application, like you said. I'm not sure if there is a proper solution, but a bandaid solution you can use to stop pyinstaller from throwing errors is to wrap the entire while loop in a try/catch.

```python
# Main game loop
try:
    while True:
        win.update()

        # Move the ball
        ball.setx(ball.xcor() + ball.dx)
        ball.sety(ball.ycor() + ball.dy)

        # Border checking
        if ball.ycor() > 290:
            winsound.PlaySound(resource_path("bounce.wav"), winsound.SND_ASYNC)
            ball.sety(290)
            ball.dy *= -1

        if ball.ycor() < -290:
            winsound.PlaySound(resource_path("bounce.wav"), winsound.SND_ASYNC)
            ball.sety(-290)
            ball.dy *= -1

        if ball.xcor() > 390:
            winsound.PlaySound(resource_path("score.wav"), winsound.SND_ASYNC)
            ball.goto(0, 0)
            ball.dx *= -1
            score_a += 1
            pen.clear()
            pen.write("{} {} | {} {}".format(name_a, score_a, score_b, name_b), align="center", font=("Courier", 24, "normal"))

        if ball.xcor() < -390:
            winsound.PlaySound(resource_path("score.wav"), winsound.SND_ASYNC)
            ball.goto(0, 0)
            ball.dx *= -1
            score_b += 1
            pen.clear()
            pen.write("{} {} | {} {}".format(name_a, score_a, score_b, name_b), align="center", font=("Courier", 24, "normal"))

        # Paddle and ball collisions
        if (ball.xcor() > 340 and ball.xcor() < 350) and (ball.ycor() < paddle_b.ycor() + 50 and ball.ycor() > paddle_b.ycor() - 50):
            winsound.PlaySound(resource_path("collision.wav"), winsound.SND_ASYNC)
            ball.setx(340)
            ball.dx *= -1

        if (ball.xcor() < -340 and ball.xcor() > -350) and (ball.ycor() < paddle_a.ycor() + 50 and ball.ycor() > paddle_a.ycor() - 50):
            winsound.PlaySound(resource_path("collision.wav"), winsound.SND_ASYNC)
            ball.setx(-340)
            ball.dx *= -1
except Exception:
    sys.exit(0)
```

It's not the best solution, I admit, but I hope this helps!
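A narrower variant worth considering (a suggestion, not from the original answer): closing the window makes turtle raise `turtle.Terminator`, so catching just that exception avoids masking real bugs. A self-contained sketch:

```python
import turtle

win = turtle.Screen()
try:
    while True:
        win.update()  # game logic would go here
except turtle.Terminator:
    # Raised when the turtle window is closed mid-loop; exit quietly
    pass
```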
python|pyinstaller|executable|fatal-error|python-turtle
0
1,903,019
60,383,024
How to send and receive messages at the same time with Python sockets
I'm trying to learn socket programming and currently have the following server and client code. The problem is that the server and/or client can't send and receive messages at the same time; they take it in turns to send and receive.

I've looked at the example below, but the answer doesn't seem to solve the issue, or I'm following it wrong:

[Python Socket - Send/Receive messages at the same time](https://stackoverflow.com/questions/33434007/python-socket-send-receive-messages-at-the-same-time)

server:

```python
import socket
import threading

s = socket.socket()
host = socket.gethostname()
port = 8080
s.bind((host, port))
s.listen(1)

print("Waiting for connections")
conn, addr = s.accept()
print("Client has connected")
conn.send("Welcome to the server".encode())

def recv_msg():
    while True:
        recv_msg = conn.recv(1024)
        if not recv_msg:
            sys.exit(0)
        recv_msg = recv_msg.decode()
        print(recv_msg)

def send_msg():
    send_msg = input(str("Enter message: "))
    send_msg = send_msg.encode()
    conn.send(send_msg)
    print("message sent")

while True:
    send_msg()
    t = threading.Thread(target=recv_msg)
    t.start()
```

client:

```python
import socket
import threading

s = socket.socket()
host = socket.gethostname()
port = 8080
s.connect((host, port))
print("Connected to the server")

message = s.recv(1024)
message = message.decode()
print(message)

def recv_msg():
    while True:
        recv_msg = s.recv(1024)
        if not recv_msg:
            sys.exit(0)
        recv_msg = recv_msg.decode()
        print(recv_msg)

def send_msg():
    send_msg = input(str("Enter message: "))
    send_msg = send_msg.encode()
    s.send(send_msg)
    print("Message sent")

while True:
    send_msg()
    t = threading.Thread(target=recv_msg)
    t.start()
```

I'm ultimately trying to create a chat app (with Kivy) that sort of resembles WhatsApp/iMessage etc. I've not found a tutorial on how to do this (all the ones I've seen are about creating a chatroom), so if anyone's seen one, that would be appreciated.
In both client and server you have to start the thread before you run the `while True: send_msg()` loop:

```python
t = threading.Thread(target=recv_msg)
t.start()

while True:
    send_msg()
```

---

**EDIT:**

**server.py**

```python
import socket
import threading
import sys

# --- functions ---

def recv_msg():
    while True:
        recv_msg = conn.recv(1024)
        if not recv_msg:
            sys.exit(0)
        recv_msg = recv_msg.decode()
        print(recv_msg)

def send_msg():
    while True:
        send_msg = input(str("Enter message: "))
        send_msg = send_msg.encode()
        conn.send(send_msg)
        print("message sent")

# --- main ---

host = socket.gethostname()
port = 8080

s = socket.socket()
s.bind((host, port))
s.listen(1)

print("Waiting for connections")
conn, addr = s.accept()
print("Client has connected")

conn.send("Welcome to the server".encode())

# thread has to start before other loop
t = threading.Thread(target=recv_msg)
t.start()

send_msg()
```

**client.py**

```python
import socket
import threading
import sys

# --- functions ---

def recv_msg():
    while True:
        recv_msg = s.recv(1024)
        if not recv_msg:
            sys.exit(0)
        recv_msg = recv_msg.decode()
        print(recv_msg)

def send_msg():
    while True:
        send_msg = input(str("Enter message: "))
        send_msg = send_msg.encode()
        s.send(send_msg)
        print("Message sent")

# --- main ---

host = socket.gethostname()
port = 8080

s = socket.socket()
s.connect((host, port))
print("Connected to the server")

message = s.recv(1024)
message = message.decode()
print(message)

# thread has to start before other loop
t = threading.Thread(target=recv_msg)
t.start()

send_msg()
```
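One extra detail worth knowing, beyond the original answer: marking the receiver thread as a daemon lets the whole process exit when the input loop ends, instead of hanging on the blocked `recv`:

```python
# Daemon threads don't keep the interpreter alive on their own
t = threading.Thread(target=recv_msg, daemon=True)
t.start()
```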
python|multithreading|sockets
0
1,903,020
57,760,915
How to run a task after another is complete without requiring it
In `luigi` I'm trying to set up a workflow that goes like this:

1) Parse data

2) Do calculations on parsed data

3) Tar calculated data together

These operations need to be done in order, and I have a couple of workflows set up like this. However, even though the requiring is easily done between 1 and 2 (2 requiring 1), I don't want to explicitly have 3 require 2, otherwise I can't re-use the task in other workflows. So, how can I do this?

I know that using dynamic dependencies works, but their intended use is for when you don't know the dependency list ahead of time, whereas in this situation I do. It also requires me to make a `Workflow` task that yields tasks 2 and 3 in order, instead of just scheduling them.

One possible solution I've tried is to make a super class that can take a task as a parameter, but unfortunately this doesn't work, as classes cannot be parameters, only primitives and dates. So, what is the right way of making this work?

I've included the current method below:

```python
class TaskOne(luigi.Task):
    def output(self):
        return luigi.LocalTarget("...")

    def run(self):
        with self.output().open('w') as out_file:
            pass  # Do parsing

class TaskTwo(luigi.Task):
    def requires(self):
        return TaskOne()

    def output(self):
        return luigi.LocalTarget(".../success.txt")

    def run(self):
        with self.input().open('r') as in_file:
            pass  # Do calculations
        with self.output().open('w') as out_file:
            out_file.write("1")

class TarTask(luigi.Task):
    directory = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget(self.directory + ".tar.xz")

    def run(self):
        pass  # Tar to temporary tar target then mv file to output location

class Workflow(luigi.Task):
    def output(self):
        return luigi.LocalTarget(".../wf_success.txt")

    def run(self):
        yield TaskTwo()
        yield TarTask(directory)
        with self.output().open('w') as out_file:
            out_file.write("1")
```
So, I've come up with one way of confronting this problem. You can dynamically set the `requires` method of a task instance like so:

```python
from types import MethodType

def sequence_tasks(tasks):
    prev_task = None
    for task in tasks:
        def requires_method(self):
            return self.prev_task
        task.requires = MethodType(requires_method, task)
        setattr(task, "prev_task", prev_task)
        prev_task = task
    return prev_task
```

This doesn't work if the tasks in your sequence require other tasks, however. For that, you would need to make a more sophisticated `requires_method` that calls the old requires and appends/sets an attribute to add your new requirement task.
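For illustration, a hypothetical way to wire this up and schedule the tail of the chain, using task names assumed from the question and not tested here:

```python
# Chain parse -> calculate -> tar, then build only the returned tail task;
# luigi resolves the injected requires() links backwards from there
final = sequence_tasks([TaskOne(), TaskTwo(), TarTask(directory="calculated")])
luigi.build([final], local_scheduler=True)
```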
python|luigi
0
1,903,021
58,102,325
Find Max Frequency for every Sequence_ID
I have a dataframe like:

```
Time      Frq_1  Seq_1   Frq_2  Seq_2   Frq_3  Seq_3
12:43:04  -      30,668  -      30,670  4,620  30,671
12:46:05  -      30,699  -      30,699  3,280  30,700
12:46:17  4,200  30,700  -      30,704  -      30,704
12:46:18  3,060  30,700  4,200  30,700  -      30,700
12:46:18  3,060  30,700  4,200  30,700  -      30,700
12:46:19  3,060  30,700  4,220  30,700  -      30,700
12:46:20  3,060  30,700  4,240  30,700  -      30,700
12:46:37  -      30,698  -      30,699  3,060  30,700
12:46:38  -      30,699  3,060  30,700  4,600  30,700
12:47:19  -      30,668  -      30,669  -      30,669
12:47:20  -      30,667  -      30,667  -      30,668
12:47:20  -      30,667  -      30,667  -      30,668
12:47:21  -      30,667  -      30,667  -      30,668
12:47:21  -      30,665  -      30,665  -      30,665
12:47:22  -      30,665  -      30,665  -      30,665
12:48:35  -      30,688  -      30,690  3,020  30,690
12:49:29  4,160  30,690  -      30,691  -      30,693
```

I want to check the whole dataframe and find the result with the conditions below:

> 1. Sequence_IDs for which the frequency is not null
> 2. The Sequence_ID for which the frequency is max (in case of multiple Sequence_IDs with a non-zero frequency)

I want my result as below:

```
Time      Frequency  Sequence_ID
12:43:04  4,620      30,671
12:46:18  4,200      30,700
12:49:29  4,160      30,690
```

> Time corresponds to the row of the (Sequence_ID & Frequency) pair.
<p>This turned out to be quite involved. Here we go anyway:</p> <pre><code>long_df = pd.wide_to_long(df.reset_index(), stubnames=['Seq_', 'Frq_'],
                          suffix='\d+', i='index', j='j')

long_df['Frq_'] = pd.to_numeric(long_df.Frq_.str.replace(',','.')
                                           .replace('-',float('nan')))

long_df.reset_index(drop=True, inplace=True)
ix = long_df.groupby('Seq_').Frq_.idxmax()
</code></pre> <hr> <pre><code>print(long_df.loc[ix[ix.notna()].values.astype(int)])

        Time    Seq_  Frq_
34  12:43:04  30,671  4.62
16  12:49:29  30,690  4.16
42  12:46:38  30,700  4.60
</code></pre> <p>Seems like for the sequence <code>30,700</code>, the highest frequency is <code>4.60</code>, not <code>4.20</code>.</p> <hr> <p>The first step is to collapse the dataframe into three columns: one for the <code>Time</code>, another for the sequence and another for the frequency. We can use <code>pd.wide_to_long</code> with the stubnames <code>['Seq_', 'Frq_']</code>:</p> <pre><code>long_df = pd.wide_to_long(df.reset_index(), stubnames=['Seq_', 'Frq_'],
                          suffix='\d+', i='index', j='j')
print(long_df)

             Time    Seq_   Frq_
index j
0     1  12:43:04  30,668      -
1     1  12:46:05  30,699      -
2     1  12:46:17  30,700  4,200
3     1  12:46:18  30,700  3,060
4     1  12:46:18  30,700  3,060
5     1  12:46:19  30,700  3,060
6     1  12:46:20  30,700  3,060
7     1  12:46:37  30,698      -
8     1  12:46:38  30,699      -
9     1  12:47:19  30,668      -
10    1  12:47:20  30,667      -
11    1  12:47:20  30,667      -
12    1  12:47:21  30,667      -
13    1  12:47:21  30,665      -
14    1  12:47:22  30,665      -
15    1  12:48:35  30,688      -
16    1  12:49:29  30,690  4,160
...
</code></pre> <p>The next step is to cast the frequencies to <code>float</code>, to be able to find the maximum values:</p> <pre><code>long_df['Frq_'] = pd.to_numeric(long_df.Frq_.str.replace(',','.')
                                           .replace('-',float('nan')))
print(long_df)

             Time    Seq_  Frq_
index j
0     1  12:43:04  30,668   NaN
1     1  12:46:05  30,699   NaN
2     1  12:46:17  30,700  4.20
3     1  12:46:18  30,700  3.06
4     1  12:46:18  30,700  3.06
5     1  12:46:19  30,700  3.06
6     1  12:46:20  30,700  3.06
7     1  12:46:37  30,698   NaN
...
</code></pre> <p>Then we can groupby <code>Seq_</code> and find the indices with the highest values. One could also think of using <code>max</code>, but that would remove the <code>Time</code> column.</p> <pre><code>long_df.reset_index(drop=True, inplace=True)
ix = long_df.groupby('Seq_').Frq_.idxmax()
</code></pre> <p>And finally index based on the above:</p> <pre><code>print(long_df.loc[ix[ix.notna()].values.astype(int)])

        Time    Seq_  Frq_
34  12:43:04  30,671  4.62
16  12:49:29  30,690  4.16
42  12:46:38  30,700  4.60
</code></pre>
python|python-3.x|pandas|numpy|list-comprehension
2
1,903,022
56,316,577
Bokeh slider with CustomJS callback fails to use callback_policy='mouseup' option
<p>I am trying to create a simple <strong>flask</strong> graphical app using <strong>bokeh</strong> for plotting. My code uses the <strong>json_item</strong> function to embed a plot into an html page and is based on the bokeh <a href="https://github.com/bokeh/bokeh/tree/1.1.0/examples/embed/json_item.py" rel="nofollow noreferrer">example</a>. To control plot parameters, I have just added two sliders, for which I have set the option <code>callback_policy='mouseup'</code>. However, when I drag any of the sliders, it produces multiple plots instead of one plot. I am using the latest bokeh version 1.1.0.</p> <p>I have searched the web on this topic, but it looks like other people don't have this problem with <code>callback_policy='mouseup'</code>. Probably it does not work in my specific setup, or I have an error that I cannot catch. My python <strong>app.py</strong> code is <a href="https://pastebin.com/DfjYHVjm" rel="nofollow noreferrer">here</a> and <strong>index.html</strong> from the <strong>templates</strong> folder is <a href="https://pastebin.com/JNPQru2t" rel="nofollow noreferrer">here</a>. I will be very grateful for any advice.</p>
<p>For Bokeh versions 1.1 and earlier, the <code>callback_policy</code> only applies to the old-style <code>callback</code> property of the <code>Slider</code>, not the newer generic <code>js_on_change</code> methods. So you should be doing this:</p> <pre><code>fs.callback = cbk
ss.callback = cbk
</code></pre> <p>However, in the upcoming release of Bokeh, things have been improved and clarified. The above method will continue to work (until Bokeh 2.0), but the recommended way to do things will be to watch the new <code>value_throttled</code> property:</p> <pre><code># use this for version 1.2 and later:
fs.js_on_change('value_throttled', cbk)
ss.js_on_change('value_throttled', cbk)
</code></pre> <p>Note that the new method above will work for both JS callbacks, and now also for Python callbacks in Bokeh server apps (with <code>on_change</code> instead of <code>js_on_change</code>, of course).</p>
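<p>Put together, a minimal sketch of the wiring for Bokeh 1.1 (my code, with a placeholder <code>CustomJS</code> callback):</p> <pre><code>from bokeh.models import CustomJS, Slider

cbk = CustomJS(code="console.log('slider settled at ' + cb_obj.value)")

fs = Slider(start=0, end=10, value=1, step=0.1, title="first")
fs.callback_policy = 'mouseup'   # only fire when the mouse is released
fs.callback = cbk                # the old-style callback honours the policy

# on Bokeh 1.2+ you would instead do:
# fs.js_on_change('value_throttled', cbk)
</code></pre>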
python|flask|bokeh
0
1,903,023
56,349,724
How can the values from one dataframe be used to calculate the total number of values that are greater than or lesser than it in a second dataframe?
<p>I have different dataframes named: <code>step1</code>, <code>step2</code>,<code>step5</code> and so on and each one of them has a column named <code>BackGas_Flow_sccm</code>.</p> <p>I used the <code>.describe()</code> on the <code>BackGas_Flow_sccm</code> column of every dataframe in order to use the 25% &amp; the 75% to create new features like the <code>IQR</code>, <code>Max</code> &amp; <code>Min</code> . After doing it, I dropped all the other columns and just kept the <code>IQR</code>, <code>Max</code> &amp; <code>Min</code> columns in the dataframe giving the result as follows:</p> <pre><code> Max Min step1 0.0061032863849765275 0.0023474178403755843 step2 0.0061032863849765275 0.0023474178403755843 step5 0.43849765258215967 0.4309859154929577 step7 0.4394366197183098 0.43192488262910805 step12 0.44178403755868545 0.43051643192488265 step15 0.44413145539906096 0.4291079812206573 step16 0.44272300469483566 0.43145539906103286 step19 0.8201877934272299 0.5610328638497655 step24 0.008450704225352117 0.0009389671361502306 step25 0.0061032863849765275 0.0023474178403755843 step26 0.0061032863849765275 0.0023474178403755843 step27 0.0061032863849765275 0.0023474178403755843 </code></pre> <p>Now, I would like to use the values from this dataframe and calculate the number of values that are above the <code>Max</code> value or below the <code>Min</code> value, in the dataframes like <code>step1</code>, <code>step2</code>,<code>step5</code>.</p> <p>I could do:</p> <pre><code>step1[step1['BacksGas_Flow_sccm'] &gt; 0.0061032863849765275] step1[step1['BacksGas_Flow_sccm'] &lt; 0.0023474178403755843] </code></pre> <p>and it would give me the result as 424 and 135 respectively; meaning that there are 424 values in the <code>step1</code> df that are above 0.0061032863849765275 and 135 values that are below 0.0023474178403755843. But entering the numbers like 0.0061032863849765275 can be tedious.</p> <p>So, is there a way this can be achieved in a more efficient manner?</p> <p><strong>Edit 1</strong> <a href="https://i.stack.imgur.com/czXrL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/czXrL.png" alt="Image of dataframes in my Variable explorer window"></a></p>
<p>First you should store those dfs in either a <code>list</code> or a <code>dict</code>:</p> <pre><code>d = {'step1': step1, 'step2': step2, ...}
</code></pre> <p>Then we can <code>concat</code> it:</p> <pre><code>s=pd.concat(d)['BacksGas_Flow_sccm'].unstack(0).describe().loc[['25%','75%']].T
</code></pre> <p>After this we can use a for loop (note it iterates over <code>s.index</code>):</p> <pre><code>for x in s.index:
    print((d[x]['BacksGas_Flow_sccm'] &gt; s.loc[x,'75%']).sum())
    print((d[x]['BacksGas_Flow_sccm'] &lt; s.loc[x,'25%']).sum())
</code></pre> <p>Or without a for loop:</p> <pre><code>pd.concat(d)['BacksGas_Flow_sccm'].gt(s['75%'],level=0).sum(level=0)
pd.concat(d)['BacksGas_Flow_sccm'].lt(s['25%'],level=0).sum(level=0)
</code></pre>
python|python-3.x|pandas
2
1,903,024
18,280,245
Where does python tempfile write its files?
<p>In python you can create a tempfile as follows:</p> <pre><code>tempfile.TemporaryFile() </code></pre> <p>And then you can write to it. Where is the file written in a GNU/Linux system? I can't seem to find it in the /tmp directory or any other directory.</p> <p>Thank you,</p>
<p>Call the <a href="http://docs.python.org/2/library/tempfile.html#tempfile.gettempdir" rel="noreferrer"><code>tempfile.gettempdir()</code> function</a>:</p> <blockquote> <p>Return the directory currently selected to create temporary files in.</p> </blockquote> <p>You can change where temporary files are created by setting the <a href="http://docs.python.org/2/library/tempfile.html#tempfile.tempdir" rel="noreferrer"><code>tempfile.tempdir</code> value</a> to different directory if you want to influence where temporary files are created. Quoting from the documentation, if that value is <code>None</code> the rules are as follows:</p> <blockquote> <p>If tempdir is unset or <code>None</code> at any call to any of the above functions, Python searches a standard list of directories and sets tempdir to the first one which the calling user can create files in. The list is:</p> <ol> <li>The directory named by the <code>TMPDIR</code> environment variable.</li> <li>The directory named by the <code>TEMP</code> environment variable.</li> <li>The directory named by the <code>TMP</code> environment variable.</li> <li>A platform-specific location: <ul> <li>On RiscOS, the directory named by the <code>Wimp$ScrapDir</code> environment variable.</li> <li>On Windows, the directories <code>C:\TEMP</code>, <code>C:\TMP</code>, <code>\TEMP</code>, and <code>\TMP</code>, in that order.</li> <li>On all other platforms, the directories <code>/tmp</code>, <code>/var/tmp</code>, and <code>/usr/tmp</code>, in that order.</li> </ul></li> <li>As a last resort, the current working directory.</li> </ol> </blockquote>
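<p>For instance, a quick check (and the reason <code>TemporaryFile</code> is hard to find on disk) might look like this; the printed paths are examples:</p> <pre><code>import tempfile

print(tempfile.gettempdir())   # e.g. /tmp on most Linux systems

# TemporaryFile is invisible in directory listings: on POSIX systems the
# file is unlinked as soon as it is created, so it has no name on disk.
# NamedTemporaryFile keeps a visible name while it is open.
with tempfile.NamedTemporaryFile() as f:
    print(f.name)              # e.g. /tmp/tmp8dp_nkar
</code></pre>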
python|linux|temporary-files
12
1,903,025
55,301,856
python pandas renaming column name startswith
<p>I have multiple Excel files with uniform column names, except for one.</p> <p>One file calls it EndOfMarchStatus, another file calls it EndofAprilStatus, and so on.</p> <p>I need to change the column name to just say EndofMonthStatus. There really is no answer I could find that matches this question.</p> <p>Some form of rename command with wildcards or startswith will probably work.</p> <p>Things I've tried that did not work:</p> <pre><code>sheet1df.columns.str.replace('Endof.*', 'EndOfMonthStatus')

sheet1df.rename(columns={sheet1df.filter(regex='*.Status').columns[0]:
                'EndOfMonthStatus'}, inplace=True)

sheet1df.rename(columns={'^Status':'EndOfMonthStatus'}, inplace=True)

sheet1df.rename(columns=lambda x: x.replace('Endof%', 'EndOfMonthStatus'),
                inplace=True)
</code></pre>
<p>You can use <code>str.replace</code>:</p> <pre><code>df.columns = df.columns.str.replace('(?&lt;=EndOf)(\w+)(?=Status)', 'Month') </code></pre>
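<p>One caveat (my observation, not part of the original answer): the question shows both <code>EndOf...</code> and <code>Endof...</code> spellings, so a case-insensitive match on the whole name may be safer:</p> <pre><code>df.columns = df.columns.str.replace(r'(?i)^endof\w+status$',
                                    'EndOfMonthStatus', regex=True)
</code></pre>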
python|pandas|rename|wildcard|startswith
2
1,903,026
55,212,467
Downloading just the .torrent file from a magnet uri. Not sure what I'm actually downloading
<p>Given a magnet file, I'm trying to get a <code>.torrent</code> file using the Python bindings for libtorrent.</p> <pre><code>#!/usr/bin/env python import libtorrent as lt import time import sys import random ses = lt.session() r = random.randrange(10000, 49000) ses.listen_on(r, r+50) print("Listening on ports %s - %s." % (r, r+50)) params = { 'save_path': '.', 'storage_mode': lt.storage_mode_t(2), 'paused': False, 'auto_managed': True, 'duplicate_is_error': True, 'file_priorities': [0]*5 } link = "magnet:?xt=urn:btih:209c8226b299b308beaf2b9cd3fb49212dbd13ec&amp;dn=Tears+of+Steel&amp;tr=udp%3A%2F%2Fexplodie.org%3A6969&amp;tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&amp;tr=udp%3A%2F%2Ftracker.empire-js.us%3A1337&amp;tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&amp;tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337&amp;tr=wss%3A%2F%2Ftracker.btorrent.xyz&amp;tr=wss%3A%2F%2Ftracker.fastcast.nz&amp;tr=wss%3A%2F%2Ftracker.openwebtorrent.com&amp;ws=https%3A%2F%2Fwebtorrent.io%2Ftorrents%2F&amp;xs=https%3A%2F%2Fwebtorrent.io%2Ftorrents%2Ftears-of-steel.torrent" h = lt.add_magnet_uri(ses, link, params) ses.add_extension('ut_metadata') ses.add_extension('ut_pex') ses.add_extension('metadata_transfer') ses.add_dht_router("router.utorrent.com", 6881) ses.add_dht_router("router.bittorrent.com", 6881) ses.add_dht_router("dht.transmissionbt.com", 6881) ses.add_dht_router("dht.aelitis.com", 6881) ses.start_dht() ses.start_lsd() ses.start_upnp() ses.start_natpmp() while (not h.has_metadata()): time.sleep(1) status = ses.status() print("Seeking metadata for torrent (%s DHT nodes online)." % status.dht_nodes) torinfo = h.get_torrent_info() torfile = lt.create_torrent(h.get_torrent_info()) f = open("torrentfile.torrent", "wb") f.write(lt.bencode(torfile.generate())) f.close() </code></pre> <p>Several minutes later the transfer is complete and I <code>cat</code> the results:</p> <pre><code>[me@localhost torrent]$ cat torrentfile.torrent d8:announce23:udp://explodie.org:696913:announce-listll23:udp://explodie.org:696934:udp://tracker.coppersurfer.tk:696931:udp://tracker.empire-js.us:133740:udp://tracker.leechers-paradise.org:696933:udp://tracker.opentrackr.org:133726:wss://tracker.btorrent.xyz25:wss://tracker.fastcast.nz32:wss://tracker.openwebtorrent.comee13:creation datei1552857262e4:info0:e[me@localhost torrent]$ </code></pre> <p>The expected output is a binary <code>.torrent</code> file that contains all the file parts and hashes, etc. Some (possibly) relevant system info:</p> <pre><code>[me@localhost torrent]$ python --version Python 2.7.14 [me@localhost torrent]$ python -c "import libtorrent; print libtorrent.version" 1.0.10.0 [me@localhost torrent]$ uname -a Linux ip-172-31-53-167.ec2.internal 4.14.104-95.84.amzn2.x86_64 #1 SMP Sat Mar 2 00:40:20 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux </code></pre> <p>Any suggestions would be appreciated. I'm using sample code that is practically identical to snippets that are claimed to work for others. Thank you.</p>
<p>This looks like an ABI issue introduced in 1.0.10.</p> <p>If you look at the <a href="https://github.com/arvidn/libtorrent/releases/tag/libtorrent-1_0_10" rel="nofollow noreferrer">changelog</a> for 1.0.10, it introduced a new type for bencoded entries (<code>preformatted</code>). This was to preserve invalid key ordering in torrent files (to allow for re-encoding it and produce the same info-hash).</p> <p>Unfortunately this broke the ABI with previous 1.0.x releases. I <a href="https://github.com/arvidn/libtorrent/commit/76381835be19da2f8f1fc501445e31d32e6d83e4" rel="nofollow noreferrer">fixed</a> this in the <code>RC_1_0</code> branch, for a release in 1.0.12, but apparently this was never released.</p> <p>In short, it looks like your python binding library is built with a version prior to 1.0.10, but your libtorrent library was 1.0.10 or later.</p> <p>As long as the python bindings and the main library are from the same release of libtorrent, you should be good.</p>
python|python-2.7|bittorrent|libtorrent
1
1,903,027
55,555,791
How to iterate through all characters in string then check if any characters contain lowercase, uppercase, digits, and punctuation and return True
<p>I'm trying to check if a string contains any characters that are lowercase, uppercase, a digit, or punctuation, and if any are then return True, but whenever I run it the code only checks the first character. Am I iterating incorrectly?</p> <p>I've tried to create conditionals that check if a character is lowercase, uppercase, a digit, or punctuation, and if it is then return true, else return false.</p> <p>This is what I have at the moment:</p> <pre><code>def check_characters(password, characters):
    '''Put your docstring here'''
    for i in password:
        if i.islower():
            return True
        if i.isupper():
            return True
        if i.isdigit():
            return True
        if i.punctuation():
            return True
        else:
            return False

def main():
    password = "n11+"
    print(check_characters(password, ascii_lowercase))
    print(check_characters(password, ascii_uppercase))
    print(check_characters(password, digits))
    print(check_characters(password, punctuation))
</code></pre> <p>I expect it to return True for the lowercase call if the string contains a lowercase character, and the same for the other calls, but the actual output is all True when it should only be true for the lowercase, digit, and punctuation.</p>
<p>Use this function:</p> <pre><code>import string

def check_characters(password, characters):
    '''Put your docstring here'''
    for i in password:
        if i.islower():
            return True
        if i.isupper():
            return True
        if i.isdigit():
            return True
        if i in string.punctuation:
            return True
    return False
</code></pre> <p>The problem in your version is the trailing <code>else: return False</code>: every branch returns on the very first character, either <code>True</code> if it matches one of the tests or <code>False</code> from the <code>else</code>, so the rest of the string is never examined. Return <code>False</code> only after the loop has finished. Also, strings have no <code>punctuation()</code> method; test membership in <code>string.punctuation</code> instead.</p>
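<p>As a side note (my suggestion, not from the original answer), the whole check collapses to one line with <code>any()</code>, and the otherwise unused <code>characters</code> parameter can carry the character class to test against, which is what <code>main()</code> already passes:</p> <pre><code>import string

def check_characters(password, characters):
    """Return True if any character of password is in characters."""
    return any(ch in characters for ch in password)

password = "n11+"
print(check_characters(password, string.ascii_lowercase))  # True
print(check_characters(password, string.ascii_uppercase))  # False
print(check_characters(password, string.digits))           # True
print(check_characters(password, string.punctuation))      # True
</code></pre>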
python
0
1,903,028
57,682,751
How to convert a ctypes array of c_uint to a numpy array
<p>I have the following ctypes array:</p> <pre><code>data = (ctypes.c_uint * 100)() </code></pre> <p>And I want to create a numpy array <code>np_data</code> containing the integer values from ctypes array data (the ctypes array is obviously populated later with values)</p> <p>I have seen that there is a ctypes interface in numpy (<a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.ctypes.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.ctypes.html</a>) but as far as I understood this is only to get ctypes from a numpy array and not the opposite.</p> <p>I can obviously traverse <code>data</code> and populate <code>np_data</code> array items one by one, but I am wondering if there is a more efficient/straightforward way to do achieve this task.</p>
<p>You could use <a href="https://numpy.org/doc/stable/reference/routines.ctypeslib.html#numpy.ctypeslib.as_array" rel="nofollow noreferrer">[NumPy]: numpy.ctypeslib.as_array(obj, shape=None)</a>.</p> <blockquote> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import ctypes as ct
&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt;
&gt;&gt;&gt;
&gt;&gt;&gt; CUIntArr10 = ct.c_uint * 10
&gt;&gt;&gt;
&gt;&gt;&gt; ui10 = CUIntArr10(*range(10, 0, -1))
&gt;&gt;&gt;
&gt;&gt;&gt; [e for e in ui10]  # The ctypes array
[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
&gt;&gt;&gt;
&gt;&gt;&gt; np_arr = np.ctypeslib.as_array(ui10)
&gt;&gt;&gt; np_arr  # And the np one
array([10,  9,  8,  7,  6,  5,  4,  3,  2,  1], dtype=uint32)
</code></pre> </blockquote> <p>Didn't get to the specific line of code, but according to the <em>NumPy</em> docs the returned array shares its memory with the ctypes object (it's a view, not a copy), which makes this much faster than copying the elements &quot;manually&quot; from <em>Python</em>.</p>
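<p>A quick way to confirm the shared memory (my addition):</p> <pre><code>import ctypes as ct
import numpy as np

ui10 = (ct.c_uint * 10)(*range(10, 0, -1))
np_arr = np.ctypeslib.as_array(ui10)

np_arr[0] = 99     # modify through the NumPy view...
print(ui10[0])     # ...and the ctypes array sees it: 99
</code></pre>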
python|arrays|numpy|ctypes
3
1,903,029
57,509,664
List is not sorted using "sorted" and I get a wrong answer; I am not sure if the algorithm is right
<p>I am using <em>Kruskal's Algorithm</em> to find the <em>Minimum Spanning Tree</em>. I just followed the algorithm that was provided during the lecture, and I have to keep the format of having an <em>Edge</em> class. <em>sorted</em> does not seem to be working, so I can't tell whether the logic is wrong.</p> <p>Also, is there any reason for naming <code>parent</code> in the constructor of the <em>UnionFind</em> class?</p> <pre><code>import sys


class Edge:
    def __init__(self, start_ver, to_vertex, weight):
        self.start_ver = start_ver
        self.to_vertex = to_vertex
        self.weight = weight
        self.spanning_tree = False

    # def __lt__(self, other):
    #     return self.weight &lt; other.weight


class UnionFind:
    def __init__(self, ver_num):
        self.parent = None
        self.create_set(ver_num)

    def create_set(self, ver_num):
        self.parent = list(range(ver_num))

    def find_set(self, ver_num):
        if self.parent[ver_num] != ver_num:
            self.parent[ver_num] = self.find_set(self.parent[ver_num])
        return self.parent[ver_num]

    def merge_set(self, one_ver, two_ver):
        self.parent[self.find_set(one_ver)] = self.find_set(two_ver)


def MST_Kruskal(ver_num, edge_list):
    union_find = UnionFind(ver_num)
    mst_tree = list()
    sorted(edge_list, key=lambda vertex: vertex.weight)

    for edge in edge_list:
        if not edge.spanning_tree:
            if union_find.find_set(edge.start_ver) != union_find.find_set(edge.to_vertex):
                mst_tree.append(edge)
                if len(mst_tree) == ver_num - 1:
                    edge.spanning_tree = True
                    union_find.merge_set(edge.start_ver, edge.to_vertex)
                    sorted(edge_list, key=lambda vertex: vertex.weight)
                else:
                    return

    total = 0
    for x in mst_tree:
        total += x.weight
    print(total)


def main():
    edge_list = list()
    vertex_num, edge_num = map(int, (sys.stdin.readline().split()))
    for e in range(edge_num):
        start, end, weight = map(int, sys.stdin.readline().split())
        edge = Edge(start-1, end-1, weight)
        edge_list.append(edge)
    MST_Kruskal(vertex_num, edge_list)


if __name__== "__main__":
    main()
</code></pre> <p><strong>input</strong></p> <pre><code>4 5
1 2 10
2 3 15
1 3 5
4 2 2
4 3 40
</code></pre> <p><strong>expected output</strong></p> <pre><code>17
</code></pre>
<p>You are confusing the function <code>sorted(iterable[, key][, reverse])</code> with the list method <code>sort(*, key=None, reverse=None)</code>.</p> <p><code>sorted</code>, according to the documentation: &quot;Return a new sorted list from the items in iterable.&quot; While <code>sort</code>, according to the documentation: &quot;This method sorts the list in place, using only &lt; comparisons between items. Exceptions are not suppressed - if any comparison operations fail, the entire sort operation will fail (and the list will likely be left in a partially modified state).&quot;</p> <p>So for your code to work you need to change <code>sorted(edge_list, key=lambda vertex: vertex.weight)</code> to <code>edge_list.sort(key=lambda vertex: vertex.weight)</code>.</p> <p>This assumes that everything else is correct in your code.</p>
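<p>Concretely, both offending lines in <code>MST_Kruskal</code> become:</p> <pre><code># sort in place by weight; bare `sorted` returns a new list and
# leaves `edge_list` itself untouched
edge_list.sort(key=lambda edge: edge.weight)
</code></pre>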
python-3.x|minimum-spanning-tree|kruskals-algorithm
2
1,903,030
57,519,484
Create a different dictionary from an existing dictionary in python
<p>I have a dictionary in python which is of the format</p> <pre><code>dict = {
    'p_id': 254,
    's_id': 1,
    'object_cnt': 4,
    'type0': 0,
    'address0': 65500,
    'size0': 2,
    'value0': 23.4,
    'type1': 1,
    'address1': 65535,
    'size1': 2,
    'value1': 45.7,
    'type2': 2,
    'address2': 65,
    'size2': 0,
    'value2': 1,
    'type3': 3,
    'address3': 535,
    'size3': 0,
    'value3': 0,
}
</code></pre> <p>Since the object_cnt is 4, there will be four objects in this dictionary.</p> <pre><code>'type0': 0,
'address0': 65500,
'size0': 2,
'value0': 23.4,
</code></pre> <p>The above can be considered as one object. I want to create a dictionary of the form</p> <pre><code>new_dict = {
    '65500' : (2,23.4),
    '65535' : (2,45),
    '65'    : (0,1),
    '535'   : (0,0),
}
# address of an object as key and (size_object, value_object) as value
</code></pre> <p>Can someone help me with this?</p> <p>Thanks</p>
<p>Use string formatting to build the keys. Note the question's dictionary is renamed to <code>d</code> here, since <code>dict</code> shadows the built-in type:</p> <pre><code>res = {d['address%s' % i]: (d['size%s' % i], d['value%s' % i])
       for i in range(d['object_cnt'])}
</code></pre> <p>Output:</p> <pre><code>{65: (0, 1), 535: (0, 0), 65500: (2, 23.4), 65535: (2, 45.7)}
</code></pre> <p>If you need string keys exactly as in the expected output, wrap the key in <code>str(...)</code>.</p>
python|python-3.x|dictionary
2
1,903,031
57,532,851
Getting text from <span> tag element
<p>I have a table in which each row has a xpath and within each row a column is embedded. There is a tag in row's xpath that changes text based on what you choose on that page.</p> <pre><code>&lt;div class='xyz'&gt; &lt;span&gt; some text &lt;/span&gt; &lt;/div&gt; </code></pre> <p>I am doing <code>//div[@class='xyz']/span.text()</code></p> <p>However, I am not able to get the text from here. </p> <p>I am using python with VSCode.</p>
<p>The syntax to get the text from span tag using <em>xpath</em> is incorrect. </p> <p>This is the proper <em>xpath</em>, </p> <pre><code>//div[@class='xyz']/span/text() </code></pre> <p>Or you can use <code>.text</code> with web driver <code>find_element_by_xpath</code> to extract text.</p> <pre><code>span_text = driver.find_element_by_xpath("//div[@class='xyz']/span").text </code></pre> <p>If <code>/span</code> is the only child element of <code>//div[@class='xyz']</code> then you can use this path instead of the one above <code>driver.find_element_by_xpath("//div[@class='xyz']").text</code></p> <p>You can read about how to use xpath with selenium webdriver <a href="https://selenium-python.readthedocs.io/locating-elements.html#locating-by-xpath" rel="nofollow noreferrer">here</a>.</p>
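<p>Since the question is tagged <em>webdriverwait</em>, a variant that waits for the element before reading it may be more robust (the 10-second timeout is an arbitrary choice of mine):</p> <pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

span_text = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.XPATH, "//div[@class='xyz']/span"))
).text
</code></pre>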
python-3.x|selenium-webdriver|xpath|css-selectors|webdriverwait
2
1,903,032
42,324,447
Python 3 inside Sublime Text with Anaconda on a mac
<p>I cannot seem to get Python3 interpreter to build inside sublime text using Anaconda. I have tried all possible configurations but to no avail, the system does not seem recognize installed libraries and throws an importError back at me.</p> <p>this is my python project's settings for anaconda:</p> <pre><code>{ "build_systems": [ { "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)", "name": "Anaconda Python Builder", "selector": "source.python", "shell_cmd": "\"/usr/local/bin/python3\" -u \"$file\"" } ], "folders": [ { "path": "Practice" } ], "settings": { "python_interpreter": "python3" } } </code></pre> <p>edit: python3 installed with homebrew</p>
<p>Tools -> Command Palette -> Anaconda: Set Python interpreter</p>
python-3.x|sublimetext3|sublime-anaconda
4
1,903,033
53,859,112
SMC-Python Adding and Removing Blacklisted IP's
<p>I'm trying to programmatically add a blacklisted IP to the firewall. I tried this but get an error. I'm not that new to python, but I'm not all that proficient in reading the documentation, so here it is if it helps.</p> <p><a href="https://media.readthedocs.org/pdf/smc-python/latest/smc-python.pdf" rel="nofollow noreferrer">https://media.readthedocs.org/pdf/smc-python/latest/smc-python.pdf</a></p> <p><a href="https://smc-python.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">https://smc-python.readthedocs.io/en/latest/index.html</a></p> <pre><code>from smc import session
from smc_monitoring.monitors.blacklist import BlacklistQuery
from smc.core.engines import Engine
from smc.administration.system import System

session.login(url='http://nope', api_key='supersecret')
print("logged in")

# Method 1 ERROR
system = System()
print(system.smc_version)
system.blacklist(src='1.1.1.1/32', dst='2.2.2.2/32', duration=3600)

session.logout()
</code></pre> <blockquote> <pre><code>Traceback (most recent call last):
  File "/home/matthew/PycharmProjects/GitSMC/BlacklistTest.py", line 12, in &lt;module&gt;
    system.blacklist(src='1.1.1.1/32', dst='2.2.2.2/32', duration=3600)
  File "/home/matthew/PycharmProjects/GitSMC/venv/lib/python3.7/site-packages/smc/administration/system.py", line 159, in blacklist
    json=prepare_blacklist(src, dst, duration, **kw))
  File "/home/matthew/PycharmProjects/GitSMC/venv/lib/python3.7/site-packages/smc/base/mixins.py", line 32, in make_request
    result = getattr(request, method)()
  File "/home/matthew/PycharmProjects/GitSMC/venv/lib/python3.7/site-packages/smc/api/common.py", line 66, in create
    return self._make_request(method='POST')
  File "/home/matthew/PycharmProjects/GitSMC/venv/lib/python3.7/site-packages/smc/api/common.py", line 101, in _make_request
    raise err
smc.api.exceptions.ActionCommandFailed: Invalid JSON format: At line 1 and column 17, end_point1 is not recognized as JSON attribute.
</code></pre> </blockquote>
<p>There are multiple ways to blacklist, either through the System entry point like you have above, or individually against a single firewall/cluster. If using the System entry point, the blacklist entry will go to all SMC managed firewalls. Based on the message, it appears you might be using a newer version of smc-python (i.e. >6.5.x).</p> <p>In that case it's best to use the engine level blacklisting:</p> <pre><code>from smc.elements.other import Blacklist engine = Engine('myfw') blacklist = Blacklist() blacklist.add_entry(src='1.1.1.1/32', dst='2.2.2.2/32') engine.blacklist_bulk(blacklist) </code></pre> <p>I just noticed that the System entry point does not have a blacklist function for SMC 6.5 (which hasn't technically been fully certified for this library yet), but I will add to the develop branch as 6.5.x will be officially supported in the next couple of weeks.</p> <p>If you are using SMC version &lt;= 6.4.x, you can use the engine.blacklist, or System.blacklist commands.</p> <p>DLP</p>
python
3
1,903,034
54,027,656
MSSQL 'Numeric Value Out of Range' Error for 20 digit python Long Int into Numeric(24,0) Column
<p>I am attempting to insert a 20 digit, 64 bit python long integer into a MSSQL Numeric(24,0) column. This results in a 'Numeric Value Out of Range' Error from MSSQL. I am using the pypyodbc module and ODBC Driver 13 for SQL Server to INSERT the data from my python application. </p> <p>I've tried using a Numeric(38,0) column in SQL just to test the limit, but receive the same error. In the insert statement, I've also attempted explicitly casting the id as a Numeric(24,0) data type. All attempts resulted in the same error. </p> <pre><code>#Python SQL Insert Code id = 'ADD7A9FA-E77B-4BBB-92AA-3D9C7BBB44D0' idlist = id.split('-') val = [int(idlist[0] + idlist[1] + idlist[2], 16), int(idlist[3] + idlist[4], 16)] cmd = "INSERT INTO jssuser.dbo.API_VIMSPost (\ [id_int1], \ [id_int2]) \ VALUES (?, ?)" #function to simplify the use of pypyodbc sqlcmd.sqlCmd(cmd, values = val) #SQL Table Code USE [jssuser] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[API_VIMSPost]( [id_int1] [numeric](24, 0) NOT NULL, [id_int2] [numeric](24, 0) NOT NULL, CONSTRAINT [PK_PostID] PRIMARY KEY CLUSTERED ( [id_int1] ASC, [id_int2] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO </code></pre> <p>I would expect that a python long int of 20 digits and 64 bits would fit into a MSSQL column of type Numeric(20,0) or greater. However, an Insert results in a 'Numeric Value Out of Range' error.</p>
<p>The size of Python integer values is only limited by available memory; they are not restricted to an arbitrary number of bits. However, the largest integer value that the SQL Server ODBC driver(s) can handle is a (64-bit signed) <code>bigint</code> whose maximum positive value is <code>math.pow(2, 63) - 1</code>, or <code>9223372036854775807</code> which is nineteen (19) digits long.</p> <p>When pypyodbc tries to pass a 20-digit integer the ODBC driver chokes on it, so this fails</p> <pre class="lang-python prettyprint-override"><code>x = 13807673592980537824 crsr.execute("CREATE TABLE ##tmp (id INT PRIMARY KEY, id_int1 NUMERIC(24, 0))") sql = "INSERT INTO ##tmp (id, id_int1) VALUES (?, ?)" params = (1, x) crsr.execute(sql, params) </code></pre> <p>However, the following works because pypyodbc doesn't tell the ODBC driver to expect an integer</p> <pre class="lang-python prettyprint-override"><code>x = 13807673592980537824 crsr.execute("CREATE TABLE ##tmp (id INT PRIMARY KEY, id_int1 NUMERIC(24, 0))") sql = "INSERT INTO ##tmp (id, id_int1) VALUES (?, ?)" params = (1, Decimal(x)) # convert Python `int` to Python `decimal.Decimal` crsr.execute(sql, params) </code></pre>
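<p>Applied to the asker's GUID-splitting code, the fix is just to wrap the oversized integers in <code>Decimal</code> before binding them (my sketch, reusing the question's <code>sqlcmd.sqlCmd</code> helper):</p> <pre><code>from decimal import Decimal

id = 'ADD7A9FA-E77B-4BBB-92AA-3D9C7BBB44D0'
idlist = id.split('-')
val = [Decimal(int(idlist[0] + idlist[1] + idlist[2], 16)),   # two 64-bit halves
       Decimal(int(idlist[3] + idlist[4], 16))]               # of the GUID

cmd = "INSERT INTO jssuser.dbo.API_VIMSPost ([id_int1], [id_int2]) VALUES (?, ?)"
sqlcmd.sqlCmd(cmd, values=val)
</code></pre>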
python|sql-server|pypyodbc
1
1,903,035
14,371,627
How can I put a file upload form into my Pyramid app?
<p>I am going through <a href="http://docs.pylonsproject.org/projects/pyramid_cookbook/en/latest/forms/file_uploads.html" rel="nofollow">this tutorial</a>, but it is very confusing to me. First of all, in the <code>store_mp3_view</code> function, both <code>file</code> and <code>filename</code> are referenced when declaring <code>filename</code> and <code>input_file</code> (first two lines). From the form above, it seems that only <code>file</code> is an input (<code>filename</code> is never mentioned). Is <code>filename</code> automatically input?</p> <p>Additionally, is the loop at the end writing the data to the output file necessary? For my application, I want the upload process to start a separate script that parses data from the file. Do I have to first put the data into an output file and then parse that, or was that just for the example?</p>
<blockquote> <p>From the form above it seems that only file is an input (filename is never mentioned). Is filename automatically input?</p> </blockquote> <p>Correct: the <code>filename</code> attribute should be available any time you upload a file from a form.</p> <blockquote> <p>Do I have to first put the data into an output file and then parse that or was that just for the example?</p> </blockquote> <p><code>input_data</code> is a file object, but you do not need to write it out to disk before parsing it; this example just happens to write it to disk.</p>
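<p>For concreteness, a minimal sketch of the view side (names assume a form input called <code>file</code>, as in the tutorial):</p> <pre><code>def store_mp3_view(request):
    # request.POST['file'] is a FieldStorage-like object
    field = request.POST['file']
    filename = field.filename    # original client-side file name
    input_file = field.file      # file-like object holding the uploaded bytes

    data = input_file.read()     # parse directly; no need to write to disk first
    # ... hand `data` to your parsing script here ...
</code></pre>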
python|file-upload|pyramid
1
1,903,036
68,464,336
Split log data by MB size in Python
<p>There is a process that generates log data of size more than 10 mb. I have been instructed to split the data into 10mb chunks maximum and write to text files means if the log size is 25 mb then it should be divided into 3 parts - 10, 10, 5mb and written to 3 text files. Also the second and third text file names should be like &quot;file..._1&quot;, &quot;file..._2&quot;. To write the _1 and _2, I am using the code - <code>filename=&quot;file&quot; + &quot;_&quot; + np.arange(1, 10, 1) + &quot;.txt&quot;</code> but when it is creating a new file with underscore, it is giving UFuncTypeError.</p> <p>My code is:</p> <pre class="lang-py prettyprint-override"><code>def writelog(self, filename, msgstr): #writing log to .txt file filename = &quot;log-&quot; + str(date.today()) + &quot;.txt&quot; current_date_and_time = str(datetime.now()) logfile = open(filename, 'a') logfile.write(current_date_and_time + msgstr) logfile.close() #checking if the text file is more than 10mb, then create a new file filelocation = &quot;...location.../log-2021-07-20.txt&quot; filesize = os.stat(filelocation) sizeoflog = filesize.st_size / (1024 * 1024) print('Size of log in MB- ' + str(sizeoflog)) if sizeoflog &gt; 10: filename = &quot;log-&quot; + str(date.today()) + &quot;_&quot; + np.arange(1, 10, 1) + &quot;.txt&quot; logfile = open(filename, 'a') logfile.write(current_date_and_time + msgstr) logfile.close() return filename </code></pre> <p><code>msgstr</code> is a dictionary that I passed in main.py</p> <p>So, the summary is:</p> <ol> <li>split the data into 10mb chunks each and write to file</li> <li>first file name will be like <code>log-today's date.txt</code>, second file name will be <code>log-today's date_1.txt</code> and so on.</li> <li>each file content should start with <code>current_date_and_time</code> and then the <code>msgstr</code>.</li> </ol> <p>How can I address these problems ? I am a beginner in Python..</p>
<p>Here's my approach. I created 2 simple helper functions, one for the filesize (with a <code>try: except</code> block) and another to find the last logfile with a size under 10MB.</p> <p>Since they don't care about the class itself, you should use the <code>@staticmethod</code> <a href="https://realpython.com/instance-class-and-static-methods-demystified/#static-methods" rel="nofollow noreferrer">decorator</a>. Note that you need to change the method calls to both <code>getsize()</code> and <code>find_current_log()</code> as I don't know the class name.</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime import os class ClassNameGoesHere: @staticmethod def getsize(filename): try: return os.stat(filename).st_size / 1048576 except FileNotFoundError: return 0 @staticmethod def find_current_log(filename): base_filename = os.path.basename(filename) if '_' in base_filename: counter = int(base_filename.split('_')[1].split('.')[0]) else: counter = 0 while ClassNameGoesHere.getsize(filename) &gt;= 10: counter += 1 if '_' in base_filename: base_filename = f&quot;{base_filename.split('_')[0]}_{counter}.txt&quot; else: base_filename = f&quot;{base_filename.split('.')[0]}_{counter}.txt&quot; filename = f'{os.path.dirname(filename)}{os.sep}{base_filename}' return filename def writelog(self, filename, msgstr): filename = ClassNameGoesHere.find_current_log(filename) with open(filename, 'a') as outfile: outfile.write(f'{datetime.now()} | {msgstr}\n') somelogger = ClassNameGoesHere() somelogger.writelog('path/to/file/log-2021-07-21.txt', 'this is a test messsage') </code></pre>
python
0
1,903,037
25,469,770
Django raw query - use dot notation for traversing related model's fields
<p>This is my <code>raw</code> query in Django</p> <pre><code>q = Book.objects.raw(''' SELECT * FROM ( SELECT "book"."name", "author"."name", RANK() OVER (PARTITION BY "author"."id") AS "rank" FROM "book" INNER JOIN "book" ON ("book"."author_id" = "author"."id") ) AS "book_table" WHERE "rank" &lt; %s ''', 10) </code></pre> <p>In the above queryset, the <code>name</code> field is ambiguous. I pass this object to another library which requires the usage of dot notation i.e. <code>q[0].name</code> should refer to the book's name and <code>q[0].author.name</code> should refer to author's name. Is it possible to use dot notation with raw query (last resort is using <code>"author"."name" AS "author_name"</code>, but that'll introduce redundant code because those functions take in input from Django's managed queries too, which support the dot notation). </p>
<p>It would probably be better to use Django's own query syntax as much as possible, and only add the rank field manually. You can do this with <code>extra</code> rather than <code>raw</code>, and use <code>select_related</code> to traverse the relationship. Something like this:</p> <pre><code>Book.objects.select_related('author').extra(
    select={'rank': 'RANK() OVER (PARTITION BY "author"."id")'},
    where=['"rank" &lt; 10']
)
</code></pre>
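<p>Worth noting (my addition, not part of the original answer): <code>extra()</code> is discouraged in modern Django and slated for deprecation, and on Django 2.0+ the same rank can be expressed with window expressions:</p> <pre><code>from django.db.models import F, Window
from django.db.models.functions import Rank

books = Book.objects.select_related('author').annotate(
    rank=Window(
        expression=Rank(),
        partition_by=[F('author_id')],
        order_by=F('id').asc(),  # RANK() needs an ordering to be meaningful
    )
)
# Depending on the Django version, filtering on a window annotation may
# require wrapping the queryset in a subquery (native filter support for
# window expressions arrived in Django 4.2).
</code></pre>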
python|django|django-queryset
-2
1,903,038
25,496,205
When function can only be called once
<p>The way it is I can only call <code>funct()</code> once per iteration. So I can't do this:</p> <pre><code>result=[funct(arg) for arg in args if arg and funct(arg)] </code></pre> <p>If there is a connection drop this function returns <code>None</code>. If None is returned I don't want to add it to the resulted list. How to achieve it?</p> <pre><code>def funct(arg): if arg%2: return arg*2 args=[1,2,3,None,4,5] result=[funct(arg) for arg in args if arg] print result </code></pre>
<p>You can use filter as you will not be returning any <code>0</code> values to your list:</p> <pre><code>result = filter(None,map(funct,filter(None,args))) </code></pre> <p>It will filter your <code>args</code> list and any <code>None</code> values returned</p> <p>On a list with 20 elements args:</p> <pre><code>In [18]: %%timeit [val for arg in args if arg for val in [funct(arg)] if val is not None] ....: 100000 loops, best of 3: 10.6 µs per loop In [19]: timeit filter(None,map(funct,filter(None,args))) 100000 loops, best of 3: 6.42 µs per loop In [20]: timeit [a for a in [funct(arg) for arg in args if arg] if a] 100000 loops, best of 3: 7.98 µs per loop </code></pre>
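<p>One caveat from me: on Python 3, <code>filter</code> and <code>map</code> return lazy iterators rather than lists, so the equivalent there would be:</p> <pre><code>result = list(filter(None, map(funct, filter(None, args))))
</code></pre>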
python
2
1,903,039
25,504,751
refactor legacy python code: from u'...' to '...'
<p>I have a legacy code project which uses a lot of unicode strings like this: <code>u'...'</code></p> <p>I want to update the code to use <code>from __future__ import unicode_literals</code></p> <p>Any automated help from pycharm or an other tool?</p> <p><strong>Update</strong></p> <p>A simple search+replace does not work, since the code could contain strings like <code>'fuu'</code> and I don't want that to be replace to <code>'fu'</code>.</p>
<p>Yes, pycharm has automated find and replace with regex matching. You could also use a simple tool like <code>sed</code>. </p> <p><strong>But be forewarned, it is not the case that you can blindly change all modules to include the import:</strong> </p> <pre><code>from __future__ import unicode_literals </code></pre> <p>This can cause unintended problems, the issue is not with strings which were <code>u'unicode'</code> being changed into <code>'unicode'</code>, that part is of no consequence. The issue is with strings that actually should have been <code>'bytestrings'</code> being changed into unicode.</p> <p>Before you make this global change, you need to ensure that all places where bytestrings are used can really safely be changed to unicode. Those that can't need to be prefixed as <code>b'bytestrings'</code>. </p>
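<p>If you do automate it, a rough sketch of a regex pass (my code; review the output, since a plain regex can also touch string contents or comments; the <code>tokenize</code> module is the safer route):</p> <pre><code>import re

def strip_u_prefixes(source):
    # match a standalone u/U prefix right before a quote; the lookbehind
    # keeps identifiers like 'fuu' intact
    return re.sub(r"(?&lt;![\w'\"])[uU](?=['\"])", "", source)

print(strip_u_prefixes("x = u'abc' + 'fuu'"))   # x = 'abc' + 'fuu'
</code></pre>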
python|unicode|pycharm|automated-refactoring
2
1,903,040
44,745,964
How to submit a form
<p>I want to enter the web site <a href="https://paste.ubuntu.com/" rel="nofollow noreferrer">https://paste.ubuntu.com/</a>, send my data (my code), and then get the result back.</p> <p>I tried the code below but it didn't work.</p> <pre><code>import requests

url = "http://duckduckgo.com/html"
payload = {'q':'python'}
r = requests.post(url, payload)
with open("requests_results.html", "w") as f:
    f.write(r.content)
</code></pre> <p>When I tried it, I just get a weird HTML form, not the data I want.</p>
<p>This code does some work, try for yourself. Requested html file uses <strong>relative addressing</strong>, so it looks ugly without adding <strong>absolute addressing</strong>.</p> <pre><code>import requests code = ''' line 1 line 2 line 3 ''' payload = { 'poster': 'Default Poster', 'syntax': 'python', 'content': code } url = 'https://paste.ubuntu.com/' r = requests.post(url, data=payload) text = r.text # If you want your html file to look the same as original # then uncomment next line, otherwise it's ugly # text = text.replace('href="/','href="https://paste.ubuntu.com/') with open('requests_results.html', 'w') as f: f.write(text) </code></pre>
python|python-3.x|python-requests
0
1,903,041
23,577,229
Information on managing large, multi-faceted enterprise Python codebases?
<p>I've googled and googled, but have found almost nothing in the way of discussions or best practices in managing larger enterprise Python codebases. Here, I'm simply soliciting any and all pointers to such information. Here's some background and some of the questions I'm looking to answer.</p> <p>We're long-time Java developers, who have solved similar problems to those mentioned below largely using well established Java best practices, as well as Maven, Ant and a Sonatype Nexus repo.</p> <p>I'm talking internal software only here. We're not looking to distribute anything Python-based. We've got multiple development groups using Python, each developing sharable utility code libraries, final web applications and stand-alone tools, all in pure Python. Each group has its own Github source repository.</p> <p>How do we manage our shareable code, both within a group and across groups? Do we create eggs (or something similar) and distribute and install them into the Python system? If so, would we store them in our Nexus repo like our Java jars, or is there a more Python-specific method of internal package distribution? Or, do we just share raw code, checking out sources from multiple Github repos?</p> <p>If we share raw code, how do we manage getting the Python searchpath right as we bring together code from multiple repositories?</p> <p>How do we manage package namespaces when we want our packages to all live in a com.ourcompany base namespace? It seems like python isn't too happy when you bring together source trees with overlapping namespaces.</p> <p>How do we manage third party package versioning? I've never seen easy_install or pip passed a version number. How do we lock down third party package versions?</p> <p>Do tools exist to aid in Python code reviews, CI, regression testing, etc.?</p> <p>We're relative newbies to Python code, so some of these questions may have fairly obvious answers. Still, I find it surprising that I can't find more information on managing larger Python codebases.</p> <p>What issues will we encounter that I haven't thought to ask about, or don't yet know enough to even know to ask about?</p> <p>Any valuable pointers will be greatly appreciated.</p>
<p>Well, I won't even try to answer <em>all</em> those (excellent) questions, but here are a few opinionated pointers which will hopefully help (as someone who works in both worlds, though more Java).</p> <h2>Packaging</h2> <blockquote> <p>If so, would we store them in our Nexus repo like our Java jars, or is there a more Python-specific method of internal package distribution? Or, do we just share raw code, checking out sources from multiple Github repos?</p> </blockquote> <p>Packaging in Python is historically a bit of a mess IMHO, though it feels like it's improving. <a href="https://docs.python.org/2/distutils/" rel="nofollow noreferrer">Distutils</a> is the major / native tool here - I've not used it much; it feels slightly scary in places. In general, also check the <a href="https://python-packaging-user-guide.readthedocs.org/en/latest/current.html" rel="nofollow noreferrer">recommended tools</a>.</p> <p><a href="https://pip.pypa.io/en/latest/" rel="nofollow noreferrer">Pip</a> has all but won the war of mindshare, especially when installing 3rd party libraries. I've not solved the local library problem myself (maybe someone else reading has), but if I were, I'd probably opt for Pip with local/network-disk repos, e.g. by <a href="https://pip.pypa.io/en/latest/user_guide.html#installing-from-wheels" rel="nofollow noreferrer">installing from wheels</a>.</p> <p>Another option (which can cause all sorts of hassles itself) is to package in your OS's native packager, be it Debian-style <a href="http://en.wikipedia.org/wiki/Advanced_Packaging_Tool" rel="nofollow noreferrer">apt</a> or by <a href="https://docs.python.org/2/distutils/builtdist.html#creating-rpm-packages" rel="nofollow noreferrer">creating RPMs</a>, etc. Of course, Windows not so much.</p> <h3>Versioning etc</h3> <blockquote> <p>How do we manage third party package versioning? I've never seen easy_install or pip passed a version number.</p> </blockquote> <h3>Pip</h3> <p><a href="https://pip.pypa.io/en/latest/reference/pip_install.html#requirement-specifiers" rel="nofollow noreferrer">Pip definitely supports version specifiers</a>. Turns out <a href="http://peak.telecommunity.com/DevCenter/EasyInstall#changing-the-active-version" rel="nofollow noreferrer">Easy Install does too</a>. I suppose many people / smaller projects opt for latest-and-greatest, which of course isn't always as "appropriate" in the enterprise...</p> <h3>Virtualenv</h3> <p>No discussion of versioning and Python would be complete without a Python 2/3 reference, but I'm sure you're aware of all this already.</p> <p>More important, then, would be to mention <a href="https://virtualenv.pypa.io/en/latest/" rel="nofollow noreferrer">virtualenv</a>. It truly frees you from the mess you can get into when testing multiple versions, bearing in mind especially that your (*NIX) operating systems typically rely heavily on Python themselves. It's a big subject, so have a look at the docs.</p> <h2>Developer Tooling</h2> <blockquote> <p>Do tools exist to aid in Python code reviews, CI, regression testing, etc.?</p> </blockquote> <h3>Code Review</h3> <p>Very much so. Most code review tools are multi-language (it's just a formatting issue really), so just pick your favourite enterprise-friendly one, be it <a href="https://www.atlassian.com/software/crucible/overview" rel="nofollow noreferrer">Crucible</a>, Github's one (Barkeep?), <a href="https://code.google.com/p/gerrit/" rel="nofollow noreferrer">Gerrit</a>, or whatever.</p> <h3>CI</h3> <p>For CI you have almost as many options again. Running python apps is usually less involved than Java ones, so most CI systems, though often Java-focused, support Python. (FWIW, we use <a href="http://drone.io/" rel="nofollow noreferrer">drone.io</a> for <a href="https://code.google.com/p/quodlibet/" rel="nofollow noreferrer">Quod Libet</a>.) Jenkins should have no problem doing this, and it seems people have <a href="https://stackoverflow.com/questions/1091465/teamcity-for-python-django-continuous-integration">done so with TeamCity</a>.</p> <p>However, the "original" or "most Pythonic" is probably <a href="http://buildbot.net/" rel="nofollow noreferrer">Buildbot</a>, but I've not used it personally. Looks a lot newer than I remember, and it had quite a lot of support in the Python community I think...</p> <h3>Testing</h3> <p>For testing, though not <em>quite</em> as mature as JUnit / TestNG, check out the de-facto / JUnit-like unit testing <a href="https://docs.python.org/2/library/unittest.html" rel="nofollow noreferrer">unittest</a>, but also (nicer?) alternatives like <a href="https://nose.readthedocs.org/en/latest/" rel="nofollow noreferrer">nose.py</a>.</p> <p>For higher level (BDD) testing, try something like <a href="http://lettuce.it/tutorial/simple.html" rel="nofollow noreferrer">Lettuce</a> - as the name implies, heavily inspired by Cucumber - or maybe <a href="https://pythonhosted.org/behave/" rel="nofollow noreferrer">Behave</a>. I've not tried them, but common opinion is they're less mature than Cucumber / JBehave / Concordion / Rspec etc.</p>
python|maven|enterprise|devtools
0
1,903,042
36,199,581
How can I set values for each HiddenInput field in Django?
<p>I wrote codes, but I don't know how to set <code>'name'</code> and <code>'value'</code> of hidden tag with Django template. I read <a href="https://docs.djangoproject.com/ja/1.9/ref/forms/widgets/" rel="nofollow">Django's Widgets Docs</a>, but I couldn't find the way.</p> <pre><code>(Pdb) print(errors) &lt;ul class="errorlist"&gt;&lt;li&gt;friend_id&lt;ul class="errorlist"&gt;&lt;li&gt;This field is required.&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;add_remove&lt;ul class="errorlist"&gt;&lt;li&gt;This field is required.&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt; </code></pre> <p>First, I tried to write like</p> <pre><code>&lt;input type="hidden" name="friend_id" value="{{ user_info.user_id }}"&gt; </code></pre> <p>and</p> <pre><code>friend_id = request.POST.friend_id </code></pre> <p>But I couldn't get how to get POST values without Django's Form. So, I used Django's Form with following codes.</p> <p>views.py</p> <pre><code>from myapp.forms import HiddenUserPage hiddenform = HiddenUserPage if request.method == 'POST': hidden = hiddenform(request.POST) if hidden.is_valid(): from myapp.models import Friends try: friend_id = hidden.cleaned_data['friend_id'] add_remove = hidden.cleaned_data['add_remove'] if add_remove == "add": f = Friends(user_id=request.user.user_id, friend_id=friend_id) f.save() elif add_remove == "remove": f = Friends.objects.filter(user_id=request.user.user_id).get(friend_id=friend_id) f.delete() except: errors = "DB error" else: errors = hidden.errors else: hidden = hiddenform() errors = "" view = { 'errors': errors, 'hidden': hidden, } template = 'myapp/user/user_page.html' return render(request, template, view) </code></pre> <p>forms.py</p> <pre><code>class HiddenUserPage(forms.Form): friend_id = forms.CharField(widget=forms.HiddenInput()) add_remove = forms.CharField(widget=forms.HiddenInput()) </code></pre> <p>user_page.html</p> <pre><code> &lt;form method="POST" action="" class=""&gt; {% csrf_token %} &lt;p class="submit"&gt; &lt;button class="confirmbutton" type="submit"&gt; {% if is_friend %} remove friend &lt;!-- # I'd like to write like # --&gt; &lt;!-- &lt;input type="hidden" name="friend_id" value="remove"&gt; # --&gt; &lt;!-- &lt;input type="hidden" name="friend_id" value="{{ user_info.user_id }}"&gt; # --&gt; {{ hidden.add_remove }} {{ hidden.friend_id }} {% else %} add friend &lt;!-- &lt;input type="hidden" name="friend_id" value="add"&gt; # --&gt; &lt;!-- &lt;input type="hidden" name="friend_id" value="{{ user_info.user_id }}"&gt; # --&gt; {{ hidden.add_remove }} {{ hidden.friend_id }} {% endif %} &lt;/button&gt; &lt;/p&gt; &lt;/form&gt; </code></pre> <p>Sorry, my code is filthy.</p>
<p>It looks like the question is about providing initial data to the form; this is generally done in the view by passing <code>initial</code> to the form instantiation, e.g.:</p> <pre><code># In your views.py
def ...(...):  # inside your view function
    if request.method == 'GET':
        # Provide initial data to the form here.
        # Get your 'user_info' from models or sessions,
        # or wherever you keep it
        hidden = hiddenform(initial={"friend_id": user_info.user_id})
    if request.method == 'POST':
        hidden = hiddenform(request.POST)
        # Process posted form data
        ...
    # More code general for both HTTP verbs
    view = {'errors': errors, 'hidden': hidden}
    template = 'myapp/user/user_page.html'
    return render(request, template, view)
</code></pre> <p>You might also want to bind the form to model data directly; see <a href="https://docs.djangoproject.com/ja/1.9/ref/forms/api/#dynamic-initial-values" rel="nofollow">the docs for more info</a>.</p>
python|django|hidden
2
1,903,043
36,077,346
Python: How to handle missing values in a CSV?
<p>I have a given CSV sample as follows:</p> <pre><code>ID,ID_TYPE,OB_DATE,VERSION_NUM,MET_DOMAIN_NAME,OB_END_CTIME,OB_DAY_CNT,SRC_ID,REC_ST_IND,PRCP_AMT,OB_DAY_CNT_Q,PRCP_AMT_Q,METO_STMP_TIME,MIDAS_STMP_ETIME,PRCP_AMT_J 90, RAIN, 2006-01-01 00:00,1, WADRAIN,900,1,24109,1011,0,0,6, 2006-01-17 09:04,0, 150, RAIN, 2006-01-01 00:00,1, DLY3208,900,1,30747,1011,0,0,6, 2006-01-09 13:21,3, 174, RAIN, 2006-01-01 00:00,1, WADRAIN,900,1,24775,1011,0.2,0,6, 2006-01-17 09:04,0, </code></pre> <p>I would like to determine the weekday of each given date in my CSV. My code which achieves that looks as follows:</p> <pre><code>import csv from datetime import datetime as dt csv_file = open('raindata.csv') csv_reader = csv.DictReader(csv_file) field_names = list(csv_reader.fieldnames) if 'WEEKDAY' in field_names: print "data has error" elif 'RECWEEKDAY' in field_names: print "data has error" else: field_names.insert(field_names.index('OB_DATE') + 1, 'WEEKDAY') field_names.insert(field_names.index('METO_STMP_TIME') + 1, 'RECWEEKDAY') def get_weekday(ob_date): return dt.strptime(ob_date, ' %Y-%m-%d %H:%M').strftime('%A') output = open('raindata.csv','w') csv_writer = csv.DictWriter(output, field_names) csv_writer.writeheader() for row in csv_reader: row['WEEKDAY'] = get_weekday(row['OB_DATE']) row['RECWEEKDAY'] = get_weekday(row['METO_STMP_TIME']) csv_writer.writerow(row) </code></pre> <p>My script runs fine and gives the correct result but it fails where the <code>Date</code> values are missing from <strong>OB_DATE</strong> column and <strong>METO_STMP_TIME</strong> column.</p> <p>How do I change the existing code, so that for a blank <code>Date</code> value the corresponding <code>Weekday</code> value is also blank?</p>
<p>Just catch the exception that is thrown when the date/time string is missing or invalid and then set the value to an empty string.</p> <pre><code>try: row['WEEKDAY'] = get_weekday(row['OB_DATE']) except ValueError: row['WEEKDAY'] = '' </code></pre>
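<p>Wired into the loop from the question (reusing its <code>dt</code> alias), that could look like this; it simply leaves the weekday blank for missing or malformed dates:</p> <pre><code>def get_weekday_safe(date_string):
    try:
        # note the leading space: the CSV values carry one
        return dt.strptime(date_string, ' %Y-%m-%d %H:%M').strftime('%A')
    except (ValueError, TypeError):
        return ''  # blank weekday for blank/invalid dates

for row in csv_reader:
    row['WEEKDAY'] = get_weekday_safe(row['OB_DATE'])
    row['RECWEEKDAY'] = get_weekday_safe(row['METO_STMP_TIME'])
    csv_writer.writerow(row)
</code></pre>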
python|csv|python-2.x
2
1,903,044
15,328,474
System call to shut down or 'power cycle' ethernet card on windows?
<p>I am trying to write a python script that makes a series of system calls in order to rapidly change IP addresses. Part of the string of events that must happen is going into network connections in the control panel and clicking the local area connection off and then clicking it back on. It seems that there must be a system call that would accomplish this task. What is it? Or might there be a python specific command?</p>
<pre><code>import subprocess
subprocess.call(['ipconfig', '/renew'])
</code></pre> <p>Using <a href="http://docs.python.org/2/library/subprocess.html" rel="nofollow"><code>subprocess</code></a> (Python) and <a href="http://compnetworking.about.com/od/workingwithipaddresses/a/ipconfig.htm" rel="nofollow"><code>ipconfig</code></a> (Windows). Though you'll only get a different IP address if your DHCP server chooses to dish one out to you; if in doubt, contact your system administrator.</p>
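<p>If you literally need to toggle the adapter off and on (rather than just renew the lease), Windows' <code>netsh</code> can do that. A sketch, assuming the connection is named "Local Area Connection" and the script runs with administrator rights:</p> <pre><code>import subprocess
import time

iface = "Local Area Connection"   # adjust to your adapter's name

subprocess.call(['netsh', 'interface', 'set', 'interface', iface, 'admin=disabled'])
time.sleep(2)   # give the NIC a moment before re-enabling
subprocess.call(['netsh', 'interface', 'set', 'interface', iface, 'admin=enabled'])
</code></pre>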
python|windows
0
1,903,045
29,739,489
get child by id in kivy and add new label to it
<p>I am very new to python and kivy. I have recently been working on a kivy server/client app that is based on the code from this site: <a href="http://kivy.org/docs/guide/other-frameworks.html" rel="nofollow">http://kivy.org/docs/guide/other-frameworks.html</a></p> <p>My goal is to create a server app that can receive messages from the client app and will then transform each message from the client app into one label that can be touched/moved/scaled individually in a scatter widget. (i.e. if you have sent 10 different messages from the client app, you should be able to see 10 labels on the server screen that you can manipulate)</p> <p>However, with my limited knowledge of kivy and python, instead of adding new widgets I can only achieve updating one widget. I tried to use a for loop to add new widgets, but unfortunately I got stuck.</p> <p>Here is the version that works, as it is only updating the label:</p> <pre><code>class ServerApp(App):

    def build(self):
        self.layout = BoxLayout(orientation='vertical', spacing=10)
        self.label = Button(text='Censoring process begin\nBeware of keyword "umbrella"\n ',
                            color=[1.0,1.0,1.0,1.0])
        self.label.color = [0.9,0.2,0.2,1.0]
        self.upperscroll = Button(pos_hint={'x': 0, 'center_y': .5},
                                  size_hint=(None, None))
        self.scatter = Scatter()
        self.displaybox = Label()
        self.displaybox.color = [0.4,0.9,0.4,1.0]

        reactor.listenTCP(8800, EchoFactory(self))
        reactor.listenTCP(8880, MultiEchoFactory(self))

        self.layout.add_widget(self.label)
        self.layout.add_widget(self.scatter)
        self.scatter.add_widget(self.displaybox)
        return self.layout

    def handle_message(self, msg):
        if any(word in msg.lower() for word in wordlist):
            self.displaybox.color = [0.9,0.4,0.4,1.0]
            self.displaybox.text = "content blocked"
            self.label.text += "Alert! Sender posts %s \n" %msg
        else:
            self.label.text += "Safe - sender posts %s \n" %msg
            self.displaybox.color = [0.4,0.9,0.4,1.0]
            self.displaybox.text = "%s" % msg

        msg = msg
        return msg
</code></pre> <p>This is the version that does not work, where it is trying to add new child widgets:</p> <pre><code>class ServerApp(App):

    def build(self):
        i = 0
        self.layout = BoxLayout(orientation='vertical', spacing=10)
        self.label = Button(text='Censoring process begin\nBeware of keyword "umbrella"\n ',
                            color=[1.0,1.0,1.0,1.0])
        self.label.color = [0.9,0.2,0.2,1.0]
        self.upperscroll = Button(pos_hint={'x': 0, 'center_y': .5},
                                  size_hint=(None, None))
        self.scatter = Scatter(id="scatter" + str(i))
        self.displaybox = Label(id='displaybox' + str(i))
        self.displaybox.color = [0.4,0.9,0.4,1.0]

        reactor.listenTCP(8800, EchoFactory(self))
        reactor.listenTCP(8880, MultiEchoFactory(self))

        self.layout.add_widget(self.label)
        self.layout.add_widget(self.scatter)
        self.scatter.add_widget(self.displaybox)
        return self.layout

    def handle_message(self, msg):
        for i in range(100):
            if any(word in msg.lower() for word in wordlist):
                self.layout.add_widget(self.scatter+str(i)(pos=(random(350),random(400))))
                self.scatter+str(i).add_widget(self.displaybox+str(i))
                self.displaybox+i.color = [0.9,0.4,0.4,1.0]
                self.displaybox+i.text = "content blocked"
                # this is where the error occurs, as python cannot identify
                # the new label by adding "i"
                self.label.text += "Alert! Sender posts %s \n" %msg
            else:
                self.label.text += "Safe - sender posts %s \n" %msg
                self.scatter+i.add_widget(self.displaybox+i)
                self.displaybox+i.color = [0.4,0.9,0.4,1.0]
                self.displaybox+i.text = "%s" % msg
                i+=1

        msg = msg
        return msg
</code></pre> <p>I wonder how I could fix this problem and add multiple scatter widgets with various labels once the messages are sent from the client app?</p> <p>Thank you so much</p>
<p>To access a widget by id (provided you define the ids in your kv language code), use <strong>ids</strong>, like this:</p> <pre><code>...
scatter_id = 'scatter' + str(i)                  # form the id as a string
scatter_widget = getattr(self.ids, scatter_id)   # use getattr to access it
displaybox_id = 'displaybox' + str(i)
displaybox_widget = getattr(self.ids, displaybox_id)
scatter_widget.add_widget(displaybox_widget)
...
</code></pre> <p>Alternatively:</p> <pre><code>self.ids['scatter' + str(i)].add_widget(self.ids['displaybox' + str(i)])
...
</code></pre> <p>Both forms are basically the same; it's more about readability and coding style.</p> <p>You can read more about <strong>Widget.ids</strong> <a href="http://kivy.org/docs/api-kivy.uix.widget.html?highlight=ids#kivy.uix.widget.Widget.ids" rel="nofollow">here</a>.</p> <p>Hope this helps.</p>
python|kivy
1
1,903,046
46,420,709
How to mock AWS S3 with aiobotocore
<p>I have a project that uses aiohttp and aiobotocore to work with resources in AWS. I am trying to test class that works with AWS S3 and I am using moto to mock AWS. Mocking works just fine with examples that use synchronous code (example from moto docs)</p> <pre><code>import boto3 from moto import mock_s3 class MyModel(object): def __init__(self, name, value): self.name = name self.value = value def save(self): s3 = boto3.client('s3', region_name='us-east-1') s3.put_object(Bucket='mybucket', Key=self.name, Body=self.value) def test_my_model_save(): with mock_s3(): conn = boto3.resource('s3', region_name='us-east-1') conn.create_bucket(Bucket='mybucket') model_instance = MyModel('steve', 'is awesome') model_instance.save() body = conn.Object('mybucket', 'steve').get()['Body'].read().decode("utf-8") assert body == 'is awesome' </code></pre> <p>However, after rewriting this to use aiobotocore mocking does not work - it connects to real AWS S3 in my example.</p> <pre><code>import aiobotocore import asyncio import boto3 from moto import mock_s3 class MyModel(object): def __init__(self, name, value): self.name = name self.value = value async def save(self, loop): session = aiobotocore.get_session(loop=loop) s3 = session.create_client('s3', region_name='us-east-1') await s3.put_object(Bucket='mybucket', Key=self.name, Body=self.value) def test_my_model_save(): with mock_s3(): conn = boto3.resource('s3', region_name='us-east-1') conn.create_bucket(Bucket='mybucket') loop = asyncio.get_event_loop() model_instance = MyModel('steve', 'is awesome') loop.run_until_complete(model_instance.save(loop=loop)) body = conn.Object('mybucket', 'steve').get()['Body'].read().decode("utf-8") assert body == 'is awesome' </code></pre> <p>So my assumption here is that moto does not work properly with aiobotocore. How can I effectively mock AWS resources if my source code looks like in the second example?</p>
<p>Mocks from <code>moto</code> don't work here because they patch the synchronous API. However, you can start a standalone <code>moto</code> server and configure <code>aiobotocore</code> to connect to that test server. <a href="https://github.com/aio-libs/aiobotocore/blob/master/tests/moto_server.py" rel="noreferrer">Take a look at the aiobotocore tests</a> for inspiration.</p>
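<p>As a rough sketch of that approach — assuming the standalone server is installed (<code>pip install moto[server]</code>) and already running, e.g. <code>moto_server s3 -p 5000</code> — you point the client at it via <code>endpoint_url</code>; the port and dummy credentials here are illustrative only:</p> <pre><code>import aiobotocore

async def save(loop):
    session = aiobotocore.get_session(loop=loop)
    s3 = session.create_client(
        's3', region_name='us-east-1',
        endpoint_url='http://localhost:5000',  # the moto test server
        aws_access_key_id='dummy',
        aws_secret_access_key='dummy')
    # assumes 'mybucket' was created against the same endpoint beforehand
    await s3.put_object(Bucket='mybucket', Key='steve', Body='is awesome')
</code></pre>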
python|python-3.x|python-asyncio|aiohttp|moto
11
1,903,047
49,711,914
Django/Python: How to change filename when saving file using models.FileField?
<p>I found this example to upload a file using FileField and it works great.</p> <p><a href="https://simpleisbetterthancomplex.com/tutorial/2016/08/01/how-to-upload-files-with-django.html" rel="nofollow noreferrer">https://simpleisbetterthancomplex.com/tutorial/2016/08/01/how-to-upload-files-with-django.html</a></p> <p>Problem is that it saves the original filename of the file being uploaded. I don't want that. I can change the filename within models.py by overriding the save function (see below). For the life of me, I cannot figure out how to pass a filename in when I execute form.save() from views.py. I need to know the filename for another process. I thought about even returning a filename from the models.py save function. I'm a bit of a noob so forgive any missing details. I've searched this site and read loads of documentation, but I'm missing something. Any advice would be appreciated.</p> <p>Forms.py</p> <pre><code>class DocumentForm(forms.ModelForm): message = forms.CharField(widget=forms.Textarea(attrs={'rows': 5, 'cols': 50})) class Meta: model = Document fields = ('description', 'document', ) </code></pre> <p>Models.py</p> <pre><code>class Document(models.Model): description = models.CharField(max_length=255, blank=True) document = models.FileField(upload_to='atlasapp/documents/') uploaded_at = models.DateTimeField(auto_now_add=True) def save(self, *args, **kwargs): randomNum = random.randint(10000,90000) new_name = str(randomNum) + ".txt" self.document.name = new_name super(Document, self).save(*args, **kwargs) </code></pre> <p>Views.py</p> <pre><code>def model_form_upload(request): if request.method == 'POST': form = DocumentForm(request.POST, request.FILES) if form.is_valid(): form.save() return redirect('model_form_upload') else: form = DocumentForm() return render(request, 'model_form_upload.html', {'form': form}) </code></pre>
<p>Could you perhaps call <code>save()</code> on the form with <code>commit=False</code>, set the name on the <code>Document</code>'s file field, and then save the <code>Document</code>? For example:</p> <pre><code>def model_form_upload(request):
    if request.method == 'POST':
        form = DocumentForm(request.POST, request.FILES)
        if form.is_valid():
            document = form.save(commit=False)
            # rename the file held by the FileField before it is written
            document.document.name = 'some_new_name'
            document.save()
            return redirect('model_form_upload')
    else:
        form = DocumentForm()
    return render(request, 'model_form_upload.html', {'form': form})
</code></pre>
django|python-2.7
4
1,903,048
49,532,342
LLDB: Python callback on breakpoint with SBTarget.EvaluateExpression
<p>I'm trying to execute a Python callback when a certain function is called. It works if the function is called by running the process, but it fails when I call the function with <code>SBTarget.EvaluateExpression</code></p> <p>Here's my C code: </p> <pre><code>#include &lt;stdio.h&gt; int foo(void) { printf("foo() called\n"); return 42; } int main(int argc, char **argv) { foo(); return 0; } </code></pre> <p>And here's my Python script: </p> <pre><code>import lldb import os def breakpoint_cb(frame, bpno, err): print('breakpoint callback') return False debugger = lldb.SBDebugger.Create() debugger.SetAsync(False) target = debugger.CreateTargetWithFileAndArch('foo', 'x86_64-pc-linux') assert target # Break at main and start the process. main_bp = target.BreakpointCreateByName('main') process = target.LaunchSimple(None, None, os.getcwd()) assert process.state == lldb.eStateStopped foo_bp = target.BreakpointCreateByName('foo') foo_bp.SetScriptCallbackFunction('breakpoint_cb') # Callback is executed if foo() is called from the program #process.Continue() # This causes an error and the callback is never called. opt = lldb.SBExpressionOptions() opt.SetIgnoreBreakpoints(False) v = target.EvaluateExpression('foo()', opt) err = v.GetError() if err.fail: print(err.GetCString()) else: print(v.value) </code></pre> <p>I get the following error:</p> <pre><code>error: Execution was interrupted, reason: breakpoint 2.1. The process has been left at the point where it was interrupted, use "thread return -x" to return to the state before expression evaluation </code></pre> <p>I get the same error when the breakpoint has no callback, so it's really the breakpoint that is causing problems, not the callback. The expression is evaluated when <code>opt.SetIgnoreBreakpoints(True)</code> set, but that doesn't help in my case.</p> <p>Is this something that can be fixed or is it a bug or missing feature?</p> <p>Operating system is Arch Linux, LLDB version is 6.0.0 from the repository.</p>
<p>The IgnoreBreakpoints setting doesn't mean you don't hit breakpoints while running. For instance, you will notice that the breakpoint hit count gets updated either way. Rather it means:</p> <p>True: if we hit a breakpoint we will auto-resume</p> <p>False: if we hit a breakpoint we will stop regardless</p> <p>The False behaviour is intended for calling a function when you want to stop in it, or in some function it calls, for the purposes of debugging that function. So overriding the breakpoint conditions and commands is the right thing to do there.</p> <p>For your purposes, I think you want IgnoreBreakpoints to be True, since you also want the expression evaluation to succeed.</p> <p>OTOH, if I understand your intent, the thing that's causing you a problem is that when IgnoreBreakpoints is false, lldb doesn't call the breakpoint's commands. It should only skip that bit of work when we are forcing the stop.</p>
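<p>In terms of the script from the question, the change would just be the following (whether the Python breakpoint callback fires during expression evaluation may still depend on your lldb version):</p> <pre><code>opt = lldb.SBExpressionOptions()
opt.SetIgnoreBreakpoints(True)   # stop at the breakpoint, then auto-resume
v = target.EvaluateExpression('foo()', opt)
</code></pre>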
python|lldb
0
1,903,049
49,480,148
Best way to apply a function to a slice of a 3d numpy array
<p>Suppose I have something like <code>myArray.shape == (100, 80, 2)</code></p> <p>I want to do something like this: <code>numpy.apply_along_axis(function, 0, myArray)</code> where <code>function</code> uses both items on the <code>axis=2</code> axis of myArray, but I know <code>numpy.apply_along_axis</code> only works for 1D slices.</p> <p>My question is: Is there a generic way to go about acting a function to 2D slices without having to use a loop or does it depend on how I have <code>function</code> defined? And if so, what would be the most efficient way of doing this? </p> <p>Is it possible to use <code>numpy.apply_along_axis</code> to act on one 1D slice and <code>zip</code> each element in the other slice to each element in the first slice somehow? Would it help to restructure <code>myArray</code>? </p> <p>Note: This <a href="https://stackoverflow.com/questions/23470582/efficient-way-to-apply-function-to-each-2d-slice-of-3d-numpy-array">question</a> did not answer my question, so please don't mark as duplicate.</p>
<p>Define a simple function that takes a 2d array and returns a scalar:</p> <pre><code>In [54]: def foo(x):
    ...:     assert(x.ndim == 2)
    ...:     return x.mean()
    ...:

In [55]: X = np.arange(24).reshape(2,3,4)
</code></pre> <p>It's not entirely clear how you want to iterate on the 3d array, but let's assume it's on the first axis. The straightforward list comprehension approach is:</p> <pre><code>In [56]: [foo(x) for x in X]
Out[56]: [5.5, 17.5]
</code></pre> <p><code>vectorize</code> normally feeds scalars to the function, but the newer versions have a <code>signature</code> parameter that allows us to use it as:</p> <pre><code>In [58]: f = np.vectorize(foo, signature='(n,m)-&gt;()')
In [59]: f(X)
Out[59]: array([ 5.5, 17.5])
</code></pre> <p>The original <code>vectorize</code> does not promise any speed up, and the signature version is even a bit slower.</p> <p><code>apply_along_axis</code> just hides the iteration. Even though it only operates on 1d arrays, we can use it with a bit of reshaping:</p> <pre><code>In [62]: np.apply_along_axis(lambda x: foo(x.reshape(3,4)), 1, X.reshape(2,-1))
Out[62]: array([ 5.5, 17.5])
</code></pre> <p>As long as you are only iterating over one axis, the list comprehension approach is both fastest and easiest.</p>
python|arrays|numpy
6
1,903,050
49,615,290
os.system opens just the code
<p>I am trying to run a python program inside of another python program, and those two programs run in a thread. Now, I don't know why, but when I try these two lines on my PC it opens the program and runs it; however, on my laptop it just opens a weird window with the code itself and does not run the code.</p> <pre><code>import os
os.system("theName.py")
</code></pre> <p>Any ideas?</p>
<p>Using <a href="https://docs.python.org/3/library/os.html#os.system" rel="nofollow noreferrer"><code>os.system</code></a> on a <code>.py</code> file does the same thing as executing the file directly at the command line. Depending on your platform and your settings, and whether the file has the exec bit set, and whether it starts with a proper shebang line, that could do any of the following:</p> <ul> <li>Run the script.</li> <li>Open the script in whatever default editor is set for <code>.py</code> files.</li> <li>Try to run the script with the wrong Python version.</li> <li>Try to run your script as if it were shell code instead of Python code, which fails with a syntax error unless you’re very unlucky.</li> <li>Fail with an error about not knowing how to execute this kind of file.</li> <li>Fail with an error about the file not being executable.</li> </ul> <p>You’re probably getting the second one on your laptop—but any of them are possible, and only one of them is what you actually want.</p> <p>As the docs for <code>os.system</code> say, you almost always want to use the <a href="https://docs.python.org/3/library/subprocess.html" rel="nofollow noreferrer"><code>subprocess</code></a> module instead of <code>os.system</code>. In this case, what you probably want is something like:</p> <pre><code>subprocess.run([sys.executable, 'script.py'], check=True) </code></pre> <p>That means to run <code>script.py</code> <a href="https://docs.python.org/3/library/sys.html#sys.executable" rel="nofollow noreferrer">using the same Python interpreter being used to run the current script</a>, let input and output pass through (just like <code>system</code> does), and check and raise an exception if it exits with an exception or other failure instead of ignoring the error. That may not be exactly what you want; in that case, read the <code>subprocess</code> docs (including <a href="https://docs.python.org/3/library/subprocess.html#replacing-older-functions-with-the-subprocess-module" rel="nofollow noreferrer">the recipes for replacing older functions</a>) for how to do what you want instead.</p>
python|operating-system
2
1,903,051
49,580,317
Why does this Regexp take 99.89% fewer steps using pcre rather than Python?
<p>I just built this expression within the regex101 editor but accidentally forgot to switch it to the Python flavour syntax. I'm not familiar with the differences, but figured they would be fairly minor. They are not.</p> <p><code>Perl/pcre</code> takes 99.89% fewer steps than <code>Python</code> (6,565 vs 6,377,715 steps)</p> <p><a href="https://regex101.com/r/PRwtJY/3" rel="nofollow noreferrer">https://regex101.com/r/PRwtJY/3</a></p> <p><strong>Regexp:</strong></p> <pre><code>^(\d{1,3}) +((?:[a-zA-Z0-9\(\)\-≠,]+ )+) +£ *((?:[\d] {1,4}|\d)+)∑([ \d]+)?
</code></pre> <p>Any help would be appreciated! Thanks.</p> <h2><strong>EDIT</strong></h2> <p>The data source is a multi-line txt extracted from a PDF, resulting in a less than perfect output (you can see the <a href="https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/666841/CT600-2017.pdf" rel="nofollow noreferrer">base source PDF here</a>)</p> <p>I'm trying to extract the box numbers, title, and any number that is present (filled in) for particular lines. If you check the link above you can see the full sample. <em>For example:</em></p> <p>Below is a screenshot of Regex101 showing positive matches. The topmost line match shows the box number (155), the title (Trading profits), and the number (5561). <a href="https://i.stack.imgur.com/wZvtN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wZvtN.png" alt="Example syntax"></a></p> <p><strong>Restrictions:</strong></p> <ul> <li>Ideally extract the values as you see them in the <a href="https://regex101.com/r/PRwtJY/3" rel="nofollow noreferrer">PCRE compiler</a> - with little or no extra whitespace before or after the match - just the box number, title, and value.</li> <li>Only match if there is a number/value filled in (e.g. 5561 in the above example, hence not matching the line immediately after it - box 160, but matching box 165).</li> <li>The format changes lower down the form, and I have a separate regex for that, so ignore it.</li> </ul>
<p>Proposal: use the newer <a href="https://pypi.python.org/pypi/regex/" rel="nofollow noreferrer"><strong><code>regex</code> module</strong></a> which supports atomic groups and possessive quantifiers. This cuts the steps needed by about 50% compared to your <em>initial</em> <code>PCRE</code> expression (see <a href="https://regex101.com/r/PRwtJY/6" rel="nofollow noreferrer"><strong>a demo on regex101.com</strong></a>):</p> <pre><code>^
(\d{1,3})\s++
((?&gt;[^£\n]+))£\s++
([ \d]+)(?&gt;[^∑\n]+)∑\s++
([ \d]+)
</code></pre> <hr> <p>To get this working, you could do:</p> <pre><code>import regex as re

rx = re.compile(r'''
    ^
    (\d{1,3})\s++
    ((?&gt;[^£\n]+))£\s++
    ([ \d]+)(?&gt;[^∑\n]+)∑\s++
    ([ \d]+)''', re.M | re.X)

matches = [[group.strip() for group in m.groups()] for m in rx.finditer(data)]
print(matches)
</code></pre> <p>Which yields for the excerpt given:</p> <pre><code>[['145', 'Total turnover from trade', '5 2 0 0 0', '0 0'],
 ['155', 'Trading profits', '5 5 6 1', '0 0'],
 ['165', 'Net trading profits ≠ box 155 minus box 160', '5 5 6 1', '0 0'],
 ['235', 'P rofits before other deductions and reliefs ≠ net sum of', '5 5 6 1', '0 0'],
 ['300', 'Profits before qualifying donations and group relief ≠', '5 5 6 1', '0 0'],
 ['315', 'Profits chargeable to Corporation Tax ≠', '5 5 6 1', '0 0'],
 ['475', 'Net Corporation Tax liability ≠ box 440 minus box 470', '1 0 5 6', '5 9'],
 ['510', 'Tax chargeable ≠ total of boxes 475, 480, 500 and 505', '1 0 5 6', '5 9'],
 ['525', 'Self-assessment of tax payable ≠ box 510 minus box 515', '1 0 5 6', '5 9'],
 ['600', 'Tax outstanding ≠', '1 0 5 6', '5 9']]
</code></pre>
python|regex|pcre
1
1,903,052
70,338,253
calculate sum of rows in pandas dataframe grouped by date
<p>I have a csv that I loaded into a Pandas Dataframe.</p> <p>I then select only the rows with duplicate dates in the DF:</p> <pre><code>df_dups = df[df.duplicated(['Date'])].copy() </code></pre> <p>I'm trying to get the sum of all the rows with the exact same date for 4 columns (all float values), like this:</p> <pre><code>df_sum = df_dups.groupby('Date')[&quot;Received Quantity&quot;,&quot;Sent Quantity&quot;,&quot;Fee Amount&quot;,&quot;Market Value&quot;].sum() </code></pre> <p>However, this does not give the desired result. When I examine df_sum.groups, I've noticed that it did not include the first date in the indices. So for two items with the same date, there would only be one index in the groups object.</p> <pre><code>pprint(df_dups.groupby('Date')[&quot;Received Quantity&quot;,&quot;Sent Quantity&quot;,&quot;Fee Amount&quot;,&quot;Market Value&quot;].groups) </code></pre> <p>I have no idea how to get the sum of all duplicates.</p> <p>I've also tried:</p> <pre><code>df_sum = df_dups.groupby('Date')[&quot;Received Quantity&quot;,&quot;Sent Quantity&quot;,&quot;Fee Amount&quot;,&quot;Market Value&quot;].apply(lambda x : x.sum()) </code></pre> <p>This gives the same result, which makes sense I guess, as the indices in the groupby object are not complete. What am I missing here?</p>
<p>Check the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.duplicated.html" rel="nofollow noreferrer">documentation</a> for the method <code>duplicated</code>. By default duplicates are marked with <code>True</code> except for the first occurence, which is why the first date is not included in your sums.</p> <p>You only need to pass in <code>keep=False</code> in <code>duplicated</code> for your desired behaviour.</p> <pre><code>df_dups = df[df.duplicated(['Date'], keep=False)].copy() </code></pre> <p>After that the sum can be calculated properly with the expression you wrote</p> <pre><code>df_sum = df_dups.groupby('Date')[&quot;Received Quantity&quot;,&quot;Sent Quantity&quot;,&quot;Fee Amount&quot;,&quot;Market Value&quot;].apply(lambda x : x.sum()) </code></pre>
python|pandas
1
1,903,053
70,251,790
How to improve the readablity of this graph of multiple series with matplotlib?
<p>If I have several curves on either side of the x-axis (like the green and orange curve in my case) what would be the best way to improve the display of this graph, for a better reading?</p> <p>I was thinking for example by integrating a zoomed part on the curves between 0 and 0.15s on the x-axis.</p> <p>Also each value of the curves correspond to a number, represented by a different marker (square, triangle, circle..) on the curves. Is there a better way to represent these curves and display these markers? In a slightly cleaner and more scientific way.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt A = [0.807, 0.633, 0.416, 0.274, 0.188] time_A = [0.0990, 0.1021, 0.1097, 0.1109, 0.1321] B = [0.764, 0.753, 0.716, 0.576, 0.516] time_B = [0.1727, 0.1742, 0.1772, 0.1869, 0.1765] C = [0.729, 0.719, 0.674, 0.631, 0.616] time_C = [0.5295, 0.5368, 0.5431, 0.5391, 0.5443] E = [0.709, 0.605, 0.390, 0.259, 0.155] time_E = [0.0829, 0.0929, 0.0910, 0.0950, 0.0972] D = [0.703, 0.541, 0.174, 0.062, 0.020] time_D = [0.0740, 0.0792, 0.0819, 0.0837, 0.0858] F = [0.748, 0.566, 0.366, 0.198, 0.168] time_F = [0.0885, 0.0936, 0.09621, 0.0974, 0.0999] markers = [&quot;s&quot;, &quot;^&quot;, &quot;o&quot;, 'p', '*'] plt.plot(time_A, A, c='tab:blue', label='A') plt.plot(time_B, B, c='tab:red', label='B') plt.plot(time_C, C, c='tab:orange', label='C') plt.plot(time_D, D, c='tab:green', label='D') plt.plot(time_E, E, c='yellow', label='E') plt.plot(time_F, F, c='tab:cyan', label='F') for i in range(5): plt.plot(time_A[i], A[i], c='tab:blue', marker=markers[i], markersize=7) plt.plot(time_B[i], B[i], c='tab:red', marker=markers[i], markersize=7) plt.plot(time_C[i], C[i], c='tab:orange', marker=markers[i], markersize=7) plt.plot(time_D[i], D[i], c='tab:green', marker=markers[i], markersize=7) plt.plot(time_E[i], E[i], c='yellow', marker=markers[i], markersize=7) plt.plot(time_F[i], F[i], c='tab:cyan', marker=markers[i], markersize=7) textstr = '\n'.join(( f'\u25A0 1', f'\u25B2 2', f'\u25CF 3', f'\u2B1F 4', f'\u2605 5')) plt.text(0.4, 0.5, textstr, verticalalignment='top', fontsize = 'small') plt.legend(fontsize = 'small') plt.xlabel('time (s)') plt.ylabel('score') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/j1zT2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j1zT2.png" alt="enter image description here" /></a></p> <hr /> <p>Below is the result with the broken axis between 0.2 and 0.5 according to the comments. What is the correct way to integrate markers into curves with matplotlib?</p> <p><a href="https://i.stack.imgur.com/sdMNm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sdMNm.png" alt="enter image description here" /></a></p>
<p>Here are some ideas:</p> <ul> <li>use a dummy line to add labels for the markers; use two columns for the legend</li> <li>set a log scale on the x-axis, but with regular tick labels</li> <li>connect the markers of the same style with a fine line (order the points left to right for the line not to cross itself)</li> <li>use the color 'gold' instead of 'yellow' to make it better visible</li> <li>write everything as much as possible using loops</li> </ul> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt from matplotlib.ticker import NullFormatter, ScalarFormatter, FixedLocator import numpy as np A = [0.807, 0.633, 0.416, 0.274, 0.188] time_A = [0.0990, 0.1021, 0.1097, 0.1109, 0.1321] B = [0.764, 0.753, 0.716, 0.576, 0.516] time_B = [0.1727, 0.1742, 0.1772, 0.1869, 0.1765] C = [0.729, 0.719, 0.674, 0.631, 0.616] time_C = [0.5295, 0.5368, 0.5431, 0.5391, 0.5443] E = [0.709, 0.605, 0.390, 0.259, 0.155] time_E = [0.0829, 0.0929, 0.0910, 0.0950, 0.0972] D = [0.703, 0.541, 0.174, 0.062, 0.020] time_D = [0.0740, 0.0792, 0.0819, 0.0837, 0.0858] F = [0.748, 0.566, 0.366, 0.198, 0.168] time_F = [0.0885, 0.0936, 0.09621, 0.0974, 0.0999] names = ['A', 'B', 'C', 'D', 'E', 'F'] times = [time_A, time_B, time_C, time_D, time_E, time_F] scores = [A, B, C, D, E, F] markers = [&quot;s&quot;, &quot;^&quot;, &quot;o&quot;, 'p', '*'] colors = ['tab:blue', 'tab:red', 'tab:orange', 'tab:green', 'gold', 'tab:cyan'] fig, ax = plt.subplots(figsize=(12, 5)) for time, score, name, color in zip(times, scores, names, colors): ax.plot(time, score, c=color, label=name) for i in range(len(scores[0])): ax.plot([], [], color='black', ls='', marker=markers[i], markersize=7, label=i + 1) for time, score, name, color in zip(times, scores, names, colors): ax.plot(time[i], score[i], color=color, marker=markers[i], markersize=7) time_i = np.array([time[i] for time in times]) score_i = np.array([score[i] for score in scores]) order = np.argsort(time_i) ax.plot(time_i[order], score_i[order], color='grey', linestyle=':', linewidth=0.5, zorder=0) ax.legend(fontsize='small', ncol=2) ax.set_xscale('log') xmin, xmax = ax.get_xlim() ax.set_xticks(np.arange(0.1, round(xmax, 1), 0.1)) ax.set_xticks(np.arange(round(xmin, 2), round(xmax, 1), 0.01), minor=True) ax.xaxis.set_major_formatter(ScalarFormatter()) ax.xaxis.set_minor_formatter(NullFormatter()) ax.set_xlabel('time (s)') ax.set_ylabel('score') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/O4pYr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O4pYr.png" alt="line plot with markers and updated legend" /></a></p> <p>If the values of the x-ticks are very important, the minor ticks could also get labels, for example:</p> <pre class="lang-py prettyprint-override"><code>minor_formatter = lambda x, pos: f'{x:.2f}' if (x &lt; .1) or (x &lt; .2 and round(100 * x) % 2 == 0) or ( x &gt; .2 and round(100 * x) % 10 == 5) else '' ax.xaxis.set_minor_formatter(minor_formatter) ax.tick_params(axis='x', which='minor', size=6, labelcolor='grey') ax.tick_params(axis='x', which='major', size=12) </code></pre> <p><a href="https://i.stack.imgur.com/m3DXR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m3DXR.png" alt="labeling some of the minor x ticks" /></a></p>
python|matplotlib
2
1,903,054
53,461,561
trying to concatenate two layers in keras with the same shape giving error in shapes matching
<p>I am trying to build a multi-input multi-output model using <a href="https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models" rel="nofollow noreferrer">keras functional api</a> and I am following their code but I got that error:</p> <blockquote> <p>ValueError: A <code>Concatenate</code> layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 50), (None, 50, 1)]</p> </blockquote> <p>I have skipped the Embedding layer, here is the code:</p> <pre><code>def build_model(self): main_input = Input(shape=(self.seq_len, 1), name='main_input') print(main_input.shape) # seq_len = 50 # A LSTM will transform the vector sequence into a single vector, # containing information about the entire sequence lstm_out = LSTM(self.seq_len,input_shape=(self.seq_len,1) )(main_input) self.auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out) auxiliary_input = Input(shape=(self.seq_len,1), name='aux_input') print(auxiliary_input.shape) x = concatenate([lstm_out, auxiliary_input]) # We stack a deep densely-connected network on top x = Dense(64, activation='relu')(x) x = Dense(64, activation='relu')(x) x = Dense(64, activation='relu')(x) # And finally we add the main logistic regression layer main_output = Dense(1, activation='sigmoid', name='main_output')(x) self.model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output]) print(self.model.summary()) self.model.compile(optimizer='rmsprop', loss='binary_crossentropy', loss_weights=[1., 0.2]) </code></pre> <p>I got that error in the concatenation step, although printing the shape of both layers are (?,50,1). I do not know exactly why I got this, and what is the exact error in the input_shape of the first layer and why it does not give me the same shape as it should be using <code>print(main_input.shape)</code>, and how to solve it ?</p> <blockquote> <p>UPDATE:</p> </blockquote> <p>I found a solution for the error by changing the shape of the second input layer</p> <pre><code>auxiliary_input = Input(shape=(self.seq_len,), name='aux_input') </code></pre> <p>so now they can concatenate smoothly, but still not clear to me why ?</p>
<p>For the second input, before your fix you specified</p> <pre><code>input_shape = (50,1)  # seq_length=50
</code></pre> <p>which means its final shape is:</p> <pre><code>(None, 50, 1)
</code></pre> <p>Now, when the first input passes through <code>LSTM</code>, since you didn't specify <code>return_sequences=True</code> it returns a tensor of shape <code>(batch_size, units)</code>, viz. <code>(None, 50)</code>, which you are concatenating with the above-mentioned <code>(None, 50, 1)</code>.</p> <p>Your error went away because you changed the shape of the second input to <code>(50,)</code>, so its final shape becomes <code>(None, 50)</code>, which matches the output of the <code>LSTM</code> and hence concatenates smoothly.</p>
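<p>If you wanted to keep the <code>(seq_len, 1)</code> auxiliary input instead, one sketch (untested against your exact Keras version) is to flatten it to <code>(None, 50)</code> before concatenating:</p> <pre><code>from keras.layers import Input, LSTM, Reshape, concatenate

seq_len = 50
main_input = Input(shape=(seq_len, 1), name='main_input')
lstm_out = LSTM(seq_len)(main_input)                           # -&gt; (None, 50)

auxiliary_input = Input(shape=(seq_len, 1), name='aux_input')  # -&gt; (None, 50, 1)
aux_flat = Reshape((seq_len,))(auxiliary_input)                # -&gt; (None, 50)

x = concatenate([lstm_out, aux_flat])                          # shapes now match
</code></pre>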
python|machine-learning|keras|neural-network|lstm
1
1,903,055
53,539,705
Python multiprocessing returns queue is empty when actually it is not
<p>In this program, after some iterations all the processes terminate which means that input_queue is empty as per the condition in target function. But after returning to the main function when I print the input_queue there are still items left in that queue, then why those multiple processes terminated at first place?</p> <pre><code>import cv2 import timeit import face_recognition import queue from multiprocessing import Process, Queue import multiprocessing import os s = timeit.default_timer() def alternative_process_target_func(input_queue, output_queue): while not input_queue.empty(): try: frame_no, small_frame, face_loc = input_queue.get(False) # or input_queue.get_nowait() print('Frame_no: ', frame_no, 'Process ID: ', os.getpid(), '----', multiprocessing.current_process()) except queue.Empty: print('___________________________________ Breaking __________________________________________________') break # stop when there is nothing more to read from the input def alternative_process(file_name): start = timeit.default_timer() cap = cv2.VideoCapture(file_name) frame_no = 1 fps = cap.get(cv2.CAP_PROP_FPS) length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) print('Frames Per Second: ', fps) print('Total Number of frames: ', length) print('Duration of file: ', int(length / fps)) processed_frames = 1 not_processed = 1 frames = [] process_this_frame = True frame_no = 1 Input_Queue = Queue() while (cap.isOpened()): ret, frame = cap.read() if not ret: print('Size of input Queue: ', Input_Queue.qsize()) print('Total no of frames read: ', frame_no) end1 = timeit.default_timer() print('Time taken to fetch useful frames: ', end1 - start) threadn = cv2.getNumberOfCPUs() Output_Queue = Queue(maxsize=Input_Queue.qsize()) process_list = [] #quit = multiprocessing.Event() #foundit = multiprocessing.Event() for x in range((threadn - 1)): # print('Process No : ', x) p = Process(target=alternative_process_target_func, args=(Input_Queue, Output_Queue))#, quit, foundit p.daemon = True #print('I am a new process with process id of: ', os.getpid()) p.start() process_list.append(p) #p.join() i = 1 for proc in process_list: print('I am hanged here and my process id is : ', os.getpid()) proc.join() print('I have been joined and my process id is : ', os.getpid()) i += 1 for value in range(Output_Queue.qsize()): print(Output_Queue.get()) end = timeit.default_timer() print('Time taken by face verification: ', end - start) print('--------------------------------------------------------------------------------------------------') #Here I am again printing the Input Queue which should be empty logically. for frame in range(Input_Queue.qsize()): frame_no, _, _ = Input_Queue.get() print(frame_no) break if process_this_frame: print(frame_no) small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25) rgb_small_frame = small_frame[:, :, ::-1] face_locations = face_recognition.face_locations(rgb_small_frame) # frames.append((rgb_small_frame, face_locations)) Input_Queue.put((frame_no, rgb_small_frame, face_locations)) frame_no += 1 if processed_frames &lt; 5: processed_frames += 1 not_processed = 1 else: if not_processed &lt; 15: process_this_frame = False not_processed += 1 else: processed_frames = 1 process_this_frame = True print('-----------------------------------------------------------------------------------------------') cap.release() cv2.destroyAllWindows() #chec_queues() #compare_images() #fps_finder() alternative_process('user_verification_2.avi')#'hassan_checking.avi' </code></pre>
<p>Your code contains <code>while not input_queue.empty()</code>. I suspect that at some point during the work <code>input_queue</code> momentarily becomes empty, the while loop stops, and only then do more items arrive in <code>input_queue</code> — but by that time the workers have already exited.</p> <p>Usually you work with queues like this:</p> <pre><code>while True:
    element = my_queue.get()
    ...
</code></pre> <p>To stop this loop you may count the number of processed elements, use the <code>timeout</code> argument, or kill the process under some condition. Another option is to use <code>multiprocessing.Pool</code> or <code>concurrent.futures.ProcessPoolExecutor</code>.</p>
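<p>A common pattern — sketched here with a hypothetical sentinel value — is to signal end-of-work explicitly instead of relying on <code>empty()</code>:</p> <pre><code>SENTINEL = None  # hypothetical end-of-work marker

def worker(input_queue, output_queue):
    while True:
        item = input_queue.get()        # blocks until something arrives
        if item is SENTINEL:            # producer says there is no more work
            break
        frame_no, frame, face_locations = item
        output_queue.put(frame_no)      # ...do the real processing here

# producer side: after putting all frames, add one sentinel per worker
# for _ in range(number_of_workers):
#     input_queue.put(SENTINEL)
</code></pre>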
python|parallel-processing|queue|multiprocessing
0
1,903,056
53,779,148
How to add multiple tasks to a main coroutine in asyncio3.7?
<p>I am trying to update my code from asyncio3.6 to asyncio3.7. One philosophical element of the transition is that it is strongly encouraged to use a single entry point in your program, in particular a single <code>asyncio.run(main())</code>.</p> <p>This program works:</p> <pre><code>import asyncio async def foo(): while True: await asyncio.sleep(1) print("hi") asyncio.run(foo()) </code></pre> <p>And this program works:</p> <pre><code>import asyncio async def foo(): while True: await asyncio.sleep(1) print("hi") async def main(): await foo() asyncio.run(main()) </code></pre> <p>But the following program exits with no error before anything is printed:</p> <pre><code>import asyncio async def foo(): while True: await asyncio.sleep(1) print("hi") async def bar(): while True: await asyncio.sleep(1) print("ho") async def main(): asyncio.create_task(foo()) asyncio.create_task(bar()) asyncio.run(main()) </code></pre> <p>Is the asyncio3.7 prescribed best practice to have main await a future which is, say, set when some error occurs?</p> <p>I know that adding some <code>await future</code> line to the bottom of main makes the third program "work", but I'm still not happy with the use of <code>create_task</code> for the infinite coroutines inside main; the whole point of having a single entry point is that you can catch all unhandled exceptions raised by your program at a single point. But when you have so-called "dangling" tasks like this it doesn't suppress the need to <code>set_exception_handler</code> on the loop.</p>
<p>In this case you simply need something like:</p> <pre><code>async def main(): # wait for both `foo()` and `bar()` to finish await asyncio.gather(foo(), bar()) </code></pre> <p>You can also use <code>asyncio.wait(return_when=asyncio.FIRST_COMPLETED)</code> to wait until <em>either</em> <code>foo()</code> or <code>bar()</code> finishes, but that requires additional care to actually retrieve their results in order to correctly propagate the exceptions.</p> <p>Awaiting an explicit future is more advanced usage, most appropriate when the decision to exit the program must be made inside a deeply nested callback.</p>
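<p>For completeness, a sketch of the <code>FIRST_COMPLETED</code> variant with that extra care taken:</p> <pre><code>async def main():
    tasks = [asyncio.create_task(foo()), asyncio.create_task(bar())]
    done, pending = await asyncio.wait(
        tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()       # stop whichever coroutine is still running
    for task in done:
        task.result()       # re-raises the task's exception, if there was one
</code></pre>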
python-asyncio
2
1,903,057
33,116,178
is it possible to re-direct stdout in a Bluemix python app?
<p>I've read that redirecting stdout to a local file in Bluemix, at least for a python app and maybe other apps, may not be supported.</p> <p>I've recently tried the following in my Procfile and it seems to be working:</p> <pre><code>web: python server.py 1&gt;server.out
</code></pre> <p>Maybe I'm somehow lucky to have good success, or maybe the documentation I read is no longer accurate.</p>
<p>As you probably know, Bluemix is built on Cloud Foundry, and there are two important considerations to think about:</p> <ul> <li><strong>Local file system storage is short-lived</strong>. When an application instance crashes or stops, the resources assigned to that instance are reclaimed by the platform including any local disk changes made since the app started. When the instance is restarted, the application will start with a new disk image. <em>Although your application can write local files while it is running, the files will disappear after the application restarts</em>.</li> <li><strong>Instances of the same application do not share a local file system</strong>. Each application instance runs in its own isolated container. Thus <em>if your application needs the data in the files to persist across application restarts, or the data needs to be shared across all running instances of the application, the local file system should not be used.</em></li> </ul> <p>For this reason local file system <strong><em>should not be used</em></strong>.</p> <p>If you want more information on this topic please take a look at <a href="https://docs.cloudfoundry.org/devguide/deploy-apps/prepare-to-deploy.html" rel="noreferrer">Considerations for Designing and Running an Application in the Cloud</a></p>
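<p>If the underlying goal was just to capture the app's output, the cloud-friendly alternative is to keep writing to stdout/stderr and let the platform's log aggregation collect it (e.g. <code>cf logs &lt;app&gt;</code> on Cloud Foundry). A minimal sketch:</p> <pre><code>import logging
import sys

# log to stdout instead of a local file; the platform captures the stream
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.info('this line ends up in the platform log stream, not on local disk')
</code></pre>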
python|ibm-cloud
5
1,903,058
33,478,795
How to test my scrapy method with unitest class
<p>I want to test some methods in my spider. For example, in my project I have this schema:</p> <pre><code>toto/
├── __init__.py
├── items.py
├── pipelines.py
├── settings.py
├── spiders
│   ├── __init__.py
│   └── mySpider.py
└── Unitest
    └── unitest.py
</code></pre> <p>My <code>unitest.py</code> looks like this:</p> <pre><code># -*- coding: utf-8 -*-
import re
import weakref
import six
import unittest
from scrapy.selector import Selector
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings
from unittest.case import TestCase

from toto.spiders import runSpider

class SelectorTestCase(unittest.TestCase):
    sscls = Selector

    def test_demo(self):
        print "test"

if __name__ == '__main__':
    unittest.main()
</code></pre> <p>and my <code>mySpider.py</code> looks like this:</p> <pre><code>import scrapy

class runSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['http://blog.scrapinghub.com']

    def parse(self, response):
        for url in response.css('ul li a::attr("href")').re(r'.*/\d\d\d\d/\d\d/$'):
            yield scrapy.Request(response.urljoin(url), self.parse_titles)

    def parse_titles(self, response):
        for post_title in response.css('div.entries &gt; ul &gt; li a::text').extract():
            yield {'title': post_title}
</code></pre> <p>In my unitest.py file, how can I call my spider? I tried to add <code>from toto.spiders import runSpider</code> in my unitest.py file, but it does not work... I've got this error:</p> <blockquote> <p>Traceback (most recent call last): File "unitest.py", line 10, in from toto.spiders import runSpider ImportError: No module named toto.spiders</p> </blockquote> <p>How can I fix it?</p>
<p>Try:</p> <pre><code>import sys
import os

# 2 folders back from the current file
sys.path.insert(0, os.path.join(os.path.dirname(os.path.realpath(__file__)), '../..'))

from spiders.mySpider import runSpider
</code></pre>
python|python-2.7|scrapy|scrapy-spider
1
1,903,059
13,165,479
how to deserialize a python printed dictionary?
<p>I have python's str dictionary representations in a database as varchars, and I want to retrieve the original python dictionaries.</p> <p>How can I get a dictionary again, based on the str representation of a dictionary?</p> <h2>Example</h2> <pre><code>&gt;&gt;&gt; dic = {u'key-a':u'val-a', "key-b":"val-b"}
&gt;&gt;&gt; dicstr = str(dic)
&gt;&gt;&gt; dicstr
"{'key-b': 'val-b', u'key-a': u'val-a'}"
</code></pre> <p>In the example, that would be turning dicstr back into a usable python dictionary.</p>
<p>Use <code>ast.literal_eval()</code>, and for such cases prefer <code>repr()</code> over <code>str()</code>, as <code>str()</code> doesn't guarantee that the string can be converted back to a useful object.</p> <pre><code>In [7]: import ast

In [10]: dic = {u'key-a':u'val-a', "key-b":"val-b"}

In [11]: strs = repr(dic)

In [12]: strs
Out[12]: "{'key-b': 'val-b', u'key-a': u'val-a'}"

In [13]: ast.literal_eval(strs)
Out[13]: {u'key-a': u'val-a', 'key-b': 'val-b'}
</code></pre>
python|dictionary|deserialization
12
1,903,060
38,221,181
No module named tensorflow in jupyter
<p>I have some imports in my jupyter notebook, and among them is tensorflow:</p> <pre><code>ImportError                               Traceback (most recent call last)
&lt;ipython-input-2-482704985f85&gt; in &lt;module&gt;()
      4 import numpy as np
      5 import six.moves.copyreg as copyreg
----&gt; 6 import tensorflow as tf
      7 from six.moves import cPickle as pickle
      8 from six.moves import range

ImportError: No module named tensorflow
</code></pre> <p>I have it on my computer, in a special environment, together with all the connected packages:</p> <pre><code>Requirement already satisfied (use --upgrade to upgrade): tensorflow in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): six&gt;=1.10.0 in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied (use --upgrade to upgrade): protobuf==3.0.0b2 in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied (use --upgrade to upgrade): numpy&gt;=1.10.1 in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied (use --upgrade to upgrade): wheel in /Users/mac/anaconda/envs/tensorflow/lib/python2.7/site-packages (from tensorflow)
Requirement already satisfied (use --upgrade to upgrade): setuptools in ./setuptools-23.0.0-py2.7.egg (from protobuf==3.0.0b2-&gt;tensorflow)
</code></pre> <p>I can import tensorflow on my computer:</p> <pre><code>&gt;&gt;&gt; import tensorflow as tf
&gt;&gt;&gt;
</code></pre> <p>So I'm confused why the situation is different in the notebook.</p>
<p>If you installed TensorFlow as described in the official documentation (<a href="https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#overview" rel="noreferrer">https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#overview</a>) — that is, you created an environment called <em>tensorflow</em> and tested your installation in python — but TensorFlow cannot be imported in jupyter, then you have to install jupyter in your tensorflow environment too:</p> <pre><code>conda install jupyter notebook
</code></pre> <p>After that, run jupyter and it will be able to import TensorFlow too:</p> <pre><code>jupyter notebook
</code></pre>
python|tensorflow|jupyter-notebook
74
1,903,061
39,947,395
Subset Sum : Why is DFS + pruning faster than 2 for loop?
<p>This issue is leetcode 416 Partition Equal Subset Sum.</p> <pre><code>#2 for loop, which got TLE class Solution(object): def canPartition(self, nums): """ :type nums: List[int] :rtype: bool """ nums.sort() allsum = sum(nums) if allsum % 2 == 1: return False subsets = {() : 0} temp = dict(subsets) for each in nums: for subset in subsets: new = temp[subset] + each if new * 2 == allsum: return True elif new * 2 &lt; allsum: temp[tuple(list(subset) + [each])] = new else: del temp[subset] subsets = dict(temp) return False DFS + pruning: class Solution(object): def canPartition(self, nums): """ :type nums: List[int] :rtype: bool """ nums.sort() if sum(nums) % 2 != 0: return False else: target = sum(nums) / 2 return self.path(nums, len(nums), target) def path(self, nums, length, target):#DFS + pruning if target == 0: return True elif target &lt; 0 or (target &gt; 0 and length == 0): return False if self.path(nums, length - 1, target - nums[length - 1]): return True return self.path(nums, length - 1, target) </code></pre> <p>Why is 2 for loop slower than DFS? They both have pruning, and I think the time complexity of DFS, which is a np problem, should be worse than 2 for loop, isn't it?</p>
<p>Just because you are using two loops doesn't mean that your algorithm is in <em>O(n<sup>2</sup>)</em> or polynomial. The complexity of the algorithm depends on how many times each loop executes. In this part of the code:</p> <pre><code>for each in nums:
    for subset in subsets:
        ....
</code></pre> <p>the first loop will run <em>n</em> times because the size of <code>nums</code> is <em>n</em> and it doesn't change. However, the size of <code>subsets</code> doubles after each iteration. So the body of your second <code>for</code> loop will execute <code>1 + 2 + 4 + 8 + 16 + 32 + ... + 2^n ≈ 2^(n+1)</code> times.</p> <p>So your algorithm is in <em>O(2<sup>n</sup>)</em> without even considering the costly operations (copying <code>list</code>s and <code>dict</code>s) you perform in the body of the second loop.</p> <p>Your second method (which is not technically a DFS) is equivalent to your first method in worst-case complexity: they are both <em>O(2<sup>n</sup>)</em>. But in the second method you do less extra work in comparison to your first method (no copying of <code>list</code>s and <code>dict</code>s, etc.). So your second method might run faster, but it doesn't matter in the long run — neither method is efficient enough for larger inputs.</p> <p>Note that this is a very famous problem called <a href="https://en.wikipedia.org/wiki/Subset_sum_problem" rel="nofollow">Subset-sum</a> which is NP-Complete. It can be solved using <a href="http://www.geeksforgeeks.org/dynamic-programming-subset-sum-problem/" rel="nofollow">Dynamic Programming</a> in <a href="https://en.wikipedia.org/wiki/Pseudo-polynomial_time" rel="nofollow">Pseudo-polynomial time</a>.</p>
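<p>For reference, a sketch of that pseudo-polynomial DP for the partition variant — O(n * target) time instead of exponential:</p> <pre><code>def can_partition(nums):
    total = sum(nums)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}                     # subset sums we can build so far
    for n in nums:
        reachable |= {s + n for s in reachable if s + n &lt;= target}
        if target in reachable:
            return True
    return False
</code></pre>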
python|algorithm|loops|recursion|time-complexity
0
1,903,062
40,061,085
Python Flask: Go from Swagger YAML to Google App Engine?
<p>I have used the Swagger Editor to create a REST API and I have requested the server code download for Python Flask. I'm trying to deploy this out to Google Cloud Platform (I think that's the latest name? Or is it still GAE?) but I need to fill in some gaps.</p> <p>I know the Swagger code works because I have deployed it locally without any issues. However, it uses the connexion library instead of Flask outright.</p> <p>I'm mostly lost on how I can incorporate an app.yaml file for GCP and the right entrypoints within the generated code. In addition, I know that the generated code declares its own app server, which I don't think you need to do for GCP. Here's my current <strong>app.yaml</strong>:</p> <pre><code>application: some-app-name
version: 1
runtime: python27
api_version: 1
threadsafe: yes
entrypoint: python app.py

libraries:
- name: connexion
  version: "latest"
</code></pre> <p>And here's my <strong>app.py</strong>:</p> <pre><code>import connexion

if __name__ == '__main__':
    app = connexion.App(__name__, specification_dir='./swagger/')
    app.add_api('swagger.yaml', arguments={'title': 'this is my API'})
    app.run(port=8080)
</code></pre> <p>The primary error I'm getting now is</p> <pre><code>google.appengine.api.yaml_errors.EventError: the library "connexion" is not supported
</code></pre> <p>I have a feeling that's because of the way I am declaring an app server in my app.py - it probably shouldn't be needed. How would I modify this file to still use my Swagger code but run on GCP?</p>
<p>You seem to have some inconsistencies in your file, it's unclear if you intended it to be a <a href="https://cloud.google.com/appengine/docs/python/config/appref" rel="nofollow noreferrer">standard environment <code>app.yaml</code> file</a> or a <a href="https://cloud.google.com/appengine/docs/flexible/python/runtime#overview" rel="nofollow noreferrer">flexible environment</a> one. I can't tell as I'm unfamiliar with swagger and flask.</p> <p>If it's supposed to be a standard environment one then:</p> <ul> <li>the <code>entrypoint:</code> is not a supported config keyword</li> <li>the <code>connexion</code> library is not one of <a href="https://cloud.google.com/appengine/docs/python/tools/built-in-libraries-27" rel="nofollow noreferrer">the runtime-provided third-party libraries</a>, so you can't <a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#requesting_a_library" rel="nofollow noreferrer">request it</a> (i.e. listing it in the <code>libraries</code> section). You need to <a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library" rel="nofollow noreferrer">install it (vendor it in)</a>. <ul> <li>it's missing the <code>handlers</code> section</li> </ul></li> </ul> <p>Probably a good idea to go through <a href="https://cloud.google.com/appengine/docs/python/getting-started/python-standard-env" rel="nofollow noreferrer">Getting Started with Flask on App Engine Standard Environment</a> </p> <p>If, however, your goal was a flexible environment <code>app.yaml</code> file then:</p> <ul> <li>you need the <code>env: flex</code> and <code>runtime: python</code> config in it(<code>vm: true</code> and <code>runtime: python27</code> in the original answer are now deprecated) </li> <li><a href="https://cloud.google.com/appengine/docs/flexible/python/runtime#dependencies" rel="nofollow noreferrer">installing/specifying dependencies</a> is done differently, not via the <code>libraries</code> section.</li> </ul>
python|google-app-engine|flask|swagger|google-cloud-platform
5
1,903,063
40,183,840
Python TypeError in my-code,
<p>I am getting this error while running the program below. I am running this code on CentOS, and I don't know what the problem is!</p> <p>I'm stuck with this error: <code>TypeError: put_photo() takes at most 3 arguments (4 given)</code></p> <pre><code>#!/usr/bin/python:
# -*- coding: utf-8 -*-
from sys import argv
#import tweepy
import facebook

def main():
    cfg = {
        "page_id"      : "XXXX",
        "access_token" : "XXXX"
    }
    api = get_api(cfg)
    msg = "Hello, world!"
    status = api.put_wall_post(msg)

def get_api(cfg):
    graph = facebook.graphapi(cfg['access_token'])
    resp = graph.get_object('me/accounts')
    page_access_token = None
    for page in resp['data']:
        if page['id'] == cfg['page_id']:
            page_access_token = page['access_token']
    graph = facebook.GraphAPI(page_access_token)
    '''
    caption = "இன்ரைய நாள் காட்டி #tamilcalender (©belongs to watermarked party)"
    albumid = ''
    with open(image.jpg,"rb") as image:
        posted_image_id = graph.put_photo(image, caption, albumid)
    '''
    return graph

if __name__ == "__main__":
    main()
</code></pre>
<p>The <code>put_photo</code> API takes only two arguments:</p> <ul> <li><code>image</code> - A file object representing the image to be uploaded.</li> <li><code>album_path</code> - A path representing where the image should be uploaded. Defaults to /me/photos, which creates/uses a custom album for each Facebook application.</li> </ul> <p><a href="http://facebook-sdk.readthedocs.io/en/latest/api.html?highlight=put_photo" rel="nofollow">Please check this link for more info.</a></p> <p>You are passing three arguments - <code>image, caption, albumid</code>.</p> <p>Along with these three, as explained in the comments above by <strong>@kindall</strong> and <strong>@BrandonIbbotson</strong>, one mandatory argument related to <code>self</code> is passed implicitly.</p> <p>Check the above link for examples and pass just the two valid arguments.</p>
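<p>Based on that signature, the commented-out block in the question would become something like this (treat it as a sketch — the exact parameter names vary between facebook-sdk versions, and <code>'me/photos'</code> shown here is just the documented default):</p> <pre><code>with open('image.jpg', 'rb') as image:
    # two arguments only: the file object and an (optional) album path
    posted_image_id = graph.put_photo(image, 'me/photos')
</code></pre>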
python|facebook|python-2.7
1
1,903,064
40,132,678
Try to import csv file to Python notebook Jupyter but the file does not exist
<p>I am trying to load a csv file into Pandas. I am getting a weird error that I have never encountered before: that the file does not exist, even though it does. I have tried different ways to fix it, like changing the slashes to backslashes and adding r before 'C:' (e.g. r'C:/...'), but it still does not work.</p> <pre><code>import pandas as pd
%matplotlib inline
df =pd.read_csv(‘C:/Users/caol3/Downloads/Data Sampler.csv’)

IOError                                   Traceback (most recent call last)
&lt;ipython-input-3-3740a47c4f96&gt; in &lt;module&gt;()
----&gt; 1 df =pd.read_csv('C:/Users/caol3/Downloads/Data Sampler.csv')

/opt/conda/envs/python2/lib/python2.7/site-packages/pandas/io/parsers.pyc in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
    560                     skip_blank_lines=skip_blank_lines)
    561
--&gt; 562     return _read(filepath_or_buffer, kwds)
    563
    564     parser_f.__name__ = name

/opt/conda/envs/python2/lib/python2.7/site-packages/pandas/io/parsers.pyc in _read(filepath_or_buffer, kwds)
    313
    314     # Create the parser.
--&gt; 315     parser = TextFileReader(filepath_or_buffer, **kwds)
    316
    317     if (nrows is not None) and (chunksize is not None):

/opt/conda/envs/python2/lib/python2.7/site-packages/pandas/io/parsers.pyc in __init__(self, f, engine, **kwds)
    643             self.options['has_index_names'] = kwds['has_index_names']
    644
--&gt; 645         self._make_engine(self.engine)
    646
    647     def close(self):

/opt/conda/envs/python2/lib/python2.7/site-packages/pandas/io/parsers.pyc in _make_engine(self, engine)
    797     def _make_engine(self, engine='c'):
    798         if engine == 'c':
--&gt; 799             self._engine = CParserWrapper(self.f, **self.options)
    800         else:
    801             if engine == 'python':

/opt/conda/envs/python2/lib/python2.7/site-packages/pandas/io/parsers.pyc in __init__(self, src, **kwds)
   1211         kwds['allow_leading_cols'] = self.index_col is not False
   1212
-&gt; 1213         self._reader = _parser.TextReader(src, **kwds)
   1214
   1215         # XXX

pandas/parser.pyx in pandas.parser.TextReader.__cinit__ (pandas/parser.c:3427)()

pandas/parser.pyx in pandas.parser.TextReader._setup_parser_source (pandas/parser.c:6861)()

IOError: File C:/Users/caol3/Downloads/Data Sampler.csv does not exist
</code></pre>
<p>Very late to the game here. It's possible you haven't actually saved your file as a CSV file. It could still be in <code>.xlsx</code> format, in which case your file will actually be named <code>file.csv.xlsx</code>. Follow the steps below (if you haven't done so):</p> <pre><code>In your Excel spreadsheet, click File.
Click Save As.
Click Browse to choose where you want to save your file.
Select &quot;CSV&quot; from the &quot;Save as type&quot; drop-down menu.
Click Save.
</code></pre>
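<p>A quick way to check what the file is actually called is to list the directory from the notebook (path copied from the question):</p> <pre><code>import os
print(os.listdir('C:/Users/caol3/Downloads'))
</code></pre>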
python|csv
0
1,903,065
19,275,527
how to handle blob sent via javascript through websocket back at the python server?
<p>I have to process the image back at the Python server with OpenCV. The blob reaches the Python server, but I could not figure out how to convert this blob back into an image with OpenCV.</p> <pre><code>//this is my javascript function to send the image, converted to a blob,
//back to my python server
function () {
    ctx.drawImage(video, 0, 0, 320, 240);
    var data = canvas.get()[0].toDataURL('image/jpeg', 1.0);
    newblob = dataURItoBlob(data);
    ws.send(newblob);
}
</code></pre> <p>This is my python backend handling:</p> <pre><code>class EchoServerProtocol(WebSocketServerProtocol):

    def onMessage(self, msg, binary):
        img = # here the code to convert the blob into an image
        blur = cv2.blur(img, (5, 5))
        hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
        msg = hsv
        print "the image:", msg
        #convert the image back to a blob and reply back over the websocket
        #haven't written the code for this part yet
        self.sendMessage(msg, binary)
</code></pre> <p>Please help me figure this out.</p>
<p>Check out how this blob is getting made on the client side, and then reverse-engineer it on the server.</p>
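<p>Assuming the binary websocket message holds the raw JPEG bytes produced by <code>toDataURL</code>/<code>dataURItoBlob</code>, a sketch of the server-side decode with OpenCV (constant names per modern cv2) would be:</p> <pre><code>import numpy as np
import cv2

def onMessage(self, msg, binary):
    if binary:
        # interpret the received bytes as a JPEG and decode to a BGR image
        arr = np.frombuffer(msg, dtype=np.uint8)
        img = cv2.imdecode(arr, cv2.IMREAD_COLOR)
        hsv = cv2.cvtColor(cv2.blur(img, (5, 5)), cv2.COLOR_BGR2HSV)
        # re-encode to JPEG bytes before replying over the websocket
        ok, buf = cv2.imencode('.jpg', hsv)
        self.sendMessage(buf.tobytes(), True)
</code></pre>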
javascript|python|google-chrome|opencv|websocket
1
1,903,066
62,292,310
Do we have any function to get filtering of data in R or Python
<p>I am new to R and I am unable to figure out how to filter the data as required.</p> <p>Below is the data (326 rows and 6 columns):</p> <p><a href="https://i.stack.imgur.com/aENr8.png" rel="nofollow noreferrer">DataSet</a></p> <p>Here is a small example:</p> <pre><code>Author, Commenid, Parentid, Submissionid, Score, Stance
User1 , 333c    , 222b    , 111b        , 10   , Positive
User2 , 444c    , 333c    , 5hdc        , 15   , Neutral
User3 , 222b    , 555d    , 23er        , 20   , Negative
User4 , 555d    , 666f    , 111b        , 11   , Positive
</code></pre> <p>Here user1 means he had replied to user2,</p> <pre><code>user3 had replied to user1
user4 had replied to user3
</code></pre> <p>I want to filter the users who have the same commentid and parentid. For the above example we would get the data filtered as:</p> <pre><code>Author Score Stance   Reply Score Stance
User2  15    Neutral  User1 10    Positive
User1  10    Positive User3 20    Negative
User3  20    Negative User4 11    Positive
</code></pre> <p>I tried a lot and I am not able to figure it out. Can anyone help me with how to do it exactly (R or Python)?</p> <p>Thanks in advance</p>
<p>Here is a base R answer.<br> First <code>match</code> columns <code>Commenid</code> with <code>Parentid</code>. Create a data set with the <code>Author</code> column and a <code>Reply</code> column of the authors matched before. Keep all rows with no <code>NA</code> values and join (<code>merge</code>) with the original data to have the other columns.</p> <pre><code>i &lt;- with(df1, match(Commenid, Parentid)) res &lt;- data.frame(Author = df1$Author, Reply = df1$Author[i]) res &lt;- res[complete.cases(res), ] merge(res, df1) # Author Reply Commenid Parentid Submissionid #1 User1 User2 333c 222b 111b #2 User3 User1 222b 555d 23er #3 User4 User3 555d 666f 111b </code></pre> <p>A <a href="/questions/tagged/dplyr" class="post-tag" title="show questions tagged &#39;dplyr&#39;" rel="tag">dplyr</a> solution could be</p> <pre><code>library(dplyr) df1 %&gt;% mutate(i = match(Commenid, Parentid), Reply = Author[i]) %&gt;% filter(!is.na(i)) %&gt;% select(Author, Reply, everything(vars = -i)) </code></pre> <p><strong>Data</strong></p> <pre><code>df1 &lt;- read.csv(text = " Author,Commenid,Parentid,Submissionid User1 , 333c , 222b , 111b User2 , 444c , 333c , 5hdc User3 , 222b , 555d , 23er User4 , 555d , 666f , 111b ") df1[] &lt;- lapply(df1, trimws) </code></pre> <h1>Edit</h1> <p>With the new data and problem described in comments, here is a <code>dplyr</code> solution. After what is basically the same as above, it joins the result with the original data set and reorders the columns.</p> <pre><code>library(dplyr) df2 %&gt;% mutate(i = match(Commenid, Parentid), Reply = Author[i]) %&gt;% filter(!is.na(i)) %&gt;% select(-i) %&gt;% select(Author, Score, Stance, Reply, everything()) %&gt;% left_join(df2 %&gt;% select(Author, Score, Stance), by = c("Reply" = "Author")) %&gt;% select(-matches("id$"), everything(), matches("id$")) </code></pre> <p><strong>New data</strong></p> <pre><code>df2 &lt;- read.csv(text = " Author,Commenid,Parentid,Submissionid, Score, Stance User1 , 333c , 222b , 111b , 10 , Positive User2 , 444c , 333c , 5hdc , 15 , Neutral User3 , 222b , 555d , 23er , 20 , Negative User4 , 555d , 666f , 111b , 11 , Positive ") names(df2) &lt;- trimws(names(df2)) df2[] &lt;- lapply(df2, trimws) </code></pre>
python|r
1
1,903,067
67,509,865
Getting TypeError in Card Class
<p><em>This is the output below. I get my standard deck of 52 cards, but it is followed by a TypeError. I don't understand where I'm getting the error message from or how to fix it. I've tried numerous things, but it either doesn't print anything or it gives me a TypeError again with non-string (type Card).</em></p> <pre><code>decks = Deck() print(decks) 2 S 3 S 4 S 5 S 6 S 7 S 8 S 9 S T S J S Q S K S A S 2 C 3 C 4 C 5 C 6 C 7 C 8 C 9 C T C J C Q C K C A C 2 D 3 D 4 D 5 D 6 D 7 D 8 D 9 D T D J D Q D K D A D 2 H 3 H 4 H 5 H 6 H 7 H 8 H 9 H T H J H Q H K H A H Traceback (most recent call last): Python Shell, prompt 3, line 1 builtins.TypeError: __str__ returned non-string (type NoneType) </code></pre> <p><strong>I want it to be able to display my deck of 52 cards in a matrix, but if I can't even get it to display normally it's frustrating.</strong></p> <pre><code>import random class Card: def __init__(self, rank, suit): # initialize number variables here self._rank = rank self._suit = suit def __str__(self): #overload this to get a readable string representation of our card object return str(self._rank) + ' ' + str(self._suit) def __eq__(self, other): if self._rank == other._rank: return True def __ne__(self, other): if self._rank != other._rank: return True class Deck: def __init__(self): # create a list of card objects only need to pass in self #intialize a string or list of suits #intialize a string or list of ranks # building the list of cards self._deck = [] self._dealt = [] suits = ['S', 'C', 'D', 'H'] ranks = ['2', '3' , '4', '5', '6', '7' , '8', '9','T', 'J' , 'Q', 'K', 'A'] for suit in suits: for rank in ranks: self._deck.append(Card(rank,suit)) def __str__(self): for i in self._deck: print(i) </code></pre> <p><strong>Desired output:</strong></p> <pre><code>2 C, 3 C, 4 C, 5 C, 6 C, 7 C, 8 C, 9 C, T C, J C, Q C, K C, A C, 2 D, 3 D, 4 D, 5 D, 6 D, 7 D, 8 D, 9 D, T D, J D, Q D, K D, A D, 2 H, 3 H, 4 H, 5 H, 6 H, 7 H, 8 H, 9 H, T H, J H, Q H, K H, A H, 2 S, 3 S, 4 S, 5 S, 6 S, 7 S, 8 S, 9 S, T S, J S, Q S, K S, A S </code></pre> <p>Thank you for any help in advance; it is truly appreciated!</p>
<p>To fix the TypeError, you need a return value from <code>__str__</code>. It's ok (but not recommended) to print stuff though, just make sure you return a string.</p> <p>To print it in your desired matrix form, you could create a numpy array and print that:</p> <pre><code>import numpy as np class Deck: def __init__(self): # create a list of card objects only need to pass in self #intialize a string or list of suits #intialize a string or list of ranks # building the list of cards self._deck = [] self._dealt = [] suits = ['S', 'C', 'D', 'H'] ranks = ['2', '3' , '4', '5', '6', '7' , '8', '9','T', 'J' , 'Q', 'K', 'A'] for suit in suits: for rank in ranks: self._deck.append(str(Card(rank,suit))) def __str__(self): return str(np.array(self._deck).reshape(13,4)) deck = Deck() print(str(deck)) &gt;&gt;&gt; [['2 S' '3 S' '4 S' '5 S'] ['6 S' '7 S' '8 S' '9 S'] ['T S' 'J S' 'Q S' 'K S'] ['A S' '2 C' '3 C' '4 C'] ['5 C' '6 C' '7 C' '8 C'] ['9 C' 'T C' 'J C' 'Q C'] ['K C' 'A C' '2 D' '3 D'] ['4 D' '5 D' '6 D' '7 D'] ['8 D' '9 D' 'T D' 'J D'] ['Q D' 'K D' 'A D' '2 H'] ['3 H' '4 H' '5 H' '6 H'] ['7 H' '8 H' '9 H' 'T H'] ['J H' 'Q H' 'K H' 'A H']] </code></pre>
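<p>If you'd rather avoid the numpy dependency, a minimal pure-Python alternative is to have <code>__str__</code> return the joined card strings, which produces the comma-separated output shown in the question:</p> <pre><code>
def __str__(self):
    # Card.__str__ already returns "rank suit", so just join all 52 of them
    return ', '.join(str(card) for card in self._deck)
</code></pre>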
python|python-3.x|string|list|class
0
1,903,068
67,437,891
Adding calculated column to dataframe causes error using lambda function
<p>I am trying to add a new calculated column to a dataframe based on a function that does some math. The function uses values from c1 and c2 of my dataframe as inputs, as well as some predefined constant variables.</p> <p>As part of the function, the values of c2 are used to look up a value in a dictionary using lambda. This process throws a <strong>&quot;TypeError: 'DataFrame' objects are mutable, thus they cannot be hashed&quot;</strong> at me.</p> <p>There are no null or strange values in my dataframe.</p> <p>The function call looks something like this:</p> <pre><code>df['new column'] = some_function(df['c1'], var1, var2,... df['c2']) </code></pre> <p>The part of &quot;some_function&quot; that fails looks like this:</p> <pre><code> value = some_dict.get(df['c2']) or some_dict[min(some_dict.keys(), key = lambda key: abs(key-df['c2']))] </code></pre> <p>If I replace <code>df['c2']</code> with a constant the code runs as expected.</p> <p>If I use <code>df['c2'].mean()</code> I get <strong>&quot;TypeError: 'Series' objects are mutable, thus they cannot be hashed&quot;</strong></p> <pre><code>print(df.info()) </code></pre> <p>Returns:</p> <pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt; Index: 729 entries, 2019-05-08 00:00:00.000 to 2021-05-05 00:00:00.000 Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 (c1) 729 non-null float64 1 (c2) 729 non-null float64 dtypes: float64(2) memory usage: 17.1+ KB None </code></pre> <p>c1 and c2 don't seem to differ; I tried swapping them in the function call and also using c1 as input in both places.</p> <pre><code>type(df['c1']) Out[178]: pandas.core.frame.DataFrame type(df['c2']) Out[179]: pandas.core.frame.DataFrame </code></pre> <p>Any ideas how I can fix this? Should I define a lookup function instead of using lambda?</p>
<p>You can try to do it this way:</p> <pre><code>df['new column'] = df.apply(lambda x: some_function(x['c1'], var1, var2,... x['c2']), axis=1) </code></pre> <p>As mentioned in the comment, you cannot pass a whole Pandas Series or DataFrame to a dictionary; you need to do it element-wise. Those dict lookups and custom functions are also not designed for vectorized operations the way numpy and pandas functions are.</p> <p>With the use of <code>.apply()</code> like the above, you are passing the element values of each row of the dataframe to the custom function <code>some_function()</code>, rather than passing the whole dataframe / series to the function as parameter inputs.</p> <p>In particular, since you want to pass the values of <code>df['c2']</code> to <code>some_dict.get()</code>, and a Python dict is not designed to work on a whole Pandas Series (i.e. a Pandas column), we can bridge this gap by feeding the series to the function element by element using this <code>.apply()</code> method with <code>axis=1</code>.</p> <p>You can define <code>some_function()</code> just like an ordinary function accepting only scalar values (not vector objects like pandas dataframe / series). E.g.</p> <pre><code>def some_function(c1_val, var1, var2,... c2_val): ... value = some_dict.get(c2_val) or some_dict[min(some_dict.keys(), key = lambda key: abs(key - c2_val))] .... </code></pre>
python|pandas|lambda|hash|calculated-columns
0
1,903,069
36,683,836
Python threading.Thread() returns NoneType?
<p>I am working on a small application that I know will have 3 threads independent from the main thread, and at some point I will need to tell one thread from another. Suppose the threads are <code>A</code>, <code>B</code>, <code>C</code>. <code>A</code> will need to join with <code>C</code> if something happens. I am trying to add the threads to a dictionary before starting them, so I can identify thread <code>C</code> later:</p> <pre><code>currentThreads['A'] = threading.Thread(target=func, args=[]) currentThreads['A'].start() currentThreads['B'] = threading.Thread(target=func, args=[]).start() currentThreads['B'].start() </code></pre> <p>The behavior is weird: sometimes both <code>currentThreads[key].start()</code> calls yield <code>AttributeError: 'NoneType' object has no attribute 'start'</code>, sometimes only <code>currentThreads['B'].start()</code> does.</p> <p>Any clue why this might happen?</p>
<p>This is because <a href="https://docs.python.org/3/library/threading.html#threading.Thread.start" rel="nofollow"><code>start</code></a> returns <code>None</code> so in:</p> <pre><code>currentThreads['B'] = threading.Thread(target=func, args=[]).start() </code></pre> <p><code>currentThreads['B']</code> is <code>None</code> thus calling <code>currentThreads['B'].start()</code> will raise <code>AttributeError</code></p>
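<p>A minimal sketch of the fix, following the question's setup: store the <code>Thread</code> object first and call <code>start()</code> as a separate statement, so the dictionary never holds the <code>None</code> that <code>start()</code> returns (<code>func</code> is the target from the question):</p> <pre><code>
import threading

currentThreads = {}
for name in ('A', 'B', 'C'):
    t = threading.Thread(target=func, args=())  # func as in the question
    currentThreads[name] = t                    # store the Thread object itself
    t.start()                                   # start() returns None, so never assign it

currentThreads['C'].join()                      # later, e.g. thread A waits on C
</code></pre>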
python|multithreading
1
1,903,070
36,253,216
Pythonic way of slicing a list w.r.t. first element in tuple
<p>I have a sorted list of tuples of the form</p> <pre><code>x = [(0,1), (0,2), (0,3), ... (1,1), (1,3), (1,4), ... ... (n,0), (n,4), ... ] </code></pre> <p>I want to slice the list so that the new list contains every tuple (x, y) whose first element equals a certain value, with the original order kept. Now, this would obviously work:</p> <pre><code>y = [(a,b) for (a,b) in x if a == n] </code></pre> <p>But it is really slow. It would be faster to find the first and last index that satisfy this condition with binary search. <code>index</code> gives you the first index for a value, and <code>index</code> on the reversed list would give the last index. How would I apply that here, in a pythonic way, without doing <code>[a for (a,b) in x]</code> and copying the whole list?</p>
<p>As suggested in the comments by @Liongold, you can use bisect. Assuming you want all tuples <code>t</code> with <code>t[0] == 1</code>:</p> <pre><code>from bisect import bisect_left x = [(0, 1), (0, 2), (1, 1), (1, 2), (2, 1), (2, 2)] start = bisect_left(x, (1,)) # index of the very first (1, i) end = bisect_left(x, (2,)) # index after the very last (1, i) y = x[start:end] # y: [(1, 1), (1, 2)] </code></pre> <p>The one-element tuple <code>(1,)</code> sorts before every <code>(1, i)</code>, so it marks the start of the block; a sentinel like <code>(1, None)</code> would raise a <code>TypeError</code> on Python 3, where integers cannot be compared with <code>None</code>.</p> <p>You can find details in the <a href="https://docs.python.org/2/library/bisect.html#bisect.bisect_left" rel="nofollow">bisect docs</a>.</p>
python|tuples|slice|binary-search
2
1,903,071
36,362,265
Files won't be read in GUI program, only in shell
<p>Hey guys, so I have some lengthy code here I need help with. Specifically: my end goal is an assignment creating a word cloud, but I haven't even started on that part yet. As of now, I've been able to create the frequency accumulator function and my first GUI platform.</p> <p>When running the program, the GUI asks the user to type in the name of their text file. However, you can type gibberish or even leave it blank, click the transform file button, and it still opens up the Shell and prompts the user for the text file name and then the number of words they want in the list.</p> <p>I don't even want the 2nd part (asking how many words), but I didn't know another way of doing it for my frequency counter.</p> <pre><code>from graphics import * ##Deals with Frequency Accumulator## def byFreq(pair): return pair[1] ##Function to allow user to upload their own text document## def FileOpen(userPhrase): filename = input("Enter File Name (followed by .txt): ") text = open(filename, 'r').read() text = text.lower() for ch in ('!"#$%&amp;()*+,-./:;&lt;=&gt;?@[\\]^_{}~'): text = text.replace(ch, " ") words = text.split() counts = {} for w in words: counts[w] = counts.get(w,0) + 1 n = eval(input("Output how many words?")) items = list(counts.items()) items.sort(key=byFreq, reverse=True) for i in range(n): word, count = items[i] print("{0:&lt;15}{1:&gt;5}".format(word, count)) ##This Function allows user to simply press button to see an example## def Example(): win = GraphWin("Word Cloud", 600, 600) file = open("econometrics.txt", "r", encoding = "utf-8") text = file.read() text = text.lower() for ch in ('!"#$%&amp;()*+,-./:;&lt;=&gt;?@[\\]^_{}~'): text = text.replace(ch, " ") words = text.split() counts = {} for w in words: counts[w] = counts.get(w,0) + 1 n = eval(input("Output how many words?")) items = list(counts.items()) items.sort(key=byFreq, reverse=True) for i in range(n): word, count = items[i] print("{0:&lt;15}{1:&gt;5}".format(word, count)) ######################################################################### ##Gold Boxes## def boxes(gwin, pt1, pt2, words): button = Rectangle(pt1, pt2) button.setFill("gold") button.draw(gwin) #Middle of the box coordinates labelx = (pt1.getX() + pt2.getX())/2.0 labely = (pt1.getY() + pt2.getY())/2.0 #Labels label = Text(Point(labelx,labely),words) label.setFill("black") label.draw(gwin) ####GUI function##### def main(): #Creates the actual GUI win = GraphWin("Word Cloud Prompt", 600, 600) #Box which user types into: inputBox = Entry(Point(300,150),50) inputBox.draw(win) #Gold Boxes at Top boxes(win, Point(220,300), Point(370,350), "Transform Text File") boxes(win, Point(220,400), Point(370,450), "Example text file") #Tells user what to do prompt = Text(Point(300,25),"Welcome to the Word Cloud program!") prompt.draw(win) prompt = Text(Point(300,125),"Enter your textfile name") prompt.draw(win) prompt = Text(Point(300,180),"Want to see our own file into a Word Cloud? Click below") prompt.draw(win) #display answer display = Text(Point(300, 500),"") display.draw(win) #User Clicks a box: pt = win.getMouse() #Store user info userPhrase = inputBox.getText() key = inputBox.getText() #Incase a button isn't clicked output = "No button was clicked, Please restart program" #Clicking the Transform Text File Button if pt.getY() &gt;= 300 and pt.getY() &lt;= 350: if pt.getX() &gt;= 220 and pt.getX() &lt;= 370: output = FileOpen(userPhrase) #Clicking the Example Text File Button if pt.getY() &gt;= 400 and pt.getY() &lt;= 450: if pt.getX() &gt;= 220 and pt.getX() &lt;= 370: output = Example() #State Answer display.setText(output) display.setFill("purple3") display.setStyle("bold") prompt.setText("Thank You! Click anywhere to close!") prompt.setFill("red") #closing program pt = win.getMouse() win.close() main() </code></pre>
<p>(Editor's Note: I've modified some of the formatting for my ease of reading, but the function will be identical)</p> <pre><code>def FileOpen(userPhrase): """FileOpen allows users to upload their own text document.""" filename = input("Enter File Name (followed by .txt): ") text = open(filename, 'r').read() text = text.lower() for ch in ('!"#$%&amp;()*+,-./:;&lt;=&gt;?@[\\]^_{}~'): text = text.replace(ch, " ") words = text.split() counts = {} for w in words: counts[w] = counts.get(w, 0) + 1 n = eval(input("Output how many words?")) items = list(counts.items()) items.sort(key=byFreq, reverse=True) for i in range(n): word, count = items[i] print("{0:&lt;15}{1:&gt;5}".format(word, count)) </code></pre> <p>This is the function we're concerned with. You call it with an argument <code>userPhrase</code> that comes from a gui text entry field, but then you never <em>use</em> <code>userPhrase</code> anywhere in the function. Consider instead:</p> <pre><code>def file_open(userPhrase=None): # file_open is lowercase and snake_case for PEP8 # userPhrase=None to make it an optional argument filename = userPhrase if userPhrase is not None else \ input("Enter file name (including extension): ") ... </code></pre> <p>Then you'll have to call it differently if you want <code>file_open</code> to prompt for a filename if not given one.</p> <pre><code>def main(): ... if 300 &lt;= pt.getY() &lt;= 350 and 220 &lt;= pt.getX() &lt;= 370: # you can chain conditionals as above. It makes it much easier to read! # `a &lt;= N &lt;= b` is easily read as `N is between a and b` userPhrase = inputBox.getText() if userPhrase.strip(): # if there's something there file_open(userPhrase) else: file_open() </code></pre>
python|word-cloud
0
1,903,072
36,362,175
PySpark similarities retrieved by IndexedRowMatrix().columnSimilarities() are not acessible: INFO ExternalSorter: Thread * spilling in-memory map
<p>When I run the code:</p> <pre><code>from pyspark import SparkContext from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating from random import random import os from scipy.sparse import csc_matrix import pandas as pd from pyspark.mllib.linalg.distributed import RowMatrix from pyspark.mllib.linalg import Vectors from pyspark.mllib.linalg.distributed import CoordinateMatrix, MatrixEntry from pyspark.sql import SQLContext sc =SparkContext() sqlContext = SQLContext(sc) df = pd.read_csv("/Users/Andre/Code/blitsy-analytics/R_D/Data/cust_item_counts.csv", header=None) customer_map = {x[1]:x[0] for x in enumerate(df[0].unique())} item_map = {x[1]:x[0] for x in enumerate(df[1].unique())} df[0] = df[0].map(lambda x: customer_map[x]) df[1] = df[1].map(lambda x: item_map[x]) #matrix = csc_matrix((df[2], (df[0], df[1])),shape=(max(df[0])+1, max(df[1])+1)) entries = sc.parallelize(df.apply(lambda x: tuple(x), axis=1).values) mat = CoordinateMatrix(entries).toIndexedRowMatrix() sim = mat.columnSimilarities() sim.entries.map(lambda x: x).first() </code></pre> <p>I get thrown into a loop of threads spilling onto disk:</p> <pre><code>&gt; 16/04/01 12:09:25 INFO ContextCleaner: Cleaned accumulator 294 &gt; 16/04/01 12:09:25 INFO ContextCleaner: Cleaned accumulator 293 &gt; 16/04/01 12:09:25 INFO ContextCleaner: Cleaned accumulator 292 &gt; 16/04/01 12:09:25 INFO ContextCleaner: Cleaned accumulator 291 &gt; 16/04/01 12:09:42 INFO ExternalSorter: Thread 108 spilling in-memory &gt; map of 137.6 MB to disk (1 time so far) 16/04/01 12:09:42 INFO &gt; ExternalSorter: Thread 112 spilling in-memory map of 158.1 MB to disk &gt; (1 time so far) 16/04/01 12:09:42 INFO ExternalSorter: Thread 114 &gt; spilling in-memory map of 154.2 MB to disk (1 time so far) 16/04/01 &gt; 12:09:42 INFO ExternalSorter: Thread 113 spilling in-memory map of &gt; 143.4 MB to disk (1 time so far) </code></pre> <p>This doesn't happen with the matrix 'mat', which returns its first row entry.</p> <p>Is this to do with memory management or the columnSimilarities() function itself?</p> <p>I have ~86000 rows and columns in the sim variable.</p> <p>My dataset was a list of tuples (user_id, item_id, value). I map the user_id and item_id ranges onto values between 0 and the number of distinct user_ids / item_ids respectively. This is so an id of 800000 doesn't force a matrix that large.</p> <p>There are 800,000 entries of this type. The matrix in the variable 'mat' holds the value from the tuple at the coordinates of (user_id, item_id). This is verified by me as being the case.</p> <p>The matrix at 'mat' has ~41,000 users and ~86,000 items. The column similarity creates comparisons between each pair of items, which is why it has dimensions 86k x 86k.</p> <p>This was all done in the pyspark terminal ./bin/pyspark.</p>
<p>As discussed in the comments, the issue is that you have a lot of data that is not well partitioned for your cluster configuration. That's why it was spilling to disk.</p> <p>You'll need to give your application more memory and/or increase the number of data partitions.</p>
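<p>A minimal sketch of both knobs, using the variable names from the question (the memory sizes and partition count are placeholders to tune for your cluster):</p> <pre><code>
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .set('spark.executor.memory', '4g')   # placeholder: size for your cluster
        .set('spark.driver.memory', '4g'))
sc = SparkContext(conf=conf)

# spread the 800k entries over more partitions before building the matrix
entries = sc.parallelize(df.apply(lambda x: tuple(x), axis=1).values, 200)
</code></pre>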
python|apache-spark|pyspark|recommendation-engine|bigdata
1
1,903,073
19,689,325
Put specific lines into an array
<p>I have a file which is like:</p> <blockquote> <p>0.5 0.5 0.5 0.5</p> <p>1 0.1</p> <p>0.6 0.6 0.6 0.6</p> <p>1 0.2</p> </blockquote> <p>So my question is: I just want the lines with &quot;0.5&quot; and &quot;0.6&quot;, and to put them in an array, which would be like</p> <blockquote> <p>0.5 0.5 0.5</p> <p>0.6 0.6 0.6</p> </blockquote> <p>How should I do this? I have tried several methods such as readlines and row.split, but I just cannot get the right form. Maybe I did not write the correct form of readlines and row.split.</p>
<p>Well, you can do this by going over all the lines and keeping the ones that start with the values you want. For your case (assuming <code>text.txt</code> is the name of your file):</p> <pre><code>with open('text.txt') as f: l = [var.rstrip() for var in f if var.startswith(('0.5','0.6'))] print(l) </code></pre>
python
2
1,903,074
22,250,666
beautifulsoup find specific tags
<p>I started to get into beautifulsoup but I came to a problem that I can't seem to solve.</p> <p>I have <a href="http://www.gw2spidy.com/item/19697" rel="nofollow">this</a></p> <p>website and want to parse the value of the item. The value can be found between the </p> <pre><code>&lt;span class="gw2money-fragment"&gt;%value &lt;i class="gw2money-silver"&gt;s&lt;/i&gt;&lt;/span&gt; </code></pre> <p>tags and</p> <pre><code>&lt;span class="gw2money-fragment"&gt;%value &lt;i class="gw2money-copper"&gt;c&lt;/i&gt;&lt;/span&gt;. </code></pre> <p>Getting those values wasn't a problem, the problem is checking whether the value is inside the <code>&lt;i class="gw2money-silver"&gt;</code> or <code>&lt;i class="gw2money-copper"&gt;</code> tags.</p> <pre><code>r = requests.get("http://www.gw2spidy.com/item/24467", proxies=proxyDict) soup = BeautifulSoup(r.text) checksoup = soup.find_all("span") numliste = [] for links in checksoup: #print(links) price = links.contents[0] print(price) del numliste[0] print(numliste) </code></pre> <p>This is how I retrieve the copper and silver values currently.</p>
<p>I'd search for the <code>gw2money-fragment</code> class and then test to see what the class is on the contained <code>i</code> element:</p> <pre><code>for row in soup.find_all('tr'): fragments = row.find_all('span', class_='gw2money-fragment') if not fragments: continue label = row.th or row.td print(label.text) for fragment in fragments: value = fragment.text.split()[0] type_ = fragment.i['class'][0].rsplit('-', 1)[-1] print('-', value, type_) </code></pre> <p>Demo:</p> <pre><code>&gt;&gt;&gt; for row in soup.find_all('tr'): ... fragments = row.find_all('span', class_='gw2money-fragment') ... if not fragments: ... continue ... label = row.th or row.td ... print(label.text) ... for fragment in fragments: ... value = fragment.text.split()[0] ... type_ = fragment.i['class'][0].rsplit('-', 1)[-1] ... print('-', value, type_) ... Sell Price: - 1 silver - 50 copper Buy Price: - 1 silver - 32 copper Topaz Nugget - 2 silver - 98 copper - 1 silver - 77 copper Sunstone Nugget - 3 silver - 17 copper - 2 silver - 15 copper Carnelian Nugget - 3 silver - 48 copper - 2 silver - 15 copper Peridot Nugget - 3 silver - 21 copper - 2 silver - 19 copper Adorned Tiger's Eye Jewel - 4 silver - 26 copper - 3 silver - 85 copper Tiger's Eye Copper Amulet of Precision - 6 silver - 26 copper - 4 silver - 51 copper Tiger's Eye Copper Ring of Precision - 6 silver - 46 copper - 4 silver - 76 copper Tiger's Eye Copper Stud of Precision - 7 silver - 2 copper - 5 silver - 43 copper </code></pre>
python|beautifulsoup|html-parser
3
1,903,075
21,933,904
matplotlib pdf savefig exiting early
<p>So I've copied the example given <a href="http://matplotlib.org/examples/pylab_examples/multipage_pdf.html" rel="nofollow">here</a> and when I run it I get:</p> <pre><code>Traceback (most recent call last): File "C:\Users\User\Documents\Project work\pdf.py", line 9, in &lt;module&gt; with PdfPages('multipage_pdf.pdf') as pdf: AttributeError: __exit__ </code></pre> <p>So where do I go from here? Thanks</p>
<p><code>PdfPages</code> has become a context manager only in version 1.3.1. See the <a href="https://mail.python.org/pipermail/python-announce-list/2013-October/010071.html" rel="nofollow">changelog</a>.</p> <p>In particular, observe the following line:</p> <blockquote> <p>Added a context manager for creating multi-page pdfs (see <code>matplotlib.backends.backend_pdf.PdfPages</code>).</p> </blockquote>
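<p>On versions older than 1.3.1 you can still produce a multi-page PDF; just close the <code>PdfPages</code> object yourself instead of using <code>with</code>. A minimal sketch:</p> <pre><code>
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

pdf = PdfPages('multipage_pdf.pdf')
plt.plot([1, 2, 3], [1, 4, 9])
pdf.savefig()   # saves the current figure as one page
plt.close()
pdf.close()     # the with-statement would do this for you on 1.3.1+
</code></pre>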
python|matplotlib
3
1,903,076
16,706,956
Is there a difference between "raise exception()" and "raise exception" without parenthesis?
<p>Defining a parameterless exception:</p> <pre><code>class MyException(Exception): pass </code></pre> <p>When raised, is there any difference between:</p> <pre><code>raise MyException </code></pre> <p>and</p> <pre><code>raise MyException() </code></pre> <p>I couldn't find any; is it simply an overloaded syntax?</p>
<p>The short answer is that both <code>raise MyException</code> and <code>raise MyException()</code> do the same thing. This first form auto instantiates your exception.</p> <p>The <a href="http://docs.python.org/3/reference/simple_stmts.html#the-raise-statement" rel="noreferrer">relevant section from the docs</a> says:</p> <blockquote> <p><em>raise</em> evaluates the first expression as the exception object. It must be either a subclass or an instance of BaseException. If it is a class, the exception instance will be obtained when needed by instantiating the class with no arguments.</p> </blockquote> <p>That said, even though the semantics are the same, the first form is microscopically faster, and the second form is more flexible (because you can pass it arguments if needed).</p> <p>The usual style that most people use in Python (i.e. in the standard library, in popular applications, and in many books) is to use <code>raise MyException</code> when there are no arguments. People only instantiate the exception directly when there some arguments need to be passed. For example: <code>raise KeyError(badkey)</code>.</p>
python|exception
134
1,903,077
43,782,889
Tweepy: How can I look up more than 100 user screen names
<p>You can only <a href="https://dev.twitter.com/rest/reference/get/users/lookup" rel="nofollow noreferrer">retrieve 100 user objects per request</a> with the <code>api.lookup_users()</code> method. Is there an easy way to retrieve more than 100 using Tweepy and Python? I have read this post: <a href="https://stackoverflow.com/questions/29223454/user-id-to-username-tweepy">User ID to Username tweepy</a> but it does not help with the more than 100 problem. I am pretty novice in Python so I cannot come up with a solution myself. What I have tried is this:</p> <pre><code>users = [] i = 0 num_pages = 2 while i &lt; num_pages: try: # Look up a collection of ids users.append(api.lookup_users(user_ids=ids[100*i:100*(i+1)-1])) except tweepy.TweepError: # We get a tweep error print('Something went wrong, quitting...') i = i + 1 </code></pre> <p>where <code>ids</code> is a list containing the ids, but I get <code>IndexError: list index out of range</code> when I try to get a user with index higher than 100. If it helps I am only interested in getting the screen names from the user ids. </p>
<p>I haven't tested it since I don't have access to the API, but if you have a collection of user ids of any size, this should fetch all of them (<code>api</code> is assumed to be your authenticated <code>tweepy.API</code> instance).</p> <p>It fetches any remainder first, meaning if you have a list of 250 ids, it will fetch the 50 users with the last 50 ids in the list (<code>user_ids[-remainder:]</code>).<br> Then it will fetch the remaining 200 users in batches of one hundred.</p> <pre><code>from tweepy import TweepError users = [] user_ids = [] # collection of user ids count_100 = len(user_ids) // 100 # number of full batches of 100 ids remainder = len(user_ids) % 100 # ids left over after the full batches for i in range(0, count_100 + 1): try: if i == 0 and remainder &gt; 0: users.append(api.lookup_users(user_ids=user_ids[-remainder:])) elif i &gt; 0: end_at = i * 100 start_at = end_at - 100 users.append(api.lookup_users(user_ids=user_ids[start_at:end_at])) except TweepError: print('Something went wrong, quitting...') </code></pre>
python|twitter|tweepy
0
1,903,078
43,714,543
Unable to save downloaded images into a folder on the desktop using python
<p>I have made a scraper which at this moment is parsing image links and saving the downloaded images into the Python directory by default. The only thing I want to do now is choose a folder on the desktop to save those images in, but I can't. Here is what I'm up to:</p> <pre><code>import requests import os.path import urllib.request from lxml import html def Startpoint(): url = "https://www.aliexpress.com/" response = requests.get(url) tree = html.fromstring(response.text) titles = tree.xpath('//div[@class="item-inner"]') for title in titles: Pics="https:" + title.xpath('.//span[@class="pic"]//img/@src')[0] endpoint(Pics) def endpoint(images): sdir = (r'C:\Users\ar\Desktop\mth') testfile = urllib.request.URLopener() xx = testfile.retrieve(images, images.split('/')[-1]) filename=os.path.join(sdir,xx) print(filename) Startpoint() </code></pre> <p>Upon execution the above code throws an error showing: "join() argument must be str or bytes, not 'tuple'"</p>
<p>You can download images with Python's <code>urllib</code>. See the official documentation here: <a href="https://docs.python.org/2/library/urllib.html" rel="nofollow noreferrer">urllib documentation for Python 2.7</a>. If you want to use Python 3, follow this documentation instead: <a href="https://docs.python.org/3/library/urllib.html" rel="nofollow noreferrer">urllib for Python 3</a>.</p>
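<p>The error in the question comes from <code>retrieve()</code> returning a tuple rather than a path. A sketch of the <code>endpoint</code> function that builds the full target path first and lets <code>urlretrieve</code> save straight into the desktop folder (folder path taken from the question):</p> <pre><code>
import os
import urllib.request

def endpoint(image_url):
    sdir = r'C:\Users\ar\Desktop\mth'
    filename = os.path.join(sdir, image_url.split('/')[-1])  # full target path
    urllib.request.urlretrieve(image_url, filename)          # downloads to that path
    print(filename)
</code></pre>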
python|web-scraping
1
1,903,079
54,446,702
store Django models in a dataframe and get column names from it even though dataframe is empty
<p>I have Django models, which are backed by SQL tables, that I need to read and store in a dataframe, later retrieving just one column to work on. On the first run the table is blank, because of which the dataframe comes out blank <code>[]</code>.</p> <p>I need the models to return a blank dataframe with column names even when there is no data. The current line of code that I've been using to read the model into a dataframe is as follows:</p> <pre><code>dt = pd.DataFrame.from_records(my_table.objects.all().values()) my_val = dt.col1.iat[-1] </code></pre> <p>Currently, the code is failing with the following error:</p> <pre><code>AttributeError: 'DataFrame' object has no attribute 'col1' </code></pre>
<p>Pass the <code>columns</code> parameter with the names of your columns, matching the order of the actual columns of the data.</p> <p>For example, like this:</p> <pre><code>dt = pd.DataFrame.from_records(my_table.objects.all().values(), columns=['col1', 'col2', 'col3']) </code></pre> <p>Quoting the documentation of the parameter:</p> <blockquote> <pre>columns : sequence, default None Column names to use. If the passed data do not have names associated with them, this argument provides names for the columns. Otherwise this argument indicates the order of the columns in the result (any names not found in the data will become all-NA columns)</pre> </blockquote>
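<p>If you'd rather not hard-code the names, a sketch that derives them from the model itself; this assumes <code>my_table</code> is the model class from the question (a field's <code>attname</code> matches the keys that <code>.values()</code> produces):</p> <pre><code>
import pandas as pd

cols = [f.attname for f in my_table._meta.fields]  # concrete model columns, in order
dt = pd.DataFrame.from_records(my_table.objects.all().values(), columns=cols)
my_val = dt.col1.iat[-1] if not dt.empty else None  # also guard the empty-table case
</code></pre>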
python|django|pandas|django-models
0
1,903,080
71,187,209
How do you pass data into a Django form?
<p>I'm using Django 4.0 and I'm trying to create a web app where the user selects a choice from a dropdown Django form. However, the choices will vary depending on the question and I want the form to be able to adapt to this.</p> <p>This is what I have in forms.py:</p> <pre><code>class QuestionAnswerForm(forms.Form): def __init__(self, q_id, *args, **kwargs): self.q_id = q_id super(QuestionAnswerForm, self).__init__(*args, **kwargs) q_id = self.q_id # this line throws an error question = Question.objects.get(pk=q_id) choice = forms.ChoiceField(choices=get_choices(question)) </code></pre> <p>However, I get the error: name 'self' not defined. I just want to know an easy way to pass the question id to the form so that the get_choices function can then return the choices that need to be displayed on the form.</p> <p>In views.py, the start of my view for the question sets the form in this way:</p> <pre><code>def question_detail_view(request, q_id): print(f&quot;Question id is {q_id}&quot;) form = QuestionAnswerForm(request.POST or None, q_id=q_id) </code></pre> <p>My question is: how do I access the q_id in the QuestionAnswerForm class?</p>
<p>I found out how to do it using <a href="https://stackoverflow.com/questions/1993014/passing-kwargs-to-django-form">Passing **kwargs to Django Form</a>:</p> <p>forms.py:</p> <pre><code>class QuestionAnswerForm(forms.Form): def __init__(self, *args, **kwargs): q_id = kwargs.pop('q_id') super(QuestionAnswerForm, self).__init__(*args, **kwargs) if q_id: self.fields['choice'].choices = get_choices(Question.objects.get(pk=q_id)) choice = forms.ChoiceField() </code></pre> <p>views.py:</p> <pre><code>def question_detail_view(request, q_id): form = QuestionAnswerForm(request.POST or None, q_id=q_id) </code></pre>
python|django|django-forms
1
1,903,081
39,252,779
Get Access Token for Google Analytics Embed API server side authorization
<p>I am trying to set up server side authorization for the Google Analytics Embed API. When I run this on the command line:</p> <pre><code>sudo pip install --upgrade google-api-python-client </code></pre> <p>I get this message:</p> <pre><code>The directory '/Users/XXXX/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/Users/XXXX/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. Collecting google-api-python-client Downloading google_api_python_client-1.5.3-py2.py3-none-any.whl (50kB) 100% |████████████████████████████████| 51kB 991kB/s Requirement already up-to-date: httplib2&lt;1,&gt;=0.8 in /Library/Python/2.7/site-packages (from google-api-python-client) Collecting six&lt;2,&gt;=1.6.1 (from google-api-python-client) Downloading six-1.10.0-py2.py3-none-any.whl Collecting uritemplate&lt;1,&gt;=0.6 (from google-api-python-client) Downloading uritemplate-0.6.tar.gz Collecting oauth2client&lt;4.0.0,&gt;=1.5.0 (from google-api-python-client) Downloading oauth2client-3.0.0.tar.gz (77kB) 100% |████████████████████████████████| 81kB 2.5MB/s Collecting simplejson&gt;=2.5.0 (from uritemplate&lt;1,&gt;=0.6-&gt;google-api-python-client) Downloading simplejson-3.8.2-cp27-cp27m-macosx_10_9_x86_64.whl (67kB) 100% |████████████████████████████████| 71kB 6.8MB/s Collecting pyasn1&gt;=0.1.7 (from oauth2client&lt;4.0.0,&gt;=1.5.0-&gt;google-api-python-client) Downloading pyasn1-0.1.9-py2.py3-none-any.whl Collecting pyasn1-modules&gt;=0.0.5 (from oauth2client&lt;4.0.0,&gt;=1.5.0-&gt;google-api-python-client) Downloading pyasn1_modules-0.0.8-py2.py3-none-any.whl Collecting rsa&gt;=3.1.4 (from oauth2client&lt;4.0.0,&gt;=1.5.0-&gt;google-api-python-client) Downloading rsa-3.4.2-py2.py3-none-any.whl (46kB) 100% |████████████████████████████████| 51kB 6.1MB/s Installing collected packages: six, simplejson, uritemplate, pyasn1, pyasn1-modules, rsa, oauth2client, google-api-python-client Found existing installation: six 1.4.1 DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project. Uninstalling six-1.4.1: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py", line 317, in run prefix=options.prefix_path, File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py", line 736, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 742, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py", line 267, in renames shutil.move(old, new) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move copy2(src, real_dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2 copystat(src, dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat os.chflags(dst, st.st_flags) OSError: [Errno 1] Operation not permitted: '/tmp/pip-yzJYPo-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' </code></pre> <p>I am logged in as the admin. I have double-checked the permissions on the directories and parent directories. I am not sure what I am doing wrong.</p>
<p>I think you might want to read this: <a href="https://github.com/pypa/pip/issues/3165" rel="nofollow">https://github.com/pypa/pip/issues/3165</a></p> <p>It says you can do:</p> <p>sudo pip install --ignore-installed six</p> <p>sudo pip install --ignore-installed --upgrade google-api-python-client</p> <p>Let me know if it helps,</p> <p>Eric Lafontaine</p>
python|command-line|google-analytics|sudo
1
1,903,082
39,159,092
Is there a direct way to ignore parts of a python datetime object?
<p>I'm trying to compare two datetime objects, but ignoring the year. For example, given</p> <pre><code>a = datetime.datetime(2015, 7, 4, 1, 1, 1) b = datetime.datetime(2016, 7, 4, 1, 1, 1) </code></pre> <p>I want a == b to return True by ignoring the year. To do a comparison like this, I imagine I could just create new datetime objects with the same year like:</p> <pre><code>c = datetime.datetime(2014, a.month, a.day, a.hour, a.minute, a.second) d = datetime.datetime(2014, b.month, b.day, b.hour, b.minute, b.second) </code></pre> <p>However, this doesn't seem very pythonic. Is there a more direct method to do a comparison like what I'm asking?</p> <p>I'm using python 3.4.</p>
<p>Compare tuples of the components you care about:</p> <pre><code>(a.month, a.day, a.hour, a.minute, a.second) == (b.month, b.day, b.hour, b.minute, b.second) </code></pre> <p>A less explicit method is to compare the corresponding elements in the time tuples:</p> <pre><code>a.timetuple()[1:6] == b.timetuple()[1:6] </code></pre>
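<p>Another direct option is to normalize both datetimes to the same year with <code>replace()</code> before comparing; a small sketch (any common year works, though a leap year such as 2000 also covers Feb 29 safely):</p> <pre><code>
import datetime

a = datetime.datetime(2015, 7, 4, 1, 1, 1)
b = datetime.datetime(2016, 7, 4, 1, 1, 1)

# replace() returns a copy with the year overridden; the originals are untouched
a.replace(year=2000) == b.replace(year=2000)  # True
</code></pre>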
python|datetime
9
1,903,083
39,021,173
Python subprocess pwd inconsistent when file structure includes alias
<p>When I run the following script</p> <pre><code>#!/usr/bin/env python import subprocess print(subprocess.check_output(["pwd"])) </code></pre> <p>the result is</p> <blockquote> <p>/scratch1/name/Dropbox (NAM)/docs/research/Y2/results/s8</p> </blockquote> <p>whilst from my Ubuntu terminal, the command</p> <pre><code>pwd </code></pre> <p>yields</p> <blockquote> <p>/quake/home/name/docs/research/Y2/results/s8</p> </blockquote> <p>which is an alias to the first path. Why are they inconsistent?</p>
<p>TL;DR - Use <a href="https://docs.python.org/2/library/os.html#os.getcwd" rel="nofollow"><code>os.getcwd()</code></a></p> <hr> <p>You could use <a href="https://docs.python.org/2/library/os.path.html#os.path.realpath" rel="nofollow"><code>os.path.realpath</code></a> to turn a path containing symlinks into the physical path, resolving any symlinks:</p> <pre><code>~/src/stackoverflow $ mkdir targetdir ~/src/stackoverflow $ ln -s targetdir symlink ~/src/stackoverflow $ cd symlink ~/src/stackoverflow/symlink $ ~/src/stackoverflow/symlink $ python &gt;&gt;&gt; import os &gt;&gt;&gt; import subprocess &gt;&gt;&gt; import shlex &gt;&gt;&gt; &gt;&gt;&gt; path = subprocess.check_output('pwd').strip() &gt;&gt;&gt; path '/Users/lukasgraf/src/stackoverflow/symlink' &gt;&gt;&gt; os.path.realpath(path) '/Users/lukasgraf/src/stackoverflow/targetdir' </code></pre> <hr> <p>There is also the <code>-P</code> option to the <code>pwd</code> command that enforces this.</p> <p>From the <code>pwd</code> man page (on OS X):</p> <blockquote> <p>The pwd utility writes the absolute pathname of the current working directory to the standard output.</p> <p>Some shells may provide a builtin pwd command which is similar or identical to this utility. Consult the builtin(1) manual page.</p> <pre><code> The options are as follows: -L Display the logical current working directory. -P Display the physical current working directory (all symbolic links resolved). If no options are specified, the -L option is assumed. </code></pre> </blockquote> <p>So this would work too:</p> <pre><code>&gt;&gt;&gt; subprocess.check_output(shlex.split('pwd -P')) '/Users/lukasgraf/src/stackoverflow/targetdir\n' &gt;&gt;&gt; </code></pre> <hr> <p>However, the best option is to use <a href="https://docs.python.org/2/library/os.html#os.getcwd" rel="nofollow"><code>os.getcwd()</code></a> from the Python standard library:</p> <pre><code>&gt;&gt;&gt; os.getcwd() '/Users/lukasgraf/src/stackoverflow/targetdir' </code></pre> <p>It's not explicitly documented, but it seems to already resolve symlinks for you. In any case, you will want to avoid shelling out (using <code>subprocess</code>) for something that the standard library already provides for you, like getting the current working directory.</p>
python|shell|subprocess|alias|pwd
1
1,903,084
47,607,896
Recognize full name in namecard
<p>I am having a problem with my project and I hope I can receive your help. I want to extract the full name from text that I obtained by running OCR on an image. How can I do that? Sorry, my English is not very good.</p>
<p>Remove all numbers, special characters, email addresses, etc. from the text, so that mostly name-like words remain. You can then try using <strong>nltk</strong> to find proper nouns (NNP).</p> <pre><code>import nltk nltk.pos_tag(["Tam","Nguyen"]) </code></pre> <p>The only issue is that you may get false positives. For instance, if Tam Nguyen is followed by <strong>Chief Technology Officer</strong>, then you would get NNPs for those words too. See if this helps narrow down your problem.</p>
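<p>A small sketch of that pipeline on a whole OCR string (the sample text is made up, and nltk's punkt tokenizer and perceptron tagger data need to be downloaded once):</p> <pre><code>
import re
import nltk

text = "Tam Nguyen\nCTO, Example Corp\ntam@example.com\n+84 123 456 789"

# drop email addresses, digits and punctuation before tagging
cleaned = re.sub(r'\S+@\S+|[^A-Za-z\s]', ' ', text)

tokens = nltk.word_tokenize(cleaned)
tagged = nltk.pos_tag(tokens)
names = [word for word, tag in tagged if tag == 'NNP']
print(names)  # candidate name words, e.g. ['Tam', 'Nguyen', ...]
</code></pre>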
python|ocr
0
1,903,085
37,474,397
How to edit QTreeWidgetItem when it is editable
<p><a href="https://i.stack.imgur.com/6nY9z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6nY9z.png" alt="enter image description here"></a></p> <p>When the item is double-clicked and the user enters a new item name I want this text value to be assigned to the <code>item._name</code> attribute which is printed <code>onClick</code>. How to achieve this? </p> <pre><code>from PyQt4 import QtCore, QtGui app = QtGui.QApplication([]) class Tree(QtGui.QTreeWidget): def __init__(self, *args, **kwargs): super(Tree, self).__init__() for i, item_name in enumerate(['Item_1','Item_2','Item_3','Item_4','Item_5']): rootItem = QtGui.QTreeWidgetItem() rootItem.setFlags(rootItem.flags() | QtCore.Qt.ItemIsEditable) rootItem._name = 'Root %s'%i rootItem.setText(0, rootItem._name) for number in range(3): childItem = QtGui.QTreeWidgetItem(rootItem) childItem.setFlags(rootItem.flags() | QtCore.Qt.ItemIsEditable) childItem._name = 'Child %s'%number childItem.setText(0, childItem._name) self.addTopLevelItem(rootItem) self.clicked.connect(self.onClick) self.show() def onClick(self, index): print self.currentItem()._name tree=Tree() app.exec_() </code></pre>
<p><a href="https://i.stack.imgur.com/oGbx8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oGbx8.png" alt="enter image description here"></a></p> <p>Subclass <code>QTreeWidgetItem</code>. Define <code>setData</code> method to assign the value to the attribute <code>_name</code>.</p> <pre><code>from PyQt4 import QtCore, QtGui app = QtGui.QApplication([]) class TreeWidgetItem(QtGui.QTreeWidgetItem): def __init__(self, parent=None): super(TreeWidgetItem, self).__init__(parent) def setData(self, column, role, value): super(TreeWidgetItem, self).setData(column, role, value) self._name = value.toString() class Tree(QtGui.QTreeWidget): def __init__(self, *args, **kwargs): super(Tree, self).__init__() for i, item_name in enumerate(['Item_1','Item_2','Item_3','Item_4','Item_5']): rootItem = TreeWidgetItem(self) rootItem.setFlags(rootItem.flags() | QtCore.Qt.ItemIsEditable) rootItem._name = 'Root %s'%i rootItem.setText(0, rootItem._name) for number in range(3): childItem = TreeWidgetItem(rootItem) childItem.setFlags(rootItem.flags() | QtCore.Qt.ItemIsEditable) childItem._name = 'Child %s'%number childItem.setText(0, childItem._name) self.addTopLevelItem(rootItem) self.clicked.connect(self.onClick) self.show() def onClick(self, index): print self.currentItem()._name </code></pre>
python|qt|pyqt|qtreewidget|qtreewidgetitem
0
1,903,086
37,343,314
django rest framework setting up a Foreign key dynamically in create
<p>I have a model:</p> <pre><code>class Foo(models.Model) field1 = CharField(max_length=24) capacity = models.IntegerField(default=10) def used(self): return self.bar_set.count() def is_available(self): return self.capacity - self.used() @staticmethod def get_or_create_foo(req_count=0): foos = list(Foo.objects.all()) for foo in foos: if foo.available() &gt;= req_count: return foo else: foo = Foo() foo.save() return foo class Bar(models.Model) field1 = models.CharField(max_length=24) field1 = models.CharField(max_length=24) foo = models.ForeignKey(Foo) </code></pre> <p>Now I have serializers like this:</p> <pre><code>class FooSerializer(models.ModelSerializer) class Meta: model = models.Foo class BarSerializer(models.ModelSerializer) count = models.IntegerField() field1 = models.CharField(max_length=24) foo = models.ForeignKey(Foo) class Meta: model = models.Bar def create(self, validated_data): instance = super(BarSerializer, self).create(validated_data) instance.foo = Foo.get_or_create_foo(validated_data['count']) instance.save() return instance </code></pre> <p>The problem is the line <code>instance = super(BarSerializer, self).create(validated_data)</code>; at this line I get an exception:</p> <pre><code>Traceback (most recent call last): File "/home/webmaster/prj/venv/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute return self.cursor.execute(sql, params) File "/home/webmaster/prj/venv/lib/python3.4/site-packages/django/db/backends/sqlite3/base.py", line 318, in execute return Database.Cursor.execute(self, query, params) sqlite3.IntegrityError: NOT NULL constraint failed: app_bar_foo_id </code></pre> <p>The data passed to BarSerializer is like this (notice no foo is sent here, since it needs to be populated dynamically):</p> <pre><code>{ "count": 5, "field1": "some text" } </code></pre> <p>I guess the call to super creates an instance and saves it without the FK (because it is not passed in the request) and fails there. What is the workaround to populate the field <code>foo</code> in Bar when it's instantiated using the <code>BarSerializer</code>?</p>
<p>Not sure about that <code>super(BarSerializer, self)</code> call in <code>create()</code>: it fails because <code>validated_data</code> still contains <code>count</code> (which is not a field on the <code>Bar</code> model) and no <code>foo</code> yet. I'd do it like this instead: pop <code>count</code> out, build the model instance yourself, and attach the foreign key before saving.</p> <pre><code>def create(self, validated_data): count = validated_data.pop('count') instance = Bar(**validated_data) instance.foo = Foo.get_or_create_foo(count) instance.save() return instance </code></pre>
python|django|django-rest-framework
1
1,903,087
37,485,174
python - locale in dateutil / parser
<p>I set</p> <pre><code>locale.setlocale(locale.LC_TIME, ('de', 'UTF-8')) </code></pre> <p>The string to parse is:</p> <pre><code>Montag, 11. April 2016 19:35:57 </code></pre> <p>I use:</p> <pre><code>note_date = parser.parse(result.group(2)) </code></pre> <p>but I get the following error:</p> <blockquote> <p>Traceback (most recent call last): File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1531, in globals = debugger.run(setup['file'], None, None, is_module) File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 938, in run pydev_imports.execfile(file, globals, locals) # execute the script File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/Users/adieball/Dropbox/Multiverse/Programming/python/repositories/kindle/kindle2en.py", line 250, in main(sys.argv[1:]) File "/Users/adieball/Dropbox/Multiverse/Programming/python/repositories/kindle/kindle2en.py", line 154, in main note_date = parser.parse(result.group(2)) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/dateutil/parser.py", line 1164, in parse return DEFAULTPARSER.parse(timestr, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/dateutil/parser.py", line 555, in parse raise ValueError("Unknown string format") ValueError: Unknown string format</p> </blockquote> <p>Debugging shows that the parser is not using the "correct" dateutil values (German); it's still using the English ones.</p> <p><a href="https://i.stack.imgur.com/cjPCx.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cjPCx.png" alt="enter image description here"></a></p> <p>I'm sure I'm missing something obvious here, but can't find it.</p> <p>Thanks.</p>
<p><code>dateutil.parser</code> doesn't use <code>locale</code>. You'll need to subclass <a href="https://dateutil.readthedocs.io/en/stable/parser.html#dateutil.parser.parserinfo" rel="noreferrer"><code>dateutil.parser.parserinfo</code></a> and construct a German equivalent:.</p> <pre><code>from dateutil import parser class GermanParserInfo(parser.parserinfo): WEEKDAYS = [("Mo.", "Montag"), ("Di.", "Dienstag"), ("Mi.", "Mittwoch"), ("Do.", "Donnerstag"), ("Fr.", "Freitag"), ("Sa.", "Samstag"), ("So.", "Sonntag")] s = 'Montag, 11. April 2016 19:35:57' note_date = parser.parse(s, parserinfo=GermanParserInfo()) </code></pre> <p>You'd need to extend this to also work for other values, such as month names.</p>
python|python-dateutil
5
1,903,088
34,386,096
Override Falcon's default error handler when no route matches
<p>When Falcon (the framework) cannot find a route for a specific request, a 404 is returned. How can I override this default handler? I want to extend the handler with a custom response.</p>
<p>The default handler when no resource matches is the <a href="https://github.com/falconry/falcon/blob/master/falcon/responders.py#L21" rel="noreferrer">path_not_found</a> responder:</p> <p>But as you can see in the <a href="https://github.com/falconry/falcon/blob/master/falcon/api.py#L482" rel="noreferrer">_get_responder</a> method of falcon API, it can't be override without some monkey patching.</p> <p>As far as I can see, there are two different ways to use a custom handler:</p> <ol> <li>Subclass the API class, and overwrite the _get_responder method so it calls your custom handler</li> <li>Use a default route that matches any route if none of the application ones are matched. You probably prefer to use a <a href="http://falcon.readthedocs.org/en/latest/api/api.html#falcon.API.add_sink" rel="noreferrer">sink</a> instead of a route, so you capture any HTTP method (GET, POST...) with the same function.</li> </ol> <p>I would recommend the second option, as it looks much neater.</p> <p>Your code would look like:</p> <pre><code>import falcon class HomeResource: def on_get(self, req, resp): resp.body = 'Hello world' def handle_404(req, resp): resp.status = falcon.HTTP_404 resp.body = 'Not found' application = falcon.API() application.add_route('/', HomeResource()) # any other route should be placed before the handle_404 one application.add_sink(handle_404, '') </code></pre>
python|falconframework
5
1,903,089
34,170,226
Dictionary Comprehension from dictionary
<p>I'm “testing” <strong>dictionary comprehensions</strong> by using one dictionary to generate another.</p> <p>So, I want to keep the keys of the first one and multiply the values by 2. And yes… I want to do it with a comprehension, to understand how they work.</p> <p>I want to reach: <strong>{4:2, 7:4, 8:6, 9:8}</strong></p> <p>I am trying this:</p> <pre><code>dic1 = {4:1, 7:2, 8:3, 9:4} dictComp= {key:value for key in dic1.keys() for value in dic1.values() * 2} print(dictComp) </code></pre> <p>ERROR: </p> <pre><code>Traceback (most recent call last): File "C:\Python\file.py", line 13, in &lt;module&gt; dictComp= {key:value for key in dic1.keys() for value in dic1.values() * 2} File "C:\Python\file.py", line 13, in &lt;dictcomp&gt; dictComp= {key:value for key in dic1.keys() for value in dic1.values() * 2} TypeError: unsupported operand type(s) for *: 'dict_values' and 'int' </code></pre> <p>Can anyone help me? Thanks a lot!</p>
<p>Use dictionary comprehension like this:</p> <pre><code>In [2]: dic1 = {4:1, 7:2, 8:3, 9:4} In [3]: new_dict = {k: v * 2 for k, v in dic1.iteritems()} # dic1.items() for Python 3 In [4]: new_dict Out[4]: {4: 2, 7: 4, 8: 6, 9: 8} </code></pre>
python|dictionary-comprehension
4
1,903,090
66,051,683
How to iterate over python dict from the chosen key to it again?
<p><strong>It is easier to explain with an example to make my question clearer:</strong></p> <p>For example:</p> <pre><code>example_dict = {1 : &quot;A&quot;, 2 : &quot;B&quot;, 3 : &quot;C&quot;, 4 : &quot;D&quot;, 5 : &quot;E&quot;} </code></pre> <p>Imagine that I want to start my iteration on key 3 in order to get the corresponding values until iterating back around to it again.</p> <pre><code># Chosen the key = 3 will return: [&quot;C&quot;,&quot;D&quot;,&quot;E&quot;,&quot;A&quot;,&quot;B&quot;] </code></pre> <p><strong>So what is the best way to iterate from a key back to itself?</strong></p> <p>Is the iteration supposed to reach the end of the dictionary and then go back to iterating from the beginning until it finds the key chosen initially?</p> <p>Another example:</p> <pre><code>example_dict = {23 : &quot;Hello&quot;, 3 : &quot;Bye&quot;, 11 : &quot;Shame&quot;, 45 : &quot;Nice&quot;, 2 : &quot;Pretty&quot;} # Chosen the key = 3 will return: [&quot;Bye&quot;,&quot;Shame&quot;,&quot;Nice&quot;,&quot;Pretty&quot;,&quot;Hello&quot;] </code></pre>
<p>An approach using <code>itertools</code>:</p> <p>You actually want to do it by <em>key</em>, so just find the &quot;index&quot; of the key first and use that (the index of the list created from the keys, to be precise). Use <code>cycle</code> and <code>islice</code> from itertools to create an iterator over the values, using the position of the key and the size of the dictionary:</p> <pre><code>&gt;&gt;&gt; idx = list(example_dict).index(3) &gt;&gt;&gt; list(islice(cycle(example_dict.values()), idx, idx + len(example_dict))) ['C', 'D', 'E', 'A', 'B'] </code></pre>
python|loops|dictionary
2
1,903,091
39,670,807
Pygame drawings only appear once I exit window
<p>I've created a rect, using Pygame display but it only appears in the Pygame window when I exit the window. </p> <p>Have I done something wrong with my game loop? I'm trying to set keydown events but it's not registering in the game loop. Maybe it's because the Pygame window only appears after I exit?</p>
<p>Got it. I had incorrect indentation in my while loop.</p> <p>However, when I run print(event) Python shows KEYDOWN but my rect won't move.</p> <p>Here is a bit of my code:</p> <pre><code>gameDisplay=pygame.display.set_mode((WIDTH, HEIGHT)) pygame.display.set_caption('SssSss') lead_x = 300 lead_y = 300 gameDisplay.fill(white) pygame.draw.rect(gameDisplay, black, [lead_x, lead_y, 10, 10]) pygame.display.update() gameExit=False while not gameExit: for event in pygame.event.get(): print (event) if (event.type == pygame.QUIT): gameExit=True if (event.type == pygame.KEYDOWN): if event.key == pygame.K_LEFT: lead_x -= 10 if event.key == pygame.K_RIGHT: lead_x += 10 pygame.display.update() </code></pre>
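<p>The rect won't move because it is drawn only once, before the loop. A sketch of the same loop with the drawing moved inside it, so each frame clears the window and redraws the rect at its new position:</p> <pre><code>
while not gameExit:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            gameExit = True
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_LEFT:
                lead_x -= 10
            if event.key == pygame.K_RIGHT:
                lead_x += 10

    gameDisplay.fill(white)  # wipe the previous frame
    pygame.draw.rect(gameDisplay, black, [lead_x, lead_y, 10, 10])
    pygame.display.update()
</code></pre>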
python|pygame|keydown
-1
1,903,092
16,285,163
Embed manifest with C extension DLL using distutils
<p>What is the preferred way to embed a manifest with a C Extension DLL when I generate it through distutil. Currently when I build a C Extension as part of the distutil process, it creates a manifest and the corresponding *.pyd but when I check the dependency using dependency walker, </p> <p><img src="https://i.stack.imgur.com/X1zmi.png" alt="enter image description here"></p> <p>But if I manually embed the manifest</p> <pre><code>mt -manifest jep.pyd.manifest -outputresource:jep.pyd;2 Microsoft (R) Manifest Tool version 6.2.9200.16384 Copyright (c) Microsoft Corporation 2012. All rights reserved. </code></pre> <p>The dependency gets satisfied</p> <p><img src="https://i.stack.imgur.com/iEDwC.png" alt="enter image description here"></p> <p>What is the suggested way to overcome the manual process to embed the manifest. Can this be done through distutil automatically?</p>
<p>Python extension DLLs are not expected to have the MSVCR manifest. You can take a look at the extension DLLs that ship with Python (e.g. _bz2.pyd, _ctypes.pyd, _lzma.pyd, _tkinter.pyd etc.) and you will see that they do not contain a manifest resource. </p>
python|windows|manifest|msvcrt
1
1,903,093
16,114,100
calling dot products and linear algebra operations in Cython?
<p>I'm trying to use dot products, matrix inversion and other basic linear algebra operations that are available in numpy from Cython. Functions like <code>numpy.linalg.inv</code> (inversion), <code>numpy.dot</code> (dot product), <code>X.t</code> (transpose of matrix/array). There's a large overhead to calling <code>numpy.*</code> from Cython functions and the rest of the function is written in Cython, so I'd like to avoid this.</p> <p>If I assume users have <code>numpy</code> installed, is there a way to do something like: </p> <pre><code>#include "numpy/npy_math.h" </code></pre> <p>as an <code>extern</code>, and call these functions? Or alternatively call BLAS directly (or whatever it is that numpy calls for these core operations)? </p> <p>To give an example, imagine you have a function in Cython that does many things and in the end needs to make a computation involving dot products and matrix inverses:</p> <pre><code>cdef myfunc(...): # ... do many things faster than Python could # ... # compute one value using dot products and inv # without using # import numpy as np # np.* val = gammaln(sum(v)) - sum(gammaln(v)) + dot((v - 1).T, log(x).T) </code></pre> <p>how can this be done? If there's a library that implements these in Cython already, I can also use that, but have not found anything. Even if those procedures are less optimized than BLAS directly, not having the overhead of calling <code>numpy</code> Python module from Cython will still make things overall faster.</p> <p>Example functions I'd like to call:</p> <ul> <li>dot product (<code>np.dot</code>)</li> <li>matrix inversion (<code>np.linalg.inv</code>)</li> <li>matrix multiplication </li> <li>taking transpose (equivalent of <code>x.T</code> in numpy)</li> <li>gammaln function (like <code>scipy.gammaln</code> equivalent, which should be available in C)</li> </ul> <p>I realize as it says on numpy mailing list (<a href="https://groups.google.com/forum/?fromgroups=#!topic/cython-users/XZjMVSIQnTE">https://groups.google.com/forum/?fromgroups=#!topic/cython-users/XZjMVSIQnTE</a>) that if you call these functions on large matrices, there is no point in doing it from Cython, since calling it from numpy will just result in the majority of the time spent in the optimized C code that numpy calls. However, in my case, I have <em>many calls to these linear algebra operations on small matrices</em> -- in that case, the overhead introduced by repeatedly going from Cython back to numpy and back to Cython will far outweigh the time spent actually computing the operation from BLAS. Therefore, I'd like to keep everything at the C/Cython level for these simple operations and not go through python.</p> <p>I'd prefer not to go through GSL, since that adds another dependency and since it's unclear if GSL is actively maintained. Since I'm assuming users of the code already have scipy/numpy installed, I can safely assume that they have all the associated C code that goes along with these libraries, so I just want to be able to tap into that code and call it from Cython.</p> <p><strong>edit</strong>: I found a library that wraps BLAS in Cython (<a href="https://github.com/tokyo/tokyo">https://github.com/tokyo/tokyo</a>) which is close but not what I'm looking for. I'd like to call the numpy/scipy C functions directly (I'm assuming the user has these installed.)</p>
<p>Calling BLAS bundled with Scipy is "fairly" straightforward, here's one example for calling DGEMM to compute matrix multiplication: <a href="https://gist.github.com/pv/5437087">https://gist.github.com/pv/5437087</a> Note that BLAS and LAPACK expect all arrays to be Fortran-contiguous (modulo the lda/b/c parameters), hence <code>order="F"</code> and <code>double[::1,:]</code> which are required for correct functioning.</p> <p>Computing inverses can be similarly done by applying the LAPACK function <code>dgesv</code> on the identity matrix. For the signature, see <a href="http://www.netlib.org/lapack/double/dgesv.f">here</a>. All this requires dropping down to rather low-level coding, you need to allocate temporary work arrays yourself etc etc. --- however these can be encapsulated into your own convenience functions, or just reuse the code from <code>tokyo</code> by replacing the <code>lib_*</code> functions with function pointers obtained from Scipy in the above way.</p> <p>If you use Cython's memoryview syntax (<code>double[::1,:]</code>) you transpose is the same <code>x.T</code> as usual. Alternatively, you can compute the transpose by writing a function of your own that swaps elements of the array across the diagonal. Numpy doesn't actually contain this operation, <code>x.T</code> only changes the strides of the array and doesn't move the data around.</p> <p>It would probably be possible to rewrite the <code>tokyo</code> module to use the BLAS/LAPACK exported by Scipy and bundle it in <code>scipy.linalg</code>, so that you could just do <code>from scipy.linalg.blas cimport dgemm</code>. <a href="https://github.com/scipy/scipy/blob/master/HACKING.rst.txt">Pull requests</a> are accepted if someone wants to get down to it.</p> <hr> <p>As you can see, it all boils down to passing function pointers around. As alluded to above, Cython does in fact provide its own protocol for exchanging function pointers. For an example, consider <code>from scipy.spatial import qhull; print(qhull.__pyx_capi__)</code> --- those functions could be accessed via <code>from scipy.spatial.qhull cimport XXXX</code> in Cython (they're private though, so don't do that).</p> <p>However, at the present, <code>scipy.special</code> does not offer this C-API. It would however in fact be quite simple to provide it, given that the interface module in scipy.special is written in Cython.</p> <p>I don't think there is at the moment any sane and portable way to access the function doing the heavy lifting for <code>gamln</code>, (although you could snoop around the UFunc object, but that's not a sane solution :), so at the moment it's probably best to just grab the relevant part of source code from scipy.special and bundle it with your project, or use e.g. GSL.</p>
python|numpy|scipy|cython|blas
25
1,903,094
32,118,611
Get one line space at end of tweets in python
<p>consider my code in python, minemaggi.txt file contains tweets and i am trying to remove stop words but in output file tweets are not comming in separate line. Also i want to remove all links from text file, what to do for that.</p> <pre><code>from nltk.tokenize import word_tokenize from nltk.corpus import stopwords import codecs import nltk stopset = set(stopwords.words('english')) writeFile = codecs.open("outputfile.txt", "w", encoding='utf-8') with codecs.open("minemaggi.txt", "r", encoding='utf-8') as f: line = f.read() new = '\n' tokens = nltk.word_tokenize(line) tokens = [w for w in tokens if not w in stopset] for token in tokens: writeFile.write('{}{}'.format(' ', token)) writeFile.write('{}'.format(new)) </code></pre>
<p>You need to explicitly add a newline character to the string you write to the file, like this:</p> <pre><code>writeFile.write('{}{}\n'.format(' ', token)) </code></pre>
python|twitter
0
1,903,095
32,010,891
Extract data from "p" html element with python lxml
<p>I want to extract all the data in the <code>p</code> <code>html</code> elements, but to treat differently only to the "headers" such as: <code>&lt;strong&gt;header1&lt;/strong&gt;</code>.<br> Is there a way to do it with <code>python</code> <code>lxml</code>? With the following code:</p> <pre><code>parser = etree.HTMLParser(target=MyParser()) etree.HTML(htmlContent, parser) </code></pre> <p>Whilst <code>class MyParser</code> is:</p> <pre><code>class MyParser(object): def start(self, tag, attrib): pass def end(self, tag): pass def data(self, data): --&gt; Here, differentiate between "normal data" and &lt;strong&gt;data&lt;/strong&gt; def close(self): pass </code></pre> <p><code>html</code> Example:</p> <pre><code>&lt;div class="entry-content clearfix"&gt; &lt;p style="text-align: center;"&gt;&lt;span style="text-decoration: underline;"&gt;&lt;strong&gt;header1&lt;/strong&gt;&lt;/span&gt;:&lt;br /&gt; data data 1...&lt;/p&gt; &lt;p style="text-align: center;"&gt;&lt;span style="text-decoration: underline;"&gt;&lt;strong&gt;header2&lt;/strong&gt;&lt;/span&gt;:&lt;br /&gt; data data 2...&lt;/p&gt; &lt;p style="text-align: center;"&gt;&lt;span style="text-decoration: underline;"&gt;&lt;strong&gt;header3&lt;/strong&gt;&lt;/span&gt;:&lt;br /&gt; data data 3...&lt;br /&gt; data data 3...&lt;br /&gt; data data 3...&lt;/p&gt; &lt;/div&gt; </code></pre> <p>Example of what I want to do: Lets say I aggregate all of the data in a <code>string</code>, and I want to highlight only the headers.<br> Now I cannot differentiate, so my string would be like:</p> <pre><code>header1 data data data 1... header2 data data data 2... </code></pre> <p>I want to highlight it like, so it would be like this:</p> <pre><code>[[header1]] data data data 1... [[header2]] data data data 2... </code></pre>
<p>The short answer is that you need to implement your class <code>MyParser</code>. </p> <p>When the start tag for an element is seen, push it on a stack. When the end tag for the element is seen pop it off the stack. When data is received you will know what tag you are in: the top one on the stack. The state machine pattern is often applicable to such parsing needs.</p>
python|html|parsing|lxml
1
1,903,096
31,817,325
While list length is < 100
<p>I am struggling to get this while loop to work in python.</p> <pre><code>urlList = [] while True: for r in range(1, 5000): try: response = urllib.request.urlopen('www.somewebsite.com/v0/info/' + str(r) + '.json') html = response.read().decode('utf-8') data = json.loads(html) if 'url' in data: urlList.append(data['url']) if len(urlList) == 100: break except urllib.error.HTTPError as err: print (err) continue print (urlList) </code></pre> <p>I currently have the if statement to break out of the while loop if the list length equals 100. which throws an odd error of urllib.error.URLError: </p> <p>I also tried While len(urlList) != 100 which makes the process not run. Also While len(urlList) &lt; 100 just makes it run through the entire range function. </p>
<p>Your urls are invalid.</p> <pre><code>response = urllib.request.urlopen('www.somewebsite.com' + str(r) + '.json') </code></pre> <p>This becomes:</p> <pre><code>www.somewebsite.com1.json www.somewebsite.com2.json www.somewebsite.com3.json ... </code></pre> <p>These invalid URLs throw an <code>urllib.error.HTTPError</code> error.</p> <hr> <p>Now that you've corrected the url, the above is invalid. The issue you have is because the <code>break</code> is breaking out of your inner loop (the <code>for</code>) and dropping you into the <code>while</code> loop, which repeats everything again.</p> <p>Try changing the code to be more like this:</p> <pre><code>urlList = [] for r in range(1, 5000): response = ...... ... if 'url' in data: urlList.append(data['url']) if len(urlList) == 100: break </code></pre> <p>This removes the <code>while</code> loop. It keeps the range, which seems to be important to your URLs. When the list reaches a size of 100, it'll break out of this single loop.</p>
python
5
1,903,097
40,510,347
Can snakemake avoid ambiguity when two different rule paths can generate a given output?
<h2>Initial workflow</h2> <p>I have a snakefile that can generate some output from paired-end data.</p> <p>In this snakefile, I have a rule that "installs" the data given information stored in the config file (<code>get_raw_data</code>).</p> <p>Then I have a rule that uses that data to generate intermediate files on which the rest of the workflow depends (<code>run_tophat</code>).</p> <p>Here are the input and output of these rules (<code>OPJ</code> stands for <code>os.path.join</code>):</p> <pre><code>rule get_raw_data: output: OPJ(raw_data_dir, "{lib}_1.fastq.gz"), OPJ(raw_data_dir, "{lib}_2.fastq.gz"), </code></pre> <p>(More details on the implementation of this rule later)</p> <pre><code>rule run_tophat: input: transcriptome = OPJ(annot_dir, "dmel-all-r5.9.gff"), fq1 = OPJ(raw_data_dir, "{lib}_1.fastq.gz"), fq2 = OPJ(raw_data_dir, "{lib}_2.fastq.gz"), output: junctions = OPJ(output_dir, "{lib}", "junctions.bed"), bam = OPJ(output_dir, "{lib}", "accepted_hits.bam"), </code></pre> <p>And (simplifying) my main rule would be something like that:</p> <pre><code>rule all: input: expand(OPJ(output_dir, "{lib}", "junctions.bed"), lib=LIBS), </code></pre> <h2>Extending the workflow to single-end data</h2> <p>I now have to run my workflow on data that is single-end.</p> <p>I would like to avoid the final output having different name patterns depending on whether the data is single or paired end.</p> <p>I can easily make variants of the above-mentioned two rules that should work with single-end data (<code>get_raw_data_single_end</code> and <code>run_tophat_single_end</code>), whose input and output are as follows:</p> <pre><code>rule get_raw_data_single_end: output: OPJ(raw_data_dir, "{lib}.fastq.gz") rule run_tophat_single_end: input: transcriptome = OPJ(annot_dir, "dmel-all-r5.9.gff"), fq = OPJ(raw_data_dir, "{lib}.fastq.gz"), output: junctions = OPJ(output_dir, "{lib}", "junctions.bed"), bam = OPJ(output_dir, "{lib}", "accepted_hits.bam"), </code></pre> <h2>How to provide snakemake with enough information to chose the correct rule path?</h2> <p>The config file contains the information about whether the <code>lib</code> wildcard is associated with single-end or paired-end data in the following manner: The library names are keys in either a <code>lib2raw</code> or a <code>lib2raw_single_end</code> dictionary (both dictionaries are read from the config file).</p> <p>I'm not expecting the same library name to be a key in both dictionaries. Therefore, in a sense, it is not ambiguous whether I want the single-end or paired-end branch of the workflow to be executed.</p> <p>A function <code>lib2data</code> (that uses these dictionaries) is used by both <code>get_raw_data</code> and <code>get_raw_data_single_end</code> to determine what shell command to run to "install" the data.</p> <p>Here is a simplified version of this function (the actual one contains an extra branch to generate a command for data from a SRR identifier):</p> <pre><code>def lib2data(wildcards): lib = wildcards.lib if lib in lib2raw: raw = lib2raw[lib] link_1 = "ln -s %s %s_1.fastq.gz" % (raw.format(mate="1"), lib) link_2 = "ln -s %s %s_2.fastq.gz" % (raw.format(mate="2"), lib) return "%s\n%s\n" % (link_1, link_2) elif lib in lib2raw_single_end: raw = lib2raw_single_end[lib] return "ln -s %s %s.fastq.gz\n" % (raw, lib) else: raise ValueError("Procedure to get raw data for %s unknown." 
% lib) </code></pre> <p>Apart from their output, the two <code>get_raw_data*</code> rules are the same and work the following way:</p> <pre><code>params: shell_command = lib2data, shell: """ ( cd {raw_data_dir} {params.shell_command} ) """ </code></pre> <p>Is snakemake able to determine the correct rule path given information that is not coded in rules input and output, but only in config files and functions?</p> <p>It seems that it is not the case. Indeed, I'm trying to test my new snakefile (with the added <code>*_single_end</code> rules), <del>but a <code>KeyError</code> occurs during the execution of the <code>get_raw_data</code> rule, whereas the library for which the rule is being executed is associated with single-end data</del>.</p> <p>How to achieve the desired behaviour (a two-branch workflow able to use the information in the configuration to chose the correct branch)?</p> <h3>Edit: The <code>KeyError</code> was due to an error in <code>lib2data</code></h3> <p>After using the correct dictionary to get the data associated with the library name, I end up having the following error:</p> <pre><code>AmbiguousRuleException: Rules run_tophat and run_tophat_single_end are ambiguous for the file tophat_junction_discovery_revision_supplement/HWT3/junctions.bed. Expected input files: run_tophat: ./HWT3_1.fastq.gz ./HWT3_2.fastq.gz Annotations/dmel-all-r5.9.gff run_tophat_single_end: ./HWT3.fastq.gz Annotations/dmel-all-r5.9.gff </code></pre> <h3>Edit 2: Adding input to the <code>get_raw_data*</code> rules</h3> <p>After reading <a href="https://groups.google.com/d/msg/Snakemake/jVWApJ7gZA8/Euz3aV7THOsJ" rel="nofollow noreferrer">this post on the snakemake mailing list</a>, I tried to add some input to my rules to avoid ambiguity.</p> <pre><code>def lib2data_input(wildcards): lib = wildcards.lib if lib in lib2raw: raw = lib2raw[lib] return [raw.format(mate="1"), raw.format(mate="2")] elif lib in lib2raw_single_end: raw = lib2raw_single_end[lib] return [raw] else: raise ValueError("Procedure to get raw data for %s unknown." % lib) rule get_raw_data: input: lib2data_input # [same output, params and shell as before] # [same modification for the single-end case] </code></pre> <p>This results in a <del><code>MissingInputException</code>. Strangely, the reportedly missing file does exist. Is the trick supposed to work?</del> (Can't reproduce this, now this results in:)</p> <pre><code>AmbiguousRuleException: Rules run_tophat_single_end and run_tophat are ambiguous for the file tophat_junction_discovery_revision_supplement/HTW2/junctions.bed. Expected input files: run_tophat_single_end: ./HTW2.fastq.gz Annotations/dmel-all-r5.9.gff run_tophat: ./HTW2_1.fastq.gz ./HTW2_2.fastq.gz Annotations/dmel-all-r5.9.gff </code></pre> <p>My way of specifying an input to the "data installation" rules is apparently not enough to guide snakemake to the correct rule.</p>
<p>As <a href="https://stackoverflow.com/a/40511148/1878788">suggested</a> by user1829905, I had tried to make the <code>get_raw_data*</code> rules one, but failed due to the output of this rule being variable.</p> <p>However, I can fuse the <code>run_tophat*</code> rules into one: They have the same output.</p> <pre><code>rule run_tophat: input: transcriptome = OPJ(annot_dir, "dmel-all-r5.9.gff"), fq = lib2fq, output: junctions = OPJ(output_dir, "{lib}", "junctions.bed"), bam = OPJ(output_dir, "{lib}", "accepted_hits.bam"), </code></pre> <p>I tried the following function to generate this fused rule's input:</p> <pre><code>def lib2fq(wildcards): lib = wildcards.lib if lib in lib2sr: return [OPJ(raw_data_dir, "{lib}_1.fastq.gz"), OPJ(raw_data_dir, "{lib}_2.fastq.gz")] elif lib in lib2raw: return [OPJ(raw_data_dir, "{lib}_1.fastq.gz"), OPJ(raw_data_dir, "{lib}_2.fastq.gz")] elif lib in lib2raw_single_end: return [OPJ(raw_data_dir, "{lib}.fastq.gz")] else: raise ValueError("Procedure to get raw data for %s unknown." % lib) </code></pre> <p>But this attempt failed with an <code>InputFunctionException</code>:</p> <pre><code>ValueError: Procedure to get raw data for {lib} unknown. </code></pre> <p>However, making the second rule's input defined explicitly in terms of the first rule's output solves the problem.</p> <pre><code>def lib2fq(wildcards): lib = wildcards.lib if lib in lib2raw_single_end: return rules.get_raw_data_single_end.output else: return rules.get_raw_data.output </code></pre> <p>I don't fully understand why this difference.</p>
python|bioinformatics|snakemake
0
1,903,098
26,126,121
Python logging threadName and multiprocessing.Process
<p>I have a program that uses multiprocessing.Process object to spin off chunks of the program. I'm passing it a configured logger with the following formatter</p> <pre><code>formatter = logging.Formatter( '[%(created)s] [%(threadName)s] %(message)s') </code></pre> <p>I create the process similar to this</p> <pre><code>process = multiprocessing.Process( name='abc', target=target_function, args=(log) ) </code></pre> <p>This logs messages in both the main process and the children as the following</p> <pre><code>[1412095772.77] [MainThread] Hello World from main process [1412095772.77] [MainThread] Hello World from child process </code></pre> <p>My understanding is that threadName should be using 'abc' from above and not MainThread again.</p> <p>Does anyone know why it appears as if it's not?</p>
<p>The <code>Formatter</code> using the name of the <em>thread</em> it's running in. In both the parent and child process, the active thread is <code>MainThread</code>, because each process is running a single thread. It sounds like you really want the <em>process</em> name to be printed, not the thread name:</p> <pre><code>formatter = logging.Formatter( '[%(created)s] [%(processName)s] %(message)s') </code></pre>
python|logging|multiprocessing
1
1,903,099
26,231,420
How to append one csv file to another with python
<p>I have two .csv files that I need to either join into a new file or append one to the other:</p> <p>filea:</p> <pre><code>jan,feb,mar 80,50,52 74,73,56 </code></pre> <p>fileb:</p> <pre><code>apr,may,jun 64,75,64 75,63,63 </code></pre> <p>What I need is:</p> <pre><code>jan,feb,mar,apr,may,jun 80,50,52,64,75,64 74,73,56,75,63,63 </code></pre> <p>What I'm getting:</p> <pre><code>jan,feb,mar 80,50,52 74,73,56 apr,may,jun 64,75,64 75,63,63 </code></pre> <p>I'm using the simplest code I can find. A bit too simple I guess:</p> <pre><code>sourceFile = open('fileb.csv', 'r') data = sourceFile.read() with open('filea.csv', 'a') as destFile: destFile.write(data </code></pre> <p>I'd be very grateful if anyone could tell me what I'm doing wrong and how to get them to append 'horizontally' instead of 'vertically'.</p>
<pre><code>from itertools import izip_longest with open("filea.csv") as source1,open("fileb.csv")as source2,open("filec.csv","a") as dest2: zipped = izip_longest(source1,source2) # use izip_longest which will add None as a fillvalue where we have uneven length files for line in zipped: if line[1]: # if we have two lines to join dest2.write("{},{}\n".format(line[0][:-1],line[1][:-1])) else: # else we are into the longest file, just treat line as a single item tuple dest2.write("{}".format(line[0])) </code></pre>
python|csv
1