Unnamed: 0 | id | title | question | answer | tags | score
---|---|---|---|---|---|---
1,903,800 | 62,932,278 |
Looping through multiple Excel file to modify and rewrite original using pandas
|
<p>I'm new to this, so please bear with me.
I have an .xls file with 49 rows and 5 columns, viz. a, b, c, d, e. I want to calculate the square root of (b^2+c^2+d^2) and put it into a new column f in the same .xls file.
Now imagine I have 49 of these files.
I'm trying to write code using pandas that automatically parses every file in the folder and adds a column to the original file with the above formula.</p>
<p>My code is :</p>
<pre><code>import glob
import pandas as pd
import numpy as np

#size = len(glob.glob('test/*.xls'))
file = glob.glob('test/*.xls')
name = 12  # start numbering output files at 12, incremented per file
for f in file:
    print(f)
    df = pd.read_excel(f, header=None)
    df.columns = ['a', 'b', 'c', 'd', 'e', 'f']
    df['Result'] = ((df['b']**2) + (df['c']**2) + (df['d']**2))**(1/2)
    df.to_excel(r'test/Nodal pressure at 8 us_at_Y_' + str(name) + '.5.xls', index=False)
    name = name + 1
</code></pre>
<p>I don't know if it is possible or not, but any help would be useful. Also, I'm new to coding, so I may be missing some basics.</p>
|
<p>Try the following, where you create a new column <code>Filename</code> when reading in the data, group by that filename when writing to Excel, and drop the column just before writing:</p>
<pre><code>import glob
import os

import pandas as pd

file = glob.glob('test/*.xls')
df = pd.concat([pd.read_excel(f, header=None).assign(Filename=os.path.basename(f)) for f in file])
df.columns = ['a', 'b', 'c', 'd', 'e', 'f', 'Filename']
df['Result'] = ((df['b']**2) + (df['c']**2) + (df['d']**2))**(1/2)

name = 12
for _, x in df.groupby('Filename'):  # groupby yields (key, group) pairs
    x.drop('Filename', axis=1).to_excel(r'test/Nodal pressure at 8 us_at_Y_' + str(name) + '.5.xls', index=False)
    name += 1
</code></pre>
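<p>If the goal is to modify each original file in place rather than write out new names, a minimal sketch (an assumption about the intent; note that writing .xls files may additionally require the <code>xlwt</code> engine, depending on your pandas version) would be:</p>
<pre><code>import glob

import pandas as pd

for f in glob.glob('test/*.xls'):
    df = pd.read_excel(f, header=None)
    df.columns = ['a', 'b', 'c', 'd', 'e', 'f']
    df['Result'] = (df['b']**2 + df['c']**2 + df['d']**2)**0.5
    # write the result back to the same path, overwriting the original file
    df.to_excel(f, index=False)
</code></pre>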
|
python|excel|pandas|anaconda
| 0 |
1,903,801 | 35,429,468 |
Running Caffe creates high load over ksoftirqd/0
|
<p>I'm running Caffe using Python on AWS.</p>
<p>The script uses the GPU, loads an existing model, and checks its output per image URL.</p>
<p>On the first few tries the script ran well. Then it got stuck, at a different phase each time.</p>
<p>Using 'top' I could see that <code>ksoftirqd/0</code> gets about 93% of the CPU when the process is stuck.</p>
<p>I don't think there is a bug in my script, because originally it ran well on the server. Rebooting the server sometimes helps, but later we get the same problem.</p>
<p>Killing all Python processes on the server doesn't help; only rebooting does.</p>
<p>Any ideas what I can do here?</p>
|
<p>It seems like you are experiencing a <a href="https://askubuntu.com/a/7919">very high network load</a>.<br>
What exactly are you trying to download from URLs?<br>
Are there any other processes running at the same time on the machine?</p>
<p>It is difficult to diagnose your problem without the specifics of your script.</p>
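<p>As a first diagnostic step (a suggestion, assuming a Linux instance; <code>mpstat</code> comes from the <code>sysstat</code> package), you can check whether the softirq load is network-related:</p>
<pre><code># per-CPU soft-interrupt time shows up in the %soft column
mpstat -P ALL 1
# shows which softirq types (e.g. NET_RX) are accumulating
cat /proc/softirqs
</code></pre>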
|
python|linux|amazon-web-services|caffe
| 0 |
1,903,802 | 58,680,235 |
How to calculate the mean of multiple Python Pandas datetime64[ns] values per row of the dataframe?
|
<pre><code>MC_schedule_df:
Act_Arr_Run-0 Act_Arr_Run-1 Act_Arr_Run-2 Act_Arr_Run-3
0 2005-08-05 05:15:08 2005-08-05 05:12:00 2005-08-05 05:16:50 2005-08-05 05:09:13
1 2005-08-05 06:18:30 2005-08-05 06:14:50 2005-08-05 06:14:29 2005-08-05 06:07:31
2 2005-08-05 06:22:17 2005-08-05 06:18:06 2005-08-05 06:26:25 2005-08-05 06:22:49
3 2005-08-05 08:52:56 2005-08-05 08:58:51 2005-08-05 09:05:27 2005-08-05 08:58:43
4 2005-08-05 13:04:24 2005-08-05 12:58:11 2005-08-05 13:05:41 2005-08-05 13:02:33
5 2005-08-05 13:22:08 2005-08-05 13:14:44 2005-08-05 13:09:08 2005-08-05 13:12:27
6 2005-08-05 14:26:38 2005-08-05 14:13:38 2005-08-05 14:17:31 2005-08-05 14:17:33
7 2005-08-05 18:08:41 2005-08-05 18:17:15 2005-08-05 18:14:21 2005-08-05 18:15:54
8 2005-08-05 19:46:15 2005-08-05 19:45:28 2005-08-05 19:46:20 2005-08-05 19:48:44
9 2005-08-05 23:13:53 2005-08-05 23:06:06 2005-08-05 23:06:25 2005-08-05 23:04:07
</code></pre>
<p>Hello,</p>
<p>I have the dataframe (MC_schedule_df) shown above, consisting of the following datatypes:</p>
<pre><code>In[1]: MC_schedule_df.dtypes
Out[1]:
Act_Arr_Run-0 datetime64[ns]
Act_Arr_Run-1 datetime64[ns]
Act_Arr_Run-2 datetime64[ns]
Act_Arr_Run-3 datetime64[ns]
dtype: object
</code></pre>
<p>The dataframe consists of rows of datetime values, and I want to calculate the mean per row. I have tried the following code:</p>
<pre><code>MC_schedule_df = MC_schedule_df.assign(Average=MC_schedule_df.mean(axis=1))
</code></pre>
<p>This results in a column filled with NaN values. I have tried to find out why this does not work and have read loads of documentation. My current guess is that pandas is not able to distill the appropriate information from the datetime values to calculate the mean.</p>
<p>How to calculate the mean of these multiple Python Pandas datetime64[ns] values? Any help is appreciated.</p>
<p>Edit: I tried the methods of <a href="https://stackoverflow.com/questions/27907902/datetime-objects-with-pandas-mean-function">Datetime objects with pandas mean function</a>. However, that method does not work here, as I want to calculate the mean per row and thus cannot easily operate on a single series.</p>
|
<p>You can use what shown in <a href="https://stackoverflow.com/a/47293691/10426037">this answer</a>. As pointed out in the link, you cannot calculate the mean of a bunch of dates, the operation is not supported. But you can calculate the average of a bunch of timedeltas.<br>
Use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">pandas apply</a> function to generalize it and apply it to a DataFrame instead of a Series.</p>
<pre><code>mean_values = MC_schedule_df.apply(lambda dt : (dt - dt.min()).mean() + dt.min(), axis=1)
</code></pre>
<p>Using your sample dataframe, <code>mean_values</code> is:</p>
<pre><code>0 2005-08-05 05:13:17.750
1 2005-08-05 06:13:50.000
2 2005-08-05 06:22:24.250
3 2005-08-05 08:58:59.250
4 2005-08-05 13:02:42.250
5 2005-08-05 13:14:36.750
6 2005-08-05 14:18:50.000
7 2005-08-05 18:14:02.750
8 2005-08-05 19:46:41.750
9 2005-08-05 23:07:37.750
dtype: datetime64[ns]
</code></pre>
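<p>As an alternative sketch (not from the linked answer; it relies on datetime64[ns] values being stored as nanoseconds since the epoch), you can average the integer view of the timestamps directly:</p>
<pre><code>import pandas as pd

# cast each datetime column to its int64 nanosecond representation,
# average per row, then convert the result back to datetimes
mean_values = pd.to_datetime(MC_schedule_df.astype('int64').mean(axis=1))
</code></pre>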
|
python|pandas|datetime
| 0 |
1,903,803 | 73,337,884 |
meep module in spyder5 modulenotfounderror
|
<p>I created an env for meep:</p>
<pre><code>$ conda create -n mp -c conda-forge pymeep pymeep-extras
$ conda activate mp
</code></pre>
<p>I can import the meep module in Python.</p>
<p>However, I want to run it in Spyder 5. But in the env <code>mp</code> I have no Spyder, and I am confused about how to use Spyder in the <code>mp</code> env.</p>
|
<p>I also had this problem and solved it. The solution is quite simple: install meep and Spyder at the same time with this command:</p>
<pre><code>conda create -n meep -c conda-forge pymeep pymeep-extras spyder
</code></pre>
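<p>Alternatively, if you want to keep your existing <code>mp</code> environment, you should be able to install Spyder into it afterwards (a standard conda operation, untested here):</p>
<pre><code>conda install -n mp -c conda-forge spyder
</code></pre>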
|
python|module|conda|spyder|environment
| 0 |
1,903,804 | 31,410,710 |
Having issues when install mitmproxy through pip
|
<p>I am having the following issue when installing mitmproxy through pip.
I have tried other fixes related to the egg_info error here on Stack Overflow:
<a href="https://stackoverflow.com/questions/17886647/cant-install-via-pip-because-of-egg-info-error">Can't install via pip because of egg_info error</a>
<a href="https://stackoverflow.com/questions/28914202/pip-install-matplotlib-fails-cannot-build-package-freetype-python-setup-py-e">pip install matplotlib fails: 'cannot build package freetype; "python setup.py egg_info" failed with error code 1'</a></p>
<pre><code>104:bin user129856$ sudo pip install mitmproxy
The directory '/Users/alokchoudhary/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/alokchoudhary/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting mitmproxy
/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Downloading mitmproxy-0.12.1.tar.gz (6.5MB)
100% |████████████████████████████████| 6.5MB 18kB/s
Collecting pyperclip>=1.5.8 (from mitmproxy)
Downloading pyperclip-1.5.11.zip
Collecting pyasn1>0.1.2 (from mitmproxy)
Downloading pyasn1-0.1.8.tar.gz (75kB)
100% |████████████████████████████████| 77kB 827kB/s
Collecting tornado>=4.0.2 (from mitmproxy)
Downloading tornado-4.2.tar.gz (433kB)
100% |████████████████████████████████| 434kB 260kB/s
Collecting lxml>=3.3.6 (from mitmproxy)
Downloading lxml-3.4.4.tar.gz (3.5MB)
100% |████████████████████████████████| 3.5MB 32kB/s
Collecting netlib<0.13,>=0.12 (from mitmproxy)
Downloading netlib-0.12.1.tar.gz (64kB)
100% |████████████████████████████████| 65kB 729kB/s
Complete output from command python setup.py egg_info:
warning: no files found matching 'OpenSSL/RATIONALE'
warning: no previously-included files found matching 'leakcheck'
warning: no previously-included files matching '*.py' found under directory 'leakcheck'
warning: no previously-included files matching '*.pem' found under directory 'leakcheck'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
no previously-included directories found matching 'doc/_build'
zip_safe flag not set; analyzing archive contents...
Installed /private/tmp/pip-build-wOHXdq/netlib/.eggs/pyOpenSSL-0.15.1-py2.7.egg
Searching for cffi
Reading https://pypi.python.org/simple/cffi/
Best match: cffi 1.1.2
Downloading https://pypi.python.org/packages/source/c/cffi/cffi-1.1.2.tar.gz#md5=ca6e6c45b45caa87aee9adc7c796eaea
Processing cffi-1.1.2.tar.gz
Writing /tmp/easy_install-_e2qwn/cffi-1.1.2/setup.cfg
Running cffi-1.1.2/setup.py -q bdist_egg --dist-dir /tmp/easy_install-_e2qwn/cffi-1.1.2/egg-dist-tmp-382ExN
c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found
#include <ffi.h>
^
1 error generated.
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "/private/tmp/pip-build-wOHXdq/netlib/setup.py", line 87, in <module>
"install": CFFIInstall,
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "build/bdist.macosx-10.10-intel/egg/setuptools/dist.py", line 268, in __init__
File "build/bdist.macosx-10.10-intel/egg/setuptools/dist.py", line 313, in fetch_build_eggs
File "build/bdist.macosx-10.10-intel/egg/pkg_resources/__init__.py", line 836, in resolve
File "build/bdist.macosx-10.10-intel/egg/pkg_resources/__init__.py", line 1081, in best_match
File "build/bdist.macosx-10.10-intel/egg/pkg_resources/__init__.py", line 1093, in obtain
File "build/bdist.macosx-10.10-intel/egg/setuptools/dist.py", line 380, in fetch_build_egg
File "build/bdist.macosx-10.10-intel/egg/setuptools/command/easy_install.py", line 629, in easy_install
File "build/bdist.macosx-10.10-intel/egg/setuptools/command/easy_install.py", line 659, in install_item
File "build/bdist.macosx-10.10-intel/egg/setuptools/command/easy_install.py", line 842, in install_eggs
File "build/bdist.macosx-10.10-intel/egg/setuptools/command/easy_install.py", line 1070, in build_and_install
File "build/bdist.macosx-10.10-intel/egg/setuptools/command/easy_install.py", line 1058, in run_setup
distutils.errors.DistutilsError: Setup script exited with error: command 'cc' failed with exit status 1
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/tmp/pip-build-wOHXdq/netlib
</code></pre>
<p><strong>Updated after first response for libffi:</strong></p>
<p>After installing libffi, it started breaking on lxml. I found lxml on pip,
and it breaks again, now looking for libxml :(</p>
<pre><code>104:~ user2368563$ brew install libxml
Error: No available formula for libxml
Searching formulae...
libxml++ libxml2 libxmlsec1
Searching taps...
homebrew/versions/libxml278
104:~ user2368563$ brew install libxml2
Warning: libxml2-2.9.2 already installed
104:bin user2368563$ sudo pip install lxml
The directory '/Users/alokchoudhary/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/alokchoudhary/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting lxml
/Library/Python/2.7/site-packages/pip-7.1.0-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Downloading lxml-3.4.4.tar.gz (3.5MB)
100% |████████████████████████████████| 3.5MB 116kB/s
Installing collected packages: lxml
Running setup.py install for lxml
Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-bDtXaT/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-gmvCN9-record/install-record.txt --single-version-externally-managed --compile:
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'
warnings.warn(msg)
Building lxml version 3.4.4.
Building without Cython.
Using build configuration of libxslt 1.1.28
running install
running build
running build_py
creating build
creating build/lib.macosx-10.10-intel-2.7
creating build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/__init__.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/_elementpath.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/builder.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/cssselect.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/doctestcompare.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/ElementInclude.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/pyclasslookup.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/sax.py -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/usedoctest.py -> build/lib.macosx-10.10-intel-2.7/lxml
creating build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/__init__.py -> build/lib.macosx-10.10-intel-2.7/lxml/includes
creating build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/__init__.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/_diffcommand.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/_html5builder.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/_setmixin.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/builder.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/clean.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/defs.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/diff.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/ElementSoup.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/formfill.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/html5parser.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/soupparser.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
copying src/lxml/html/usedoctest.py -> build/lib.macosx-10.10-intel-2.7/lxml/html
creating build/lib.macosx-10.10-intel-2.7/lxml/isoschematron
copying src/lxml/isoschematron/__init__.py -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron
copying src/lxml/lxml.etree.h -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/lxml.etree_api.h -> build/lib.macosx-10.10-intel-2.7/lxml
copying src/lxml/includes/c14n.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/config.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/dtdvalid.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/etreepublic.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/htmlparser.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/relaxng.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/schematron.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/tree.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/uri.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xinclude.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xmlerror.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xmlparser.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xmlschema.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xpath.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/xslt.pxd -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/etree_defs.h -> build/lib.macosx-10.10-intel-2.7/lxml/includes
copying src/lxml/includes/lxml-version.h -> build/lib.macosx-10.10-intel-2.7/lxml/includes
creating build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources
creating build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/rng
copying src/lxml/isoschematron/resources/rng/iso-schematron.rng -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/rng
creating build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/RNG2Schtrn.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/XSD2Schtrn.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl
creating build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_abstract_expand.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_dsdl_include.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_message.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_skeleton_for_xslt1.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_svrl_for_xslt1.xsl -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/readme.txt -> build/lib.macosx-10.10-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
creating build/temp.macosx-10.10-intel-2.7
creating build/temp.macosx-10.10-intel-2.7/src
creating build/temp.macosx-10.10-intel-2.7/src/lxml
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -I/usr/include/libxml2 -I/private/tmp/pip-build-bDtXaT/lxml/src/lxml/includes -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.macosx-10.10-intel-2.7/src/lxml/lxml.etree.o -w -flat_namespace
In file included from src/lxml/lxml.etree.c:239:
/private/tmp/pip-build-bDtXaT/lxml/src/lxml/includes/etree_defs.h:14:10: fatal error: 'libxml/xmlversion.h' file not found
#include "libxml/xmlversion.h"
^
1 error generated.
error: command 'cc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-bDtXaT/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-gmvCN9-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-bDtXaT/lxml
</code></pre>
|
<p>If you read through your log carefully you might spot this line:</p>
<pre><code>c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found
</code></pre>
<p>The "fatal error" part is especially important. :)</p>
<p>This means that the <a href="https://sourceware.org/libffi/" rel="nofollow">ffi</a> headers couldn't be located by your compiler. I'm not sure how to do it since I'm not a Mac user but maybe homebrew could help you, or Google. To me it seems like you should install <a href="http://brew.sh" rel="nofollow">homebrew</a> and then just run:</p>
<pre><code>brew install libffi
</code></pre>
<p>Then try pip again.</p>
<p><strong>Edit</strong></p>
<p>The full list of dependencies is:</p>
<ul>
<li>python</li>
<li>libffi</li>
<li>libssl</li>
<li>libxml2</li>
<li>libxslt1</li>
</ul>
<p>So you'll need all those, and their headers, if you want to continue down this path.</p>
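<p>For example, with homebrew the headers can be installed in one go (a sketch; libssl comes from the <code>openssl</code> formula):</p>
<pre><code>brew install libffi openssl libxml2 libxslt
</code></pre>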
<p>An easier solution is to download pre-built binaries for your Mac, from <a href="http://mitmproxy.org/download/osx-mitmproxy-0.12.1.tar.gz" rel="nofollow">mitmproxy.org</a> (OSX Mountain Lion and later). I found this info in the <a href="https://mitmproxy.org/doc/install.html#docOSX" rel="nofollow">installation docs</a>.</p>
|
python|pip
| 0 |
1,903,805 | 15,622,821 |
Deduplicate and reorder dictionary
|
<p>Let's say we have a basic model. </p>
<pre><code>class Log(models.Model):
    key = models.BigIntegerField()
    calldate = models.DateTimeField()
    followupdate = models.DateTimeField()
</code></pre>
<p>There can be multiple followupdates set for the same key. What I would like to do is present the most recently scheduled followup (by calldate) in the list.</p>
<p>In my view, I have the following:</p>
<p><code># views.py</code></p>
<pre><code>def callbacks(request):
    """ get objects where a followupdate has been specified """
    q = Log.objects.filter(followupdate__isnull=False).order_by("-calldate")
    """ deduplicate key so only the most recently scheduled followupdate shows """
    newresults = []
    seen_key = []
    for result in q:
        if result.key not in seen_key:
            seen_key.append(result.key)
            newresults.append(result)
    results = newresults
    """ What I want to do is equivalent to results.order_by("followupdate") """
    """ But since it's a dictionary now and not a queryset, I can't """
    return render_to_response('callbacks.html', {"callbacks": results})
</code></pre>
<p>What I need help with is reordering <code>results</code> by the key <code>followupdate</code>. In each dictionary there is a key like this: <code>'followupdate': datetime.date(2013, 3, 25)</code>; this is what I need to reorder by.</p>
|
<p><code>.sort()</code> seems to work like this:</p>
<pre><code>l.sort(key=lambda item:item['followupdate'], reverse=True)
</code></pre>
<p>Test cases -</p>
<pre><code>>>> l = [{'followupdate': datetime.date(2013, 3, 25)}, {'followupdate': datetime.date(2013, 3, 24)}, {'followupdate': datetime.date(2013, 3, 29)}]
>>> l
[{'followupdate': datetime.date(2013, 3, 25)}, {'followupdate': datetime.date(2013, 3, 24)}, {'followupdate': datetime.date(2013, 3, 29)}]
>>> l.sort(key=lambda item:item['followupdate'], reverse=True)
>>> l
[{'followupdate': datetime.date(2013, 3, 29)}, {'followupdate': datetime.date(2013, 3, 25)}, {'followupdate': datetime.date(2013, 3, 24)}]
</code></pre>
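<p>Note that in the view above <code>results</code> is actually a list of <code>Log</code> model instances rather than dictionaries, so in that case the same sort keys off the attribute instead:</p>
<pre><code>results.sort(key=lambda log: log.followupdate, reverse=True)
</code></pre>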
|
python|django
| 2 |
1,903,806 | 15,983,272 |
Does Python have sync?
|
<p>The <a href="http://linux.die.net/man/2/sync" rel="noreferrer">sync man page</a> says:</p>
<blockquote>
<p>sync() causes all buffered modifications to file metadata and data to
be written to the underlying file systems.</p>
</blockquote>
<p>Does Python have a call to do this?</p>
<p>P.S. Not <a href="http://docs.python.org/2/library/os.html#os.fsync" rel="noreferrer">fsync</a>, I see that.</p>
|
<p>Python 3.3 has os.sync, see <a href="http://docs.python.org/3/library/os.html#os.sync" rel="nofollow noreferrer">the docs</a>. The <a href="http://hg.python.org/releasing/3.3.1/file/8e5812b35480/Modules/posixmodule.c#l3062" rel="nofollow noreferrer">source</a> confirms it is the same thing.</p>
<p>For Python 2 you can make an <a href="http://docs.python.org/3/library/subprocess.html" rel="nofollow noreferrer">external call</a> to the system:</p>
<pre><code>from subprocess import check_call
check_call(['sync'])
</code></pre>
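<p>So on Python 3.3+ (Unix only) this is simply:</p>
<pre><code>import os

os.sync()  # flush all filesystem buffers, like the sync(2) call
</code></pre>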
|
python|linux|sync
| 17 |
1,903,807 | 59,694,301 |
Conditional Formatting on duplicate values using pandas
|
<p>I have a dataFrame with two columns, A and B, and I want to highlight all the duplicate values using pandas.</p>
<p>For example, my dataFrame looks like this:</p>
<pre><code>**A B**
1 1
2 3
4 4
8 8
5 6
4 7
</code></pre>
<p>Then the output should be </p>
<pre><code>**A B**
1 1 <--- both values Highlighted
2 3
4 4 <--- both values Highlighted
8 8 <--- both values Highlighted
5 6
4 7 <--- value in column A highlighted
</code></pre>
<p>How do I do that?</p>
<p>Thanks in advance.</p>
|
<p>You can use this:</p>
<pre><code>import numpy as np
import pandas as pd

def color_dupes(x):
    c1 = 'background-color:red'
    c2 = ''
    cond = x.stack().duplicated(keep=False).unstack()
    df1 = pd.DataFrame(np.where(cond, c1, c2), columns=x.columns, index=x.index)
    return df1

df.style.apply(color_dupes, axis=None)
# if df has many columns: df.style.apply(color_dupes, axis=None, subset=['A', 'B'])
</code></pre>
<hr>
<p>Example working code:</p>
<p><a href="https://i.stack.imgur.com/IQJ9M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IQJ9M.png" alt="enter image description here"></a></p>
<p>Explanation:
First we <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a> the dataframe so as to bring all the columns into a series, then find <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.duplicated.html" rel="nofollow noreferrer"><code>duplicated</code></a> with <code>keep=False</code> to mark all duplicates as True:</p>
<pre><code>df.stack().duplicated(keep=False)
0 A True
B True
1 A False
B False
2 A True
B True
3 A True
B True
4 A False
B False
5 A True
B False
dtype: bool
</code></pre>
<p>After this we <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>unstack()</code></a> the dataframe which gives a boolean dataframe with the same dataframe structure:</p>
<pre><code>df.stack().duplicated(keep=False).unstack()
A B
0 True True
1 False False
2 True True
3 True True
4 False False
5 True False
</code></pre>
<p>Once we have this we assign the background color to values if True else no color using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a></p>
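<p>If you also want to keep the highlighting when saving to Excel, the styled object can be written out directly (a sketch; requires the <code>openpyxl</code> engine to be installed):</p>
<pre><code>df.style.apply(color_dupes, axis=None).to_excel('highlighted.xlsx', engine='openpyxl')
</code></pre>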
|
python|pandas|dataframe|duplicates|conditional-formatting
| 3 |
1,903,808 | 49,212,530 |
Determine function name from within an aliased function
|
<p>How can I determine whether a function was called using the function's name or by the name of an alias of that function?</p>
<p>I can inspect a function to get its name from within the body of a function by doing:</p>
<pre><code>import inspect

def foo():
    print(inspect.stack()[0][3])

foo() # prints 'foo'
</code></pre>
<p>source: <a href="https://stackoverflow.com/q/5067604/5992438">Determine function name from within that function (without using traceback)</a></p>
<p>However, if I alias the function and try the same thing I get the original function name (not the alias)</p>
<pre><code>bar = foo
bar() # prints 'foo'
</code></pre>
<p>I would like to be able to do the following:</p>
<pre><code>def foo():
    print(... some code goes here ...)

bar = foo
foo() # prints 'foo'
bar() # prints 'bar'
</code></pre>
|
<p>Based on the limited knowledge I have of the scope of your problem, this works:</p>
<pre><code>import inspect

def foo():
    print(inspect.stack()[1][4][0].strip())

foo()
bar = foo
bar()
</code></pre>
<p>Results:</p>
<pre><code>foo()
bar()
</code></pre>
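<p>A variant that extracts just the name used at the call site (an assumption-laden sketch: it only works when the caller's source is available, e.g. not in a plain REPL, and only for simple call statements like <code>bar()</code>):</p>
<pre><code>import inspect

def foo():
    frame = inspect.stack()[1]
    call_line = frame[4][0].strip() if frame[4] else ''
    print(call_line.split('(')[0])  # the alias (or name) the caller used

foo()      # prints 'foo'
bar = foo
bar()      # prints 'bar'
</code></pre>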
|
python|python-3.x|introspection
| 1 |
1,903,809 | 60,016,635 |
Migrating code to TF2 and not getting the ModuleNotFoundError: No module named 'tensorflow.contrib.framework'
|
<p>I have this code here from <a href="https://github.com/skokec/DAU-ConvNet" rel="nofollow noreferrer">https://github.com/skokec/DAU-ConvNet</a>, which is based on this paper <a href="https://arxiv.org/abs/1902.07474" rel="nofollow noreferrer">https://arxiv.org/abs/1902.07474</a> and replaces the standard grid-based filters in a convolutional block with an adaptive filter version. When I try to import the package I get this error: </p>
<blockquote>
<p>ModuleNotFoundError: No module named 'tensorflow.contrib.framework'.</p>
</blockquote>
<p>I know that "tensorflow.contrib" has been removed in version 2.0 and that I could revert to version <= 1.14 to make it work. BUT I wanted to see if someone can make this code work in the newer TF2 version, as this is a very interesting paper with very good results, and migrating this code to TF2 would encourage other people to try it out and experiment with this layer in different architectures/setups. Below is the source code:</p>
<pre><code>import os

import numpy as np
import tensorflow as tf

from tensorflow.python.layers import base
from tensorflow.python.layers import utils
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import nn_ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import nn
from tensorflow.python.ops import init_ops


class DAUConv2dTF(base.Layer):
    def __init__(self, filters,
                 dau_units,
                 max_kernel_size,
                 strides=1,
                 data_format='channels_first',
                 activation=None,
                 use_bias=True,
                 weight_initializer=init_ops.random_normal_initializer(stddev=0.1),
                 mu1_initializer=None,
                 mu2_initializer=None,
                 sigma_initializer=None,
                 bias_initializer=init_ops.zeros_initializer(),
                 weight_regularizer=None,
                 mu1_regularizer=None,
                 mu2_regularizer=None,
                 sigma_regularizer=None,
                 bias_regularizer=None,
                 activity_regularizer=None,
                 weight_constraint=None,
                 mu1_constraint=None,
                 mu2_constraint=None,
                 sigma_constraint=None,
                 bias_constraint=None,
                 trainable=True,
                 mu_learning_rate_factor=500,
                 dau_unit_border_bound=0.01,
                 dau_sigma_trainable=False,
                 name=None,
                 **kwargs):
        super(DAUConv2dTF, self).__init__(trainable=trainable, name=name,
                                          activity_regularizer=activity_regularizer,
                                          **kwargs)
        self.rank = 2
        self.filters = filters
        self.dau_units = utils.normalize_tuple(dau_units, self.rank, 'dau_components')
        self.max_kernel_size = max_kernel_size
        self.padding = np.floor(self.max_kernel_size/2.0)
        self.strides = strides
        self.data_format = utils.normalize_data_format(data_format)
        self.activation = activation
        self.use_bias = use_bias
        self.bias_initializer = bias_initializer
        self.bias_regularizer = bias_regularizer
        self.bias_constraint = bias_constraint
        self.weight_initializer = weight_initializer
        self.weight_regularizer = weight_regularizer
        self.weight_constraint = weight_constraint
        self.mu1_initializer = mu1_initializer
        self.mu1_regularizer = mu1_regularizer
        self.mu1_constraint = mu1_constraint
        self.mu2_initializer = mu2_initializer
        self.mu2_regularizer = mu2_regularizer
        self.mu2_constraint = mu2_constraint
        self.sigma_initializer = sigma_initializer
        self.sigma_regularizer = sigma_regularizer
        self.sigma_constraint = sigma_constraint

        if self.mu1_initializer is None:
            raise Exception("Must initialize MU1")

        if self.mu2_initializer is None:
            raise Exception("Must initialize MU2")

        if self.sigma_initializer is None:
            self.sigma_initializer = init_ops.constant_initializer(0.5)

        self.mu_learning_rate_factor = mu_learning_rate_factor
        self.input_spec = base.InputSpec(ndim=self.rank + 2)
        self.dau_unit_border_bound = dau_unit_border_bound
        self.num_dau_units_all = np.int32(np.prod(self.dau_units))

        self.dau_weights = None
        self.dau_mu1 = None
        self.dau_mu2 = None
        self.dau_sigma = None
        self.dau_sigma_trainable = dau_sigma_trainable

    def set_dau_variables_manually(self, w=None, mu1=None, mu2=None, sigma=None):
        """ Manually set w,mu1,mu2 and/or sigma variables with custom tensor. Call before build() or __call__().
        The shape must match the expected shape as returned by the get_dau_variable_shape(input_shape)
        otherwise the build() function will fail."""
        if w is not None:
            self.dau_weights = w
        if mu1 is not None:
            self.dau_mu1 = mu1
        if mu2 is not None:
            self.dau_mu2 = mu2
        if sigma is not None:
            self.dau_sigma = sigma

    def _get_input_channel_axis(self):
        if self.data_format == 'channels_first':
            channel_axis = 1
        else:
            channel_axis = -1
        return channel_axis

    def _get_input_channels(self, input_shape):
        channel_axis = self._get_input_channel_axis()
        if input_shape[channel_axis].value is None:
            raise ValueError('The channel dimension of the inputs '
                             'should be defined. Found `None`.')
        return input_shape[channel_axis].value

    def get_dau_variable_shape(self, input_shape):
        # get input
        num_input_channels = self._get_input_channels(input_shape)
        dau_params_shape_ = (num_input_channels, self.dau_units[0], self.dau_units[1], self.filters)
        dau_params_shape = (1, num_input_channels, self.num_dau_units_all, self.filters)
        return dau_params_shape

    def add_dau_weights_var(self, input_shape):
        dau_params_shape = self.get_dau_variable_shape(input_shape)
        return self.add_variable(name='weights',
                                 shape=dau_params_shape,
                                 initializer=self.weight_initializer,
                                 regularizer=self.weight_regularizer,
                                 constraint=self.weight_constraint,
                                 trainable=True,
                                 dtype=self.dtype)

    def add_dau_mu1_var(self, input_shape):
        dau_params_shape = self.get_dau_variable_shape(input_shape)
        mu1_var = self.add_variable(name='mu1',
                                    shape=dau_params_shape,
                                    initializer=self.mu1_initializer,
                                    regularizer=self.mu1_regularizer,
                                    constraint=self.mu1_constraint,
                                    trainable=True,
                                    dtype=self.dtype)
        # limit max offset based on self.dau_unit_border_bound and kernel size
        mu1_var = tf.minimum(tf.maximum(mu1_var,
                                        -(self.max_kernel_size - self.dau_unit_border_bound)),
                             self.max_kernel_size - self.dau_unit_border_bound)
        return mu1_var

    def add_dau_mu2_var(self, input_shape):
        dau_params_shape = self.get_dau_variable_shape(input_shape)
        mu2_var = self.add_variable(name='mu2',
                                    shape=dau_params_shape,
                                    initializer=self.mu2_initializer,
                                    regularizer=self.mu2_regularizer,
                                    constraint=self.mu2_constraint,
                                    trainable=True,
                                    dtype=self.dtype)
        # limit max offset based on self.dau_unit_border_bound and kernel size
        mu2_var = tf.minimum(tf.maximum(mu2_var,
                                        -(self.max_kernel_size - self.dau_unit_border_bound)),
                             self.max_kernel_size - self.dau_unit_border_bound)
        return mu2_var

    def add_dau_sigma_var(self, input_shape, trainable=False):
        dau_params_shape = self.get_dau_variable_shape(input_shape)
        # create single sigma variable
        sigma_var = self.add_variable(name='sigma',
                                      shape=dau_params_shape,
                                      initializer=self.sigma_initializer,
                                      regularizer=self.sigma_regularizer,
                                      constraint=self.sigma_constraint,
                                      trainable=self.dau_sigma_trainable,
                                      dtype=self.dtype)
        # but make variable shared across all channels as required for the efficient DAU implementation
        return sigma_var

    def add_bias_var(self):
        return self.add_variable(name='bias',
                                 shape=(self.filters,),
                                 initializer=self.bias_initializer,
                                 regularizer=self.bias_regularizer,
                                 constraint=self.bias_constraint,
                                 trainable=True,
                                 dtype=self.dtype)

    def build(self, input_shape):
        input_shape = tensor_shape.TensorShape(input_shape)
        dau_params_shape = self.get_dau_variable_shape(input_shape)

        if self.dau_weights is None:
            self.dau_weights = self.add_dau_weights_var(input_shape)
        elif np.any(self.dau_weights.shape != dau_params_shape):
            raise ValueError('Shape mismatch for variable `dau_weights`')

        if self.dau_mu1 is None:
            self.dau_mu1 = self.add_dau_mu1_var(input_shape)
        elif np.any(self.dau_mu1.shape != dau_params_shape):
            raise ValueError('Shape mismatch for variable `dau_mu1`')

        if self.dau_mu2 is None:
            self.dau_mu2 = self.add_dau_mu2_var(input_shape)
        elif np.any(self.dau_mu2.shape != dau_params_shape):
            raise ValueError('Shape mismatch for variable `dau_mu2`')

        if self.dau_sigma is None:
            self.dau_sigma = self.add_dau_sigma_var(input_shape, trainable=self.dau_sigma_trainable)
        elif np.any(self.dau_sigma.shape != dau_params_shape):
            raise ValueError('Shape mismatch for variable `dau_sigma`')

        if self.use_bias:
            self.bias = self.add_bias_var()
        else:
            self.bias = None

        input_channel_axis = self._get_input_channel_axis()
        num_input_channels = self._get_input_channels(input_shape)

        self.input_spec = base.InputSpec(ndim=self.rank + 2,
                                         axes={input_channel_axis: num_input_channels})

        kernel_shape = tf.TensorShape((self.max_kernel_size, self.max_kernel_size, num_input_channels, self.filters))

        self._convolution_op = nn_ops.Convolution(
            input_shape,
            filter_shape=kernel_shape,
            dilation_rate=(1,1),
            strides=(self.strides,self.strides),
            padding="SAME",
            data_format=utils.convert_data_format(self.data_format,
                                                  self.rank + 2))
        self.built = True

    def call(self, inputs):
        def get_kernel_fn(dau_w, dau_mu1, dau_mu2, dau_sigma, max_kernel_size, mu_learning_rate_factor=1):
            # add mu1/mu2 gradient multiplier
            if mu_learning_rate_factor != 1:
                dau_mu1 = mu_learning_rate_factor * dau_mu1 + (1 - mu_learning_rate_factor) * tf.stop_gradient(dau_mu1)
                dau_mu2 = mu_learning_rate_factor * dau_mu2 + (1 - mu_learning_rate_factor) * tf.stop_gradient(dau_mu2)

            [X,Y] = np.meshgrid(np.arange(max_kernel_size),np.arange(max_kernel_size))

            X = np.reshape(X,(max_kernel_size*max_kernel_size,1,1,1)) - int(max_kernel_size/2)
            Y = np.reshape(Y,(max_kernel_size*max_kernel_size,1,1,1)) - int(max_kernel_size/2)

            X = X.astype(np.float32)
            Y = Y.astype(np.float32)

            # Gaussian kernel
            X = tf.convert_to_tensor(X,name='X',dtype=tf.float32)
            Y = tf.convert_to_tensor(Y,name='Y',dtype=tf.float32)

            gauss_kernel = tf.exp(-1* (tf.pow(X - dau_mu1,2.0) + tf.pow(Y - dau_mu2,2.0)) / (2.0*tf.pow(dau_sigma,2.0)),name='gauss_kernel')
            gauss_kernel_sum = tf.reduce_sum(gauss_kernel,axis=0, keep_dims=True,name='guass_kernel_sum')
            gauss_kernel_norm = tf.divide(gauss_kernel, gauss_kernel_sum ,name='gauss_kernel_norm')

            # normalize to sum of 1 and add weight
            gauss_kernel_norm = tf.multiply(dau_w, gauss_kernel_norm,name='gauss_kernel_weight')

            # sum over Gaussian units
            gauss_kernel_norm = tf.reduce_sum(gauss_kernel_norm, axis=2, keep_dims=True,name='gauss_kernel_sum_units')

            # convert to [Kw,Kh,S,F] shape
            gauss_kernel_norm = tf.reshape(gauss_kernel_norm, (max_kernel_size, max_kernel_size, gauss_kernel_norm.shape[1], gauss_kernel_norm.shape[3]),name='gauss_kernel_reshape')

            return gauss_kernel_norm

        try:
            # try with XLA if exists
            from tensorflow.contrib.compiler import xla

            gauss_kernel_norm = xla.compile(computation=get_kernel_fn, inputs=(self.dau_weights, self.dau_mu1, self.dau_mu2, self.dau_sigma, self.max_kernel_size, self.mu_learning_rate_factor))[0]
        except:
            # otherwise revert to direct method call
            gauss_kernel_norm = get_kernel_fn(self.dau_weights, self.dau_mu1, self.dau_mu2, self.dau_sigma, self.max_kernel_size, self.mu_learning_rate_factor)

        outputs = self._convolution_op(inputs, gauss_kernel_norm)

        if self.use_bias:
            if self.data_format == 'channels_first':
                if self.rank == 1:
                    # nn.bias_add does not accept a 1D input tensor.
                    bias = array_ops.reshape(self.bias, (1, self.filters, 1))
                    outputs += bias
                if self.rank == 2:
                    outputs = nn.bias_add(outputs, self.bias, data_format='NCHW')
                if self.rank == 3:
                    # As of Mar 2017, direct addition is significantly slower than
                    # bias_add when computing gradients. To use bias_add, we collapse Z
                    # and Y into a single dimension to obtain a 4D input tensor.
                    outputs_shape = outputs.shape.as_list()
                    if outputs_shape[0] is None:
                        outputs_shape[0] = -1
                    outputs_4d = array_ops.reshape(outputs,
                                                   [outputs_shape[0], outputs_shape[1],
                                                    outputs_shape[2] * outputs_shape[3],
                                                    outputs_shape[4]])
                    outputs_4d = nn.bias_add(outputs_4d, self.bias, data_format='NCHW')
                    outputs = array_ops.reshape(outputs_4d, outputs_shape)
            else:
                outputs = nn.bias_add(outputs, self.bias, data_format='NHWC')

        if self.activation is not None:
            return self.activation(outputs)
        return outputs

    def compute_output_shape(self, input_shape):
        input_shape = tensor_shape.TensorShape(input_shape).as_list()
        if self.data_format == 'channels_last':
            space = input_shape[1:-1]
            new_space = []
            for i in range(len(space)):
                new_dim = utils.conv_output_length(
                    space[i],
                    self.max_kernel_size[i],
                    padding=self.padding,
                    stride=self.strides[i],
                    dilation=1)
                new_space.append(new_dim)
            return tensor_shape.TensorShape([input_shape[0]] + new_space +
                                            [self.filters])
        else:
            space = input_shape[2:]
            new_space = []
            for i in range(len(space)):
                new_dim = utils.conv_output_length(
                    space[i],
                    self.kernel_size[i],
                    padding=self.padding,
                    stride=self.strides,
                    dilation=1)
                new_space.append(new_dim)
            return tensor_shape.TensorShape([input_shape[0], self.filters] +
                                            new_space)


from tensorflow.contrib.framework.python.ops import add_arg_scope
from tensorflow.python.ops import variable_scope
from tensorflow.contrib.layers.python.layers import layers as layers_contrib
from tensorflow.contrib.layers.python.layers import utils as utils_contrib

@add_arg_scope
def dau_conv2d_tf(inputs,
                  filters,
                  dau_units,
                  max_kernel_size,
                  stride=1,
                  mu_learning_rate_factor=500,
                  data_format=None,
                  activation_fn=nn.relu,
                  normalizer_fn=None,
                  normalizer_params=None,
                  weights_initializer=init_ops.random_normal_initializer(stddev=0.1), #init_ops.glorot_uniform_initializer(),
                  weights_regularizer=None,
                  weights_constraint=None,
                  mu1_initializer=None,
                  mu1_regularizer=None,
                  mu1_constraint=None,
                  mu2_initializer=None,
                  mu2_regularizer=None,
                  mu2_constraint=None,
                  sigma_initializer=None,
                  sigma_regularizer=None,
                  sigma_constraint=None,
                  biases_initializer=init_ops.zeros_initializer(),
                  biases_regularizer=None,
                  biases_constraint=None,
                  dau_unit_border_bound=0.01,
                  dau_sigma_trainable=False,
                  reuse=None,
                  variables_collections=None,
                  outputs_collections=None,
                  trainable=True,
                  scope=None):

    if data_format not in [None, 'NWC', 'NCW', 'NHWC', 'NCHW', 'NDHWC', 'NCDHW']:
        raise ValueError('Invalid data_format: %r' % (data_format,))

    layer_variable_getter = layers_contrib._build_variable_getter({
        'bias': 'biases',
        'weight': 'weights',
        'mu1': 'mu1',
        'mu2': 'mu2',
        'sigma': 'sigma'
    })

    with variable_scope.variable_scope(
            scope, 'DAUConv', [inputs], reuse=reuse,
            custom_getter=layer_variable_getter) as sc:
        inputs = ops.convert_to_tensor(inputs)
        input_rank = inputs.get_shape().ndims

        if input_rank != 4:
            raise ValueError('DAU convolution not supported for input with rank',
                             input_rank)

        df = ('channels_first'
              if data_format and data_format.startswith('NC') else 'channels_last')
        layer = DAUConv2dTF(filters,
                            dau_units,
                            max_kernel_size,
                            strides=stride,
                            data_format=df,
                            activation=None,
                            use_bias=not normalizer_fn and biases_initializer,
                            mu_learning_rate_factor=mu_learning_rate_factor,
                            weight_initializer=weights_initializer,
                            mu1_initializer=mu1_initializer,
                            mu2_initializer=mu2_initializer,
                            sigma_initializer=sigma_initializer,
                            bias_initializer=biases_initializer,
                            weight_regularizer=weights_regularizer,
                            mu1_regularizer=mu1_regularizer,
                            mu2_regularizer=mu2_regularizer,
                            sigma_regularizer=sigma_regularizer,
                            bias_regularizer=biases_regularizer,
                            activity_regularizer=None,
                            weight_constraint=weights_constraint,
                            mu1_constraint=mu1_constraint,
                            mu2_constraint=mu2_constraint,
                            sigma_constraint=sigma_constraint,
                            bias_constraint=biases_constraint,
                            dau_unit_border_bound=dau_unit_border_bound,
                            dau_sigma_trainable=dau_sigma_trainable,
                            trainable=trainable,
                            name=sc.name,
                            _scope=sc,
                            _reuse=reuse)
        outputs = layer.apply(inputs)

        # Add variables to collections.
        layers_contrib._add_variable_to_collections(layer.dau_weights, variables_collections, 'weights')
        layers_contrib._add_variable_to_collections(layer.dau_mu1, variables_collections, 'mu1')
        layers_contrib._add_variable_to_collections(layer.dau_mu2, variables_collections, 'mu2')
        layers_contrib._add_variable_to_collections(layer.dau_sigma, variables_collections, 'sigma')

        if layer.use_bias:
            layers_contrib._add_variable_to_collections(layer.bias, variables_collections, 'biases')

        if normalizer_fn is not None:
            normalizer_params = normalizer_params or {}
            outputs = normalizer_fn(outputs, **normalizer_params)

        if activation_fn is not None:
            outputs = activation_fn(outputs)

        return utils_contrib.collect_named_outputs(outputs_collections, sc.name, outputs)
</code></pre>
<p>Any help would be highly appreciated.
Cheers,
H </p>
|
<p>In TensorFlow 2.x, the recommended method of creating reusable neural network layers is to create a new <code>tf.keras.layers.Layer</code> subclass. TensorFlow provides a <a href="https://www.tensorflow.org/guide/keras/custom_layers_and_models" rel="nofollow noreferrer">great tutorial on this</a>. You can reuse the vast majority of the code in your posted example in a <code>tf.keras</code> layer class. You might also be able to inherit from <code>tensorflow.python.keras.layers.convolutional.Conv</code> to reduce the amount of boilerplate code.</p>
<p>As for the some modules not being found, you should use the aliases that TensorFlow exposes. Here is an incomplete list:</p>
<ul>
<li><code>array_ops.reshape</code> -> <code>tf.reshape</code></li>
<li><code>init_ops.constant_initializer</code> -> <code>tf.initializers.constant</code></li>
<li><code>tensor_shape</code> -> <code>tf.TensorShape</code></li>
<li><code>nn_ops.Convolution</code> -> <code>tf.nn.convolution</code> or <code>tf.keras.layers.Conv?D</code></li>
</ul>
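<p>To make that concrete, here is a minimal, hedged TF2 sketch of the layer (assumptions: channels_last data only, zero-initialized mu, fixed sigma, no XLA path and no mu learning-rate factor; the class name <code>DAUConv2d</code> is a placeholder). It reuses the kernel construction from <code>get_kernel_fn</code> above, with <code>keepdims</code> replacing the removed <code>keep_dims</code> argument:</p>
<pre><code>import numpy as np
import tensorflow as tf

class DAUConv2d(tf.keras.layers.Layer):
    """Simplified TF2 sketch of DAUConv2dTF (channels_last, trainable mu, fixed sigma)."""

    def __init__(self, filters, dau_units, max_kernel_size, **kwargs):
        super().__init__(**kwargs)
        self.filters = filters
        self.dau_units = dau_units            # e.g. (2, 2)
        self.max_kernel_size = max_kernel_size

    def build(self, input_shape):
        s = int(input_shape[-1])              # number of input channels
        g = int(np.prod(self.dau_units))      # DAU units per filter
        shape = (1, s, g, self.filters)
        self.dau_weights = self.add_weight(
            'weights', shape=shape,
            initializer=tf.keras.initializers.RandomNormal(stddev=0.1))
        self.dau_mu1 = self.add_weight('mu1', shape=shape, initializer='zeros')
        self.dau_mu2 = self.add_weight('mu2', shape=shape, initializer='zeros')
        self.dau_sigma = self.add_weight(
            'sigma', shape=shape,
            initializer=tf.keras.initializers.Constant(0.5), trainable=False)

    def call(self, inputs):
        k = self.max_kernel_size
        # kernel-grid offsets, shape (k*k, 1, 1, 1), centered on the kernel
        X, Y = np.meshgrid(np.arange(k), np.arange(k))
        X = (X.reshape(k * k, 1, 1, 1) - k // 2).astype(np.float32)
        Y = (Y.reshape(k * k, 1, 1, 1) - k // 2).astype(np.float32)
        # per-unit Gaussian over the grid, normalized to sum to 1
        gauss = tf.exp(-((X - self.dau_mu1) ** 2 + (Y - self.dau_mu2) ** 2)
                       / (2.0 * self.dau_sigma ** 2))
        gauss = gauss / tf.reduce_sum(gauss, axis=0, keepdims=True)
        # weight each unit and sum units into one kernel per (input, output) pair
        kernel = tf.reduce_sum(self.dau_weights * gauss, axis=2)   # (k*k, s, f)
        kernel = tf.reshape(kernel, (k, k, kernel.shape[1], kernel.shape[2]))
        return tf.nn.convolution(inputs, kernel, strides=1, padding='SAME')

# usage sketch:
# y = DAUConv2d(32, dau_units=(2, 2), max_kernel_size=9)(tf.zeros([1, 16, 16, 3]))
</code></pre>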
|
python|tensorflow|tensorflow2.0|tflearn|migrating
| 1 |
1,903,810 | 59,922,807 |
python how to count words in a list element
|
<p>The code below returns a list:</p>
<pre><code>[['We test robots'], ['Give us a try'], [' ']]
</code></pre>
<p>Now I need to count the words in each element.
How could I achieve this in Python without importing any packages? For the list above I should get 3, 4 and 1 for the three elements.
Thanks</p>
<pre><code>import re

S = "We test robots.Give us a try? "
splitted = [l.split(',') for l in re.split(r'\.|\!|\?', S) if l]
print(splitted)
</code></pre>
|
<p>There are multiple ways to do this; here are two:</p>
<pre><code>l = [['We test robots'], ['Give us a try'], [' ']]

# using map
list(map(lambda x: len(x[0].split()) if len(x[0]) > 1 else 1, l))
[3, 4, 1]

# using list comprehension
[len(x[0].split()) if len(x[0]) > 1 else 1 for x in l]
[3, 4, 1]
</code></pre>
|
python|list|counting
| 3 |
1,903,811 | 60,301,614 |
How to segregate higher numbers from lower numbers
|
<pre><code>import random
num = input('enter a number:')
n1=random.randrange(0,50)
n2=random.randrange(0,50)
n3=random.randrange(0,50)
n4=random.randrange(0,50)
n5=random.randrange(0,50)
</code></pre>
<p>What I want to do is compare those random numbers to my input number: all numbers higher than the user input should be displayed, and at the same time all numbers smaller than the user input should be displayed as well.
For example, if I entered 30
and the random numbers are 5, 45, 18, 22, 50,
the output would be:
Higher numbers than 30 are: 45, 50
Lower numbers than 30 are: 5, 18, 22</p>
<p>I tried using if/else, but it won't display anything if the series of numbers is mixed like in the example.</p>
|
<pre><code>import random

num = int(input('enter a number:'))  # input() returns a string, so convert it
random_numbers = []

# Generate all the numbers and store them in a list
for i in range(0, 5):
    random_numbers.append(random.randrange(0, 50))

# Get the lower values
lower = [number for number in random_numbers if number < num]
# Get the higher ones
higher = [number for number in random_numbers if number > num]

print(f"Higher numbers than {num} are: {higher}")
print(f"Lower numbers than {num} are: {lower}")
</code></pre>
|
python|comparison
| 1 |
1,903,812 | 3,108,488 |
python httplib: getting the outgoing request headers
|
<p>I do:</p>
<pre><code>con = HTTPConnection(SERVER_NAME)
con.request('GET', PATH, HEADERS)
resp = con.getresponse()
</code></pre>
<p>For debugging reasons, I want to see the request I used (it's fields, path, method,..). I would expect there to be some sort of <code>con.getRequest()</code> or something of the sort but didn't find anything. Ideas?</p>
|
<p>Try</p>
<pre><code>con.set_debuglevel(1)
</code></pre>
<p>That will enable debugging output, which among other things, will print out all the data it sends.</p>
<p>If you only want to get the headers and request line, not the request body (or any other debugging output), you can subclass <code>HTTPConnection</code> and override the <code>_output</code> method, which is called by the class itself to produce output (except for the request body). You'd want to do something like this:</p>
<pre><code>class MyHTTPConnection(HTTPConnection):
    def _output(self, s):
        print repr(s)
        # httplib.HTTPConnection is an old-style class in Python 2, so call
        # the parent method directly rather than via super()
        HTTPConnection._output(self, s)
</code></pre>
<p>For more details on how that works and possible alternatives, have a look at the <code>httplib</code> source code.</p>
|
python|httplib|httpconnection
| 3 |
1,903,813 | 6,101,016 |
PyQt - how can element exist without pointer?
|
<p>This program shows a window and 3 buttons.<br>
The buttons are generated by the iteration <code>for i in range(3):</code>.</p>
<p>I don't understand this:<br>
if <code>button</code> in the end only contains the last button, where are all the other buttons?<br>
Why are they not deleted when <code>button</code> starts referring to another element?</p>
<pre><code>import sys
from PyQt4.QtGui import *

class MainWindow(QWidget):
    def __init__(self, parent=None):
        QWidget.__init__(self, parent)
        self.vbox = QVBoxLayout()
        for i in range(3):
            button = QPushButton(str(i), self)
            self.vbox.addWidget(button)
        self.setLayout(self.vbox)

app = QApplication(sys.argv)
myapp = MainWindow()
myapp.show()
sys.exit(app.exec_())
</code></pre>
<p>To what are all the buttons attached?<br>
And how can I access them? <code>button</code> works for the last created button, but what about the other buttons?</p>
|
<p>A reference to the button is held in the 'vbox' object.</p>
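<p>The layout (and the parent widget, since each button was created with <code>self</code> as its parent) keeps them alive. You can get them back from the layout, for example inside a <code>MainWindow</code> method:</p>
<pre><code>for i in range(self.vbox.count()):
    w = self.vbox.itemAt(i).widget()
    if isinstance(w, QPushButton):
        print(w.text())

# or via the parent widget, from outside the class:
buttons = myapp.findChildren(QPushButton)
</code></pre>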
|
python|pyqt
| 3 |
1,903,814 | 67,812,176 |
Flask API error converting JSON string to pandas dataframe
|
<p>I am building my first API in Flask. It receives a JSON string from Postman, and according to the log in the terminal the POST request is working: I can see my JSON string printed by the <code>print(json_)</code> line in the code.</p>
<p>However, the next line is the problem:
<code>query = pd.read_json(json_, orient='index')</code>
This line is supposed to convert the JSON into a pandas dataframe, so I can convert it to a numpy array and load it into my machine learning model.
Outside of Flask my logic works well, but here the code breaks. I put in several print commands to trace the breaking point, and it seems to be this query line.
Any suggestions are very much appreciated. Thank you in advance!
Vlad</p>
<p>The complete code for API looks like this:</p>
<pre><code>from flask import Flask, request, jsonify
import joblib
import sys
import traceback
import pandas as pd
import numpy as np

app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    try:
        json_ = request.json
        print(json_)
        query = pd.read_json(json_, orient='index')
        print('query', query)
        res = np.array(query).reshape(1, -1)
        print('results', res)
        prediction = rf.predict(res)
        print(prediction)
        return jsonify({'prediction': list(prediction)})
    except:
        return jsonify({'trace': traceback.format_exc()})

if __name__ == '__main__':
    try:
        port = int(sys.argv[1])  # requires the sys import above
    except:
        port = 12345

    rf = joblib.load('random_forest_model_diabetes_refined_31_5_2021.pkl')  # Load ML model
    print('Model loaded')

    app.run(debug=True, port=port)
</code></pre>
|
<p>When I replaced my query line</p>
<pre><code>query = pd.read_json(json_, orient='index')
</code></pre>
<p>with:</p>
<pre><code>query = pd.json_normalize(json_)
</code></pre>
<p>It works. I am puzzled, though presumably <code>request.json</code> returns an already-parsed Python dict, which <code>pd.json_normalize</code> accepts, whereas <code>pd.read_json</code> expects a JSON string, path, or file-like object.</p>
|
python|pandas|flask|rest
| 0 |
1,903,815 | 30,318,985 |
How to exclude a particular html tag(without any id) from several tags while using scrapy?
|
<pre><code><div class="region size2of3">
    <h2>Mumbai</h2>
    <strong>Fort</strong>
    <div>Elphinstone building, Horniman Circle,</div>
    <div>Veer Nariman Road, Fort</div>
    <div>Mumbai 400001</div>
    <div>Timings: 08:00-00:30 hrs (Mon-Sun)</div>
    <div><br></div>
</div>
</code></pre>
<p>I want to exclude the "Timings: 08:00-00:30 hrs (Mon-Sun)" div tag while parsing.</p>
<p>Here's my code:</p>
<pre><code>import scrapy
from job.items import StarbucksItem

class StarbucksSpider(scrapy.Spider):
    name = "starbucks"
    allowed_domains = ["starbucks.in"]
    start_urls = ["http://www.starbucks.in/coffeehouse/store-locations/"]

    def parse(self, response):
        for sel in response.xpath('//div[@class="region size2of3"]'):
            item = StarbucksItem()
            item['title'] = sel.xpath('div/text()').extract()
        yield item
</code></pre>
|
<p>I would use <a href="https://developer.mozilla.org/en-US/docs/Web/XPath/Functions/starts-with" rel="nofollow"><code>starts-with()</code> XPath function</a> to get the <code>div</code> element's text that starts with "Timings":</p>
<pre><code>sel.xpath('.//div[starts-with(., "Timings")]/text()').extract()
</code></pre>
<p>Note that the HTML structure of the page doesn't make it easy to distinguish locations between each other - there is no location-specific containers that you can iterate over. In this case, I would find every <code>h2</code> or <code>strong</code> tag and use <code>following-sibling</code>, example from the <a href="http://doc.scrapy.org/en/latest/topics/shell.html" rel="nofollow">Scrapy Shell</a>:</p>
<pre><code>In [10]: for sel in response.xpath('//div[contains(@class, "region")]/*[self::h2 or self::strong]'):
name = sel.xpath('text()').extract()[0]
timings = sel.xpath('./following-sibling::div[starts-with(., "Timings")]/text()').extract()[0]
print name, timings
....:
Mumbai Timings: 08:00-00:30 hrs (Mon-Sun)
Fort Timings: 08:00-00:30 hrs (Mon-Sun)
Colaba Timings: 07:00-01:00 hrs (Mon-Sun)
Goregaon Timings: 10:00-23:30 hrs (Mon-Sun)
Powai Timings: 07:00-00:00 hrs (Mon-Sun)
...
Hi-Tech City Timings: 09:00 - 22:30 hrs (Mon - Sun)
Madhapur Timings: 11:00 -23:00 hrs (Mon - Sun)
Banjara Hills Timings: 10:00 -22:30 hrs (Mon - Sun)
</code></pre>
<p>Also note that, if you want to extract the time range values, you can use <a href="http://doc.scrapy.org/en/latest/topics/selectors.html#using-selectors-with-regular-expressions" rel="nofollow"><code>.re()</code></a>:</p>
<pre><code>In [18]: for sel in response.xpath('//div[contains(@class, "region")]/*[self::h2 or self::strong]'):
name = sel.xpath('text()').extract()[0]
timings = sel.xpath('./following-sibling::div[starts-with(., "Timings")]/text()')[0].re(r'(\d+:\d+)\s*\-\s*(\d+:\d+)')[:2]
print name, timings
Mumbai [u'08:00', u'00:30']
Fort [u'08:00', u'00:30']
Colaba [u'07:00', u'01:00']
Goregaon [u'10:00', u'23:30']
...
Hi-Tech City [u'09:00', u'22:30']
Madhapur [u'11:00', u'23:00']
Banjara Hills [u'10:00', u'22:30']
</code></pre>
<p>Additionally, make sure you have <code>yield</code> inside the loop body (see the code you've posted).</p>
<hr>
<p>If you want to exclude <code>Timings</code> and get the rest of the location description, use:</p>
<pre><code>for sel in response.xpath('//div[contains(@class, "region")]/*[self::h2 or self::strong]'):
print " ".join(item.strip() for item in sel.xpath('following-sibling::div[position() < 4 and not(starts-with(., "Timings"))]/text()').extract())
</code></pre>
|
python|html|web-scraping|scrapy|scrapy-spider
| 0 |
1,903,816 | 67,128,516 |
jupyter notebook geopandas doesnt read my data when i call them
|
<p>First of all, I'm using a new environment to install geopandas with conda, because I didn't install it in the (base) environment of Jupyter Notebook. Now my packages work, but I can't read my data.
Here are my imports and the error:</p>
<pre><code>import pandas as pd
import geopandas
import numpy as np
import matplotlib.pyplot as plt
import fiona
</code></pre>
<pre><code>ülke = geopandas.read_file("countries.geojson")
---------------------------------------------------------------------------
CPLE_OpenFailedError Traceback (most recent call last)
fiona/_shim.pyx in fiona._shim.gdal_open_vector()
fiona/_err.pyx in fiona._err.exc_wrap_pointer()
CPLE_OpenFailedError: countries.geojson: No such file or directory
During handling of the above exception, another exception occurred:
DriverError Traceback (most recent call last)
<ipython-input-6-88e26eb4192f> in <module>
----> 1 ülke = geopandas.read_file("countries.geojson")
~\anaconda3\envs\geo_env\lib\site-packages\geopandas\io\file.py in _read_file(filename, bbox, mask, rows, **kwargs)
158
159 with fiona_env():
--> 160 with reader(path_or_bytes, **kwargs) as features:
161
162 # In a future Fiona release the crs attribute of features will
~\anaconda3\envs\geo_env\lib\site-packages\fiona\env.py in wrapper(*args, **kwargs)
406 def wrapper(*args, **kwargs):
407 if local._env:
--> 408 return f(*args, **kwargs)
409 else:
410 if isinstance(args[0], str):
~\anaconda3\envs\geo_env\lib\site-packages\fiona\__init__.py in open(fp, mode, driver, schema, crs, encoding, layer, vfs, enabled_drivers, crs_wkt, **kwargs)
254
255 if mode in ('a', 'r'):
--> 256 c = Collection(path, mode, driver=driver, encoding=encoding,
257 layer=layer, enabled_drivers=enabled_drivers, **kwargs)
258 elif mode == 'w':
~\anaconda3\envs\geo_env\lib\site-packages\fiona\collection.py in __init__(self, path, mode, driver, schema, crs, encoding, layer, vsi, archive, enabled_drivers, crs_wkt, ignore_fields, ignore_geometry, **kwargs)
160 if self.mode == 'r':
161 self.session = Session()
--> 162 self.session.start(self, **kwargs)
163 elif self.mode in ('a', 'w'):
164 self.session = WritingSession()
fiona/ogrext.pyx in fiona.ogrext.Session.start()
fiona/_shim.pyx in fiona._shim.gdal_open_vector()
DriverError: countries.geojson: No such file or directory
enter code here
</code></pre>
|
<p>Make sure that your geojson is in the same directory as your <code>.ipynb</code> file.</p>
<p>Or you can just put the full geojson file location in the read statement; use a raw string so the Windows backslashes aren't treated as escape sequences, for example:</p>
<pre><code>ülke = geopandas.read_file(r"C:\user\folder1\folder2\data\countries.geojson")
</code></pre>
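<p>As a quick sanity check, you can print the notebook's working directory and its contents, since relative paths are resolved against it:</p>
<pre><code>import os
print(os.getcwd())       # directory that relative paths are resolved against
print(os.listdir('.'))   # confirm countries.geojson is listed here
</code></pre>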
|
python|pandas|matplotlib|geopandas
| 0 |
1,903,817 | 66,958,119 |
Error when installing gensim using pip install
|
<p>Used command <code>pip install --upgrade gensim</code> from <a href="https://pypi.org/project/gensim/" rel="nofollow noreferrer">https://pypi.org/project/gensim/</a>.
Does anyone know what might cause this?</p>
<pre><code>error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.28.29910\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
----------------------------------------
ERROR: Failed building wheel for gensim
Running setup.py clean for gensim
Failed to build gensim
Installing collected packages: gensim
Running setup.py install for gensim ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\appdata\local\programs\python\python39\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Andreea Elena\\AppData\\Local\\Temp\\pip-install-khjrriwd\\gensim_18d18388d198487b8f7aebdfc3c97b94\\setup.py'"'"'; __file__='"'"'C:\\Users\\AppData\\Local\\Temp\\pip-install-khjrriwd\\gensim_18d18388d198487b8f7aebdfc3c97b94\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\AppData\Local\Temp\pip-record-c7348b68\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\appdata\local\programs\python\python39\Include\gensim'
</code></pre>
|
<p>It was a version problem: Python 3.9 wasn't supported by gensim yet, so pip fell back to compiling it from source (hence the cl.exe error). Installing Python 3.8 made it work.</p>
|
python|gensim
| 5 |
1,903,818 | 64,022,764 |
Manipulate string containing parenthesis in Python
|
<p>Say I have a string like this,</p>
<p><code>"(a=1) and ((b=2) or (c=3))"</code></p>
<p>Where ever there is <strong>"and"</strong> i need to do convert it to this in python,</p>
<p><code>"query[(a=1)].add(query[(b=2) or (c=3)])"</code></p>
<p>As you can see, there are two things happening:<br />
whenever I am doing <strong>add</strong>, I wrap the operands with query[] and convert <code>a and b</code> to <code>a.add(b)</code>.</p>
<p>One more example if the string is like,</p>
<p><code>"(a=1) and ((b=2) and (c=3))"</code> where there are two and's</p>
<p>the result should be,</p>
<p><code>"query[(a=1)].add(query[(b=2)].add(query[(c=3)]))"</code></p>
<p>I cannot hard code this, because the parentheses could be of any nested level.</p>
<p>The expression i have shown above is simplified one for explaination, it could be like this also,</p>
<p><code>'((Attributes.name=="usertype") & (cast(User_Values.value, db.String())=='"Employee"')) and (((Attributes.name=="emails") & (User_Values.value.contains([{"type":"examplecom"}]))) or ((Attributes.name=="emails") & (User_Values.value.contains([{"value":"exampleorg"}]))))'</code></p>
<p><strong>Progres</strong>:-</p>
<p>Was trying to use "pyparsing" library to to get outer brackets content of operands.</p>
<pre><code>ms = '((Attributes.name=="usertype") & (cast(User_Values.value, db.String())=='"Employee"')) and (((Attributes.name=="emails") & (User_Values.value.contains([{"type":"examplecom"}]))) and ((Attributes.name=="emails") & (User_Values.value.contains([{"value":"exampleorg"}]))))'
scanner = originalTextFor(nestedExpr('(',')'))
for match in scanner.searchString(ms):
print("match is ..........", match[0])
</code></pre>
<p>got this,</p>
<pre><code>((Attributes.name=="usertype") & (cast(User_Values.value, db.String())==Employee))
(((Attributes.name=="emails") & (User_Values.value.contains([{"type":"examplecom"}]))) and ((Attributes.name=="emails") & (User_Values.value.contains([{"value":"exampleorg"}]))))
</code></pre>
<p>Next, I am looking to get the outermost parentheses content of the operands of <strong>and</strong>.
That is not happening in the above example; it just gives two independent pieces of parenthesized content.</p>
<p><strong>Edit on 8th October 2020</strong></p>
<p>The solution by Ken T makes sense, but i notice an issue there.</p>
<p><code>"((b=2) and (c=3)) and (a=1)"</code> where there are two and's</p>
<p>the result should be,</p>
<p><code>"query[(b=2)].add(query[(c=3)]).add(query[(a=1)])"</code>
but the result is,</p>
<pre><code>query[((b=2)].add(query[c=3)) and (a=1])
</code></pre>
<p>Example,</p>
<p>for input,</p>
<pre><code>'(((Values.attribute=="1") & (cast(Values.value, db.String()).like('"%a%"'))) and ((Values.attribute=="3") & (cast(Values.value, db.String()).like('"%b%"')))) and ((Values.attribute=="1") & (cast(Values.value, db.String()).like('"%a%"')))'
</code></pre>
<p>The expected output is,</p>
<pre><code>query[((Values.attribute=="1") & (cast(Values.value, db.String()).like(%a%)))]
.add(query[(Values.attribute=="3") & (cast(Values.value, db.String()).like(%b%))])
.add(query[(Values.attribute=="1") & (cast(Values.value, db.String()).like(%a%)]))
</code></pre>
<p>but actual output is,</p>
<pre><code>query[(((Values.attribute=="1") & (cast(Values.value, db.String()).like(%a%)))]
.add(query[(Values.attribute=="3") & (cast(Values.value, db.String()).like(%b%))))]
.add(query[(Values.attribute=="1") & (cast(Values.value, db.String()).like(%a%)]))
</code></pre>
<p>observe the parenthesis.</p>
<p><em>There must not be an <code>and</code> or a <code>.add()</code> inside <code>query[]</code>.</em></p>
<p>How do I correct the solution by Ken T?</p>
|
<p>Recursive! This is a recursive problem.</p>
<pre><code>import re

def queryGen(text, lastOP=''):
    # Split the text at the first "(...) and/or (...)" connective.
    pattern = re.compile(r"\((.+?)\)\s+(and|or)+\s\((.+)\)")
    res = pattern.search(text)
    if not res:
        # Base case: no connective left. Only wrap in query[...] after an "and".
        if lastOP == 'or':
            return text
        elif lastOP == 'and':
            return f'query[{text}]'
    if res.group(2)=='and':
        return f"query[({res.group(1)})].add({queryGen(res.group(3), lastOP='and')})"
    if res.group(2)=='or':
        return f"query[({res.group(1)}) or ({queryGen(res.group(3), lastOP='or')})]"
print(queryGen("(a=1) and ((b=2) or (c=3))"))
print(queryGen("(a=1) and ((b=2) and (c=3))"))
print(queryGen("""((Attributes.name=="usertype") & (cast(User_Values.value, db.String())=='"Employee"')) and (((Attributes.name=="emails") & (User_Values.value.contains([{"type":"examplecom"}]))) or ((Attributes.name=="emails") & (User_Values.value.contains([{"value":"exampleorg"}]))))"""))
</code></pre>
<p>Return:</p>
<pre><code>query[(a=1)].add(query[(b=2) or (c=3)])
query[(a=1)].add(query[(b=2)].add(query[c=3]))
query[((Attributes.name=="usertype") & (cast(User_Values.value, db.String())=='"Employee"'))].add(query[((Attributes.name=="emails") & (User_Values.value.contains([{"type":"examplecom"}]))) or ((Attributes.name=="emails") & (User_Values.value.contains([{"value":"exampleorg"}])))])
</code></pre>
<p>You can test the regular expression pattern interactively at the following website:</p>
<p><a href="https://regex101.com/r/C1GFuS/1" rel="nofollow noreferrer">https://regex101.com/r/C1GFuS/1</a></p>
|
python|regex|string|recursion|pyparsing
| 1 |
1,903,819 | 66,542,809 |
How to avoid printing the line that calls an exception or warning in Python?
|
<p>When I want to throw a warning or exception in Python 3, such as in the following code:</p>
<pre><code>import warnings
def main():
warnings.warn('This is a warning.')
raise RuntimeError('This is an exception.')
if __name__=='__main__':
main()
</code></pre>
<p>The terminal tells me in which line the warning or exception was raised:</p>
<pre><code>test_exception.py:4: UserWarning: This is a warning.
warnings.warn('This is a warning.')
Traceback (most recent call last):
File "test_exception.py", line 8, in <module>
main()
File "test_exception.py", line 5, in main
raise RuntimeError('This is an exception.')
RuntimeError: This is an exception.
</code></pre>
<p>It's nice to know the location, but lines 2 and 7 are redundant, since their contents are already conveyed by lines 1 and 8. How can I avoid printing the source lines that throw an exception or warning? I notice that some packages like PyTorch report warnings and exceptions elegantly, in exactly the way I wish. For example, PyTorch raises an exception in the following style:</p>
<pre><code>import torch
import numpy as np
a = torch.from_numpy(np.random.rand(2, 3))
b = torch.from_numpy(np.random.rand(3, 4))
a = a.to('cpu')
b = b.to('cuda:0')
c = torch.mm(a, b)
</code></pre>
<pre><code>Traceback (most recent call last):
File "test_torch_exception.py", line 7, in <module>
c = torch.mm(a, b)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_mm
</code></pre>
|
<p>If you change your source code slightly, toward a more realistic situation, it becomes clearer why the behavior is like this:</p>
<pre><code>import warnings
warn = 'This is a warning'
exception = 'This is an exception'
def main():
warnings.warn(warn)
raise RuntimeError(exception)
if __name__=='__main__':
main()
</code></pre>
<p>Now the output is:</p>
<pre><code>test_exception.py:6: UserWarning: This is a warning
warnings.warn(warn)
Traceback (most recent call last):
File "C:\Users\11ldornbusch\Desktop\2del.py", line 10, in <module>
main()
File "C:\Users\11ldornbusch\Desktop\2del.py", line 7, in main
raise RuntimeError(exception)
RuntimeError: This is an exception
</code></pre>
<p>So if you use variables, the output shows the variable's content on one line and its name on the other. This gives you more context about the problem: for example, an empty value in the output still leaves you the name of the variable that produced it.</p>
<p>You can also use logging, or hook into Python's warnings machinery to modify how warnings are printed.
From the documentation available here: <a href="https://docs.python.org/3/library/warnings.html" rel="nofollow noreferrer">https://docs.python.org/3/library/warnings.html</a></p>
<blockquote>
<p>The printing of warning messages is done by calling showwarning(),
which may be overridden; the default implementation of this function
formats the message by calling formatwarning(), which is also
available for use by custom implementations.</p>
</blockquote>
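<p>A minimal sketch of that approach: replacing <code>warnings.formatwarning</code> so the warning prints on a single line, without echoing the source line:</p>
<pre><code>import warnings

def one_line_warning(message, category, filename, lineno, line=None):
    # Keep location, category and message; drop the echoed source line.
    return f"{filename}:{lineno}: {category.__name__}: {message}\n"

warnings.formatwarning = one_line_warning
warnings.warn('This is a warning.')
# prints: test_exception.py:4: UserWarning: This is a warning.
</code></pre>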
|
python
| 1 |
1,903,820 | 72,292,849 |
What exactly is the bitwise AND doing with collections.Counter?
|
<p>There was a <a href="https://stackoverflow.com/q/72263877">recent question</a> where the (correct) answer really surprised me by using <code>&</code> on two Counters and then "getting it right".</p>
<p>From the <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>Counters support rich comparison operators for equality, subset, and superset relationships: ==, !=, <, <=, >, >=. All of those tests treat missing elements as having zero counts so that Counter(a=1) == Counter(a=1, b=0) returns true.</p>
</blockquote>
<p>But that doesn't go into the specifics of <code>&</code>. I wrote a small test script:</p>
<pre><code>from collections import Counter
from pprint import pp
cls = Counter # `dict` fails: TypeError: unsupported operand type
o1 = cls(a=1,b=2,c=3,de=3,f=3,i1=9)
o2 = cls(a=1,b=2,c=3,de=5,f=6,i2=9)
res = o1 & o2
pp(dict(o1=o1,o2=o2,res=res))
</code></pre>
<h4>the output is:</h4>
<pre><code>{'o1': Counter({'i1': 9, 'c': 3, 'de': 3, 'f': 3, 'b': 2, 'a': 1}),
'o2': Counter({'i2': 9, 'f': 6, 'de': 5, 'c': 3, 'b': 2, 'a': 1}),
'res': Counter({'c': 3, 'de': 3, 'f': 3, 'b': 2, 'a': 1})}
</code></pre>
<p>It seems to me that <code>counter1 & counter2</code> means:</p>
<ul>
<li>calculate the intersection of the keys of both.</li>
<li>for the values on common keys, compute the <code>min</code></li>
</ul>
<p>Am I correct? Asides from <code>Counter</code>, and <code>set</code>, do any other standard library data structures also define <code>__and__</code> (the backing dunder for <code>&</code>, IIRC)?</p>
|
<p>Your understanding is pretty much correct. <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow noreferrer">If you look a couple lines down from where you've quoted</a>, you'll see an example use of <code>&</code> on <code>Counter</code> objects -- you don't need to dive into the source code to find it:</p>
<blockquote>
<p>Intersection and union return the minimum and maximum of corresponding counts. ...</p>
<pre class="lang-py prettyprint-override"><code>>>> c & d # intersection: min(c[x], d[x])
Counter({'a': 1, 'b': 1})
>>> c | d # union: max(c[x], d[x])
Counter({'a': 3, 'b': 2})
</code></pre>
</blockquote>
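<p>As for the side question: besides <code>set</code>/<code>frozenset</code> and <code>Counter</code>, dictionary view objects also support the set-style <code>&</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> d1 = {'a': 1, 'b': 2}
>>> d2 = {'b': 3, 'c': 4}
>>> d1.keys() & d2.keys()   # intersection of the key views, returns a set
{'b'}
</code></pre>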
|
python|dictionary|data-structures
| 2 |
1,903,821 | 72,355,601 |
How to Python gnupg (GPG) encrypt with recipient's email address rather than their fingerprint?
|
<p>How to Python-gnupg (GnuPG / GPG / OpenPGP) encrypt with recipient's email address rather than their fingerprint?</p>
<p><a href="https://www.saltycrane.com/blog/2011/10/python-gnupg-gpg-example/" rel="nofollow noreferrer">This example</a> shows (which failes on my Ubuntu 20.04 / such a thing, but it's an old example; excerpt:</p>
<pre><code>encrypted_data = gpg.encrypt(unencrypted_string, 'testgpguser@mydomain.com')
</code></pre>
<p>More-current (maybe?) references (like <a href="https://gnupg.readthedocs.io/en/latest/#" rel="nofollow noreferrer">this</a> and <a href="https://github.com/isislovecruft/python-gnupg" rel="nofollow noreferrer">this</a>) do not mention recipient email addresses, seemingly requiring numeric-only fingerprints for (presumably) public-key identication. Is it possible in today's environment (to identify a key solely by it's associated email_address/identity)? Possibly requiring a <a href="https://superuser.com/q/227991/98033">keyserver</a>?</p>
<p>My tested <a href="https://gist.githubusercontent.com/johnnyutahh/e32478cac863d5d2d7930cc94cd3a857/raw/" rel="nofollow noreferrer">python-gnupg system versions</a>.</p>
|
<p>Looking at the version number in your question, you appear to be using the <a href="https://github.com/isislovecruft/python-gnupg" rel="nofollow noreferrer">pretty-bad-protocol</a> rewrite, which hasn't been updated since 2018.</p>
<p>If you simply install <code>python-gnupg</code>:</p>
<pre><code>$ pip install python-gnupg
</code></pre>
<p>You get version <code>0.4.9</code>, which was released <a href="https://github.com/vsajip/python-gnupg/releases/tag/0.4.9" rel="nofollow noreferrer">just a few days ago</a>:</p>
<pre><code>Collecting python-gnupg
Downloading http://.../python_gnupg-0.4.9-py2.py3-none-any.whl (18 kB)
Installing collected packages: python-gnupg
Successfully installed python-gnupg-0.4.9
</code></pre>
<p>Using this version of the <code>gnupg</code> module, your code works without a problem:</p>
<pre><code>>>> import gnupg
>>> gpg = gnupg.GPG()
>>> res = gpg.encrypt("this is a test", "bob@example.com")
>>> res.data
b'-----BEGIN PGP MESSAGE-----\n...\n-----END PGP MESSAGE-----\n'
>>>
</code></pre>
<hr />
<p>It is of course <em>better</em> to use a fingerprint, because you may have multiple keys in your keychain with the same email address, and you can't be certain which one you'll get. Using a fingerprint ensures that you get that specific key.</p>
|
python|encryption|gnupg|keyserver|python-gnupgp
| 2 |
1,903,822 | 65,610,888 |
TypeError: update_graph_scatter() takes 0 positional arguments but 1 was given
|
<p><strong>TypeError: update_graph_scatter() takes 0 positional arguments but 1 was given
<a href="https://i.stack.imgur.com/d23dn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d23dn.png" alt="enter image description here" /></a>
Getting the above error while using dash with python.
Below is my code.</strong></p>
<pre><code>import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import plotly.graph_objs as go
import requests
app = dash.Dash()
app.layout = html.Div([
html.Div([
html.Iframe(src = 'https://www.flightradar24.com', height = 500, width = 1200)
]),
html.Div([
html.Pre(
id='counter_text',
children='Active flights worldwide:'
),
dcc.Graph(id='live-update-graph',style={'width':1200}),
dcc.Interval(
id='interval-component',
interval=6000, # 6000 milliseconds = 6 seconds
n_intervals=0
)])
])
counter_list = []
@app.callback(Output('counter_text', 'children'),
[Input('interval-component', 'n_intervals')])
@staticmethod
def update_layout(n):
url = "https://data-live.flightradar24.com/zones/fcgi/feed.js?faa=1\
&mlat=1&flarm=1&adsb=1&gnd=1&air=1&vehicles=1&estimated=1&stats=1"
# A fake header is necessary to access the site:
res = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
data = res.json()
counter = 0
for element in data["stats"]["total"]:
counter += data["stats"]["total"][element]
counter_list.append(counter)
return 'Active flights worldwide: {}'.format(counter)
@app.callback(Output('live-update-graph','figure'),
[Input('interval-component', 'n_intervals')])
def update_graph(n):
fig = go.Figure(
data = [go.Scatter(
x = list(range(len(counter_list))),
y = counter_list,
mode='lines+markers'
)])
return fig
if __name__ == '__main__':
app.run_server()
</code></pre>
<h2><strong>As you can see in the browser, I am getting this issue.</strong></h2>
<hr />
<p>Please help, as I have tried every solution to no avail.</p>
|
<p>The problem is the decorator, <code>@staticmethod</code>, here:</p>
<pre class="lang-py prettyprint-override"><code>@app.callback(Output('counter_text', 'children'),
[Input('interval-component', 'n_intervals')])
@staticmethod
def update_layout(n):
url = "https://data-live.flightradar24.com/zones/fcgi/feed.js?faa=1\
&mlat=1&flarm=1&adsb=1&gnd=1&air=1&vehicles=1&estimated=1&stats=1"
</code></pre>
<p>This is a normal function, not a method of a class, so the <code>@staticmethod</code> decorator is incorrect here. I removed that, and your app worked nicely.</p>
|
python|plotly-dash
| 0 |
1,903,823 | 50,793,309 |
How do I assign a oneof field on a protobuf message if the child message has no fields?
|
<p>I want to create a BigTable <code>DeleteFromRow</code> mutation. The proto for the <code>Mutation</code> and the <code>DeleteFromRow</code> look like this:</p>
<pre><code>message Mutation {
  oneof mutation {
// Set a cell's value.
SetCell set_cell = 1;
// Deletes cells from a column.
DeleteFromColumn delete_from_column = 2;
// Deletes cells from a column family.
DeleteFromFamily delete_from_family = 3;
// Deletes cells from the entire row.
DeleteFromRow delete_from_row = 4;
}
}
message DeleteFromRow {
}
</code></pre>
<p>In Python, you cannot directly instantiate a <code>DeleteFromRow</code> object and set the <code>delete_from_row</code> field of the <code>Mutation</code> to that object.</p>
<p>So this <strong>does not work</strong>:</p>
<pre><code>request = bigtable_pb2.MutateRowRequest(table_name='tablename', row_key=row_key)
mutation = request.mutations.add()
mutation.delete_from_row = data_pb2.Mutation.DeleteFromRow()
</code></pre>
<p>As raised by other SO users (see <a href="https://stackoverflow.com/questions/18376190/attributeerror-assignment-not-allowed-to-composite-field-task-in-protocol-mes/22771612">this question</a>), that results in a </p>
<pre><code>AttributeError: Assignment not allowed to composite field "delete_from_row" in protocol message object.
</code></pre>
<p>According to the <a href="https://developers.google.com/protocol-buffers/docs/reference/python-generated#oneof" rel="nofollow noreferrer">protobuf docs</a>, you should set a oneof field by setting one of the child fields. So a <code>DeleteFromFamily</code> mutation should be created this way:</p>
<pre><code>mutation.delete_from_family.family_name = 'some_family'
</code></pre>
<p>However, how do I do that for the <code>DeleteFromRow</code> message that has no fields?</p>
|
<p>You can use <a href="https://developers.google.com/protocol-buffers/docs/reference/python/google.protobuf.message.Message-class.html#SetInParent" rel="nofollow noreferrer">Message.SetInParent</a>:</p>
<blockquote>
<p>Mark this as present in the parent.</p>
<p>This normally happens automatically when you assign a field of a sub-message, but sometimes you want to make the sub-message present while keeping it empty. If you find yourself using this, you may want to reconsider your design.</p>
</blockquote>
<p>Example:</p>
<pre><code>message Msg {
  oneof kind {
    int64 int_field = 1;
    EmptyMsg msg_field = 2;
  }
}

message EmptyMsg {}
</code></pre>
<pre><code>msg = Msg()
print(msg.WhichOneof('kind'))  # None
msg.msg_field                  # No-op (returns EmptyMsg but doesn't set the oneof field)
print(msg.WhichOneof('kind'))  # None
msg.msg_field.SetInParent()
print(msg.WhichOneof('kind'))  # msg_field
</code></pre>
|
python|grpc|bigtable
| 2 |
1,903,824 | 50,957,993 |
Fill all values in a group with the first non-null value in that group
|
<p>The following is the pandas dataframe I have:</p>
<pre><code>cluster Value
1 A
1 NaN
1 NaN
1 NaN
1 NaN
2 NaN
2 NaN
2 B
2 NaN
3 NaN
3 NaN
3 C
3 NaN
4 NaN
4 S
4 NaN
5 NaN
5 A
5 NaN
5 NaN
</code></pre>
<p>If we look into the data, cluster 1 has Value 'A' for one row and remain all are NA values. I want to fill 'A' value for all the rows of cluster 1. Similarly for all the clusters. Based on one of the values of the cluster, I want to fill the remaining rows of the cluster. The output should be like,</p>
<pre><code>cluster Value
1 A
1 A
1 A
1 A
1 A
2 B
2 B
2 B
2 B
3 C
3 C
3 C
3 C
4 S
4 S
4 S
5 A
5 A
5 A
5 A
</code></pre>
<p>I am new to python and not sure how to proceed with this. Can anybody help with this ?</p>
|
<h3><code>groupby</code> + <code>bfill</code>, and <code>ffill</code></h3>
<pre><code>df = df.groupby('cluster').bfill().ffill()
df
cluster Value
0 1 A
1 1 A
2 1 A
3 1 A
4 1 A
5 2 B
6 2 B
7 2 B
8 2 B
9        3     C
10       3     C
11 3 C
12 3 C
13 4 S
14 4 S
15 4 S
16 5 A
17 5 A
18 5 A
19 5 A
</code></pre>
<hr />
<p>Or,</p>
<h3><code>groupby</code> + <code>transform</code> with <code>first</code></h3>
<pre><code>df['Value'] = df.groupby('cluster').Value.transform('first')
df
cluster Value
0 1 A
1 1 A
2 1 A
3 1 A
4 1 A
5 2 B
6 2 B
7 2 B
8 2 B
9         3     C
10        3     C
11 3 C
12 3 C
13 4 S
14 4 S
15 4 S
16 5 A
17 5 A
18 5 A
19 5 A
</code></pre>
|
python|pandas|dataframe|nan
| 5 |
1,903,825 | 3,761,871 |
How to add the binary of a int with the binary of a string
|
<p>Basically I want to be able to take a 32-bit int and attach its binary representation to the binary of a string.
E.g. (I'm going to use 8-bit instead of 32-bit) I want
255 + hi
11111111 + 0110100001101001 = 111111110110100001101001
So the int keeps its binary value; I don't care how it comes out, I just want to be able to send the data over a socket.</p>
<p>(This is all over websockets and the new sec-websocket-key's to stop hacking, if anyone just knows how to do the websocket handshake that would be just as nice)</p>
<p>Thankyou ! I have been trying on this for days and im not one to come to this type of website to get the answer</p>
<h2>EDIT</h2>
<p>Ive been ask to give more info so her is the full deal. I have connected to the user of a stream port, he has sent me headers now i need to reply to to complete the connection. The import data is</p>
<p>Sec-WebSocket-Key1: 4 @1 46546xW%0l 1 5"Random string following rules"(i will call this sk1)</p>
<p>Sec-WebSocket-Key2: 12998 5 Y3 1 .P00 "Random string following rules"(i will call this sk2)</p>
<p>^n:ds[4U "Random string following rules"(i will call this sk3)</p>
<p>1) int1 = compress the numbers into sk1 and divid them by the amount of spaces in sk1</p>
<p>2) int2 = compress the numbers into sk2 and divid them by the amount of spaces in sk2</p>
<p>3) fullapend = Add append the bytes of int2 to int1 then append the bytes in sk3</p>
<p>4) Finally MD5 digest fullapend</p>
<p>5) Send the final result to the host along with some other headers and if they match up the connection holds open</p>
<p>That is everything that needs to happen and i have not got a clue how to do it</p>
<h2>Finished !</h2>
<p>Well, basically both answers were right, and I would like to apologise if I seemed a bit rude; I didn't know that \x was an escape prefix meaning binary, but that worked a treat. Once I have the finished function to connect, send, etc., I will post it here and elsewhere for anyone else that's stuck. Again, thank you!</p>
|
<p>Something like </p>
<pre><code>struct.pack("!i%ds" % len(your_string), your_int, your_string)
</code></pre>
<p>should do pretty much what you want !</p>
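<p>For the 8-bit example in the question (a sketch; note that on Python 3 the string must be <code>bytes</code>):</p>
<pre><code>>>> import struct
>>> struct.pack("!i%ds" % len(b"hi"), 255, b"hi")
b'\x00\x00\x00\xffhi'
</code></pre>
<p>That is the 4-byte big-endian encoding of 255 followed by the raw bytes of "hi".</p>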
|
python|websocket
| 3 |
1,903,826 | 50,477,826 |
how to set desired error threshold in keras?
|
<p>I'm trying to learn how to use Keras, and I'm wondering if I can set my own error threshold, but I'm confused. Can someone help me? Suppose I want the learning process to stop when the error reaches 0.02; how do I do that? Thank you for the help. </p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape=(dimData,)))
model.add(Dense(512, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(nClasses, activation = 'softmax'))
#configure the network
model.compile(optimizer = 'rmsprop', loss = 'categorical_crossentropy', metrics=['accuracy'])
#train the network
history = model.fit(train_data, train_labels_one_hot, batch_size=256, epochs = 20, verbose =1,
validation_data=(test_data, test_labels_one_hot))
</code></pre>
|
<p>I think what you are looking for is called early stopping; this is the code you would use:</p>
<pre><code>from keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(
    monitor='val_loss',
    patience=0,
    verbose=0,
    mode='auto'
)
model.fit(X, y, validation_split=0.2, callbacks=[early_stopping])
</code></pre>
<p>Set <code>monitor</code> to the quantity you want to watch, e.g. <code>'val_acc'</code> to stop once validation accuracy stops improving.</p>
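<p><code>EarlyStopping</code> stops when the monitored quantity stops <em>improving</em>; to stop at an absolute threshold such as a loss of 0.02, a small custom callback works (a sketch, with an illustrative class name and threshold):</p>
<pre><code>from keras.callbacks import Callback

class StopAtLoss(Callback):
    def __init__(self, threshold=0.02):
        super(StopAtLoss, self).__init__()
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        # Stop training once the training loss drops to the threshold.
        if logs is not None and logs.get('loss', float('inf')) <= self.threshold:
            self.model.stop_training = True

model.fit(X, y, validation_split=0.2, callbacks=[StopAtLoss(0.02)])
</code></pre>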
|
python|neural-network|keras
| 1 |
1,903,827 | 50,663,428 |
midi2audio/FluidSynth: [WinError 2] The system cannot find the file specified
|
<p>I am using the python package midi2audio to translate a midi file into a WAV.</p>
<p>Running:</p>
<pre><code>filepath = 'C:/Users/Jack/Documents/GaTech/Research/Code/Data/Midi/C4/test12.mid'
soundfont = 'C:/Users/Jack/Downloads/weedsgm3.sf2'
fs = FluidSynth(soundfont)
if os.path.isfile(filepath):
print('The File Exists')
else:
print('The File does not exist')
fs.midi_to_audio(filepath, 'output.wav')
</code></pre>
<p>Outputs: </p>
<pre><code>The File Exists
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
<p>To be clear the error is referencing the file specified in filepath and not soundfont. There is little documentation on the package so I am not sure what to do.</p>
<p>Has anyone with midi2audio experience run into the same issue and figured out the root of the problem? </p>
|
<pre><code>fs = FluidSynth()
</code></pre>
<p>This creates a <code>FluidSynth</code> object, with the default values for all the constructor's parameters.</p>
<pre><code>FluidSynth(sample_rate=22050)
</code></pre>
<p>This creates a second <code>FluidSynth</code> object. The object reference is not assigned to any variable, so it is thrown away immediately.</p>
<pre><code>FluidSynth(soundfont)
</code></pre>
<p>And a third object.</p>
<pre><code>fs.midi_to_audio(filepath, 'output.wav')
</code></pre>
<p>The object referenced by <code>fs</code> uses the default sound font and the default sample rate.</p>
<p>You have to give all the parameters to the constructor at once:</p>
<pre><code>fs = FluidSynth(sound_font=soundfont, sample_rate=22050)
</code></pre>
<p>(And it might be a good idea to specify the full path to the output file.)</p>
|
python-3.x|midi|audio-converter|fluidsynth
| 1 |
1,903,828 | 35,105,939 |
Take a PDF file and take every word from it into a dictionary setting it equal to default 0
|
<p>Okay, so what I'm trying to do is take a URL for a PDF file, or just simply open the PDF file in the program, and take out every word from it. Then put each word inside a dictionary, setting its default value to zero. My problem is that when I try to get the PDF from a URL it either just goes right to the PDF file on the internet, or it just takes every line from it rather than every word. I've tried with .txt files and it ends up giving every line as well, instead of every word. </p>
<p>Here is some code I've tried:</p>
<pre><code>run = open('Harry.txt')
def words(file):
docline = {}
docwords = {}
for line in file:
docline[line] = 0
for word in docline:
docwords[word] = 0
return docwords
print(dict(words(run)))
</code></pre>
|
<p>This should work:</p>
<pre><code>run = open('Harry.txt')
def words(file):
docwords = {}
for line in file:
for word in line.split():
docwords[word] = 0
return docwords
print(words(run))  # words() already returns a dict
</code></pre>
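<p>Equivalently, <code>dict.fromkeys</code> can build the same zero-initialised dictionary in one line (a sketch, same file as above):</p>
<pre><code>with open('Harry.txt') as f:
    docwords = dict.fromkeys((word for line in f for word in line.split()), 0)
</code></pre>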
|
python|pdf|dictionary|text
| 0 |
1,903,829 | 26,762,511 |
NameError: name 'OpenKey' is not defined using winreg
|
<p>In Python, I'm trying to open a regedit Key to add String value to it. However, it's somehow not recognizing the <code>OpenKey()</code> or <code>ConnectRegistry</code> method.</p>
<pre><code>import winreg
import sys
#Create 2 keys with unique GUIDs as Names
KeyName1 = "AppEvents\{Key1}"
KeyName2 = "AppEvents\{Key2}"
KeyName1_Path = "C:\Install\Monitor\Path.asmtx"
winreg.CreateKey(winreg.HKEY_CURRENT_USER, KeyName1)
winreg.CreateKey(winreg.HKEY_CURRENT_USER, KeyName2)
#Add String as Path
# aReg = ConnectRegistry(None,HKEY_CURRENT_USER) #NameError: name 'ConnectRegistry' is not defined
keyVal=OpenKey(winreg.HKEY_CURRENT_USER,r"AppEvents\{Key2}", 0,KEY_WRITE) # NameError: name 'OpenKey' is not defined
SetValueEx(keyVal,"Path",0,REG_SZ, KeyName1_Path)
</code></pre>
|
<p>As you have imported it with <code>import winreg</code> you need to refer to all methods within that name space using <code>winreg.xxxxxx</code>. </p>
<p>As such, you need to use <code>winreg.OpenKey</code> and <code>winreg.ConnectRegistry</code>.</p>
<p>Alternatively, you could do</p>
<pre><code>from winreg import CreateKey, OpenKey, ConnectRegistry, etc
</code></pre>
<p>This would then allow you to use <code>CreateKey</code>, etc without the need of the <code>winreg</code> prefix.</p>
|
python|regedit|winreg
| 2 |
1,903,830 | 45,132,809 |
How to select batch size automatically to fit GPU?
|
<p>I am training deep neural networks with a GPU. If I make samples too large, batches too large, or networks too deep, I get an out of memory error. In this case, it is sometimes possible to make smaller batches and still train.</p>
<p>Is it possible to calculate GPU size required for training and determine what batch size to choose beforehand?</p>
<p><strong>UPDATE</strong></p>
<p>If I print network summary, it displays number of "trainable parameters". Can't I estimate from this value? For example, take this, multiply by batch size, double for gradients etc?</p>
|
<p>PyTorch Lightning recently added a feature called "auto batch size", especially for this! It computes the max batch size that can fit into the memory of your GPU :)</p>
<p>More info can be found <a href="https://pytorch-lightning.readthedocs.io/en/1.1.1/training_tricks.html#auto-scaling-of-batch-size" rel="noreferrer">here</a>.</p>
<p>Original PR: <a href="https://github.com/PyTorchLightning/pytorch-lightning/pull/1638" rel="noreferrer">https://github.com/PyTorchLightning/pytorch-lightning/pull/1638</a></p>
|
tensorflow|out-of-memory|deep-learning|gpu|keras
| 8 |
1,903,831 | 60,683,446 |
How to extract date from a filename using regular expression
|
<p>I have to handle a long filename in a specific format that contains two dates and someone's full name. Here is a template that describes this format:</p>
<p><code>firstname_middlename_lastname_yyyy-mm-dd_text1_text2_yyyy-mm-dd.xls</code></p>
<p>How to extract the fullname, first date, and second date from that filename using regular expression?</p>
<p>I've tried to extract the first date like:</p>
<pre><code>string1 = 'CHEN_MOU_MOU_1999-04-11_Scientific_Report_2020-03-14.xlsx'
ptn = re.compile('\b(\d{4}-\d{2}-\d{2})\b')
print(ptn.match(string1))
</code></pre>
<p>But it doesn't seem to work. The output I get is <code>None</code>.</p>
<p>Any help will be appreciated.</p>
|
<p>The reason your solution does not work is because <code>_</code> is considered an alphanumeric character in Python.</p>
<p>From <a href="https://docs.python.org/3/howto/regex.html#matching-characters" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p><code>\w</code><br>
Matches any alphanumeric character; this is equivalent to the class <code>[a-zA-Z0-9_]</code>.</p>
</blockquote>
<p>So <code>\b</code> does not match <code>_</code> in your string. But it'll match <code>-</code>.</p>
<p>From <a href="https://docs.python.org/3/howto/regex.html#more-metacharacters" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p><code>\b</code>
This is a zero-width assertion that matches only at the beginning or end of a word. A word is defined as a sequence of alphanumeric characters, so the end of a word is indicated by whitespace or a non-alphanumeric character.</p>
</blockquote>
<p>But if you replace <code>_</code> around your dates with a <code>-</code> (hyphen), then your solution works just fine.</p>
<pre><code>>>> import re
>>> string1 = 'CHEN_MOU_MOU-1999-04-11-Scientific Report-2020-03-14.xlsx'
>>> ptn = re.compile(r'\b(\d{4}-\d{2}-\d{2})\b')
>>> ptn.findall(string1)
['1999-04-11', '2020-03-14']
</code></pre>
<p>Following is a solution that should work for your task:</p>
<pre><code>$ python
Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 21:26:53) [MSC v.1916 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> string1 = 'CHEN_MOU_MOU_1999-04-11_Scientific_Report_2020-03-14.xlsx'
>>> fullnamepattern = r'[a-zA-Z]+_[a-zA-Z]+_[a-zA-Z]+'
>>> datepattern = r'\d{4}-\d{2}-\d{2}'
>>> re.search(fullnamepattern, string1).group()
'CHEN_MOU_MOU'
>>> re.findall(datepattern, string1)
['1999-04-11', '2020-03-14']
</code></pre>
|
python|regex
| 2 |
1,903,832 | 58,090,277 |
Transpose rows to column, while flattening dataframe based on group
|
<p>I have the following dataframe...</p>
<pre><code>idx Group key value Time IsTrue
1 bicycle person yes 9:30 yes
2 bicycle name bob 9:30 yes
3 bicycle alive yes 9:30 yes
5 non-cycle person no 1:30 no
6 non-cycle name jack 1:30 no
</code></pre>
<p>And I want to the following result from the dataframe </p>
<pre><code>idx Group Time IsTrue person name alive
1 bicycle 9:30 yes yes bob yes
2 non-cycle 1:30 no no jack NA
</code></pre>
<p>Where the key columns become new columns and values are the rows for those new columns. All the others rows have the same always have the same values except for key and value columns. The keys change so I am going for something dynamic. </p>
<p>My current solution uses a pandas groupby & apply (based on the Group column), and creates a new dataframe for each group, but that seems way over engineered. Any simpler solutions to this? </p>
|
<p><strong>edit</strong>:<br>
as you fixed output. I added another solution using <code>set_index</code> and <code>unstack</code></p>
<pre><code>df.set_index(['Group', 'Time', 'IsTrue', 'key'])['value'].unstack().reset_index()
Out[503]:
key Group Time IsTrue alive name person
0 bicycle 9:30 yes yes bob yes
1 non-cycle 1:30 no NaN jack no
</code></pre>
<hr>
<p><strong>Original:</strong><br>
Your desired output is confusing. Let's try this solution if it is what you want. If it is not, I will delete it</p>
<pre><code>df.pivot_table(index=['Group', 'Time', 'IsTrue'], columns='key', values='value', aggfunc='first').reset_index()
Out[487]:
key Group Time IsTrue alive name person
0 bicycle 9:30 yes yes bob yes
1 non-cycle 1:30 no NaN jack no
</code></pre>
|
python|pandas|dataframe
| 4 |
1,903,833 | 59,183,664 |
Iterating over a dictionary and creating an object for each key with its values within the same object
|
<pre><code>import random
import os
import json
class User:
    def __init__(self, username, password, balance):
self.username = username
self.password = password
self.balance = balance
def file_read(source):
with open (source) as file:
data = file.read()
dictionary = json.loads(data)
return dictionary
</code></pre>
<p>and then the external file is this </p>
<pre><code>{"John":["pass123", 2000], "Jenson": ["pass123", 2000]}
</code></pre>
<p>My initial thought was to use a
<code>for items in dict</code>
loop, but I am unsure how to create multiple objects from that, preferably named by username.
Thank you.</p>
|
<p>Simple solution using a dict comprehension and var-args:</p>
<pre class="lang-py prettyprint-override"><code>{ k: User(k, *v) for k, v in file_read(filename).items() }
</code></pre>
<p>Alternatively you can do it with destructuring:</p>
<pre class="lang-py prettyprint-override"><code>{ k: User(k, pw, bal) for k, (pw, bal) in file_read(filename).items() }
</code></pre>
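<p>Either way you get a dict of <code>User</code> objects keyed by username (assuming the <code>User</code> class from the question, with <code>filename</code> pointing at your JSON file):</p>
<pre class="lang-py prettyprint-override"><code>users = { k: User(k, *v) for k, v in file_read(filename).items() }
print(users['John'].balance)  # 2000
</code></pre>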
|
python|dictionary|oop
| 1 |
1,903,834 | 58,524,853 |
Google API Response Data: Count most frequent dictionary values from list of dictionaries
|
<p>I have a Python script (Python 3.7) that accesses a Google Sheet and gets all of the data from the sheet using the <code>get_all_records()</code> method from the <code>gspread</code> library.</p>
<p>The response data is a list of dictionaries, with each row from the google sheet represented as it's own dictionary, and the key/value pairs as the column header and row values respectively like so:</p>
<pre><code>[{'Away Team': 'Gillingham',
'Bet': 'Over 2.5 Goals',
'Home Team': 'AFC Wimbledon',
'Timestamp': '10/17/2019 10:36:01'},
{'Away Team': 'Liverpool',
'Bet': 'Home Win',
'Home Team': 'Man United',
'Timestamp': '10/18/2019 22:59:18'},
{'Away Team': 'Newcastle',
'Bet': 'BTTS',
'Home Team': 'Arsenal',
'Timestamp': '10/18/2019 22:59:31'},
{'Away Team': 'Man City',
'Bet': 'BTTS',
'Home Team': 'Everton',
'Timestamp': '10/20/2019 20:29:45'},
{'Away Team': 'Man City',
'Bet': 'BTTS',
'Home Team': 'Everton',
'Timestamp': '10/20/2019 20:29:52'},
{'Away Team': 'Man City',
'Bet': 'BTTS',
'Home Team': 'Everton',
'Timestamp': '10/20/2019 20:30:00'},
{'Away Team': 'Man City',
'Bet': 'BTTS',
'Home Team': 'Everton',
'Timestamp': '10/20/2019 20:30:02'},
{'Away Team': 'Newcastle',
'Bet': 'BTTS',
'Home Team': 'Arsenal',
'Timestamp': '10/18/2019 22:59:31'}]
</code></pre>
<p>The values of the <code>'Bet'</code> key can be 1 of 8 values. For each unique value, I want to count the frequency of the values in the <code>'Home Team'</code> key across all of the dictionaries.</p>
<p>In the example above, the most frequent <code>'Home Team'</code> value for key-value pair <code>'Bet': 'BTTS'</code> is Everton</p>
<p>I tried creating a new dictionary for each unique <code>'Bet'</code> key value using <code>defaultdict</code> from the <code>collections</code> module, but I soon realized I could only create new dictionaries with the <code>'Home Team'</code> values as keys and the <code>'Bet'</code> value as the value, which doesn't capture frequency.</p>
<p>The data on the sheet is collected via a Google form so I can be assured of the integrity of the data captured as the form only allows values to be selected from predefined drop-downs or radio buttons.</p>
<p>Some advice or pointers in the right direction on modules/techniques to help me out here would be greatly appreciated.</p>
|
<p>I'm not familiar with the collections module, but it's really easy to achieve using old-school, plain Python.</p>
<p>Supposing we have your list of dictionaries stored in a variable called <code>raw_dicts</code>. Allow me to suggest parsing it into a more convenient data structure for our task:</p>
<pre><code>parsed_dict = dict()
for dictionary in raw_dicts:
bet = dictionary['Bet']
if bet not in parsed_dict.keys():
parsed_dict[bet] = dict()
if dictionary['Home Team'] not in parsed_dict[bet].keys():
parsed_dict[bet][dictionary['Home Team']] = 0
parsed_dict[bet][dictionary['Home Team']] += 1
</code></pre>
<p>What I did here is creating a counter for each bet-team pair. We get this beautiful dictionary:</p>
<pre><code>{
"Over 2.5 Goals": {
"AFC Wimbledon": 1
},
"Home Win": {
"Man United": 1
},
"BTTS": {
"Everton": 4,
"Arsenal": 2
}
}
</code></pre>
<p>Now that we have such a nice dictionary, all we're left with is a simple maximization problem, whose solution is by the textbook:</p>
<pre><code>most_frequent_bet = ""
most_frequent_team = ""
highest_frequency = 0
for bet in parsed_dict.keys():
for team in parsed_dict[bet].keys():
if parsed_dict[bet][team] > highest_frequency:
most_frequent_bet = bet
most_frequent_team = team
highest_frequency = parsed_dict[bet][team]
</code></pre>
<p>This could have been achieved by any number of other methods, some a lot more elegant and short than mine. What I wanted to do here is an easy-to-read code that goes step-by-step.</p>
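<p>For reference, <code>collections.Counter</code> can do the counting and the maximization in a couple of lines (a sketch, assuming <code>raw_dicts</code> as above):</p>
<pre><code>from collections import Counter

pair_counts = Counter((d['Bet'], d['Home Team']) for d in raw_dicts)
(bet, team), freq = pair_counts.most_common(1)[0]
# ('BTTS', 'Everton'), 4 for the sample data
</code></pre>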
|
python|python-3.7
| 0 |
1,903,835 | 58,268,688 |
python - Loop outputs same timestamp every time
|
<p>I have this code that reads outputs from other scripts and then inserts them into a database. Additionally, a timestamp is added.</p>
<pre class="lang-py prettyprint-override"><code> import time
import os
import subprocess
import sys
from time import sleep
import datetime
import sqlite3
import fnmatch, shutil
sensorID = "1"
dbname = 'sensorsData.db'
t = time.localtime()
timestamp = time.strftime('%Y-%m-%d %H:%M:%S', t)
refresh = 300 #time in seconds , getting new data from sensors
#get data from sensor
def gettemp():
temp = subprocess.check_output("sudo python /home/pi/AIRQMONITOR/temp.py", shell=True)
print(timestamp)
return(temp)
def getpm25():
pm25 = subprocess.check_output("sudo python /home/pi/AIRQMONITOR/pm2.py", shell=True)
return (pm25)
def getpm10():
pm10 = subprocess.check_output("sudo python /home/pi/AIRQMONITOR/pm10.py", shell=True)
return (pm10)
def getco():
co = subprocess.check_output("sudo python /home/pi/AIRQMONITOR/co.py", shell=True)
return(co)
#log data
def logdata(temp,co,pm25,pm10):
conn = sqlite3.connect(dbname)
curs=conn.cursor()
curs.execute("INSERT INTO sensors values(?,?,?,?,?)", (timestamp, temp, co, pm25, pm10))
curs.execute("INSERT INTO temperatures values(?,?,?)", (timestamp, sensorID, temp ))
curs.execute("INSERT INTO co values(?,?,?)", (timestamp, sensorID, co ))
curs.execute("INSERT INTO pm25 values(?,?,?)", (timestamp, sensorID, pm25 ))
curs.execute("INSERT INTO pm10 values(?,?,?)", (timestamp, sensorID, pm10 ))
conn.commit()
conn.close()
#main
def main():
while True:
temp = gettemp()
pm25 = getpm25()
pm10 = getpm10()
co = getco()
logdata(temp, co, pm25, pm10)
time.sleep(refresh)
#-----execute program... gooo!
main()
</code></pre>
<p>But, at every loop, the same timestamp from the first run is outputted:</p>
<p><a href="https://i.stack.imgur.com/E1aMD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E1aMD.png" alt="terminal outputs showing same timestamp after 3 loops"></a></p>
<p>How can this be?
Thanks in advance!</p>
|
<p>You run these lines at the start of your code:</p>
<pre><code>t = time.localtime()
timestamp = time.strftime('%Y-%m-%d %H:%M:%S', t)
</code></pre>
<p>These only run once (not in a loop) and the timestamp in them never changes. If you want to update the timestamp you should make those lines part of your loop, or better yet, use a function:</p>
<pre><code>def get_timestamp():
t = time.localtime()
return time.strftime('%Y-%m-%d %H:%M:%S', t)
</code></pre>
<p>And replace every time you try to use <code>timestamp</code> with <code>get_timestamp()</code></p>
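<p>Since the script already imports <code>datetime</code>, an equivalent one-liner would be <code>datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')</code>.</p>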
|
python|timestamp
| 3 |
1,903,836 | 22,856,930 |
Different class variables and usage
|
<p>In a class in Python there are several ways to assign something to a variable.<br>
I never know where I should do it, what the difference in usage is, and in which scenario I should use each variant.</p>
<pre><code>class Class(object):
NUMBER = 31415
foo = 'FOO'
def __init__(self):
self.foobar = 'foobar'
</code></pre>
|
<p>Class variables are shared between all instances of a class. That turns out to be pretty self-explanatory. Instance variables however are local to each instantiated object.</p>
<p>Here's an example, first we instantiate a bunch of classes.</p>
<pre><code>In [19]: classes = [Class() for _ in range(5)]
In [20]: classes
Out[20]:
[<__main__.Class at 0x20bb290>,
<__main__.Class at 0x20bb2d0>,
<__main__.Class at 0x20bb310>,
<__main__.Class at 0x20bb350>,
<__main__.Class at 0x20bb4d0>]
</code></pre>
<p>And then we change the <code>NUMBER</code>-variable of <code>Class</code></p>
<pre><code>In [21]: Class.NUMBER = "Hah!"
In [22]: print [x.NUMBER for x in classes]
['Hah!', 'Hah!', 'Hah!', 'Hah!', 'Hah!']
</code></pre>
<p>However, once you've instantiated the objects, you can change <code>x.NUMBER</code>, and once you do the change <em>is</em> local to that object. I understand this can be quite confusing.</p>
<p>Whereas we cannot even touch the <code>foobar</code>-value yet, as it does not exist before the object is instantiated:</p>
<pre><code>In [23]: Class.foobar
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-23-c680e41ebe07> in <module>()
----> 1 Class.foobar
AttributeError: type object 'Class' has no attribute 'foobar'
</code></pre>
<p>This has nothing to do with whether or not the class attributes are upper or lowercase. We can just as well access <code>Class.foo</code>. The point is that everything before the <code>__init__</code> exists <em>before</em> that class is instantiated to an object. Once that class is instantiated as an object, the object will have the <em>instance</em> attributes, namely <code>obj.foobar</code>.</p>
|
python|global-variables|class-variables
| 3 |
1,903,837 | 45,402,583 |
Check If All Elements in a List Are Equal to Another Value
|
<p>How do I check if every value in a list is equal to another value, x? For example, if I had a list that was completely full of the number 100, how would I return false based on that condition. Or if a list was full of the number 100 except for one single element which was 88, then I'd want to return true and for the if statement to execute.</p>
<p>Thank you.</p>
|
<p>Python has an builtin <code>any()</code> function, e.g.:</p>
<pre><code>In []:
lst = [100]*5
x = 100
print(lst)
any(a != x for a in lst)
Out[]:
[100, 100, 100, 100, 100]
False
In []:
lst[2] = 88
print(lst)
any(a != x for a in lst)
Out[]:
[100, 100, 88, 100, 100]
True
</code></pre>
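<p>Equivalently, <code>not all(a == x for a in lst)</code> expresses the same check with the builtin <code>all()</code>.</p>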
|
python|list|list-manipulation
| 1 |
1,903,838 | 14,568,476 |
Python: why except is called even after exit
|
<p>Please check the below code,</p>
<pre><code>import sys
try:
seq=eval(raw_input("Enter seq number: "))
if seq <= 0 or seq >= 9999:
print "Sequence number not in range [0001-9999]"
sys.exit(1)
except:
print "!!! Sequence number not in range [0001-9999]"
sys.exit(1)
</code></pre>
<p>I gave a string to <code>eval raw_input</code> function.</p>
<pre><code>$> python test.py
Enter seq number: "12"
Sequence number not in range [0001-9999]
!!! Sequence number not in range [0001-9999]
</code></pre>
<p>Why is it not exiting even after receiving <code>exit</code> call? </p>
|
<p><code>sys.exit</code> just raises an exception (<code>SystemExit</code>), which is then caught. As a demonstration:</p>
<pre><code>import sys
import traceback
try:
sys.exit(1)
except:
print "This threw the following exception:"
traceback.print_exc()
# This threw the following exception:
# Traceback (most recent call last):
# File "test.py", line 5, in <module>
# sys.exit(1)
# SystemExit: 1
</code></pre>
|
python|python-2.7
| 8 |
1,903,839 | 41,472,676 |
Searching for a word in a url error
|
<p>I have one million odd urls and search terms in a text file with unique ID. I need to open the urls and search for the searchterms, if present represent as <code>1</code> else <code>0</code>. </p>
<p>Input file:</p>
<pre><code>"ID" "URL","SearchTerm1","Searchterm2"
"1","www.google.com","a","b"
"2","www.yahoo.com","f","g"
"3","www.att.net","k"
"4" , "www.facebook.com","cs","ee"
</code></pre>
<p>Code Snippet:</p>
<pre><code>import urllib2
import re
import csv
import datetime
from BeautifulSoup import BeautifulSoup
with open('txt.txt') as inputFile, open ('results.txt','w+') as proc_seqf:
header = 'Id' + '\t' + 'URL' + '\t'
for i in range(1,3):
header += 'Found_Search' + str(i) + '\t'
header += '\n'
proc_seqf.write(header)
for line in inputFile:
line=line.split(",")
url = 'http://' + line[1]
req = urllib2.Request(url, headers={'User-Agent' : "Magic Browser"})
html_content = urllib2.urlopen(req).read()
soup = BeautifulSoup(html_content)
if line[2][0:1] == '"' and line[2][-1:] == '"':
line[2] = line[2][1:-1]
matches = soup(text=re.compile(line[2]))
#print soup(text=re.compile(line[2]))
#print matches
if len(matches) == 0 or line[2].isspace() == True:
output_1 =0
else:
output_1 =1
#print output_1
#print line[2]
if line[3][0:1] == '"' and line[3][-1:] == '"':
line[3] = line[3][1:-1]
matches = soup(text=re.compile(line[3]))
if len(matches) == 0 or line[3].isspace() == True:
output_2 =0
else:
output_2 =1
#print output_2
#print line[3]
proc_seqf.write("{}\t{}\t{}\t{}\n".format(line[0],url,output_1, output_2))
</code></pre>
<p>output File: </p>
<pre><code>ID,SearchTerm1,Searchterm2
1,0,1
2,1,0
3,0
4,1,1
</code></pre>
<p>Two issues with the code:</p>
<ol>
<li><p>when I run around 200 urls at once it gives me <code>urlopen error [Errno 11004] getaddrinfo failed error</code>.</p></li>
<li><p>Is there a way to search something which closely matches but not exact match?</p></li>
</ol>
|
<blockquote>
<p>when I run around 200 urls at once it gives me urlopen error [Errno 11004]
getaddrinfo failed error.</p>
</blockquote>
<p>This error message is telling you that the DNS lookup for the server hosting
the url has failed.</p>
<p>This is a outside the control of your program, but you can decide how to
handle the situation.</p>
<p>The simplest approach is to trap the error, log it and carry on:</p>
<pre><code>try:
html_content = urllib2.urlopen(req).read()
except urllib2.URLError as ex:
print 'Could not fetch {} because of {}, skipping.'.format(url, ex)
# skip the rest of the loop
continue
</code></pre>
<p>However, it's possible that the error is transient, and that the lookup will
work if you try later; for example, perhaps the DNS server is configured to
reject incoming requests if it receives too many in too short a space of time.<br>
In this situation, you can write a function to retry after a delay:</p>
<pre><code>import time
class FetchException(Exception):
    pass

def fetch_url(req, retries=5):
    for i in range(1, retries + 1):
        try:
            html_content = urllib2.urlopen(req).read()
        except urllib2.URLError as ex:
            print 'Could not fetch {} because of {}, retrying.'.format(req.get_full_url(), ex)
            time.sleep(1 * i)
            continue
        else:
            return html_content
    # if we reach here then all lookups have failed
    raise FetchException()

# In your main code
try:
    html_content = fetch_url(req)
except FetchException:
    print 'Could not fetch {} after several retries, skipping.'.format(url)
    # skip the rest of the loop
    continue
</code></pre>
<blockquote>
<p>Is there a way to search something which closely matches but not exact match?</p>
</blockquote>
<p>If you want to match a string with an optional trailing dot, use the <code>?</code> modifier.</p>
<p>From the <a href="https://docs.python.org/2/library/re.html#regular-expression-syntax" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>Causes the resulting RE to match 0 or 1 repetitions of the preceding RE. ab? will match either βaβ or βabβ.</p>
</blockquote>
<pre><code>>>> s = 'Abc In'
>>> m = re.match(r'Abc In.', s)
>>> m is None
True
# Surround `.` with brackets so that `?` only applies to the `.`
>>> m = re.match(r'Abc In(.)?', s)
>>> m.group()
'Abc In'
>>> m = re.match(r'Abc In(.)?', 'Abc In.')
>>> m.group()
'Abc In.'
</code></pre>
<p>Notice the <code>r</code> character preceding the regex patterns. This denotes a <a href="https://docs.python.org/2/library/re.html#raw-string-notation" rel="nofollow noreferrer">raw string</a>. It's good practice to use raw strings in your regex patterns because they make it much easier to handle backslash (<code>\</code>) characters, which are very common in regexes.</p>
<p>So you could construct a regex to match optional trailing dots like this:</p>
<p><code>matches = soup(text=re.compile(r'{}(.)?'.format(line[2])))</code></p>
|
python|python-2.7|web-scraping
| 2 |
1,903,840 | 6,709,067 |
Python - regex - Splitting string before word
|
<p>I am trying to split a string in python before a specific word. For example, I would like to split the following string before <code>"path:"</code>. </p>
<ul>
<li>split string before <code>"path:"</code></li>
<li>input: <code>"path:bte00250 Alanine, aspartate and glutamate metabolism path:bte00330 Arginine and proline metabolism"</code></li>
<li>output: <code>['path:bte00250 Alanine, aspartate and glutamate metabolism', 'path:bte00330 Arginine and proline metabolism']</code></li>
</ul>
<p>I have tried </p>
<pre><code>rx = re.compile("(:?[^:]+)")
rx.findall(line)
</code></pre>
<p>This does not split the string anywhere. The trouble is that the values after <code>"path:"</code> will never be known to specify the whole word. Does anyone know how to do this?</p>
|
<p>using a regular expression to split your string seems a bit overkill: the string <code>split()</code> method may be just what you need.</p>
<p>anyway, if you really need to match a regular expression in order to split your string, you should use the <a href="http://docs.python.org/library/re.html#re.split" rel="noreferrer"><code>re.split()</code></a> method, which splits a string upon a regular expression match.</p>
<p>also, use a correct regular expression for splitting:</p>
<pre><code>>>> line = 'path:bte00250 Alanine, aspartate and glutamate metabolism path:bte00330 Arginine and proline metabolism'
>>> re.split(' (?=path:)', line)
['path:bte00250 Alanine, aspartate and glutamate metabolism', 'path:bte00330 Arginine and proline metabolism']
</code></pre>
<p>the <code>(?=...)</code> group is a lookahead assertion: the expression matches a space <em>(note the space at the start of the expression)</em> which is followed by the string <code>'path:'</code>, without consuming what follows the space.</p>
|
python|regex|string|split|splice
| 5 |
1,903,841 | 53,973,175 |
The for-loop only loops once, so it's like there was no loop used
|
<p>So, first thing: I'm new to Python and I came across a simple but still complicated problem. Basically I try to loop over all of the items in a list and run each through a conditional check to see if it's the one.</p>
<p>This is to check if a sentence is a greeting.</p>
<pre><code>greets = ["Hi","Hello", "Hey"]
#Thinking
def isGreet(mes): #Checks if it's a greeting
words = mes.split()
for greet in greets:
print(greet)
if (words[0]==greet):
return 1;
else:
return 0;
</code></pre>
<p>When a user types something in, the code should check whether it's a greeting: return true if it is, false if it's not. Simple, isn't it? But the code only returns true when the input is "Hi"; when I type, say, "hello there", it returns false. I added a print call to see if the loop works, but it only prints "Hi", so I concluded that something must be wrong with the for loop. Really appreciate any help.</p>
|
<blockquote>
<p>The for-loop only loops once, so it's like there was no loop used</p>
</blockquote>
<p>Yes, because you return from the function no matter what on the first iteration. So your test only works when the first word of the message matches the first greeting in the list; otherwise it returns 0.</p>
<p>no need for a loop, use <code>in</code></p>
<pre><code>greets = {"Hi","Hello", "Hey"} # set should be faster, only if a lot of words, though
def isGreet(mes):
return mes.split()[0] in greets
</code></pre>
<p>As stated in the comments, <code>mes.split()[0]</code> is somewhat wasteful because it keeps splitting the rest of the words we don't need, so replace it with <code>mes.split(maxsplit=1)[0]</code> (or <code>mes.split(None, 1)[0]</code> on Python 2).</p>
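<p>A quick illustration of the difference, using a made-up message:</p>
<pre><code>>>> "Hello there my friend".split()
['Hello', 'there', 'my', 'friend']
>>> "Hello there my friend".split(maxsplit=1)
['Hello', 'there my friend']
</code></pre>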
|
python|python-3.x|for-loop|conditional-statements
| 4 |
1,903,842 | 25,748,007 |
Motor: RuntimeError: maximum recursion depth exceeded while encoding an object to BSON
|
<p>I have an API built on asynchronous Tornado and MongoDB. It works fine, except for one handler:</p>
<pre><code>@gen.coroutine
def get(self, *args, **kwargs):
"""
Gets tracking lib
"""
data = self._get_request_data()
self._serialize_request_data(AuthValidator, data)
tags = yield self.motor.tags.find_one({"client_id": data["client_id"]})
raise Return(self.write(tags))
</code></pre>
<p>When request comes, tornado returns HTTP 500 with following stack trace:</p>
<pre><code>response: Traceback (most recent call last):
File "/Users/artemkorhov/Projects/cartreminder/env/lib/python2.7/site-packages/tornado/web.py", line 1334, in _execute
result = yield result
File "/Users/artemkorhov/Projects/cartreminder/env/lib/python2.7/site-packages/tornado/gen.py", line 617, in run
value = future.result()
File "/Users/artemkorhov/Projects/cartreminder/env/lib/python2.7/site-packages/tornado/concurrent.py", line 109, in result
raise_exc_info(self._exc_info)
File "/Users/artemkorhov/Projects/cartreminder/env/lib/python2.7/site-packages/tornado/gen.py", line 620, in run
yielded = self.gen.throw(*sys.exc_info())
File "/Users/artemkorhov/Projects/cartreminder/cartreminder_app/tracking_api/api_handlers/endpoints.py", line 35, in get
tags = yield self.motor.tags.find_one({"client_id": data["client_id"]})
File "/Users/artemkorhov/Projects/cartreminder/env/lib/python2.7/site-packages/tornado/gen.py", line 617, in run
value = future.result()
File "/Users/artemkorhov/Projects/cartreminder/env/lib/python2.7/site-packages/tornado/concurrent.py", line 109, in result
raise_exc_info(self._exc_info)
File "/Users/artemkorhov/Projects/cartreminder/env/lib/python2.7/site-packages/motor/__init__.py", line 676, in call_method
result = sync_method(self.delegate, *args, **kwargs)
File "/Users/artemkorhov/Projects/cartreminder/env/lib/python2.7/site-packages/pymongo/collection.py", line 721, in find_one
for result in cursor.limit(-1):
File "/Users/artemkorhov/Projects/cartreminder/env/lib/python2.7/site-packages/pymongo/cursor.py", line 1038, in next
if len(self.__data) or self._refresh():
File "/Users/artemkorhov/Projects/cartreminder/env/lib/python2.7/site-packages/pymongo/cursor.py", line 982, in _refresh
self.__uuid_subtype))
RuntimeError: maximum recursion depth exceeded while encoding an object to BSON
</code></pre>
<p>In mongoDB "tags" collection i have (for example):</p>
<pre><code>{
"_id" : ObjectId("540eec8227c565f77d4dcd23"),
"client_id" : "1111",
"tags" : {
"cart_add" : [
{
"action_element" : "#addbutton1",
"info_element" : "#product_element1"
}
],
"cart_delete" : [
{
"action_element" : "#deleteButton1",
"info_element" : "#product_element1"
}
],
"email_known" : {
"info_element" : ".tag1"
},
"order_complete" : {
"action_element" : "#order_button1",
"info_element" : {
"product_wrap" : ".product_wrap",
"product_id" : ".product_id_element",
"quantity" : ".product_quantity_element",
"price" : ".product_price_element"
}
}
}
}
</code></pre>
<p>The interesting part is that same 'find' method works perfect in other handlers, which built almost the same</p>
|
<p>Your "data" dictionary has a circular reference, so when Motor passes "data" to PyMongo to be encoded as BSON and sent to the server, the BSON encoder recurses more than 1000 times. I can reproduce this error message like so:</p>
<pre><code>>>> import bson
>>> d = {}
>>> d['key'] = d # Circular reference!
>>> bson.BSON.encode(d)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/emptysquare/.virtualenvs/motor/lib/python2.7/site-packages/bson/__init__.py", line 590, in encode
return cls(_dict_to_bson(document, check_keys, uuid_subtype))
RuntimeError: maximum recursion depth exceeded while encoding an object to BSON
</code></pre>
<p>Try doing "pprint.pprint" on "data" to see where the self-reference occurs:</p>
<pre><code>>>> import pprint
>>> pprint.pprint(d)
{'key': <Recursion on dict with id=140199700593680>}
</code></pre>
|
mongodb|python-2.7|tornado|pymongo|tornado-motor
| 5 |
1,903,843 | 44,520,902 |
django cache.clear() ending session (logout)
|
<p>I'm using memcached in Django to cache the entire site.</p>
<p><a href="https://docs.djangoproject.com/en/1.11/topics/cache/#the-per-site-cache" rel="nofollow noreferrer">https://docs.djangoproject.com/en/1.11/topics/cache/#the-per-site-cache</a></p>
<p>I've added some code in a post-save signal handler method to clear the cache when certain objects are created or updated in the model.</p>
<pre><code>from proximity.models import Advert
# Cache
from django.core.cache import cache
@receiver(post_save, sender=Advert)
def save_advert(sender, instance, **kwargs):
# Clear cache
cache.clear()
</code></pre>
<p>Unfortunately, now after creating a new object the user is logged out.</p>
<p>I think the reason may be that I'm caching sessions.</p>
<pre><code># Cache config
CACHE_MIDDLEWARE_SECONDS = 31449600 #(approximately 1 year, in seconds)
CACHE_MIDDLEWARE_KEY_PREFIX = COREAPP
CACHES = {
"default": {
"BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
"LOCATION": "127.0.0.1:11211",
}
}
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
</code></pre>
<p>Should I use per-view cache maybe?</p>
|
<pre><code>from django.contrib.auth import update_session_auth_hash
update_session_auth_hash(request, user)
</code></pre>
<p>Pass the request and user to the method above whenever you clear the cache. The catch with your current approach is that you clear the cache in a signal handler, which has no access to the request. So if you are updating the <code>Advert</code> model from the admin, override the admin's <code>save_model()</code> method, where you have both the user and the request, and call <code>update_session_auth_hash</code> after clearing the cache. That way the user will not be logged out. If you are updating data from your own view, call the same method there so the user stays logged in.</p>
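<p>A minimal sketch of the admin override described above (the admin class name is hypothetical; adapt it to your project):</p>
<pre><code>from django.contrib import admin
from django.contrib.auth import update_session_auth_hash

class AdvertAdmin(admin.ModelAdmin):
    def save_model(self, request, obj, form, change):
        user = request.user
        # saving fires the post_save signal, which clears the cache
        super(AdvertAdmin, self).save_model(request, obj, form, change)
        # restore the session hash so the admin user stays logged in
        update_session_auth_hash(request, user)
</code></pre>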
<p><strong>Edit</strong></p>
<pre><code>def form_valid(self, form):
    user = self.request.user
    form.save()  # saving fires the post_save signal, which clears the cache
    update_session_auth_hash(self.request, user)
</code></pre>
|
python|django|django-sessions|django-cache
| 0 |
1,903,844 | 23,894,707 |
Why is it that two different numbers return the same string in a tuple?
|
<p>I'm something of a newbie to Python coding and I've just been making short games to get into writing code more fluently. Right now I have a "simulation" that is essentially a text-based fight between a hero and a goblin. I am using a tuple to store the moves list and then calling on the elements in that tuple in a series of if statements. My problem is that when the user enters the number 2, the "potion" move is used, and when the user enters 3, the "potion" move is also used. The number 2 should trigger the "block" move, but does not. I think this may have to do with my limited knowledge of tuples; can anyone clarify this for me? Much appreciated. The code is as follows...</p>
<pre><code>#begins battle loop
while goblin > 0:
hmoves = ('sword',
'shield bash',
'block',
'potion')
choice = int(input("\nEnter a number 0 - 3 to choose an attack: "))
if hmoves[choice] is 'sword':
print(name, "attacked with his sword!")
goblin -= 3
print("\ngoblin used bite!")
hero -= 2
print("Goblin HP:", goblin, "Hero HP:", hero)
elif hmoves[choice] is 'shield bash':
print(name, "used shield bash!")
goblin -= 2
print("\ngoblin used bite!")
hero -= 2
print("\nGoblin HP:", goblin, "Hero HP:", hero)
elif hmoves[choice] is 'block':
print(name, "used block!")
print("\ngoblin used bite!")
print("but it was blocked.")
hero = hero
goblin = goblin
print("\nGoblin HP:", goblin, "Hero HP:", hero)
elif hmoves[choice] is 'potion':
print(name, "used a health potion.")
hero += 4
print("\ngoblin used bite!")
hero -= 2
print("\nGoblin HP:", goblin, "Hero HP:", hero)
#print("Goblin HP:", goblin, "Hero HP:", hero)
if goblin <= 0:
print("Congratulations you've completed the simulation.")
else:
print("Sorry, you did not pass the simulation.")
</code></pre>
|
<p>You should change your comparisons from <code>is</code> to <code>==</code>:</p>
<pre><code>goblin = 20
hero = 20
name = "lol"
#begins battle loop
while goblin > 0:
hmoves = ('sword',
'shield bash',
'block',
'potion')
choice = int(input("\nEnter a number 0 - 3 to choose an attack: "))
if hmoves[choice] == 'sword':
print(name, "attacked with his sword!")
goblin -= 3
print("\ngoblin used bite!")
hero -= 2
print("Goblin HP:", goblin, "Hero HP:", hero)
elif hmoves[choice] == 'shield bash':
print(name, "used shield bash!")
goblin -= 2
print("\ngoblin used bite!")
hero -= 2
print("\nGoblin HP:", goblin, "Hero HP:", hero)
elif hmoves[choice] == 'block':
print(name, "used block!")
print("\ngoblin used bite!")
print("but it was blocked.")
hero = hero
goblin = goblin
print("\nGoblin HP:", goblin, "Hero HP:", hero)
elif hmoves[choice] == 'potion':
print(name, "used a health potion.")
hero += 4
print("\ngoblin used bite!")
hero -= 2
print("\nGoblin HP:", goblin, "Hero HP:", hero)
</code></pre>
<p><a href="https://stackoverflow.com/questions/132988/is-there-a-difference-between-and-is-in-python">Refer to the difference between is and ==.</a> The two strings are not necessarily the same object in memory, but they are the same in terms of characters. It will sometimes work anyway because of <a href="http://en.wikipedia.org/wiki/String_interning" rel="nofollow noreferrer">string interning</a>, which is used for efficiency purposes.</p>
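<p>A quick demonstration of why <code>is</code> is unreliable here (CPython behaviour; interning is an implementation detail):</p>
<pre><code>>>> a = 'sword'
>>> b = ''.join(['sw', 'ord'])   # same characters, built at runtime
>>> a == b
True
>>> a is b                       # identity is not guaranteed
False
</code></pre>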
|
python|text|tuples
| 2 |
1,903,845 | 24,353,147 |
How do I reshape or pivot a DataFrame in Pandas
|
<p>I would like to reshape a DataFrame in Pandas but not sure how to go about it. Here's what I'm starting with:</p>
<pre><code>Phase Weight Value CF
AA heavy 0.28 1.0
AB light 3.26 1.0
BX med 0.77 1.0
XY x light -0.01 1.0
AA heavy 0.49 1.5
AB light 5.10 1.5
BX med 2.16 1.5
XY x light 0.98 1.5
AA heavy 2.48 2.0
AB light 11.70 2.0
BX med 5.81 2.0
XY x light 3.46 2.0
</code></pre>
<p>I would like to reshape to this:</p>
<pre><code>Phase Weight 1.0 1.5 2.0
AA heavy 0.28 0.49 2.48
AB light 3.26 5.10 11.70
BX med 0.77 2.16 5.81
XY x light -0.01 0.98 3.46
</code></pre>
<p>So column names are now the values that were in CF and the intersection of the row and columns in the new table are the values that were in the value column in the original table.</p>
<p>I know I can do it with the Phase column as an index like so:</p>
<pre><code>df.pivot(index='Phase', columns='CF', values='Value)
</code></pre>
<p>But then I lose the weight column. I tried this but I'm getting an error:</p>
<pre><code>df.pivot(index='Phase', columns=['Weight','CF'], values='Value')
</code></pre>
<p>Is there a way to do this with a single statement? If not, what is the best way?</p>
|
<p>You can use <code>pd.pivot_table</code>, which can take multiple names as arguments to the index/columns parameters. I also think you want Weight on the index (which makes it a column in the output) rather than on columns (which would turn the distinct values into columns).</p>
<pre><code>In [27]: df.pivot_table(index=['Phase','Weight'], columns='CF', values='Value').reset_index()
Out[27]:
CF Phase Weight 1.0 1.5 2.0
0 AA heavy 0.28 0.49 2.48
1 AB light 3.26 5.10 11.70
2 BX med 0.77 2.16 5.81
3 XY x light -0.01 0.98 3.46
</code></pre>
<p>Edit:</p>
<p>On your other question, the <code>.columns</code> of a DataFrame are an Index (just like on the rows), and have a <code>.name</code> in addition to the actual values. As far as I'm aware, it's generally only used for display purposes.</p>
<pre><code>In [74]: df.columns
Out[74]: Index([u'Phase', u'Weight', 1.0, 1.5, 2.0], dtype='object')
In [75]: df.columns.name
Out[75]: 'CF'
In [76]: df.columns.values
Out[76]: array(['Phase', 'Weight', 1.0, 1.5, 2.0], dtype=object)
</code></pre>
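<p>If the leftover <code>CF</code> label on the columns bothers you, it is purely cosmetic and can simply be cleared:</p>
<pre><code>df.columns.name = None
</code></pre>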
|
python|pandas
| 4 |
1,903,846 | 35,798,516 |
Separate the string in Python, excluding some elements which contain separator
|
<p>I have a really ugly string like this:</p>
<pre><code># ugly string follows:
ugly_string1 = SVEF/XX:1/60/24.02.16 07:30:00/"isk/kWh"/0/ENDTIME
# which also may look like this (part within quotes is different):
ugly_string2 = SVEF/XX:1/60/24.02.16 07:30:00/"kWh"/0/ENDTIME
</code></pre>
<p>and I'd like to separate it to get this list in Python:</p>
<pre><code>['SVEF/XX:1', '60', '24.02.16 07:30:00', '"isk/kWh"', '0', 'ENDTIME']
# or from the second string:
['SVEF/XX:1', '60', '24.02.16 07:30:00', '"kWh"', '0', 'ENDTIME']
</code></pre>
<p>The first element (<code>SVEF/XX:1</code>) will <strong>always</strong> be the same, but the fourth element might or might not have the separator character in it (<code>/</code>).</p>
<p>I came up with regex which isolates the 1st and the 4th element (<a href="https://regex101.com/r/aR7uE8/1" rel="nofollow">example here</a>):</p>
<pre><code>(?=(SVEF/XX:1))|(?=("(.*?)"))
</code></pre>
<p>but I just cannot figure out how to separate the rest of the string by <code>/</code> character, while excluding those two isolated elements?</p>
<p>I can do it with more "manual" approach, with regex like this (<a href="https://regex101.com/r/xM0lF2/1" rel="nofollow">example here</a>):</p>
<pre><code>([^/]+/[^/]+)/([^/]+)/([^/]+)/("[^"]+")/([^/]+)/([^/]+)
</code></pre>
<p>but when I try this out in Python, I get extra empty elements for some reason:</p>
<pre><code>['', 'SVEF/XX:1', '60', '24.02.16 07:30:00', '"isk/kWh"', '0', 'ENDTIME', '']
</code></pre>
<p>I could sanitize this result afterwards, but it would be great if I separate those strings without extra interventions.</p>
|
<p>In python, this can be done more easily (and with more room to generalize or adapt the approach in the future) with successive uses of <code>split()</code> and <code>rsplit()</code>.</p>
<pre><code>ugly_string = 'SVEF/XX:1/60/24.02.16 07:30:00/"isk/kWh"/0/ENDTIME'
temp = ugly_string.split("/", maxsplit=4)
result = [ temp[0]+"/"+temp[1] ] + temp[2:-1] + temp[-1].rsplit("/", maxsplit=2)
print(result)
</code></pre>
<p>Prints:</p>
<pre><code>['SVEF/XX:1', '60', '24.02.16 07:30:00', '"isk/kWh"', '0', 'ENDTIME']
</code></pre>
<p>I use the second argument of <code>split/rsplit</code> to limit how many slashes are split;
I first split as many parts off the left as possible (i.e., 4), and rejoin parts 0 and 1
(the <code>SVEF</code> and <code>XX</code>). I then use <code>rsplit()</code> to make the rest of the split from the right. What's left in the middle is the quoted word, regardless of what it contains. </p>
<p>Rejoining the first two parts isn't too elegant, but neither is a format that allows <code>/</code> to appear both as a field separator and inside an unquoted field.</p>
|
python|regex|python-3.x
| 4 |
1,903,847 | 35,877,555 |
scipy install error
|
<p>I use activestate python 2.7.10 32bit on Windows 10 64bit.
I installed <code>numpy</code> and it worked but <code>scipy</code> gives me a headache.</p>
<p>I tried to pypm install <code>scipy</code> following <a href="https://code.activestate.com/pypm/scipy/" rel="nofollow">https://code.activestate.com/pypm/scipy/</a>
but it gives CRC check error.</p>
<p>When I pip install <code>scipy</code>, it gives the error:</p>
<pre><code>Downloading/unpacking scipy
Running setup.py (path:c:\users\kwan\appdata\local\temp\pip_build_KWan\scipy\setup.py) egg_info for package scipy
warning: no previously-included files matching '*_subr_*.f' found under directory 'scipy\linalg\src\id_dist\src'
no previously-included directories found matching 'benchmarks\env'
no previously-included directories found matching 'benchmarks\results'
no previously-included directories found matching 'benchmarks\html'
no previously-included directories found matching 'benchmarks\scipy'
no previously-included directories found matching 'scipy\special\tests\data\boost'
no previously-included directories found matching 'scipy\special\tests\data\gsl'
no previously-included directories found matching 'doc\build'
no previously-included directories found matching 'doc\source\generated'
no previously-included directories found matching '*\__pycache__'
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '*.bak' found anywhere in distribution
warning: no previously-included files matching '*.swp' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
Installing collected packages: scipy
Running setup.py install for scipy
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in []
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in []
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\system_info.py:564: UserWarning: Specified path /home/apy/atlas/lib is invalid.
warnings.warn('Specified path %s is invalid.' % d)
C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\system_info.py:564: UserWarning: Specified path /home/apy/atlas/include is invalid.
warnings.warn('Specified path %s is invalid.' % d)
C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\system_info.py:1408: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\system_info.py:1419: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\system_info.py:1422: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
Running from scipy source directory.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\users\kwan\appdata\local\temp\pip_build_KWan\scipy\setup.py", line 265, in <module>
setup_package()
File "c:\users\kwan\appdata\local\temp\pip_build_KWan\scipy\setup.py", line 262, in setup_package
setup(**metadata)
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\core.py", line 152, in setup
config = configuration()
File "c:\users\kwan\appdata\local\temp\pip_build_KWan\scipy\setup.py", line 182, in configuration
config.add_subpackage('scipy')
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 1003, in add_subpackage
caller_level = 2)
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 972, in get_subpackage
caller_level = caller_level + 1)
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 909, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\setup.py", line 15, in configuration
config.add_subpackage('linalg')
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 1003, in add_subpackage
caller_level = 2)
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 972, in get_subpackage
caller_level = caller_level + 1)
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 909, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\linalg\setup.py", line 20, in configuration
raise NotFoundError('no lapack/blas resources found')
numpy.distutils.system_info.NotFoundError: no lapack/blas resources found
Complete output from command C:\Python27\python2.7.exe -c "import setuptools, tokenize;__file__='c:\\users\\kwan\\appdata\\local\\temp\\pip_build_KWan\\scipy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\kwan\appdata\local\temp\pip-7fm9d5-record\install-record.txt --single-version-externally-managed --compile:
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in []
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
numpy.distutils.system_info.atlas_threads_info
NOT AVAILABLE
atlas_info:
numpy.distutils.system_info.atlas_info
NOT AVAILABLE
lapack_info:
libraries lapack not found in []
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\system_info.py:564: UserWarning: Specified path /home/apy/atlas/lib is invalid.
warnings.warn('Specified path %s is invalid.' % d)
C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\system_info.py:564: UserWarning: Specified path /home/apy/atlas/include is invalid.
warnings.warn('Specified path %s is invalid.' % d)
C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\system_info.py:1408: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\system_info.py:1419: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\system_info.py:1422: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
Running from scipy source directory.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\users\kwan\appdata\local\temp\pip_build_KWan\scipy\setup.py", line 265, in <module>
setup_package()
File "c:\users\kwan\appdata\local\temp\pip_build_KWan\scipy\setup.py", line 262, in setup_package
setup(**metadata)
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\core.py", line 152, in setup
config = configuration()
File "c:\users\kwan\appdata\local\temp\pip_build_KWan\scipy\setup.py", line 182, in configuration
config.add_subpackage('scipy')
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 1003, in add_subpackage
caller_level = 2)
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 972, in get_subpackage
caller_level = caller_level + 1)
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 909, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\setup.py", line 15, in configuration
config.add_subpackage('linalg')
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 1003, in add_subpackage
caller_level = 2)
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 972, in get_subpackage
caller_level = caller_level + 1)
File "C:\Users\KWan\AppData\Roaming\Python\Python27\site-packages\numpy\distutils\misc_util.py", line 909, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy\linalg\setup.py", line 20, in configuration
raise NotFoundError('no lapack/blas resources found')
numpy.distutils.system_info.NotFoundError: no lapack/blas resources found
----------------------------------------
Cleaning up...
Command C:\Python27\python2.7.exe -c "import setuptools, tokenize;__file__='c:\\users\\kwan\\appdata\\local\\temp\\pip_build_KWan\\scipy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\kwan\appdata\local\temp\pip-7fm9d5-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\kwan\appdata\local\temp\pip_build_KWan\scipy
Storing debug log for failure in C:\Users\KWan\pip\pip.log
</code></pre>
|
<p>If you don't want to use a Python distribution which comes with scipy you can download the binary from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow">here</a>.
For your 32 bit, python 2.7 you need: <strong>scipy-0.17.0-cp27-none-win32.whl</strong></p>
<p>This is a wheel package, which you can install with:</p>
<pre><code>pip install scipy-0.17.0-cp27-none-win32.whl
</code></pre>
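<p>Note that the SciPy wheels from that site are built against MKL, so you will likely need the matching <code>numpy+mkl</code> wheel from the same page installed first (the exact filename depends on the version currently offered), e.g.:</p>
<pre><code>pip install numpy-1.10.4+mkl-cp27-none-win32.whl
pip install scipy-0.17.0-cp27-none-win32.whl
</code></pre>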
|
python|windows|scipy
| 0 |
1,903,848 | 36,125,811 |
Count values in ManyToManyField - Django
|
<p>I need some help. I have a model "Event" with a ManyToManyField "users":</p>
<pre><code>class Event(models.Model):
name = models.CharField(max_length=26)
description = models.CharField(max_length=200)
date = models.DateField()
user = models.ForeignKey(User)
users = models.ManyToManyField(User, related_name="event", blank=True)
image = models.ImageField(
upload_to='images/',
default='images/default.png'
)
</code></pre>
<p>So, the many-to-many intermediary table has two fields: <code>user_id</code> and <code>event_id</code>.
How do I count how many rows with <code>event_id = 1</code> or <code>event_id = 2</code>... are in this table? Thanks</p>
|
<p>All you'd need is the below model for Event:</p>
<pre><code>class Event(models.Model):
name = models.CharField(max_length=26)
description = models.CharField(max_length=200)
date = models.DateField()
    user = models.ManyToManyField(User, related_name="events")
image = models.ImageField(
upload_to='images/',
default='images/default.png'
)
</code></pre>
<p>And then from a object of <code>User</code> (say <code>logged_in_user</code>) you can make calls such as <code>logged_in_user.events.all()</code> to get all the events. Or if you just need event id's then <code>logged_in_user.events.values_list('id', flat=True)</code></p>
<p>If you just want a count, then it should be <code>logged_in_user.events.count()</code>. As you can see, you can treat <code>events</code> the same as any other manager (like objects on your user model).</p>
<p>If you need the count of users participating in a single event with <code>event_id</code>, use this: <code>Event.objects.get(id=event_id).user.count()</code></p>
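<p>And if you want the per-event counts for many events at once (which is what counting rows in the intermediary table amounts to), an aggregate query works; this sketch assumes the field names from the model above:</p>
<pre><code>from django.db.models import Count

Event.objects.annotate(num_users=Count('user')).values_list('id', 'num_users')
</code></pre>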
|
python|django|manytomanyfield
| 1 |
1,903,849 | 15,399,904 |
Using python ElementTree's itertree function and writing modified tree to output file
|
<p>I need to parse a very large (~40GB) XML file, remove certain elements from it, and write the result to a new xml file. I've been trying to use iterparse from python's ElementTree, but I'm confused about how to modify the tree and then write the resulting tree into a new XML file. I've read the documentation on iterparse but it hasn't cleared things up. Are there any simple ways to do this?</p>
<p>Thank you!</p>
<p>EDIT: Here's what I have so far.</p>
<pre><code>import xml.etree.ElementTree as ET
import re
date_pages = []
f=open('dates_texts.xml', 'w+')
tree = ET.iterparse("sample.xml")
for i, element in tree:
if element.tag == 'page':
for page_element in element:
if page_element.tag == 'revision':
for revision_element in page_element:
if revision_element.tag == '{text':
if len(re.findall('20\d\d', revision_element.text.encode('utf8'))) == 0:
element.clear()
</code></pre>
|
<p>If you have a large xml that doesn't fit in memory then you could try to serialize it one element at a time. For example, assuming <code><root><page/><page/><page/>...</root></code> document structure and ignoring possible namespace issues:</p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.cElementTree as etree
def getelements(filename_or_file, tag):
context = iter(etree.iterparse(filename_or_file, events=('start', 'end')))
_, root = next(context) # get root element
for event, elem in context:
if event == 'end' and elem.tag == tag:
yield elem
root.clear() # free memory
with open('output.xml', 'wb') as file:
# start root
file.write(b'<root>')
for page in getelements('sample.xml', 'page'):
if keep(page):
file.write(etree.tostring(page, encoding='utf-8'))
# close root
file.write(b'</root>')
</code></pre>
<p>where <code>keep(page)</code> returns <code>True</code> if <code>page</code> should be kept e.g.:</p>
<pre class="lang-py prettyprint-override"><code>import re
def keep(page):
# all <revision> elements must have 20xx in them
return all(re.search(r'20\d\d', rev.text)
for rev in page.iterfind('revision'))
</code></pre>
<p>For comparison, to modify a <em>small</em> xml file, you could:</p>
<pre class="lang-py prettyprint-override"><code># parse small xml
tree = etree.parse('sample.xml')
# remove some root/page elements from xml
root = tree.getroot()
for page in root.findall('page'):
if not keep(page):
root.remove(page) # modify inplace
# write to a file modified xml tree
tree.write('output.xml', encoding='utf-8')
</code></pre>
|
python|xml|elementtree
| 8 |
1,903,850 | 49,551,985 |
Python: Accessing the proper values in an array
|
<p>I want to calculate the following equation:</p>
<pre><code>calc = value_a(2D) - (value_b(0D) + value_b(1D))/10000
value_a(2D) corresponds to type **a**, year **2D** and value **1.1275**
value_b(0D) corresponds to type **b**, year **0D** and value **0**
value_b(1D) corresponds to type **b**, year **1D** and value **0.125**
</code></pre>
<p>and the result should be </p>
<p><code>1.1274875</code></p>
<p>but somehow I am not sure how to access the proper data within my loop. I would like to keep the structure of my code.</p>
<p>The code looks like the following:</p>
<pre><code>import pandas as pd
data = pd.read_csv('C:/Book1.csv').fillna('')
pd_date = pd.DatetimeIndex(data['date'].values)
data['date'] = pd_date
index_data = data.set_index('date')
for current_date in index_data.index.unique():
for index, row in index_data.iterrows():
if index == current_date:
for index2, row2 in index_data.iterrows():
if index2 == current_date:
if row['type'] in {'a', 'b'} and row2['type'] in {'a', 'b'}:
if row['year'] in {'0D','1D','2D'}:
print(row['value'])
</code></pre>
<p>The data looks like the following:</p>
<blockquote>
<pre><code>date type year value
2015-02-09 a 2D 1.1275
2015-02-09 b 10M 58.125
2015-02-09 b 11M 68.375
2015-02-09 b 1M 3.345
2015-02-09 b 1W 0.89
2015-02-09 b 1Y 79.375
2015-02-09 b 2M 7.535
2015-02-09 b 2W 1.8
2015-02-09 b 3M 11.61
2015-02-09 b 3W 2.48
2015-02-09 b 4M 16.2
2015-02-09 b 5M 21.65
2015-02-09 b 6M 27.1
2015-02-09 b 7M 33.625
2015-02-09 b 8M 41.375
2015-02-09 b 9M 49.5
2015-02-09 b 0D 0
2015-02-09 b 1D 0.125
</code></pre>
</blockquote>
|
<p>It looks like you really could use a multi-index here:</p>
<pre><code>In [4]: df.reset_index(inplace=True)
In [5]: df
Out[5]:
type year date value
0 a 2D 2015-02-09 1.1275
1 b 10M 2015-02-09 58.1250
2 b 11M 2015-02-09 68.3750
3 b 1M 2015-02-09 3.3450
4 b 1W 2015-02-09 0.8900
5 b 1Y 2015-02-09 79.3750
6 b 2M 2015-02-09 7.5350
7 b 2W 2015-02-09 1.8000
8 b 3M 2015-02-09 11.6100
9 b 3W 2015-02-09 2.4800
10 b 4M 2015-02-09 16.2000
11 b 5M 2015-02-09 21.6500
12 b 6M 2015-02-09 27.1000
13 b 7M 2015-02-09 33.6250
14 b 8M 2015-02-09 41.3750
15 b 9M 2015-02-09 49.5000
16 b 0D 2015-02-09 0.0000
17 b 1D 2015-02-09 0.1250
In [6]: df.set_index(['type','year'], inplace=True)
In [7]: df
Out[7]:
date value
type year
a 2D 2015-02-09 1.1275
b 10M 2015-02-09 58.1250
11M 2015-02-09 68.3750
1M 2015-02-09 3.3450
1W 2015-02-09 0.8900
1Y 2015-02-09 79.3750
2M 2015-02-09 7.5350
2W 2015-02-09 1.8000
3M 2015-02-09 11.6100
3W 2015-02-09 2.4800
4M 2015-02-09 16.2000
5M 2015-02-09 21.6500
6M 2015-02-09 27.1000
7M 2015-02-09 33.6250
8M 2015-02-09 41.3750
9M 2015-02-09 49.5000
0D 2015-02-09 0.0000
1D 2015-02-09 0.1250
</code></pre>
<p>Then simply:</p>
<pre><code>In [8]: df.loc['a','2D'].value - (df.loc['b', '0D'].value + df.loc['b','1D'].value)/10000
Out[8]: 1.1274875
</code></pre>
<p>Note, suppose I have multiple years (this I made by simply concatenating the df to itself):</p>
<pre><code>In [24]: df2
Out[24]:
type year date value
0 a 2D 2015-02-09 1.1275
1 b 10M 2015-02-09 58.1250
2 b 11M 2015-02-09 68.3750
3 b 1M 2015-02-09 3.3450
4 b 1W 2015-02-09 0.8900
5 b 1Y 2015-02-09 79.3750
6 b 2M 2015-02-09 7.5350
7 b 2W 2015-02-09 1.8000
8 b 3M 2015-02-09 11.6100
9 b 3W 2015-02-09 2.4800
10 b 4M 2015-02-09 16.2000
11 b 5M 2015-02-09 21.6500
12 b 6M 2015-02-09 27.1000
13 b 7M 2015-02-09 33.6250
14 b 8M 2015-02-09 41.3750
15 b 9M 2015-02-09 49.5000
16 b 0D 2015-02-09 0.0000
17 b 1D 2015-02-09 0.1250
18 a 2D 2015-02-10 1.1275
19 b 10M 2015-02-10 58.1250
20 b 11M 2015-02-10 68.3750
21 b 1M 2015-02-10 3.3450
22 b 1W 2015-02-10 0.8900
23 b 1Y 2015-02-10 79.3750
24 b 2M 2015-02-10 7.5350
25 b 2W 2015-02-10 1.8000
26 b 3M 2015-02-10 11.6100
27 b 3W 2015-02-10 2.4800
28 b 4M 2015-02-10 16.2000
29 b 5M 2015-02-10 21.6500
30 b 6M 2015-02-10 27.1000
31 b 7M 2015-02-10 33.6250
32 b 8M 2015-02-10 41.3750
33 b 9M 2015-02-10 49.5000
34 b 0D 2015-02-10 0.0000
35 b 1D 2015-02-10 0.1250
In [25]: df2.iloc[-2,-1] = 100000 # this corresponds to (b, 0D) on 2015-02-10 and used to be 0
</code></pre>
<p>As @coldspeed noted, you can group by the <code>'date'</code> column:</p>
<pre><code>In [26]: df2.groupby('date').apply(
...: lambda df:
...: df.loc['a','2D'].value
...: - (df.loc['b', '0D'].value + df.loc['b','1D'].value)
...: / 10000
...: )
Out[27]:
date
2015-02-09 1.127487
2015-02-10 -8.872513
dtype: float64
</code></pre>
|
python|arrays|pandas
| 2 |
1,903,851 | 49,449,922 |
Divide each value in list array
|
<p>I am trying to divide each array value in the list by 80. What I have tried is:</p>
<pre><code>dfs = pd.read_excel('ff1.xlsx', sheet_name=None)
dfs1 = {i:x.groupby(pd.to_datetime(x['date']).dt.strftime('%Y-%m-%d'))['duration'].sum() for i, x in dfs.items()}
d = pd.concat(dfs1).groupby(level=1).apply(list).to_dict()
print(d)
</code></pre>
<p>OP :</p>
<pre><code>{'2017-05-06': [197, 250], '2017-05-07': [188, 80], '2017-05-08': [138, 138], '2017-05-09': [216, 222], '2017-06-09': [6]}
</code></pre>
<p>But Expected OP :</p>
<pre><code>1 : Divide by 80
{'2017-05-06': [2, 3], '2017-05-07': [2, 1], '2017-05-08': [2, 2], '2017-05-09': [2, 2], '2017-06-09': [0]}
2 : total of each array and subtract each value (3+2 = 5-3 and 5-2)
{'2017-05-06': [3, 2], '2017-05-07': [1, 2], '2017-05-08': [2, 2], '2017-05-09': [2, 2], '2017-06-09': [0]}
</code></pre>
<p>How can I do this using Python?</p>
|
<p>I think you need:</p>
<pre><code>d = pd.concat(dfs1).div(80).astype(int)
d = d.groupby(level=1).transform('sum').sub(d).groupby(level=1).apply(list).to_dict()
print (d)
{'2017-06-09': [0], '2017-05-08': [1, 1], '2017-05-09': [2, 2],
'2017-05-07': [1, 2], '2017-05-06': [3, 2]}
</code></pre>
<p><strong>Explanation</strong>:</p>
<ol>
<li>First create <code>MultiIndex</code> DataFrame by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a></li>
<li>Divide by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.div.html" rel="nofollow noreferrer"><code>div</code></a> and if necessary convert to <code>int</code>s</li>
<li>For sum per groups use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>transform</code></a> for possible subtract values by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sub.html" rel="nofollow noreferrer"><code>sub</code></a></li>
<li>Last create <code>list</code>s with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><code>GroupBy.apply</code></a></li>
</ol>
|
python|python-3.x|pandas
| 3 |
1,903,852 | 70,334,535 |
Generating graphs favouring unique cliques
|
<p>For a research project I am working on, I need to generate random graphs which favour forming cliques without the largest cliques containing the same nodes. For instance, the Barabási-Albert model can generate graphs containing large cliques, but those cliques have mainly the same nodes because of preferential attachment.</p>
<p>Although I prefer using existing packages like Networkx or IGraph, I am okay with implementing research papers that have formalised generating such networks. The only criterion is that the generation of those graphs should not take more than 15 seconds for a graph with, for instance, 10,000 nodes and an average degree of 100.</p>
<p>I am also not sure if I should ask this question on here or on <a href="https://math.stackexchange.com/">https://math.stackexchange.com/</a>, so say so if I need to ask it over there.</p>
|
<p>It does rather depend on what you mean by random.</p>
<p>However, here is an algorithm for generating graphs with large cliques that do not share nodes and have some specified amount of "randomness"</p>
<pre><code>Input number of cliques required ( or random distribution specs for same )
Input number of nodes in each clique ( or random distribution specs for same )
LOOP over number of cliques
LOOP over number of nodes in clique
Add node to graph
Add links between every pair of nodes in clique
LOOP c1 over cliques
LOOP c2 over c1+1 to number of cliques
SELECT a random node in c1 and c2
Link nodes in c1 and c2
</code></pre>
<p>The c1 loop ensures that the graph is completely connected, creating one link between each pair of cliques; drop it if you don't want that.</p>
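<p>For reference, a minimal Python/networkx sketch of the pseudocode above, assuming a fixed clique count and size instead of random distributions:</p>
<pre><code>import random
import networkx as nx

def clique_graph(n_cliques, clique_size):
    g = nx.Graph()
    cliques = []
    for c in range(n_cliques):
        nodes = [c * clique_size + i for i in range(clique_size)]
        cliques.append(nodes)
        # add links between every pair of nodes in the clique
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                g.add_edge(u, v)
    # connect each pair of cliques through one random node from each
    for i, c1 in enumerate(cliques):
        for c2 in cliques[i + 1:]:
            g.add_edge(random.choice(c1), random.choice(c2))
    return g

g = clique_graph(n_cliques=5, clique_size=4)
</code></pre>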
|
python|c++|networkx|graph-theory|igraph
| 0 |
1,903,853 | 12,699,382 |
Python Algorithm 3.2 - Help me with the errors please.?
|
<p>Here is my code:</p>
<pre><code>def main():
actualValued()
assessed_value()
printResult()
def actualValued():
global assessed_value
global property_tax
assessed_value = 0.6 * actualValue
property_tax = assessed_value/100*0.64
def printResult():
print( "For a property valued at"), actualValued
print( "The assessed value is"), asessed_value
print( "The property tax is"), property_tax
actualValue = None
assessed_value = None
property_tax = None
main()
</code></pre>
<p>The Errors:</p>
<pre><code>Traceback (most recent call last):
File "C:/Documents and Settings/Desktop/property tax.py", line 21, in <module>
main()
File "C:/Documents and Settings/Desktop/property tax.py", line 2, in main
actualValued()
File "C:/Documents and Settings/Desktop/property tax.py", line 9, in actualValued
assessed_value = 0.6 * actualValue
TypeError: unsupported operand type(s) for *: 'float' and 'NoneType'
>>>
</code></pre>
<p>What I am trying to do:</p>
<pre><code>Enter assessed value 10000.0
For a property valued at $10,000.00
The assessed value is $6,000.00
And the tax is $38.40
</code></pre>
<p>Property Tax: A county collects property taxes on the assessment value of property, which is 60 percent of the property's actual value. For example, if an acre of land is valued at $10,000, its assessment value is $6,000. The property tax is then 64cents for each $100 of the assessment value. The tax for the acre assessed at $6,000 will be $38.40.</p>
<p>I need the actual value for the piece of property and the display of the assessment value and property tax.</p>
<p>Functions I need to use:</p>
<ul>
<li>one to get the input from the user</li>
<li>one to calculate all values</li>
<li>one to output the results</li>
<li>and a main function to call the three other functions</li>
</ul>
|
<p>You set <code>actualValue = None</code> and then try to use it in your functions, but (a) you never assign it a real value, and (b) before you could assign it within a function you would have to declare <code>global actualValue</code>, as you did with the other variables. (As @cdhowie pointed out, you don't need <code>global</code> if you are only reading the variable.)</p>
<p>Since actualValue is of Nonetype you cannot multiply it by another number. That is your error.</p>
<p>You can do 1 of 3 things. </p>
<p>1) where you have actualValue=None Change it to actualValue=10000.</p>
<p>2) set actualValue in main as follows:</p>
<pre><code>def main():
global actualValue
actualValue = 10000
...
</code></pre>
<p>3) parameterize your functions as suggested by another answer.</p>
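<p>For completeness, here is a hedged sketch of option 3, restructured around the four functions the assignment asks for (all names are illustrative):</p>
<pre><code>def get_actual_value():
    return float(input("Enter the actual value: "))

def calculate(actual_value):
    assessed = 0.6 * actual_value
    tax = assessed / 100 * 0.64
    return assessed, tax

def print_result(actual_value, assessed, tax):
    print("For a property valued at $%.2f" % actual_value)
    print("The assessed value is $%.2f" % assessed)
    print("The property tax is $%.2f" % tax)

def main():
    actual_value = get_actual_value()
    assessed, tax = calculate(actual_value)
    print_result(actual_value, assessed, tax)

main()
</code></pre>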
|
python|algorithm|python-3.x
| 2 |
1,903,854 | 12,917,686 |
Uniquify all pairs in a python list
|
<p>I've searched through posts but can't find a solution for the exact problem I face. It's pretty easy but I need a little guidance.</p>
<p>I have a python list that looks like:</p>
<pre><code>lst = ['bob/sally', 'bob/chris', 'bob/nate', 'sally/bob', ...]
</code></pre>
<p>I want to iterate through and print only unique pairs. So in the above example, it would find that bob/sally is the same as sally/bob, so it would remove one.</p>
<p>Any help would be greatly appreciated! I've seen postings using set() and other python functions but I don't think that would work in this case.</p>
|
<p>You could use a set, and normalise the order of names by sorting on them:</p>
<pre><code>>>> data = ['bob/sally', 'bob/chris', 'bob/nate', 'sally/bob']
>>> set(tuple(sorted(item.split('/'))) for item in data)
set([('bob', 'chris'), ('bob', 'nate'), ('bob', 'sally')])
</code></pre>
<p>Or as has been pointed out by <a href="https://stackoverflow.com/users/20862/ignacio-vazquez-abrams">Ignacio Vazquez-Abrams</a> and <a href="https://stackoverflow.com/users/748858/mgilson">mgilson</a> the use of a <code>frozenset</code> is much more elegant and eludes the sorting and tuple() step:</p>
<pre><code>set(frozenset(item.split('/')) for item in data)
</code></pre>
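<p>If you want the result back as printable strings rather than frozensets, you can join each pair again (sorted so the output is deterministic):</p>
<pre><code>>>> pairs = set(frozenset(item.split('/')) for item in data)
>>> sorted('/'.join(sorted(p)) for p in pairs)
['bob/chris', 'bob/nate', 'bob/sally']
</code></pre>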
|
python|list|unique
| 3 |
1,903,855 | 21,569,484 |
read the contents of the input file into a dictionary keyed by id variable
|
<p>I have a sampleLabs1.txt file like this (it has so many records so I just list 5 rows):</p>
<p>visitid cdate ctime pqno test result unit range</p>
<p>OMHioJh8XEeq7152 6/15/2007 06:00 1181913408344759 CREAT 0.8 mg/dL 0.5-1.4
OMHioJh8XEeq7152 6/14/2007 07:10 1181827489130119 CREAT 0.8 mg/dL 0.5-1.4
OMHioJh8XEeq7152 6/11/2007 14:21 1181592540465036 CREAT 2.9 mg/dL 0.5-1.4
t2v0TjgroLTI6118 4/28/2006 14:18 1146257767528282 CREAT 8.7 mg/dL 0.5-1.4
t2v0TjgroLTI6118 5/1/2006 04:00 1146487572667772 CREAT 8.0 mg/dL 0.5-1.4</p>
<p>I want to read the contents of the input file into a dictionary keyed by "visitid", that is, I want something like:</p>
<p>{OMHioJh8XEeq7152: 6/15/2007, 06:00, 1181913408344759, CREAT, 0.8, mg/dL, 0.5-1.4,
OMHioJh8XEeq7152: 6/14/2007, 07:10, 1181827489130119, CREAT, 0.8, mg/dL, 0.5-1.4,
OMHioJh8XEeq7152: 6/11/2007, 14:21, 1181592540465036, CREAT, 2.9, mg/dL, 0.5-1.4,
t2v0TjgroLTI6118: 4/28/2006, 14:18, 1146257767528282, CREAT, 8.7, mg/dL, 0.5-1.4,
t2v0TjgroLTI6118: 5/1/2006, 04:00, 1146487572667772, CREAT, 8.0, mg/dL, 0.5-1.4}</p>
<p>I write the following program:</p>
<pre><code>import os
newdict = {}
with open(os.path.join("..","c:\work\python programming","sampleLabs1.txt"),"rU") as f:
for line in f:
splitLine = line.split()
newdict[(splitLine[0])] = ",".join(splitLine[1:])
newdict
</code></pre>
<p>However, while it did give me a dictionary, it seems that it overwrites the previous record for each "visitid" key, so only one record per unique "visitid" is kept. That is, I got something like this:</p>
<pre><code>{OMHioJh8XEeq7152: 6/15/2007, 06:00, 1181913408344759, CREAT, 0.8, mg/dL, 0.5-1.4,
 t2v0TjgroLTI6118: 5/1/2006, 04:00, 1146487572667772, CREAT, 8.0, mg/dL, 0.5-1.4}
</code></pre>
<p>But I would like to keep all the records that each "visitid" specifies, something like:</p>
<pre><code>{OMHioJh8XEeq7152: 6/15/2007, 06:00, 1181913408344759, CREAT, 0.8, mg/dL, 0.5-1.4,
 OMHioJh8XEeq7152: 6/14/2007, 07:10, 1181827489130119, CREAT, 0.8, mg/dL, 0.5-1.4,
 OMHioJh8XEeq7152: 6/11/2007, 14:21, 1181592540465036, CREAT, 2.9, mg/dL, 0.5-1.4,
 t2v0TjgroLTI6118: 4/28/2006, 14:18, 1146257767528282, CREAT, 8.7, mg/dL, 0.5-1.4,
 t2v0TjgroLTI6118: 5/1/2006, 04:00, 1146487572667772, CREAT, 8.0, mg/dL, 0.5-1.4}
</code></pre>
<p>I would appreciate any help. Can anyone help me fix my code? Thank you.</p>
|
<p>You might want to treat this as a database table, if your plan is to analyze all entries under a visitid, or compare averages between visitids, etc etc. The <code>pandas</code> package is good for this:</p>
<pre><code>import pandas
nd = pandas.read_csv('sampleLabs1.txt',sep=' ')
unique(nd['visitid']) # all visitid values
nd[nd['visitid'] == 'OMHioJh8XEeq7152']['cdate'] # all cdates for a given visitid
</code></pre>
<p>To use a dictionary, you need to make the value for each visitid a tuple of some kind -- as in Hugh Bothwell's example. </p>
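<p>If you'd rather stay with a plain dictionary, the usual pattern is to map each visitid to a <em>list</em> of records so later rows append instead of overwriting; a sketch using the file name from the question:</p>
<pre><code>from collections import defaultdict

newdict = defaultdict(list)
with open('sampleLabs1.txt') as f:
    next(f)  # skip the header row
    for line in f:
        parts = line.split()
        if parts:
            newdict[parts[0]].append(','.join(parts[1:]))
</code></pre>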
|
python|python-2.7
| 0 |
1,903,856 | 21,603,863 |
Returning value when condition met in recursion
|
<p>I am trying to find tours in a graph. I have written the following code, which seems to print tours correctly. I want it to stop once it has found the first tour and return that tour as a list. However, the recursion seems to run to completion and I am not getting the desired result. How can I return a value and fully stop the recursion when I find the first tour, i.e. when my condition is met? Thanks.</p>
<pre><code>def get_tour(start, graph, path):
if path==[]:
from_node=start
else:
from_node=path[-1][1]
if graph==[]:
if start in path[-1]:
print "Tour Found"
return path
else:
edges=[node for node in graph if from_node in node]
for edge in edges:
to_node=[i for i in edge if i<> from_node][0]
p=list(path)
p.append((from_node,to_node))
g=list(graph)
g.remove(edge)
get_tour(start, g,p)
g=[(1,2), (1,3), (2,3)]
get_tour(1, graph=g, path=[])
</code></pre>
|
<p>When using recursion you need to pass the return value back up through the whole call stack; forgetting to do so, as here, is a common recursion pitfall.</p>
<p>Without going into the details of your code, here is a quick suggestion:</p>
<pre><code>def get_tour(start, graph, path):
ret_val = None
# Some code..
if graph==[]:
# Some code..
else:
edges=[node for node in graph if from_node in node]
for edge in edges:
# Some more code..
ret_val = get_tour(start, g,p)
if ret_val:
break
return ret_val
</code></pre>
|
python|recursion|conditional
| 1 |
1,903,857 | 24,675,209 |
How to read ganglia information from other application?
|
<p>I have managed to install and configure Ganglia on my cluster. I do not want to just see all the performance data on the Ganglia web interface; instead I want to read cluster information from another application (which may be Java or Python based). I have not been able to find out whether this is possible.</p>
<p>Is there any API to read Ganglia data?</p>
<p>To test Ganglia I used <code>telnet master 8649</code> and Ganglia showed me nice XML text on my console. But how do I do the same thing using Java or Python? I can definitely connect to 8649 using sockets but after that do I need to send something to Ganglia daemons?</p>
|
<p>I can help you get some insight into this. But first I must tell you that I am not a Java programmer; I am a C/C++ programmer. That means I can explain how things work in Ganglia, and you can find equivalent methods in Java/Python to rewrite the code you want.</p>
<p>Please be informed that there is no API in ganglia to achieve what you want to.</p>
<p>First consider below set up of ganglia to understand properly:</p>
<p><img src="https://i.stack.imgur.com/BxjhG.png" alt="ganglia minimal setup"></p>
<p>GS1 and GS2 collect system metrics and push them to GM.
So, according to your question, if you want to collect all such metrics with your own Java/Python based application, you would have to install that application on the master server (i.e., replace GM with your own application).</p>
<p>GS1 and GS2 send all collected metrics over either a UDP unicast channel or a UDP multicast channel. It is recommended to enable UDP unicast in every gmond.conf for easier scalability.</p>
<p>I wouldn't discuss much on GS1 and GS2 as your question is more about replacing GM with your own tool.</p>
<p>GM uses two important libraries heavily to establish a UDP connection and translate data into its own readable format. They are <a href="https://apr.apache.org/" rel="nofollow noreferrer">APR</a> (Apache Portable Runtime) to establish UDP connection and perform related activities and <a href="http://en.wikipedia.org/wiki/External_Data_Representation" rel="nofollow noreferrer">XDR</a> (External Data Representation) to send data across networks and perform RPC.</p>
<p>You need to find APR and XDR equivalent libraries in Java and Python first. XDR is already available in Java and APR could be replaced by your own basic implementation to perform inter-network operations (i.e., create UDP socket, etc).</p>
<p>Open <a href="https://github.com/ganglia/monitor-core/blob/master/gmond/gmond.c" rel="nofollow noreferrer">gmond.c</a> source file of ganglia and go to line 1436. You will find a C function:</p>
<p><code>static void process_udp_recv_channel(const apr_pollfd_t *desc, apr_time_t now)</code>.</p>
<p>This function basically performs "UDP connection establishment" and "data translation into readable format" activities.</p>
<p>The call flow of the above function is shown below:<br>
<img src="https://i.stack.imgur.com/PdAwC.png" alt="Call flow"></p>
<p>Now, let's expand the function at line 1436 to understand more.</p>
<p>The first argument in this function carries network parameters such as IP, Port, etc. The structure is expanded below. You can find similar object in Java also.</p>
<pre><code>struct apr_pollfd_t {
apr_pool_t *p; /**< associated pool */
apr_datatype_e desc_type; /**< descriptor type */
apr_int16_t reqevents; /**< requested events */
apr_int16_t rtnevents; /**< returned events */
apr_descriptor desc; /**< @see apr_descriptor */
void *client_data; /**< allows app to associate context */
};
</code></pre>
<p>The second parameter has nothing to do, if SFLOW is disabled.</p>
<p>So, Start with creating a APR pool, UDP connection, etc.</p>
<pre><code> socket = desc->desc.s;
channel = desc->client_data;
apr_pool_create(&p, global_context);
status = apr_socket_addr_get(&remotesa, APR_LOCAL, socket);
status = apr_sockaddr_info_get(&remotesa, NULL, remotesa->family, remotesa->port, 0, p);
/* Grab the data */
status = apr_socket_recvfrom(remotesa, socket, 0, buf, &len);
if(status != APR_SUCCESS)
{
apr_pool_destroy(p);
return;
}
apr_sockaddr_ip_buffer_get(remoteip, 256, remotesa);
/* Check the ACL */
if(Ganglia_acl_action( channel->acl, remotesa) != GANGLIA_ACCESS_ALLOW)
{
apr_pool_destroy(p);
return;
}
</code></pre>
<p>All variable declarations can be found at the beginning of the expanded function (lines 1439 to 1456).</p>
<p>Then, create XDR stream:</p>
<pre><code>xdrmem_create(&x, buf, max_udp_message_len, XDR_DECODE);
</code></pre>
<p>Flush the data of the struct which saves metadata and metrics value:</p>
<pre><code>memset( &fmsg, 0, sizeof(Ganglia_metadata_msg));
memset( &vmsg, 0, sizeof(Ganglia_value_msg));
</code></pre>
<p>fmsg (<code>Ganglia_metadata_msg</code>) and vmsg (<code>Ganglia_value_msg</code>) struct definitions can be found in <a href="https://github.com/simplegeo/ganglia/blob/master/lib/gm_protocol.h" rel="nofollow noreferrer">gm_protocol.h</a> header file. Re-write them in Java.</p>
<p>Then, figure out if the message received is "metadata" or "metrics values". </p>
<pre><code>xdr_Ganglia_msg_formats(&x, &id); // this function is located in the source file gm_protocol_xdr.c and this file is generated by rpcgen.
</code></pre>
<p>Note: <a href="https://github.com/simplegeo/ganglia/blob/master/lib/gm_protocol_xdr.c" rel="nofollow noreferrer">rpcgen</a> is a rpc compiler and its explanation can be found in this <a href="https://stackoverflow.com/questions/26608158/understanding-xdr-specification-to-create-a-x-file/26625494#26625494">question</a>.</p>
<p>Note: Here is the link for <a href="https://github.com/simplegeo/ganglia/blob/master/lib/gm_protocol_xdr.c#L243" rel="nofollow noreferrer">gm_protocol_xdr.c</a>.</p>
<p>Here, <code>id</code> is an <code>enum</code> and its declaration is shown below:</p>
<pre><code>enum Ganglia_msg_formats {
gmetadata_full = 128,
gmetric_ushort = 128 + 1,
gmetric_short = 128 + 2,
gmetric_int = 128 + 3,
gmetric_uint = 128 + 4,
gmetric_string = 128 + 5,
gmetric_float = 128 + 6,
gmetric_double = 128 + 7,
gmetadata_request = 128 + 8,
};
typedef enum Ganglia_msg_formats Ganglia_msg_formats;
</code></pre>
<p>Based on the value of <code>id</code>, you can determine what kind of value the packets carry.
For this purpose, this function calls another function (in fact generated by rpcgen) to determine the kind of value the packet has and, if found, translate it to a human-readable format too.</p>
<p>The function is:</p>
<pre><code>xdr_Ganglia_value_msg(&x, &vmsg);
</code></pre>
<p>You can find the full expansion of this function in <a href="https://github.com/simplegeo/ganglia/blob/master/lib/gm_protocol_xdr.c#L275" rel="nofollow noreferrer">gm_protocol_xdr.c</a> from line 275.</p>
<p>After that you can do whatever you would like with these packets.</p>
<p>In the end, you must free all XDR variables and APR pools allocated.</p>
<p>I hope this gives you a fair idea to start with your own application. </p>
|
java|python|ganglia
| 4 |
1,903,858 | 24,583,183 |
Using 2 sets of brackets to slice
|
<p>Python 2.7:</p>
<p>I'm working on Project Euler #011 and have found code that I know works, but have come up short finding <em>how</em> it works.</p>
<pre><code>for i in xrange(17):
# top-left to bottom-right
add(grid[i*20::21])
add(grid[i::21][:20-i]) # <---a
# top-right to bottom-left
add(grid[3+i::19][:i+4])
add(grid[39+i::19][i:]) # <---b
</code></pre>
<p>My question is: how do the second set of brackets function at a and b? I know they limit the number of list entries, but HOW!?</p>
|
<p>The first slice operation produces a sequence. The second slice operation then slices that sequence. It's like <code>a + b + c</code>, but with slicing instead of addition.</p>
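<p>A quick demonstration with a stand-in grid (the numbers are just for illustration):</p>
<pre><code>>>> grid = list(range(100))
>>> grid[5::21]        # the first slice produces a new list
[5, 26, 47, 68, 89]
>>> grid[5::21][:3]    # the second slice then slices that list
[5, 26, 47]
</code></pre>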
|
python-2.7|slice
| 0 |
1,903,859 | 41,210,441 |
Mocking parser object in a unit test
|
<p>I am trying to develop a <strong>unit test</strong> for a function that depends on a parser (BeautifulSoup), which in turn depends on network access to fetch a web page. In order to prevent network access I copied all the HTML to a file, and whenever I need that web page I simply read it from the file. However, I am having a hard time mocking the parser.</p>
<p>My question is: should I try to mock a parser, and if the answer is yes, then how?</p>
<p>Here is the method I am trying to test, which is inside <code>data_processing.py</code>:</p>
<pre><code>def get_ocw_course_info(url):
parser = get_parser(url)
url_name = parser.find('meta', {"name":"Search_Display"}).get('content').replace('|', '-')
description = parser.find('meta', {"name":"Description"}).get('content')
return dict(url=url,
url_name=url_name,
description=description)
</code></pre>
<p>And here is the unit test that I have developed for this function:</p>
<pre><code>@patch('data_processing.get_parser')
def test_get_ocw_course_info_unit(self, *args, **kwargs):
data_processing.get_parser.return_value = BeautifulSoup(read_mock_html('mock_responses/ocw_pass.html'), 'lxml')
actual = data_processing.get_ocw_course_info('https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-682-prototyping-avionics-spring-2006/assignments/')
expected = {'url': 'https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-682-prototyping-avionics-spring-2006/assignments/',
'url_name': '16.682 Prototyping Avionics - Assignments',
'description': 'This section contains three of the four assignments from the class.',
}
self.assertEqual(actual, expected)
</code></pre>
<p>I omitted implementations of helper functions because they are either one liners or there is nothing particularly interesting going on (and I suppose names should be pretty self-explanatory)</p>
|
<p>I think you have two options here:</p>
<ol>
<li>Easiest one, in my opinion, is to accept an optional parameter <code>get_parser</code> (which will be some kind of callable) in <code>get_ocw_course_info</code> procedure and use simple dependency injection to provide testable implementation to your function.</li>
<li><p>You can easily mock the parser via <code>mock.patch</code>, and that's essentially what you are doing (it is not quite clear what the problem is).
Notice that in the decorated test case you will get a second argument, which is the mock you can configure:</p>
<pre><code>@patch('data_processing.get_parser')
def test_get_ocw_course_info_unit(self, mock, *args, **kwargs):
mock.return_value = ...
</code></pre></li>
</ol>
<p>Or, the other way, you can provide a predefined replacement by passing it to the patch decorator:</p>
<pre><code>def get_parser_mocked(url):
    return ...  # some suitable return value

@patch('data_processing.get_parser', get_parser_mocked)
def test_get_ocw_course_info_unit(self, *args, **kwargs):
    ...
</code></pre>
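<p>For completeness, a minimal sketch of option 1 (dependency injection); the keyword-argument name is an assumption, not part of the original code:</p>
<pre><code>def get_ocw_course_info(url, parser_factory=get_parser):
    parser = parser_factory(url)  # tests inject a fake, production uses get_parser
    ...

# in the test, no patching is needed:
fake = BeautifulSoup(read_mock_html('mock_responses/ocw_pass.html'), 'lxml')
actual = get_ocw_course_info(some_url, parser_factory=lambda _: fake)
</code></pre>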
|
python|unit-testing|beautifulsoup
| 2 |
1,903,860 | 40,903,518 |
Match two numpy arrays to find the same elements
|
<p>I have a task kind of like SQL search. I have a "table" which contains the following 1D arrays (about 1 million elements) identified by <code>ID1</code>:</p>
<pre><code>ID1, z, e, PA, n
</code></pre>
<p>Another "table" which contains the following 1D arrays (about 1.5 million elements) identified by <code>ID2</code>:</p>
<pre><code>ID2, RA, DEC
</code></pre>
<p>I want to match <code>ID1</code> and <code>ID2</code> to find the common ones to form another "table" which contains <code>ID, z, e, PA, n, RA, DEC</code>. Most elements in <code>ID1</code> can be found in <code>ID2</code> but not all, otherwise I can use <code>numpy.in1d(ID1,ID2)</code> to accomplish it. Anyone has fast way to accomplish this task? </p>
<p>For example:</p>
<pre><code>ID1, z, e, PA, n
101, 1.0, 1.2, 1.5, 1.8
104, 1.5, 1.8, 2.2, 3.1
105, 1.4, 2.0, 3.3, 2.8
ID2, RA, DEC
101, 4.5, 10.5
107, 90.1, 55.5
102, 30.5, 3.3
103, 60.1, 40.6
104, 10.8, 5.6
</code></pre>
<p>The output should be </p>
<pre><code>ID, z, e, PA, n, RA, DEC
101, 1.0, 1.2, 1.5, 1.8, 4.5, 10.5
104, 1.5, 1.8, 2.2, 3.1, 10.8, 5.6
</code></pre>
|
<p>Well, you can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html" rel="nofollow noreferrer"><code>np.in1d</code></a> twice, swapping the first columns of the two arrays/tables between calls, which gives two masks to index into the arrays for selection. Then, simply stack the results -</p>
<pre><code>mask1 = np.in1d(a[:,0], b[:,0])
mask2 = np.in1d(b[:,0], a[:,0])
out = np.column_stack(( a[mask1], b[mask2,1:] ))
</code></pre>
<p>Sample run -</p>
<pre><code>In [44]: a
Out[44]:
array([[ 101. , 1. , 1.2, 1.5, 1.8],
[ 104. , 1.5, 1.8, 2.2, 3.1],
[ 105. , 1.4, 2. , 3.3, 2.8]])
In [45]: b
Out[45]:
array([[ 101. , 4.5, 10.5],
[ 102. , 30.5, 3.3],
[ 103. , 60.1, 40.6],
[ 104. , 10.8, 5.6],
[ 107. , 90.1, 55.5]])
In [46]: mask1 = np.in1d(a[:,0], b[:,0])
In [47]: mask2 = np.in1d(b[:,0], a[:,0])
In [48]: np.column_stack(( a[mask1], b[mask2,1:] ))
Out[48]:
array([[ 101. , 1. , 1.2, 1.5, 1.8, 4.5, 10.5],
[ 104. , 1.5, 1.8, 2.2, 3.1, 10.8, 5.6]])
</code></pre>
|
python|arrays|numpy|compare|match
| 1 |
1,903,861 | 38,494,908 |
Counting the grouped elements of a Dataframe in Python
|
<p>I have a dataframe that I am trying to group by and sum. I was able to achieve this, but I'd also like to count the grouped by elements. </p>
<pre><code>sessions_summed = df.groupby("screens_completed").sum()
print sessions_summed
</code></pre>
<p>using this, I get this output:</p>
<pre><code>screens_completed sessions
0 6
1 1
2 3
3 1
4 1
5 1
9 33
12 8
13 872
14 103292
</code></pre>
<p>What I would like is to see the count of how many times each entity in screens completed (i.e. how many times did 14 appear) appeared alongside this new summed sessions column. And then I would like divide the summed column by the count column.</p>
<p>How would I do this?</p>
|
<h2>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>DataFrame.pivot_table</code></a> to count the number of times a certain value appears in a column.</h2>
<p>You can take advantage of the <code>aggfunc</code> argument in the <code>pivot_table</code> function.</p>
<pre><code>sessions_summed = df.groupby("screens_completed").sum()
#the below line will count the number of times each value occurs in screens_completed.
sessions_summed["count"] = df.pivot_table(index="screens_completed", values="sessions", aggfunc=len)
sessions_summed["mean"] = sessions_summed["sessions"] / sessions_summed["count"]
</code></pre>
<h3>So what's going on here?</h3>
<p><code>pivot_table</code> will group your rows based on the column you specify with the <code>index</code> parameter. For each of the columns you pass the 'values' parameter, <code>pivot_table</code> will try to compute some summarizing information to put in that column using all of the values in rows corresponding to rows with a matching index value. The <code>aggfunc</code> parameter allows you to tell <code>.pivot_table</code> how you want that column summarized. </p>
<p>For example, let's say you have the following table:</p>
<pre><code>index screens_completed sessions
0 0 2
1 1 4
2 1 1
3 1 3
3 0 3
</code></pre>
<p><code>pivot_table</code> will create two groups for you:</p>
<p><code>screens_completed</code> == 0, which will pass <code>[2, 3]</code> into your aggfunc for column <code>sessions</code>.
<code>screens_completed</code> == 1, which will pass <code>[4, 1, 3]</code> into your <code>aggfunc</code> for column <code>sessions</code></p>
<p>If you pass <code>len</code> to the <code>aggfunc</code> parameter, you're just asking for the length of the list passed into your <code>aggfunc</code>, which is another way of asking for how many times each <code>screens_completed</code> value occurs in your original DataFrame. </p>
<h3>You can also calculate the mean by passing a mean calculating function into the <code>aggfunc</code> parameter</h3>
<p>for example:</p>
<pre><code>from numpy import mean
sessions_summed["mean"] = df.pivot_table(index="screens_completed", values="sessions", aggfunc=mean)
</code></pre>
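<p>As a side note (not part of the original answer), <code>groupby</code> can also produce the sum, count and mean in a single call:</p>
<pre><code>out = df.groupby("screens_completed")["sessions"].agg(['sum', 'count', 'mean'])
</code></pre>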
|
python
| 0 |
1,903,862 | 29,279,043 |
matplotlib animation does not update when in wxpython gui but works standalone
|
<p>I have a wxPython GUI in which I use matplotlib for 2D and 3D graphics.</p>
<p>I was having problems getting a surface plot to animate, so I used the following dummy case adapted from someplace online to test. It is a fairly typical example for 3D animation and works fine when run standalone.</p>
<pre><code>if True:
from mpl_toolkits.mplot3d import axes3d
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
def generate(X, Y, phi):
R = 1 - np.sqrt(X**2 + Y**2)
return np.cos(2 * np.pi * X + phi) * R
fig = plt.figure()
ax = axes3d.Axes3D(fig)
xs = np.linspace(-1, 1, 3)
ys = np.linspace(-1, 1, 3)
X, Y = np.meshgrid(xs, ys)
Z = generate(X, Y, 0.0)
wframe = ax.plot_wireframe(X, Y, Z, rstride=2, cstride=2)
ax.set_zlim(-1,1)
def update(i, ax, fig):
print i
ax.cla()
phi = i * 360 / 2 / np.pi / 100
Z = generate(X, Y, phi)
wframe = ax.plot_wireframe(X, Y, Z, rstride=2, cstride=2)
ax.set_zlim(-1,1)
return wframe
ani = animation.FuncAnimation(fig, update,
frames=xrange(100),
fargs=(ax, fig), interval=100)
plt.show()
</code></pre>
<p>The "True" test is of course unnecessary but was meant to replicate something like the structure in the GUI at the point of execution (to check for any scoping issues).</p>
<p>When I insert the exact same code into my GUI with a wx.Button causing execution of the code, it plots only the first frame and nothing else, but doesn't issue any error either (running inside IDLE). I can verify by printing that exactly one (the first iteration <code>i=0</code>) frame is plotting.</p>
<p>This is exactly the behavior also of the actual data of interest which originated the problem.</p>
<p>Thank you.</p>
|
<p>When using an animation inside of a GUI class, make sure to keep the reference to the animation object in a class member so that it doesn't get garbage collected at the end of the method in which it is created. Using something like</p>
<pre><code>self.ani = animation.FuncAnimation(....
</code></pre>
<p>rather than</p>
<pre><code>ani = animation.FuncAnimation(....
</code></pre>
<p>should work.</p>
|
python|animation|matplotlib|wxpython
| 1 |
1,903,863 | 59,671,673 |
How to construct a color map in seaborn from a list of RGB colors?
|
<p>I would like to start with a list of RGB colors, and from them construct a color map I can use in <code>seaborn</code> plots. I have found several instructions on how to change the default color map, but that's not what I'm looking for. I would like to construct a color map that I can use in the <code>cmap</code> argument of, for instance, the <code>kdeplot</code> command.</p>
|
<p>Constructing a <a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.colors.ListedColormap.html#matplotlib.colors.ListedColormap" rel="noreferrer"><code>matplotlib.colors.ListedColormap</code></a> from a list of colors is fairly trivial. Here is an example using the first 4 colors in the tableau 20 color palette -</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
# Tableau 20 color palette for demonstration
colors = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120)]
# Conversion to [0.0 - 1.0] from [0.0 - 255.0]
colors = [(e[0] / 255.0, e[1] / 255.0, e[2] / 255.0) for e in colors]
cmap = ListedColormap(colors)
a = np.outer(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
im = plt.imshow(a, cmap=cmap)
plt.colorbar(im)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/ueIV3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ueIV3.png" alt="enter image description here"></a></p>
<p>However, if you don't already have a gradient in the list of colors (as the above does not) then it might be more useful to use a <a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.colors.LinearSegmentedColormap.html#matplotlib.colors.LinearSegmentedColormap" rel="noreferrer"><code>matplotlib.colors.LinearSegmentedColormap</code></a> instead. This is a bit more tricky because of the format expected, </p>
<blockquote>
<p>[...] <code>segmentdata</code> argument is a dictionary with a set of red, green and blue entries. Each entry should be a list of <em>x</em>, <em>y0</em>, <em>y1</em> tuples, forming rows in a table [...].<br>
Each row in the table for a given color is a sequence of <em>x</em>, <em>y0</em>, <em>y1</em> tuples. In each sequence, <em>x</em> must increase monotonically from 0 to 1. For any input value <em>z</em> falling between <code>x[i]</code> and <code>x[i+1]</code>, the output value of a given color will be linearly interpolated between <code>y1[i]</code> and <code>y0[i+1]</code></p>
</blockquote>
<p>Such a dictionary can be generated algorithmically by the method in the example below </p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
# Tableau 20 color palette for demonstration
colors = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120)]
colors = [(e[0] / 255.0, e[1] / 255.0, e[2] / 255.0) for e in colors]
nc = len(colors)
c = np.zeros((3, nc, 3))
rgb = ['red', 'green', 'blue']
for idx, e in enumerate(colors):
for ii in range(3):
c[ii, idx, :] = [float(idx) / float(nc - 1), e[ii], e[ii]]
cdict = dict(zip(rgb, c))
cmap = LinearSegmentedColormap('tab20', cdict)
a = np.outer(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
im = plt.imshow(a, cmap=cmap)
plt.colorbar(im)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/n2wsl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/n2wsl.png" alt="enter image description here"></a></p>
<p>Assuming the input list <code>colors</code> has the proper RGB format.</p>
|
python|colors|seaborn
| 8 |
1,903,864 | 19,171,438 |
Extract xlrd package data extraction
|
<p>When i am trying to extract the data from an xlsx files. I get the encoding details with the data as well. </p>
<p>Consider the code as shown below,</p>
<pre><code>column_number = 0
column_headers = []
#column_headers = sheet.row_values(row_number)
while column_number <= sheet.ncols - 1:
column_headers.append(sheet.cell(row_number, column_number).value)
column_number+=1
return column_headers
</code></pre>
<p>output is,</p>
<pre><code>[u'Rec#', u'Cyc#', u'Step', u'TestTime', u'StepTime', u'Amp-hr', u'Watt-hr', u'Amps', u'Volts', u'State', u'ES', u'DPt Time', u'ACR', u'DCIR']
</code></pre>
<p>I just want to extract the cell value which is the data without "u'" attached to it . How can i get just that ?</p>
|
<p>Have you tried the following:</p>
<pre><code>print data.value
</code></pre>
<p>In the new code could you try this:</p>
<pre><code>import unicodedata
...
output = []
for cell in column_headers:
    # normalize, then encode to a plain byte string so the u'' prefix goes away
    output.append(unicodedata.normalize('NFKD', cell).encode('ascii', 'ignore'))
return output
</code></pre>
<p>Please see this for more info: <a href="https://stackoverflow.com/a/1207479/2168278">https://stackoverflow.com/a/1207479/2168278</a></p>
|
python|excel|xlrd
| 0 |
1,903,865 | 69,006,040 |
Cant get reponse request.POST.get in views.py index function
|
<p>I am trying to create a webapp which displays the IP address of a hostname entered by the user in a text field. But I keep getting the error below. I can't seem to get the response in my view. I am new to this, please help.</p>
<p>view.py</p>
<pre><code>from django.shortcuts import render
import dnspython as dns
import dns.resolver
def index(request):
search = request.POST.get('search')
# print(search)
ip_address = dns.resolver.Resolver.resolve(search, "A")
# ip_address = dns.resolver.Resolver()
# answers = ip_address.resolve(search, "A").rrset[0].to_text()
# try:
# ip_address = dns.resolver.resolve(search, 'A').rrset[0].to_text()
# except dns.resolver.NoAnswer:
# ip_address = 'No answer'
context = {"ip_address": ip_address}
return render(request, 'index.html', context)
</code></pre>
<p>This is the HTML, please have a look and check.
Thanks in advance.</p>
<p>index.html</p>
<pre><code>{% extends 'base.html' %}
{% block title %} IP Finder {% endblock %}
{% block body %}
<div class="container">
<br>
<br>
<center>
<h1 style="font-family:'Courier New'">Django NSLookup</h1>
<br>
<br>
<form action="{% url 'index' %}" method="post">
{% csrf_token %}
<div class="form-group">
<label>
<input type="text" class="form-control" name="search" placeholder="Enter website">
</label>
</div>
<input type="submit" class="btn btn-primary" value="Search">
<p></p>
<p>Click on the "Choose File" button to upload a file:</p>
<form action="/action_page.php">
<input type="file" id="myFile" name="filename">
<input type="submit">
</form>
</form>
</center>
<br>
<br>
<p>IP Address is : {{ip_address}}</p>
</div>
{% endblock %}
</code></pre>
<p><a href="https://i.stack.imgur.com/Gkpf2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gkpf2.png" alt="enter image description here" /></a></p>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
  File "C:\Python39\lib\site-packages\django\core\handlers\exception.py", line 47, in inner
    response = get_response(request)
  File "C:\Python39\lib\site-packages\django\core\handlers\base.py", line 181, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "C:\Users\vassu\PycharmProjects\IPFinderA\IPApp\views.py", line 22, in index
    ip_address = dns.resolver.Resolver.resolve(search, "A")
  File "C:\Python39\lib\site-packages\dns\resolver.py", line 1186, in resolve
    resolution = _Resolution(self, qname, rdtype, rdclass, tcp,
  File "C:\Python39\lib\site-packages\dns\resolver.py", line 552, in __init__
    self.qnames_to_try = resolver._get_qnames_to_try(qname, search)
AttributeError: 'NoneType' object has no attribute '_get_qnames_to_try'
[01/Sep/2021 04:07:16] "GET / HTTP/1.1" 500 73961
</code></pre>
|
<p>Instead of</p>
<pre><code>dns.resolver.Resolver.resolve()
</code></pre>
<p>use:</p>
<pre><code>dns.resolver.Resolver.query()
</code></pre>
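<p>Note also that in the question the method is called on the <em>class</em>, so <code>search</code> (which is <code>None</code> here because the page was requested with GET rather than POST) gets bound as <code>self</code>; that is what produces the <code>'NoneType'</code> error. A minimal sketch using an instance instead, with <code>query</code> as suggested above:</p>
<pre><code>resolver = dns.resolver.Resolver()
answer = resolver.query(search, 'A')   # search must be a hostname string
ip_address = answer[0].to_text()
</code></pre>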
|
python|html|django|dns
| 0 |
1,903,866 | 62,060,410 |
Unable to use GET_PAGE_NAME keyword with multiple GO_TO Page
|
<p>I am creating a suite file where a test case uses multiple page objects and calls <code>get_page_name</code> to find out which page I have landed on.
I am getting the error below:
<a href="https://i.stack.imgur.com/IOcAB.png" rel="nofollow noreferrer">Error if 2 pageobject</a></p>
<p><a href="https://i.stack.imgur.com/oSsud.png" rel="nofollow noreferrer">Suite File</a></p>
<p>Suite File</p>
|
<p>Two keywords with the same name, "get page name", are defined in the files "HomePage" and "LoginPage", so Robot Framework cannot tell which one to call. Rename one of them, or invoke the keyword with an explicit library prefix.</p>
|
python|selenium|robotframework|pageobjects
| 0 |
1,903,867 | 62,330,952 |
QtWebPageRenderer SIGILL issue
|
<p>My problem is summed in title. When I call method <code>setHtml</code> on instance of <code>QtWebPageRenderer</code>, SIGILL signal is emitted and my application goes down.</p>
<p>I'm aware that this issue is caused by bad Qt5 dynamic library but I installed it with:</p>
<pre><code>sudo pip install PyQt5 --only-binary PyQt5
sudo pip install PyQtWebEngine --only-binary PyQtWebEngine
</code></pre>
<p>so I thought I will get correct precompiled library. When I tried to install PyQt5 without <code>--only-binary</code>, I always ended with some strange compilation error. Something like <code>qmake</code> is not in PATH even though it is and I'm able to call <code>qmake</code> from shell.</p>
<p>So my question is, how to make PyQt5 running on Fedora 31 without any SIGILLs.</p>
<p>EDIT:</p>
<p>The following code can replicate the issue. The information about SIGILL is a little inaccurate because the first signal is actually SIGTRAP; after I hit <code>continue</code> in gdb, I got SIGILL. This hints that Qt is actually trying to tell me something, although in a not very intuitive way.</p>
<p>After playing around with it, I found that without the thread it's OK. Does this mean that Qt forces the user to use QThread and not Python threads? Or does it mean that I can't call methods of Qt objects outside of the thread where the event loop is running?</p>
<pre><code>import signal
import sys
import threading
from PyQt5 import QtWidgets
from PyQt5 import QtCore
from PyQt5.QtWebEngineWidgets import QWebEnginePage
class WebView(QWebEnginePage):
def __init__(self):
QWebEnginePage.__init__(self)
self.loadFinished.connect(self.on_load_finish)
def print_result(self, data):
print("-" * 30)
print(data)
with open("temp.html", "wb") as hndl:
hndl.write(data.encode("utf-8"))
def on_load_finish(self):
self.toHtml(self.print_result)
class Runner(threading.Thread):
def __init__(self, web_view):
self.web_view = web_view
threading.Thread.__init__(self)
self.daemon = True
def run(self):
self.web_view.load(QtCore.QUrl("https://www.worldometers.info/coronavirus/"))
def main():
signal.signal(signal.SIGINT, signal.SIG_DFL)
app = QtWidgets.QApplication(sys.argv)
web_view = WebView()
runner = Runner(web_view)
runner.start()
app.exec_()
if __name__ == "__main__":
main()
</code></pre>
|
<p>You have to take several restrictions into account:</p>
<ul>
<li><p>A QObject is not <a href="https://doc.qt.io/qt-5/threads-reentrancy.html#thread-safety" rel="nofollow noreferrer">thread-safe</a>, so when "web_view" is created in the main thread it is not safe to modify it from the secondary thread</p></li>
<li><p>Since the QWebEnginePage tasks run asynchronously, you need a Qt eventloop in the thread that uses the page.</p></li>
</ul>
<p>So if you want to use python's Thread class then you must implement both conditions:</p>
<pre class="lang-py prettyprint-override"><code>import signal
import sys
import threading
from PyQt5 import QtWidgets
from PyQt5 import QtCore
from PyQt5.QtWebEngineWidgets import QWebEnginePage
class WebView(QWebEnginePage):
def __init__(self):
QWebEnginePage.__init__(self)
self.loadFinished.connect(self.on_load_finish)
def print_result(self, data):
print("-" * 30)
print(data)
with open("temp.html", "wb") as hndl:
hndl.write(data.encode("utf-8"))
def on_load_finish(self):
self.toHtml(self.print_result)
class Runner(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.daemon = True
def run(self):
# The QWebEnginePage was created in a new thread and
# that thread has an eventloop
loop = QtCore.QEventLoop()
web_view = WebView()
web_view.load(QtCore.QUrl("https://www.worldometers.info/coronavirus/"))
loop.exec_()
def main():
signal.signal(signal.SIGINT, signal.SIG_DFL)
app = QtWidgets.QApplication(sys.argv)
runner = Runner()
runner.start()
app.exec_()
if __name__ == "__main__":
main()
</code></pre>
<hr>
<p>In reality <code>QThread</code> and <code>threading.Thread()</code> are native thread <strong>handlers</strong> of the OS, so in practical terms it can be said that QThread is a <code>threading.Thread()</code> + <code>QObject</code> with an eventloop running on the secondary thread.</p>
<hr>
<p>On the other hand, if your objective is to call a function from a thread to which it does not belong, then you should use asynchronous methods as pointed out in <a href="https://stackoverflow.com/a/62315639/6622587">this answer</a>.</p>
<p>In this case the simplest is to use pyqtSlot + QMetaObject:</p>
<pre><code>import signal
import sys
import threading
from PyQt5 import QtWidgets
from PyQt5 import QtCore
from PyQt5.QtWebEngineWidgets import QWebEnginePage
class WebView(QWebEnginePage):
def __init__(self):
QWebEnginePage.__init__(self)
self.loadFinished.connect(self.on_load_finish)
def print_result(self, data):
print("-" * 30)
print(data)
with open("temp.html", "wb") as hndl:
hndl.write(data.encode("utf-8"))
def on_load_finish(self):
self.toHtml(self.print_result)
<b>@QtCore.pyqtSlot(QtCore.QUrl)
def load(self, url):
QWebEnginePage.load(self, url)</b>
class Runner(threading.Thread):
def __init__(self, web_view):
self.web_view = web_view
threading.Thread.__init__(self)
self.daemon = True
def run(self):
<b>url = QtCore.QUrl("https://www.worldometers.info/coronavirus/")
QtCore.QMetaObject.invokeMethod(
self.web_view,
"load",
QtCore.Qt.QueuedConnection,
QtCore.Q_ARG(QtCore.QUrl, url),
)</b>
def main():
signal.signal(signal.SIGINT, signal.SIG_DFL)
app = QtWidgets.QApplication(sys.argv)
web_view = WebView()
runner = Runner(web_view)
runner.start()
app.exec_()
if __name__ == "__main__":
main()</code></pre>
<p>Or functools.partial() + QTimer</p>
<pre><code><b>from functools import partial</b>
import signal
import sys
import threading
from PyQt5 import QtWidgets
from PyQt5 import QtCore
from PyQt5.QtWebEngineWidgets import QWebEnginePage
class WebView(QWebEnginePage):
def __init__(self):
QWebEnginePage.__init__(self)
self.loadFinished.connect(self.on_load_finish)
def print_result(self, data):
print("-" * 30)
print(data)
with open("temp.html", "wb") as hndl:
hndl.write(data.encode("utf-8"))
def on_load_finish(self):
self.toHtml(self.print_result)
class Runner(threading.Thread):
def __init__(self, web_view):
self.web_view = web_view
threading.Thread.__init__(self)
self.daemon = True
def run(self):
<b>wrapper = partial(
self.web_view.load,
QtCore.QUrl("https://www.worldometers.info/coronavirus/"),
)
QtCore.QTimer.singleShot(0, wrapper)</b>
def main():
signal.signal(signal.SIGINT, signal.SIG_DFL)
app = QtWidgets.QApplication(sys.argv)
web_view = WebView()
runner = Runner(web_view)
runner.start()
app.exec_()
if __name__ == "__main__":
main()</code></pre>
|
python|pyqt|pyqt5|fedora
| 2 |
1,903,868 | 21,992,042 |
How to copy folder and subfolder using PyQt
|
<p>Does anyone have an idea about copying folder and sub folders using PyQt? </p>
|
<p>You can do:</p>
<pre><code>from shutil import copytree
copytree(original_directory_path, copy_directory_path)
</code></pre>
<p>For more information, check the docs on <code>shutil.copytree</code> here: <a href="http://docs.python.org/2/library/shutil.html#shutil.copytree" rel="nofollow">http://docs.python.org/2/library/shutil.html#shutil.copytree</a></p>
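<p>Two usage notes: <code>copytree</code> requires that the destination directory does not already exist (Python 3.8 later added a <code>dirs_exist_ok</code> flag for this), and you can exclude files with <code>ignore_patterns</code>; the pattern list below is just an example:</p>
<pre><code>from shutil import copytree, ignore_patterns

copytree(original_directory_path, copy_directory_path,
         ignore=ignore_patterns('*.pyc', 'tmp*'))
</code></pre>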
|
python|pyqt4|file-copying
| 0 |
1,903,869 | 22,256,632 |
What's wrong with my input? I keep getting errors that h is not defined
|
<pre><code>else:
hs = raw_input("HIT OR SHAME YOUR FOREFATHERS! (h or s): ").lower()
if h in s:
player.append(rc(cards))
else:
break
</code></pre>
<p>I keep getting an error saying that h is not defined, no matter which one you choose.</p>
|
<p>Answering what @chepner commented:</p>
<pre><code>hs = raw_input("HIT OR SHAME YOUR FOREFATHERS! (h or s): ").lower()
if hs == 'h':
player.append(rc(cards))
else:
break
</code></pre>
<p>Or shorter:</p>
<pre><code>hs = raw_input("HIT OR SHAME YOUR FOREFATHERS! (h or s): ").lower()
if hs != 'h': break
player.append(rc(cards))
</code></pre>
|
python|input
| 0 |
1,903,870 | 16,620,324 |
Delete related object via OneToOneField
|
<p>Is there some clever way how to perform delete in this situation?</p>
<pre><code>class Bus(models.Model):
wheel = OneToOneField(Wheel)
class Bike(models.Model):
wheel = OneToOneField(Wheel)
pedal = OneToOneField(Pedal)
class Car(models.Model):
wheel = OneToOneField(Wheel)
class Wheel(models.Model):
somfields
car = Car()
wheel = Wheel()
wheel.save()
car.wheel = wheel
car.save()
car.delete() # I want to delete also wheel (and also all stuff pointing via OneToOneField eg pedal)
</code></pre>
<p>Do I need to override delete methods of Car, Bike, Bus models or is there some better way? Other option is to create fields car, bike, bus on Wheel model, but it doesn't make much sense.</p>
|
<p>Here is the thing, since <code>Car</code> links to <code>Wheel</code>, it is the dependent model in the relationship. Therefore when you delete a <code>Wheel</code>, it deletes all dependent models (including related <code>Car</code> rows). However when you delete a <code>Car</code>, since <code>Wheel</code> does not depend on <code>Car</code>, it is not removed.</p>
<p>In order to delete parent relations in Django, you can overwrite the <code>Car</code>'s <code>delete</code> method:</p>
<pre><code>class Car(models.Model):
# ...
def delete(self, *args, **kwargs):
self.wheel.delete()
        # name the class explicitly: super(self.__class__, self) recurses
        # forever as soon as Car is subclassed
        return super(Car, self).delete(*args, **kwargs)
</code></pre>
<p>Then when doing:</p>
<pre><code>Car.objects.get(...).delete()
</code></pre>
<p>will also delete the <code>Wheel</code>. </p>
|
python|django|django-models
| 14 |
1,903,871 | 57,798,032 |
How to resolve Assertion Error for multiple columns in pandas
|
<p>Pandas documentation has given following code, which works fine:</p>
<pre><code> frame = pd.DataFrame(np.arange(12).reshape((4, 3)),
index=[['a', 'a', 'b', 'b'], [1, 2, 1, 2]],
columns=[['Ohio', 'Ohio', 'Colorado'],
['Green', 'Red', 'Green']])
</code></pre>
<p>I tried following code, based on above concept, but it does not work:</p>
<pre><code>hi5 = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9],[10,11,12]],
index = [['a','a','a','b'],[1,2,3,1]],
columns=[['Ohio', 'Ohio', 'Colorado'],
['Green', 'Red', 'Green']])
</code></pre>
<p>It is giving Following error for above code:</p>
<pre><code>AssertionError: 2 columns passed, passed data had 3 columns
</code></pre>
|
<p>Apparently, you will need to use a <code>pd.DataFrame.from_records</code> constructor for that</p>
<pre><code>>>> hi5 = pd.DataFrame.from_records([[1,2,3],[4,5,6],[7,8,9],[10,11,12]],
... index = [['a','a','a','b'],[1,2,3,1]],
... columns=[['Ohio', 'Ohio', 'Colorado'],
... ['Green', 'Red', 'Green']])
>>>
>>> hi5
Ohio Colorado
Green Red Green
a 1 1 2 3
2 4 5 6
3 7 8 9
b 1 10 11 12
</code></pre>
<p>I can only guess that a list of lists does not have a <code>shape</code> property, and thus the generic constructor does not support this combination of data and nested column labels.</p>
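<p>Alternatively (this is a sketch, not from the original answer), you can sidestep the ambiguity by building explicit <code>MultiIndex</code> objects, which the plain constructor accepts:</p>
<pre><code>import pandas as pd

columns = pd.MultiIndex.from_arrays([['Ohio', 'Ohio', 'Colorado'],
                                     ['Green', 'Red', 'Green']])
index = pd.MultiIndex.from_arrays([['a', 'a', 'a', 'b'], [1, 2, 3, 1]])
hi5 = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]],
                   index=index, columns=columns)
</code></pre>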
|
python|pandas|data-science
| 1 |
1,903,872 | 43,579,753 |
TypeError: float() argument must be a string or a number, not 'method'
|
<p>I am trying to convert the latitude and longitude to zipcodes for around 10k data points. I am using geocoder for the task. </p>
<pre><code>lat = subsamp['Latitude'].as_matrix
long = subsamp['Longitude'].as_matrix
g = geocoder.google([lat, long], method='reverse')
zip = g.postal
</code></pre>
<p>But, on executing the geocoder I get the error: </p>
<blockquote>
<p>TypeError: float() argument must be a string or a number, not 'method'</p>
</blockquote>
<p>I tried running it using a Pandas series then Numpy array but does not work. </p>
|
<p>It's a missing-parentheses issue with <code>.as_matrix</code>:
<strong><a href="http://pandas.pydata.org/pandas-docs/version/0.18.1/generated/pandas.DataFrame.as_matrix.html#pandas-dataframe-as-matrix" rel="noreferrer">pandas.DataFrame.as_matrix</a> is a method</strong>
used to convert the frame to its NumPy-array representation.</p>
<p>Since it is a method, you have to call it: you omitted the <code>()</code> call parentheses after <code>.as_matrix</code>.</p>
<pre><code>lat = subsamp['Latitude'].as_matrix
long = subsamp['Longitude'].as_matrix
</code></pre>
<p>It should be as follows :</p>
<pre><code>lat = subsamp['Latitude'].as_matrix()
long = subsamp['Longitude'].as_matrix()
</code></pre>
|
python|numpy|google-geocoder
| 9 |
1,903,873 | 43,527,318 |
How to Join / Merge datasets?
|
<p>I have two Dataframes <code>DF1</code> and <code>DF2</code>. My goal is to look up <code>DF2</code> with <code>DF1</code> columns as
keys; and save the returns as outcomes in <code>DF3</code>. Can someone help me with getting
<code>DF3</code>?</p>
<p>e.g.</p>
<pre class="lang-none prettyprint-override"><code>DF1 DF2
map test1 test2 No. outcome
A NaN NaN 1 AA
B NaN 5 2 BB
C 1 6 3 CC
D 2 7 4 DD
E 3 NaN 5 EE
F 4 NaN 6 FF
G 5 8 7 GG
H 6 9 8 HH
I 7 10 9 II
10 JJ
11 KK
12 LL
13 MM
DF3
map test1 test2 outcome1 outcome2
A NaN NaN NaN NaN
B NaN 5 NaN EE
C 1 6 AA FF
D 2 7 BB GG
E 3 NaN CC NaN
F 4 NaN DD NaN
G 5 8 EE HH
H 6 9 FF II
I 7 10 GG JJ
</code></pre>
<p>I am currently using two join functions, but this is not what I need. It drops <code>NaN</code>s in <code>DF1</code>, and only returns the overlap of <code>test1</code> and <code>test2</code>.</p>
<pre><code>df3 = df1.merge(df2, how='inner', left_on='test1', right_on='No.')
df3 = df3.merge(df2, how='inner', left_on='test2', right_on='No.')
</code></pre>
<p>currently my code will return this:</p>
<p>DF3<br>
map test1 test2 outcome1 outcome2
C 1 6 AA FF
D 2 7 BB GG
G 5 8 EE HH
H 6 9 FF II
I 7 10 GG JJ</p>
|
<p>Map would be more efficient in this case</p>
<pre><code>DF3 = DF1.copy()
DF3['outcome1'] = DF1['test1'].map(DF2.set_index('No.')['outcome'])
DF3['outcome2'] = DF1['test2'].map(DF2.set_index('No.')['outcome'])
map test1 test2 outcome1 outcome2
0 A NaN NaN NaN NaN
1 B NaN 5.0 NaN EE
2 C 1.0 6.0 AA FF
3 D 2.0 7.0 BB GG
4 E 3.0 NaN CC NaN
5 F 4.0 NaN DD NaN
6 G 5.0 8.0 EE HH
7 H 6.0 9.0 FF II
8 I 7.0 10.0 GG JJ
</code></pre>
|
python|pandas|dataframe|merge
| 1 |
1,903,874 | 54,582,345 |
writing a dictionary with one key and two values to a csv file
|
<p>I have a dictionary that has one key and two values. I want to write a dictionary to a csv file, and sorted according to one of the values. I also want each value to have its own column in the csv file.
I can't seem to do it. </p>
<pre><code>sorted_combined = sorted(combined.items(), key = lambda kv: kv[1][1])
with open('output.csv', 'wb') as output:
writer = csv.writer(output)
writer.writerow(["Subject", "Sij", "gij"])
for key, value in sorted_combined.iteritems():
writer.writerow(k, sorted_combined[k])
</code></pre>
<p>I know some people have said to try
writer.writerow([k] + sorted_combined)
or
writer.writerow(key, *value)</p>
<p>and neither one works. The error messages I get are: cannot concatenate tuple. </p>
<p>What I expect to get is the following: </p>
<pre><code> Subject Sij gij
sub001_01 6578 18
sub992_03 3820 5
</code></pre>
<p><strong>EDIT:</strong>
This is what my <code>sorted_combined</code> looks like. However, the (1, 6) you see at the end of each key, for instance, is not a tuple anymore, it's a string: when naming the dictionary key, each tuple was converted to a string.</p>
<p>[('network6_QNS_0045_01_(1, 6)', (0.0, 0.0)), ('network6_QNS_0045_01_(1, 4)', (0.0, 0.0)), ('network6_QNS_0045_01_(0, 6)', (0.0, 0.0)), ('network6_QNS_0045_01_(2, 5)', (0.0, 0.0)), ('network6_QNS_0045_01_(1, 7)', (0.0, 0.0)), ('network6_QNS_0045_01_(1, 5)', (0.0, 0.0)), ('network6_QNS_0045_01_(1, 3)', (0.0, 0.0)), ('network6_QNS_0045_01_(5, 6)', (0.0, 0.0)), ('network6_QNS_0045_01_(3, 5)', (0.0, 0.0)), ('network6_QNS_0045_01_(2, 6)', (743466.0, 18.387329999999999)), ('network6_QNS_0045_01_(5, 7)', (142774.0, 18.769649999999999)), ('network6_QNS_0045_01_(0, 5)', (232822.0, 20.160640000000001)), ('network6_QNS_0045_01_(3, 6)', (780163.0, 24.748139999999999)), ('network6_QNS_0045_01_(2, 3)', (199652.0, 26.635860000000001)), ('network6_QNS_0045_01_(4, 7)', (2248433.0, 27.278729999999999)), ('network6_QNS_0045_01_(3, 4)', (922289.0, 27.979320000000001)), ('network6_QNS_0045_01_(1, 2)', (396823.0, 29.924759999999999)), ('network6_QNS_0045_01_(4, 6)', (2897317.0, 30.266200000000001)), ('network6_QNS_0045_01_(0, 4)', (520923.0, 31.040569999999999)), ('network6_QNS_0045_01_(4, 5)', (6358.0, 32.68)), ('network6_QNS_0045_01_(2, 4)', (3622715.0, 35.321170000000002)), ('network6_QNS_0045_01_(2, 7)', (364815.0, 37.499250000000004)), ('network6_QNS_0045_01_(0, 1)', (145240.0, 38.878059999999998)), ('network6_QNS_0045_01_(0, 7)', (224456.0, 46.5182)), ('network6_QNS_0045_01_(0, 3)', (1692.0, 56.884950000000003)), ('network6_QNS_0045_01_(6, 7)', (280955.0, 57.616190000000003)), ('network6_QNS_0045_01_(3, 7)', (2012.0, 71.302719999999994)), ('network6_QNS_0045_01_(0, 2)', (1660.0, 84.085009999999997))]</p>
|
<p>I edited my answer: you are not using a dictionary, you have a list of tuples.</p>
<pre><code>import csv

with open('output.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerow(['Subject', 'Sij', 'gij'])
    for row in sorted_combined:
        # each row looks like (subject, (Sij, gij))
        writer.writerow([row[0], row[1][0], row[1][1]])
</code></pre>
<p>Just like you had in your original try, csv.writer is the way to go</p>
|
python|pandas|csv|sorting|dictionary
| 1 |
1,903,875 | 54,471,006 |
line 6, in <module> get_ipython().run_line_magic('matplotlib', 'inline') AttributeError: 'NoneType' object has no attribute 'run_line_magic'
|
<p>I'm new to Python. I am getting an error with the following Python code.</p>
<p>I am running this code on Python 3.6.5. By the way, I have installed IPython with <code>pip install ipython</code>.</p>
<p>my code is : </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import somoclu
from IPython import get_ipython
get_ipython().run_line_magic('matplotlib', 'inline')
</code></pre>
<p>the result : </p>
<pre><code>line 6, in <module>
get_ipython().run_line_magic('matplotlib', 'inline')
AttributeError: 'NoneType' object has no attribute 'run_line_magic'
</code></pre>
|
<pre><code>from IPython import get_ipython

ip = get_ipython()
if ip is not None:  # get_ipython() returns None when run as a plain script
    ip.run_line_magic('matplotlib', 'inline')
</code></pre>
<p>Note that <code>run_line_magic</code> takes the magic name and its argument separately, and that <code>get_ipython()</code> only returns a shell object when the code runs inside IPython/Jupyter; run as a plain Python script it returns <code>None</code>, which is exactly what the traceback shows.</p>
|
python
| 0 |
1,903,876 | 71,409,752 |
Training with threshold in PyTorch
|
<p>I have a neural network, which produces a single value when excited with input. I need to use this value returned by the network to threshold another array. The result of this threshold operation is used to compute a loss function (the value of threshold is not known before hand and needs to be arrived at by training).
Following is an MWE</p>
<pre><code>import torch
x = torch.randn(10, 1) # Say this is the output of the network (10 is my batch size)
data_array = torch.randn(10, 2) # This is the data I need to threshold
ground_truth = torch.randn(10, 2) # This is the ground truth
mse_loss = torch.nn.MSELoss() # Loss function
# Threshold
thresholded_vals = data_array * (data_array >= x) # Returns zero in all places where the value is less than the threshold, the value itself otherwise
# Compute loss and gradients
loss = mse_loss(thresholded_vals, ground_truth)
loss.backward() # Throws error here
</code></pre>
<p>Since the operation of thresholding returns a tensor array that is devoid of any gradients the <code>backward()</code> operation throws error.</p>
<p>How does one train a network in such a case?</p>
|
<p>Your threshold function is not differentiable in the threshold, therefore <code>torch</code> does not calculate the gradient for the threshold which is why your example is not working.</p>
<pre><code>import torch
x = torch.randn(10, 1, requires_grad=True) # Say this is the output of the network (10 is my batch size)
data_array = torch.randn(10, 2, requires_grad=True) # This is the data I need to threshold
ground_truth = torch.randn(10, 2) # This is the ground truth
mse_loss = torch.nn.MSELoss() # Loss function
# Threshold
thresholded_vals = data_array * (data_array >= x) # Returns zero in all places where the value is less than the threshold, the value itself otherwise
# Compute loss and gradients
loss = mse_loss(thresholded_vals, ground_truth)
loss.backward() # Throws error here
print(x.grad)
print(data_array.grad)
</code></pre>
<p>Output:</p>
<pre><code>None #<- for the threshold x
tensor([[ 0.1088, -0.0617], #<- for the data_array
[ 0.1011, 0.0000],
[ 0.0000, 0.0000],
[-0.0000, -0.0000],
[ 0.2047, 0.0973],
[-0.0000, 0.2197],
[-0.0000, 0.0929],
[ 0.1106, 0.2579],
[ 0.0743, 0.0880],
[ 0.0000, 0.1112]])
</code></pre>
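<p>If you do need gradients with respect to the threshold itself, a common workaround (not part of the original answer) is to replace the hard step with a smooth surrogate such as a steep sigmoid; the steepness <code>k</code> below is an arbitrary choice:</p>
<pre><code>k = 50.0  # larger k -> closer to a hard threshold
soft_mask = torch.sigmoid(k * (data_array - x))   # ~1 where data_array >= x, ~0 elsewhere
thresholded_vals = data_array * soft_mask         # differentiable in both data_array and x

loss = mse_loss(thresholded_vals, ground_truth)
loss.backward()  # x.grad is now populated
</code></pre>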
|
python|machine-learning|deep-learning|pytorch|gradient-descent
| 1 |
1,903,877 | 39,292,051 |
portalocker does not seem to lock
|
<p>I have a sort of checkpoint file which I wish to modify sometimes by various python programs. I load the file, try to lock it using portalocker, change it, than unlock and close it.</p>
<p>However, portalocker does not work in the simplest case.
I created a simple file:</p>
<pre><code>$echo "this is something here" >> test
$python
Python 3.5.2 (default, Jul 5 2016, 12:43:10)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import portalocker
>>> f = open("test",'w')
>>> portalocker.lock(f, portalocker.LOCK_EX)
</code></pre>
<p>Meanwhile I can still open it in another terminal:</p>
<pre><code>$python
Python 3.5.2 (default, Jul 5 2016, 12:43:10)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> fl = open("test",'w')
>>> fl.write("I can still overwrite this\n")
>>> fl.close()
</code></pre>
<p>Then I close the first one, and check the file:</p>
<pre><code>>>> portalocker.unlock(f)
>>> f.close()
>>>
$ cat test
I can still overwrite this
</code></pre>
<p>What am I doing wrong?</p>
|
<p>The problem is that, by default, Linux uses advisory locks. <a href="https://stackoverflow.com/questions/12062466/mandatory-file-lock-on-linux">To enable mandatory locking (which you are referring to) the filesystem needs to be mounted with the <code>mand</code> option</a>. The advisory locking system actually has several advantages but can be confusing if you're not expecting it.</p>
<p>To make sure your code works properly in both cases I would suggest encapsulating both of the open calls with the locker.</p>
<p>For example, try this in 2 separate Python instances:</p>
<pre><code>import portalocker
with portalocker.Lock('test') as fh:
fh.write('first instance')
print('waiting for your input')
input()
</code></pre>
<p>Now from a second instance:</p>
<pre><code>import portalocker
with portalocker.Lock('test') as fh:
fh.write('second instance')
</code></pre>
<p>Ps: I'm the maintainer of the portalocker package</p>
|
python|python-3.x|locking
| 6 |
1,903,878 | 52,751,533 |
Need suggestions for fast and efficient way to parse AWS Pricing List json files
|
<p>I am trying to parse pricing list json files for some aws services. After parsing I am randomly picking a key from key list to get the data. Currently my code loads the json files one at a time, which takes time. I would like to get some suggestions on how I can speed up this process.</p>
|
<p>Ended up creating a database on a Redis server.</p>
|
json|python-3.x|amazon-web-services
| 0 |
1,903,879 | 52,562,753 |
tkinter on OSX: open new window instead of tab
|
<p>I want to open a new window in my tkinter app (python 3.6.5) on mac OSX (10.14). <a href="https://stackoverflow.com/questions/17261028/how-do-i-make-a-pop-up-in-tkinter-when-a-button-is-clicked">Existing answers say to use TopLevel</a>. The following code works if System Preferences -> Dock -> "Prefer tabs when opening documents" is set to "In Full Screen Only". However, when that preference is set to "Always", the app preforms differently and opens TopLevel in a new tab, which is not my desired behavior (I'm actually looking for a blocking pop-up alert window regardless of the user's system preference).</p>
<pre><code>import sys
from tkinter import *
ABOUT_TEXT = "I want this to open in a new window, not a tab"
def newWindow():
toplevel = Toplevel(app)
label1 = Label(toplevel, text=ABOUT_TEXT, height=0, width=100)
label1.pack()
app = Tk()
app.title("tkinter: new window on mac")
app.geometry("500x300+200+200")
b = Button(app, text="Quit", width=20, command=app.destroy)
button1 = Button(app, text="Open new window", width=20, command=newWindow)
b.pack(side='bottom',padx=0,pady=0)
button1.pack(side='bottom',padx=5,pady=5)
app.mainloop()
</code></pre>
|
<p>Not sure why, but using <code>root.resizable(False, False)</code> to stop the window size being changed means a new window is created rather than a tab.</p>
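<p>For reference, a minimal sketch of where that call would go in the question's code; whether it also needs to be set on the <code>Toplevel</code> itself is an assumption worth testing:</p>
<pre><code>app = Tk()
app.resizable(False, False)  # reportedly stops macOS from opening Toplevels as tabs

def newWindow():
    toplevel = Toplevel(app)
    toplevel.resizable(False, False)  # assumption: mirror the setting on the new window
</code></pre>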
|
python|macos|tkinter
| 0 |
1,903,880 | 52,846,584 |
How to make QTextEdit automatically save text in PyQt5?
|
<p>How can I make QTextEdit save whatever I type into it <strong>automatically</strong> without having to click a button? Is it possible to do it in PyQt5? So far I have only been able to do it with button binding. </p>
<pre><code>def save_text():
text=textedit.toPlainText()
with open('mytextfile.txt', 'w') as f:
f.write(text)
button.clicked.connect(save_text)
</code></pre>
|
<p>If you want your method to be called every time you change the text in the QTextEdit just use the "textChanged" signal. I don't think it makes sense to store the text to a file in your case, but here is a working code for what you asked for:</p>
<pre><code>import sys
from PyQt5.QtWidgets import *
class MyMainWindow(QMainWindow):
def __init__(self):
super(MyMainWindow, self).__init__()
layout = QHBoxLayout()
centralWidget = QWidget()
centralWidget.setLayout(layout)
self.setCentralWidget(centralWidget)
self.textedit = QTextEdit()
self.textedit.textChanged.connect(self.save_text)
layout.addWidget(self.textedit)
def save_text(self):
text = self.textedit.toPlainText()
with open('mytextfile.txt', 'w') as f:
f.write(text)
if __name__ == "__main__":
app = QApplication(sys.argv)
form = MyMainWindow()
form.show()
sys.exit(app.exec_())
</code></pre>
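<p>One design note: <code>textChanged</code> fires on every keystroke, so the file above is rewritten constantly. If that becomes a problem, here is a sketch of a debounce using a single-shot <code>QTimer</code> (the 500 ms delay is an arbitrary choice):</p>
<pre><code>from PyQt5.QtCore import QTimer

# inside MyMainWindow.__init__, instead of connecting textChanged directly:
self.save_timer = QTimer(self)
self.save_timer.setSingleShot(True)
self.save_timer.setInterval(500)  # save 500 ms after the last edit
self.save_timer.timeout.connect(self.save_text)
self.textedit.textChanged.connect(self.save_timer.start)
</code></pre>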
|
python|pyqt|pyqt5|qtextedit
| 0 |
1,903,881 | 47,951,760 |
How to run multiple asynchronous processes in Python using multiprocessing?
|
<p>I need to run multiple background asynchronous functions, using multiprocessing. I have working Popen solution, but it looks a bit unnatural. Example:</p>
<pre><code>from time import sleep
from multiprocessing import Process, Value
import subprocess
def worker_email(keyword):
subprocess.Popen(["python", "mongoworker.py", str(keyword)])
return True
keywords_list = ['apple', 'banana', 'orange', 'strawberry']
if __name__ == '__main__':
for keyword in keywords_list:
# Do work
p = Process(target=worker_email, args=(keyword,))
p.start()
p.join()
</code></pre>
<p>If I try not to use Popen, like:</p>
<pre><code>def worker_email(keyword):
print('Before:' + keyword)
sleep(10)
print('After:' + keyword)
return True
</code></pre>
<p>Functions run one-by-one, no async. So, how to run all functions at the same time without using Popen?</p>
<p><strong>UPD:</strong> I'm using multiprocessing.Value to return results from Process, like:</p>
<pre><code>def worker_email(keyword, func_result):
sleep(10)
print('Yo:' + keyword)
func_result.value = 1
return True
func_result = Value('i', 0)
p = Process(target=worker_email, args=(doc['check_id'],func_result))
p.start()
# Change status
if func_result.value == 1:
stream.update_one({'_id': doc['_id']}, {"$set": {"status": True}}, upsert=False)
</code></pre>
<p>But it doesn't work without .join(). Any ideas how to make it work or similar way? :)</p>
|
<p>If you just remove the line <code>p.join()</code> it should work.
You only need <code>p.join()</code> if you want to wait for the process to finish before executing further. At the end of the program Python waits for all processes to finish before closing, so you don't need to worry about that.</p>
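<p>For reference, a minimal sketch of the usual pattern: start all processes first, and only join (if at all) after the loop, so the workers run concurrently:</p>
<pre><code>if __name__ == '__main__':
    procs = []
    for keyword in keywords_list:
        p = Process(target=worker_email, args=(keyword,))
        p.start()           # start every worker without waiting
        procs.append(p)

    for p in procs:         # optional: block here until all workers are done
        p.join()
</code></pre>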
|
python|python-3.x|asynchronous|multiprocessing|python-multiprocessing
| 2 |
1,903,882 | 47,829,325 |
Multi-line function calls with strings in python
|
<p>I have a function call in python 2.7:</p>
<pre><code>execute_cmd('/sbin/ip addr flush dev '
+ args.interface
+ ' && '
+ '/sbin/ifdown '
+ args.interface
+ ' ; '
+ '/sbin/ifup '
+ args.interface
+ ' && '
+ '/sbin/ifconfig | grep '
+ args.interface)
</code></pre>
<p>This is running fine, but <code>pylint</code> is complaining with the following warning messages:</p>
<pre><code>C:220, 0: Wrong continued indentation (remove 1 space).
+ args.interface
|^ (bad-continuation)
C:221, 0: Wrong continued indentation (remove 1 space).
+ ' && '
|^ (bad-continuation)
C:222, 0: Wrong continued indentation (remove 1 space).
+ '/sbin/ifconfig | grep '
|^ (bad-continuation)
.
.
.
</code></pre>
<p>What is the correct way to call a function in python with string argument(s) which spans across multiple lines?.</p>
|
<p><a href="https://www.python.org/dev/peps/pep-0008/#indentation" rel="nofollow noreferrer">PEP 8 states</a> that you can also start a long argument list (or anything within brackets, really) at the next line with one extra indentation level:</p>
<pre><code>execute_cmd(
'/sbin/ip addr flush dev ' +
args.interface +
' && ' +
'/sbin/ifdown ' +
args.interface +
' ; ' +
'/sbin/ifup ' +
args.interface +
' && ' +
'/sbin/ifconfig | grep ' +
args.interface
)
</code></pre>
<p>As I said in my comment, <a href="https://www.python.org/dev/peps/pep-0008/#should-a-line-break-before-or-after-a-binary-operator" rel="nofollow noreferrer">binary operators should be put at the end of a line break</a>, not at the start of a new one.</p>
<hr>
<p>What you can also do is use an <code>fstring</code> (python >3.6) and just drop the <code>+</code>s:</p>
<pre><code>execute_cmd(
f'/sbin/ip addr flush dev {args.interface} && /sbin/ifdown'
f' {args.interface} ; /sbin/ifup {args.interface} && '
f'/sbin/ifconfig | grep {args.interface}'
)
</code></pre>
<p>The same with the <code>.format</code> function (from python .. 2.6 onwards I think?):</p>
<pre><code>execute_cmd(
    ('/sbin/ip addr flush dev {0} && /sbin/ifdown' +
     ' {0} ; /sbin/ifup {0} && ' +
     '/sbin/ifconfig | grep {0}').format(args.interface)
)
</code></pre>
<p>Note the parentheses around the whole concatenation: without them, <code>.format</code> would bind only to the last string literal and the earlier <code>{0}</code> placeholders would be left unformatted.</p>
|
python
| 0 |
1,903,883 | 37,187,564 |
operations with nested lists in python
|
<p>I'm trying to iterate through a nested list and make some changes to the elements. After changing them I'd like to save results in the same nested list.
For example, I have</p>
<pre><code>text = [['I', 'have', 'a', 'cat'], ['this', 'cat', 'is', 'black'], ['such', 'a', 'nice', 'cat']]
</code></pre>
<p>I want to get a list of lists with elements slightly changed. For example: </p>
<pre><code>text = [['I_S', 'have', 'a_A', 'cat'], ['this', 'cat_S', 'is', 'black_A'], ['such', 'a', 'nice', 'cat_S']]
</code></pre>
<p>Firstly, I go through each list, then go through each item in a list and then apply additional code to make changes needed. But how to return the nested list back after operations? This is what I do:</p>
<pre><code>for tx in text:
for t in tx:
#making some operations with each element in the nested list.
#using if-statements here
result.append()
</code></pre>
<p>And what I've got the single list with all the changed elements from the nested list</p>
<pre><code>result = ['I_S', 'have', 'a_A', 'cat', 'this', 'cat_S', 'is', 'black_A', 'such', 'a', 'nice', 'cat_S']
</code></pre>
<p>I need to keep the nested list because it's actually the sentences from the text. </p>
|
<p>To create a nested list as output try this:</p>
<pre><code>result = []
for sentence in text:
    temp = []
    for t in sentence:
        word_modified = t
        # making some operations with each element in the nested list,
        # using if-statements here
        temp.append(word_modified)
    result.append(temp)
result
</code></pre>
<p>If you just copy-paste this code, <code>result</code> will be equal to <code>text</code>. But since in the loop <code>t</code> represents each word separately, you should be able to modify it as you wish.</p>
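<p>Once the per-word logic is in place, the same idea collapses to a nested list comprehension; the <code>tag</code> rule below is a made-up placeholder for your real if-statements:</p>
<pre><code>def tag(word):
    # hypothetical rule -- replace with your actual conditions
    return word + '_S' if word == 'cat' else word

result = [[tag(word) for word in sentence] for sentence in text]
</code></pre>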
|
python|list|nested|nested-loops
| 3 |
1,903,884 | 37,213,362 |
Sphinx 1.4+ and block literals in Returns with sphinx-napoleon don't work anymore
|
<p>I am writing documentation for a python project that follows this guide:</p>
<p><a href="http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html" rel="nofollow">http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html</a></p>
<p>To be exact this part of the guide:</p>
<p>The <code>Returns</code> section supports any reStructuredText formatting, including literal blocks::</p>
<p>My code looks as follows:</p>
<pre><code>Returns:
None:
::
{
"status": "update",
"success": True,
}
</code></pre>
<p>For sphinx 1.3.5-1.3.6 it works as expected.</p>
<p>For sphinx 1.4.0-1.4.1 it throws error like that:</p>
<pre><code>api/v0_0/views.py:docstring of api.v0_0.views.add_gallery:19: ERROR: Unexpected indentation.
api/v0_0/views.py:docstring of api.v0_0.views.add_gallery:21: WARNING: Block quote ends without a blank line; unexpected unindent.
</code></pre>
|
<p>This was bug. I have created a ticket with working example how to reproduce it. Guys working on Sphinx have fixed it week (or so) later. Bumping version will solve it. </p>
|
python|python-sphinx|sphinx-napoleon
| 0 |
1,903,885 | 66,097,388 |
How to split a python list of lists into 3 separate lists according to the value in each sublist's first entry?
|
<p>How do you use Python to accomplish the following:</p>
<p>I have the following list</p>
<pre><code>StateCityList = [["Kansas","Overland Park"],
["Kansas","Lenexa"],
["Kansas","Olathe"],
["Missouri","Kansas City"],
["Missouri","Raytown"],
["Missouri","Independence"],
["Texas","Dallas"],
["Texas","Houston"],
["Texas","San Antonio"]]
</code></pre>
<p>I want to get all the cities in a certain state into a separate list like this</p>
<pre><code> Kansas =[["Kansas","Overland Park],
["Kansas","Lenexa"],
["Kansas","Olathe"]]
Missouri = [["Missouri","Kansas City"]
["Missouri","Raytown"]
["Missouri","Independence"]]
Texas = [["Texas","Dallas"]
["Texas","Houston"]
["Texas","San Antonio"]]
</code></pre>
<p>Thanks</p>
|
<p>You can use <a href="https://docs.python.org/3/library/operator.html#operator.itemgetter" rel="nofollow noreferrer"><code>operator.itemgetter</code></a> and <a href="https://docs.python.org/3/library/itertools.html#itertools.groupby" rel="nofollow noreferrer"><code>itertools.groupby</code></a>:</p>
<pre><code>>>> from itertools import groupby
>>> from operator import itemgetter
>>> {k: list(g) for k, g in groupby(StateCityList, key=itemgetter(0))}
{'Kansas': [['Kansas', 'Overland Park'],
['Kansas', 'Lenexa'],
['Kansas', 'Olathe']],
'Missouri': [['Missouri', 'Kansas City'],
['Missouri', 'Raytown'],
['Missouri', 'Independence']],
'Texas': [['Texas', 'Dallas'],
['Texas', 'Houston'],
['Texas', 'San Antonio']]}
</code></pre>
<p><strong>NOTE:</strong>
If StateCityList is not sorted by state name, then use this:</p>
<pre><code>{k: list(g) for k,g in groupby(sorted(StateCityList, key=itemgetter(0)), itemgetter(0))}
</code></pre>
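<p>And if you really want the three separate named lists from the question, pull them out of the resulting dict:</p>
<pre><code>by_state = {k: list(g) for k, g in
            groupby(sorted(StateCityList, key=itemgetter(0)), key=itemgetter(0))}
Kansas, Missouri, Texas = by_state['Kansas'], by_state['Missouri'], by_state['Texas']
</code></pre>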
|
python|list|dictionary
| 3 |
1,903,886 | 72,790,044 |
Conda says package is already installed but doesn't list it and won't install it again. Python code can't import the library
|
<p>Conda says the package is already installed and will not install it again, but when I list the packages in the environment, there are no packages installed.</p>
<p>When I try to import the package in a notebook file, it fails.
<a href="https://i.stack.imgur.com/cmKaU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cmKaU.png" alt="enter image description here" /></a></p>
<p>Running a terminal shell launched from JupyterLab:</p>
<ol>
<li>Use pip to uninstall bashplotlib - <strong>that works</strong></li>
<li>Use Conda to activate an environment -- <strong>That works</strong></li>
<li>Use Conda to install bashplotlib -- It fails because it's already supposedly installed</li>
<li>Use Conda to print the packages installed in the Conda environment -- <strong>There are none listed</strong></li>
</ol>
<blockquote>
<pre><code>PS C:\Users\nicomp> pip uninstall bashplotlib
Uninstalling bashplotlib-0.6.5:
Would remove:
c:\users\nicomp\anaconda3\lib\site-packages\bashplotlib-0.6.5-py2.7.egg-info
c:\users\nicomp\anaconda3\lib\site-packages\bashplotlib\*
c:\users\nicomp\anaconda3\scripts\hist-script.py
c:\users\nicomp\anaconda3\scripts\scatter-script.py
c:\users\nicomp\anaconda3\scripts\scatter.exe
Proceed (Y/n)? y
Successfully uninstalled bashplotlib-0.6.5
PS C:\Users\nicomp> conda activate fooEnvironment
PS C:\Users\nicomp> conda info --envs
# conda environments:
#
base * C:\Users\nicomp\anaconda3
bashplotlibEnvironment C:\Users\nicomp\anaconda3\envs\bashplotlibEnvironment
condaTestEnvironment C:\Users\nicomp\anaconda3\envs\condaTestEnvironment
fooEnvironment C:\Users\nicomp\anaconda3\envs\fooEnvironment
jupyterlab-debugger C:\Users\nicomp\anaconda3\envs\jupyterlab-debugger
microservices C:\Users\nicomp\anaconda3\envs\microservices
ml C:\Users\nicomp\anaconda3\envs\ml
someEnvironment C:\Users\nicomp\anaconda3\envs\someEnvironment
zzz C:\Users\nicomp\anaconda3\envs\zzz
PS C:\Users\nicomp> conda install -c conda-forge bashplotlib
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
PS C:\Users\nicomp> conda list -n fooEnvironment
# packages in environment at C:\Users\nicomp\anaconda3\envs\fooEnvironment:
#
# Name Version Build Channel
PS C:\Users\nicomp>
</code></pre>
</blockquote>
|
<h3>Activation Failing</h3>
<p>The <code>conda info --envs</code> output indicates that the <code>conda activate</code> command is not working, since it shows that <strong>base</strong> is still activated (that's what "<code>*</code>" indicates). That is, despite the efforts, the package is getting installed in <strong>base</strong>.</p>
<h3>Specifying Target Environment</h3>
<p>I can't answer why the environment activation is broken (this can be specific to PowerShell or the Jupyter terminal - try searching), but I can at least recommend a more robust installation command. Rather than relying on environment activation, most Conda commands support specification of the target environment using the <code>--name,-n</code> or <code>--prefix,-p</code> flags. In this case,</p>
<pre class="lang-bash prettyprint-override"><code>conda install -n fooEnvironment -c conda-forge bashplotlib
</code></pre>
<p>would work no matter what environment happens to be activated.</p>
<p>I'd encourage this as a good habit to adopt because it makes the command less context-sensitive.</p>
|
python|pip|conda
| 2 |
1,903,887 | 39,471,932 |
Tkinter grid method
|
<p>I'm using Tkinter to create a GUI for my computer science coursework based on steganography. I'm using the <code>.grid()</code> function on the widgets in my window to lay them out, however I can't get this particular part to look how I want it to.</p>
<p>Here's what my GUI currently looks like: <a href="http://imgur.com/LNEZtEL" rel="nofollow">http://imgur.com/LNEZtEL</a>
(or just the part with the error).</p>
<p>I want the remaining characters label to sit directly underneath the text entry box, but for some reason row 4 starts a large way down underneath the box. If I label the GUI with columns and rows anchored north west it looks like this: <a href="http://imgur.com/a/V7dTW" rel="nofollow">http://imgur.com/a/V7dTW</a>.</p>
<p>If I shrink the image box on the left, it looks how I want, however I don't want the image this small: <a href="http://imgur.com/a/0Dudu" rel="nofollow">http://imgur.com/a/0Dudu</a>.</p>
<p>The image box has a rowspan of 2, so what is causing the 4th row to start so low down from the text entry box? Here's roughly what I want the GUI to look like: <a href="http://imgur.com/a/ck04A" rel="nofollow">http://imgur.com/a/ck04A</a>.</p>
<p>Full code:</p>
<pre><code>imageButton = Button(root, text="Add Image", command = add_image)
imageButton.grid(row = 2, columnspan = 2, sticky = W, padx = 30, pady = 20)
steg_widgets.append(imageButton)
image = Image.open("square.jpg")
image = image.resize((250,250))
photo = ImageTk.PhotoImage(image)
pictureLabel = Label(root, image = photo)
pictureLabel.image = photo
pictureLabel.grid(column = 0, row = 3, columnspan = 2, rowspan = 2, padx = 20, pady = (0, 20), sticky = NW)
steg_widgets.append(pictureLabel)
nameLabel = Label(root, text = "Brandon Edwards - OCR Computer Science Coursework 2016/2017")
nameLabel.grid(row = 0, column = 2, columnspan = 2, padx = (0, 20), pady = 10)
steg_widgets.append(nameLabel)
inputTextLabel = Label(root, text = "Enter text:")
inputTextLabel.grid(row = 2, column = 2, sticky = W)
steg_widgets.append(inputTextLabel)
startButton = Button(root, text="Go!", command = start_stega)
startButton.grid(row = 2, column = 2, sticky = E)
steg_widgets.append(startButton)
inputTextBox = Text(root, height = 10, width = 30)
inputTextBox.grid(row = 3, column = 2, sticky = NW)
steg_widgets.append(inputTextBox)
maxCharLabel = Label(root, text = "Remaining characters:")
maxCharLabel.grid(row = 4, column = 2, sticky = NW)
steg_widgets.append(maxCharLabel)
saveButton = Button(root, text="Save Image", command = save_image)
saveButton.grid(row = 2, column = 3, sticky = W)
steg_widgets.append(saveButton)
</code></pre>
|
<p>I recommend breaking your UI down into logical sections, and laying out each section separately. </p>
<p>For example, you clearly have two distinct sections: the image and button on the left, and the other widgets on the right. Start by creating containers for those two groups:</p>
<pre><code>import Tkinter as tk
...
left_side = tk.Frame(root)
right_side = tk.Frame(root)
</code></pre>
<p>Since they are side-by-side, <code>pack</code> is the simplest way to lay them out:</p>
<pre><code>left_side.pack(side="left", fill="y", expand=False)
right_side.pack(side="right", fill="both", expand=True)
</code></pre>
<p>Next, you can focus on just one side. You can use <code>pack</code> or <code>grid</code>. This uses <code>grid</code> for illustrative purposes:</p>
<pre><code>image = tk.Canvas(left_side, ...)
button = tk.Button(left_side, ...)
left_side.grid_rowconfigure(0, weight=1)
left_side.grid_columnconfigure(0, weight=1)
image.grid(row=0, column=0, sticky="nw")
button.grid(row=1, column=0, sticky="n")
</code></pre>
<p>Finally, work on the right side. Since widgets are stacked top-to-bottom, <code>pack</code> is the natural choice:</p>
<pre><code>l1 = tk.Label(right_side, text="Enter text:")
l2 = tk.Label(right_side, text="Remaining characters")
text = tk.Text(right_side)
l1.pack(side="top", fill="x")
text.pack(side="top", fill="both", expand=True)
l2.pack(side="top", fill="x")
</code></pre>
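<p>As a rule of thumb: <code>pack</code> works well for widgets stacked top-to-bottom or laid side-by-side, while <code>grid</code> shines when widgets must line up across rows and columns. Just never mix the two managers inside the same container, or Tkinter can hang trying to reconcile their size negotiations.</p>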
|
python|tkinter
| 1 |
1,903,888 | 38,592,908 |
How do you setup simple timer between two times when the other time is the next day?
|
<p>Python noob here</p>
<pre><code>from datetime import datetime, time
now = datetime.now()
now_time = now.time()
if now_time >= time(10,30) and now_time <= time(13,30):
print "yes, within the interval"
</code></pre>
<p>I would like the timer to work between 10:30 AM today and 10 AM the next day. Changing time(13,30) to time(10,00) will not work, because I need to tell Python that 10:00 belongs to the next day. I should probably use the datetime class but don't know how. Any tips or examples appreciated.</p>
|
<p>The <code>combine</code> method on the <code>datetime</code> class will help you a lot, as will the <code>timedelta</code> class. Here's how you would use them:</p>
<pre><code>from datetime import datetime, timedelta, date, time
today = date.today()
tomorrow = today + timedelta(days=1)
interval_start = datetime.combine(today, time(10,30))
interval_end = datetime.combine(tomorrow, time(10,00))
time_to_check = datetime.now() # Or any other datetime
if interval_start <= time_to_check <= interval_end:
print "Within the interval"
</code></pre>
<p>Notice how I did the comparison. Python lets you chain comparisons like that, which is usually more succinct than writing <code>if start <= x and x <= end</code>.</p>
<p>P.S. Read <a href="https://docs.python.org/2/library/datetime.html" rel="nofollow">https://docs.python.org/2/library/datetime.html</a> for more details about these classes.</p>
|
python
| 1 |
1,903,889 | 32,888,859 |
Checking Boolean variables in a sequence
|
<p>So, I have an array containing <code>True</code> and <code>False</code> values that are determined by user input. An example could look like this:</p>
<pre><code>array = [True, True, False, False, True, True, True]
</code></pre>
<p>I want to check whether these fulfill certain conditions. My current attempt at this is:</p>
<pre><code>if (array[0], array[1], array[4], array[5], array[6]) is False and (array[2], array[3]) is True:
</code></pre>
<p>Obviously, that is completely incorrect and not doing what I want it to do, but I honestly can't find the correct method anywhere.</p>
|
<p>You can use <a href="https://docs.python.org/2/library/functions.html#all" rel="nofollow noreferrer"><strong><code>all</code></strong></a> and <a href="https://docs.python.org/2/library/functions.html#any" rel="nofollow noreferrer"><strong><code>any</code></strong></a>. These check whether all or any elements of a sequence are equivalent to <code>True</code>:</p>
<pre><code>>>> array = [True, True, False, False, True]
>>> not any(array[0:2]) and all(array[2:4]) and not any(array[4:7])
False
</code></pre>
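<p>If you'd rather keep the non-contiguous index groups from the question instead of slices, <code>operator.itemgetter</code> can gather them first (a small sketch using the question's original 7-element array; it yields <code>False</code> because that array doesn't satisfy the condition):</p>
<pre><code>>>> from operator import itemgetter
>>> array = [True, True, False, False, True, True, True]
>>> not any(itemgetter(0, 1, 4, 5, 6)(array)) and all(itemgetter(2, 3)(array))
False
</code></pre>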
|
python|boolean
| 6 |
1,903,890 | 14,243,401 |
Python 3 detailed control of I/O of another process
|
<p>In python 3 (on Linux or MacOSX10.8) how can I get a parent process to read just a prompt (that does not include <code>\n</code>) issued by a subprocess, not the entire buffer till <code>\n</code>?</p>
<pre><code># program names.py
print("I am Joe.") #1print
name = input("What is your name? ") #2prompt
print("Hi", name, "!") #3print
# program parent.py (in python 3.3.0)
import subprocess
import sys
p = subprocess.Popen([sys.executable, "names.py"],
bufsize=0,
stdin =subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=False,
universal_newlines=True
)
print(p.stdout.readline(), end='')
p.stdin.write("Sue\n")
# The next print will produce the output of 2prompt + 3print:
print(p.stdout.readline(), end='')
</code></pre>
<p>When I run <code>parent.py</code> it prints:</p>
<pre><code>I am Joe.
What is your name? Hi Sue !
</code></pre>
<p>How to modify <code>parent.py</code> so that it prints:</p>
<pre><code>I am Joe.
What is your name?
Hi Sue !
</code></pre>
<p>That is, how to extract separately the output produced by prompt and print?</p>
|
<p><code>readline()</code> reads from <code>names.py</code> until the next <code>\n</code>, whereas what you need is a "read as much as there is right now". Time is an additional factor here, since you basically have to detect when <code>names.py</code> is waiting for input.</p>
<p>Basically, you need a <code>read()</code>-operation with a timeout. You could start a separate thread that reads from <code>names.py</code> byte by byte into a buffer. You can then <code>join(timeout)</code> this thread and access its buffer.</p>
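<p>A minimal sketch of that idea (assuming the <code>names.py</code> from the question; the fixed <code>sleep</code> is a crude stand-in for real idle detection):</p>
<pre><code>import subprocess
import sys
import threading
import time

class PipeReader(threading.Thread):
    """Drains a pipe one character at a time into a shared buffer."""
    def __init__(self, stream):
        super().__init__(daemon=True)
        self.stream = stream
        self.buf = []
        self.lock = threading.Lock()

    def run(self):
        while True:
            ch = self.stream.read(1)
            if not ch:                    # EOF: the child closed its stdout
                break
            with self.lock:
                self.buf.append(ch)

    def drain(self):
        with self.lock:
            out, self.buf = ''.join(self.buf), []
        return out

p = subprocess.Popen([sys.executable, "names.py"], bufsize=0,
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=True)
reader = PipeReader(p.stdout)
reader.start()

time.sleep(0.5)                # crude: give names.py time to reach the prompt
print(reader.drain(), end='')  # "I am Joe.\nWhat is your name? "
p.stdin.write("Sue\n")
p.stdin.flush()
p.wait()
reader.join(timeout=1)         # let the reader thread finish, as suggested above
print(reader.drain(), end='')  # "Hi Sue !"
</code></pre>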
|
python|io|subprocess|pipe|python-3.3
| 2 |
1,903,891 | 12,632,767 |
Reading from flushed vs unflushed buffers
|
<p>I've got a script <code>parent.py</code> trying to read stdout from a subprocess <code>sub.py</code> in Python.</p>
<p>The parent <code>parent.py</code>:</p>
<pre><code>#!/usr/bin/python
import subprocess
p = subprocess.Popen("sub.py", stdout=subprocess.PIPE)
print p.stdout.read(1)
</code></pre>
<p>And the subprocess, <code>sub.py</code>:</p>
<pre><code>#!/usr/bin/python
print raw_input( "hello world!" )
</code></pre>
<p>I would expect running <code>parent.py</code> to print the 'h' from "hello world!". Actually, it hangs. I can only get my expected behaviour by adding <code>-u</code> to <code>sub.py</code>'s shebang line.</p>
<p>This confuses me because the <code>-u</code> switch makes no difference when <code>sub.py</code> is run directly from a shell; the shell is somehow privy to the un-flushed output stream, unlike <code>parent.py</code>.</p>
<p>My goal is to run a C program as the subprocess, so I won't be able to control whether or not it flushes stdout. How is it that a shell has better access to a process's stdout than Python running the same thing from <code>subprocess.Popen</code>? Am I going to be able to read such a stdout stream from a C program that doesn't flush its buffers?</p>
<p><b>EDIT:</b></p>
<p>Here is an updated example based on korylprince's comment...</p>
<pre><code>## capitalize.sh ##
#!/bin/sh
while [ 1 ]; do
read s
echo $s | tr '[:lower:]' '[:upper:]'
done
########################################
## parent.py ##
#!/usr/bin/python
from subprocess import Popen, PIPE
# cmd = [ 'capitalize.sh' ] # This would work
cmd = [ 'script', '-q', '-f', '-c', 'capitalize.sh', '/dev/null']
p = Popen(cmd, stdin=PIPE)
p.stdin.write("some string\n")
p.wait()
</code></pre>
<p>When running through <code>script</code>, I get steady printing of newlines (and if this were a Python subprocess, it'd raise an EOFError).</p>
|
<p>An alternative is</p>
<pre><code>p = subprocess.Popen(["python", "-u", "sub.py"], stdout=subprocess.PIPE)
</code></pre>
<p>or the suggestions <a href="https://stackoverflow.com/questions/107705/python-output-buffering">here</a>.</p>
<p>My experience is that yes, you will be able to read from most C programs without any extra effort. </p>
<p>The Python interpreter takes extra steps to buffer its output which is why it needs the <code>-u</code> switch to disable output buffering. Your typical C program won't do this.</p>
<p>I haven't run into any program (C or otherwise), other than the Python interpreter, that I expected to work this way and didn't when run as a subprocess.</p>
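<p>If you do hit a C program that block-buffers when writing to a pipe, one workaround on Linux is to launch it under coreutils' <code>stdbuf</code> (a sketch; <code>./myprog</code> is a placeholder, and this only helps programs that keep stdio's default buffering):</p>
<pre><code>p = subprocess.Popen(["stdbuf", "-o0", "./myprog"], stdout=subprocess.PIPE)
</code></pre>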
|
python|c|shell|ipc
| 1 |
1,903,892 | 23,184,401 |
Python 3 Is there anything that prints the order of a class's method calls?
|
<p>Yes, I can manually add a line of <code>print("call xxx")</code> to the start of each method's definition. But this seems a little silly.</p>
<p>For example:</p>
<pre><code>class Test():
def a(self):
return self.b()
def b(self):
return self.c()
def c(self):
return
</code></pre>
<p>And when you process <code>Test</code> with something <code>X</code>, and then you call <code>Test().a()</code>, the console will print something like:</p>
<pre><code>call -> a
call -> b
call -> c
</code></pre>
<p>So you can see how method <code>a</code> calls other methods; this is helpful for understanding the logic of some complicated classes (such as Django's <code>ListView</code>).</p>
<p>At first, I use metaclass to make <code>X</code>, that is:</p>
<pre><code>import functools
def print_info(func):
@functools.wraps(func)
def magic(*args,**kwargs):
print('call -> %s'%func.__name__)
return func(*args,**kwargs)
return magic
class BlackMeta(type):
def __new__(cls, name, parents, attrs):
new_attrs={}
for k,v in attrs.items():
if callable(v):
new_attrs[k] = print_info(v)
else:
new_attrs[k] = v
return type.__new__(cls, name, parents, new_attrs)
class Test(metaclass=BlackMeta):
def a(self):
return self.b()
def b(self):
return self.c()
def c(self):
return
</code></pre>
<p>It works. But when I add <code>BlackMeta</code> to a subclass of <code>DetailView</code> and call the <code>get</code> method, an error is raised:</p>
<p><img src="https://i.stack.imgur.com/F7wLm.png" alt="enter image description here"></p>
<p>So I give up and come here for help. It doesn't have to be a metaclass; anything that can show me the method call order is OK. Thanks.</p>
|
<p>You can use <code>sys.settrace</code> to register a function that gets called whenever a function gets called, like this:</p>
<pre><code>import sys
def trace(frame, event, arg):
if event == 'call':
print("%s (%s line %d)" % (frame.f_code.co_name, frame.f_code.co_filename, frame.f_lineno))
return trace
class Test():
def a(self):
return self.b()
def b(self):
return self.c()
def c(self):
return
def do_stuff():
t = Test()
t.a()
sys.settrace(trace)
do_stuff()
</code></pre>
<p>Output:</p>
<pre><code>do_stuff (x.py line 16)
a (x.py line 9)
b (x.py line 11)
c (x.py line 13)
</code></pre>
<p>Your trace function will be called whenever any function gets called anywhere, so the naive version here will produce a lot of output in a non-trivial program! You'd need to only print the output when it was relevant, e.g. by looking at <code>frame.f_code.co_filename</code>.</p>
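<p>For example, a sketch that only reports calls to code defined in one particular file (here assuming the class lives in <code>x.py</code>):</p>
<pre><code>import os
import sys

WATCHED = os.path.abspath('x.py')  # hypothetical: the file whose calls we care about

def trace(frame, event, arg):
    # Only report 'call' events originating from the watched file
    if event == 'call' and os.path.abspath(frame.f_code.co_filename) == WATCHED:
        print("call -> %s (line %d)" % (frame.f_code.co_name, frame.f_lineno))
    return trace

sys.settrace(trace)
</code></pre>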
|
python|django
| 2 |
1,903,893 | 22,961,681 |
Profiling CherryPy: convert results to graphic format
|
<p>I'm trying to profile my CherryPy web server, and as a result I have some <code>.prof</code> files. I can read them in text format using a web browser, as described in <a href="https://stackoverflow.com/questions/16630208/profiling-cherrypy?rq=1">this post</a>. But I need to export the results as a call tree to profile with, for example, KCacheGrind or Gprof2Dot.</p>
<p>But Gprof2Dot give me an error:</p>
<blockquote>
<p>profile_results>gprof2dot.py -f prof out.prof | dot -Tpng -o out.png</p>
<p>error: unexpected end of file</p>
</blockquote>
<p>And KCacheGrind doesn't know about <code>.prof</code> files...</p>
<p>Are there any ways to take a calltree in graphic format?</p>
<p>Thanks.</p>
|
<p>You need to use pstats.</p>
<pre><code>gprof2dot -f pstats out.prof | dot -Tpng -o out.png
</code></pre>
<p>CherryPy uses Python's cProfile/profile modules.</p>
<p><a href="https://code.google.com/p/jrfonseca/wiki/Gprof2Dot#python_profile" rel="nofollow">Here is the reference on the docs</a></p>
|
python|profiling|cherrypy
| 2 |
1,903,894 | 623,276 |
Best Resource for mysql + python 2.6 programming
|
<p>I need a great resource for interacting with MySql (version 5.0.45) with Python2.6.</p>
<p>I'm using cherrypy, mako, the standard library, and nothing else.</p>
<p>The resources can be blogs, howtos, books (online or offline), whatever.</p>
<p>Additional information:</p>
<p>The python mysql module, MySQLdb, is compatible with Python DB-API 2.0 . See <a href="http://sourceforge.net/projects/mysql-python" rel="nofollow noreferrer">http://sourceforge.net/projects/mysql-python</a>.</p>
|
<p>Python connectivity to DBs is accomplished (most of the time) through the DBI (Python Database API). The Python DBI has 2 versions, and their documentation is the place for you to start: <a href="http://www.python.org/dev/peps/pep-0248/" rel="nofollow noreferrer">v.1</a> and <a href="http://www.python.org/dev/peps/pep-0249/" rel="nofollow noreferrer">v.2</a>. You must check which version is supported by the MySQL connector and use the corresponding spec version.</p>
<p>For more details about Python and MySQL, you can find good articles on <a href="http://dev.mysql.com/usingmysql/python/" rel="nofollow noreferrer">Using MySQL With Python</a> and here is the article that walks you through most of the operations: <a href="http://www.kitebird.com/articles/pydbapi.html" rel="nofollow noreferrer">Writing MySQL Scripts with Python DB-API</a></p>
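<p>As a quick taste of what the DB-API looks like in practice, here is a minimal MySQLdb sketch (the connection parameters are placeholders):</p>
<pre><code>import MySQLdb

# Connect, run a trivial query, and clean up; swap in your own credentials
conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="test")
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print cur.fetchone()  # Python 2 print statement, matching the 2.6 target
cur.close()
conn.close()
</code></pre>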
<p>./alex</p>
|
python|mysql
| 3 |
1,903,895 | 42,024,212 |
Python translate function not working
|
<p>I have the following simple function, with which I try to remove any numeric characters from the file names inside a given folder. This is what I have so far:</p>
<pre><code>import os
def decode_message():
#this is stage one
file_list = os.listdir(r"C:\TestFolder");
#this is stage two
print(file_list);
os.chdir(r"C:\TestFolder")
saved_path = os.getcwd();
print("Current Working Directory : "+saved_path)
for file_name in file_list:
print("Old File Name : "+file_name);
os.rename(file_name,file_name.translate(None,"0123456789"))
decode_message()
</code></pre>
<p>This is working up to the point where I can list the file names as shown below</p>
<p><a href="https://i.stack.imgur.com/cSI6X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cSI6X.png" alt="enter image description here"></a></p>
<p>But once I use the translate option it gives the following error:</p>
<p><a href="https://i.stack.imgur.com/SCkuP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SCkuP.png" alt="enter image description here"></a></p>
<p>Can Anyone help?</p>
|
<p><strong>Solution 1: Update your Python version</strong>: You're on Python 2.4, which is about 12 years old by now. If you update to Python 2.7, your <code>translate</code> call would work. I don't have a Python 2.4 install available, nor can I find its documentation. Also: have a look at <a href="https://stackoverflow.com/questions/30743314/typeerror-expected-a-character-buffer-object">this question</a> and its answers; it is exactly the same question.</p>
<p><strong>Solution 2: Replace the numeric character with something else</strong></p>
<p>e.g. </p>
<pre><code>import re
os.rename(file_name, re.sub(r'\d+', '', file_name))
</code></pre>
<p>or </p>
<pre><code>file_name2 = "".join(ch for ch in file_name if not ch.isdigit())
os.rename(file_name, file_name2)
</code></pre>
|
python|translate
| 1 |
1,903,896 | 57,417,700 |
Subtracting Pandas Data Frame with another of different size
|
<p>I want to assign a new column ['minIndx'] to a DataFrame.</p>
<p>For each row of DataFrame df2, I compute the Manhattan distance to every row of DataFrame df; the index of the df row that is least distant from the df2 row is that row's minIndx.</p>
<h1>The below line of code works very fast:</h1>
<pre><code>#df.loc[2] is assumed to be one row of df1
k=df-df.loc[2] # Second Row
k.abs().sum(axis=1).idxmin()
# out put in few secounds
</code></pre>
<h1>But the code below runs forever</h1>
<pre><code>def find_minIndx(row):
k=df-df.row
return k.abs().sum(axis=1).idxmin()
df_2=df_2.head(1) # Testing For one Row
df_2['minIndx']=df_2.apply(find_minIndx)
</code></pre>
<p>Why is the second snippet taking so much time, and how do I fix it?</p>
|
<p>By default <code>apply</code> iterates over columns; to apply the function to each row you need <code>axis=1</code>: <code>df_2.apply(find_minIndx, axis=1)</code>. Note also that <code>df.row</code> inside the function should just be <code>row</code>, the Series that <code>apply</code> passes in.</p>
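<p>A sketch of the corrected version:</p>
<pre><code>def find_minIndx(row):
    k = df - row  # subtract the row Series that apply passes in, not df.row
    return k.abs().sum(axis=1).idxmin()

df_2['minIndx'] = df_2.apply(find_minIndx, axis=1)
</code></pre>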
|
python|pandas|dataframe
| 1 |
1,903,897 | 71,019,878 |
Django why getting MultiValueDictKeyError?
|
<p>here is my code:</p>
<pre><code> if request.method == "POST":
forms = GoogleAuthFroms(request.POST or None)
if forms.is_valid():
code = request.POST["auth_code"]
context = {
'forms':forms,
}
return render(request,'members/security.html',context)
</code></pre>
<p>This line of code <code>code = request.POST["auth_code"]</code> throws this error: <code>MultiValueDictKeyError at /security/ 'auth_code'</code></p>
|
<p>If the <code>auth_code</code> field is already part of the form, there's no need to read it from <code>request.POST</code> yourself. And if you do read it directly, use <code>request.POST.get("auth_code")</code> (which returns <code>None</code> instead of raising a <code>MultiValueDictKeyError</code>), and don't put that lookup right below the form-validation test:</p>
<pre><code>def some_view(request):
if request.method == "POST":
forms = GoogleAuthFroms(request.POST or None)
code = request.POST.get("auth_code")
if forms.is_valid():
forms.save()
            return redirect(...)  # Redirect to a URL after the form is validated and saved
else:
forms = GoogleAuthFroms()
context = {
'forms':forms,
}
return render(request,'members/security.html',context)
</code></pre>
|
python-3.x|django|django-views|django-forms
| 1 |
1,903,898 | 11,640,620 |
Is it possible to draw graphs on a given image using NetworkX?
|
<p>Is it possible to draw graphs on a given image (instead of on an empty figure) by using the python package NetworkX?</p>
|
<p>Perhaps you can try this but it requires matplotlib:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import networkx as nx
G = nx.cycle_graph(2)
pos = {0:[0,0], 1:[ 300, 300]}
plt.figure(1)
img=mpimg.imread('/home/stinkbug.png')
plt.imshow(img)
nx.draw(G,pos)
plt.savefig('/home/test.png')
</code></pre>
<p>I used the stink bug on this <a href="http://matplotlib.sourceforge.net/users/image_tutorial.html" rel="nofollow">page</a>.</p>
<p>Using networkx by itself might be a little tricky. Perhaps you could set the image you want as a node (say node 0) and then position the node at the origin (0,0). Finally, orient the other nodes of your graph on top of it. I haven't tried it myself but it's an idea that popped into my head.</p>
|
python|networkx
| 2 |
1,903,899 | 58,462,801 |
Convert frequency table to raw data in Pandas
|
<p>I have a sensor. For some reason, the sensor likes to record data like this:</p>
<pre><code>>df
obs count
-0.3 3
0.9 2
1.4 5
</code></pre>
<p>i.e. it first records observations and makes a count table out of them. What I would like to do is convert this df into a series of raw observations. For example, I would like to end up with: [-0.3,-0.3,-0.3,0.9,0.9,1.4,1.4 ....]</p>
<p><a href="https://stackoverflow.com/questions/35039574/generate-column-of-raw-data-based-on-frequency-table-in-excel">Similar question</a> asked for excel.</p>
|
<p>If your dataframe structure is like this one (or similar):</p>
<pre><code> obs count
0 -0.3 3
1 0.9 2
2 1.4 5
</code></pre>
<p>This is an option, using numpy.repeat:</p>
<pre><code>import numpy as np
import pandas as pd

# Repeat each observation by its count, then build the result frame
df2 = pd.DataFrame({'obs': np.repeat(df['obs'].values, df['count'])})
print(df2)
obs
0 -0.3
1 -0.3
2 -0.3
3 0.9
4 0.9
5 1.4
6 1.4
7 1.4
8 1.4
9 1.4
</code></pre>
|
python|pandas
| 1 |