| Unnamed: 0 (int64, 0–1.91M) | id (int64, 337–73.8M) | title (string, 10–150 chars) | question (string, 21–64.2k chars) | answer (string, 19–59.4k chars) | tags (string, 5–112 chars) | score (int64, -10–17.3k) |
|---|---|---|---|---|---|---|
1,907,000 | 44,661,336 |
Print several sentences with different colors
|
<p>I'm trying to print several sentences with different colors, but it won't work; I only get 2 colors, the normal blue and this red:</p>
<pre><code>import sys
from colorama import init, AnsiToWin32
stream = AnsiToWin32(sys.stderr).stream
print(">>> This is red.", file=stream)
</code></pre>
|
<p>As discussed in the comments, change your code to use these features;</p>
<pre><code>import os, colorama
from colorama import Fore,Style,Back
colorama.init()
print(Fore.RED + 'some red text')
print(Back.GREEN + 'and with a green background')
print(Style.BRIGHT + 'and in bright text')
print(Style.RESET_ALL)
print('back to normal now')
</code></pre>
<p>These are much easier to use and do the job. The available colours you can use for <code>Fore</code> or <code>Back</code> are:</p>
<ul>
<li>Red</li>
<li>Cyan</li>
<li>Green</li>
<li>Yellow</li>
<li>Black</li>
<li>Magenta</li>
<li>Blue</li>
<li>White</li>
</ul>
<p>These all need to be written in capitals.
And for <code>Style</code> you can use:</p>
<ul>
<li>Bright</li>
<li>Dim</li>
<li>Reset_all</li>
</ul>
<p>These will also need to be in capitals.</p>
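<p>A minimal sketch of the original goal of printing several sentences in different colours (the sentences here are made up for illustration):</p>
<pre><code># sketch: print several sentences, each in its own colour
import colorama
from colorama import Fore, Style

colorama.init()
for colour, sentence in [(Fore.RED, 'first sentence'),
                         (Fore.GREEN, 'second sentence'),
                         (Fore.CYAN, 'third sentence')]:
    print(colour + sentence + Style.RESET_ALL)
</code></pre>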
<p>Have fun using colorama :)</p>
|
python|colorama
| 0 |
1,907,001 | 23,655,963 |
get NACE codes with regex
|
<p>I want to extract the <a href="http://ec.europa.eu/competition/mergers/cases/index/nace_all.html" rel="nofollow">NACE codes</a> from a webpage with a Regex. I got this:</p>
<pre><code>(?P<first>[A-U])(?P<second>\d{1,2})(?P<delimiter>[- /.])(?P<third>\d{1,2})(?P<secdelimiter>[- /.])(?P<forth>\d{1,2})
</code></pre>
<p>This will get me NACE codes like A1.1.1, but not A1.1 or A1. How can I make the expression take these as well?</p>
|
<p>Make the additional capturing groups optional using <code>?</code>, for example:</p>
<pre><code>(?P<n1>\w\d{1,2})(?P<d1>[- /.])?(?P<n2>\d{1,2})?(?P<d2>[- /.])?(?P<n3>\d{1,2})?
# ^ means "zero or one times"
</code></pre>
<p>See demo on <a href="http://regex101.com/r/qU6bA1" rel="nofollow">regex101</a>.</p>
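<p>A quick check of the pattern against the three example codes from the question, using Python's <code>re</code> module (a sketch, not part of the original answer):</p>
<pre><code>import re

# the answer's pattern with the optional groups
pattern = r'(?P<n1>\w\d{1,2})(?P<d1>[- /.])?(?P<n2>\d{1,2})?(?P<d2>[- /.])?(?P<n3>\d{1,2})?'
for s in ['A1', 'A1.1', 'A1.1.1']:
    print(s, '->', bool(re.fullmatch(pattern, s)))
# A1 -> True
# A1.1 -> True
# A1.1.1 -> True
</code></pre>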
|
python|regex
| 0 |
1,907,002 | 36,087,572 |
Emacs: check if filepath contains directory name
|
<p>I try to write code in Emacs. I have many different Python projects, all of which have space indentation, except one. I want to check whether the open file's path contains this project's directory name and set tabs instead of spaces, only for files in this directory.</p>
<p>I try </p>
<pre><code>(when (string-match "project_name" buffer-file-name)
do sth)
</code></pre>
<p>But it doesn't work.</p>
<p>Also, if you explain how to set tabs for Python and JavaScript files it will help a lot)</p>
<p><strong>UPD</strong></p>
<p>My work code</p>
<pre><code>(add-hook 'python-mode-hook
          (lambda()
            (setq tab-width 4)
            (setq python-indent 4)
            (if (string-match-p "project_name" (or (buffer-file-name) ""))
                (setq indent-tabs-mode t)
              (setq indent-tabs-mode nil))))
</code></pre>
|
<p>Simple answer:</p>
<pre><code>(when (string-prefix-p "/home/user/project-path" (buffer-file-name))
  ;; do sth
  )
</code></pre>
<p>You may also need to use <code>expand-file-name</code> to get a full path to match <code>buffer-file-name</code> so that you can handle something like <code>"~/project-path"</code>.</p>
<pre><code>(expand-file-name (buffer-file-name))
</code></pre>
<p>You may also need to handle a <code>nil</code> result from <code>buffer-file-name</code> with</p>
<pre><code>(or (buffer-file-name) "")
</code></pre>
|
python|emacs|elisp
| 5 |
1,907,003 | 15,347,602 |
Python packaging distribute post-install step
|
<p>I am packaging a project that uses nltk. When you install nltk with pip, you get core functionality, but not all the modules that come with it. To get those modules, you call nltk's download method.</p>
<p>I tried the following, but it doesn't work, saying <code>ImportError: No module named nltk</code>. I assume this is happening because import nltk occurs before nltk is installed by the call to <code>setup(...)</code>.</p>
<p>Is there a clean way of having a post-install step with <a href="http://pythonhosted.org/distribute/" rel="nofollow">distribute</a> that executes one of the following?</p>
<pre><code>$ python -m nltk.downloader punkt
>>> import nltk; nltk.download('punkt')
</code></pre>
<p>Here's my failed attempt at <code>setup.py</code>:</p>
<pre><code>class my_install(install):
    def run(self):
        install.run(self)
        import nltk
        nltk.download('punkt')

setup(
    ...
    install_requires=[..., 'nltk==2.0.4'],
    cmdclass={'install': my_install},
)
</code></pre>
|
<p>I used the command-line installation method and was successful, like this...</p>
<pre><code>import subprocess
from setuptools.command.install import install

class my_install(install):
    def run(self):
        install.run(self)
        # fetch the 'punkt' data once the package itself is installed
        cmd = ["python", "-m", "nltk.downloader", "punkt"]
        with subprocess.Popen(cmd, stdout=subprocess.PIPE) as proc:
            print(proc.stdout.read())
</code></pre>
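<p>For completeness, the custom command would then be wired into <code>setup()</code> the same way as in the question's own attempt:</p>
<pre><code>setup(
    ...
    install_requires=[..., 'nltk==2.0.4'],
    cmdclass={'install': my_install},
)
</code></pre>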
|
python|packaging|setuptools|distutils|distribute
| 0 |
1,907,004 | 29,543,012 |
Variable number of decimal digits in string.format
|
<p>I want to transform this:</p>
<pre><code>my_number = 3.1415928
if precision == 2:
    my_string = "{0:.2f}".format(my_number)
elif precision == 3:
    my_string = "{0:.3f}".format(my_number)
elif precision == 4:
    my_string = "{0:.4f}".format(my_number)
elif precision == 5:
    my_string = "{0:.5f}".format(my_number)
</code></pre>
<p>Into something like this:</p>
<pre><code>my_number = 3.1415928
my_string = "{SOMETHING_DEPENDING_ON_PRECISION_HERE}".format(my_number)
</code></pre>
<p>Is it possible and how to do it?</p>
|
<p>In addition to Vaultah's (excellent) suggestion, you can also do it with the <code>*</code> ("splat") precision specifier in old-style formatting:</p>
<pre><code>precision = 2
my_num = 3.1415928
my_string = "%0.*f"%(precision,my_num)
</code></pre>
<p>Vaultah's (now deleted) suggestion:</p>
<pre><code>"{0:0.{prec}f}".format(my_num,prec=precision)
</code></pre>
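<p>On Python 3.6+ the same thing can also be written as an f-string with a nested replacement field (not part of the original answers, just a modern equivalent):</p>
<pre><code>precision = 2
my_num = 3.1415928
my_string = f"{my_num:.{precision}f}"  # '3.14'
</code></pre>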
|
python|python-3.x
| 4 |
1,907,005 | 46,304,331 |
Change dataframe columns if column name exist in other dataframe, Python 3.6
|
<p>I have a main data frame (DF) with below columns & data</p>
<pre><code>C D E F G H I J K L QC
254 95 0 34543 43 32 4 4 4 4 Q23
255 59 1 43 tre r5 54 567 564 Q23
256 50 7 65 76557 65 65 5 5 Q23
</code></pre>
<p>And, mapping dataframe(MDF) with below columns</p>
<pre><code>QC Res1 Res2 Res3 Res4 Res5 Res6 Res7 Res8 Res9 Res10
Q23 US CH JP CE OV NON DK TOT N KK
Q24 US ZZ JP ME KP NON DK TOT E LK
</code></pre>
<p>Here, column QC in both dataframe is for mapping.</p>
<p>I want to replace DF's columns by mapping against MDF, where <code>MDF['QC']</code> matches <code>DF['QC']</code> (Q23 here).</p>
<p>Order is the same in both dataframes. I have 500 dataframes in total, and I want to update every dataframe's columns with the new columns present in the other dataframe.</p>
<p>Final Expected dataframe: DF</p>
<pre><code>US CH JP CE OV NON DK TOT N KK QC
254 95 0 34543 43 32 4 4 4 4 Q23
255 59 1 43 tre r5 54 567 564 Q23
256 50 7 65 76557 65 65 5 5 Q23
</code></pre>
<p>This is a really challenging one.</p>
|
<p>You can use <code>np.append</code> by selecting the row that contains the <code>QC</code> value, i.e.</p>
<p>If you have dataframes like </p>
<pre>
print(df1)
C D E F G H I J K L QC
0 254 95 0 34543 43 32.0 4 4 4 4 Q23
1 255 59 1 43 tre NaN r5 54 567 564 Q23
2 256 50 7 65 NaN 76557.0 65 65 5 5 Q23
</pre>
<pre>
print(df2)
C D E F G H I J K L QC
0 254 95 0 34543 43 32.0 4 4 4 4 Q24
1 255 59 1 43 tre NaN r5 54 567 564 Q24
2 256 50 7 65 NaN 76557.0 65 65 5 5 Q24
</pre>
<p>Then a for loop to assign the columns would help you, i.e.</p>
<pre><code>for i in [df1, df2]:
    q = i['QC'].unique()[0]
    i.columns = np.append(mdf[mdf['QC'] == q].values[0][1:], ['QC'])

print([df1, df2])
</code></pre>
<pre>
[ US CH JP CE OV NON DK TOT N KK QC
0 254 95 0 34543 43 32.0 4 4 4 4 Q23
1 255 59 1 43 tre NaN r5 54 567 564 Q23
2 256 50 7 65 NaN 76557.0 65 65 5 5 Q23,
US ZZ JP ME KP NON DK TOT E LK QC
0 254 95 0 34543 43 32.0 4 4 4 4 Q24
1 255 59 1 43 tre NaN r5 54 567 564 Q24
2 256 50 7 65 NaN 76557.0 65 65 5 5 Q24]
</pre>
|
python|python-3.x|pandas|dataframe
| 1 |
1,907,006 | 62,718,967 |
Ubuntu: _tkinter.TclError: no display name and no $DISPLAY environment variable
|
<p>So I'm trying to run a python GUI using tkinter from Ubuntu command line, on Windows 10, and get the following error:</p>
<pre><code>brandon@DESKTOP-V5LTF5T:~$ python3 MainApp.py
Traceback (most recent call last):
File "MainApp.py", line 14, in <module>
root = tk.Tk()
File "/usr/lib/python3.6/tkinter/__init__.py", line 2023, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
</code></pre>
<p>If you are using the <code>matplotlib</code> library then use this question: <a href="https://stackoverflow.com/questions/37604289/tkinter-tclerror-no-display-name-and-no-display-environment-variable">_tkinter.TclError: no display name and no $DISPLAY environment variable</a></p>
<p>However, this question is for people using the <code>tkinter</code> library only</p>
|
<p>You cannot run GUIs from inside a bash terminal unless you install external software. The following tutorial is how I found out how to solve this problem: <a href="http://pjdecarlo.com/2016/06/xming-bash-on-ubuntu-on-windows-x11-window-system-running-from-windows-10-subsystem-for-linux.html" rel="nofollow noreferrer">http://pjdecarlo.com/2016/06/xming-bash-on-ubuntu-on-windows-x11-window-system-running-from-windows-10-subsystem-for-linux.html</a></p>
<ol>
<li><p>Download the Xming X server, a free display server for Windows operating systems that simply allows you to display GUIs and other fancy things from terminals. This is where I found it: <a href="https://xming.en.softonic.com/download" rel="nofollow noreferrer">https://xming.en.softonic.com/download</a>. Then run the server and it should appear in the lower right-hand corner of the taskbar.</p>
</li>
<li><p>Run the following command from bash/ubuntu:
<code>brandon@DESKTOP-V5LTF5T:~$ export DISPLAY=localhost:0.0</code>. This sets the DISPLAY variable to the local host of the newly installed Xming X server.</p>
</li>
<li><p>Now run your GUI! <code>brandon@DESKTOP-V5LTF5T:~$ python3 MainApp.py</code></p>
</li>
</ol>
|
python|ubuntu|tkinter
| 4 |
1,907,007 | 70,186,959 |
Separate Flask route GET method from POST
|
<p>I have a following route for file upload.</p>
<pre><code>@app.route("/upload", methods=["GET", "POST"])
def upload_file():
form = FileUploadForm()
if form.validate_on_submit():
file = form.document.data
file_name = secure_filename(file.filename)
save_path = get_user_uploads_folder(current_user) / file_name
return redirect(url_for("upload_file"))
file.save(save_path)
return redirect(url_for("list_user_files"))
return render_template("upload_file.html", form=form)
</code></pre>
<p>How can I separate this route so that I can have the <em>GET</em> and <em>POST</em> methods in different functions with a common route, like so:</p>
<pre><code>@app.route("/upload", methods=["GET"])
def upload_file():
return render_template(...)
@app.route("/upload", methods=["POST"])
def upload_file():
form = FileUploadForm()
...
return redirect(...)
</code></pre>
|
<p>This is discussed in the Flask <a href="https://flask.palletsprojects.com/en/1.1.x/quickstart/#http-methods" rel="nofollow noreferrer">docs</a>. You can use the following pattern:</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/login', methods=['GET', 'POST'])
def login():
if request.method == 'POST':
return do_the_login()
else:
return show_the_login_form()
</code></pre>
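<p>If you really want two separate functions rather than one function that branches, Flask's class-based <code>MethodView</code> maps each HTTP method to its own method. A minimal sketch adapted to the question's upload route (the form class and helper functions are the question's own):</p>
<pre><code>from flask.views import MethodView

class UploadView(MethodView):
    def get(self):
        return render_template("upload_file.html", form=FileUploadForm())

    def post(self):
        form = FileUploadForm()
        # ... validate and save the file here ...
        return redirect(url_for("list_user_files"))

app.add_url_rule("/upload", view_func=UploadView.as_view("upload_file"))
</code></pre>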
|
python|flask|wtforms
| 1 |
1,907,008 | 53,634,228 |
virtualenv not activated from crontab on Linux Centos
|
<p>I'm facing a weird issue.</p>
<p>I need to write a crontab which will invoke a python script, but I need to activate a virtualenv first. This is the crontab I wrote:</p>
<pre><code>SHELL = /bin/bash
MAILTO="mail@mail.com"
*/15 * * * * source /srv/python/virtualenvs/proj/bin/activate && /srv/python/virtualenvs/proj/bin/python3.6 /srv/python/proj/Scripts/scheduling.py
</code></pre>
<p>The script <code>scheduling.py</code> try to import data from an oracle DB using <code>Cx_Oracle</code>. Crontab gives me an error:</p>
<pre><code>[2018-12-05 14:45:02] ERROR - DB connection error: DPI-1047: 64-bit Oracle Client library cannot be loaded: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/odpi/doc/installation.html
</code></pre>
<p>So I obviously thought about an error related to Oracle and the <code>cx_Oracle</code> library.
The weird thing is that if I enter the Linux shell and do</p>
<pre><code>source /srv/python/virtualenvs/proj/bin/activate
</code></pre>
<p>Then type <code>python</code> to open a python shell then:</p>
<pre><code>import cx_Oracle
import pandas as pd

con = cx_Oracle.connect('parameter_connection')
query = 'select * from tab1 fetch first 5 rows only'
pd.read_sql(query, con=con)
</code></pre>
<p>it works and gives me query result. I suspect that in <code>crontab</code> the virtualenv is not activated properly.</p>
<p>Any ideas? Thanks </p>
|
<p><strong>in crontab</strong></p>
<pre><code>*/15 * * * * /home/user/script.sh > /dev/null 2>&1
</code></pre>
<p><strong>in script.sh</strong></p>
<pre><code>#!/bin/bash
# activate the virtualenv, then run the script with its interpreter
source /srv/python/virtualenvs/proj/bin/activate
/srv/python/virtualenvs/proj/bin/python3.6 /srv/python/proj/Scripts/scheduling.py
</code></pre>
|
python|linux|cron
| 2 |
1,907,009 | 33,224,944 |
Generate a list of 6 random numbers between 1 and 6 in python
|
<p>So this is for a practice problem for one of my classes. I want to generate a list of 6 random numbers between 1 and 6. For example, startTheGame() should return [1,4,2,5,4,6].</p>
<p>I think I am close to getting it but I'm just not sure how to code it so that all 6 numbers are appended to the list and then returned. Any help is appreciated.</p>
<pre><code>import random

def startTheGame():
    counter = 0
    myList = []
    while (counter) < 6:
        randomNumber = random.randint(1,6)
        myList.append(randomNumber)
        counter = counter + 1
        if (counter)>=6:
            pass
        else:
            return myList
</code></pre>
|
<p>Use a <a href="https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow noreferrer">list comprehension</a>:</p>
<pre><code>import random

def startTheGame():
    mylist = [random.randint(1, 6) for _ in range(6)]
    return mylist
</code></pre>
<p>List comprehensions are among the most powerful tools offered by Python. They are considered very pythonic and they make code very expressive.</p>
<p>Consider the following code:</p>
<pre><code>counter = 0
myList = []
while (counter) < 6:
    randomNumber = random.randint(1, 6)
    myList.append(randomNumber)
    counter = counter + 1
    if (counter)>=6:
        pass
    else:
        return
</code></pre>
<p>We will refactor this code in several steps to better illustrate what list comprehensions do. The first thing that we are going to refactor is the while loop with an initialization and an abort criterion. This can be done much more concisely with a for ... in expression:</p>
<pre><code>myList = []
for counter in range(6):
    randomNumber = random.randint(1, 6)
    myList.append(randomNumber)
</code></pre>
<p>And now the step to make this piece of code into a list comprehension: Move the for loop inside mylist. This eliminates the appending step and the assignment:</p>
<pre><code>[random.randint(1, 6) for _ in range(6)]
</code></pre>
<p>The <code>_</code> is a variable name just like any other, but it is convention in Python to use <code>_</code> for variables that are not used. Think of it as a temp variable.</p>
|
python|random|numbers
| 13 |
1,907,010 | 33,125,664 |
transposing dataframe pandas
|
<p>I have the following file format: </p>
<pre><code>SA BTSA01_U01 0 0 0 -9 G G T T
SA BTSA01_U02 0 0 0 -9 G G T T
</code></pre>
<p>want to transpose it using pandas, following is the code I tried:</p>
<pre><code>import pandas as pd
from pandas import DataFrame

def transpose(file1, file2):
    source = pd.read_csv(file1, sep=None, engine='python')
    dest = source.transpose()
    dest.to_csv(file2)
</code></pre>
<p>But it did not give me the desired output; the following is the output: </p>
<pre><code>0
SA SA
BTSA01_U01 BTSA01_U02
0 0
0.1 0
0.2 0
-9 -9
G G
G.1 G
T T
T.1 T
</code></pre>
<p>I tried using some of the options like <code>header=False</code> and <code>index=False</code> with <code>dest.to_csv</code>, but none of them works. I am also wondering about the addition of "." and "1" to some of the values in the first column.</p>
|
<p>You didn't specify <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html#pandas.read_csv" rel="nofollow"><code>header=None</code></a> so your first line is being interpreted as column names but this will generate duplicate names which isn't allowed so you get <code>.1</code> appended.</p>
<p>So you need:</p>
<pre><code>source=pd.read_csv(file1,sep=None,engine='python', header=None)
</code></pre>
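<p>A minimal sketch of the corrected function with that change applied (plus <code>header=False, index=False</code> on output, assuming you don't want pandas' own labels written to the file):</p>
<pre><code>import pandas as pd

def transpose(file1, file2):
    # header=None stops the first data row being used as column names
    source = pd.read_csv(file1, sep=None, engine='python', header=None)
    source.transpose().to_csv(file2, header=False, index=False)
</code></pre>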
|
pandas|transpose
| 1 |
1,907,011 | 13,115,988 |
C++ Compiling with Python.h Undefined Symbols
|
<p>So, I've been trying to start using Python.h for a little project I want to work on that seems pretty /simple/. But before I start I want to try to learn how to use Python.h.
So I found this little example online.</p>
<pre><code>#include "Python/Python.h"
int main(int argc, char** argv)
{
Py_Initialize();
PyRun_SimpleString("print 'Test'");
PyRun_SimpleString("print str(3 + 5)");
Py_Exit(0);
}
</code></pre>
<p>Seems pretty straightforward. When I first used</p>
<pre><code>gcc test.cpp
</code></pre>
<p>to compile, I got some undefined symbols. I quickly found out I should use</p>
<pre><code>-lpython2.7
</code></pre>
<p>then I found out I could also use</p>
<pre><code>-L/Library/Frameworks/Python.framework/Versions/2.7/lib/
</code></pre>
<p>that didn't work (I made sure that /Library/Frameworks/Python/Versions/2.7/lib/ existed)
I'm stuck, what do I do?
I get</p>
<pre><code>Undefined symbols:
"_Py_Initialize", referenced from:
_main in ccoUOSlc.o
"_PyRun_SimpleStringFlags", referenced from:
_main in ccoUOSlc.o
_main in ccoUOSlc.o
"___gxx_personality_v0", referenced from:
_main in ccoUOSlc.o
CIE in ccoUOSlc.o
"_Py_Exit", referenced from:
_main in ccoUOSlc.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
</code></pre>
<p>EDIT:
I just tried using the -Framework argument, and tried adding the -l python2.7 argument after the -L, and I now get</p>
<pre><code>Undefined symbols:
"___gxx_personality_v0", referenced from:
_main in ccfvtJ4j.o
CIE in ccfvtJ4j.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
</code></pre>
<p>Now what?</p>
|
<p>If you are using an Python framework installation on OS X as it appears you are based on the paths, you can use the <code>-framework</code> argument to the Apple compiler drivers:</p>
<pre><code>cc test.cpp -framework Python
</code></pre>
<p>Alternatively, you can explicitly specify the directory path and library name:</p>
<pre><code>cc test.cpp -L /Library/Frameworks/Python.framework/Versions/2.7/lib/ -l python2.7
</code></pre>
<p>Update: With the configuration you report in the comments (<code>Xcode 3.2.6</code>, <code>gcc-4.2</code>), it appears you need to explicitly invoke the <code>c++</code> variant of <code>gcc</code>. Either:</p>
<pre><code>g++ test.cpp -framework Python
</code></pre>
<p>or</p>
<pre><code>c++ test.cpp -framework Python
</code></pre>
<p>should work.</p>
|
c++|python|gcc|python-2.7
| 4 |
1,907,012 | 12,815,286 |
How to cycle through items in an list and have each item get fed into a function
|
<blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://stackoverflow.com/questions/493367/python-for-each-list-element-apply-a-function-across-the-list">Python: For each list element apply a function across the list</a> </p>
</blockquote>
<p>for example, let's say I have an array or list</p>
<pre><code>myList = [a,b,c,d]
</code></pre>
<p>and I have a function that generates a random number.</p>
<p>How do I go through the list and have each item in that list receive the random number generated by the
function and have it added to the item?</p>
<p>So, say 'a' is the 1st in the list: 'a' goes into the function where a random number (let's say 5) is generated and added to 'a'; the result should be <code>[a+5, b+.......]</code>.</p>
|
<p>You use a <a href="http://docs.python.org/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a>:</p>
<pre><code>[func(elem) for elem in lst]
</code></pre>
<p>For your specific example, you can use an expression that sums the values:</p>
<pre><code>[elem + func() for elem in myList]
</code></pre>
<p>where <code>func()</code> returns your random number.</p>
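<p>A concrete sketch with made-up numeric values standing in for <code>a, b, c, d</code>:</p>
<pre><code>import random

myList = [10, 20, 30, 40]  # stand-ins for a, b, c, d
result = [elem + random.randint(1, 6) for elem in myList]
print(result)  # e.g. [15, 22, 36, 41]
</code></pre>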
|
python
| 4 |
1,907,013 | 12,696,291 |
Tornado: Can I run code after calling self.finish() in an asynchronous RequestHandler?
|
<p>I'm using Tornado. I have a bunch of asynchronous request handlers. Most of them do their work asynchronously, and then report the result of that work back to the user. But I have one handler whose job it is to simply tell the user that their request is going to be processed at some point in the future. I finish the HTTP connection and then do more work. Here's a trivialized example:</p>
<pre><code>class AsyncHandler(tornado.web.RequestHandler):

    @tornado.web.asynchronous
    def get(self, *args, **kwargs):
        # first just tell the user to go away
        self.write("Your request is being processed.")
        self.finish()
        # now do work
        ...
</code></pre>
<p>My question is: is this a legitimate use of Tornado? Will the code after the self.finish() run reliably? I've never had a problem with it before, but now I'm seeing a problem with it in one of my development environments (not all of them). There are a number of work-arounds here that I've already identified, but I want to make sure I'm not missing something fundamental to the request-lifecycle in Tornado. There doesn't SEEM to be a reason why I wouldn't be able to run code after calling self.finish(), but maybe I'm wrong.</p>
<p>Thanks!</p>
|
<p><strong>Yes, you can.</strong> </p>
<p>You have to define the <code>on_finish</code> method of your <code>RequestHandler</code>. This function runs after the request has finished and the response has been sent to the client.</p>
<blockquote>
<p><a href="http://www.tornadoweb.org/en/stable/web.html#tornado.web.RequestHandler.on_finish" rel="noreferrer"><code>RequestHandler.on_finish()</code></a></p>
<p>Called after the end of a request.</p>
<p>Override this method to perform cleanup, logging, etc. This method is
a counterpart to prepare. on_finish may not produce any output, as it
is called after the response has been sent to the client.</p>
</blockquote>
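<p>A minimal sketch of the question's handler reworked to use it (the deferred work is only indicated by a comment):</p>
<pre><code>class AsyncHandler(tornado.web.RequestHandler):

    def get(self, *args, **kwargs):
        self.write("Your request is being processed.")
        self.finish()

    def on_finish(self):
        # runs after the response has been sent;
        # do the extra work here, but produce no output
        pass
</code></pre>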
|
python|tornado
| 31 |
1,907,014 | 41,150,246 |
Spark BigQuery Connector vs Python BigQuery Library
|
<p>I'm currently working on a recommender system using pyspark and ipython-notebook. I want to get recommendations from data stored in BigQuery. There are two options: the Spark BQ connector and the Python BQ library. </p>
<p>What are the pros and cons of these two tools?</p>
|
<p>The Python BQ library is a standard way to interact with BQ from Python, and so it will include the full API capabilities of BigQuery. The Spark BQ connector you mention is the <a href="https://cloud.google.com/hadoop/bigquery-connector" rel="nofollow noreferrer">Hadoop Connector</a> - a Java Hadoop library that will allow you to read/write from BigQuery using abstracted Hadoop classes. This will more closely resemble how you interact with native Hadoop inputs and outputs.</p>
<p>You can find example usage of the Hadoop Connector <a href="https://cloud.google.com/hadoop/examples/bigquery-connector-spark-example" rel="nofollow noreferrer">here</a>.</p>
|
pyspark|google-bigquery|ipython-notebook
| 2 |
1,907,015 | 38,430,844 |
Define the marker and color setting by the dataset label
|
<p>I think my target was simple, but I haven't found my hidden mistake yet. </p>
<p>I was learning PLA (Perceptron Linear Algorithm) and tried to implement it in Python. </p>
<p>The algorithm itself has been worked out. Then, I wanted to plot the adjustment process of the algorithm. </p>
<h3>The dataset</h3>
<pre><code>dataset = np.array([
    ((1, -0.4, 0.3), -1),
    ((1, -0.3, -0.1), -1),
    ((1, -0.2, 0.4), -1),
    ((1, -0.1, 0.1), -1),
    ((1, 0.9, -0.5), 1),
    ((1, 0.7, -0.9), 1),
    ((1, 0.8, 0.2), 1),
    ((1, 0.2, -0.6), 1)])
</code></pre>
<blockquote>
<h3>I want to plot the scatter point with different style by the label("-1" or "1" in this example)</h3>
</blockquote>
<p>So, here is what I coded: </p>
<pre><code>def marker_choice(s):
    if s == 1:
        marker = "o"
    else:
        marker = "x"
    return marker

def color_choice(s):
    if s == 1:
        color = "r"
    else:
        color = "b"
    return color

ps = [v[0] for v in dataset]
label = [v[1] for v in dataset]

fig = plt.figure()
ax = fig.add_subplot()
ax.scatter([v[1] for v in ps], [v[2] for v in ps], s=80, \
           c=color_choice(v for v in np.array(label)),
           marker=marker_choice(v for v in np.array(label)),
           )
</code></pre>
<p><a href="https://i.stack.imgur.com/t34u3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t34u3.png" alt="enter image description here"></a></p>
<h3>Target</h3>
<p><a href="https://i.stack.imgur.com/5LZBc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5LZBc.png" alt="enter image description here"></a></p>
|
<p>The problem is that your one-line loops don't produce the required output. If you just run them to test their output, the result is color: <code>'b'</code> and marker: <code>'x'</code>, which explains why your output is the way it is. </p>
<p>The solution below does not use one line loops but does produce the required graph. One thing to note is that the markers on the output from the code below are the wrong way around to that in your desired output in the question. This is simply a case of altering the <code>marker_choice(s)</code> function and changing <code>if s==1</code> to <code>if s == -1</code>.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np

dataset = np.array([
    ((1, -0.4, 0.3), -1),
    ((1, -0.3, -0.1), -1),
    ((1, -0.2, 0.4), -1),
    ((1, -0.1, 0.1), -1),
    ((1, 0.9, -0.5), 1),
    ((1, 0.7, -0.9), 1),
    ((1, 0.8, 0.2), 1),
    ((1, 0.2, -0.6), 1)])

def marker_choice(s):
    if s == 1:  # change to -1 here to get the markers the other way around
        marker = "o"
    else:
        marker = "x"
    return marker

def color_choice(s):
    if s == 1:
        color = "r"
    else:
        color = "b"
    return color

ps = [v[0] for v in dataset]
label = [v[1] for v in dataset]

str_label = []
str_marker = []
for i in (label):
    a = color_choice(label[i])
    str_label.append(a)
    b = marker_choice(label[i])
    str_marker.append(b)

fig, ax = plt.subplots()
for i in range(len(ps)):
    data = ps[i]
    data_x = data[1]
    data_y = data[2]
    ax.scatter(data_x, data_y, s=80, color=str_label[i], marker=str_marker[i])

plt.show()
</code></pre>
<p>This produces the output below:</p>
<p><a href="https://i.stack.imgur.com/Na8kD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Na8kD.png" alt="enter image description here"></a></p>
<p><em>Note:</em> I have not tested how the performance of this code compares to your original code. </p>
|
python|matplotlib
| 1 |
1,907,016 | 28,901,444 |
Pandas to_datetime - setting the date when only times (HH:MM) are input
|
<p>I'm new(ish) to python, pandas in particular, and cannot work out how to correctly produce datetimes with <code>pandas.to_datetime</code>, when only hours and minutes are provided.</p>
<p>Specifically, I am working with a series consisting of strings such as "08:40 AM", "09:15 AM" ect, stored in a dataframe as <code>df.hhmm</code>.</p>
<p><code>times = pandas.to_datetime(df.hhmm)</code> completes, but associates the times with today's date. e.g. <code>2015-03-06 09:19:00</code></p>
<p>I wish to be able to set the date that these times are associated with e.g. <code>2015-04-15 09:19:00</code></p>
<p>I have a solution that works, but is ugly, e.g.</p>
<pre><code>for t in times:
    t.replace(year=2015, month=4, day=15)
</code></pre>
<p>I'm sure there is a much better way to do this, any tips?</p>
<p>Thanks,
Luke</p>
|
<p>just create a string with the full date/time you want and parse that.</p>
<p>so, assuming that <code>hhmm</code> is a string, do:</p>
<pre><code>pd.to_datetime('20150415 ' + df.hhmm)
</code></pre>
|
python|datetime|pandas
| 0 |
1,907,017 | 29,060,652 |
Identify image encoding and convert it to a real image in python
|
<p>Currently I have the following problem: I get an array of image data (a large vector). I neither know the size of the image (only the formula for the size <code>(2^n*2^m)</code>) nor the encoding of the image (jpeg, j2k, lossy 12-bit jpeg or similar). One array whose encoding I know looks like:</p>
<pre><code>[-1000, -888, -884, -883, -884, -886,...-850, -852, -854, -854]
</code></pre>
<p>Here I can simply reshape it into the form I want to have (in this case it's the square root of the length) and afterwards convert it into an image I can view with </p>
<pre><code>pixel_values = numpy.asarray(pixel_values).reshape(512, 512)
pl2 = pylab.imshow(pixel_values, cmap=pylab.cm.bone)
</code></pre>
<p>But now I have another array:</p>
<pre><code>[65534, 57344, 4, 0, 0, 0, 65534, 57344, 7652, 1, 20479, 20991, 10496, 0,...35286, 23076, 34407, 36383, 56252, 65370, 217]
</code></pre>
<p>Here I cannot use the square root or something similar (I know that the images are always like <code>(2^n*2^m)</code>), and I don't know how I can transform this data into a real image I can view. How can I find out the encoding of this data and the size in Python?</p>
|
<ol>
<li><p>To determine the size of the image, I don't think there is a better approach than simple trial and error. First we determine the image sizes compatible with the expression <code>(2^n, 2^m)</code>, </p>
<pre><code>import numpy as np

vect_len = len(pixel_values)
min_size = 256  # e.g. minimal size acceptable for one of the dimensions

npm = np.log2(vect_len)  # this is n+m
if not npm % 1:
    # n+m is an integer
    npm = int(npm)
    for n in range(1, npm):
        p = npm - n
        if 2**n < min_size or 2**p < min_size:
            continue
        print(2**n, 2**p)
# (256, 2048)
# (512, 1024)
# (1024, 512)
# (2048, 256)
</code></pre>
<p>Then, for each of the possible image sizes, we reshape the <code>pixel_values</code> array and plot the result until it looks right. If it is a colour image, there would also be a third dimension of size 3 for RGB channels.</p></li>
<li><p>If you can visualise your image simply by reshaping the input vector, that means that it contains directly the values for each pixel and we don't care about the encoding of the image (it was already decoded). Indeed, say, jpeg would store the discrete cosine transform (DCT) coefficients in the .jpeg file, j2k stores the wavelet transform coefficients, etc. This is not something you want to get into, and the approach is to use the appropriate library for each format. </p></li>
</ol>
|
python|image
| 0 |
1,907,018 | 58,791,834 |
How to fix list list index out of range
|
<p>I am having problems with this line. I am not really sure why it gives me list index out of range. I've tried a couple of solutions; so far none of them worked.</p>
<pre class="lang-py prettyprint-override"><code>def endGame(points):
scoreboard = []
with open("scoreboard.csv", "a") as scoreboardFile:
scoreboardWriter = csv.writer(scoreboardFile)
scoreboardWriter.writerow(name, points)
scoreboardFile = open("scoreboard.csv", "rt")
scoreboardReader = csv.reader(scoreboardFile)
for i in scoreboardReader:
scoreboard.append([i[0], int(i[1])])
</code></pre>
<blockquote>
<pre><code>Traceback (most recent call last):
  File "E:\Nea\NEA-PROJECT.py", line 127, in <module>
    endGame(points)
  File "E:\Nea\NEA-PROJECT.py", line 25, in endGame
    scoreboard.append([i[0], int(i[1])])
IndexError: list index out of range
</code></pre>
</blockquote>
<p>This is supposed to write the name of the user and the score they achieved. The thing that confuses me is that it works: the name and the score are saved to the file, but it still gives me list index out of range.</p>
|
<p>You can just append the whole row at once in the for loop. Otherwise the line you had, <code>[i[0], int(i[1])]</code>, will fail when it finds a row in the <code>scoreboard.csv</code> that is empty (or has only one field), and then it tries to index into that.</p>
<p>Also you need to pass an iterable (like a list) to the <code>writerow</code> method, since as you can see from <a href="https://docs.python.org/3/library/csv.html#csv.csvwriter.writerow" rel="nofollow noreferrer">the docs</a> it only takes one argument.</p>
<pre><code>def endGame(points):
    scoreboard = []
    with open("scoreboard.csv", "a") as scoreboardFile:
        scoreboardWriter = csv.writer(scoreboardFile)
        scoreboardWriter.writerow([name, points])  # CHANGED
    scoreboardFile = open("scoreboard.csv", "rt")
    scoreboardReader = csv.reader(scoreboardFile)
    for i in scoreboardReader:
        scoreboard.append(i)  # CHANGED
</code></pre>
|
python
| 1 |
1,907,019 | 52,292,541 |
Tensorflow - Is there a way to separate tf.data.Dataset by label?
|
<p>I do know that I can separate my data by their label before I load them into my network. Let's say there are 3 classes, with labels 0,1,2. I can do it by:</p>
<pre><code>dataset1 = tf.data.TextLineDataset(train_csv_file1).map(_parse_csv_train)
dataset2 = tf.data.TextLineDataset(train_csv_file2).map(_parse_csv_train)
dataset3 = tf.data.TextLineDataset(train_csv_file3).map(_parse_csv_train)
</code></pre>
<p>I am just curious about the following:</p>
<p>Suppose we have the dataset: </p>
<pre><code>dataset = tf.data.TextLineDataset(train_csv_file).map(_parse_csv_train)
</code></pre>
<p>which contains all the data from the 3 classes,</p>
<p><strong>is there a way to call some function like</strong> <code>dataset.selectDataByLabel(label=="2")</code> [this is a made-up function] <strong>so that I can divide the dataset into 3 parts according to their labels?</strong></p>
|
<p>So finally I chose to separate the files into per-class csvs, i.e. generate csvs that each contain data from only one class. This might not be a perfect solution when there are too many classes, but in my case there are only 5 classes so it doesn't matter.</p>
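<p>For reference, <code>tf.data</code> does offer a <code>filter</code> transformation that behaves like the made-up <code>selectDataByLabel</code>; a small sketch with toy tensors (not the asker's final approach):</p>
<pre><code>import tensorflow as tf

features = tf.constant([[1.0], [2.0], [3.0], [4.0]])
labels = tf.constant([0, 1, 2, 2])

dataset = tf.data.Dataset.from_tensor_slices((features, labels))
# keep only the examples whose label is 2
class_2 = dataset.filter(lambda x, y: tf.equal(y, 2))
</code></pre>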
|
python|tensorflow|tensorflow-datasets
| 0 |
1,907,020 | 52,087,613 |
Filtering rows by years; AttributeError: Can only use .dt accessor with datetimelike values
|
<p>This code is used to show which deliveries are late, print out the "Material" number associated with each, and show how many days late the delivery was. My issue now lies with trying to filter the data set to only read a specified range of time; in the following code I attempted to filter the data from 2017 to 2018, however I am receiving an error (listed below the block of code). How can I filter rows to show only a specified range of time, while conducting the same analysis (which Material part numbers had a late delivery, and how many days late it was), without running into an error? </p>
<pre><code>import pandas as pd
from datetime import datetime
from datetime import timedelta

df = pd.read_csv('otd.csv')

diff_delivery_date = []
date_format = '%m/%d/%Y'

df2 = df[(df['Delivery Date'].dt.year >= 2017) & (df['Delivery Date'].dt.year <= 2018)]

for x, y, z in zip(df2['Material'], df2['Delivery Date'], df2['source desired delivery date']):
    actual_deliv_date = datetime.strptime(y, date_format)
    supposed_deliv_date = datetime.strptime(z, date_format)
    diff_deliv_date = supposed_deliv_date - actual_deliv_date
    diff_delivery_date.append(diff_deliv_date)

df['Diff Deliv Date'] = diff_delivery_date
print(df2)
</code></pre>
<p>Full Error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\khalha\eclipse-workspace\Python\Heyy\Code.py", line 13, in <module>
df2 = df[(df['Delivery Date'].dt.year >= 2017) & (df['Delivery Date'].dt.year <= 2018)]
File "C:\Users\khalha\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\generic.py", line 4372, in __getattr__
return object.__getattribute__(self, name)
File "C:\Users\khalha\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\accessor.py", line 133, in __get__
accessor_obj = self._accessor(obj)
File "C:\Users\khalha\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\indexes\accessors.py", line 325, in __new__
raise AttributeError("Can only use .dt accessor with datetimelike "
AttributeError: Can only use .dt accessor with datetimelike values
</code></pre>
<p>Dummy csv:
<a href="https://i.stack.imgur.com/0vq4w.png" rel="nofollow noreferrer">Image of csv file</a></p>
<pre><code>Material Delivery Date source desired delivery date
3334678 12/31/2014 12/31/2014
233433 12/31/2014 12/31/2014
3434343 1/5/2015 1/5/2015
3334567 1/5/2015 1/5/2015
546456 2/11/2015 2/11/2015
221295 4/10/2015 4/10/2015
</code></pre>
<p>Sample dataframe: </p>
<pre><code>Deliveryvalue = df2['11/31/2014', '11/31/2017', '11/31/2018']
Desiredvalue = df2['12/31/2014', '12/21/2017', '12/11/2018']
</code></pre>
|
<p>In this answer I'm assuming your data has the following format:</p>
<pre><code>Material,Delivery Date,source desired delivery date
3334678,12/31/2017,12/31/2017
233433,12/31/2017,12/31/2017
3434343,1/5/2017,1/5/2017
3334567,1/5/2017,1/5/2017
546456,2/11/2017,2/11/2017
221295,4/10/2017,4/10/2017
</code></pre>
<p>So, assuming that you can do it like this:</p>
<pre><code>import pandas as pd
df = pd.read_csv('odt.csv')
df['Delivery Date'] = pd.to_datetime(df['Delivery Date'], format='%m/%d/%Y')
df['source desired delivery date'] = pd.to_datetime(df['source desired delivery date'], format='%m/%d/%Y')
df2 = df[(df['Delivery Date'].dt.year >= 2017) & (df['Delivery Date'].dt.year <= 2018)]
df2['Diff Deliv Date'] = df2['Delivery Date'] - df2['source desired delivery date']
print(df2)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> Material Delivery Date source desired delivery date Diff Deliv Date
0 3334678 2017-12-31 2017-12-31 0 days
1 233433 2017-12-31 2017-12-31 0 days
2 3434343 2017-01-05 2017-01-05 0 days
3 3334567 2017-01-05 2017-01-05 0 days
4 546456 2017-02-11 2017-02-11 0 days
5 221295 2017-04-10 2017-04-10 0 days
</code></pre>
<p><strong>Notes</strong></p>
<p>After loading the data the types of the columns where the following:</p>
<pre><code>Material int64
Delivery Date object
source desired delivery date object
</code></pre>
<p>You can check whether yours match those. Then you need to convert the <code>'Delivery Date'</code> and <code>'source desired delivery date'</code> to datetime; this is done in:</p>
<pre><code>df['Delivery Date'] = pd.to_datetime(df['Delivery Date'], format='%m/%d/%Y')
df['source desired delivery date'] = pd.to_datetime(df['source desired delivery date'], format='%m/%d/%Y')
</code></pre>
<p>Then simply filter the data and compute the difference. Also I changed:</p>
<pre><code>df['Diff Deliv Date'] = diff_delivery_date
</code></pre>
<p>to <code>df2</code>, given that your code prints <code>df2</code> at the end.</p>
|
python|pandas|csv|datetime
| 1 |
1,907,021 | 51,578,235 |
Pytorch how to get the gradient of loss function twice
|
<p>Here is what I'm trying to implement:</p>
<p>We calculate loss based on <code>F(X)</code>, as usual. But we also define "adversarial loss" which is a loss based on <code>F(X + e)</code>. <code>e</code> is defined as <code>dF(X)/dX</code> multiplied by some constant. Both loss and adversarial loss are backpropagated for the total loss.</p>
<p>In tensorflow, this part (getting <code>dF(X)/dX</code>) can be coded like below:</p>
<pre><code> grad, = tf.gradients( loss, X )
grad = tf.stop_gradient(grad)
e = constant * grad
</code></pre>
<p>Below is my pytorch code:</p>
<pre><code>class DocReaderModel(object):
    def __init__(self, embedding=None, state_dict=None):
        self.train_loss = AverageMeter()
        self.embedding = embedding
        self.network = DNetwork(opt, embedding)
        self.optimizer = optim.SGD(parameters)

    def adversarial_loss(self, batch, loss, embedding, y):
        self.optimizer.zero_grad()
        loss.backward(retain_graph=True)
        grad = embedding.grad
        grad.detach_()
        perturb = F.normalize(grad, p=2) * 0.5
        self.optimizer.zero_grad()

        adv_embedding = embedding + perturb
        network_temp = DNetwork(self.opt, adv_embedding)  # This is how to get F(X)
        network_temp.training = False
        network_temp.cuda()
        start, end, _ = network_temp(batch)  # This is how to get F(X)
        del network_temp  # I even deleted this instance.
        return F.cross_entropy(start, y[0]) + F.cross_entropy(end, y[1])

    def update(self, batch):
        self.network.train()
        start, end, pred = self.network(batch)
        loss = F.cross_entropy(start, y[0]) + F.cross_entropy(end, y[1])
        loss_adv = self.adversarial_loss(batch, loss, self.network.lexicon_encoder.embedding.weight, y)
        loss_total = loss + loss_adv

        self.optimizer.zero_grad()
        loss_total.backward()
        self.optimizer.step()
</code></pre>
<p>I have few questions:</p>
<p>1) I substituted tf.stop_gradient with grad.detach_(). Is this correct?</p>
<p>2) I was getting <code>"RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time."</code> so I added <code>retain_graph=True</code> at the <code>loss.backward</code>. That specific error went away.
However, now I'm getting a memory error after a few epochs (<code>RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1525909934016/work/aten/src/THC/generic/THCStorage.cu:58</code>). I suspect I'm unnecessarily retaining the graph. </p>
<p>Can someone let me know pytorch's best practice on this? Any hint / even short comment will be highly appreciated.</p>
|
<p>I think you are trying to implement a generative adversarial network (GAN), but from the code I don't understand and can't follow what you are trying to achieve, as there are a few missing pieces for a GAN to work. I can see there's a discriminator network module, <code>DNetwork</code>, but the generator network module is missing.</p>
<p>If I had to guess, when you say 'loss function twice', I assume you mean you have one loss function for the discriminator net and another for the generator net. If that's the case, let me share how I would implement a basic GAN model.</p>
<p>As an example, let's take a look at this <a href="https://nbviewer.jupyter.org/github/cedrickchee/wasserstein-gan/blob/master/wgan-pytorch.ipynb#Create-model" rel="nofollow noreferrer">Wasserstein GAN Jupyter notebook</a></p>
<p>I'll skip the less important bits and zoom into the important ones here:</p>
<ol>
<li><p>First, import PyTorch libraries and set up</p>
<pre><code># Set up batch size, image size, and size of noise vector:
bs, sz, nz = 64, 64, 100 # nz is the size of the latent z vector for creating some random noise later
</code></pre></li>
<li><p>Build a discriminator module</p>
<pre><code>class DCGAN_D(nn.Module):
    def __init__(self):
        ... truncated, the usual neural nets stuffs, layers, etc ...

    def forward(self, input):
        ... truncated, the usual neural nets stuffs, layers, etc ...
</code></pre></li>
<li><p>Build a generator module</p>
<pre><code>class DCGAN_G(nn.Module):
    def __init__(self):
        ... truncated, the usual neural nets stuffs, layers, etc ...

    def forward(self, input):
        ... truncated, the usual neural nets stuffs, layers, etc ...
</code></pre></li>
<li><p>Put them all together</p>
<pre><code>netG = DCGAN_G().cuda()
netD = DCGAN_D().cuda()
</code></pre></li>
<li><p>Optimizer needs to be told what variables to optimize. A module automatically keeps track of its variables.</p>
<pre><code>optimizerD = optim.RMSprop(netD.parameters(), lr = 1e-4)
optimizerG = optim.RMSprop(netG.parameters(), lr = 1e-4)
</code></pre></li>
<li><p>One forward step and one backward step for Discriminator</p>
<p>Here, the network can calculate the gradient during the backward pass, depending on the input to this function. So, in my case, I have 3 types of losses: generator loss, discriminator real-image loss, and discriminator fake-image loss. I can get the gradient of the loss function three times for 3 different net passes.</p>
<pre><code>def step_D(input, init_grad):
    # input can be from generator's generated image data or input image from dataset
    err = netD(input)
    err.backward(init_grad)  # backward pass net to calculate gradient
    return err  # loss
</code></pre></li>
<li><p><strong>Control trainable parameters</strong> [IMPORTANT]</p>
<p>Trainable parameters in the model are those that require gradients.</p>
<pre><code>def make_trainable(net, val):
    for p in net.parameters():
        p.requires_grad = val  # note: this is later set to False below in the netG update in the train loop.
</code></pre>
<p>In TensorFlow, this part can be coded like below:</p>
<pre><code>grad = tf.gradients(loss, X)
grad = tf.stop_gradient(grad)
</code></pre>
<p>So, I think this will answer your first question, "I substituted tf.stop_gradient with grad.detach_(). Is this correct?"</p></li>
<li><p>Train loop</p></li>
</ol>
<p>You can see here how's the 3 different loss functions are being called here.</p>
<pre><code> def train(niter, first=True):
for epoch in range(niter):
# Make iterable from PyTorch DataLoader
data_iter = iter(dataloader)
i = 0
while i < n:
###########################
# (1) Update D network
###########################
make_trainable(netD, True)
# train the discriminator d_iters times
d_iters = 100
j = 0
while j < d_iters and i < n:
j += 1
i += 1
# clamp parameters to a cube
for p in netD.parameters():
p.data.clamp_(-0.01, 0.01)
data = next(data_iter)
##### train with real #####
real_cpu, _ = data
real_cpu = real_cpu.cuda()
real = Variable( data[0].cuda() )
netD.zero_grad()
# Real image discriminator loss
errD_real = step_D(real, one)
##### train with fake #####
fake = netG(create_noise(real.size()[0]))
input.data.resize_(real.size()).copy_(fake.data)
# Fake image discriminator loss
errD_fake = step_D(input, mone)
# Discriminator loss
errD = errD_real - errD_fake
optimizerD.step()
###########################
# (2) Update G network
###########################
make_trainable(netD, False)
netG.zero_grad()
# Generator loss
errG = step_D(netG(create_noise(bs)), one)
optimizerG.step()
print('[%d/%d][%d/%d] Loss_D: %f Loss_G: %f Loss_D_real: %f Loss_D_fake %f'
% (epoch, niter, i, n,
errD.data[0], errG.data[0], errD_real.data[0], errD_fake.data[0]))
</code></pre>
<blockquote>
<p>"I was getting "RuntimeError: Trying to backward through the graph a second time..."</p>
</blockquote>
<p>PyTorch has this behaviour; to reduce GPU memory usage, during the <code>.backward()</code> call, all the intermediary results (if you have like saved activations, etc.) are deleted when they are not needed anymore. Therefore, if you try to call <code>.backward()</code> again, the intermediary results don't exist and the backward pass cannot be performed (and you get the error you see).</p>
<p>It depends on what you are trying to do. You can call <code>.backward(retain_graph=True)</code> to make a backward pass that will not delete intermediary results, and so you will be able to call <code>.backward()</code> again. All but the last call to backward should have the <code>retain_graph=True</code> option.</p>
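<p>A tiny self-contained sketch of that behaviour (not from the original question's model):</p>
<pre><code>import torch

x = torch.ones(2, requires_grad=True)
y = (x * 3).sum()
y.backward(retain_graph=True)  # keep intermediary buffers
y.backward()                   # second pass works; gradients accumulate
print(x.grad)                  # tensor([6., 6.])
</code></pre>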
<blockquote>
<p>Can someone let me know pytorch's best practice on this</p>
</blockquote>
<p>As you can see from the PyTorch code above and from the way things are done in PyTorch, which tries to stay Pythonic, you can get a sense of PyTorch's best practices there.</p>
|
tensorflow|pytorch
| 1 |
1,907,022 | 59,575,681 |
Unattended Edgerouter Upgrade - strange Ansible(?) behaviour? Different stdout outputs between stages
|
<p>this is my first question here, I am so excited :)</p>
<p>Maybe it's a noob question, but I don't understand it...</p>
<p>I'm trying to upgrade an EdgeRouter firmware (<a href="https://www.ui.com/edgemax/edgerouter-pro/" rel="nofollow noreferrer">https://www.ui.com/edgemax/edgerouter-pro/</a>) with GitLab CI and Ansible. The stages are absolutely identical, but the stdout of the same task, with the same ansible.cfg, with the same gitlab-runner, in the same pipeline etc., differs:</p>
<pre><code>STAGE1
CI Deployment Docker Image:
ansible-playbook 2.8.3
python version = 3.7.4
Edgerouter:
USER1@HOSTNAME1:~$ python --version
Python 2.7.13
USER1@HOSTNAME1:~$ show system image
The system currently has the following image(s) installed:
v2.0.8.5247496.191120.1124 (running image) (default boot)
v2.0.8.5247496.191120.1124-1
</code></pre>
<pre><code>OUTPUT
...identical verbose output, but:
ok: [HOSTNAME1] => changed=false
invocation:
module_args:
commands:
- show version | grep "Build ID" | cut -d ':' -f 2 | tr -d ' '
interval: 1
match: all
retries: 10
wait_for: null
stdout:
- '5247496'
stdout_lines: <omitted>
</code></pre>
<p>Works like a charm!
BUT:</p>
<pre><code>STAGE2
CI Deployment Image:
ansible-playbook 2.8.3
python version = 3.7.4
Edgerouter
USER2@HOSTNAME2:~$ python --version
Python 2.7.13
USER2@HOSTNAME2:~$ show system image
The system currently has the following image(s) installed:
v2.0.8.5247496.191120.1124 (running image) (default boot)
v2.0.8.5247496.191120.1124-1
</code></pre>
<pre><code>OUTPUT
...identical verbose output, but:
ok: [HOSTNAME2] => changed=false
invocation:
module_args:
commands:
- show version | grep "Build ID" | cut -d ':' -f 2 | tr -d ' '
interval: 1
match: all
retries: 10
wait_for: null
stdout:
- |-
show version | grep "Build ID" | cut -d ':' -f 2 |
tr -d ' '
5247496
stdout_lines: <omitted>
</code></pre>
<p>DOES NOT...
This is the Ansible task:</p>
<pre><code>- name: get installed firmware build ID to compare with config
  edgeos_command:
    commands: show version | grep "Build ID" | cut -d ':' -f 2 | tr -d ' '
  register: installed_firmware_build_id
  tags: router-upgrade
</code></pre>
<p>What am I missing here?</p>
|
<p>I ended up like this:</p>
<pre><code>- edgeos_command:
    commands: show version
  register: installed_firmware_version_raw
  tags: router-upgrade

- set_fact:
    installed_firmware_version: "{{ (installed_firmware_version_raw.stdout[0] | regex_findall('Version:\\s+(v.+)'))[0] }}"
  tags: router-upgrade

- set_fact:
    installed_firmware_build_id: "{{ (installed_firmware_version_raw.stdout[0] | regex_findall('Build ID:\\s+(\\d+)'))[0] }}"
  tags: router-upgrade
</code></pre>
|
python|ansible|gitlab-ci|ubiquity
| 0 |
1,907,023 | 63,521,889 |
Lines in 3d plot in python
|
<p>I have the following script:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
nn = 400 # number of points along circle's perimeter
theta = np.linspace(0, 2*np.pi, nn)
rho = np.ones(nn)
# (x,y) represents points on circle's perimeter
x = np.ravel(rho*np.cos(theta))
y = np.ravel(rho*np.sin(theta))
fig, ax = plt.subplots()
plt.rcParams["figure.figsize"] = [6, 10]
ax = plt.axes(projection='3d') # set the axes for 3D plot
ax.azim = -90 # y rotation (default=270)
ax.elev = 21 # x rotation (default=0)
# low, high values of z for plotting 2 circles at different elev.
loz, hiz = -15, 15
# Plot two circles
ax.plot(x, y, hiz)
ax.plot(x, y, loz)
# set some indices to get proper (x,y) for line plotting
lo1,hi1 = 15, 15+nn//2
lo2,hi2 = lo1+nn//2-27, hi1-nn//2-27
# plot 3d lines using coordinates of selected points
ax.plot([x[lo1], x[hi1]], [y[lo1], y[hi1]], [loz, hiz])
ax.plot([x[lo2], x[hi2]], [y[lo2], y[hi2]], [loz, hiz])
ax.plot([0, 0, 0], [0, 0, 10])
ax.plot([0, 0, 0], [9, 0, 0])
ax.plot([0, 0, 0], [0, 8, 0])
plt.show()
</code></pre>
<p>At the end of the script, I would like to plot three lines in three directions. How do I do that? And why does this:</p>
<pre><code>ax.plot([0, 0, 0], [0, 0, 10])
ax.plot([0, 0, 0], [9, 0, 0])
ax.plot([0, 0, 0], [0, 8, 0])
</code></pre>
<p>give the lines in the same direction?</p>
<p>And I have a second question, please. How can I make the cone narrower (the base more similar to a circle)?</p>
<p>Output now:
<a href="https://i.stack.imgur.com/NRkka.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NRkka.png" alt="enter image description here" /></a></p>
|
<p><code>ax.plot([0, 0, 0], [0, 0, 10])</code> is giving <code>plot</code> the x and y coordinates of 3 points, but you haven't given any coordinates in the z direction. Remember the inputs to <code>plot</code> are x, y, z, not, as you seem to have assumed, (x0,y0,z0), (x1,y1,z1)</p>
<p>So this is drawing 3 "lines" where two of them start and end at x=y=z=0, and one of them extends to y=10. The other two <code>ax.plot</code> calls you have are doing similar things.</p>
<p>To draw three lines that start at the origin and each extend along one of the x, y, or z directions, you perhaps meant to use:</p>
<pre><code>ax.plot([0, 0], [0, 0], [0, 10]) # extend in z direction
ax.plot([0, 0], [0, 8], [0, 0]) # extend in y direction
ax.plot([0, 9], [0, 0], [0, 0]) # extend in x direction
</code></pre>
<p>Note that this also makes your circles look more like circles</p>
<p><a href="https://i.stack.imgur.com/eG8nh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eG8nh.png" alt="enter image description here" /></a></p>
|
python-3.x|matplotlib|line
| 2 |
1,907,024 | 36,584,166 |
How do I make perspective transform of point with x and y coordinate
|
<p>So I wrote this little program which allows me to select 4 points on two images.</p>
<p>Using those points I get a transformation matrix. After that I select a point on one of the images and want to get a visualization of where that point will be on the other image.</p>
<p>Say my point is marked like this -> <code>(x,y)</code> - so it's a tuple. How should I format this "position" on the image so that it is possible to transform it?</p>
<p>I have looked at <a href="http://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#void%20perspectiveTransform(InputArray%20src,%20OutputArray%20dst,%20InputArray%20m)" rel="noreferrer">documentation for perspectiveTransform() method</a> and figured that I should be storing it in following shape:</p>
<pre><code>numpy.array([
    [self.points[self.length-1][0]],
    [self.points[self.length-1][1]]
], dtype="float32")
</code></pre>
<p>Which would give me on a single click this format:</p>
<pre><code>Point= [[ 2300.]
[ 634.]]
</code></pre>
<p>This format doesn't seem to work, I use this Transformation matrix:</p>
<pre><code>M = [[ -1.71913123e+00 -4.76850572e+00 5.27968944e+03]
[ 2.07693562e-01 -1.09738424e+01 6.35222770e+03]
[ 1.02865125e-04 -4.80067600e-03 1.00000000e+00]]
</code></pre>
<p>in this method (and get following error):</p>
<pre><code>cv2.perspectiveTransform(src, M)
OpenCV Error: Assertion failed (scn + 1 == m.cols) in cv::perspectiveTransform, file C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\core\src\matmul.cpp
</code></pre>
<p>Any advice or tip is welcome.</p>
|
<p>I figured out the answer. </p>
<p>Found it on this <a href="http://answers.opencv.org/question/252/cv2perspectivetransform-with-python/" rel="noreferrer">link</a></p>
<p>The key is to put your point like this:</p>
<pre><code> pts = numpy.array([[x,y]], dtype = "float32")
</code></pre>
<p>And then call another <code>numpy.array</code> on existing variable <code>pts</code>:</p>
<pre><code> pts = numpy.array([pts])
</code></pre>
<p>The procedure is the same after this. </p>
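<p>Putting it together as a runnable sketch (the identity matrix here is only a stand-in for your real transformation matrix <code>M</code>):</p>
<pre><code>import numpy as np
import cv2

x, y = 2300.0, 634.0            # the clicked point from the question
M = np.eye(3, dtype="float32")  # stand-in homography; use your real M
pts = np.float32([[[x, y]]])    # shape (1, 1, 2)
dst = cv2.perspectiveTransform(pts, M)
print(dst[0][0])                # the transformed (x, y)
</code></pre>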
|
python|opencv
| 6 |
1,907,025 | 19,527,351 |
Python: How to convert a timezone aware timestamp to UTC without knowing if DST is in effect
|
<p>I am trying to convert a naive timestamp that is always in Pacific time to UTC time. In the code below, I'm able to specify that this timestamp I have is in Pacific time, but it doesn't seem to know that it should be an offset of -7 hours from UTC because it's only 10/21 and DST has not yet ended.</p>
<p>The script:</p>
<pre><code>import pytz
import datetime
naive_date = datetime.datetime.strptime("2013-10-21 08:44:08", "%Y-%m-%d %H:%M:%S")
localtz = pytz.timezone('America/Los_Angeles')
date_aware_la = naive_date.replace(tzinfo=localtz)
print date_aware_la
</code></pre>
<p>Outputs:</p>
<pre><code> 2013-10-21 08:44:08-08:00
</code></pre>
<p>It should have an offset of -07:00 until DST ends on Nov. 3rd. How can I get my timezone aware date to have the correct offset when DST is and is not in effect? Is pytz smart enough to know that DST will be in effect on Nov 3rd?</p>
<p>Overall goal: I'm just trying to convert the timestamp to UTC knowing that I will be getting a timestamp in pacific time without any indication whether or not DST is in effect. I'm not generating this date from python itself, so I'm not able to just use utc_now().</p>
|
<p>Use the <code>localize</code> method:</p>
<pre><code>import pytz
import datetime
naive_date = datetime.datetime.strptime("2013-10-21 08:44:08", "%Y-%m-%d %H:%M:%S")
localtz = pytz.timezone('America/Los_Angeles')
date_aware_la = localtz.localize(naive_date)
print(date_aware_la) # 2013-10-21 08:44:08-07:00
</code></pre>
<p>This is covered in the "Example & Usage" section of <a href="http://pytz.sourceforge.net/">the pytz documentation</a>.</p>
<p>And then continuing to UTC:</p>
<pre><code>utc_date = date_aware_la.astimezone(pytz.utc)
print(utc_date)
</code></pre>
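<p>For contrast, a small sketch of why <code>replace()</code> misbehaves: it attaches the timezone without resolving DST, so pytz falls back to the zone's default offset rather than the correct one for the given date:</p>
<pre><code>wrong = naive_date.replace(tzinfo=localtz)
print(wrong)  # offset is not DST-aware (e.g. -08:00 or -07:53, depending on the pytz version)
</code></pre>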
|
python|datetime|dst|pytz
| 27 |
1,907,026 | 57,852,652 |
Python read from txt to csv with duplicate values
|
<p>I've got a text file that includes info about departments and managers.
Example part from the txt: </p>
<pre><code>department: sale
group : building a
manager::sergey
department: hr
group : building a
manager::tom
location:somewhereelse
department: health
group : building b
manager::jeniffer
manager::billy
department: security
group : building b
manager::john
</code></pre>
<p>Between departments there is one empty line after the manager name(s), and there is a space around the separators on each line.</p>
<p>I'm using pd.read_csv and transforming that data to make columns for dep, group, manager and location.</p>
<p>There are two problems:</p>
<p>1- Sometimes there can be 2 managers in a department (like the health department). If there are 2 managers in the same department, my code fails; it only works for 1 manager. How can I add the other manager as well?</p>
<p>2- Sometimes a department has more rows than the others; look at the hr department, which has extra info about location. That makes it harder to build a regular DataFrame.
It needs a dynamic structure, I suppose :(</p>
<p>Example of what it should be: </p>
<pre><code>dep group manager Location
sale building a sergey ""
hr building a tom somewhereelse
health building b jeniffer ""
health building b billy ""
</code></pre>
<p>What can I do? </p>
<p>My code </p>
<pre><code>df = pd.read_csv('sample.txt', sep="\n")
df = data.replace({ '# department: ' : " ", '# group:' : " ",'# manager:' : " ",}, regex= True)
ab = pd.DataFrame(df.values.reshape(-1, 7),
columns=["department","group","manager"])
</code></pre>
|
<p>You can do this as follows:</p>
<pre><code>import io
import pandas as pd

# read the file in the structure it is in,
# using a consecutive series of : as
# field separators
# and cutting off the spaces before
# and after the colon
# the two fields are named _key and _value
# skip the blank lines
df= pd.read_csv(
    io.StringIO(raw),
    sep='\s*:+\s*',
    skip_blank_lines=True,
    names=['_key', '_value'],
    engine='python')
# now build groups to identify which
# rows belong together
# assuming each group begins with
# a line containing department in the
# key column
group_ser= df['_key'] == 'department'
df['_group']= group_ser.cumsum()
# now get rid of the duplicate manager
# rows by putting the manager values
# into a list
df_agg_managers= df[df['_key']=='manager'].groupby(['_group', '_key']).agg({'_value':list})
# the rest actually does not need to be
# aggregated according to your description
# so just add the index structure so it
# matches the one of df_agg_managers
df_rest= df[df['_key']!='manager'].set_index(['_group', '_key'])
df_all= pd.concat([df_agg_managers, df_rest], axis='index')
# now use the content of the _key column
# to build columns (like in a pivot operation)
df_unstacked= df_all.unstack()
# get rid of the first level in the column names
df_unstacked.columns= df_unstacked.columns.get_level_values(1)
# count the number of entries per manager
# column
len_ser= df_unstacked['manager'].map(len)
# below we loop only over the rows that contain
# lists with at least one element, so we need
# an extra treatment for the rows with empty
# lists or NaN
df_result= df_unstacked.loc[(len_ser<1) | df_unstacked['manager'].isna()].reset_index()
# now create one row per entry in the manager-list
for i in range(len_ser.max()):
df_work= df_unstacked.loc[len_ser > i].copy()
df_work['manager']= df_work['manager'].map(lambda lst: lst[i])
df_result= pd.concat([df_result, df_work.reset_index()], axis='index', ignore_index=True)
# apply some final cosmetics
# sort the rows so they appear groupwise
# then remove the _group column
df_result.sort_values('_group').drop(['_group'], axis='columns')
</code></pre>
<p>The result is:</p>
<pre><code>_key department group location manager
0 sale building a NaN sergey
1 hr building a somewhereelse tom
2 health building b NaN jeniffer
4 health building b NaN billy
3 security building b NaN john
</code></pre>
<p>If run on the following test data:</p>
<pre><code>raw="""department: sale
group : building a
manager::sergey
department: hr
group : building a
manager::tom
location:somewhereelse
department: health
group : building b
manager::jeniffer
manager::billy
department: security
group : building b
manager::john"""
</code></pre>
<p><strong>Note:</strong> in case the assumption is wrong that all groups begin with department, you can also use the blank lines to group your content. You could just replace the first few lines by:</p>
<pre><code>df= pd.read_csv(
io.StringIO(raw),
sep='\s*:+\s*',
skip_blank_lines=False,
names=['_key', '_value'],
engine='python')
group_ser= df['_key'].isna()
df['_group']= group_ser.cumsum()+1
df.drop(group_ser[group_ser==True].index, axis='index', inplace=True)
</code></pre>
|
python|pandas|dataframe|io
| 0 |
1,907,027 | 54,509,359 |
Sum of elements of subarrays is equal to a target value in Python
|
<p>I am given an array <code>arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]</code> and a target value <code>trgt = 10</code>. I need to find all possible combinations of <code>subarrays</code> such that the sum of the elements of each subarray equals the given target value <code>trgt</code>. I need to use Python to accomplish the task. I have found a similar discussion <a href="https://stackoverflow.com/questions/23087820/python-subset-sum">here</a>. However, the solution given there returns only one possible subarray instead of all the valid ones. Any help pointing to obtaining all such subarrays will be very helpful. Thank you in advance.</p>
|
<p>The library of choice for getting combinations is <code>itertools</code>: </p>
<pre><code>import itertools as it
import numpy as np
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
trgt = 10
</code></pre>
<p>At first calculate the maximum length of a tuple that could result in <code>trgt</code> when summed up, even when it consists of the smallest numbers available:</p>
<pre><code>maxsize = np.argwhere(np.cumsum(sorted(arr))>trgt)[0][0]
</code></pre>
<p>Then iterate from one to <code>maxsize</code>, let itertools create the corresponding combinations and save only those which sum up to <code>trgt</code>:</p>
<pre><code>subsets = []
for size in range(1, maxsize+1):
subsets.extend([t for t in it.combinations(arr, size) if sum(t)==trgt])
print(subsets)
#[(10,), (1, 9), (2, 8), (3, 7), (4, 6), (1, 2, 7), (1, 3, 6), (1, 4, 5), (2, 3, 5), (1, 2, 3, 4)]
</code></pre>
|
python|recursion|sub-array
| 1 |
1,907,028 | 54,651,700 |
Use python 3 in reticulate on shinyapps.io
|
<p>I have some code in Python 3 which I'm running in R through the <code>reticulate</code> library to use in a <code>shiny</code> app. It works fine on my local machine, but when I publish to shinyapps.io, reticulate uses Python 2 by default.</p>
<p>So far I've tried the <code>use_python</code> function, but I'm not sure about the path:</p>
<pre><code>use_python("/usr/bin/python3", require = TRUE)
</code></pre>
<p>The logs give me the error:</p>
<pre><code>2019-02-12T13:44:54.691167+00:00 shinyapps[710102]: Warning: Error in initialize_python: Python shared library '/usr/lib/libpython3.5.so' not found, Python bindings not loaded.
2019-02-12T13:44:54.697101+00:00 shinyapps[710102]: 64: stop
2019-02-12T13:44:54.697103+00:00 shinyapps[710102]: 63: initialize_python
2019-02-12T13:44:54.697104+00:00 shinyapps[710102]: 62: ensure_python_initialized
2019-02-12T13:44:54.697105+00:00 shinyapps[710102]: 61: py_run_file
2019-02-12T13:44:54.697106+00:00 shinyapps[710102]: 60: source_python
2019-02-12T13:44:54.697107+00:00 shinyapps[710102]: 59: server [/srv/connect/apps/str_telefonica/app.R#57]
2019-02-12T13:44:54.697385+00:00 shinyapps[710102]: Error in initialize_python(required_module, use_environment) :
2019-02-12T13:44:54.697387+00:00 shinyapps[710102]: Python shared library '/usr/lib/libpython3.5.so' not found, Python bindings not loaded.
</code></pre>
|
<p>To deploy an app to shinyapps.io using <code>reticulate</code> and Python 3, your code should create a Python 3 virtual environment and install any required packages into it:</p>
<pre><code>reticulate::virtualenv_create(envname = 'python3_env',
python = '/usr/bin/python3')
reticulate::virtualenv_install('python3_env',
packages = c('numpy')) # <- Add other packages here, if needed
</code></pre>
<p>Then, instead of using the <code>use_python()</code> function, just point <code>reticulate</code> to the Python 3 virtual environment that you created:</p>
<pre><code>reticulate::use_virtualenv('python3_env', required = T)
</code></pre>
<p>For a more complete tutorial on deploying a Shiny app using <code>reticulate</code> with Python 3 to shinyapps.io, check out <a href="https://github.com/ranikay/shiny-reticulate-app" rel="noreferrer">this step-by-step example</a>.</p>
<p><strong>Note</strong>: Until a few months ago, <code>reticulate</code> invoking <code>virtualenv</code> from Python 3 still created a Python 2 virtual environment by default. However, <a href="https://github.com/rstudio/reticulate/commit/0a516f571721c1219929b3d3f58139fb9206a3bd" rel="noreferrer">this was fixed</a> in the development version of <code>reticulate</code> as of Oct 8, 2019.</p>
<p>You can install that particular version of <code>reticulate</code> with the fix by using the R package <code>remotes</code>:</p>
<pre><code>remotes::install_github("rstudio/reticulate", force = T, ref = '0a516f571721c1219929b3d3f58139fb9206a3bd')
</code></pre>
<p>or use any <code>reticulate</code> >= v1.13.0-9001 and you'll be able to create Python 3 virtual environments on shinyapps.io.</p>
|
python|r|shiny|reticulate
| 9 |
1,907,029 | 54,558,936 |
Logarithm function-approximation algorithm
|
<p>I created a function to calculate the parameters of a logarithm function. </p>
<p>My aim is to predict the future results of data points that follow a logarithm function. What matters most is that my algorithm fits the most recent data points better than the whole series, as it is the prediction that matters. I currently use Mean Squared Error to optimize my parameters, but I do not know how to weight it such that it treats my most recent data points as more important than the first ones.</p>
<ul>
<li>Here is my equation:</li>
</ul>
<p>y = C * log( a * x + b )</p>
<ul>
<li><p>Here is my code:</p>
<pre><code>import numpy as np
from sklearn.metrics import mean_squared_error
def approximate_log_function(x, y):
C = np.arange(0.01, 1, step = 0.01)
a = np.arange(0.01, 1, step = 0.01)
b = np.arange(0.01, 1, step = 0.01)
min_mse = 9999999999
parameters = [0, 0, 0]
for i in np.array(np.meshgrid(C, a, b)).T.reshape(-1, 3):
y_estimation = i[0] * np.log(i[1] * np.array(x) + i[2])
mse = mean_squared_error(y, y_estimation)
if mse < min_mse:
min_mse = mse
parameters = [i[0], i[1], i[2]]
return (min_mse, parameters)
</code></pre></li>
</ul>
<p>You can see in the image below that the orange curve is the data I have and the blue line is my fitted line. We see that the fitted line stretches a bit away from the data at the end, and I would like to avoid that to improve the prediction from my function.</p>
<p><a href="https://i.stack.imgur.com/Vb4v0.png" rel="nofollow noreferrer">logarithm function graph</a></p>
<p>My question is twofold:</p>
<ul>
<li><p>Is this actually the best way to do it or is it best to use another function (such as the increasing form of an exponential decay)? (y = C * (1 - e^(-kt)), k > 0)</p></li>
<li><p>How can I change my code so that the last values are more important to be fitted than the first ones. </p></li>
</ul>
|
<p>Usually, in non-linear least-squares, the inverse of the y values is taken as the weight, which essentially eliminates outliers. You can expand on that idea by adding a function that calculates the weight based on the x position. </p>
<pre><code>import numpy as np
from sklearn.metrics import mean_squared_error

def xWeightA(x):
    # weight 1 for the first 90% of points, 1.2 for the last 10%
    container = []
    for k in range(len(x)):
        if k < int(0.9 * len(x)):
            container.append(1)
        else:
            container.append(1.2)
    return np.array(container)  # an array, so it multiplies elementwise below

def approximate_log_function(x, y):
    C = np.arange(0.01, 1, step = 0.01)
    a = np.arange(0.01, 1, step = 0.01)
    b = np.arange(0.01, 1, step = 0.01)

    min_mse = 9999999999
    parameters = [0, 0, 0]
    LocalWeight = xWeightA(x)

    for i in np.array(np.meshgrid(C, a, b)).T.reshape(-1, 3):
        # the weights scale the estimate before the MSE is computed
        y_estimation = LocalWeight * i[0] * np.log(i[1] * np.array(x) + i[2])
        mse = mean_squared_error(y, y_estimation)
        if mse < min_mse:
            min_mse = mse
            parameters = [i[0], i[1], i[2]]

    return (min_mse, parameters)
</code></pre>
<p>Also, it looks like you're evaluating through the complete objective function, that makes the code to take to much time to find the minimum (at least on my machine). You can use curve_fit or polyfit as suggested, but if the goal is to generate the optimizer try adding an early break or a random search through the grid. Hope it helps </p>
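<p>For reference, here is a sketch of the <code>curve_fit</code> alternative mentioned above; its <code>sigma</code> argument acts as a per-point uncertainty, so giving the last points a smaller sigma weights them more heavily (the data below is made up):</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit

def log_model(x, C, a, b):
    return C * np.log(a * x + b)

x = np.arange(1, 51, dtype=float)   # hypothetical data; use your own x, y
y = 0.5 * np.log(0.3 * x + 0.2)

sigma = np.ones_like(x)
sigma[int(0.9 * len(x)):] = 0.5     # smaller sigma -> more weight on the last 10%

popt, pcov = curve_fit(log_model, x, y, p0=[0.5, 0.5, 0.5], sigma=sigma)
print(popt)                         # fitted C, a, b
</code></pre>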
|
python|mathematical-optimization
| 0 |
1,907,030 | 71,364,168 |
Scoring parameter in BayesSearchCV class confusion
|
<p>I'm using BayesSearchCV from <code>scikit-optimize</code> to train a model on a fairly imbalanced dataset. From what I'm reading precision or ROC AUC would be the best metrics for imbalanced dataset. In my code:</p>
<pre><code>knn_b = BayesSearchCV(estimator=pipe, search_spaces=search_space, n_iter=40, random_state=7, scoring='roc_auc')
knn_b.fit(X_train, y_train)
</code></pre>
<p>The number of iterations is just a random value I chose (although I get a warning saying I already reached the best result, and there is not a way to early stop as far as I'm aware?). For the scoring parameter, I specified <code>roc_auc</code>, which I'm assuming it will be the primary metric to monitor for the best parameter in the results. So when I call <code>knn_b.best_params_</code>, I should have the parameters where the roc_auc metrics is higher. Is that correct?</p>
<p>My confusion is when I look at the results using <code>knn_b.cv_results_</code>. Shouldn't the <code>mean_test_score</code> be the <code>roc_auc</code> score because of the scoring param in the BayesSearchCV class? What I'm doing is plotting the results and seeing how each combination of params performed.</p>
<pre><code>sns.relplot(
data=knn_b.cv_results_, kind='line', x='param_classifier__n_neighbors', y='mean_test_score',
hue='param_scaler', col='param_classifier__p',
)
</code></pre>
<p>When I try to use the <code>roc_auc_score()</code> function on the true and predicted values, I get something completely different.</p>
<p>Is the <code>mean_test_score</code> here different? How would I be able to get the individual/mean roc_auc score of each CV/split of each iteration? Similarly for when I want to use RandomizedSearchCV or GridSearchCV.</p>
<p><strong>EDIT</strong>: tldr; I want to know what's being computed exactly in <code>mean_test_score</code>. I thought it was <code>roc_auc</code> because of the scoring param, or accuracy, but it seems to be neither.</p>
|
<p><code>mean_test_score</code> is the AUROC, because of your <code>scoring</code> parameter, yes.</p>
<p>Your main problem is that the ROC curve (and the area under it) require the <em>probability</em> predictions (or other continuous score), not the hard class predictions. Your manual calculation is thus incorrect.</p>
<p>You shouldn't expect exactly the same score anyway. Your second score is on the test set, and the first score is optimistically biased by the hyperparameter selection.</p>
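<p>A minimal sketch of the corrected manual check, assuming the pipeline's final estimator exposes <code>predict_proba</code>:</p>
<pre><code>from sklearn.metrics import roc_auc_score

proba = knn_b.predict_proba(X_test)[:, 1]  # scores for the positive class, not hard labels
print(roc_auc_score(y_test, proba))
</code></pre>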
|
python|scikit-learn|scikit-optimize
| 1 |
1,907,031 | 9,625,124 |
Debugging IronPython in Visual Studio 2010 - Source lines not synchronized
|
<p>I use VS 2010 to debug a single IronPython module. Everything works great. I can set breakpoints, watch local variables, etc. The only annoyance (which is serious) is that the yellow arrow that marks the current step in the debugger is not synchronized with the actual line being executed. Did anyone run into this issue?</p>
<p>I created an IronPython project through Visual Studio to make sure I wasn't missing some important setting, but to no avail.</p>
<p>Pyramid Newbie</p>
<p>Update:
Ok. Problem solved. I had a bunch of Python interpreters installed and the default interpreter was Python 3.2. I switched the default interpreter to IronPython 2.7 and everything is peachy now. The settings is in Tools|Options|Python Tools|Interpreter Options|Default Interpreter</p>
|
<p>Ok. Problem solved. I had a bunch of Python interpreters installed and the default interpreter was Python 3.2. I switched the default interpreter to IronPython 2.7 and everything is peachy now. The settings is in Tools|Options|Python Tools|Interpreter Options|Default Interpreter</p>
|
visual-studio-2010|ironpython|visual-studio-debugging
| 2 |
1,907,032 | 9,299,020 |
Python "and" operator with ints
|
<p>What is the explanation for this behavior in Python?</p>
<pre><code>a = 10
b = 20
a and b # 20
b and a # 10
</code></pre>
<p><code>a and b</code> evaluates to 20, while <code>b and a</code> evaluates to 10. Are positive ints equivalent to True? Why does it evaluate to the second value? Because it is second?</p>
|
<p>The <a href="http://docs.python.org/reference/expressions.html#boolean-operations">documentation</a> explains this quite well:</p>
<blockquote>
<p>The expression <code>x and y</code> first evaluates <code>x</code>; if <code>x</code> is false, its value is returned; otherwise, <code>y</code> is evaluated and the resulting value is returned.</p>
</blockquote>
<p>And similarly for <code>or</code> which will probably be the next question on your lips.</p>
<blockquote>
<p>The expression <code>x or y</code> first evaluates <code>x</code>; if <code>x</code> is true, its value is returned; otherwise, <code>y</code> is evaluated and the resulting value is returned.</p>
</blockquote>
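<p>A quick demonstration:</p>
<pre><code>print(10 and 20)  # 20: 10 is truthy, so 20 is evaluated and returned
print(0 and 20)   # 0: 0 is falsy and is returned immediately
print(10 or 20)   # 10: 10 is truthy and is returned immediately
print(0 or 20)    # 20
</code></pre>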
|
python|boolean
| 15 |
1,907,033 | 39,021,206 |
frequency response for a repeating 1-2-1 filter
|
<p>I'm trying to estimate the effect of applying a simple 1-2-1 filter multiple times, and to determine the residual scales. Specifically, I'm trying to reproduce this plot:</p>
<p><a href="http://i.stack.imgur.com/dunww.png" rel="nofollow">from Small et al., 2013</a></p>
<p>I used the scipy.signal.freqz as below</p>
<pre><code>filt = np.array([0.25,0.5,0.25])
w, h=signal.freqz(filt)
</code></pre>
<p>And I thought that for a repeating filter, I just need to multiply h by itself many times (since it's in the frequency domain, and filtering is just convolution).</p>
<p>However, I cannot get the same plot as they did in the paper. I have three major questions,</p>
<ol>
<li><p>I thought the 1-2-1 filter is just the triangle filter, is there other way to check its response in frequency domain?</p></li>
<li><p>How to check its frequency response for a repeating 1-2-1 filter in python? Isn't it just h times itself for multiple times?</p></li>
<li><p>I have hard time understanding the w(normalized frequency) unit in the freqz output. Could some one explain to me how to convert to wavenumber as in the plot?</p></li>
</ol>
<p>Thank you.</p>
|
<p>It turned out that I was not wrong. By plotting the absolute value of the transfer function and dividing the normalized frequency by 2π, I got the exact same plots, and applying the filter multiple times is exactly multiplying the frequency response by itself that many times. </p>
<pre><code>import numpy as np
from numpy import pi
from scipy import signal
import matplotlib.pyplot as plt

filt = np.array([0.25, 0.5, 0.25])
w, h = signal.freqz(filt)
plt.plot(w/(2*pi), abs(h**400), label='400 pass')
</code></pre>
<p><a href="http://i.stack.imgur.com/JKqRU.png" rel="nofollow">Comparison between frequency response of repeating 1-2-1 filter</a></p>
|
python|scipy|signal-processing
| 0 |
1,907,034 | 52,465,957 |
Different results acquired in Gridsearch tuning
|
<p>The code below tunes the parameters of a bagging algorithm using the grid search method. At each execution of the code I get different results for the best parameters, even though I have set the seed and the random_state of each model (the decision tree and the bagging ensemble). Is there any advice?</p>
<pre><code># Bagged Decision Trees for Classification
import pandas
from sklearn import model_selection
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from random import seed
seed=1
#X=datascaled.iloc[:,0:71]
#Selected_features=['Event','AVK','Beta blockers','proton pump inhibitor','Previous stroke','CYP2C19*17','Clopidogrel active metabolite','Obesity']
Selected_features=['Event time','CYP2C19*17','Clopidogrel active metabolite', 'proton pump inhibitor', 'DOSE BB','Previous stroke', 'Obesity','AVK']
X=datascaled[Selected_features]
Y=datascaled['Cardio1']
# Split the dataset in two equal parts
X_train, X_test, y_train, y_test =model_selection.train_test_split(
X,Y, test_size=0.3, random_state=seed)
param_grid = {
'base_estimator__max_depth' : [1, 2, 3, 4, 5],
'max_samples' : [0.05, 0.1, 0.2, 0.5], 'max_features' : [0.5, 1, 2],
'n_estimators' : [10,20,50, 100, 150, 200], #here you must add 'random_state':[123], 'n_jobs':[-1]
}
clf = GridSearchCV(BaggingClassifier(DecisionTreeClassifier(),
n_estimators = 50, max_features = 0.5),
param_grid,cv=10, scoring = 'accuracy')
clf.fit(X_train, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:\n",clf.best_params_)
prediction=clf.predict(X_test)
#importing the metrics module
from sklearn import metrics
#evaluation(Accuracy)
print("Accuracy:",metrics.accuracy_score(prediction,y_test))
#evaluation(Confusion Metrix)
from sklearn.cross_validation import train_test_split,StratifiedShuffleSplit,cross_val_score
from sklearn import cross_validation
from sklearn.model_selection import StratifiedKFold
from time import *
from sklearn import metrics
n_folds=10
DTC = DecisionTreeClassifier(max_features=2, class_weight = "balanced",max_depth=4 ,random_state=seed)
#model=BaggingClassifier(base_estimator = DTC,random_state = 11, n_estimators= 50)
model=BaggingClassifier(base_estimator = DTC, max_samples= 0.5, n_estimators= 150)
cv = cross_validation.StratifiedKFold(Y, n_folds=n_folds, random_state=42)
t0 = time()
y_pred = cross_validation.cross_val_predict(model, X=X, y=Y, n_jobs=-1, cv=cv)
t = time() - t0
print("=" * 52)
print("time cost: {}".format(t))
print()
print("confusion matrix\n", metrics.confusion_matrix(Y, y_pred))
print()
print("\t\taccuracy: {}".format(metrics.accuracy_score(Y, y_pred)))
print("\t\troc_auc_score: {}".format(metrics.roc_auc_score(Y, y_pred)))
print(metrics.classification_report(Y, y_pred))
</code></pre>
|
<p>This can happen when you have either:</p>
<ol>
<li>Insufficient data/opportunity to arrive at convergence</li>
<li>A model that is largely underfitting, so each run is a variation</li>
</ol>
|
python|python-3.x
| 0 |
1,907,035 | 52,666,646 |
For Loop in print statement is giving generator funtion as output
|
<p>This is my program; I can't understand what is wrong with it.
I want to print the pattern shown in the sample output below.</p>
<pre><code>for i in range(1,int(input())):
print(i for x in list(range(0,i)))
</code></pre>
<p>Sample Input: </p>
<p><code>5</code></p>
<p>Sample output:</p>
<pre><code>1
22
333
4444
</code></pre>
<p>Output Given By the program:</p>
<pre><code><generator object <genexpr> at 0x7feb4598cdb0>
<generator object <genexpr> at 0x7feb4598cdb0>
<generator object <genexpr> at 0x7feb4598cdb0>
<generator object <genexpr> at 0x7feb4598cdb0>
</code></pre>
|
<pre class="lang-or-tag-here prettyprint-override"><code>for i in range(1,int(input())):
print([i for x in range(0,i)])
</code></pre>
<p>Generator expressions in Python are written with <code>()</code>; use square brackets <code>[]</code> to make a list comprehension instead. </p>
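<p>Note that the fixed line prints <code>[2, 2]</code> rather than <code>22</code>; to match the sample output exactly, string repetition is a simple alternative:</p>
<pre><code>for i in range(1, int(input())):
    print(str(i) * i)
</code></pre>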
|
python
| 0 |
1,907,036 | 47,755,312 |
Resize tiff image in Python
|
<p>I tried to resize my image (size > 100 MB) using:</p>
<pre><code>>>> from PIL import Image
>>> image = Image.open(path_to_the_file)
>>> new_image = image.resize((200, 200))
</code></pre>
<p>and received ValueError: tile cannot extend outside image.</p>
<p>The size of original image is</p>
<pre><code>>>> image.size
>>> (4922, 3707)
</code></pre>
<p>The same error I received while doing thumbnails, rotate etc.</p>
<p>What am I doing wrong?</p>
<p>Edit:
Checked image using ImageMagic:</p>
<pre><code>$ identify file.tif
file.tif[0] TIFF 4922x3707 4922x3707+0+0 32-bit Grayscale Gray 31.23MB 0.000u 0:00.009
file.tif[1] TIFF 2461x1854 2461x1854+0+0 32-bit Grayscale Gray 31.23MB 0.000u 0:00.000
filetif[2] TIFF 1231x927 1231x927+0+0 32-bit Grayscale Gray 31.23MB 0.000u 0:00.000
file.tif[3] TIFF 616x464 616x464+0+0 32-bit Grayscale Gray 31.23MB 0.000u 0:00.000
file.tif[4] TIFF 308x232 308x232+0+0 32-bit Grayscale Gray 31.23MB 0.000u 0:00.000
file.tif[5] TIFF 154x116 154x116+0+0 32-bit Grayscale Gray 31.23MB 0.000u 0:00.000
identify: Unknown field with tag 33550 (0x830e) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/881.
identify: Unknown field with tag 33922 (0x8482) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/881.
identify: Unknown field with tag 34735 (0x87af) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/881.
identify: Unknown field with tag 34736 (0x87b0) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/881.
</code></pre>
|
<p>The problem might be here, from the <a href="http://www.effbot.org/imagingbook/image.htm" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>Note that the bilinear and bicubic filters in the current version of PIL are <strong>not well-suited for large downsampling ratios (e.g. when creating thumbnails)</strong>. You should use ANTIALIAS unless speed is much more important than quality.</p>
</blockquote>
<p>In this case, add <code>Image.ANTIALIAS</code> to your call:</p>
<pre><code>from PIL import Image
image = Image.open(path_to_the_file)
new_image = image.resize((200, 200), Image.ANTIALIAS)
</code></pre>
<p>Should now do the trick.</p>
|
python|image|tiff|pillow
| 1 |
1,907,037 | 37,479,304 |
python - matplotlib - segmentation fault with figsize
|
<p>I don't understand why if I uncomment line 3 of the following code</p>
<pre><code>from matplotlib import pyplot
pyplot.clf()
#pyplot.figure(figsize=(400, 200))
pyplot.plot(range(0,len(x)),x,label="x")
pyplot.legend(loc='best', fancybox=True, shadow=True)
pyplot.savefig("/home/user/ooo.png", dpi=600)
</code></pre>
<p>I get a Segmentation fault (core dumped) error. Does anyone know how to fix this?</p>
|
<p>400 inches by 200 inches is too big, so you may want to reduce the figure size.</p>
<p>If you need to scale your image, I recommend to save it in a vector format (see <a href="https://stackoverflow.com/questions/9266150/matplotlib-generating-vector-plot">matplotlib: generating vector plot</a>).</p>
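<p>A minimal sketch, reusing the question's <code>pyplot</code> import (the path is hypothetical):</p>
<pre><code>pyplot.savefig("/home/user/ooo.svg")  # vector formats (.svg, .pdf) scale without a huge figsize
</code></pre>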
|
python|matplotlib
| 2 |
1,907,038 | 37,327,579 |
How to create a table in Python that has headers (text), filled with values (int and float)
|
<p>I am filling a numpy array in Python (could change this to a list if necessary), and I want to fill it with column headings, then enter a loop and fill the table with values. I am struggling with which type to use for the array. I have something like this so far...</p>
<pre><code> info = np.zeros(shape=(no_of_label+1,19),dtype = np.str) #Creates array to store coordinates of particles
info[0,:] = ['Xpos','Ypos','Zpos','NodeNumber','BoundingBoxTopX','BoundingBoxTopY','BoundingBoxTopZ','BoundingBoxBottomX','BoundingBoxBottomY','BoundingBoxBottomZ','BoxVolume','Xdisp','Ydisp','Zdisp','Xrot','Yrot','Zrot','CC','Error']
for i in np.arange(1,no_of_label+1,1):
info[i,:] = [C[0],C[1],C[2],i,int(round(C[0]-b)),int(round(C[1]-b)),int(round(C[2]-b)),int(round(C[0]+b)),int(round(C[1]+b)),int(round(C[2]+b)),volume,0,0,0,0,0,0,0,0] # Fills an array with label.No., size of box, and co-ords
np.savetxt(save_path+Folder+'/Data_'+Folder+'.csv', info, fmt = '%10.5f', delimiter=",")
</code></pre>
<p>There is other things in the loop, but they are irrelevent, C is an array of float, b is int.</p>
<p>I also need to be able to save it as a csv file as shown in the last line, and open it in excel.</p>
<p>What I have now returns all the values as integers, when I need C[0], C[1], C[2] to be floating point.</p>
<p>Thanks in advance!</p>
|
<p>It depends on what you want to do with this array, but I think you want to use 'dtype=object' instead of 'np.str'. You can do that explicitly, by changing 'np.str' to 'object', or here is how I would write the first part of your code:</p>
<pre><code>import numpy as np
labels = ['Xpos','Ypos','Zpos','NodeNumber','BoundingBoxTopX','BoundingBoxTopY',
'BoundingBoxTopZ','BoundingBoxBottomX','BoundingBoxBottomY','BoundingBoxBottomZ',
'BoxVolume','Xdisp','Ydisp','Zdisp','Xrot','Yrot','Zrot','CC','Error']
no_of_label = len(labels)
#make a list of length ((no_of_label+1)*19) and convert it to an array and reshape it
info = np.array([None]*((no_of_label+1)*19)).reshape(no_of_label+1, 19)
info[0] = labels
</code></pre>
<p>Again, there is probably a better way of doing this if you have a specific application in mind, but this should let you store different types of data in the same 2D array. </p>
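<p>For reference, the explicit variant that keeps the original <code>np.zeros</code> call looks like this (the cells start out as integer zeros until you assign into them):</p>
<pre><code>info = np.zeros(shape=(no_of_label + 1, 19), dtype=object)
</code></pre>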
|
python|arrays|string|csv|numpy
| 2 |
1,907,039 | 34,079,680 |
Mean value of multikey dictionary
|
<p>I want to find the mean price of an item in a dictionary that has (item, shop) pairs as keys and the price as value</p>
<p>example dictionary</p>
<pre><code>{('item1', 'shop1'): 40,
('item2', 'shop2'): 14,
('item1', 'shop3'): 55}
</code></pre>
<p>For example, I want to find the mean price of item1. Is it possible with a multikey dictionary, or should I change it? Any ideas?</p>
<p>Thanks</p>
|
<p>You can create a Pandas DataFrame using <code>nested lists</code>. You can then use Pandas <code>groupby</code> to get the <code>mean</code> you're looking for.</p>
<pre><code>import pandas as pd
df = pd.DataFrame([['item1', 'shop1', 40],
['item2', 'shop2', 14],
['item1', 'shop3', 55]], columns=('item', 'shop', 'price'))
df
item shop price
0 item1 shop1 40
1 item2 shop2 14
2 item1 shop3 55
result_mean = df.groupby('item')['price'].mean()
result_mean
item
item1 47.5
item2 14.0
Name: price, dtype: float64
</code></pre>
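<p>If you would rather avoid pandas, a plain-dictionary sketch of the same computation:</p>
<pre><code>from collections import defaultdict

prices = {('item1', 'shop1'): 40,
          ('item2', 'shop2'): 14,
          ('item1', 'shop3'): 55}

totals = defaultdict(list)
for (item, shop), price in prices.items():
    totals[item].append(price)

means = {item: sum(v) / len(v) for item, v in totals.items()}
print(means)  # {'item1': 47.5, 'item2': 14.0}
</code></pre>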
|
python|dictionary|pandas
| 1 |
1,907,040 | 66,051,383 |
Alembic MSSQL Unique Constraint Allow Nulls
|
<h1>Problem to Solve</h1>
<p>Using MSSQL I'd like to have a column that is unique and accepts nulls.</p>
<h1>Issues</h1>
<ol>
<li><p>Adding two rows of data to a column that allows nulls with the unique constraint, as in the implementation below, gives the following error:</p>
<pre><code>Violation of UNIQUE KEY constraint 'UQ_...'. Cannot insert duplicate key in
object 'TABLE'. The duplicate key value is (<NULL>). (2627) (SQLExecDirectW)"
</code></pre>
</li>
<li><p>Downgrading the column causes constraint issues tied to the <code>reference</code> column. The constraint is automatically given a unique name, so it's a pain to remove programmatically.</p>
</li>
</ol>
<h1>Current Implementation</h1>
<p>The alembic operation is:</p>
<pre class="lang-py prettyprint-override"><code>from alembic import op
import sqlalchemy as sa
#...
def upgrade():
op.add_column(
'TABLE', sa.Column('reference', sa.Integer(), nullable=True, unique=True),
)
def downgrade():
op.drop_column('TABLE', 'reference')
</code></pre>
|
<p>The solution to allowing nulls in a unique field is:</p>
<ol>
<li>Don't create the unique constraint when defining the column</li>
</ol>
<pre class="lang-py prettyprint-override"><code>op.add_column(
'TABLE', sa.Column('reference', sa.Integer(), nullable=True), # No unique=True
)
</code></pre>
<ol start="2">
<li>Create a <a href="https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations.create_index" rel="nofollow noreferrer">unique index</a> manually</li>
</ol>
<pre class="lang-py prettyprint-override"><code>op.create_index(
'uq_reference_allow_nulls', table_name='TABLE', columns=['reference'],
mssql_where=sa.text('reference IS NOT NULL'), unique=True,
)
</code></pre>
<ol start="3">
<li>Remove the index when downgrading.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>op.drop_index('uq_reference_allow_nulls', table_name='TABLE')
</code></pre>
<p>This also solves the problem of having a randomized unique constraint on the table because the <code>unique</code> parameter is removed. All together the alembic revision looks like this:</p>
<pre class="lang-py prettyprint-override"><code>from alembic import op
import sqlalchemy as sa
#...
def upgrade():
op.add_column(
'TABLE', sa.Column('reference', sa.Integer(), nullable=True), # Do not include unique here
)
op.create_index(
'uq_reference_allow_nulls', table_name='TABLE', columns=['reference'],
mssql_where=sa.text('reference IS NOT NULL'), unique=True,
)
def downgrade():
op.drop_index('uq_reference_allow_nulls', table_name='TABLE')
op.drop_column('TABLE', 'reference')
</code></pre>
|
sql-server|python-3.x|sqlalchemy|alembic
| 0 |
1,907,041 | 66,089,232 |
subprocess or threading differences between Python 2.7 and 3.8
|
<p>I'm using the following code to execute a subprocess (a Python 3 script). When run using Python 3, the code correctly reads the output of the subprocess. When run using Python 2.7, I get no output. This script is just a test script, I need to actually run the subprocess from a larger Python 2.7 application, so I can't just do it using Python3.</p>
<pre><code># client.py: test client for communicating with the wrapper
from subprocess import Popen, PIPE
from threading import Thread
from time import sleep
def read_it():
print(u"read_it thread running")
while True:
for msg in process.stdout:
print(u"subprocess output: {}".format(msg.rstrip()))
print(u"subprocess starting")
process = Popen(['/usr/bin/python3', './wrapper.py', 'arg1', 'arg2'],
stdin=PIPE, stdout=PIPE, close_fds=True, bufsize=1, universal_newlines=True)
print(u"subprocess running: {}".format(process.pid))
thread = Thread(target=read_it)
thread.daemon = True
thread.start()
sleep(5.0) # wait for initial output from subprocess
</code></pre>
|
<p>It turns out that the file I/O buffering differences between the versions were the issue. But this works for both Python versions:</p>
<pre><code># client.py: test client for communicating with the wrapper
from subprocess import Popen, PIPE
from threading import Thread
from time import sleep
def read_it():
print(u"read_it thread running")
while True:
msg = process.stdout.readline()
print(u"subprocess output: {}".format(msg.rstrip()))
print(u"subprocess starting")
process = Popen(['/usr/bin/python3', './wrapper.py', 'arg1', 'arg2'],
stdin=PIPE, stdout=PIPE, close_fds=True, bufsize=1, universal_newlines=True)
print(u"subprocess running: {}".format(process.pid))
thread = Thread(target=read_it)
thread.daemon = True
thread.start()
sleep(5.0) # wait for initial output from subprocess
</code></pre>
|
python|python-3.x|multithreading|python-2.7|subprocess
| 0 |
1,907,042 | 7,200,252 |
How does urllib.urlopen() work?
|
<p>Let's consider a big file (~100MB). Let's consider that the file is line-based (a text file, with relatively short lines, ~80 chars).
If I use the built-in <code>open()</code>/<code>file()</code>, the file will be loaded in a <a href="https://stackoverflow.com/questions/519633/lazy-method-for-reading-big-file-in-python/519653#519653">lazy manner</a>.
I.e., if I do <code>aFile.readline()</code>, only a chunk of the file will reside in memory. Does urllib.urlopen() do something similar (with usage of a cache on disk)?</p>
<p>How big is the difference in performance between <code>urllib.urlopen().readline()</code> and <code>file().readline()</code>? Let's consider that the file is located on localhost, and that I open it once with <code>urllib.urlopen()</code> and then with <code>file()</code>. How big will the difference in performance/memory consumption be when I loop over the file with <code>readline()</code>?</p>
<p>What is the best way to process a file opened via <code>urllib.urlopen()</code>? Is it faster to process it line by line? Or shall I load a bunch of lines (~50) into a list and then process the list?</p>
|
<p><code>open</code> (or <code>file</code>) and <code>urllib.urlopen</code> look like they're more or less doing the same thing there. <code>urllib.urlopen</code> is (basically) creating a <code>socket._socketobject</code> and then invoking the <code>makefile</code> method (contents of that method included below)</p>
<pre><code>def makefile(self, mode='r', bufsize=-1):
"""makefile([mode[, bufsize]]) -> file object
Return a regular file object corresponding to the socket. The mode
and bufsize arguments are as for the built-in open() function."""
return _fileobject(self._sock, mode, bufsize)
</code></pre>
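<p>As a rough usage sketch (Python 2 era, hypothetical URL), both objects can then be consumed line by line without pulling the whole ~100MB body into memory:</p>
<pre><code>import urllib

resp = urllib.urlopen("http://localhost/bigfile.txt")  # hypothetical URL
for line in resp:   # reads in buffered chunks, not all at once
    pass            # process each ~80-char line here

f = open("bigfile.txt")
for line in f:      # the local file behaves the same way
    pass
</code></pre>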
|
python|performance|file|urllib|urlopen
| 2 |
1,907,043 | 72,688,919 |
faster processing of a for loop with put requests python
|
<p>I have a JSON file that I am reading, processing each element individually because of the API and the payload that it requires.</p>
<p>I am looking for ways to speed this up through multiprocessing / concurrency; I'm not sure of the proper approach. I think either would work, as these are individual requests updating a specific role within the API. The calls can be run concurrently without impacting the role itself.</p>
<p>The function that I currently have iterates through the following:</p>
<p>newapproach.py</p>
<pre><code>import requests
import json
#from multiprocessing import Pool
import urllib3
import cpack_utility
from cpack_utility import classes
#import concurrent.futures
import time
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
def update_role(data):
url, header, verifySSL = mApi.role()
session = requests.Session()
session.headers.update(header)
def update_role_permissions():
start = time.time()
for k,v in permissions_roleprivs.items():
perm_code = v["permissionCode"]
perm_access = v["access"]
payload = json.dumps(
{"permissionCode": perm_code, "access": perm_access}
)
result = session.put(url, verify=verifySSL, headers=header, data=payload)
response = result.json()
logger.debug(response)
end = time.time()
print(f"Time to complete: {round(end - start, 2)}")
update_role_permissions()
def main(file):
global mApi
global logger
logger = cpack_utility.logging.get_logger("role")
mApi = classes.morphRequests_config()
with open(file, 'r') as f:
data = f.read()
data = json.loads(data)
update_role(data)
f.close()
if __name__ == '__main__':
main()
</code></pre>
<p>The length of time right now is around 60 seconds to process all of the required payloads that need to be sent.</p>
<pre><code>logs
2022-06-20 14:39:16,925:88:update_role:update_role_permissions:DEBUG:{'success': True, 'access': 'none'}
2022-06-20 14:39:17,509:88:update_role:update_role_permissions:DEBUG:{'success': True, 'access': 'full'}
2022-06-20 14:39:17,953:88:update_role:update_role_permissions:DEBUG:{'success': True, 'access': 'none'}
2022-06-20 14:39:18,449:88:update_role:update_role_permissions:DEBUG:{'success': True, 'access': 'full'}
2022-06-20 14:39:19,061:88:update_role:update_role_permissions:DEBUG:{'success': True, 'access': 'none'}
2022-06-20 14:39:19,493:88:update_role:update_role_permissions:DEBUG:{'success': True, 'access': 'none'}
2022-06-20 14:39:19,899:88:update_role:update_role_permissions:DEBUG:{'success': True, 'access': 'none'}
Time to complete: 63.22
</code></pre>
<p>The json file that gets read in contains a number of api calls that are needed updating.</p>
<p>data.json</p>
<pre><code>{
"rolePermissions":{
"roleprivs": {
"admin-appliance": {
"permissionCode": "admin-appliance",
"access": "none"
},
"admin-backupSettings": {
"permissionCode": "admin-backupSettings",
"access": "none"
}
}
}
}
</code></pre>
<p>The old version that I was testing was something like the following, using yaml, which was kind of a nightmare to manage.<br />
oldversion.py</p>
<pre><code>def background(f):
def wrapped(*args, **kwargs):
return asyncio.get_event_loop().run_in_executor(None, f, *args, **kwargs)
return wrapped
@background
def role_update_post(strRoleID, access, code):
url, header, verifySSL = mApi.role()
session.headers.update(header)
url = f'{url}/{strRoleID}{mApi.updateRolePermissions()}'
payload = classes.cl_payload.pl_permissionsRole(code, access)
result = session.put(url, verify=verifySSL, headers=header, data=payload)
response = result.json()
if response["success"] == False:
logger.debug("Error updating permission. Enable Debugging")
logger.debug(f"Result: {response}")
logger.debug(f"Access: {access}")
logger.debug(f"Code: {code}")
elif response["success"] == True:
logger.debug(f"Permission updated: {code}")
</code></pre>
<p>However, this would complete the script but push the role updates to the background; the script would finish and then stall at the end waiting for the background tasks to complete. It still took the same amount of time, just not as noticeably.</p>
<p>Ideally, I think multiprocessing is the route I want to take, but I'm still not quite grasping how to turn that into a proper for loop and multiprocess it.</p>
<p>OR I am just crazy and there is a much better way to do it all, and I am currently using an anti-pattern.</p>
<p>UPDATED:
this concurrent config actually processes properly; however, it's still at the same speed as the other.</p>
<pre><code> def testing2():
def post_req(payload):
result = session.put(url, verify=verifySSL, headers=header, data=payload)
response = result.json()
logger.debug(response)
logger.debug('post_req')
return result
start = time.time()
futures = []
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
for k,v in permissions_roleprivs.items():
perm_code = v["permissionCode"]
perm_access = v["access"]
payload = json.dumps(
{"permissionCode": perm_code, "access": perm_access}
)
futures.append(executor.submit(post_req,payload)) #for k,v in permissions_roleprivs.items()
for future in futures:
future.result()
end = time.time()
logger.debug('intesting 2')
print(f"Time to complete: {round(end - start, 2)}")
</code></pre>
|
<p>concurrent.futures is the ideal sweetness needed to process this. After a little more testing, a process that used to take 60 to 80 seconds (depending on the server I was hitting) now takes 10 seconds.</p>
<pre><code> def testing2():
def post_req(payload):
result = session.put(url, verify=verifySSL, headers=header, data=payload)
response = result.json()
logger.debug(response)
return result
start = time.time()
futures = []
with concurrent.futures.ThreadPoolExecutor() as executor:
for k,v in permissions_roleprivs.items():
perm_code = v["permissionCode"]
perm_access = v["access"]
payload = json.dumps(
{"permissionCode": perm_code, "access": perm_access}
)
futures.append(executor.submit(post_req,payload)) #for k,v in permissions_roleprivs.items()
for future in concurrent.futures.as_completed(futures):
future.result()
end = time.time()
logger.debug('intesting 2')
print(f"Time to complete: {round(end - start, 2)}")
</code></pre>
<p>One of the key mistakes I found in my previous attempts was here:</p>
<pre><code>for future in concurrent.futures.as_completed(futures):
future.result()
</code></pre>
<p>I didn't have this line of code set up properly; in my initial tests it didn't exist. Even when I first got it working, I was still seeing 60 seconds.</p>
<p>The next problem was that it sat inside the for loop over roleprivs.items(); pulling it out of the initial for loop made processing much faster.</p>
|
python|python-3.x|multiprocessing|concurrent.futures
| 0 |
1,907,044 | 39,829,712 |
Attain a tally of a column of a 2d array
|
<p>I have a 2d array, data, and would like to attain a tally of every time entry j of a row is 1,
where i = rows and j = columns.
How do I go about doing this without a for loop? </p>
<p>Conceptually something like this:</p>
<pre><code>for r in range(row):
    if data[r][j] == 1:
        amount += 1
</code></pre>
|
<p>You can do as follow:</p>
<pre><code>import numpy as np
a = np.array([[0, 1], [1, 1]])
j = 1
np.sum(a[:, j] == 1)
</code></pre>
<p>will give you 2 as a result, while <code>np.sum(a[:, 0] == 1)</code> will give 1.</p>
<p>If as mentioned in your comment you want to use a condition on multiple arrays, you can use <code>np.logical_and(condition1, condition2)</code>:</p>
<pre><code>np.sum(np.logical_and(a[:, 0] == 1, b[:, 0] == 2))
</code></pre>
|
python|arrays|numpy|multidimensional-array|scipy
| 1 |
1,907,045 | 39,622,014 |
Problems with pickling data
|
<pre><code>import random
import pickle, shelve
import os
#import RPi.GPIO as GPIO | Raspberry pi only
import tkinter
import sys
import time
class Operator(object):
global list_name
def __init__(self):
print("Welcome to Python OS 1.0")
print("type 'help' to access help...") # ADD CODE OS.REMOVE("FILE")
def CheckDetails(self):
if not os.path.isfile( 'details.dat' ) :
data=[0]
data[0] = input('Enter Your Name: ' )
file= open( 'details.dat' , 'wb' )
pickle.dump( data , file )
file.close()
else :
File = open( 'details.dat' , 'rb' )
data = pickle.load( File )
file.close()
user = ""
while user != data[0]:
input("please enter your username...")
print( 'Welcome Back To Python OS, '+ data[0])
def Help(self):
print("""
write(sentence) - Prints the typed sentence on the screen
open(file, mode) - Opens the file and mode such as 'r'
create(listName) - creates the list, listName
add(data, listName) - adds the data to listName
remove(data, listName) - removes the selected data from listName
""")
def write(self, sentence):
print(sentence)
@classmethod
def create(self):
list_name = input("Please enter the list name...")
vars()[list_name] = []
time.sleep(1)
print("List (" + list_name + ") created")
def add(self):
data = input("Please specify the data to be added...")
list_name += data
def remove(self, data, list_name):
remove_data = input("Plese specify the data to be removed...")
list_name -= data
def main():
os = Operator()
os.CheckDetails()
ans = ""
ans = ans.lower()
while ans != "quit":
ans = input()
if ans == "write":
os.write()
elif ans == "help":
os.Help()
elif ans == "create":
os.create()
elif ans == "add":
os.add()
elif ans == "remove":
os.remove()
elif ans == "quit":
break
else:
print("Sorry, that command does not exist or it will be added into a future update...")
print("goodbye...")
main()
</code></pre>
<p>I am trying to make some sort of simplified Python OS, but I'm hitting errors only in the <code>CheckDetails()</code> function. I'm pickling data (which is fine) but getting errors when checking that the user's username is correct. I've tested it, and even though I have typed in the correct username, it carries on asking for my username. Can anyone please help? </p>
|
<p>You have a while loop that will execute forever because you are not assigning your <code>user</code> variable to anything:</p>
<pre><code>while user != data[0]:
    user = input("please enter your username...")
print( 'Welcome Back To Python OS, '+ data[0])
</code></pre>
|
python|python-3.x|pickle
| 0 |
1,907,046 | 38,927,794 |
Python dictionary vs list, which is faster?
|
<p>I was coding a Euler problem, and I ran into a question that sparked my curiosity. I have two snippets of code. One uses lists, the other uses dictionaries. </p>
<p><strong>using lists</strong>:</p>
<pre><code>n=100000
num=[]
suma=0
for i in range(n,1,-1):
tmp=tuple(set([n for n in factors(i)]))
if len(tmp) != 2: continue
if tmp not in num:
num.append(tmp)
suma+=i
</code></pre>
<p><strong>using dictionaries</strong>:</p>
<pre><code>n=100000
num={}
suma=0
for i in range(n,1,-1):
tmp=tuple(set([n for n in factors(i)]))
if len(tmp) != 2: continue
if tmp not in num:
num[tmp]=i
suma+=i
</code></pre>
<p>I am only concerned about performance. Why does the second example using dictionaries run incredibly fast, faster than the first example with lists? The example with dictionaries runs almost thirty-fold faster!</p>
<p>I tested these two snippets using n=1000000; the first ran in 1032 seconds and the second in just 3.3 seconds... amazing!</p>
|
<p>In Python, the average time complexity of a dictionary key lookup is O(1), since they are implemented as hash tables. The time complexity of lookup in a list is O(n) on average. In your code, this makes a difference in the line <code>if tmp not in num:</code>, since in the list case, Python needs to search through the whole list to detect membership, whereas in the dict case it does not except for the absolute worst case.</p>
<p>For more details, check out <a href="https://wiki.python.org/moin/TimeComplexity" rel="noreferrer">TimeComplexity</a>.</p>
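<p>A quick illustration with <code>timeit</code> (absolute numbers will vary by machine):</p>
<pre><code>import timeit

setup = "items = list(range(100000)); d = dict.fromkeys(items)"
print(timeit.timeit("99999 in items", setup=setup, number=100))  # list: O(n) linear scan
print(timeit.timeit("99999 in d", setup=setup, number=100))      # dict: O(1) hash lookup
</code></pre>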
|
python|list|performance|dictionary|time-complexity
| 39 |
1,907,047 | 40,554,514 |
wtforms-alchemy - what about pre-populating the form with relationship data?
|
<p>so...</p>
<p>This:</p>
<pre><code>model_dictionary = model.as_dict() # some kind of dictionary
form = TheModelsForm(**model_dictionary) # https://wtforms-alchemy.readthedocs.io/en/latest/advanced.html
form.populate_obj(model)
return form
</code></pre>
<p>This lets me pre-populate the model's form with fields from the model that are db.Columns... Well, what about db.relationships on the model? </p>
<p>The above code is not pre-populating the relationship fields; they show up but are left blank. Only the db.Column fields are pre-populated with values, regardless of whether I force-include the relationship values in the model_dictionary or not (for example, model_dictionary['key_from_other_object'] = associated value). </p>
<p>Can this be accomplished? --Can't find anything on this..</p>
<p><strong>Edit:</strong>
I solved my initial problem with:</p>
<pre><code>form = TheModelsForm(obj=model)
form.populate_obj(model)
return form
</code></pre>
<p>Now I'm trying to figure out how to get the events on the location's model to actually show up on the Location form...</p>
<pre><code>class Event(Base):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String())
location_id = db.Column(db.Integer, db.ForeignKey('location.id'))
class Location(Base):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String())
events = db.relationship('Event', backref=db.backref('events'))
</code></pre>
<p>I'm sure there is a logical reason why all the stuff I commented out isn't working.. I'm getting 'NoneType' object is not iterable</p>
<pre><code>class EventForm(ModelForm):
class Meta:
model = models.Event
location_id = fields.SelectField(u'Location', coerce=int)
class LocationForm(ModelForm):
class Meta:
model = models.Location
#events = fields.SelectField(u'Event', choices=[(g.id, g.selector) for g in models.Event.query.all()], coerce=int)
# events = fields.SelectMultipleField('Event',
# choices=[(g.id, g.selector) for g in models.Event.query.all()],
# coerce=int,
# option_widget=widgets.CheckboxInput(),
# widget=widgets.ListWidget(prefix_label=False)
# )
#events = ModelFieldList(fields.FormField(EventForm))
</code></pre>
|
<p>I was barking up the wrong tree using SelectMultipleField and SelectField..</p>
<p>Instead,</p>
<p><code>wtforms_alchemy.fields.QuerySelectMultipleField</code> @ <a href="https://media.readthedocs.org/pdf/wtforms-alchemy/latest/wtforms-alchemy.pdf" rel="nofollow noreferrer">https://media.readthedocs.org/pdf/wtforms-alchemy/latest/wtforms-alchemy.pdf</a> </p>
<p>and </p>
<p>this post, which helped to show usage, @ <a href="https://stackoverflow.com/questions/17499369/wtforms-queryselectmultiplefield-not-sending-list">WTForms QuerySelectMultipleField Not sending List</a></p>
<p>solved my problem..</p>
<p>..ended up with</p>
<pre><code>import wtforms_alchemy   # assumed imports, matching the usage below
from wtforms import widgets

class LocationForm(ModelForm):
class Meta:
model = models.Location
events = wtforms_alchemy.fields.QuerySelectMultipleField('Event',
query_factory=lambda: models.Event.query.order_by(models.Event.name).all(),
get_label= lambda x: x.name,
option_widget=widgets.CheckboxInput(),
widget=widgets.ListWidget(prefix_label=False)
)
</code></pre>
|
python|flask-sqlalchemy|wtforms|flask-wtforms
| 2 |
1,907,048 | 9,867,959 |
Python Subversion bindings which package nicely with `pip`?
|
<p>Are there any usable/documented Python bindings for Subversion that package nicely using <code>pip</code>? </p>
<p>I'm primarily concerned with adding the bindings to a virtual environment. My goal is to be able to do something like <code>pip install <pkg></code>.</p>
<p>Packages I've tried:</p>
<ul>
<li><code>pysvn</code></li>
<li><code>svn</code></li>
<li><code>subvertpy</code></li>
</ul>
<p>Of these, <code>subvertpy</code> is the only one that is on PyPI and installs cleanly. Unfortunately, the documentation/usability of this package is <em>terrible</em>.</p>
|
<p>I think the problem is that neither of the packages you mention is on PyPI, so pip cannot find them. There is a package called subvertpy which is on PyPI and so can be installed easily with pip; the details of the package are here:</p>
<p><a href="http://pypi.python.org/pypi/subvertpy/0.8.10" rel="nofollow">http://pypi.python.org/pypi/subvertpy/0.8.10</a></p>
<p><a href="https://launchpad.net/subvertpy" rel="nofollow">https://launchpad.net/subvertpy</a></p>
<p>However it does have some prerequisites that you must install first (SVN developer packages) so it may not suit you if you need a completely atomic pip install. Then again if you already have those libraries installed or you are willing to install the prerequisites once because you plan to use pip to install subvertpy into several virtual envs then it may be worth looking at. I haven't used subvertpy so I can't say how it compares to the other packages but given your requirement for pip install it might suit you.</p>
|
python|svn|pip
| 1 |
1,907,049 | 68,249,402 |
Python - Loop through files and store occurrences by file
|
<p>I have a python script that I need to adapt. I have to open 4 different files and generate a final file that has the number of occurrences of a "word" for each file, for example:</p>
<p>Format: file - occurrences in file 1, occurrences in file 2, occurrences in file 3, occurrences in file 4</p>
<p>apple, 3,4,1,5
pineapple, 7,4,1,3</p>
<p>I want to do it without using external libraries. Initially, I had the idea of storing the values in lists and then adding them to the file, but I don't see it performing well. Currently, the code for a single file looks like this:</p>
<pre><code>def data():
list_words = []
with open("story.txt") as words:
for line in words:
word = line.split()
for i in range(len(word) + 1):
if i not in list_words:
list_words.append(i)
else:
list_words[i] += 1
</code></pre>
<p>I did not find much relevant help in other posts, any ideas will be welcome, thanks!</p>
<p>Edit:
Example of the files that I must open and count words in:</p>
<p><a href="http://textfiles.com/stories/3lpigs.txt" rel="nofollow noreferrer">http://textfiles.com/stories/3lpigs.txt</a>
<a href="http://textfiles.com/stories/adler.txt" rel="nofollow noreferrer">http://textfiles.com/stories/adler.txt</a></p>
<p>They are txt files with stories</p>
|
<p>Here is the simplest word counter that uses the fact that dictionary keys are unique.</p>
<pre class="lang-py prettyprint-override"><code>import tkinter
from tkinter import filedialog
master = tkinter.Tk()
master.withdraw()  # hide the root window; only the file dialog is needed
def data( pathfilename ):
word = dict()
with open( pathfilename, mode='rt' ) as words:
text = words.read()
for line in text.split('\n'):
for k in line.split():
if k in [
'', chr(10), chr(13), chr(9), '.', ',', '!', '?', ':', ';',
'@', '#', '$', '%', '^', '&', '*', '(', ')', '-', '_', '\\',
'|', '<', '>', '/', '"', "'", chr(96), '~', '[', ']', '{', '}' ]:
pass
elif k in word:
word[ k ] += 1
else:
word[ k ] = 1
return word
fdir = filedialog.askopenfilename( title = 'Pick a txt file' )
if fdir:
result = data( fdir )
print( len( list( result.keys() ) ) )
print()
for k,v in result.items():
print( f'{k} = {v}' )
master.destroy()
</code></pre>
<p>It returns a dictionary of words and counts.</p>
|
python-3.x
| 0 |
1,907,050 | 25,942,278 |
Using Django for a website without separating static files
|
<p>I want to move my website, which has a lot of images and css files linked to it, to django. It also uses the Application Cache for caching the static files. Since I have other apps working on django, I want to move this static one to django as well. So is it possible to serve a page without rendering the static files dynamically, using it as a static webpage only (static file paths relative to the html, not using django's static folder)? How can I do this? </p>
|
<p>Assuming the HTML is also static, you should just move everything (HTML and relative files) to a static folder (no need to separate the HTML template since it is static as well), and then you can map it to any URL you want using your web server, e.g. you can put them inside <code>{{ STATIC_ROOT }}/my-page/</code>, and map <code>example.com/my-page/</code> to that folder on the filesystem</p>
<p>Run <code>collectstatic</code>, Django will copy/generate the static files into your <code>STATIC_ROOT</code> folder on the filesystem <a href="https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#collectstatic" rel="nofollow">https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#collectstatic</a></p>
<p>Then use a web server like Apache or Nginx to serve your <code>/my-page</code> URL directly without hitting your Django app. (set in Django with <code>STATIC_URL</code>), while the other requests are forwarded to your Django app</p>
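<p>For reference, a minimal sketch of the relevant settings (the paths are placeholders):</p>
<pre><code># settings.py (sketch; adjust paths to your deployment)
STATIC_URL = '/static/'
STATIC_ROOT = '/var/www/example.com/static/'
</code></pre>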
<p>So e.g. your Django app will run on 127.0.0.1:8000, while nginx runs on the default HTTP/HTTPS port, and uses e.g. <code>proxy_pass</code> to talk to your Django app for the dynamic content </p>
<p><a href="http://wiki.nginx.org/HttpProxyModule" rel="nofollow">http://wiki.nginx.org/HttpProxyModule</a></p>
|
python|django
| 1 |
1,907,051 | 1,593,576 |
String replacing in a file by given position
|
<p>I have a file opened in 'ab+' mode.</p>
<p>What I need to do is replacing some bytes in the file with another string's bytes such that:</p>
<p>FILE:</p>
<pre><code>thisissomethingasperfectlygood.
</code></pre>
<p>string:</p>
<pre><code>01234
</code></pre>
<p>So, for example, I seek to position 4 (i.e. <code>seek(4, 0)</code>) and I want to write 01234 in place of "issom" in the file. The final content would be:</p>
<p><code>this01234ethingasperfectlygood</code>.</p>
<p>There are some solutions on the net, but all of them (at least the ones I could find) are based on "first find a string in the file, then replace it with another one". Because my case is based on seeking, I am unsure how to apply them.</p>
|
<p>You could mmap() your file and then use slice notation to update specific byte ranges in the file. The example <a href="http://docs.python.org/library/mmap.html" rel="nofollow noreferrer">here</a> should help.</p>
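<p>A minimal sketch of that approach (assuming the file is long enough, since mmap slice assignment cannot change the file length):</p>
<pre><code>import mmap

with open('data.txt', 'r+b') as f:      # file name is a placeholder
    mm = mmap.mmap(f.fileno(), 0)
    mm[4:9] = b'01234'                  # overwrite 5 bytes starting at offset 4
    mm.flush()
    mm.close()
</code></pre>
<p>Note that the replacement must be exactly as long as the slice it overwrites, which matches your use case of writing bytes in place.</p>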
|
python|string|replace|seek
| 2 |
1,907,052 | 2,287,966 |
Trouble using PIL with Google Appengine SDK
|
<p>I have a Windows Server 2008 R2 (64bits) machine on which I wanted to develop a google AppEngine app.</p>
<p>So I installed Python 2.5.4 from python.org (because the Google SDK said I needed 2.5 and 2.5.6 didn't have any MSI's)
Then I installed PIL from <a href="http://www.pythonware.com/products/pil/" rel="nofollow noreferrer">http://www.pythonware.com/products/pil/</a> I used version 1.1.7 for python 2.5
I used the 32-bits versions of both of these.</p>
<p>Then I installed the AppEngine SDK.</p>
<p>Hello-World worked fine, but I wanted to manipulate an image, which didn't work because I get this stacktrace and a HTTP 500 response:</p>
<pre><code>2010-02-18 11:50:27 Running command: "['C:\\Python25\\pythonw.exe', 'C:\\Program Files
(x86)\\Google\\google_appengine\\dev_appserver.py', '--admin_console_server=', '--port=8080', 'd:\\imgsvc']"
WARNING 2010-02-18 10:50:29,260 datastore_file_stub.py:623] Could not read datastore data from c:\users\admini~1\appdata\local\temp\dev_appserver.datastore
INFO 2010-02-18 10:50:29,627 dev_appserver_main.py:399] Running application imgsvc on port 8080: http://localhost:8080
ERROR 2010-02-18 10:50:40,058 dev_appserver.py:3217] Exception encountered handling request
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3180, in _HandleRequest
self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3123, in _Dispatch
base_env_dict=env_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 515, in Dispatch
base_env_dict=base_env_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2382, in Dispatch
self._module_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2292, in ExecuteCGI
reset_modules = exec_script(handler_path, cgi_path, hook)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2188, in ExecuteOrImportScript
exec module_code in script_module.__dict__
File "d:\imgsvc\imgsvc.py", line 7, in <module>
outputimage = images.resize(inputimage.content, 32, 32)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\images\__init__.py", line 625, in resize
return image.execute_transforms(output_encoding=output_encoding)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\images\__init__.py", line 513, in execute_transforms
response)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 78, in MakeSyncCall
return apiproxy.MakeSyncCall(service, call, request, response)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 278, in MakeSyncCall
rpc.CheckSuccess()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_rpc.py", line 149, in _WaitImpl
self.request, self.response)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub.py", line 80, in MakeSyncCall
method(request, response)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\images\images_stub.py", line 171, in _Dynamic_Transform
response_value = self._EncodeImage(new_image, request.output())
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\images\images_stub.py", line 193, in _EncodeImage
image.save(image_string, image_encoding)
File "C:\Python25\lib\site-packages\PIL\Image.py", line 1439, in save
save_handler(self, fp, filename)
File "C:\Python25\lib\site-packages\PIL\PngImagePlugin.py", line 564, in _save
import ICCProfile
SystemError: Parent module 'PIL' not loaded
INFO 2010-02-18 10:50:40,081 dev_appserver.py:3246] "GET / HTTP/1.1" 500 -
</code></pre>
<p>The python script I was trying to run:</p>
<pre><code>from google.appengine.api import urlfetch
from google.appengine.api import images
url = "http://www.brokenwire.net/bw/images/113.png"
inputimage = urlfetch.fetch(url)
if inputimage.status_code == 200:
outputimage = images.resize(inputimage.content, 32, 32)
self.response.headers['Content-Type'] = "image/png"
self.response.out.write(outputimage)
</code></pre>
<p>Anybody any idea what is going wrong here?</p>
<p>I also tried this standalone python script, which works fine:</p>
<pre><code>import Image
im = Image.open('filename.png')
im2 = im.resize((100,100), Image.ANTIALIAS)
im2.show()
</code></pre>
<p>It seems that it makes a difference which image I use:</p>
<pre><code>url = "http://www.r-stone.net/blogs/ishikawa/uploaded_images/google_appengine-779483.png"
</code></pre>
<p>Gives the stacktrace of the question, but</p>
<pre><code>url = "http://www.brokenwire.net/bw/images/113.png"
</code></pre>
<p>works without a problem.</p>
|
<p>This happens when Python can't find the ICCProfile module.
Apparently, when running through GAE, the importer throws a SystemError instead of an ImportError, and the function fails.
What I did was to change line 567 in ...\Python25\Lib\site-packages\PIL\PngImagePlugin.py
from</p>
<p><code>except ImportError:</code></p>
<p>to:</p>
<p><code>except Exception:</code></p>
|
python|google-app-engine|windows-server-2008|python-imaging-library
| 2 |
1,907,053 | 1,719,776 |
Euler #26, how to convert rational number to string with better precision?
|
<p>I want to get <code>1/7</code> with better precision, but it got truncated. How can I get better precision when I convert a rational number?</p>
<pre><code>>>> str(1.0/7)[:50]
'0.142857142857'
</code></pre>
|
<p>Python has a built-in library for arbitrary-precision calculations: Decimal. For example:</p>
<pre><code>>>>from decimal import Decimal, getcontext
>>>getcontext().prec = 50
>>>x = Decimal(1)/Decimal(7)
>>>x
Decimal('0.14285714285714285714285714285714285714285714285714')
>>>str(x)
'0.14285714285714285714285714285714285714285714285714'
</code></pre>
<p>Look at the <a href="http://docs.python.org/library/decimal.html" rel="noreferrer">Python Decimal documentation</a> for more details. You can change the precision to be as high as you need.</p>
|
python|floating-point|floating-point-precision
| 9 |
1,907,054 | 1,528,691 |
Idiomatic way to do list/dict in Cython?
|
<p>My problem: I've found that processing large data sets with raw C++ using the STL map and vector can often be considerably faster (and with lower memory footprint) than using Cython. </p>
<p>I figure that part of this speed penalty is due to using Python lists and dicts, and that there might be some tricks to use less encumbered data structures in Cython. For example, this page (<a href="http://web.archive.org/web/20130824073620/http://wiki.cython.org/tutorials/numpy" rel="nofollow noreferrer">http://wiki.cython.org/tutorials/numpy</a>) shows how to make numpy arrays very fast in Cython by predefining the size and types of the ND array. </p>
<p>Question: Is there any way to do something similar with lists/dicts, e.g. by stating roughly how many elements or (key,value) pairs you expect to have in them? <strong>That is, is there an idiomatic way to convert lists/dicts to (fast) data structures in Cython?</strong> </p>
<p>If not I guess I'll just have to write it in C++ and wrap in a Cython import. </p>
|
<p>Cython now has template support, and comes with declarations for some of the STL containers.</p>
<p>See <a href="http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#standard-library" rel="noreferrer">http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#standard-library</a></p>
<p>Here's the example they give:</p>
<pre><code>from libcpp.vector cimport vector
cdef vector[int] vect
cdef int i
for i in range(10):
vect.push_back(i)
for i in range(10):
print vect[i]
</code></pre>
|
c++|python|c|cython
| 37 |
1,907,055 | 63,058,673 |
Can't convert to different data types
|
<p>I am trying to read a .txt file with raw data into my code, but I keep getting a ValueError. It was working before, but now I am getting this error:</p>
<p>ValueError: could not convert string to float: '.'</p>
<p>Here is my file with the raw data:</p>
<pre><code>0.0980224609375
0.10589599609375
0.0980224609375
0.0980224609375
0.0980224609375
0.11767578125
0.130.0980224609375 --> The error is here I assume since there are 2 periods
0.10198974609375
0.10198974609375
0.0980224609375
</code></pre>
<p>This data cannot be changed, so how can I convert it from a string to a float without getting an error? Here is my code:</p>
<pre><code># Read and pre-process input images
n, c, h, w = net.inputs[input_blob].shape
images = np.ndarray(shape=(n, c, h, w))
for i in range(n):
image = cv2.imread(args.input[i])
if image.shape[:-1] != (h, w):
log.warning("Image {} is resized from {} to {}".format(args.input[i], image.shape[:-1], (h, w)))
image = cv2.resize(image, (w, h))
# Swapping Red and Blue channels
#image[:, :, [0, 2]] = image[:, :, [2, 0]]
# Change data layout from HWC to CHW
image = image.transpose((2, 0, 1))
images[i] = image
eoim = image
eoim16 = eoim.astype(np.float16)
# divide by 255 to get value in range 0->1 if necessary (depends on input pixel format)
if(eoim16.max()>1.0):
eoim16 = np.divide(eoim16,255)
print(eoim16)
val = []
preprocessed_image_path = 'C:/Users/Owner/Desktop/Ubotica/IOD/cloud_detect/'
formated_image_file = "output_patch_fp"
f = open(preprocessed_image_path + "/" + formated_image_file + ".txt", 'r')
'''elem_counter = 0
for elem in eoim16:
for elem1 in elem:
for col in elem1:
#f.read(int(float(formated_image_file)))
val = float(f.readline())'''
for y in f.readlines()[0]:
val.append(float(y))
f.close()
#print(val)
#val = np.reshape(val, (3,512,512))
val = np.ndarray(shape=(c, h, w))
#res = val
# calling the instance method using the object cloudDetector
res = cloudDetector.infer(val)
res = res[out_blob]
</code></pre>
<p>Any help will be much appreciated!</p>
|
<p>You've correctly identified what's going wrong. <code>0.130.0980224609375</code> confuses Python, and would confuse most humans as well. Does it mean 0.13009...? Does it mean 0.130? Is it 2 decimal numbers? Is it an ip address? Python doesn't do much thinking, just shrugs its shoulders and quits. This code will assume you mean one decimal.</p>
<pre><code>def clean(s):
    # keep the first '.' and drop any later ones
    while s.count(".") > 1:
        i = s.rindex(".")        # position of the last '.'
        s = s[:i] + s[i+1:]      # remove it
    return s

assert clean("0.130.0980224609375") == "0.1300980224609375"
</code></pre>
|
python|valueerror|readlines
| 0 |
1,907,056 | 32,502,153 |
pandas - boxplot median color settings issues
|
<p>I'm running Pandas 0.16.2 and Matplotlib 1.4.3. I have this issue coloring the median of the boxplot generated by the following code:</p>
<pre><code>df = pd.DataFrame(np.random.rand(10, 5), columns=['A', 'B', 'C', 'D', 'E'])
fig, ax = plt.subplots()
medianprops = dict(linestyle='-', linewidth=2, color='blue')
bp = df.boxplot(medianprops=medianprops)
plt.show()
</code></pre>
<p>That returns:</p>
<p><a href="https://i.stack.imgur.com/536ZA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/536ZA.png" alt="enter image description here"></a></p>
<p>It appears that the <code>color</code> setting is not read. Changing only the settings of linestyle and linewidth the plot reacts correctly.</p>
<pre><code>medianprops = dict(linestyle='-.', linewidth=5, color='blue')
</code></pre>
<p><a href="https://i.stack.imgur.com/mIndn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mIndn.png" alt="enter image description here"></a></p>
<p>Anyone can reproduce it?</p>
|
<p>Looking at the code for <code>DataFrame.boxplot()</code> there is some special code to handle the colors of the different elements that supersedes the <code>kws</code> passed to matplotlib's <code>boxplot</code>. In theory, there seem to be a way to pass a <code>color=</code> argument containing a dictionary with keys being <code>'boxes', 'whiskers', 'medians', 'caps'</code> but I can't seem to get it to work when calling <code>boxplot()</code> directly.</p>
<p>However, this seem to work:</p>
<pre><code>df.plot(kind='box', color={'medians': 'blue'},
medianprops={'linestyle': '--', 'linewidth': 5})
</code></pre>
<p>see <a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html#box-plots" rel="noreferrer" title="Pandas Boxplot Examples">Pandas Boxplot Examples</a></p>
|
python|pandas|matplotlib
| 8 |
1,907,057 | 28,355,385 |
ImportError: No module named 'Crypto'
|
<p>I am working with pycrypto. It works fine on my local windows machine, but when I move it to my python box I get an error with importing the module: </p>
<pre><code>from Crypto.Cipher import ARC4
ImportError: No module named 'Crypto'
</code></pre>
<p>The output of <code>python3.3 -c "from Crypto.Cipher import ARC4"</code></p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named 'Crypto'
</code></pre>
<p>The output of <code>pip3 list</code> includes pycrypto (2.6.1).</p>
<p>I know it works with Python 2.7.6, but I wrote the script in 3.3 so it depends on some things from 3.3</p>
|
<p>As I already wrote in <a href="https://stackoverflow.com/a/58077358/3459910">this answer</a>:</p>
<h1>WARNING: Don't use <code>pycrypto</code> anymore!</h1>
<p>Use <code>pycryptodome</code> instead, via <code>pip3 install pycryptodome</code>.</p>
<p>But make sure that you don't have <code>pycrypto</code> installed, because both packages install under the same folder <code>Crypto</code>.</p>
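<p>pycryptodome is designed as a drop-in replacement, so the original import keeps working. A quick sanity check (the key is a placeholder):</p>
<pre><code>from Crypto.Cipher import ARC4

cipher = ARC4.new(b'0123456789abcdef')   # 16-byte placeholder key
ciphertext = cipher.encrypt(b'hello')
print(ciphertext)
</code></pre>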
|
python|importerror|pycrypto
| 11 |
1,907,058 | 44,012,731 |
os.environ.get("key") returns none; hardcoding "key" works
|
<p>Building a python 3 web app using flask which includes google maps.</p>
<p>Checking for API Key before loading index.html always raises RuntimeError:</p>
<pre><code>if not os.environ.get("key"):
raise RuntimeError("key not set")
return render_template("index.html", key=os.environ.get("key"))
</code></pre>
<p>Also tried <code>os.getenv</code> - the same problem occurs. Changing variable name does not solve the issue either.</p>
<p>I exported the variable to the environment via <code>export key=value</code>, and <code>printenv</code> returns the correct value of <code>key</code>.</p>
<p>Hardcoding the API Key works and returns the map successfully:</p>
<pre><code>return render_template("index.html", key=value)
</code></pre>
<p>Any ideas how to solve this?</p>
|
<p>SOLVED: make sure to run the <code>export var</code> command in the same terminal window as <code>flask run</code>.</p>
<p>ALTERNATIVE: create <code>websiteconfig.py</code> file with <code>key="value"</code> and include <code>import websiteconfig</code> in your application. source: <a href="http://flask.pocoo.org/snippets/2" rel="nofollow noreferrer">link</a></p>
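<p>A minimal sketch of the alternative (the file contents and key value are placeholders):</p>
<pre><code># websiteconfig.py
key = "your-api-key-here"

# application.py
import websiteconfig

@app.route("/")
def index():
    return render_template("index.html", key=websiteconfig.key)
</code></pre>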
|
python|flask
| 1 |
1,907,059 | 54,587,423 |
How to encode an Excel File to base64
|
<p>I need to use SendGrid to send emails from my application. I've made the transactional templates in sendgrid and now need to attach excel to the emails that go out. </p>
<p>According to their Web API V3 documentation, the attachment can be added by encoding the file in Base64. I've tried searching for solutions to encode the file in Base64, but can't find a solution to it. From what I understand of the base64 package, it can only encode bytes-like objects. </p>
<p>So, do I need to read the excel file into a byte-like object before I can encode it? Is there a package that will do this for me magically?</p>
<p>My application currently generates the excel file by using the pandas to_excel() method. </p>
<p>Thanks in advance :)</p>
<p>[Update 1]
I've checked this question to see if mine could be a duplicate; however, that question returns the file as a string, but to do the base64 encoding I need a bytes-like object to encode my Excel file.</p>
<p>I did try the solution provided in that question, but it did not work because the final return value was a string.</p>
|
<p>To encode an Excel file to base64, try this</p>
<pre class="lang-py prettyprint-override"><code>import base64
data = open(excel_path, 'rb').read()
base64_encoded = base64.b64encode(data).decode('UTF-8')
</code></pre>
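<p>Since the question mentions generating the file with pandas' <code>to_excel()</code>, you can also skip the disk round-trip and encode an in-memory buffer. A sketch (assumes an Excel writer engine such as openpyxl or xlsxwriter is installed):</p>
<pre class="lang-py prettyprint-override"><code>import base64
import io

import pandas as pd

df = pd.DataFrame({'a': [1, 2]})
buffer = io.BytesIO()
df.to_excel(buffer, index=False)        # write the xlsx bytes into the buffer
base64_encoded = base64.b64encode(buffer.getvalue()).decode('UTF-8')
</code></pre>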
|
python|pandas|encoding|base64
| 6 |
1,907,060 | 26,995,637 |
Use of "if" in template with custom template tag with multiple arguments
|
<p>I wrote a custom template tag to query my database and check if the value in the database matches a given string:</p>
<pre><code>@register.simple_tag
def hs_get_section_answer(questionnaire, app, model, field, comp_value):
model = get_model(app, model)
modal_instance = model.objects.get(questionnaire=questionnaire)
if getattr(modal_instance, field) == comp_value:
return True
else:
return False
</code></pre>
<p>In my template I can use this tag as follows:</p>
<pre><code>{% hs_get_section_answer questionnaire 'abc' 'def' 'ghi' 'jkl' %}
</code></pre>
<p>The function returns True or False correctly.</p>
<p>My problem: I'd like to do something like this:</p>
<pre><code>{% if hs_get_section_answer questionnaire 'abc' 'def' 'ghi' 'jkl' %}
SUCCESS
{% else %}
FAILURE
{% endif %}
</code></pre>
<p>But this does not work; it seems as if the "if" template tag cannot handle multiple arguments.</p>
<p>Can anybody give me a hint how to solve this problem?</p>
|
<p>Set the result of the template tag call to a variable then call {% if %} on that result</p>
<pre><code>{% hs_get_section_answer questionnaire 'abc' 'def' 'ghi' 'jkl' as result %}
{% if result %}
...
{% endif %}
</code></pre>
<p>You will also need to change your template tag to use an assignment tag instead of a simple tag as well. See assignment tags django doc: <a href="https://docs.djangoproject.com/en/dev/howto/custom-template-tags/#assignment-tags" rel="nofollow">https://docs.djangoproject.com/en/dev/howto/custom-template-tags/#assignment-tags</a></p>
<pre><code>@register.assignment_tag
def hs_get_section_answer(questionnaire, app, model, field, comp_value):
model = get_model(app, model)
modal_instance = model.objects.get(questionnaire=questionnaire)
if getattr(modal_instance, field) == comp_value:
return True
else:
return False
</code></pre>
|
django|python-3.x|django-templates
| 1 |
1,907,061 | 27,415,466 |
How to recursively get the size of a tree?
|
<pre><code>class EmptyValue:
pass
class Tree:
def __init__(self, root = EmptyValue):
self.root = root
self.subtrees = []
def is_empty():
self.root = EmptyValue
def size(self, a = None):
for subtree in self.subtrees:
if isinstance(subtree,Tree):
if subtree.subtrees == []:
a+=1
else:
a+=1
return(subtree.size(a))
else:
a+=1
return(a)
</code></pre>
<p>I'm trying to count the number of items in this tree; however, it gets stuck on this case.</p>
<pre><code>a = Tree(5)
b = Tree(6)
b.subtrees = [1,2,3]
a.subtrees = [Tree(11), Tree(5), Tree(3), b, Tree(4), Tree(12)]
print(a.size()) # should print 10, gives 8
</code></pre>
<p>I think the loop stops at b, and that it has something to do with the subtrees of b not being trees themselves. Any help would be greatly appreciated.</p>
|
<p>Your <code>size</code> definition is more complicated than it needs to be -- and in that complication hides bugs:</p>
<pre><code>def size(self, a=1):
    for subtree in self.subtrees:
        if isinstance(subtree, Tree):
            a += subtree.size()
        else:
            a += 1
    return a
</code></pre>
<p>Note in particular that we only have one <code>return</code> point -- after <strong>all</strong> subtrees have been evaluated, whereas the old code allowed a <code>return</code> to take place at any point in the process, meaning that later children wouldn't be considered.</p>
|
python|recursion|tree
| 1 |
1,907,062 | 27,408,635 |
'List indices must be integers, not tuple' Error when using sep='\n'
|
<p>I made a quick program which outputs the full script of '12 Days of Christmas' in the smallest possible number of lines.</p>
<p>However when using:</p>
<pre><code>print(myList,sep='\n')
</code></pre>
<p>In the full program:</p>
<pre><code>script = ["st", "nd", "rd", "th", "th", "th", "th", "th", "th", "th", "th", "th", "A patridge in a pear tree", "Two turtle doves and", "Three french hens", "Four calling birds", "Five golden rings", "Six geese a laying", "Seven swans are swimming", "Eight maids are milking", "Nine ladies dancing", "Ten lords-a-leaping", "Elven pipers piping", "Twelve drummers drumming"]
for each in range(1,13):
print(("On the ") + str(each) + str(script[each - 1]) + " day of christmas my true love gave to me")
print(script[11 , (each) + 10] ,sep='\n')
</code></pre>
<p>I get the error message of:</p>
<p>print(script[11 , (each) + 10] ,sep='\n')</p>
<p>TypeError: list indices must be integers, not tuple</p>
<p>I have looked online however nothing seems to fit and I am now banging my head against the wall. Anyway thanks for reading.</p>
|
<p>By putting a comma inside your index to <code>script</code> in the expression <code>script[11 , (each) + 10]</code>, you are trying to use the tuple <code>(11, each+10)</code> as an index, which has no meaning for a list.</p>
<p>If you want to print the elements between two indices, you need to use a slice, which uses a colon, not a comma. For instance, <code>script[2:5]</code> would print the elements from indices 2 to 5 (including 2 but not 5).</p>
<p>However, to get the correct song lyrics, what you want is to print the elements from index <code>each+11</code> down to 12 (the stop value 11 is exclusive) in <em>backwards</em> order:</p>
<pre><code>print(*script[(each) + 11:11:-1] ,sep='\n')
</code></pre>
<p>This gives the right result:</p>
<pre><code>On the 1st day of christmas my true love gave to me
A patridge in a pear tree
On the 2nd day of christmas my true love gave to me
Two turtle doves and
A patridge in a pear tree
On the 3rd day of christmas my true love gave to me
Three french hens
Two turtle doves and
A patridge in a pear tree
[etc.]
</code></pre>
|
python|list
| 2 |
1,907,063 | 12,202,998 |
Python input in while loop
|
<p>I'm making an IRC bot in Python,
and now I want to have some console commands.</p>
<p>The main function is in a while loop, so I can't put <code>raw_input()</code> there. Does anyone know how to get user input without interrupting the loop?</p>
<p>Thanks!</p>
|
<p>Question solved: I used threading.
Source: <a href="https://github.com/FaceHunter/FaceBot" rel="nofollow">FaceBot source</a></p>
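<p>For reference, a minimal sketch of the threading approach (the <code>bot</code> object and its <code>handle_console_command</code> method are hypothetical):</p>
<pre><code>import threading

def console_loop(bot):
    # runs in the background, so the main IRC while loop is never blocked
    while True:
        command = raw_input('> ')        # use input() on Python 3
        bot.handle_console_command(command)

t = threading.Thread(target=console_loop, args=(bot,))
t.daemon = True    # don't keep the process alive just for the console thread
t.start()
</code></pre>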
|
python|bots|irc
| 0 |
1,907,064 | 12,357,435 |
Python SocketServer Listen on Multicast
|
<p>I've been looking around to find a way to have SocketServer python module to listen on multicast without success.</p>
<p>Anyone managed to do so ?</p>
<p>Any insight will be greatly appreciated !</p>
<p>Thanks</p>
|
<p>The docs (http://docs.python.org/library/socketserver.html) don't make any mention of multicast, and the source code (http://hg.python.org/cpython/file/2.7/Lib/SocketServer.py) doesn't set any socket options you'd expect to see in a multicast listener (e.g. socket.IP_ADD_MEMBERSHIP), so I'd say SocketServer doesn't support multicast.</p>
<p>I assume (you should try to include a code snippet with the error you're getting) you're trying to make a UDPServer and you're getting an error that is something like: </p>
<pre><code>socket.error: [Errno 10049] The requested address is not valid in its context
</code></pre>
<p>This is because UDPServer is a subclass of TCPServer, and when a TCPServer is created it calls bind() on the specified address. You're not supposed to bind to a multicast address for listening though (hence the error), you use the IP_ADD_MEMBERSHIP socket option to listen for multicast traffic. </p>
<p>Looks like you may have to roll your own multicast server.</p>
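<p>A minimal sketch of a hand-rolled multicast listener (the group and port are placeholders):</p>
<pre><code>import socket
import struct

MCAST_GRP = '224.1.1.1'   # placeholder multicast group
MCAST_PORT = 5007         # placeholder port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', MCAST_PORT))   # bind to the port, not the multicast address
mreq = struct.pack('4sl', socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    print(sock.recv(10240))
</code></pre>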
|
python|multicast|socketserver
| 2 |
1,907,065 | 12,412,324 |
Python class returning value
|
<p>I'm trying to create a class that returns a value, not self.</p>
<p>I will show you an example comparing with a list:</p>
<pre><code>>>> l = list()
>>> print(l)
[]
>>> class MyClass:
>>> pass
>>> mc = MyClass()
>>> print mc
<__main__.MyClass instance at 0x02892508>
</code></pre>
<p>I need that MyClass returns a list, like <code>list()</code> does, not the instance info. I know that I can make a subclass of list. But is there a way to do it without subclassing?</p>
<p>I want to imitate a list (or other objects):</p>
<pre><code>>>> l1 = list()
>>> l2 = list()
>>> l1
[]
>>> l2
[]
>>> l1 == l2
True
>>> class MyClass():
def __repr__(self):
return '[]'
>>> m1 = MyClass()
>>> m2 = MyClass()
>>> m1
[]
>>> m2
[]
>>> m1 == m2
False
</code></pre>
<p>Why is <code>m1 == m2</code> False? This is the question.</p>
<p>I'm sorry if I don't respond to all of you. I'm trying all the solutions you give me. I can't just use <code>def</code>, because I need to use magic methods like <code>__setitem__</code>, <code>__getitem__</code>, etc.</p>
|
<p>I think you are very confused about what is occurring.</p>
<p>In Python, everything is an object:</p>
<ul>
<li><code>[]</code> (a list) is an object</li>
<li><code>'abcde'</code> (a string) is an object</li>
<li><code>1</code> (an integer) is an object</li>
<li><code>MyClass()</code> (an instance) is an object</li>
<li><code>MyClass</code> (a class) is also an object</li>
<li><code>list</code> (a type--much like a class) is also an object</li>
</ul>
<p>They are all "values" in the sense that they are a thing and not a name which refers to a thing. (Variables are names which refer to values.) A value is not something different from an object in Python.</p>
<p>When you call a class object (like <code>MyClass()</code> or <code>list()</code>), it returns an instance of that class. (<code>list</code> is really a type and not a class, but I am simplifying a bit here.)</p>
<p>When you <em>print</em> an object (i.e. get a string representation of an object), that object's <a href="http://www.rafekettler.com/magicmethods.html#representations" rel="noreferrer"><code>__str__</code> or <code>__repr__</code> magic method</a> is called and the returned value printed.</p>
<p>For example:</p>
<pre><code>>>> class MyClass(object):
... def __str__(self):
... return "MyClass([])"
... def __repr__(self):
... return "I am an instance of MyClass at address "+hex(id(self))
...
>>> m = MyClass()
>>> print m
MyClass([])
>>> m
I am an instance of MyClass at address 0x108ed5a10
>>>
</code></pre>
<p>So what you are asking for, "I need that MyClass return a list, like list(), not the instance info," does not make any sense. <code>list()</code> returns a list instance. <code>MyClass()</code> returns a MyClass instance. If you want a list instance, just get a list instance. If the issue instead is <em>what do these objects look like when you <code>print</code> them or look at them in the console</em>, then create a <code>__str__</code> and <code>__repr__</code> method which represents them as you want them to be represented.</p>
<h2>Update for new question about equality</h2>
<p>Once again, <code>__str__</code> and <code>__repr__</code> are only for <em>printing</em>, and do not affect the object in any other way. Just because two objects have the same <code>__repr__</code> value does not mean they are equal!</p>
<p><code>MyClass() != MyClass()</code> because your class does not define how these would be equal, so it falls back to the default behavior (of the <code>object</code> type), which is that objects are only equal to themselves:</p>
<pre><code>>>> m = MyClass()
>>> m1 = m
>>> m2 = m
>>> m1 == m2
True
>>> m3 = MyClass()
>>> m1 == m3
False
</code></pre>
<p>If you want to change this, use <a href="http://www.rafekettler.com/magicmethods.html#comparisons" rel="noreferrer">one of the comparison magic methods</a></p>
<p>For example, you can have an object that is equal to everything:</p>
<pre><code>>>> class MyClass(object):
... def __eq__(self, other):
... return True
...
>>> m1 = MyClass()
>>> m2 = MyClass()
>>> m1 == m2
True
>>> m1 == m1
True
>>> m1 == 1
True
>>> m1 == None
True
>>> m1 == []
True
</code></pre>
<p>I think you should do two things:</p>
<ol>
<li>Take a look at <a href="https://rszalski.github.io/magicmethods/" rel="noreferrer">this guide to magic method use in Python</a>.</li>
<li><p>Justify why you are not subclassing <code>list</code> if what you want is very list-like. If subclassing is not appropriate, you can delegate to a wrapped list instance instead:</p>
<pre><code>class MyClass(object):
def __init__(self):
self._list = []
def __getattr__(self, name):
return getattr(self._list, name)
# __repr__ and __str__ methods are automatically created
# for every class, so if we want to delegate these we must
# do so explicitly
def __repr__(self):
return "MyClass(%s)" % repr(self._list)
def __str__(self):
return "MyClass(%s)" % str(self._list)
</code></pre>
<p>This will now act like a list without being a list (i.e., without subclassing <code>list</code>).</p>
<pre><code>>>> c = MyClass()
>>> c.append(1)
>>> c
MyClass([1])
</code></pre></li>
</ol>
|
python|class|return
| 53 |
1,907,066 | 23,127,404 |
Python setup into virtualenv sets proper python intepreter path only for some files
|
<p>I'm attempting to install a python package into my virtualenv. Installation works, but some scripts have trouble knowing where to import modules. I've traced this to the interpreter paths in the script files. It turns out that the package is not consistently developed and diff of the source and target directories show the following variations in the interpreter paths:</p>
<pre><code>-in source file
+after installed in virtualenv
-#! /usr/bin/env python
+#!/Users/fuu/project/bin/python
-#!/usr/bin/env python
+#!/Users/fuu/project/bin/python
-#! /usr/bin/env python
+#!/usr/bin/python
-#!/usr/bin/env python
+#!/usr/bin/python
-#!/usr/bin/python
+#!/usr/bin/python
</code></pre>
<p>I'm failing to understand the logic of these transformations. Sometimes the path gets converted properly (the first two examples), and sometimes it does not, with no apparent pattern as to why.</p>
|
<p>This problem was solved after setting interpreter paths in all of the source script files to:</p>
<pre><code>#!/usr/bin/env python
</code></pre>
<p>After this was done, running the setup correctly set the path correctly in my virtualenv:</p>
<pre><code>#!/Users/fuu/project/bin/python
</code></pre>
<p>I believe the erratic behavior was caused by the python installer becoming confused about the real path to be set.</p>
|
python|macos|python-2.7|virtualenv
| 0 |
1,907,067 | 22,980,062 |
How CFFI use the c file in a certain directory?
|
<p>I am learning to call c in python program by CFFI and write c file named 'add.c' as below :</p>
<pre><code>float add(float f1, float f2)
{
return f1 + f2;
}
</code></pre>
<p>and a python file named 'demo.py' to call add method in 'add.c':</p>
<pre><code>from cffi import FFI
ffi = FFI()
ffi.cdef("""
float(float, float);
""")
C = ffi.verify("""
#include 'add.c'
""", libraries=[]
)
sum = C.add(1.9, 2.3)
print sum
</code></pre>
<p>When I run demo.py, I get an error that the add.c file cannot be found. Why can't add.c be found, and how can I fix it?</p>
|
<p>I was able to reproduce your error with the following specific error message.</p>
<pre><code>__pycache__/_cffi__x46e30051x63be181b.c:157:20: fatal error: add.c: No such file or
directory
#include "add.c"
</code></pre>
<p>It seems that <code>cffi</code> is trying to compile your file from inside the <code>__pycache__</code> subdirectory, while <code>add.c</code> is in the current directory. The fix for this is to use the relative path </p>
<pre><code> #include "../add.c"
</code></pre>
<p>However, once I fixed that, your declaration was also incorrect, so I fixed that as well, and the following code produces correct results.</p>
<pre><code>from cffi import FFI
ffi = FFI()
ffi.cdef("""
float add(float f1, float f2);
""")
C = ffi.verify("""
#include "../add.c"
""", libraries=[]
)
sum = C.add(1.9, 2.3)
print sum
</code></pre>
|
python|python-cffi
| 3 |
1,907,068 | 23,100,305 |
Delete list via reference? (Was: How do you alias a variable in Python, or similar?)
|
<p>In Perl I can say this :</p>
<pre><code>> perl -e '@a=(1,2,3);$b=\@a;$$b[1]=5;print @a'
153
@a=(1,2,3);
$b=\@a;
$$b[1]=5;
print @a
</code></pre>
<p>i.e. I can change the original variable @a via the reference $b.
How can I do that in Python?</p>
<p>=========</p>
<p>Sorry, my mistake: I was trying to delete the content of the referent array "a" via the reference "b", and it does not seem to work.
What is the correct way to delete a via b?</p>
<pre><code>> a = [1,2,3]
> b = a
> b
[1, 2, 3]
> b = []
> a
[1, 2, 3]
</code></pre>
<p>'a' is still not empty, i.e. I have a reference to the variable and I want to clear it via the reference.</p>
|
<p>In Python, all variables are names that refer to objects. That means rebinding one name (e.g. <code>b = []</code>) cannot change what another name refers to. However, you can certainly modify the shared object through either name, since an assignment such as <code>b = a</code> makes both names refer to the same object:</p>
<pre><code>a = [1,2,3]
b = a
b[0] = 0
print a
</code></pre>
<p>Output</p>
<pre><code>[0,2,3]
</code></pre>
<p>If you want to delete a list via a reference, you can use the other solution or do the following:</p>
<pre><code>b[:] = []
</code></pre>
|
python|pointers|reference|alias
| 3 |
1,907,069 | 975,785 |
Perl and CopSSH
|
<p>I'm trying to automate a process on a remote machine using a python script. The machine is a windows machine and I've installed CopSSH on it in order to SSH into it to run commands. I'm having trouble getting perl scripts to run from the CopSSH terminal. I get a command not found error. Is there a special way that I have to have perl installed in order to do this? Or does anyone know how to install perl with CopSSH?</p>
|
<p>I suspect CopSSH is giving you different environment vars to a normal GUI login. I'd suggest you type 'set' and see if perl is in the path with any other environment vars it might need. </p>
<p>Here is some explanation of <a href="http://apps.sourceforge.net/mediawiki/controltier/index.php?title=OpenSSH_on_Windows" rel="nofollow noreferrer">setting up the CopSSH user environment</a>. It may be of use.</p>
|
python|perl|ssh|openssh
| 4 |
1,907,070 | 644,073 |
signal.alarm replacement in Windows [Python]
|
<p>I have a function that occasionally hangs.</p>
<p>Normally I would set an alarm, but I'm in Windows and it's unavailable.</p>
<p>Is there a simple way around this, or should I just create a thread that calls <code>time.sleep()</code>?</p>
|
<p>The most robust solution is to use a subprocess, then kill that subprocess. Python2.6 adds .kill() to subprocess.Popen().</p>
<p>I don't think your threading approach works as you expect. Deleting your reference to the Thread object won't kill the thread. Instead, you'd need to set an attribute that the thread checks once it wakes up.</p>
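<p>A minimal sketch of the subprocess approach (the worker command is a placeholder):</p>
<pre><code>import subprocess
import time

proc = subprocess.Popen(['python', 'worker.py'])   # placeholder command
deadline = time.time() + 10                        # a 10 second "alarm"
while proc.poll() is None and time.time() < deadline:
    time.sleep(0.1)
if proc.poll() is None:   # still running past the deadline
    proc.kill()           # needs Python 2.6+
</code></pre>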
|
python|alarm
| 3 |
1,907,071 | 42,111,166 |
Check if list with given items is present in dict
|
<p>I have a dict and a list like this:</p>
<pre><code>hey = {'2': ['Drink', 'c', 'd'], '1': ['Cook', 'a']}
temp = ['Cook', 'a']
</code></pre>
<p>I want to check if <code>temp</code> is present in <code>hey</code>. My code:</p>
<pre><code>def checkArrayItem(source,target):
global flag
flag = True
for item in source:
if (item in target):
continue
else:
flag = False
break
for i,arr in enumerate(hey) :
if (len(temp) == len(hey[arr])):
checkArrayItem(temp,hey[arr])
if (flag):
print('I got it')
break
</code></pre>
<p>What is a more elegant way to do this check?</p>
|
<p>How about <code>temp in hey.values()</code>?</p>
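<p>For example, with the data from the question:</p>
<pre><code>>>> hey = {'2': ['Drink', 'c', 'd'], '1': ['Cook', 'a']}
>>> temp = ['Cook', 'a']
>>> temp in hey.values()
True
</code></pre>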
|
python
| 2 |
1,907,072 | 41,913,058 |
What would be the best way to generate IDs to make rows individual?
|
<p>In order to remove specific rows from a csv file I want to add an ID value to my rows. There are a few ways this could be done of course and I'd appreciate some input on possible ways to generate IDs. Simple and short but good ways are preferred.</p>
<p>Maybe a <strong>random</strong> big number using <code>random.randint(00000, 99999)</code> for example? But needing to check for <strong>possible duplicates</strong> would make me think that there is a better solution.</p>
<p>Another way would be to <strong>read the csv</strong> file and add <code>1</code> to some variable for each line. Maybe you would even need to figure out a way to check that a given line contains actual csv content and isn't just a result of <code>\n</code>. I tried this and had success, but the code is just really long and ugly.</p>
<p>I bet there are better ways I can't think of. My go at it can be found below. It works for my specific row management and way of adding new lines. That's what I have used up to this point.</p>
<p>I am looking for a solution for Python 3+ if it wasn't obvious to this point.</p>
<pre><code>import os

def ID(filename):
    if os.path.isfile(filename):
        if os.path.getsize(filename) == 0:
            return 1
        else:
            ID = 1
            for line in open(filename, "r"):   # one ID step per existing row
                ID += 1
            return ID
    else:
        return 1
</code></pre>
|
<p>Use a v4 UUID</p>
<pre><code>import uuid
ID = uuid.uuid4().hex
</code></pre>
<p>It's random and guaranteed to be unique for most practical applications.</p>
|
python|python-3.x|csv|uniqueidentifier
| 0 |
1,907,073 | 47,269,306 |
Installing Django when Python is already installed through Anaconda?
|
<p>I am having the common issue when trying to run:</p>
<pre><code>django-admin startproject hellodjango
</code></pre>
<p>that I am getting the error:</p>
<pre><code>'django-admin' is not recognized as an internal or external command, operable program or batch file.
</code></pre>
<p>I have run:</p>
<pre><code>pip install Django
</code></pre>
<p>Which ran successfully. However, when I navigate to my C:\Python 3\Scripts folder, I don't see any djangoadmin.py or related files in there.</p>
<p>Python 3 is added as a PATH environmental variable.</p>
<p>When I run:</p>
<pre><code>python --version
</code></pre>
<p>I get the following:</p>
<pre><code>Python 3.6.2 |Anaconda, Inc.| (default, Sep 19 2017, 08:03:39)
</code></pre>
<p>Could my issue potentially be that my version of Python is actually within the Anaconda package, rather than an explicit standalone Python installation? (Just a guess, as I'm not sure where I'm going wrong.)</p>
|
<p>django-admin is probably missing from your PATH in that case.</p>
<p>You could try running it with the full path: C:\Python 3\site-packages\django\bin\django-admin</p>
<p>To fix it permanently, edit your PATH to include the directory above.</p>
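<p>Alternatively, you can sidestep the PATH problem entirely by invoking Django through the interpreter it was installed into (works on Django 1.9 and later):</p>
<pre><code>python -m django startproject hellodjango
</code></pre>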
|
python|django
| 2 |
1,907,074 | 57,360,016 |
How to zoom in/out a 3D image using glumpy?
|
<p>I'm trying to show a 3D image (a sphere) with a texture that contains some information. I need to rotate and zoom in/out the image.</p>
<p>I just came up using glumply and I saw some examples that are very helpful (especially the Earth rendering example at <a href="https://github.com/glumpy/glumpy/blob/master/examples/earth.py" rel="nofollow noreferrer">https://github.com/glumpy/glumpy/blob/master/examples/earth.py</a>).</p>
<p>However, so far I haven't been able to find any example at all that zooms in/out the image. Does anybody know whether that's possible or not? I'm starting to think that it is not possible, but that's somehow hard to believe. I would really appreciate any example of how to do it (or somebody who knows about it telling me that it's impossible). I just discovered glumpy yesterday night, so the more complete the example, the better.</p>
<p>Thanks a lot!</p>
<p>EDIT: As far as I have seen, both the <code>Trackball</code> and <code>Arcball</code> classes (which I use for the 3D sphere) have an <code>on_mouse_scroll</code> method which should already zoom in/out when the mouse wheel is turned. However, that method is never called when I turn the wheel. I'm not sure whether this has something to do with a message I get in the console when I execute the program:</p>
<pre><code>[w] Backend (<module 'glumpy.app.window.backends.backend_glfw' from 'C:\\Python37\\lib\\site-packages\\glumpy\\app\\window\\backends\\backend_glfw.py'>) not available
[w] Backend (<module 'glumpy.app.window.backends.backend_pyglet' from 'C:\\Python37\\lib\\site-packages\\glumpy\\app\\window\\backends\\backend_pyglet.py'>) not available
</code></pre>
<p>Any ideas? I'm using Windows 10 and Python 3.7.</p>
|
<p>The problem was that I was lacking the GLFW DLL library. I could create the sphere and rotate it, but I couldn't zoom in/out. I didn't pay much attention to a couple of warnings/errors that I got when I executed the application as it somehow seemed to work alright.</p>
<p>As jdehesa pointed out in his comments, I had not properly followed the installation steps shown in <a href="https://glumpy.readthedocs.io/en/latest/installation.html#step-by-step-install-for-x64-bit-windows-7-8-and-10" rel="nofollow noreferrer">Step-by-step install for x64 bit Windows 7,8, and 10</a>.</p>
<p>Now it works. Thanks jdehesa!</p>
|
python|zooming|glumpy
| 1 |
1,907,075 | 71,033,352 |
split combined pdf into original documents
|
<p><strong>Is there a way of identifying individual documents in a combined pdf and split it accordingly?</strong></p>
<p>The pdf I am working on contains combined scans (with OCR, mostly) of individual documents. I would like to split it back into the original documents.</p>
<p>These original documents are of unstandardised length and size (hence, adobe's split by "Number of pages" or "File Size" are not an option). The "Top level bookmarks" seem to correspond to something different than individual documents, so splitting on them does not provide a useful result either.</p>
<p>I've created an xml version of the file. I'm not too familiar with it but having looked at it, I couldn't identify a standardised tag or something similar that indicates the start of a new document.</p>
<p>The answer to this <a href="https://stackoverflow.com/questions/48345067/split-a-combined-pdf-into-its-individual-files">question</a> requires control over the merging process (which I don't have), while the answer to this <a href="https://stackoverflow.com/questions/27831572/how-to-split-a-pdf-into-multiple-documents">question</a> does not work because I have no standardised keyword on which to split.</p>
<p>Eventually, I would like to do this split for a few hundred pdfs. An <a href="https://www.fedlex.admin.ch/filestore/fedlex.data.admin.ch/eli/dl/proj/6019/17/cons_1/doc_6/de/pdf-a/fedlex-data-admin-ch-eli-dl-proj-6019-17-cons_1-doc_6-de-pdf-a.pdf" rel="nofollow noreferrer">example</a> of a pdf to be split can be found here.</p>
|
<p>As discussed in the comments, one course of action is to parse the page information (MediaBox) via Python. However, I prefer a few fast command line calls rather than writing and testing a heavier solution on this lightweight netbook.</p>
<p>Thus I would build a script to loop over the files and pass them to the Windows console using <a href="http://www.xpdfreader.com/download.html" rel="nofollow noreferrer">Xpdf command line tools</a>.</p>
<p><strong>Edit</strong>: Actually, most Python PDF libs bundle the poppler build (2022-01) of pdfinfo, so you should be able to call that variant through your libs instead.</p>
<p>Using pdfinfo on your file, limited to the first 20 pages for a quick test:</p>
<p><code>pdfinfo -f 1 -l 20 yourfile.pdf</code>
The response is a text output suitable for comparison:</p>
<pre><code>Title: Microsoft Word - 20190702_Revision_CO2_Verordnung_Detailkommenta
re_SWISS_final
Subject:
Keywords:
Author: heim
Creator: PDF24 Creator
Producer: GPL Ghostscript 9.25
CreationDate: Thu Jul 18 17:36:26 2019
ModDate: Thu Jul 18 17:36:26 2019
Tagged: no
Form: none
Pages: 223
Encrypted: no
Page 1 size: 595 x 842 pts (A4) (rotated 0 degrees)
Page 2 size: 595 x 842 pts (A4) (rotated 0 degrees)
Page 3 size: 595.32 x 841.92 pts (A4) (rotated 0 degrees)
Page 4 size: 595.44 x 842.04 pts (A4) (rotated 0 degrees)
Page 5 size: 595.44 x 842.04 pts (A4) (rotated 0 degrees)
Page 6 size: 595.2 x 841.9 pts (A4) (rotated 0 degrees)
Page 7 size: 595.45 x 841.9 pts (A4) (rotated 0 degrees)
Page 8 size: 595.45 x 841.9 pts (A4) (rotated 0 degrees)
Page 9 size: 595.2 x 841.44 pts (rotated 0 degrees)
Page 10 size: 595.2 x 841.44 pts (rotated 0 degrees)
Page 11 size: 595.2 x 841.68 pts (rotated 0 degrees)
Page 12 size: 594.54 x 840.78 pts (rotated 0 degrees)
Page 13 size: 591.85 x 835.45 pts (rotated 0 degrees)
Page 14 size: 593.75 x 835.45 pts (rotated 0 degrees)
Page 15 size: 595.2 x 841.44 pts (rotated 0 degrees)
Page 16 size: 595.32 x 841.92 pts (A4) (rotated 0 degrees)
Page 17 size: 593.5 x 840.7 pts (rotated 0 degrees)
Page 18 size: 594.72 x 840.96 pts (rotated 0 degrees)
Page 19 size: 596 x 842 pts (A4) (rotated 0 degrees)
Page 20 size: 595.2 x 841.68 pts (rotated 0 degrees)
File size: 33926636 bytes
Optimized: no
PDF version: 1.4
</code></pre>
<p>In a command line context I would probably keep only the page numbers and the size: values (discarding the verbiage) to make line-by-line match analysis easier.</p>
<p>We can see that in this case, as suspected by @mkl, there is some commonality in sequential pages.</p>
<p>The above is less than a 10% sample and may not represent the full picture, but it's promising enough to pair either X or Y values in sequential pages. I ran 200 pages (on this slow machine, in seconds) and the output slowly flashing by had sufficient similarities to suggest this is a viable partial answer to build upon.</p>
<p>The majority of pairs match on the first value; the oddity was pages 13 & 14, which matched on the second value. HOWEVER, note that page 6 matches the second value of pages 7 & 8 but is not the same document, so cross-checking such cases may be desirable.</p>
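<p>If you do later want to drive this from Python after all, a rough sketch of the same pairing idea (the break heuristic is an assumption and will need tuning per batch):</p>
<pre><code>import re
import subprocess

out = subprocess.check_output(
    ['pdfinfo', '-f', '1', '-l', '223', 'yourfile.pdf'],
    universal_newlines=True)
sizes = re.findall(r'Page\s+(\d+)\s+size:\s+([\d.]+)\s+x\s+([\d.]+)', out)

breaks = [1]   # page numbers where a new document is assumed to start
for (p1, w1, h1), (p2, w2, h2) in zip(sizes, sizes[1:]):
    if w1 != w2 and h1 != h2:   # neither dimension carries over
        breaks.append(int(p2))
print(breaks)
</code></pre>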
|
python|r|xml|pdf|split
| 0 |
1,907,076 | 11,820,843 |
Python: Why doesn't this work?
|
<p>So I have this small piece of code which just won't work:</p>
<pre><code> while c<b:
str2 += str1[c]
c+=1
print str2
</code></pre>
<p>b is the length of str1 that I want to copy to str2, and c is the point at which I want to begin the transfer from str1; the while loop is then just supposed to transfer all the characters from str1 to str2.</p>
<p>For some reason I can't print str2, and I get this error message:</p>
<p>"NameError: name 'str2' is not defined"</p>
<p>My guess is that I'm just doing something simple wrong, I just began experimenting with Python and have only really done C# before.</p>
|
<p>The NameError itself occurs because str2 is never initialized before the loop appends to it (e.g. with str2 = ""). That said, a better approach here is to slice the strings:</p>
<pre><code>str2 = str1[c:b]
</code></pre>
<p>This copies <code>str1</code> from character number <code>c</code> and up to character number <code>b</code> into <code>str2</code>.</p>
<p>For example:</p>
<pre><code>>>> 'Hello World'[3:7]
'lo W'
</code></pre>
<p>Here's a little information about Python's slice notation: <a href="https://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation">Explain Python's slice notation</a></p>
|
python
| 4 |
1,907,077 | 58,421,492 |
Skipping lines and selecting wrong cells while reading CSV file
|
<p>So I have a csv file that has tons of games and info about them, and I'm trying to save each game's publisher and ESRB rating. But for some reason, when I print them out it'll randomly skip games and choose the wrong cells.</p>
<p>My code:</p>
<pre><code>def simpleLoop(file_name):
output = []
input_file = open(file_name, "r")
for line in input_file:
cells = line.split(",")
output.append((cells[7], cells[13]))
i = 0
while (i <= 10):
print(output[i]) # testing what values i get
i += 1
</code></pre>
<p><a href="https://i.stack.imgur.com/Fm9Pi.png" rel="nofollow noreferrer">Screenshot of csv</a></p>
<p><a href="https://i.stack.imgur.com/1mOlX.png" rel="nofollow noreferrer">Output</a></p>
<p><a href="https://i.stack.imgur.com/v6Lcp.png" rel="nofollow noreferrer">Expected Output</a></p>
<p>Any help is appreciated thanks!</p>
<p><strong>Edit: Solved with the help of SimoN</strong></p>
<p>For anyone else facing a similar issue make sure you specify exactly where you want to split. In my case I split at commas but there were commas inside some of the cells. So to fix this I changed:</p>
<pre><code>cells = line.split(",")
</code></pre>
<p>To</p>
<pre><code>cells = line.split('","')
</code></pre>
<p>This makes Python split between cells, because each cell ends with a double quote followed by a comma, and the next cell starts with a double quote.</p>
|
<p>There are commas inside some of the cells and you are splitting on these. When you opened the CSV in Excel (or whatever you used) it knew not to split on these as they are surrounded by quotes. I'd suggest using the Python csv module so you can do the same.</p>
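<p>A minimal sketch of that, keeping the column indices from your code:</p>
<pre><code>import csv

def simple_loop(file_name):
    output = []
    with open(file_name, newline='') as f:
        for cells in csv.reader(f):   # quoted commas are handled for you
            output.append((cells[7], cells[13]))
    return output
</code></pre>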
|
python|csv
| 2 |
1,907,078 | 33,905,999 |
How to prevent FileHandler logger from impacting other threads?
|
<p>I've got a custom django admin command and I want to capture the log output for when that command is run and make it available for download in a separate file. Similar to "Console Output" functionality in Jenkins. This command is invoked using django-after-response and I'm running uWSGI.</p>
<p>At the beginning of the admin command, I do this:</p>
<pre><code>deploy_log = NamedTemporaryFile()
formatter = logging.Formatter("%(asctime)-15s %(levelname)-8s %(message)s")
file_handler = logging.FileHandler(deploy_log.name)
file_handler.setFormatter(formatter)
file_handler.setLevel(logging.INFO)
logging.getLogger('').addHandler(file_handler)
</code></pre>
<p>Then at the end of the admin command:</p>
<pre><code>logging.getLogger('').removeHandler(file_handler)
</code></pre>
<p>The problem I'm running into is that when there are multiple 'deploys' running simultaneously, the deploy_log for one thread will have entries from other threads. How do I avoid this?</p>
|
<p>I believe I have found the solution. I had to add the following to my uwsgi vassal ini file:</p>
<pre><code>enable-threads = true
</code></pre>
<p>Now the log files are not getting jumbled together.</p>
|
python|django|logging|uwsgi
| 0 |
1,907,079 | 46,966,318 |
selenium.common.exceptions.NoSuchElementException: Message: {"errorMessage":"Unable to find element with id 'search-facet-city'"
|
<p>I am trying to scrape the following website using Python 3, Selenium, and PhantomJS:</p>
<p><a href="https://health.usnews.com/best-hospitals/search" rel="nofollow noreferrer">https://health.usnews.com/best-hospitals/search</a></p>
<p>I need to locate a search field and enter text into it, and then press enter to generate the search results. Below is the HTML that corresponds to the search field I am trying to locate:</p>
<pre><code><div class="search-field-view">
<div class="block-tight">
<label class="" for="search-facet-city">
<input id="search-facet-city" autocomplete="off" name="city"
type="text" data-field-type="text" placeholder="City, State or ZIP"
value="" />
</label>
</div>
</div>
</code></pre>
<p>Below is my Python 3 code that attempts to locate this search field using the id "search-facet-city."</p>
<pre><code>def scrape(self):
url = 'https://health.usnews.com/best-hospitals/search'
location = 'Massachusetts'
# Instantiate the driver
driver = webdriver.PhantomJS()
driver.get(url)
driver.maximize_window()
driver.implicitly_wait(10)
elem = driver.find_element_by_id("search-facet-city")
elem.send_keys(self.location)
driver.close()
</code></pre>
<p>I need to scrape some results from the page once the text is entered into the search field. However, I keep getting a NoSuchElementException error; it is not able to locate the search box element despite the fact that it exists. How can I fix this?</p>
|
<p>I tried this with Chrome:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
url = 'https://health.usnews.com/best-hospitals/search'
location = 'Massachusetts'
# Instantiate the driver
driver = webdriver.Chrome(executable_path=r'/pathTo/chromedriver')
#driver = webdriver.PhantomJS(executable_path=r'/pathTo/phantomjs')
driver.get(url)
driver.maximize_window()
wait = WebDriverWait(driver, 20)
driver.save_screenshot('out.png');
elem=wait.until(EC.element_to_be_clickable((By.XPATH,"//div[@class='search-field-view']")))
span = elem.find_element_by_xpath("//span[@class='twitter-typeahead']")
input=span.find_element_by_xpath("//input[@class='tt-input' and @name='city']");
input.send_keys(location)
driver.save_screenshot('out2.png');
</code></pre>
<p>and it works. </p>
<p>But if I try with phantomJS, in <code>driver.save_screenshot('out.png');</code> I obtain:</p>
<p><a href="https://i.stack.imgur.com/aHaXF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aHaXF.png" alt="enter image description here"></a></p>
<p>As <a href="https://stackoverflow.com/users/494134/john-gordon">@JonhGordon</a> said in the comments, the website make some checks. If you want to use phantomJS, you could try to change the <code>desired_capabilities</code> or the <code>service_args</code>.</p>
|
python|selenium|phantomjs
| 0 |
1,907,080 | 30,038,663 |
Python, Heroku & Memcachier - access settings.py variable
|
<p>I am following the <a href="https://devcenter.heroku.com/articles/memcachier#python" rel="nofollow">instructions</a> on Heroku for using MemCachier with Python.</p>
<p>When trying to use the 'mc' variable, which is set in settings.py, in another file I get the following error:</p>
<pre><code> Exception Value: name 'mc' is not defined
</code></pre>
<p>I have tried importing settings.py into the file I wish to use the 'mc' variable but I get another error:</p>
<pre><code>'Settings' object has no attribute 'mc'
</code></pre>
<p>How can I access this mc variable outside of the settings file?</p>
|
<p>This is probably an importing issue.</p>
<p>You need to access <code>mc</code> via <code>settings.mc</code>, because, provided you imported it using <code>import settings</code> at the beginning of the file, it is not included in your current namespace, but in a separate one called "settings".</p>
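<p>For example, a minimal sketch (assuming <code>mc</code> was created in settings.py as in the Heroku guide):</p>
<pre><code>import settings

settings.mc.set('greeting', 'hello')
value = settings.mc.get('greeting')
</code></pre>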
<p>If you wish to import it directly into your current namespace, use</p>
<pre><code>from settings import *
</code></pre>
<p>instead.</p>
<p>This only works, when your own file is in the same directory as settings.py, or if settings.py is in a directory known to Python. (See <strong>PYTHONPATH</strong>)</p>
<p>If settings.py is in another Directory, you could for example Import it using the <a href="https://stackoverflow.com/a/67692">whole path</a></p>
<p>It never hurts to skim over the Python docs, by the way: <a href="https://docs.python.org/3/reference/import.html" rel="nofollow noreferrer">see this</a></p>
<p>Also, make sure to use the correct case for your settings module. If the settings file is imported as "settings" with a lowercase letter, then you have to access it like that all over the place, because <em>Python is case sensitive</em></p>
|
python|heroku|memcachier
| 1 |
1,907,081 | 61,565,761 |
How to add TreeMap and Pie Chart as Subplot?
|
<p>I am trying to add a pie chart and a treemap as subplots,
like below (expected):
<a href="https://i.stack.imgur.com/AbnjQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AbnjQ.png" alt="enter image description here"></a></p>
<p>As per the <a href="https://github.com/laserson/squarify/blob/master/squarify/__init__.py" rel="nofollow noreferrer">squarify documentation</a>, <strong>I was trying to pass the axis object as the ax parameter</strong>. However, it's not working: on passing the axis object, the second plot comes out empty.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.gridspec as gridspec
import squarify
# Fixing random state for reproducibility
np.random.seed(19680801)
dt = 0.01
t = np.arange(0, 10, dt)
nse = np.random.randn(len(t))
r = np.exp(-t / 0.05)
cnse = np.convolve(nse, r) * dt
cnse = cnse[:len(t)]
s = 0.1 * np.sin(2 * np.pi * t) + cnse
fig, (ax0, ax1) = plt.subplots(ncols=2, constrained_layout=True)
fig.set_figheight(7)
fig.set_figwidth(13)
#plt.subplot(211)
# Pie chart, where the slices will be ordered and plotted counter-clockwise:
labels = ['Frogs', 'Hogs']
sizes = [15, 30]
explode = (0, 0.1) # only "explode" the 2nd slice (i.e. 'Hogs')
ax0.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
#plt.subplot(212)
#ax1.psd(s, 512, 1 / dt)
plt.show()
volume = [350, 220, 170, 150, 50]
labels = ['Liquid\n volume: 350k', 'Savoury\n volume: 220k', 'Sugar\n volume: 170k',
'Frozen\n volume: 150k', 'Non-food\n volume: 50k']
color_list = ['#0f7216', '#b2790c', '#ffe9a3', '#f9d4d4', '#d35158', '#ea3033']
plt.rc('font', size=14)
squarify.plot(sizes=volume, label=labels,
color=color_list, alpha=0.7)
plt.axis('off')
plt.show()
</code></pre>
|
<p>I found the solution. The above code was almost correct except for one issue: there was a call to the show method before squarify's plot. After removing it, it works.
Credit: @JahanC</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.gridspec as gridspec
import squarify
# Fixing random state for reproducibility
np.random.seed(19680801)
dt = 0.01
t = np.arange(0, 10, dt)
nse = np.random.randn(len(t))
r = np.exp(-t / 0.05)
cnse = np.convolve(nse, r) * dt
cnse = cnse[:len(t)]
s = 0.1 * np.sin(2 * np.pi * t) + cnse
fig, (ax0, ax1) = plt.subplots(ncols=2, constrained_layout=True)
fig.set_figheight(7)
fig.set_figwidth(13)
#plt.subplot(211)
# Pie chart, where the slices will be ordered and plotted counter-clockwise:
labels = ['Frogs', 'Hogs']
sizes = [15, 30]
explode = (0, 0.1) # only "explode" the 2nd slice (i.e. 'Hogs')
ax0.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
#plt.subplot(212)
#ax1.psd(s, 512, 1 / dt)
volume = [350, 220, 170, 150, 50]
labels = ['Liquid\n volume: 350k', 'Savoury\n volume: 220k', 'Sugar\n volume: 170k',
'Frozen\n volume: 150k', 'Non-food\n volume: 50k']
color_list = ['#0f7216', '#b2790c', '#ffe9a3', '#f9d4d4', '#d35158', '#ea3033']
plt.rc('font', size=14)
squarify.plot(sizes=volume, label=labels, ax=ax1,
color=color_list, alpha=0.7)
plt.axis('off')
plt.show()
</code></pre>
|
python|matplotlib|data-visualization|pie-chart|treemap
| 0 |
1,907,082 | 27,632,882 |
BeautifulSoup find links that meet criteria
|
<p>I am trying to collect all of the links on a webpage I have collected through beautiful soup that contain <code>/d2l/lp/ouHome/home.d2l?ou=</code>.</p>
<p>Actual links would look like these:</p>
<pre><code>"http://learn.ou.edu/d2l/lp/ouHome/home.d2l?ou=1234567"
"http://learn.ou.edu/d2l/lp/ouHome/home.d2l?ou=1234561"
"http://learn.ou.edu/d2l/lp/ouHome/home.d2l?ou=1234564"
"http://learn.ou.edu/d2l/lp/ouHome/home.d2l?ou=1234562"
"http://learn.ou.edu/d2l/lp/ouHome/home.d2l?ou=1234563"
</code></pre>
|
<p>You can pass a <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-regular-expression" rel="nofollow">compiled regular expression</a> as an <code>href</code> argument value to <code>find_all()</code>:</p>
<pre><code>soup.find_all('a', href=re.compile(r'/d2l/lp/ouHome/home\.d2l\?ou=\d+'))
</code></pre>
<p>Demo:</p>
<pre><code>>>> import re
>>> from bs4 import BeautifulSoup
>>>
>>> data = """
... <div>
... <a href="http://learn.ou.edu/d2l/lp/ouHome/home.d2l?ou=1234567">link1</a>
... <a href="http://learn.ou.edu/d2l/lp/ouHome/home.d2l?ou=1234561">link2</a>
... <a href="http://learn.ou.edu/d2l/lp/ouHome/home.d2l?ou=1234564">link3</a>
... <a href="http://learn.ou.edu/d2l/lp/ouHome/home.d2l?ou=1234562">link4</a>
... <a href="http://learn.ou.edu/d2l/lp/ouHome/home.d2l?ou=1234563">link5</a>
... </div>
... """
>>>
>>> soup = BeautifulSoup(data)
>>> links = soup.find_all('a', href=re.compile(r'/d2l/lp/ouHome/home\.d2l\?ou=\d+'))
>>> for link in links:
... print link.text
...
link1
link2
link3
link4
link5
</code></pre>
|
python|html|parsing|beautifulsoup|html-parsing
| 0 |
1,907,083 | 43,437,764 |
How to fetch gradients with respect to certain occurrences of variables in tensorflow?
|
<p>Since tensorflow supports variable reuse, some part of computing graph may occur multiple times in both forward and backward process. So my question is, is it possible to update variables with respect their certain occurrences in the compute graph?</p>
<p>For example, in <code>X_A->Y_B->Y_A->Y_B</code>, <code>Y_B</code> occurs twice, how to update them respectively? I mean, at first, we take the latter occurrence as constant, and update the previous one, then do opposite.</p>
<p>A simpler example is: say <code>X_A</code>, <code>Y_B</code>, <code>Y_A</code> are all scalar variables, then let <code>Z = X_A * Y_B * Y_A * Y_B</code>; here the gradient of <code>Z</code> w.r.t. each single occurrence of <code>Y_B</code> is <code>X_A * Y_B * Y_A</code>, but the actual gradient of <code>Z</code> w.r.t. <code>Y_B</code> is <code>2*X_A * Y_B * Y_A</code>. In this example computing the gradients separately may seem unnecessary, but those computations are not always commutative. </p>
<p>In the first example, gradients to the latter occurrence may be computed by calling <code>tf.stop_gradient</code> on <code>X_A->Y_B</code>. But I could not think of a way to fetch the previous one. Is there a way to do it in tensorflow's python API?</p>
<p><strong>Edit:</strong></p>
<p>@Seven provided an example on how to deal with it when reusing a single variable. However, often it's a variable scope that is reused, which contains many variables and functions that manage them. As far as I know, there is no way to reuse a variable scope while applying <code>tf.stop_gradient</code> to all variables it contains.</p>
|
<p>To my understanding, when you use <code>A = tf.stop_gradient(A)</code>, <code>A</code> will be considered a constant. I have an example here; maybe it can help you. </p>
<pre><code>import tensorflow as tf
wa = tf.get_variable('a', shape=(), dtype=tf.float32,
initializer=tf.constant_initializer(1.5))
b = tf.get_variable('b', shape=(), dtype=tf.float32,
initializer=tf.constant_initializer(7))
x = tf.placeholder(tf.float32, shape=())
l = tf.stop_gradient(wa*x) * (wa*x+b)
op_gradient = tf.gradients(l, x)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print sess.run([op_gradient], feed_dict={x:11})
</code></pre>
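<p>For these values (wa = 1.5, x = 11, b = 7), the printed gradient is <code>stop_gradient(wa*x) * wa = 16.5 * 1.5 = 24.75</code>; without <code>tf.stop_gradient</code> it would be <code>24.75 + 1.5 * 23.5 = 60.0</code>, since the product rule would also differentiate the first factor.</p>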
|
tensorflow
| 1 |
1,907,084 | 48,739,563 |
Python logic isn't yielding the correct answer and I'm having difficulty with developing the context
|
<p>Values including height = 10, number = 2, and bounciness = 6 should come to 25.6 feet, but I'm getting 23.2 feet. Can someone help me understand where my logic is screwed up please?</p>
<pre><code>import math
# User inputs are set up
height = int(input("Enter the height of the ball: "))
number = int(input("Enter the number of bounces of the ball: "))
bounciness = int(input("Enter the height for each successive bounce: "))
count = 0
# Calculation
for count in range(bounciness):
count = (height * bounciness)/100
distance = height + count
bounceSum = number * bounciness
total = count + distance + bounceSum
# Results
print("The ball has traveled a total distance of", total, "feet.")
</code></pre>
|
<p>If you look at the calculation in the for loop, this is what happens in the first iterations (and ones after that too):-</p>
<p><code>count</code> = 10*6/100 = 0.6<br>
<code>distance</code> = 10 + 0.6 = 10.6 (<em>count is 0.6 here because of the line above</em>)<br>
<code>bounceSum</code> = 2*6 = 12<br>
<code>total</code> = 0.6 + 10.6 + 12 = 23.2</p>
<p>The problem with the logic in your code is the variable <code>count</code>. There are 3 different definitions of it and they keep overwriting the other:</p>
<ol>
<li><code>count</code> = 0 above the for loop</li>
<li><code>count</code> which ranges from 0 to bounciness-1</li>
<li><code>count</code> = (height*bounciness)/100</li>
</ol>
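<p>A minimal corrected sketch (assuming the bounciness input of 6 stands for a rebound ratio of 6/10, which reproduces the expected 25.6 feet for the given inputs):</p>
<pre><code>height = 10
number = 2
bounciness = 6
ratio = bounciness / 10  # assumed meaning of the input

total = height          # the initial drop
current = height
for i in range(number):
    current *= ratio        # rebound height of this bounce
    total += current        # the ball rises...
    if i < number - 1:
        total += current    # ...and falls again before the next bounce

print("The ball has traveled a total distance of", round(total, 1), "feet.")  # 25.6
</code></pre>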
|
python-3.x
| 1 |
1,907,085 | 48,645,262 |
CSV Dictionary to filter a CSV file
|
<p>I am using Python 3.6 with pandas and numpy.
I have two CSV files, both without header rows (so indexing is built in). One is a one-column list of computer names:</p>
<pre><code>PC001
PC002
PC003
...
</code></pre>
<p>The other file is an import file for a system. It is a CSV file, and the PC name is the third column:</p>
<pre><code>addprinter,terminal,PC001,something,something
addprinter,terminal,PC002,something,something
addprinter,terminal,PC003,something,something
...
</code></pre>
<p>Now the import file contains thousands of entries, and I only need the lines copied to a new CSV (name it to-be-imported.csv) that contain the PC names from the, let's call it, hostnames.csv.</p>
<p>I came "close" by using this here:</p>
<pre><code>np.intersect1d(df_main[2],df_key[0])
</code></pre>
<p>Unfortunately it will only list the PC names that were found in the huge CSV, but not the lines that contain them (so they could easily be written to a new CSV).</p>
<p>I know that is too advanced for me, but I am also sure I will learn much. So hopefully there is a kind soul out there who understands what I would like to do and can share some guidance.</p>
|
<p>As answered <a href="https://stackoverflow.com/questions/17071871/select-rows-from-a-dataframe-based-on-values-in-a-column-in-pandas" rel="nofollow noreferrer">here</a>, you could use <code>isin</code>:</p>
<pre><code>hosts = pd.read_csv('hostnames.csv', header=None, names=['hosts'], squeeze=True)
df = pd.read_csv('import.csv', header=None, names=['a', 'b', 'host', 'c'])
result = df.loc[df['host'].isin(hosts)]
</code></pre>
<p>and then write the result to a CSV file with <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer"><code>to_csv</code></a>.</p>
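<p>For example (a one-line sketch, using the target file name from the question):</p>
<pre><code>result.to_csv('to-be-imported.csv', header=False, index=False)
</code></pre>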
|
python|pandas|csv|numpy|export-to-csv
| 2 |
1,907,086 | 20,007,123 |
How do I fill an n-dimensional array in HDF5 from a 1D source?
|
<p>I have an array with multiple dimensions (x, y, channels, z, time-steps). However, the raw data is stored in a TIFF image as a single stack of (x, y, channels), with z * time-steps frames.</p>
<p>Finally, Pillow's Image.getdata() function returns a 1D array-like object that needs to be reshaped.</p>
<p>What is the best way to read this into HDF5 if the dataset is too large to fit in memory? Is it possible to reshape the array once it's been written into HDF5, or to write 1D data in a way that it automatically fills in an array (i.e. writes in with x varying fastest, y second-fastest, etc.) <strong>Update</strong>: Something like <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flat.html" rel="nofollow">numpy.ndarray.flat</a> would be ideal.</p>
<p>Here's what I've tried so far (img is PIL.Image, dset is a h5py dataset):</p>
<p>1) Reading individual frames. This method is too slow since it takes ~20min for 300MB in 1000 frames. Most of the time is spent in the dset[] = a call.</p>
<pre><code>for i in range(0, img_layers):
img.seek(i)
a = numpy.array(img.getdata(), dtype=dtype) # a.shape = (sx * sz * channels,)
a.resize(sx, sy, channels)
z = i % sz
frame = i // sz
dset[..., z, frame] = a
</code></pre>
<p>2) <strong>Incomplete:</strong> Reading in chunks. This is much faster (2min for the same dataset), but I've only got this working for a 4D image (sx, sy, channels, time-steps), and need an additional dimension for z-slices:</p>
<pre><code>chunk_bits = 256 * 1000**2 # 256MB
frame_bits = depth_bits[dtype] * sx * sy * channels
chunk_frames = chunk_bits // frame_bits
a = numpy.zeros((sx, sy, channels, chunk_frames), dtype=dtype)
for i in range(0, layers):
img.seek(i)
temp = numpy.array(img.getdata(), dtype=dtype)
temp.resize(sx, sy, channels)
a[..., i % chunk_frames] = temp
if (i + 1) % chunk_frames == 0 or i == (layers - 1):
chunk = i // chunk_frames
        dset[..., chunk * chunk_frames : i + 1] = a[..., : i % chunk_frames + 1]
</code></pre>
|
<p>Option 1 was the correct answer. However, it makes a large difference which dimension varies fastest:</p>
<p><strong>~15 minutes:</strong></p>
<pre><code>for i in range(0, img_layers):
img.seek(i)
a = numpy.array(img.getdata(), dtype=dtype)
a.resize(sx, sy, channels)
z = i % sz
frame = i // sz
dset[..., z, frame] = a # Majority of time in this call
</code></pre>
<p><strong>~3 minutes:</strong></p>
<pre><code>for i in range(0, img_layers):
img.seek(i)
a = numpy.array(img.getdata(), dtype=dtype) # Majority of time in this call
a.resize(sx, sy, channels)
z = i % sz
frame = i // sz
dset[frame, z, ...] = a
</code></pre>
<p>To read this data quickly the fastest varying index should be LAST, not first.</p>
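<p>The likely explanation (an assumption based on HDF5's default C-order, row-major layout): when the frame and z indices come first, each <code>dset[frame, z, ...]</code> assignment writes one contiguous block, whereas <code>dset[..., z, frame]</code> scatters every frame across the whole dataset. A minimal sketch of creating the dataset with the faster layout (dimensions taken from the question's metadata):</p>
<pre><code>import h5py

f = h5py.File('stack.h5', 'w')
# frame (time-step) and z first, so each (sx, sy, channels) frame is contiguous
dset = f.create_dataset('data', shape=(time_steps, sz, sx, sy, channels),
                        dtype=dtype)
</code></pre>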
|
python|numpy|hdf5|h5py
| 0 |
1,907,087 | 19,885,214 |
Python - .append not working in if statement
|
<p>I am making a text-based game but I am running into a problem appending a list.</p>
<p>The starting commands you can do are in the list.</p>
<pre><code>room1_commands = ['help', 'look']
</code></pre>
<p>If they use the 'look' command it starts up this.</p>
<pre><code>if ask == 'look':
print ('You see a fireplace.')
room1_commands.append('fireplace')
</code></pre>
<p>To my knowledge that should add fireplace to the list of commands but it doesn't.</p>
<p>I have noticed that if I add...</p>
<pre><code>print (room1_commands)
</code></pre>
<p>right after the append, it shows the appended list, but if I un-indent it so it doesn't sit inside the if statement, it only prints out 'help' and 'look'.</p>
<p>This is the whole statement (including the help statement which prints out the command list)</p>
<pre><code>def room1():
ask = input()
room1_commands = ['help', 'look']
if ask == 'help':
print ('Usable Commands')
print (room1_commands)
room1()
elif ask == 'look':
print ('You see a fireplace')
room1_commands.append('fireplace')
room1()
</code></pre>
|
<p>This is a recursive function, each time <code>room1</code> gets called, <code>room1_commands</code> is being set to <code>['help', 'look']</code> again. To fix it, you should move <code>room1_commands = ['help', 'look']</code> to outside of the function like this:</p>
<pre><code>room1_commands = ['help', 'look']
def room1():
ask = input()
if ask == 'help':
print ('Usable Commands')
print (room1_commands)
room1()
elif ask == 'look':
print ('You see a fireplace')
room1_commands.append('fireplace')
room1()
</code></pre>
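<p>As a side note (a sketch, not part of the original fix): a plain loop avoids growing the call stack on every command, which the recursive version does:</p>
<pre><code>room1_commands = ['help', 'look']

def room1():
    while True:
        ask = input()
        if ask == 'help':
            print('Usable Commands')
            print(room1_commands)
        elif ask == 'look':
            print('You see a fireplace')
            room1_commands.append('fireplace')
</code></pre>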
|
python|list|append
| 1 |
1,907,088 | 69,413,362 |
Explanation of 'list' and 'object' as parameter in class definition
|
<p>Can somebody please explain what is the purpose of 'object' and 'list' parameters in the classes Card and StandardDeck? I find little information about this.</p>
<p>PyCharm says this about 'object' in class Card():
<em>The base class of the class hierarchy.
When called, it accepts no arguments and returns a new featureless instance that has no instance attributes and cannot be given any.</em></p>
<p>Is class Card considered a base class because of the 'object' parameter? Is the <code>super().__init__()</code> in class StandardDeck inherited from class Card? I really hope someone can give a good explanation; I have been struggling for hours.</p>
<pre class="lang-py prettyprint-override"><code>def main():
class Card(object):
def __init__(self, value, suit):
self.value = value
self.suit = suit
class StandardDeck(list):
def __init__(self):
super().__init__()
suits = list(range(4))
values = list(range(13))
[[self.append(Card(i, j)) for j in suits] for i in values]
deck = StandardDeck()
for card in deck:
print(card)
main()
</code></pre>
|
<p><code>object</code> is the <a href="https://docs.python.org/3/tutorial/classes.html#inheritance" rel="nofollow noreferrer">base class</a> (also referred to as a <code>super</code> class) for the derived class <code>Card</code>. This means <code>Card</code> inherits all the functionality and state of the base class (and as others said already, this is implied anyhow), and it allows <code>Card</code> to override (or change) methods as needed. Another way to say that is <code>Card</code> is a more specialized class than <code>object</code>.</p>
<p>Similarly, <code>list</code> is the base class for <code>StandardDeck</code>.</p>
<p>I would also add that it's not a particularly good design. For instance, <code>list</code> has a method called <code>clear()</code>. What does it mean to <code>clear()</code> a <code>StandardDeck</code>? It would be better design to keep whatever data structures are needed as an implementation detail (instance variables). This is sometimes expressed as failing the <a href="https://en.wikipedia.org/wiki/Liskov_substitution_principle" rel="nofollow noreferrer">Liskov substitution principle (LSP)</a>.</p>
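<p>A minimal sketch of that composition-based alternative, keeping the cards as an instance attribute instead of inheriting from <code>list</code>:</p>
<pre><code>class StandardDeck:
    def __init__(self):
        # the card list is an internal implementation detail
        self._cards = [Card(value, suit)
                       for value in range(13) for suit in range(4)]

    def __iter__(self):
        # expose iteration without exposing list methods like clear()
        return iter(self._cards)
</code></pre>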
|
python|class|inheritance|parameters
| 1 |
1,907,089 | 48,312,986 |
Call a module's multiprocessing function from a script
|
<p>My module, which is also a script, calls some internally defined functions that use multiprocessing.</p>
<p>Running the module as a script works just fine on Windows and Linux. Calling its main function from another python script works fine on Linux but not on Windows.</p>
<p>The core, multi-processed function (the function passed to the <code>multiprocessing.Process</code> constructor as the <code>target</code>) never gets executed when my module calls the Process's <code>start()</code> function.</p>
<p>The module must be doing something too demanding for this usage (multiprocessing on Windows when called from a script), but how can I get to the source of this problem?</p>
<p>Here's some example code to demonstrate the behavior. First the module:</p>
<pre><code># -*- coding: utf-8 -*-
'my_mp_module.py'
import argparse
import itertools
import Queue
import multiprocessing
def meaty_function(**kwargs):
'Do a meaty calculation using multiprocessing'
task_values = kwargs['task_values']
# Set up a queue of tasks to perform, one for each element in the task_values array
in_queue = multiprocessing.Queue()
out_queue = multiprocessing.Queue()
reduce(lambda a, b: a or b,
itertools.imap(in_queue.put, enumerate(task_values)))
core_procargs=(
in_queue ,
out_queue,
)
core_processes = [multiprocessing.Process(target=_core_function,
args=core_procargs) for ii in xrange(len(task_values))]
for p in core_processes:
p.daemon = True # I've tried both ways, setting this to True and False
p.start()
sum_of_results = 0
for result_count in xrange(len(task_values)):
a_result = out_queue.get(block=True)
sum_of_results += a_result
for p in core_processes:
p.join()
return sum_of_results
def _core_function(inp_queue, out_queue):
'Perform the core calculation for each task in the input queue, placing the results in the output queue'
while 1:
try:
task_idx, task_value = inp_queue.get(block=False)
# Perform a calculation with this task value.
task_result = task_idx + task_value # The real calculation is more complicated than this
out_queue.put(task_result)
except Queue.Empty:
break
def get_command_line_arguments(command_line=None):
'parse the given command_line (list of strings) or from sys.argv, return the corresponding argparse.Namespace object'
aparse = argparse.ArgumentParser(description=__doc__)
aparse.add_argument('--task_values', '-t',
action='append',
type=int,
help='''The value for each task to perform.''')
return aparse.parse_args(args=command_line)
def main(command_line=None):
'perform a meaty calculation with the input from the command line, and print the results'
# collect input from the command line
args=get_command_line_arguments(command_line)
keywords = vars(args)
# perform a meaty calculation with the input
meaty_results = meaty_function(**keywords)
# display the results
print(meaty_results)
if __name__ == '__main__':
multiprocessing.freeze_support()
main(command_line=None)
</code></pre>
<p>Now the script that calls the module:</p>
<pre><code># -*- coding: utf-8 -*-
'my_mp_script.py:'
import my_mp_module
import multiprocessing
multiprocessing.freeze_support()
my_mp_module.main(command_line=None)
</code></pre>
<p>Running the module as a script gives the expected results:</p>
<pre><code>C:\Users\greg>python -m my_mp_module -t 0 -t 1 -t 2
6
</code></pre>
<p>But running another script that simply calls the module's <code>main()</code> function gives an error message under Windows (here I stripped out the error message duplicated from each of the multiple processes):</p>
<pre><code>C:\Users\greg>python my_mp_script.py -t 0 -t 1 -t 2
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 380, in main
prepare(preparation_data)
File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 510, in prepare
'__parents_main__', file, path_name, etc
File "C:\Users\greg\Documents\PythonCode\Scripts\my_mp_script.py", line 7, in <module>
my_mp_module.main(command_line=None)
File "C:\Users\greg\Documents\PythonCode\Lib\my_mp_module.py", line 72, in main
meaty_results = meaty_function(**keywords)
File "C:\Users\greg\Documents\PythonCode\Lib\my_mp_module.py", line 28, in meaty_function
p.start()
File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\process.py", line 130, in start
self._popen = Popen(self)
File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 258, in __init__
cmd = get_command_line() + [rhandle]
File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 358, in get_command_line
is not going to be frozen to produce a Windows executable.''')
RuntimeError:
Attempt to start a new process before the current process
has finished its bootstrapping phase.
This probably means that you are on Windows and you have
forgotten to use the proper idiom in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce a Windows executable.
</code></pre>
|
<p>Linux and Windows work a little differently in the way they create additional processes. Linux <code>forks</code> the code but Windows creates a new Python interpreter to run the spawned process. The effect here is that all your code gets re-loaded just as if it were the first time. There is a similar question that might be informative to look at; see <a href="https://stackoverflow.com/questions/48306228/how-to-stop-multiprocessing-in-python-running-for-the-full-script#comment83598182_48306228">How to stop multiprocessing in python running for the full script</a>.</p>
<p>The solution here is to modify the <code>my_mp_script.py</code> script so the call to <code>my_mp_module.main()</code> is guarded like so..</p>
<pre><code>import my_mp_module
import multiprocessing
if __name__ == '__main__':
my_mp_module.main(command_line=None)
</code></pre>
<p>Note that I've also removed the <code>freeze_support()</code> calls for now, though they can be put back in if needed.</p>
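<p>For context: Windows uses the <em>spawn</em> start method, which re-imports the calling script in every child process. Without the <code>__name__ == '__main__'</code> guard, that re-import runs <code>main()</code> again and tries to spawn new processes before the child has finished bootstrapping, which is exactly the <code>RuntimeError</code> shown in the traceback.</p>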
|
python|windows|multiprocessing
| 0 |
1,907,090 | 51,188,395 |
Passing PyArrayObject to C function
|
<p>I try to write a Numpy extension module. The problem is that I am not sure how to pass a pointer to a <code>PyArrayObject</code> correctly to a C function, which results in the following behavior. Consider the code below:</p>
<pre><code>/* File: test_mod.c */
#define NPY_NO_DEPRECATED_API NPY_1_8_API_VERSION
#define PY_ARRAY_UNIQUE_SYMBOL __NP_ARRAY_API
#include <Python.h>
#include <numpy/arrayobject.h>
#include "test_utilities.h"
static PyObject *
test_mod(PyObject* self, PyObject* args) {
PyArrayObject* arr;
    if(!PyArg_ParseTuple(args, "O", &arr)) {
printf("Error while parsing objects");
return NULL;
}
/* do some checks
... */
do_something(&arr);
return Py_None;
}
/* Method table
... */
/* Module definition structure
... */
/* Module init function
... */
</code></pre>
<p>I want to pass <code>arr</code> by reference to a C function. In the above snippet, <code>arr</code> is a pointer to a <code>PyArrayObject</code>, which is a C <code>struct</code>. Hence, I came up with the following definition:</p>
<pre><code>/* File: test_utilities.c */
#define NPY_NO_DEPRECATED_API NPY_1_8_API_VERSION
#define NO_IMPORT_ARRAY
#define PY_ARRAY_UNIQUE_SYMBOL __NP_ARRAY_API
#include <numpy/arrayobject.h>
void
do_something(struct PyArrayObject **arr) {
    int d = PyArray_NDIM(*arr);
npy_intp N = PyArray_SIZE(*arr);
printf("%i, %li", d, N);
}
</code></pre>
<p><code>PyArray_NDIM(PyArrayObject* arr)</code> and <code>PyArray_SIZE(PyArrayObject* arr)</code> are macros from the <a href="https://docs.scipy.org/doc/numpy/reference/c-api.array.html#array-structure-and-data-access" rel="nofollow noreferrer">numpy Array API</a>.</p>
<p>After compilation, the module produces the expected output, albeit alongside some compiler warnings. Hence, I doubt that everything works fine here.</p>
<p>Warning for test_modul.c:</p>
<pre><code>test_utilities.h:1:25: warning: declaration of 'struct PyArrayObject'
will not be visible outside of this function [-Wvisibility]
void print_array(struct PyArrayObject **arr);
^
test_modul.c:63:14: warning: incompatible pointer types passing
'PyArrayObject **' (aka 'struct tagPyArrayObject **') to parameter of
type 'struct PyArrayObject **' [-Wincompatible-pointer-types]
do_something(&signal);
^~~~~~~
test_utilities.h:1:41: note: passing argument to parameter 'arr' here
void print_array(struct PyArrayObject **arr);
</code></pre>
<p>Warning for test_utilities.c:</p>
<pre><code>test_utilities.c:10:20: warning: declaration of 'struct PyArrayObject'
will not be visible outside of this function [-Wvisibility]
do_something(struct PyArrayObject **arr) {
^
test_utilities.c:13:51: warning: incompatible pointer types passing
'struct PyArrayObject *' to parameter of type 'const PyArrayObject *'
(aka 'const struct tagPyArrayObject *') [-Wincompatible-pointer-types]
printf("Array has: %i dimensions.", PyArray_NDIM(*arr));
^~~~
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-
packages/numpy/core/include/numpy/ndarraytypes.h:1464:35: note: passing
argument to parameter 'arr' here
PyArray_NDIM(const PyArrayObject *arr)
^
test_utilities.c:16:28: warning: incompatible pointer types passing
'struct PyArrayObject *' to parameter of type 'PyArrayObject *' (aka
'struct tagPyArrayObject *') [-Wincompatible-pointer-types]
npy_intp N = PyArray_SIZE(*arr);
^~~~
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-
packages/numpy/core/include/numpy/ndarrayobject.h:91:59: note: expanded
from macro 'PyArray_SIZE' #define PyArray_SIZE(m)
PyArray_MultiplyList(PyArray_DIMS(m), PyArray_NDIM(m))
^
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-
packages/numpy/core/include/numpy/ndarraytypes.h:1482:29: note: passing
argument to parameter 'arr' here
PyArray_DIMS(PyArrayObject *arr)
^
</code></pre>
<p>I am not (yet) that good in C, and that's probably why I do not see what is going on here. <code>do_something</code> takes a pointer to a struct. Hence, dereferencing <code>arr</code> in <code>do_something</code> should give me the original pointer to the PyArrayObject struct from the <code>test_mod</code> function in <code>test_mod.c</code>. Is that a misconception, and if so, why is it producing the expected results?</p>
<p>To wrap this up, how do I pass a <code>PyArrayObject</code> correctly to a C function, such that there are no warnings (concerning incompatible pointer types, etc)?</p>
|
<p>After studying the docs again, I found the answer to be very simple. One just has to pass the struct pointer, like </p>
<pre><code>do_something(arr);
</code></pre>
<p>The function itsef should read</p>
<pre><code>void do_something(PyArrayObject *arr)
{
    int d = PyArray_NDIM(arr);
npy_intp N = PyArray_SIZE(arr);
printf("%i, %li", d, N);
}
</code></pre>
<p>Additionally, one should not forget to increase the reference count of <code>Py_None</code> in case the C function has no return value. Hence, the end of the C function should read</p>
<pre><code>Py_INCREF(Py_None);
return Py_None;
</code></pre>
<p>One could also use the built-in macro <code>Py_RETURN_NONE</code>, which has the same meaning as the above two lines.</p>
|
c|numpy
| 0 |
1,907,091 | 70,534,790 |
Spacy matcher pattern with specifics nouns
|
<p>I'm trying to match a specific pattern: any verb with a noun ending with an s, t or l.
E.g.:
Like cat,
Eat meal,
Make spices</p>
<p>How can I do this?</p>
<p>I know I was doing this:</p>
<pre><code>import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
pattern = [{"POS": "VERB"}, {"POS": "NOUN"}]
matcher.add("mypattern", [pattern])
doc = nlp(Verbwithnoun)
matches = matcher(doc)
for match_id, start, end in matches:
    string_id = nlp.vocab.strings[match_id]
    print(doc[start:end])
</code></pre>
<p>But that prints all verbs with nouns, not just those whose noun ends with a t, l or s. How can I get spaCy to match only nouns ending with a t, l or s?</p>
|
<p>You can post-process the results by checking if the phrase you get ends with either of the three letters:</p>
<pre class="lang-py prettyprint-override"><code>import spacy
from spacy.matcher import Matcher
nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
pattern = [{"POS": "VERB"}, {"POS": "DET", "OP" : "?"}, {"POS": "NOUN"}]
matcher.add("mypattern", [pattern])
Verbwithnoun = "I know the language. I like the cat, I eat a meal, I make spices."
doc = nlp(Verbwithnoun)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id]
phrase = doc[start:end]
if phrase.text.endswith('s') or phrase.text.endswith('t') or phrase.text.endswith('l'):
print(doc[start:end])
</code></pre>
<p>Output:</p>
<pre><code>like the cat
eat a meal
make spices
</code></pre>
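<p>Alternatively (a sketch), the letter check can be folded into the token pattern itself with spaCy's <code>REGEX</code> operator, so no post-processing is needed:</p>
<pre><code>pattern = [
    {"POS": "VERB"},
    {"POS": "DET", "OP": "?"},
    {"POS": "NOUN", "TEXT": {"REGEX": "[stl]$"}},
]
</code></pre>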
|
python|python-3.x|spacy|spacy-3
| 2 |
1,907,092 | 69,734,810 |
How would I transfer information like a variable from javascript to python using AJAX and Flask
|
<p>I am messing around with Flask and Ajax to see what I can do with it for a bigger project I'm working on. I am trying to get some data in a variable into my python program to be able to use and manipulate. I have a script in an html file that returns the current URL. How can I get the url into my python program with AJAX?</p>
<p>I will attach my relevant code below:</p>
<p>index.html</p>
<pre><code><html lang="en">
<head>
<meta charset="UTF-8">
<title>HELP</title>
</head>
<body>
<p>I am {{person}} and i am {{age}} years old, I am a {{ql}}</p>
<form name="passdata" action="." method="post">
<label>Name:</label>
<input type="text" name="name">
<label>Age:</label>
<input type="text" name="age">
<label>Date of Birth:</label>
<input type="text" name="dateofbirth">
<input type="submit" value="submit">
</form>
<p id="demo"></p>
<script>
            let geturl = window.location.href;  // getting the current URL
            document.getElementById("demo").innerHTML =
            "The full URL of this page is:<br>" + geturl;
        </script>
</body>
</html>
</code></pre>
<p>main.py</p>
<pre><code>from flask import Flask, render_template, request
app = Flask(__name__)
name = "random"
age = "21"
qualification = "software engineer"
@app.route('/')
def index():
return render_template('index.html', person=name, age=age, ql=qualification)
@app.route('/', methods=['POST'])
def getvalue():
name2 = request.form['name']
age = request.form['age']
db = request.form['dateofbirth']
return name2
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>let me know if you need any clarification. Thanks in advance.</p>
|
<p>Usually when you are sending variables to routes in flask you will have the route designed as:</p>
<pre class="lang-py prettyprint-override"><code>from flask import jsonify

@app.route('/route_to_send_to/<variable>', methods=['GET'])
def route_to_send_to(variable):
    return jsonify({'variable': variable})
</code></pre>
<p>Another hint: when working with Ajax calls, the response is usually JSON, which you can send back with jsonify.</p>
<p>All you need to do is call the route, replacing the variable placeholder with the value you want to send.</p>
<p>You can also send a POST request with Ajax to keep the data out of the URL, but then you will need to change the route to handle POST requests.</p>
<p>The Ajax call may be something like:</p>
<pre><code>var url = '/route_to_send_to/' + name;
return $.ajax({
    type: "GET",
    url: url,
}).then(function(data) { /* Do something with the response */ });
</code></pre>
<p>Considering the comments below I might suggest you read this <a href="https://towardsdatascience.com/using-python-flask-and-ajax-to-pass-information-between-the-client-and-server-90670c64d688" rel="nofollow noreferrer">article</a> to get a better understanding of how flask and ajax work together.</p>
<p>Also This amazing <a href="https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world" rel="nofollow noreferrer">Flask tutorial by Miguel Grinberg</a> is probably one of the best resources I have ever come across for learning flask, the framework and good conducts with it. It goes through everything from the absolute basic to extremely advanced topics and is well worth the time.</p>
|
javascript|python|html|flask
| 1 |
1,907,093 | 69,765,531 |
Filtering a date range field with date range
|
<p>React/Django app. I want to add a date range filter (flatpickr) to already existing filters for orders. Orders has a <code>period</code> field (how long the order is valid), which is a <code>DateRange</code> field. Via the flatpickr I select a date range, and if at least one day of the order period is in that selected date range, it should show up as a result.</p>
<p>ex. period: <code>DateRange(datetime.date(2021, 2, 11), datetime.date(2021, 3, 14), '[)')</code></p>
<p>I have the filter ready in FE to accept the results from BE. But I'm not sure how to achieve this in the BE. As I'm fairly new to this, my ideas are limited, but currently have this:</p>
<pre><code>...
from django_filters.rest_framework import DjangoFilterBackend, IsoDateTimeFilter, FilterSet
from rest_framework import filters, mixins, status
...
class OrderFilter(FilterSet):
start_date = IsoDateTimeFilter(field_name="period", lookup_expr="gte")
end_date = IsoDateTimeFilter(field_name="period", lookup_expr="lte")
class Meta:
model = Order
fields = {
"status": ["in"],
"client": ["exact"],
"created_by": ["exact"],
}
ordering_fields = ["period", "client__name", "destination__name"]
ordering = ["period"]
custom_field_target_model = Order
</code></pre>
<p>I believe the closest thing I found, is <a href="https://django-filter.readthedocs.io/en/stable/ref/filters.html?highlight=range#isodatetimefromtorangefilter" rel="nofollow noreferrer">IsoDateTimeFromToRangeFilter</a>, but that doesn't seem to be what I'm looking for.</p>
|
<p>Since your field is <code>DateRangeField</code>, your <a href="https://docs.djangoproject.com/en/2.2/_modules/django/contrib/postgres/fields/ranges/" rel="nofollow noreferrer">lookup expressions</a> are limited. You can use the following options:</p>
<ol>
<li>contains</li>
<li>contained_by</li>
<li>overlap</li>
</ol>
<p>If your use case is to determine whether <code>period</code> contains a specific date range (start_date, end_date), you can use it as:</p>
<pre><code>Order.objects.filter(period__contains=(start_date, end_date))
</code></pre>
<p>you can also play with others if you wish.</p>
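<p>For the requirement in the question ("at least one day of the order period is in the selected range"), <code>overlap</code> is the closest fit (a sketch):</p>
<pre><code>Order.objects.filter(period__overlap=(start_date, end_date))
</code></pre>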
<p>The problem here is not the filter class, it is the field that you're trying to filter. If you keep period_start_date and period_end_date separately, you can filter the dates separately.
To filter a field with two inputs, you can use a <a href="https://www.django-rest-framework.org/api-guide/filtering/#custom-generic-filtering" rel="nofollow noreferrer">custom filter method</a> or you can <a href="https://www.django-rest-framework.org/api-guide/filtering/#filtering-against-query-parameters" rel="nofollow noreferrer">override</a> the <code>get_queryset</code> method of your view.</p>
<p>Note: As @aberkb mentioned, you need to add "period" to fields.</p>
|
python|django
| 2 |
1,907,094 | 65,421,831 |
How do i get user images on twitter using Twitter API?
|
<p>I'm trying to download a user's timeline images (including tweet images). How do I get user images on Twitter using the Twitter API?
https://api.twitter.com/labs/2/tweets/1138505981460193280?expansions=attachments.media_keys&tweet.fields=created_at%2Cauthor_id%2Clang%2Csource%2Cpublic_metrics%2Ccontext_annotations%2Centities</p>
<p>This API provides all the details about a single tweet. Is there any possible solution to get all images with a standard Twitter app, or is this feature only available with a premium account?</p>
|
<p>You can do this using the <a href="https://developer.twitter.com/en/docs/twitter-api/tweets/timelines/api-reference/get-users-id-tweets" rel="nofollow noreferrer">v2 User timeline endpoint</a>:</p>
<p><code>twurl "/2/users/786491/tweets?max_results=100&expansions=attachments.media_keys&media.fields=url,media_key"</code></p>
<p>You can retrieve up to 3200 of the user's most recent Tweets using this method. You can get additional information by adding further <a href="https://developer.twitter.com/en/docs/twitter-api/data-dictionary/using-fields-and-expansions" rel="nofollow noreferrer">fields and expansions</a> to the request, if you need them.</p>
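<p>A rough equivalent with plain <code>requests</code> (a sketch; it assumes you have a valid v2 bearer token):</p>
<pre><code>import requests

url = "https://api.twitter.com/2/users/786491/tweets"
params = {
    "max_results": 100,
    "expansions": "attachments.media_keys",
    "media.fields": "url,media_key",
}
headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}  # placeholder token
resp = requests.get(url, params=params, headers=headers)
print(resp.json())
</code></pre>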
|
api|twitter|google-api|twitter-oauth|twitterapi-python
| 2 |
1,907,095 | 70,293,297 |
Python datetime utcnow vs Luxon Datetime.fromMillis?
|
<p>I'm trying to wrap my head in understanding the <em>implication</em> of using <code>.utcnow</code> vs. <code>.now</code> on Python's DateTime.</p>
<p>Here's the reason for my confusion: I live in France. Right now, we have a +1 hour on the UTC timezone (CET timezone in winter (now) / CEST (+2) timezone in summer).</p>
<p>If I take the following value :</p>
<pre><code>dt = datetime.datetime.utcnow()
dt.strftime('%c') # Thu Dec 9 16:17:38 2021
int(dt.timestamp()) # 1639063064
</code></pre>
<p>This is correct as it is, in France right now, 17h17.
So, from my understanding, that timestamp, <code>1639063064</code>, is the UTC representation of the time since EPOCH.</p>
<p>But if I test this value in the website <a href="https://www.epochconverter.com/" rel="nofollow noreferrer">Epoch Converter</a>, I get</p>
<ul>
<li>GMT: Thursday 9 December 2021 15:17:44</li>
<li>Your time zone: jeudi 9 décembre 2021 16:17:44 GMT+01:00</li>
</ul>
<p>It seems that the website ALSO subtracts my timezone from an already "subtracted" value, ending up removing the timezone twice and causing an invalid value.</p>
<p>The actual confusion is when I tried to import that UTC timestamp to Luxon on my front app, doing the following doesn't work :</p>
<pre><code>DateTime.fromMillis(parseInt(ts), { zone: 'utc' }).toLocal().setLocale('en')
</code></pre>
<p>I'm one hour behind.</p>
<p>How can I "tell" Luxon that the current TS is in the UTC timezone, so that calling <code>toLocal</code> will apply the proper user's timezone?</p>
|
<blockquote>
<p>It seems that the website ALSO subtracts my timezone</p>
</blockquote>
<p>No, epochconverter.com isn't doing anything. The value 1639063064 really <em>does</em> represent 2021-12-09T15:17:44Z. That's not the value you want.</p>
<p>I'm no Python expert, but I believe the problem is the combination of <a href="https://docs.python.org/3/library/datetime.html#datetime.datetime.utcnow" rel="nofollow noreferrer">this <code>utcnow()</code> behavior</a> (emphasis mine):</p>
<blockquote>
<p>Return the current UTC date and time, with <code>tzinfo None</code>.</p>
<p>This is like <code>now()</code>, but returns the current UTC date and time, <strong>as a naive datetime object.</strong></p>
</blockquote>
<p>And <a href="https://docs.python.org/3/library/datetime.html#datetime.datetime.timestamp" rel="nofollow noreferrer">this <code>timestamp()</code> behavior</a>:</p>
<blockquote>
<p>Naive datetime instances are assumed to represent local time and this method relies on the platform C <code>mktime()</code> function to perform the conversion.</p>
</blockquote>
<p>It sounds like you want to follow this advice:</p>
<blockquote>
<p>An aware current UTC datetime can be obtained by calling <code>datetime.now(timezone.utc)</code>.</p>
</blockquote>
<p>So just change your first line to:</p>
<pre class="lang-py prettyprint-override"><code>dt = datetime.now(timezone.utc)
</code></pre>
<p>... and it should be okay.</p>
|
python|datetime|timezone|luxon
| 2 |
1,907,096 | 70,173,535 |
How do I pass arguments?
|
<p>This is part of my code:</p>
<pre><code>req = api.AliexpressSolutionBatchProductInventoryUpdateRequest(url, port)
req.set_app_info(appinfo(appkey, secret))
req.multiple_sku_update_list = {'sku_code': row['model'], 'inventory': int(row['stock'])}
req.mutiple_product_update_list = {'product_id': row['product_id']}
sessionkey = 'xxxxxxxxxxxxxxxxxx'
resp = req.getResponse(sessionkey)
print(resp)
</code></pre>
<p>when i use it like this i get error</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\GOD\Desktop\Новая папка (2)\test.py", line 31, in <module>
resp = req.getResponse(sessionkey)
File "C:\Users\GOD\AppData\Local\Programs\Python\Python310\lib\site-packages\aliexpress\api\base.py", line 300, in getResponse
raise error
aliexpress.api.base.TopException: errorcode=40 message=Missing required arguments:mutiple_product_update_list.multiple_sku_update_list subcode=None submsg=None application_host=11.131.48.59 service_host=top011131048059.na62
</code></pre>
<p><a href="https://developer.alibaba.com/docs/api.htm?spm=a219a.7395905.0.0.2d1075fedZYJML&apiId=45135" rel="nofollow noreferrer">https://developer.alibaba.com/docs/api.htm?spm=a219a.7395905.0.0.2d1075fedZYJML&apiId=45135</a> - official manual.
I'm trying to update stocks, please help.
<a href="https://i.stack.imgur.com/puHvE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/puHvE.png" alt="enter image description here" /></a></p>
|
<p>This change helps:</p>
<pre><code>req.mutiple_product_update_list = {"product_id": row['product_id'], "multiple_sku_update_list": [
{"sku_code": str(inf), "inventory": int(row['stock'])}]}
</code></pre>
|
pandas|parameters|arguments|response|aliexpress
| 0 |
1,907,097 | 68,088,993 |
Pythonic way to convert NamedTuple to dict to use for dictionary unpacking (**kwargs)
|
<p>I have a <code>typing.NamedTuple</code> that I would like to convert to a <code>dict</code> so that I can pass into a function via dictionary unpacking:</p>
<pre class="lang-py prettyprint-override"><code>def kwarg_func(**kwargs) -> None:
print(kwargs)
# This doesn't actually work, I am looking for something like this
kwarg_func(**dict(my_named_tuple))
</code></pre>
<p>What is the most Pythonic way to accomplish this? I am using Python 3.8+.</p>
<hr />
<p><strong>More Details</strong></p>
<p>Here is an example <code>NamedTuple</code> to work with:</p>
<pre class="lang-py prettyprint-override"><code>from typing import NamedTuple
class Foo(NamedTuple):
f: float
b: bool = True
foo = Foo(1.0)
</code></pre>
<p>Trying <code>kwarg_func(**dict(foo))</code> raises a <code>TypeError</code>:</p>
<pre><code>TypeError: cannot convert dictionary update sequence element #0 to a sequence
</code></pre>
<p>Per <a href="https://stackoverflow.com/a/34166604/11163122">this post on <code>collections.namedtuple</code></a>, <code>_asdict()</code> works:</p>
<pre class="lang-py prettyprint-override"><code>kwarg_func(**foo._asdict())
{'f': 1.0, 'b': True}
</code></pre>
<p>However, since <code>_asdict</code> is private, I am wondering, is there a better way?</p>
|
<p>Use <code>._asdict</code>.</p>
<p><code>._asdict</code> is <strong>not private</strong>. It is a public, documented portion of the API. <a href="https://docs.python.org/3/library/collections.html#collections.namedtuple" rel="noreferrer">From the docs</a>:</p>
<blockquote>
<p>In addition to the methods inherited from tuples, named tuples support
three additional methods and two attributes. To prevent conflicts with
field names, the method and attribute names start with an underscore.</p>
</blockquote>
|
python|dictionary|namedtuple|iterable-unpacking
| 9 |
1,907,098 | 62,940,937 |
Line Plot not Plotting
|
<p>I'm trying to plot some data, however it's not plotting. When I run my code, it just says "process finished", with no plot showing up. I'm really not sure what I'm doing wrong here. Any advice on how to fix this?</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.animation as animation
from random import randint
import time
import datetime
Time = time.time()
fig = plt.figure()
x = []
y = []
def rand_num():
while True:
n = randint(1, 1000)
n1 = randint(1, 1000)
n2 = randint(1, 1000)
def animate(i):
x.append(time.time())
# print(Time)
y.append(n)
# print(n)
plt.plot(x, y, color='r')
# plt.plot(x, n1, color='g')
# plt.plot(x, n2, color='b')
euler = []
return
anim = animation.FuncAnimation(fig, animate, interval=1000)
plt.show()
</code></pre>
|
<p>Try calling the function at the end of the code:</p>
<pre><code>rand_num()
</code></pre>
<p><a href="https://i.stack.imgur.com/giNZj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/giNZj.png" alt="enter image description here" /></a></p>
<p>For changing values try:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.animation as animation
from random import randint
import time
import datetime
Time = time.time()
fig = plt.figure()
x = []
y = []
def rand_num():
def animate(i):
n = randint(1, 1000)
n1 = randint(1, 1000)
n2 = randint(1, 1000)
x.append(time.time())
y.append(n)
plt.plot(x, y, color='r')
euler = []
anim = animation.FuncAnimation(fig, animate, interval=1000)
plt.show()
rand_num()
</code></pre>
<p><a href="https://i.stack.imgur.com/VJxIs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VJxIs.png" alt="enter image description here" /></a></p>
|
python|numpy|matplotlib|plot
| 1 |
1,907,099 | 62,113,681 |
Where to find photos
|
<p>I use PyCharm and I run the code from GitHub.
Where can I get the word-cloud image?</p>
<pre><code>#!/usr/bin/env python
"""
Minimal Example
===============
Generating a square wordcloud from the US constitution using default arguments.
"""
import os
from os import path
from wordcloud import WordCloud
# get data directory (using getcwd() is needed to support running example in generated IPython notebook)
d = path.dirname(__file__) if "__file__" in locals() else os.getcwd()
# Read the whole text.
text = open(path.join(d, 'crawling.xlsx')).read()  # I used my own text; only this line was changed.
# Generate a word cloud image
wordcloud = WordCloud().generate(text)
# Display the generated image:
# the matplotlib way:
import matplotlib.pyplot as plt
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
# lower max_font_size
wordcloud = WordCloud(max_font_size=40).generate(text)
plt.figure()
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
# The pil way (if you don't have matplotlib)
image = wordcloud.to_image()
image.show()
</code></pre>
<p>And I wonder if separate tokenization or string cleansing is required when using the word cloud library.</p>
|
<p>Add this one line to the end to save the image:</p>
<pre><code>image.save("constitution.png")
</code></pre>
<p>See <a href="https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.save" rel="nofollow noreferrer">Pillow reference for Image.save</a>.</p>
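<p>Regarding the tokenization question: <code>WordCloud().generate()</code> tokenizes raw text itself (and filters stopwords via its <code>stopwords</code> parameter), so separate tokenization is usually unnecessary. Note, however, that <code>open(...).read()</code> will not meaningfully read an <code>.xlsx</code> file; a sketch of loading the text first (the column name is hypothetical):</p>
<pre><code>import pandas as pd

df = pd.read_excel('crawling.xlsx')
text = ' '.join(df['text_column'].astype(str))  # 'text_column' is a placeholder
</code></pre>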
|
python|word-cloud
| 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.