Unnamed: 0 | id | title | question | answer | tags | score
---|---|---|---|---|---|---|
1,905,700 | 45,531,489 |
Converting different date time formats to MM/DD/YYYY format in pandas dataframe
|
<p>I have a date column in a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html" rel="nofollow noreferrer"><code>pandas.DataFrame</code></a> in various date time formats and stored as list object, like the following:</p>
<pre><code> date
1 [May 23rd, 2011]
2 [January 1st, 2010]
...
99 [Apr. 15, 2008]
100 [07-11-2013]
...
256 [9/01/1995]
257 [04/15/2000]
258 [11/22/68]
...
360 [12/1997]
361 [08/2002]
...
463 [2014]
464 [2016]
</code></pre>
<p>For the sake of convenience, I want to convert them all to <code>MM/DD/YYYY</code> format. It doesn't seem possible to use the regex replace() function to do this, since one cannot execute this operation over list objects. Also, using strptime() for each cell would be too time-consuming. </p>
<p>What would be an easier way to convert them all to the desired <code>MM/DD/YYYY</code> format? I found it very hard to do this on list objects within a dataframe.</p>
<p>Note: for cell values of the form <code>[YYYY]</code> (e.g., <code>[2014]</code> and <code>[2016]</code>), I will assume they are the first day of that year (i.e., January 1, 2014), and for cell values such as <code>[08/2002]</code> (or <code>[8/2002]</code>), I will assume they are the first day of the month of that year (i.e., August 1, 2002).</p>
|
<p>Given your sample data, with the addition of a <code>NaT</code>, this works:</p>
<h3>Code:</h3>
<pre><code>df.date.apply(lambda x: pd.to_datetime(x).strftime('%m/%d/%Y')[0])
</code></pre>
<h3>Test Code:</h3>
<pre><code>import pandas as pd

df = pd.DataFrame([
    [['']],
    [['May 23rd, 2011']],
    [['January 1st, 2010']],
    [['Apr. 15, 2008']],
    [['07-11-2013']],
    [['9/01/1995']],
    [['04/15/2000']],
    [['11/22/68']],
    [['12/1997']],
    [['08/2002']],
    [['2014']],
    [['2016']],
], columns=['date'])

df['clean_date'] = df.date.apply(
    lambda x: pd.to_datetime(x).strftime('%m/%d/%Y')[0])
print(df)
</code></pre>
<h3>Results:</h3>
<pre><code>                    date  clean_date
0                     []         NaT
1       [May 23rd, 2011]  05/23/2011
2    [January 1st, 2010]  01/01/2010
3        [Apr. 15, 2008]  04/15/2008
4           [07-11-2013]  07/11/2013
5            [9/01/1995]  09/01/1995
6           [04/15/2000]  04/15/2000
7             [11/22/68]  11/22/1968
8              [12/1997]  12/01/1997
9              [08/2002]  08/01/2002
10                [2014]  01/01/2014
11                [2016]  01/01/2016
</code></pre>
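<p>As a sketch of an element-wise alternative (my own variant, not from the original answer): pull the single string out of each list, parse it, and format it. The sample rows below are hypothetical, and this assumes every cell is a one-element list holding a parseable date string.</p>

```python
import pandas as pd

# Element-wise variant: unwrap the one-element list, parse, then format.
# Sample data is hypothetical; each cell is assumed to hold one date string.
df = pd.DataFrame({'date': [['May 23rd, 2011'], ['12/1997'], ['2014']]})
df['clean_date'] = df['date'].apply(
    lambda cell: pd.to_datetime(cell[0]).strftime('%m/%d/%Y'))
print(df['clean_date'].tolist())  # ['05/23/2011', '12/01/1997', '01/01/2014']
```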
|
python|list|pandas|datetime|dataframe
| 9 |
1,905,701 | 56,958,042 |
Using multiprocessing with a void method/setter method
|
<p>I'm still new to multiprocessing but have done a lot of reading over the past couple of days and want to see if something I had in mind was feasible using multiprocessing.</p>
<p>A lot of examples of multiprocessing online look like the following:</p>
<pre><code>import multiprocessing

def worker():
    print('Worker')

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
</code></pre>
<p>But the example methods for multiprocessing always return or print something! Is there a way I can do the following?</p>
<pre><code>import multiprocessing

class Worker():
    def __init__(self):
        self.level = 0

    def setLevel(self, val):
        self.level = val

def method(worker, level):
    worker.setLevel(level)

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        jobs.append(Worker())
    pool = multiprocessing.Pool()
    for i in range(5):
        worker = jobs[i]
        res = pool.apply_async(method, args=(worker, i,))
    pool.close()
    pool.join()
    for worker in jobs:
        print(worker.level)
</code></pre>
<p>I know <code>apply_async</code> returns a result object, whose value you can get with <code>Result.get()</code>, but that doesn't seem useful in a setting like the one I've described.</p>
<p>When I execute the code above, I get <code>0 0 0 0 0</code> instead of the desired <code>0 1 2 3 4</code> result.</p>
|
<p>Generally speaking, there's no requirement to return something from a function passed to <code>Pool.apply_async()</code>, but in this case it's necessary in order to update the corresponding <code>Worker</code> objects in the <code>jobs</code> list, which exist only in the main process.</p>
<p>This is because with <code>multiprocessing</code>, each process runs in its own memory-space, which means you cannot share global variables among them. There are ways to simulate that, but it generally entails a lot of overhead and may actually defeat any gains from doing the multiprocessing. Each sub-process is passed a <em>copy</em> of the <code>Worker</code> object.</p>
<p>Taking that into consideration, here's one way to make your code work. The <code>method()</code> function now returns (a copy of) the updated <code>Worker</code> object to the main process, which stores the result objects associated with each one in a separate list named <code>results</code>. When all the jobs have been processed following the <code>pool.join()</code> call, that list is used to replace each <code>Worker</code> object originally put into the <code>jobs</code> list, making it <em>appear</em> as though they've updated themselves.</p>
<pre><code>import multiprocessing

class Worker():
    def __init__(self):
        self.level = 0

    def setLevel(self, val):
        self.level = val

def method(worker, level):
    worker.setLevel(level)
    return worker  # ADDED - return updated Worker object.

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        jobs.append(Worker())

    results = []
    pool = multiprocessing.Pool()
    for i in range(5):
        worker = jobs[i]
        results.append(pool.apply_async(method, (worker, i)))
    pool.close()
    pool.join()

    # Update Workers in jobs list.
    for i, result in enumerate(results):
        jobs[i] = result.get()  # Replace workers with their updated version.

    for worker in jobs:
        print(worker.level)
</code></pre>
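<p>For comparison, here is a minimal sketch of the same round-trip idea using <code>Pool.map</code> instead of <code>apply_async</code> (the names <code>set_level</code> and <code>Worker</code> here are my own simplification, not the question's exact code): the worker function returns the modified copy, and the returned list simply replaces <code>jobs</code>.</p>

```python
import multiprocessing

class Worker:
    def __init__(self):
        self.level = 0

def set_level(args):
    worker, level = args
    worker.level = level
    return worker  # return the modified copy back to the parent process

if __name__ == '__main__':
    jobs = [Worker() for _ in range(5)]
    with multiprocessing.Pool() as pool:
        # map() sends copies to the workers and collects the updated copies
        jobs = pool.map(set_level, zip(jobs, range(5)))
    print([w.level for w in jobs])  # [0, 1, 2, 3, 4]
```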
|
python|multiprocessing
| 1 |
1,905,702 | 56,976,840 |
How to only accept two numbers in the TextInput field?
|
<p>I want to accept only two numbers in my TextInput. I know how to accept only numbers, and I know how to accept only two characters, but not how to do both. </p>
<p>This code only accepts two characters: </p>
<pre><code>TextInput:
    multiline: False
    input_filter: lambda text, from_undo: text[:2 - len(self.text)]
</code></pre>
<p>And this code only accepts numbers:</p>
<pre><code>TextInput:
    multiline: False
    input_filter: "int"
</code></pre>
<p>But when I try something like:</p>
<pre><code>TextInput:
    multiline: False
    input_filter: "int", lambda text, from_undo: text[:2 - len(self.text)]
</code></pre>
<p>I get this error: </p>
<pre><code>TypeError: 'tuple' object is not callable
</code></pre>
|
<p>As far as I know, you cannot do what you want in this way. But you can use a <em>NumericInput</em>: this class builds on <em>TextInput</em> and will handle your limits. I hope this can help you. It's a little bit different from your original idea, but it solves the problem.</p>
<p>So try the following:</p>
<h3>main.py file</h3>
<pre><code>from kivy.app import App
from kivy.base import Builder
from kivy.properties import NumericProperty
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.textinput import TextInput

class NumericInput(TextInput):
    min_value = NumericProperty(None)
    max_value = NumericProperty(None)

    def __init__(self, *args, **kwargs):
        TextInput.__init__(self, *args, **kwargs)
        self.input_filter = 'int'  # the type of your filter
        self.multiline = False

    def insert_text(self, string, from_undo=False):
        new_text = self.text + string
        if new_text != "" and len(new_text) < 3:
            try:
                # Try to convert the text to an int and compare; if it's not
                # a number, an exception is thrown and the text is not
                # appended to the input.
                if self.min_value <= int(new_text) <= self.max_value:
                    TextInput.insert_text(self, string, from_undo=from_undo)
            except ValueError:  # just cannot convert to an `int`
                pass

class BoundedLayout(BoxLayout):
    pass

presentation = Builder.load_file("gui.kv")

class App(App):
    def build(self):
        return BoundedLayout()

if __name__ == '__main__':
    App().run()
</code></pre>
<h3>gui.kv file</h3>
<pre><code>#:kivy 1.0

<BoundedLayout>:
    orientation: 'horizontal'
    Label:
        text: 'Value'
    NumericInput:
        min_value: 0  # your smaller value, can be negative too
        max_value: 99  # here goes the max value
        hint_text: 'Enter values between {} and {}'.format(self.min_value, self.max_value)
</code></pre>
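<p>Alternatively, sticking closer to the original <code>input_filter</code> idea: a single filter callable can enforce both rules at once. This is my own sketch, not from the answer; the helper below is written as a plain function so the filtering logic can be checked on its own, and it would be hooked up in kv roughly as <code>input_filter: lambda substring, from_undo: two_digit_filter(self.text, substring)</code>.</p>

```python
# A filter that keeps only digit characters and caps the field at two
# characters total (two_digit_filter is a hypothetical helper name).
def two_digit_filter(current_text, substring):
    digits = ''.join(ch for ch in substring if ch.isdigit())  # numbers only
    return digits[:max(0, 2 - len(current_text))]             # at most 2 chars

print(two_digit_filter('', '123'))  # '12'
print(two_digit_filter('1', 'a2'))  # '2'
print(two_digit_filter('12', '3'))  # '' (field already full)
```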
|
python|kivy|kivy-language
| 1 |
1,905,703 | 25,627,854 |
Why does the root logger accept logs from child loggers?
|
<p>I don't understand the interactions between the root logger and child loggers:</p>
<pre><code>import logging
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# 1. SOME LOGGER
logger = logging.getLogger( 'logger' )
logger.setLevel(logging.INFO)
# 2. ROOT LOGGER
rootLogger = logging.getLogger()
rootLogger.setLevel( logging.CRITICAL )
fh = logging.FileHandler('root.log')
fh.setLevel( logging.DEBUG )
fh.setFormatter(formatter)
rootLogger.addHandler(fh)
#
logger.info( 'hello' )
</code></pre>
<p>The last line <code>logger.info( 'hello' )</code> should be </p>
<ul>
<li>accepted by <code>logger</code> because it has <code>logger.setLevel(logging.INFO)</code> </li>
<li>rejected by the <code>rootLogger</code> because it has <code>rootLogger.setLevel( logging.CRITICAL )</code></li>
</ul>
<p>But at the end of my script, I have a file <code>root.log</code> containing <code>hello</code>. Why doesn't the <code>CRITICAL</code> level block the message for <code>rootLogger</code>?</p>
|
<p>This is happening because the <code>logger</code> object only inherits the <code>FileHandler</code> object you assigned to <code>rootLogger</code>, not the log-level. Your <code>Logger</code> object sets its own log-level, so the parent's log-level won't be used at all. This means that logging through <code>logger</code> will check the log-level of <code>logger</code> itself (which is <code>INFO</code>), and then the level of the inherited <code>FileHandler</code> (which is <code>DEBUG</code>); it doesn't check the log-level of the parent <code>rootLogger</code> object. Because the <code>INFO</code> message passes both the <code>logger</code>'s level (<code>INFO</code>) and the handler's level (<code>DEBUG</code>), you see <code>'hello'</code> get logged.</p>
<p>If you don't want <code>logger</code> to inherit the handlers from <code>rootLogger</code>, set the <a href="https://docs.python.org/3/library/logging.html#logging.Logger.propagate" rel="noreferrer"><code>propagate</code></a> attribute of the <code>logger</code> object to <code>0</code>:</p>
<pre><code>logger = logging.getLogger( 'logger' )
logger.setLevel(logging.INFO)
logger.propagate = 0
</code></pre>
<p>If you want the child logger to inherit the parent's log-level, set the child log-level to <code>NOTSET</code>:</p>
<pre><code>logger = logging.getLogger( 'logger' )
logger.setLevel(logging.NOTSET)
</code></pre>
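<p>A small self-contained sketch of this behaviour (the handler class and names are illustrative, not from the question): the record passes the child's own <code>INFO</code> level, and during propagation only the root's <em>handlers</em> are consulted, never the root logger's level.</p>

```python
import logging

root = logging.getLogger()
root.setLevel(logging.CRITICAL)  # does not gate records propagated from children

child = logging.getLogger('child')
child.setLevel(logging.INFO)     # this is the level that is actually checked

records = []
class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

root.addHandler(ListHandler())
child.info('hello')  # passes the child's INFO level, reaches the root handler
print(records)       # ['hello']
```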
|
python|python-2.7|logging
| 6 |
1,905,704 | 25,701,657 |
BackgroundSubtractorMOG still keep the object after it left the frame
|
<p>I tried to use BackgroundSubtractorMOG to remove the background, but some objects that have already left the frame still show up in the result of BackgroundSubtractorMOG.apply() as if they were still in the scene.</p>
<p>Here is my code</p>
<pre><code>import cv2

inputVideo = cv2.VideoCapture('input.avi')
fgbg = cv2.BackgroundSubtractorMOG()

while inputVideo.isOpened():
    retVal, frame = inputVideo.read()
    fgmask = fgbg.apply(frame)
    cv2.imshow('Foreground', fgmask)
    cv2.imshow('Original', frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break
</code></pre>
<p>I've also tried BackgroundSubtractorMOG with custom parameters (history = 200, nmixtures = 5, ratio = 0.8), but the result is the same. Did I do something wrong, or does anyone have a recommendation? Please help.</p>
|
<p>The problem is in <code>fgbg.apply</code>. For some reason, the <code>learningRate</code> is set to <code>0</code>. Make the call like this:</p>
<pre><code>history = 10 # or whatever you want it to be
fgmask = fgbg.apply(frame, learningRate=1.0/history)
</code></pre>
<p>Credit should go to Sebastian Ramirez, who started a ticket in OpenCV and found the solution.</p>
|
python|opencv|background-subtraction
| 9 |
1,905,705 | 44,704,383 |
Read compressed stdin
|
<p>I would like to have such call:</p>
<pre><code>pv -ptebar compressed.csv.gz | python my_script.py
</code></pre>
<p>Inside <code>my_script.py</code> I would like to decompress <code>compressed.csv.gz</code> and parse it using Python csv parser. I would expect something like this:</p>
<pre><code>import csv
import gzip
import sys

with gzip.open(fileobj=sys.stdin, mode='rt') as f:
    reader = csv.reader(f)
    print(next(reader))
    print(next(reader))
    print(next(reader))
</code></pre>
<p>Of course it doesn't work because <code>gzip.open</code> doesn't have <code>fileobj</code> argument. Could you provide some working example solving this issue?</p>
<p><strong>UPDATE</strong></p>
<pre><code>Traceback (most recent call last):
  File "my_script.py", line 8, in <module>
    print(next(reader))
  File "/usr/lib/python3.5/gzip.py", line 287, in read1
    return self._buffer.read1(size)
  File "/usr/lib/python3.5/_compression.py", line 68, in readinto
    data = self.read(len(byte_view))
  File "/usr/lib/python3.5/gzip.py", line 461, in read
    if not self._read_gzip_header():
  File "/usr/lib/python3.5/gzip.py", line 404, in _read_gzip_header
    magic = self._fp.read(2)
  File "/usr/lib/python3.5/gzip.py", line 91, in read
    self.file.read(size-self._length+read)
  File "/usr/lib/python3.5/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
</code></pre>
<p>The traceback above appeared after applying @Rawing's advice.</p>
|
<p>In python 3.3+, you can pass a file object to <a href="https://docs.python.org/3.5/library/gzip.html#gzip.open" rel="nofollow noreferrer"><code>gzip.open</code></a>:</p>
<blockquote>
<p>The filename argument can be an actual filename (a str or bytes object), or an existing file object to read from or write to.</p>
</blockquote>
<p>So your code should work if you just omit the <code>fileobj=</code>:</p>
<pre><code>with gzip.open(sys.stdin, mode='rt') as f:
</code></pre>
<p>Or, a slightly more efficient solution:</p>
<pre><code>with gzip.open(sys.stdin.buffer, mode='rb') as f:
</code></pre>
<hr>
<p>If for some odd reason you're using a python older than 3.3, you can directly invoke the <a href="https://docs.python.org/2.7/library/gzip.html#gzip.GzipFile" rel="nofollow noreferrer"><code>gzip.GzipFile</code> constructor</a>. However, these old versions of the <code>gzip</code> module didn't have support for files opened in text mode, so we'll use <code>sys.stdin</code>'s underlying buffer instead:</p>
<pre><code>with gzip.GzipFile(fileobj=sys.stdin.buffer) as f:
</code></pre>
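<p>To see the file-object form in action without a real pipe, here is a sketch (my own illustration) using <code>io.BytesIO</code> to stand in for <code>sys.stdin.buffer</code>:</p>

```python
import csv
import gzip
import io

# Build a small gzipped CSV in memory, then read it back the same way the
# script would read sys.stdin.buffer.
raw = io.BytesIO()
with gzip.open(raw, mode='wt', newline='') as gz:
    csv.writer(gz).writerows([['a', 'b'], ['1', '2']])
raw.seek(0)

with gzip.open(raw, mode='rt', newline='') as f:
    rows = list(csv.reader(f))
print(rows)  # [['a', 'b'], ['1', '2']]
```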
|
python|python-3.x|file|unix|gzip
| 2 |
1,905,706 | 24,175,212 |
Python like inheritance for JavaScript
|
<p>In python I can do something like this</p>
<h1>main.py</h1>
<pre><code>class MainClass:
    def __init__(self):
        self.name = "some_name"

    def startDoingStuff(self):
        print("I'm doing something boring")

    def printName(self):
        print("My name is " + self.name)
</code></pre>
<h1>sub.py</h1>
<pre><code>import main

class Sub(main.MainClass):
    def startDoingStuff(self):
        print("I'm doing something interesting")
        self.name = "sub"

sub = Sub()
sub.printName()  # prints 'My name is some_name'
sub.startDoingStuff()
sub.printName()  # prints 'My name is sub'
</code></pre>
<p>Is there a JavaScript equivalent? </p>
|
<p>If prototype-based inheritance is a little daunting, you might look into <a href="http://ejohn.org/blog/simple-javascript-inheritance/" rel="nofollow">extension-based inheritance</a>. </p>
<p>A really basic implementation looks like this. (John Resig's implementation linked above is more robust, but I think this is a little more readable, with the same basic concept.)</p>
<pre><code>var extend = function(subTypeInit) {
    var SuperType = this;
    var SubType = function () {
        function SuperTypeProxy(args) {
            return SuperType.apply(this, args);
        }
        var base = new SuperTypeProxy(arguments);
        subTypeInit.apply(base, arguments);
        return base;
    };
    SubType.extend = extend.bind(SubType);
    return SubType;
}
</code></pre>
<p>Then it can be used like this:</p>
<pre><code>var Main = function (name) {
    var self = this;
    self.name = name;
    self.doSomething = function () {
        console.log("something boring");
    };
    self.printName = function () {
        console.log("Hi, I'm " + name);
    };
};

Main.extend = extend.bind(Main); // Manually attach to first parent.

var Sub = Main.extend(function () {
    var self = this;
    self.doSomething = function () {
        console.log("something interesting");
    };
    var superPrintName = self.printName;
    self.printName = function () {
        superPrintName();
        console.log("And I'm a sub class");
    };
});

var sub = new Sub("foo");
sub.doSomething(); // logs "something interesting"
sub.printName();   // logs "Hi, I'm foo" "And I'm a sub class"
</code></pre>
<p>Some caveats here: you really should look into prototype-based inheritance, which is what JavaScript is actually built for. Extension-based inheritance is a little more natural for someone who's used to other OO languages' approaches to inheritance, but the disadvantage is that it consumes more resources to do inheritance this way; you're creating a lot of functions (and a lot of closures), which can really add up.</p>
|
javascript|python|node.js
| 3 |
1,905,707 | 24,190,802 |
Receiver loses the same last bytes of a file sent via TCP socket in python
|
<p>I send a .txt file (about 87 kbyte in size) from a client to a server over TCP with the following code (Python):</p>
<p><strong>Client:</strong></p>
<pre><code>f = open(filename, 'r')
while 1:
    data = f.read(1024)
    if not data:
        data = '*Endoffile*!'
        con.send('%1024s' % data)
        f.close()
        break
    else:
        con.send('%1024s' % data)
</code></pre>
<p><strong>Server:</strong></p>
<pre><code>f = open(filename, 'w')
while 1:
    data = c.recv(1045)
    if data == '%1024s' % '*Endoffile*!':
        f.close()
        break
    else:
        f.write(data)
</code></pre>
<p>The problem is that the received file is always cut at the same point (it always stops at the same word when I open it), about 1.6 kbyte smaller than the size of the complete file.
Does anyone have a suggestion of what to do? </p>
|
<p><strong><em>Solved</em></strong> by these changes in the <strong>server</strong>-side code:</p>
<pre><code>while 1:
    f = open(filename, 'wb')
    data = c.recv(1024)
    while ('*Endoffile*!' in data) == False:
        f.write(data)
        data = c.recv(1024)
    print data
    data = data.replace('*Endoffile*!', '')
    f.write(data)
    f.close()
</code></pre>
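<p>The underlying issue is worth a sketch (my own illustration, not part of the original answer): <code>recv()</code> returns whatever bytes happen to be available, not one 1024-byte chunk per <code>send()</code>, so the receiver must keep reading until the sentinel or end of stream arrives. A <code>socketpair</code> simulates the client and server in one process:</p>

```python
import socket

# recv() may split or merge sends arbitrarily, so collect until EOF and
# strip the sentinel at the end.
client, server = socket.socketpair()
client.sendall(b'hello world' + b'*Endoffile*!')
client.close()  # closing signals EOF to the receiver

chunks = []
while True:
    data = server.recv(1024)
    if not data:  # empty bytes means the peer closed the connection
        break
    chunks.append(data)
server.close()

payload = b''.join(chunks).replace(b'*Endoffile*!', b'')
print(payload)  # b'hello world'
```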
|
python|sockets|tcp
| 2 |
1,905,708 | 71,880,297 |
Write List of Jsons using Strings
|
<p>I'm staring at a file I'm not sure what to do with. The file (data.txt) contains a list of dictionaries like this:</p>
<pre><code>{'name': 'jerry', 'internalip1': '192.168.1.1', 'externalip1': '1.1.1.1', 'internalip2': '192.168.1.2', 'externalip2': '2.2.2.2', 'port1': '5451', 'port2': '5450', 'port3': '', 'port4': '', 'port5': '', 'port6': ''}
{'name': 'chris', 'internalip1': '192.168.2.1', 'externalip1': '3.3.3.3', 'internalip2': '192.168.3.5', 'externalip2': '4.4.4.4', 'port1': '1234', 'port2': '', 'port3': '5671', 'port4': '5672', 'port5': '80', 'port6': '443'}
...
...
</code></pre>
<p>How can I create a new text file (data_updated.txt) that looks like this:</p>
<pre><code>edit "externalip1--internalip1 Port port1"
set extip externalip1
set mappedip "internalip1"
set portforward enable
set extport port1
set mappedport port1
edit "externalip1--internalip1 Port port2"
set extip externalip1
set mappedip "internalip1"
set portforward enable
set extport port2
set mappedport port2
...
...
</code></pre>
|
<p>You can do something like this:</p>
<pre class="lang-py prettyprint-override"><code>with open('data.txt') as data_file:
    lines = data_file.readlines()

output = []
for line in lines:
    data = eval(line.strip())
    output.extend([
        f"edit \"{data['externalip1']}--{data['internalip1']} Port {data['port1']}\"\n",
        f"set extip {data['externalip1']}\n",
        f"set mappedip \"{data['internalip1']}\"\n",
        "set portforward enable\n",
        f"set extport {data['port1']}\n",
        f"set mappedport {data['port1']}\n",
        '\n',
        # ...
    ])

with open('data_updated.txt', 'w') as output_file:
    output_file.writelines(output)
</code></pre>
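<p>One caution about <code>eval()</code> on file contents: if the file isn't fully trusted, <code>ast.literal_eval</code> is a safer drop-in, since it only accepts Python literals and cannot execute code. A quick sketch with a hypothetical line:</p>

```python
import ast

line = "{'name': 'jerry', 'port1': '5451'}"  # hypothetical data.txt line
data = ast.literal_eval(line)  # parses literals only; no code execution
print(data['name'], data['port1'])  # jerry 5451
```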
|
python|python-3.x
| 0 |
1,905,709 | 72,067,521 |
ModuleNotFoundError: No module named 'django.core'
|
<p>I want to create a Django project, so I've configured a virtualenv and installed Django with <code>pipenv install django==4.0.1</code>. When I create an app using the command <code>python3 manage.py startapp Accounts</code>,
I get this error:</p>
<pre><code>(env) zakaria@ZAKARIA:/mnt/c/Users/ZAKARIA/Desktop/project$ python manage.py startapp Accounts
Traceback (most recent call last):
  File "manage.py", line 11, in main
    from django.core.management import execute_from_command_line
ModuleNotFoundError: No module named 'django.core'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    main()
  File "manage.py", line 13, in main
    raise ImportError(
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
</code></pre>
<blockquote>
<p>Can anyone help solve this problem?</p>
</blockquote>
|
<p>Instead of <code>python3 manage.py startapp Accounts</code> try <code>python manage.py startapp Accounts</code> with your <code>venv</code> activated.</p>
<p>To explain why this matters, let's go through an exercise. Starting without a <code>venv</code> activated, try this process (you may need to use the <code>deactivate</code> command first if you're already in a <code>venv</code>):</p>
<pre><code>python -m venv my_venv
# The following line assumes you're on Linux or Mac; it appears you're using WSL-2, which is fine
. my_venv/bin/activate
# The following command should show the path to the Python binary in your venv
which python
# The following command may show that you're not hitting the Python version in your venv, but somewhere else
which python3
</code></pre>
<p>You want to make sure you're using the Python binary that is inside your <code>venv</code>. Good luck!</p>
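<p>From inside Python you can also verify which interpreter is actually running; a quick sketch:</p>

```python
import sys

# If this path does not point inside your venv directory, packages installed
# into the venv (like Django) will not be importable.
print(sys.executable)

# True when running inside a venv/virtualenv (prefix differs from the base).
in_venv = sys.prefix != getattr(sys, 'base_prefix', sys.prefix)
print(in_venv)
```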
|
python|django
| 1 |
1,905,710 | 29,637,191 |
Python PIL putdata() method not saving the right data
|
<p>I want to save a 10x10 RGB image with PIL's new, putdata, and save methods. When I save, it does not take in the right data and I am not sure where I am going wrong.</p>
<p>The pixel data that I want to save:</p>
<pre><code>flat_pixels=
[(0, 1, 0), (1, 0, 1), (0, 0, 0), (1, 1, 0), (0, 1, 0), (1, 0, 1), (1, 1, 0), (0, 1, 1), (0, 1, 1), (1, 0, 1), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)]
</code></pre>
<p>There are 100 tuples that I want to fit into a 10x10 image, like so:</p>
<pre><code>new_im = Image.new("RGB",(10,10))
new_im.putdata(flat_pixels)
new_im.save("test.jpg")
</code></pre>
<p>When I open up "test.jpg" I get an image with all values being 0:</p>
<pre><code>show = Image.open("test.jpg")
print list(show.getdata())
</code></pre>
<p>prints: </p>
<pre><code>[(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)]
</code></pre>
|
<p>You actually aren't doing anything wrong in the code; it's just that JPEG is a lossy format, and you're expecting individual pixels on a black background to remain intact through the lossy JPEG compression.</p>
<p>Try it with a PNG-format file instead:</p>
<pre><code>new_im.save("test.png")
</code></pre>
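<p>A round-trip sketch of the difference (my own illustration; requires Pillow): saving to PNG in memory preserves every pixel exactly, which is what the JPEG version failed to do.</p>

```python
import io
from PIL import Image

pixels = [(0, 0, 0)] * 100
pixels[1] = (0, 1, 0)  # one almost-black pixel, like the original data

im = Image.new('RGB', (10, 10))
im.putdata(pixels)

buf = io.BytesIO()
im.save(buf, format='PNG')  # lossless, unlike JPEG
buf.seek(0)
restored = list(Image.open(buf).getdata())
print(restored == pixels)  # True
```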
|
python|image-processing|python-imaging-library|rgb
| 2 |
1,905,711 | 46,509,972 |
Python: NoneType AttributeError when implementing priority queue with linked nodes
|
<pre><code>class Node:
    def __init__(self, data=None, priority='L', next=None):
        self.data = data
        self.priority = priority
        self.next = next

    def __str__(self):
        return str(self.data)

class P_Queue:
    def __init__(self, head=None):
        self.head = head
        self.length = 0

    def enqueue(self, node):
        newNode = Node(node.data, node.priority)
        if (self.head == None):
            self.head = newNode
        elif (self.head and self.head.priority == 'L' and newNode.priority == 'H'):
            newNode.next = self.head
            self.head = newNode
        elif (self.head and self.head.priority == 'H' and newNode.priority == 'H'):
            last = self.head
            while (last.next and last.next.priority == 'H'):
                last = last.next
            if (last.next and last.next.next):
                newNode.next = last.next.next
            last.next = newNode
        else:
            last = self.head
            while last.next:
                last = last.next
            last.next = newNode
        self.length += 1

    def dequeue(self):
        node = self.head
        print("next head: ")
        print(self.head.next)
        self.head = self.head.next
        self.length = self.length - 1
        return node

    def is_empty(self):
        return self.length == 0

def main():
    node0 = Node(0, 'L')
    node1 = Node(1, 'H')
    node2 = Node(2, 'H')
    queue = P_Queue()
    queue.enqueue(node0)
    queue.enqueue(node1)
    queue.enqueue(node2)
    print(queue.dequeue())
    print(queue.dequeue())
    print(queue.dequeue())

main()
</code></pre>
<p>The problem occurs for the last line of the displayed code, at the while statement in enqueue(): I get the error "'NoneType' object has no attribute 'next'", but only for enqueue(node1).</p>
<p>However, according to my print statements (output: H) for node0 = Node(0, 'H'), I clearly have a value of 'H' for that attribute (priority), and it does not contain a 'None' value, so it's just mind-boggling to me.</p>
<p>Please help... and if anyone has a good beginner resource for learning how to implement a priority queue with a linked list, that would be great too. Thank you so much; I'm dying here.</p>
<p>Traceback below:</p>
<pre><code>next head: 
2
1
next head: 
None
2
next head: 
Traceback (most recent call last):
  File "assignment1_3 queues.py", line 62, in <module>
    main()
  File "assignment1_3 queues.py", line 60, in main
    print(queue.dequeue())
  File "assignment1_3 queues.py", line 39, in dequeue
    print(self.head.next)
AttributeError: 'NoneType' object has no attribute 'next'
------------------
(program exited with code: 1)
Press any key to continue . . .
</code></pre>
|
<p>Your while loop keeps forwarding <code>last = last.next</code> until you reach <code>None</code>. Before progressing <code>last</code> to <code>last.next</code>, verify there is a node there. I've modified this part of your code:</p>
<pre><code>elif (self.head.priority == 'H' and newNode.priority == 'H'):
    last = self.head
    print(self.head.priority)
    print(last.priority)
    while last.priority == 'H' and last.next:  # <-- check last.next exists before pointing to it
        last = last.next
    if last.next and last.next.next:  # <-- same thing here
        newNode.next = last.next.next
    last.next = newNode
</code></pre>
<p>and this is the output:</p>
<pre><code>>>> main()
H
H
</code></pre>
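<p>As a learning aside (not the linked-list approach from the question): Python's <code>heapq</code> module gives a working priority queue in a few lines. Lower sort keys pop first, so map <code>'H'</code> to 0 and <code>'L'</code> to 1, and use a counter to keep insertion order stable for equal priorities:</p>

```python
import heapq

# (data, priority) pairs, mirroring the nodes created in main().
items = [(0, 'L'), (1, 'H'), (2, 'H')]
heap = []
for count, (data, prio) in enumerate(items):
    # 'H' maps to 0 so it sorts before 'L' (1); count breaks ties FIFO.
    heapq.heappush(heap, (0 if prio == 'H' else 1, count, data))

order = [heapq.heappop(heap)[2] for _ in range(len(heap))]
print(order)  # [1, 2, 0] - both 'H' items come out before the 'L' one
```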
|
python|runtime-error|non-type
| 1 |
1,905,712 | 46,504,556 |
One Hot Encoding in Tensor flow for Batch Training
|
<p>My training data contains ~1500 labels (strings, one label per record) and I want to do batch training (just load one batch into memory to update the weights in a neural network). I was wondering if there is a class in TensorFlow to do one-hot encoding for the labels in each batch? Something like in sklearn, where we can do</p>
<pre><code>onehot_encoder = OneHotEncoder(sparse=False)
onehot_encoder.fit(entire training labels)
</code></pre>
<p>And then, for each batch in the TensorFlow session, I can transform my batch labels and feed them in for training:</p>
<pre><code>batch_label = onehot_encoder.transform(batch training labels)
sess.run(feed_dict={x: ..., y: batch_label})
</code></pre>
<p>An example would be appreciated. Thanks.</p>
|
<p>I think this post is similar to this one: <a href="https://stackoverflow.com/questions/33681517/tensorflow-one-hot-encoder">Tensorflow One Hot Encoder?</a></p>
<p>A short answer from this link: <a href="http://www.tensorflow.org/api_docs/python/tf/one_hot" rel="nofollow noreferrer">http://www.tensorflow.org/api_docs/python/tf/one_hot</a></p>
<pre><code>indices = [0, 1, 2]
depth = 3
tf.one_hot(indices, depth)
# output: [3 x 3]
# [[1., 0., 0.],
# [0., 1., 0.],
# [0., 0., 1.]]
</code></pre>
<p>Just posting it to save your time ;)</p>
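<p>If you only need the same transformation outside the graph (e.g. to build <code>feed_dict</code> batches), a NumPy sketch gives an equivalent result, assuming the string labels were already mapped to integer indices <code>0..depth-1</code>:</p>

```python
import numpy as np

indices = np.array([0, 1, 2])  # integer-encoded batch labels (hypothetical)
depth = 3
batch_one_hot = np.eye(depth)[indices]  # row i of the identity is one-hot for i
print(batch_one_hot)
```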
|
python|tensorflow|one-hot-encoding|mini-batch
| 0 |
1,905,713 | 46,202,537 |
saving data from for loop to single list
|
<p>I have a file upload page where users upload their files, generally a bunch of files at a time. In my Python code I am trying to pull a tag out of each file and then save it into a list. Everything works fine, but I am getting three different output lists for the 3 files uploaded. How do I combine the 3 output lists into just one? Here is my code:</p>
<pre><code>a = self.filename
print(a)  # this prints out the uploaded file names (ex: a.xml, b.xml, c.xml)

soc_list = []
for soc_id in self.tree.iter(tag='SOC_ID'):
    req_soc_id = soc_id.text
    soc_list.append(req_soc_id)
print(soc_list)
</code></pre>
<p>the output I get is:</p>
<pre><code> a.xml
['1','2','3']
b.xml
[4,5,6]
c.xml
[7,8,9]
</code></pre>
<p>I would like to combine all into just one list</p>
|
<p>As far as I can tell, you want to write all the soc_list values to a single file and then read the file back. Doing this would be the best way for you <strong>because you will not know the user's file uploads in advance, as you mentioned in your question</strong>. To do so, try to understand and implement the code below to save to your file:</p>
<pre><code>import os

save_path = "your_path_goes_here"
name_of_file = "your_file_name"
completeName = os.path.join(save_path, name_of_file + ".txt")

file1 = open(completeName, 'a')
for soc_id in self.tree.iter(tag='SOC_ID'):
    req_soc_id = soc_id.text
    soc_list.append(req_soc_id)
    file1.write(req_soc_id)
    file1.write("\n")
file1.close()
</code></pre>
<p>This way you can always write things to your file. To read your data back and convert it into a list, follow this example:</p>
<pre><code> examplefile = open(fileName, 'r')
yourResult = [line.split('in_your_case_newline_split') for line in examplefile.readlines()]
</code></pre>
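<p>For example, a minimal, self-contained sketch of the read-back step (the filename and ids are illustrative): write one soc_id per line, then split on newlines to get a single combined list.</p>

```python
import os
import tempfile

# Hypothetical file with one soc_id per line, as written by the loop above
path = os.path.join(tempfile.gettempdir(), "your_file_name.txt")
with open(path, "w") as f:
    for soc_id in ["1", "2", "3", "4", "5", "6"]:
        f.write(soc_id + "\n")

# read the file back into one flat list
with open(path) as f:
    soc_list = [line.strip() for line in f if line.strip()]
# soc_list is now one combined list: ['1', '2', '3', '4', '5', '6']
```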
|
python
| 1 |
1,905,714 | 61,171,894 |
Python pandas - Module not imported when executing from php
|
<p>I have two php scripts. Both execute a python3 file, and both of them send emails through the sendgrid api and write email ids into a mysql database table.</p>
<p>In one of the files, I am writing the email ids into mysql through a php mysqli_query itself. Apart from this I am also sending the mails. This works. The initial few lines of this file looks as follows:</p>
<pre><code>import argparse
import os
import sendgrid as sendgrid
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail
</code></pre>
<p>In the second file, I want to do some data realignment and hence, I have used <code>pandas</code> and <code>MySQLdb</code> to manipulate the data before writing to database. The initial few lines of this file looks like this:</p>
<pre><code>import os
import argparse
#import MySQLdb
#import pandas as pd
import sendgrid as sendgrid
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail
ap = argparse.ArgumentParser()
ap.add_argument("-f","--inputFile",required=True,help="name of the annotations file")
ap.add_argument("-c","--clientid",required=True,help="name of the operator")
</code></pre>
<p>In this script I am getting the <code>ModuleNotFoundError: No module named 'MySQLdb'</code> and so is for <code>pandas</code>. However, if I disable these two imports, the rest works fine.</p>
<p>I have installed both the libraries using:</p>
<pre><code>sudo pip3 install pandas
sudo pip3 install mysql-client-dev (not sure about this path, but it works properly)
</code></pre>
<p>When I list packages using <code>sudo pip3 list</code> and <code>pip3 list</code> I can see all the packages including <code>sendgrid</code>, <code>pandas</code> and <code>mysqlclient</code>. When I run from a python3 console, I could import all 3 packages without any trouble.</p>
<p>All the files in the path are owned by <code>www-data:www-data</code> and all the files have 777 rights. Still whenever I run the script with <code>import pandas as pd</code> I get the module not found error. </p>
|
<p>This problem got resolved by installing the python packages globally. To install a python package globally, for example, pandas, use</p>
<pre><code>sudo -H pip3 install pandas
</code></pre>
<p>When I installed, all my programs started working instantly. I was under the impression <code>sudo pip3 install pandas</code> will install at the root level and hence available globally, which was not the case.</p>
|
python|php|pandas
| 0 |
1,905,715 | 49,670,586 |
python numpy.savetext a matrix with mixed format
|
<p>I am trying to save, as text, a matrix which has 288 floats and 1 string at the end of each row. I've used savetxt like this:</p>
<pre><code>np.savetxt('name', matrix, delimiter=' ', header='string', comments='', fmt= '%3f'*288 + '%s')
</code></pre>
<p>but when I try to run the code it raises exceptions like this:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\numpy\lib\npyio.py", line 1371, in savetxt
v = format % tuple(row) + newline
TypeError: must be real number, not numpy.str_
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\numpy\lib\npyio.py", line 1375, in savetxt
% (str(X.dtype), format))
TypeError: Mismatch between array dtype ('<U32') and format specifier ('%3f(repeated 288 times without spaces)%s')
</code></pre>
<p>I really can't understand where I am wrong.</p>
|
<p>Your error message says that you are providing string data (<code>dtype ('<U32')</code>, where <code>U32</code> stands for a <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.dtypes.html" rel="noreferrer">Unicode string</a>), but your format specifier expects floating-point numbers followed by a string (<code>'%3f(repeated 288 times without spaces)%s'</code>).</p>
<p>Since your matrix already contains strings, there is no point in trying to apply numeric formatting. If you are not satisfied with the floating-point digits, you should format the numbers before putting them into this matrix.</p>
<p>So, in your case to just write your present matrix just use:</p>
<pre><code>np.savetxt('name', matrix, delimiter=' ', header='string', comments='', fmt='%s')
</code></pre>
<p>which will treat each element as a string (which they actually are) and write it to the text file.</p>
<p>Also maybe this <a href="https://stackoverflow.com/questions/16621351/how-to-use-python-numpy-savetxt-to-write-strings-and-float-number-to-an-ascii-fi">answer</a> provide some clues if you are not satisfied.</p>
|
python|numpy
| 5 |
1,905,716 | 49,708,682 |
Recursive Removal in Binary Search tree
|
<p>I am working on a binary search tree and I have gotten a little stuck in my recursive remove method. Everything seems to work except when I try to remove from the very top root. When I remove the root, I am supposed to replace it with the smallest value to the right of the root. It works for every other sub-root, but when I try to remove the first root it won't replace the value, although it does remove the value that is supposed to be used as the replacement. I would really appreciate some advice.</p>
<pre><code>def remove_element(self, t):
if self.__root == None:
raise ValueError
else:
self.__remove_element(t, self.__root)
return self.__root
</code></pre>
|
<p>Your <code>remove_element</code> never assigns to <code>self.__root</code> (and nothing assigns to any “value” attribute), so of course the root never changes.</p>
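<p>A minimal, self-contained sketch of the usual pattern (the class and method names here are illustrative, not the asker's original code): the recursive helper returns the possibly-new root of each subtree, and the caller assigns that return value back, so removing the top root correctly replaces <code>self.root</code> too.</p>

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None


class BST:
    def __init__(self):
        self.root = None

    def insert(self, value):
        self.root = self._insert(self.root, value)

    def _insert(self, node, value):
        if node is None:
            return Node(value)
        if value < node.value:
            node.left = self._insert(node.left, value)
        else:
            node.right = self._insert(node.right, value)
        return node

    def remove(self, value):
        # key point: assign the returned subtree back to the root
        self.root = self._remove(self.root, value)

    def _remove(self, node, value):
        if node is None:
            return None
        if value < node.value:
            node.left = self._remove(node.left, value)
        elif value > node.value:
            node.right = self._remove(node.right, value)
        else:
            if node.left is None:
                return node.right
            if node.right is None:
                return node.left
            # two children: copy the smallest value of the right subtree here,
            # then delete that value from the right subtree
            succ = node.right
            while succ.left is not None:
                succ = succ.left
            node.value = succ.value
            node.right = self._remove(node.right, succ.value)
        return node


tree = BST()
for v in [50, 30, 70, 60, 80]:
    tree.insert(v)
tree.remove(50)              # removing the top root now works
new_root = tree.root.value   # 60, the smallest value that was to its right
```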
|
python|recursion|binary-search-tree
| 0 |
1,905,717 | 49,366,473 |
Django: Selectively Apply CSS Styles to Quiz Radio Buttons
|
<p>I have users take a quiz. After each question, I want to show them whether their answer was correct or incorrect. The correct answer should be highlighted in green, and their answer (if incorrect) should be highlighted in red (using Twitter Bootstrap styles).</p>
<p>I am currently rendering the quiz results page in Django and HTML like so:</p>
<pre><code>{{ player.question }}
<div class="myradio">
<label for="choice_1"><input id="id_1" type="radio" value="q1" disabled /> {{q1}}</label>
</div>
<div class="myradio">
<label for="choice_2"><input id="id_2" type="radio" value="q2" disabled /> {{q2}}</label>
</div>
<div class="myradio">
<label for="choice_3"><input id="id_3" type="radio" value="q3" disabled /> {{q3}}</label>
</div>
<div class="myradio">
<label for="choice_4"><input id="id_4" type="radio" value="q4" disabled /> {{q4}}</label>
</div>
<p><b>Correct Answer: {{solution}}<b></p>
Total Score: {{total_score}}
</code></pre>
<p>I am storing the solution to the question in <code>{{solution}}</code>. I have been trying to figure out how to selectively apply CSS styles: if, for example, <code>{{q1}} == {{solution}}</code>, that choice should be highlighted green. I can grab the participant's answer with <code>{{player.submitted_answer}}</code>, and so want to highlight a div element red if <code>{{player.submitted_answer}} != {{solution}}</code>.</p>
<p>I have tried messing around with if statement blocks, but can't seem to get it right. Any ideas?</p>
<p>@cezar, a snippet of pages.py and models.py</p>
<p>In pages.py I have the following class</p>
<pre><code>class Question(Page):
timeout_seconds = 120
template_name = 'quiz/Question.html'
form_model = 'player'
form_fields = ['submitted_answer', 'confidence']
def submitted_answer_choices(self):
qd = self.player.current_question()
return [
qd['choice1'],
qd['choice2'],
qd['choice3'],
qd['choice4'],
]
def confidence_error_message(self, value):
if value == 50:
return 'Please indicate your confidence in your answer. It is important you answer accurately.'
def before_next_page(self):
self.player.check_correct()
</code></pre>
<p>In models.py, the relevant class is Player and Subsession:</p>
<pre><code>class Player(BasePlayer):
trial_counter = models.IntegerField(initial=0)
question_id = models.IntegerField()
confidence = models.FloatField(widget=widgets.Slider(attrs={'step': '0.01'}))
confidence_private = models.FloatField(widget=widgets.Slider(attrs={'step': '0.01'}))
question = models.StringField()
solution = models.StringField()
submitted_answer = models.StringField(widget=widgets.RadioSelect)
submitted_answer_private = models.StringField(widget=widgets.RadioSelect)
is_correct = models.BooleanField()
total_score = models.IntegerField(initial = 0)
def other_player(self):
return self.get_others_in_group()[0]
def current_question(self):
return self.session.vars['questions'][self.round_number - 1]
def check_correct(self):
self.is_correct = self.submitted_answer == self.solution
def check_partner_correspondence(self):
self.submitted_answer == self.get_others_in_group()[0].submitted_answer
def check_partner_correct(self):
self.get_others_in_group()[0].submitted_answer == self.solution
def check_if_awarded_points(self):
self.get_others_in_group()[0].submitted_answer == self.submitted_answer == self.solution
def score_points(self):
if self.get_others_in_group()[0].submitted_answer == self.submitted_answer == self.solution:
self.total_score +=1
else:
self.total_score -=1
def set_payoff(self):
if(self.check_if_awarded_points()):
self.total_score +=1
class Subsession(BaseSubsession):
def creating_session(self):
if self.round_number == 1:
self.session.vars['questions'] = Constants.questions
## ALTERNATIVE DESIGN:
## to randomize the order of the questions, you could instead do:
# import random
# randomized_questions = random.sample(Constants.questions, len(Constants.questions))
# self.session.vars['questions'] = randomized_questions
## and to randomize differently for each participant, you could use
## the random.sample technique, but assign into participant.vars
## instead of session.vars.
for p in self.get_players():
question_data = p.current_question()
p.question_id = question_data['id']
p.question = question_data['question']
p.solution = question_data['solution']
</code></pre>
|
<p>I agree with @Sapna-Sharma and @albar comment.</p>
<p>You may use a simple CSS class to set the color to green and use an <code>{% if [...] %}</code> template tag to add the CSS class only to the correct answer.</p>
<p>You may refer to the <a href="https://docs.djangoproject.com/en/2.0/ref/templates/builtins/" rel="nofollow noreferrer">Django official documentation</a> to see how to handle built-in template tags and filters.</p>
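<p>For example, something along these lines (a hedged sketch based on the variables shown in the question; the Bootstrap class names are assumptions and may need adjusting to your Bootstrap version):</p>

```html
<div class="myradio
            {% if q1 == solution %}alert-success
            {% elif q1 == player.submitted_answer %}alert-danger
            {% endif %}">
    <label for="choice_1"><input id="id_1" type="radio" value="q1" disabled /> {{ q1 }}</label>
</div>
```

<p>The first branch marks the correct choice green; the <code>elif</code> only fires when the participant's answer is not the solution, so an incorrect submission is highlighted red.</p>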
|
python|html|django
| 2 |
1,905,718 | 21,048,035 |
What's the strange double underscored attribute (__attribute__) in Python?
|
<p>I see some strange but useful double underscored attribute in Python, such as:</p>
<pre><code>__module__
__init__
__str__
__class__
__repr__
...
</code></pre>
<p>They seem to be some special attributes. What's the canonical name for them?</p>
|
<p>They are called <strong>Special Methods</strong>.</p>
<p>Python is a <strong>Duck Typed</strong> language, and many of the user-facing features of the language are implemented via "protocols" backed by these special methods.</p>
<p>See: <a href="http://docs.python.org/release/2.5.2/ref/specialnames.html">http://docs.python.org/release/2.5.2/ref/specialnames.html</a></p>
<p>As an <strong>Example</strong>:</p>
<p>To mimic comparison of arbitrary objects you implement the following two methods in your class:</p>
<ul>
<li><code>__lt__</code></li>
<li><code>__eq__</code></li>
</ul>
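<p>For instance (a small illustrative sketch, not from the linked docs): defining just <code>__lt__</code> and <code>__eq__</code> is enough for the comparison operators, <code>min()</code>, and <code>sorted()</code> to work on your own objects.</p>

```python
class Version:
    def __init__(self, major, minor):
        self.major = major
        self.minor = minor

    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)

    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)

    def __repr__(self):
        return "Version(%d, %d)" % (self.major, self.minor)


versions = [Version(2, 0), Version(1, 5), Version(1, 10)]
oldest = min(versions)                    # uses __lt__ -> Version(1, 5)
assert Version(2, 0) == Version(2, 0)     # uses __eq__
```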
|
python
| 5 |
1,905,719 | 21,406,926 |
Download image by Python CGI
|
<p>I have a Python cgi that converts an svg to a png file; I'd then like to download the converted output to the user's disk.</p>
<pre><code>#the conversion stuff
print "Content-Type: image/png"
print "Content-Disposition: attachment; filename='pythonchart.png'"
print
print open("http:\\localhost\myproj\pythonchart.png").read()
</code></pre>
<p>This results in a png file containing <code>‰PNG</code>.</p>
<p>Any help please?</p>
|
<p>You should try opening in binary mode <code>open('filename', 'rb').read()</code></p>
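<p>For example, a self-contained sketch (note also that <code>open()</code> takes a filesystem path, not an <code>http://</code> URL, so the path below is illustrative):</p>

```python
import os
import sys
import tempfile

# simulate an existing PNG on disk (these are the first bytes of the PNG signature)
path = os.path.join(tempfile.gettempdir(), "pythonchart.png")
with open(path, "wb") as f:
    f.write(b"\x89PNG\r\n\x1a\n")

# read it back in *binary* mode so the bytes come through intact
with open(path, "rb") as f:
    payload = f.read()

sys.stdout.write("Content-Type: image/png\r\n")
sys.stdout.write('Content-Disposition: attachment; filename="pythonchart.png"\r\n\r\n')
sys.stdout.flush()
# in Python 3 the raw bytes would then go to sys.stdout.buffer.write(payload)
```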
|
python|image|svg|http-headers|cgi
| 1 |
1,905,720 | 46,102,267 |
Subset pandas df using concatenation of column indices slices
|
<p>I have a large dataframe that I am trying to subset using only column indices. I am using the following code:</p>
<pre><code>df = df.ix[:, [3,21:28,30:34,36:57,61:64,67:]]
</code></pre>
<p>The code is pretty self explanatory. I am trying to subset the df by keeping columns 3, 21 through 28 and so on. However, I am getting the following error:</p>
<pre><code> File "<ipython-input-44-3108b602b220>", line 1
df = df.ix[:, [3,21:28,30:34,36:57,61:64,67:]]
^
SyntaxError: invalid syntax
</code></pre>
<p>What am I missing?</p>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.r_.html" rel="nofollow noreferrer">numpy.r_[...]</a>:</p>
<pre><code>df = df.iloc[:, np.r_[3,21:28,30:34,36:57,61:64,67:df.shape[1]]]
</code></pre>
<p>Demo:</p>
<pre><code>In [39]: df = pd.DataFrame(np.random.randint(5, size=(2, 100)))
In [40]: df
Out[40]:
0 1 2 3 4 5 6 7 8 9 ... 90 91 92 93 94 95 96 97 98 99
0 3 1 0 3 2 4 1 2 1 3 ... 2 1 4 2 1 2 1 3 3 4
1 0 2 4 1 1 1 0 0 3 4 ... 4 4 0 3 2 3 0 2 0 1
[2 rows x 100 columns]
In [41]: df.iloc[:, np.r_[3,21:28,30:34,36:57,61:64,67:df.shape[1]]]
Out[41]:
3 21 22 23 24 25 26 27 30 31 ... 90 91 92 93 94 95 96 97 98 99
0 3 4 1 2 0 3 0 3 2 2 ... 2 1 4 2 1 2 1 3 3 4
1 1 1 0 2 1 4 4 4 1 3 ... 4 4 0 3 2 3 0 2 0 1
[2 rows x 69 columns]
</code></pre>
|
python|pandas|dataframe|subset|indices
| 0 |
1,905,721 | 73,752,801 |
How to make this Python script to run subfolders too?
|
<p>Which part of the code do I need to change in order to include subfolders?</p>
<p>file_handler.py</p>
<pre><code>import glob
import os
import sys
from typing import List
def get_filenames(filepath: str, pattern: str) -> List[str]:
"""Returns all filenames that matches the pattern in current folder.
Args:
filepath (str): folder path.
pattern (str): filename pattern.
Returns:
List[str]: list of paths.
"""
filenames = glob.glob(os.path.join(filepath, pattern))
if filenames:
return filenames
return sys.exit("Error: no file found, check the documentation for more info.")
</code></pre>
<p>Main.py</p>
<pre><code>import math
import click
import pdf_split_tool.file_handler
import pdf_split_tool.pdf_splitter
def _confirm_split_file(filepath: str, max_size_bytes: int) -> None:
"""Split file if user confirms or is valid.
Args:
filepath: PDF path.
max_size_bytes: max size in bytes.
"""
splitter = pdf_split_tool.pdf_splitter.PdfSplitter(filepath)
valid = True
if not valid:
click.secho(
(
"Warning: {} has more than 200kb per page. "
"Consider reducing resolution before splitting."
).format(filepath),
fg="yellow",
)
if not click.confirm("Do you want to continue?"):
click.secho("{} skipped.".format(filepath), fg="blue")
return
splitter.split_max_size(max_size_bytes)
@click.command()
@click.version_option()
@click.argument("filepath", type=click.Path(exists=True), default=".")
@click.option(
"-m",
"--max-size",
type=float,
help="Max size in megabytes.",
default=20,
show_default=True,
)
def main(filepath: str, max_size: float) -> None:
"""Pdf Split Tool."""
max_size_bytes = math.floor(max_size * 1024 * 1024) # convert to bytes
if filepath.endswith(".pdf"):
_confirm_split_file(filepath, max_size_bytes)
else:
filepaths = pdf_split_tool.file_handler.get_filenames(filepath, "*.pdf")
for path in filepaths:
_confirm_split_file(path, max_size_bytes)
if __name__ == "__main__":
main(prog_name="pdf-split-tool") # pragma: no cover
</code></pre>
<p>pdf_splitter.py</p>
<pre><code>import os
import sys
import tempfile
import PyPDF4
class PdfSplitter:
"""Pdf Splitter class."""
def __init__(self, filepath: str) -> None:
"""Constructor."""
self.filepath = filepath
self.input_pdf = PyPDF4.PdfFileReader(filepath, "rb")
self.total_pages = self.input_pdf.getNumPages()
self.size = os.path.getsize(filepath)
self.avg_size = self.size / self.total_pages
print(
"File: {}\nFile size: {}\nTotal pages: {}\nAverage size: {}".format(
filepath, self.size, self.total_pages, self.avg_size
)
)
def _get_pdf_size(self, pdf_writer: PyPDF4.PdfFileWriter) -> int:
"""Generates temporary PDF.
Args:
pdf_writer: pdf writer.
Returns:
int: generated file size.
"""
with tempfile.TemporaryFile(mode="wb") as fp:
pdf_writer.write(fp)
return fp.tell()
def split_max_size(self, max_size: int) -> int:
"""Creates new files based on max size.
Args:
max_size: size in integer megabytes.
Returns:
int: number of PDFs created.
"""
if self.size > max_size:
avg_step = int(max_size / self.avg_size)
pdfs_count = 0
current_page = 0
while current_page != self.total_pages:
end_page = current_page + avg_step
if end_page > self.total_pages:
end_page = self.total_pages
current_size = sys.maxsize
# while PDF is too big create smaller PDFs
while current_size > max_size:
pdf_writer = PyPDF4.PdfFileWriter()
for page in range(current_page, end_page):
pdf_writer.addPage(self.input_pdf.getPage(page))
current_size = self._get_pdf_size(pdf_writer)
self.input_pdf = PyPDF4.PdfFileReader(self.filepath, "rb")
end_page -= 1
# write PDF with size max_size
with open(
self.filepath.replace(".pdf", "-{}.pdf".format(pdfs_count)), "wb"
) as out:
pdf_writer.write(out)
current_page = end_page + 1
pdfs_count += 1
return pdfs_count
return 0
</code></pre>
|
<p>What you could do is, for each file in <code>filenames</code>, check whether it's a folder; if it is, rerun the function on it recursively.</p>
<p>To check whether a file is a folder or not you can use</p>
<pre><code>os.path.isdir(path)
</code></pre>
<p>where path is the path to the file</p>
<p>EDIT: Posting the code is better than an image because it can help people showing you the solution without having to rewrite everything</p>
<p>EDIT2:</p>
<p>You could try doing that, tried it myself and it should hopefully do what you want</p>
<pre><code>def get_filenames(filepath, pattern, file_list=None):
    if file_list is None:
        file_list = []
    filenames = glob.glob(os.path.join(filepath, pattern))
    for file in filenames:
        file_list.append(file)
        if os.path.isdir(file):  # If it's a folder, recurse to collect every file inside it
            get_filenames(file, pattern, file_list)
    return file_list
</code></pre>
|
python
| 0 |
1,905,722 | 12,948,381 |
simple hello world program gives issue in webpy
|
<p>I am trying to use the web.py server. It works for the hello world example, but if I try the same with a template it gives me this issue.</p>
<pre><code>import web
render = web.template.render('templates/')
urls = (
'/', 'index'
)
class index:
def GET(self):
name ='example'
return render.index(name)
if __name__ == "__main__":
app = web.application(urls, globals())
app.run()
</code></pre>
<p><strong><code>templates/index.html</code>:</strong></p>
<pre><code><em>Hello</em>, world!
$def with (name)
$if name:
I just wanted to say <em>hello</em> to $name.
$else:
<em>Hello</em>, world!
</code></pre>
<p><strong>Error</strong></p>
<pre><code><type 'exceptions.SyntaxError'> at /
invalid syntax Template traceback: File 'templates/index.html', line 8 None (index.html, line 8)
Python /usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/template.py in compile_template, line 911
Web GET http://0.0.0.0:8080/
</code></pre>
<p>why am I getting this issue?</p>
|
<p>You must reorder your template file. The <code>$def</code> statement must come first:</p>
<pre><code>$def with (name)
<em>Hello</em>, world!
</code></pre>
|
python|web.py
| 2 |
1,905,723 | 13,195,202 |
Why does Python change the value of an integer when there is a 0 in front of it?
|
<p>I implemented a function converting an integer number to its representation as a string <code>intToStr()</code> (code below).</p>
<p>For testing I've passed in some values and observed an unexpected output:</p>
<pre><code>print intToStr( 1223) # prints 1223 as expected
print intToStr(01223) # prints 659, surprisingly
</code></pre>
<p>Now, I've tried to debug it, and the the integer I've passed in has indeed turned out to be <code>659</code>. </p>
<p>Why does this happen and how can I get python to ignore leading zeros of the integer literal?</p>
<hr>
<p>Here's the code for my function:</p>
<pre><code>def intToStr(i):
digits = '0123456789'
if i == 0:
        return "0"
result = ""
while i > 0:
result = digits[i%10] + result
i /= 10
return result
</code></pre>
|
<p>An integer literal starting with a 0 is interpreted as an <a href="http://docs.python.org/2/reference/lexical_analysis.html#integer-and-long-integer-literals" rel="noreferrer">octal number, base 8</a>:</p>
<pre><code>>>> 01223
659
</code></pre>
<p>This has been changed in Python 3, where integers with a leading 0 are considered errors:</p>
<pre><code>>>> 01223
File "<stdin>", line 1
01223
^
SyntaxError: invalid token
>>> 0o1223
659
</code></pre>
<p>You should never specify an integer literal with leading zeros; if you meant to specify an octal number, use <code>0o</code> to start it, otherwise strip those zeros.</p>
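<p>Note also that the leading-zero rule only applies to integer <em>literals</em> in source code; if the digits arrive as a string (e.g. from user input), <code>int()</code> parses them as base 10 by default:</p>

```python
# string parsing defaults to base 10, so leading zeros are harmless here
assert int("01223") == 1223
# ask for base 8 explicitly if octal is what you actually meant
assert int("01223", 8) == 659
```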
|
python|integer|python-2.x
| 9 |
1,905,724 | 24,673,158 |
access table object outside SQLAlchemy db session
|
<p>I have the following code snippet that operates on the table Customer:</p>
<pre><code>with session.begin(subtransactions=True):
db_obj = Customer(...)
result = io_processing() # may cause greenlet switching
# I do not want to make it inside the transaction above,
# Because some legacy I/O related code which will causes greenlet switching
if result:
self.read_write_db(session, db_obj)
</code></pre>
<p>In read_write_db function:</p>
<pre><code>with session.begin(subtransactions=True):
# do some work on db_obj passed from outside
</code></pre>
<p>Is it safe to pass 'db_obj' outside of the transaction into another function?</p>
<p>Or I have to query the db_obj again in the read_write_db and update it?</p>
|
<p>Yes, it is possible, but you will have to get a new instance by merging <code>db_obj</code> in <code>read_write_db</code>'s session.</p>
<pre><code>with session.begin(subtransactions=True):
merged_db_obj = session.merge(db_obj)
# work with merged_db_obj ...
</code></pre>
<p>See <a href="http://docs.sqlalchemy.org/en/rel_0_9/orm/session.html#merging" rel="nofollow">http://docs.sqlalchemy.org/en/rel_0_9/orm/session.html#merging</a> for all the details. Merging can be tricky.</p>
|
python|mysql|transactions|sqlalchemy
| 0 |
1,905,725 | 24,815,508 |
Python Django Mezzanine install fail for Pillow package
|
<p>I am trying to install Mezzanine, but it fails while installing Pillow.</p>
<p>Running <code>python setup.py install</code> returns this error:</p>
<pre><code>Processing Pillow-2.5.1.zip
Writing /tmp/easy_install-5pOzTp/Pillow-2.5.1/setup.cfg
Running Pillow-2.5.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-5pOzTp/Pillow-2.5.1/egg-dist-tmp-E2JGiB
warning: no files found matching '*.bdf' under directory 'Images'
warning: no files found matching '*.fli' under directory 'Images'
warning: no files found matching '*.gif' under directory 'Images'
warning: no files found matching '*.icns' under directory 'Images'
warning: no files found matching '*.ico' under directory 'Images'
warning: no files found matching '*.jpg' under directory 'Images'
warning: no files found matching '*.pbm' under directory 'Images'
warning: no files found matching '*.pil' under directory 'Images'
warning: no files found matching '*.png' under directory 'Images'
warning: no files found matching '*.ppm' under directory 'Images'
warning: no files found matching '*.psd' under directory 'Images'
warning: no files found matching '*.tar' under directory 'Images'
warning: no files found matching '*.webp' under directory 'Images'
warning: no files found matching '*.xpm' under directory 'Images'
warning: no files found matching 'README' under directory 'Sane'
warning: no files found matching 'README' under directory 'Scripts'
warning: no files found matching '*.txt' under directory 'Tk'
Building using 4 processes
_imaging.c: In function ‘setup_module’:
_imaging.c:3575:37: error: ‘Z_RLE’ undeclared (first use in this function)
PyModule_AddIntConstant(m, "RLE", Z_RLE);
^
_imaging.c:3575:37: note: each undeclared identifier is reported only once for each function it appears in
_imaging.c:3576:39: error: ‘Z_FIXED’ undeclared (first use in this function)
PyModule_AddIntConstant(m, "FIXED", Z_FIXED);
^
libImaging/Draw.c: In function ‘ImagingDrawWideLine’:
libImaging/Draw.c:603:9: warning: unused variable ‘vertices’ [-Wunused-variable]
int vertices[4][2];
^
x86_64-linux-gnu-gcc: error: build/temp.linux-x86_64-2.7/_imaging.o: File o directory non esistente
x86_64-linux-gnu-gcc: error: build/temp.linux-x86_64-2.7/decode.o: File o directory non esistente
x86_64-linux-gnu-gcc: error: build/temp.linux-x86_64-2.7/encode.o: File o directory non esistente
x86_64-linux-gnu-gcc: error: build/temp.linux-x86_64-2.7/map.o: File o directory non esistente
x86_64-linux-gnu-gcc: error: build/temp.linux-x86_64-2.7/display.o: File o directory non esistente
error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
</code></pre>
<p>If I try to install using pip install mezzanine I have this error:</p>
<pre><code>Traceback (most recent call last):
File "/home/franco/.virtualenvs/audace/bin/pip", line 11, in <module>
sys.exit(main())
File "/home/franco/.virtualenvs/audace/local/lib/python2.7/site-packages/pip/__init__.py", line 185, in main
return command.main(cmd_args)
File "/home/franco/.virtualenvs/audace/local/lib/python2.7/site-packages/pip/basecommand.py", line 161, in main
text = '\n'.join(complete_log)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 28: ordinal not in range(128)
</code></pre>
<p>As you can see, the problem is in installing Pillow's requirements. I'm working with virtualenv.</p>
<p>thanks</p>
|
<p>You need to install the development libraries. For Ubuntu, this means you need to do the following:</p>
<pre><code>sudo apt-get install python-dev build-essential
sudo apt-get install libjpeg libjpeg-dev libfreetype6 libfreetype6-dev zlib1g-dev
sudo apt-get build-dep python-imaging
</code></pre>
<hr>
<p>This bug was fixed <a href="https://github.com/python-pillow/Pillow/issues/790" rel="nofollow">12 days ago</a>, so you might want to run <code>pip install -U pillow</code>, or <code>pip install git+https://github.com/python-pillow/Pillow.git</code></p>
|
python|django|mezzanine|pillow
| 4 |
1,905,726 | 40,856,002 |
Using PyQt4.QtGui.QMouseEvent in a QWidget
|
<p>I am using a PyQt4.QMainWindow as my application interface, and I want to get the x and y coordinates of the mouse inside of a QWidget and set them continuously in 2 textBrowsers in the MainWindow.</p>
<p>The documentation for QWidget is <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qwidget.html#mouseGrabber" rel="nofollow noreferrer">here</a>.</p>
<p>and the documentation for QMouseEvent is <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qmouseevent.html" rel="nofollow noreferrer">here</a>.</p>
<p>Here is the code</p>
<pre><code>from PyQt4 import QtGui
from PyQt4.QtGui import QApplication
import sys
class Ui_MainWindow(object):
def setupUI(self, MainWindow):
self.textBrowser_1 = QtGui.QTextBrowser(self.tab)
self.textBrowser_2 = QtGui.QTextBrowser(self.tab)
self.widget_1 = QtGui.QWidget(self.tab)
self.widget_1.setMouseTracking(True)
class MyMainScreen(QMainWindow):
def __init__(self, parent=None):
QtGui.QMainWindow.__init__(self, parent)
self.ui = Ui_MainWindow() # This is from a python export from QtDesigner
# There is a QWidget inside that is self.ui.widget_1
# and 2 textBrowsers, textBrowser_1 and textBrowser_2
# I want to populate these 2 textBrowsers with the current x,y
# coordinates.
if __name__ == "__main__":
app = QApplication(sys.argv)
mainscreen = MyMainScreen()
mainscreen.show()
app.exec_()
</code></pre>
|
<p>When you apply <code>setMouseTracking</code> it only applies to that widget, not to its children, so you must set it on each child manually, as in the following solution:</p>
<pre><code>def setMouseTracking(self, flag):
def recursive_set(parent):
for child in parent.findChildren(QtCore.QWidget):
child.setMouseTracking(flag)
recursive_set(child)
QtGui.QWidget.setMouseTracking(self, flag)
recursive_set(self)
</code></pre>
<p>complete code:</p>
<pre><code>from PyQt4 import QtCore
from PyQt4 import QtGui
from PyQt4.QtGui import QApplication, QMainWindow
import sys
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.resize(800, 132)
self.centralwidget = QtGui.QWidget(MainWindow)
self.horizontalLayout = QtGui.QHBoxLayout(self.centralwidget)
self.textBrowser_1 = QtGui.QTextBrowser(self.centralwidget)
self.horizontalLayout.addWidget(self.textBrowser_1)
self.textBrowser_2 = QtGui.QTextBrowser(self.centralwidget)
self.horizontalLayout.addWidget(self.textBrowser_2)
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtGui.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 22))
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtGui.QStatusBar(MainWindow)
MainWindow.setStatusBar(self.statusbar)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
class MyMainScreen(QMainWindow):
def __init__(self, parent=None):
QtGui.QMainWindow.__init__(self, parent)
self.ui = Ui_MainWindow() # This is from a python export from QtDesigner
self.ui.setupUi(self)
self.setMouseTracking(True)
self.ui.textBrowser_1.setMouseTracking(True)
self.ui.textBrowser_2.setMouseTracking(True)
self.ui.menubar.setMouseTracking(True)
self.ui.statusbar.setMouseTracking(True)
def setMouseTracking(self, flag):
def recursive_set(parent):
for child in parent.findChildren(QtCore.QWidget):
child.setMouseTracking(flag)
recursive_set(child)
QtGui.QWidget.setMouseTracking(self, flag)
recursive_set(self)
def mouseMoveEvent(self, event):
pos = event.pos()
self.ui.textBrowser_1.append(str(pos.x()))
self.ui.textBrowser_2.append(str(pos.y()))
QtGui.QMainWindow.mouseMoveEvent(self, event)
if __name__ == "__main__":
app = QApplication(sys.argv)
mainscreen = MyMainScreen()
mainscreen.show()
app.exec_()
</code></pre>
<p>This is my output:</p>
<p><a href="https://i.stack.imgur.com/CSnmE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CSnmE.png" alt="enter image description here"></a></p>
|
python|pyqt|pyqt4|qwidget|qmouseevent
| 1 |
1,905,727 | 40,941,231 |
aws cli dynamo db (ValidationException) Error
|
<p>I'm looking to batch write items to dynamodb using python's boto3 module and I'm getting the error below. This is the first time I've ever worked with aws cli or boto3. The documentation says validation exception errors occur when there are empty values and possibly incorrect data types, but I've played with all those and it doesn't seem to be working.</p>
<p>Does dynamodb only accept 25 items written to it at a time? If so, how can I control those batches?</p>
<p>My request:</p>
<pre><code>client = boto3.client('dynamodb')
response = client.batch_write_item(RequestItems=batch_dict)
</code></pre>
<p>Top of batch_dict:</p>
<pre><code>{'scraper_exact_urls': [{'PutRequest': {'Item': {'Sku': {'S': 'T104P3'},
'pps_id': {'N': '427285976'},
'scraper_class_name': {'S': 'scraper_class_name'},
'store_id': {'N': '1197386754'},
'updated_by': {'S': 'user'},
'updated_on': {'N': '1480714223'},
'updated_url': {'S': 'http://www.blah.com'}}}},
{'PutRequest': {'Item': {'Sku': {'S': 'T104P3'},
'pps_id': {'N': '427285976'},
'scraper_class_name': {'S': 'scraper_class_name'},
'store_id': {'N': '1197386754'},
'updated_by': {'S': 'user'},
'updated_on': {'N': '1480714223'},
'updated_url': {'S': 'http://www.blah.com'}}}},....
</code></pre>
<p>Schema:</p>
<pre><code>attributes:
"pps_id"=>\Aws\DynamoDb\Enum\Type::NUMBER,
"sku"=>\Aws\DynamoDb\Enum\Type::STRING,
"scraper_class_name"=>\Aws\DynamoDb\Enum\Type::STRING,
"store_id"=>\Aws\DynamoDb\Enum\Type::NUMBER,
"updated_url"=>\Aws\DynamoDb\Enum\Type::STRING,
"updated_by"=>\Aws\DynamoDb\Enum\Type::STRING,
"updated_on"=>\Aws\DynamoDb\Enum\Type::NUMBER,
fields:
"pps_id",
"scraper_class_name",
</code></pre>
<p>The Error:</p>
<pre><code>ClientError: An error occurred (ValidationException) when calling the BatchWriteItem operation: 1 validation error detected: Value .... Map value must satisfy constraint: [Member must have length less than or equal to 25, Member must have length greater than or equal to 1]
</code></pre>
|
<p>The BatchWriteItem API works on 25 items at a time. You could use the following code, adapted from the <a href="https://stackoverflow.com/questions/8290397/how-to-split-an-iterable-in-constant-size-chunks">non-copying batching question</a>, to call BatchWriteItem on 25 item chunks</p>
<pre><code>def batch(iterable, n=1):
l = len(iterable)
for ndx in range(0, l, n):
yield iterable[ndx:min(ndx + n, l)]
client = boto3.client('dynamodb')
for x in batch(batch_dict['scraper_exact_urls'], 25):
subbatch_dict = {'scraper_exact_urls': x}
response = client.batch_write_item(RequestItems=subbatch_dict)
</code></pre>
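<p>The chunking itself can be checked without touching DynamoDB; the payload below is a made-up stand-in for the real <code>PutRequest</code> entries:</p>

```python
def batch(iterable, n=1):
    # Yield successive n-sized chunks from a list
    l = len(iterable)
    for ndx in range(0, l, n):
        yield iterable[ndx:min(ndx + n, l)]

# 60 dummy put requests stand in for the real PutRequest dicts
requests = [{'PutRequest': {'Item': {'Sku': {'S': str(i)}}}} for i in range(60)]
chunks = list(batch(requests, 25))
print([len(c) for c in chunks])  # [25, 25, 10] -- every chunk respects the 25-item limit
```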
|
python|amazon-web-services|amazon-dynamodb|aws-cli
| 5 |
1,905,728 | 40,967,965 |
Read values from dynamic created dictionary keys. - Python 2.7
|
<p>I am reading data from an API whose response is JSON.</p>
<p>And then is converted to Python Dictionary</p>
<p>The part of that dictionary in question is actually a list of dictionaries:</p>
<pre><code>[
{
"inputType" : "text",
"can_be_anything_here" : {
"label" : "Enter name",
"value" : "login",
"required" : "True"
}
},
{
"inputType" : "password",
"can_be_anything_here" : {
"label" : "password",
"value" : "password",
"required" : "True"
}
}
]
</code></pre>
<p>Then I have to generate HTML for this form.</p>
<p>I can easily read the value of <code>"inputType"</code> because its key is known each time.</p>
<p>But I need to read </p>
<pre><code>{
"label" : "password",
"value" : "password",
"required" : "True"
}
</code></pre>
<p>But its key (<code>"can_be_anything_here"</code>) is unknown/dynamic each time.</p>
<p>How do I read data from it?</p>
|
<p>If what I understood from your question is correct, the key <code>can_be_anything_here</code> is variable and you want to access its value every time. If this is what you meant, here is what comes first to my mind:</p>
<pre><code>a = [{"inputType": "text", "can_be_anything_here": {"label": "Enter name", "value": "login", "required" : "True"}}, {"inputType": "Password", "can_be_anything_here2": {"label": "password", "value": "password", "required" : "True"}}]
for i in a:
for k, v in i.items():
        # You can remove this condition if you need this key's value
        if k != "inputType":
            # Here you can retrieve the unknown key
            print "Current Key is:", k
            # Here you can have the unknown key's value, which is a dict
            print v, type(v)
            # For example: retrieve the dict's value of the unknown key
            print "v['label']:", v["label"]
</code></pre>
<p>Output:</p>
<pre><code>Current Key is: can_be_anything_here
{'required': 'True', 'value': 'login', 'label': 'Enter name'} <type 'dict'>
v['label']: Enter name
Current Key is: can_be_anything_here2
{'required': 'True', 'value': 'password', 'label': 'password'} <type 'dict'>
v['label']: password
</code></pre>
|
python|python-2.7|dictionary
| 0 |
1,905,729 | 39,956,539 |
'module' object has no attribute 'DataReader'
|
<pre><code>import pandas as pd
import pandas.io.data as web  # as we have to use only pandas functions

# Second, retrieve the data from, say, Google itself:
stock = web.DataReader('IBM', data_source='yahoo', start='01/01/2011', end='01/01/2013')
# end of question 1

print type(stock)  # Class Type is pandas.core.frame.DataFrame
IBM_dataframe = pd.DataFrame(stock)
</code></pre>
<p>This raises:</p>
<pre><code>Traceback (most recent call last):
  File "", line 2, in
    import pandas.io.data as web
  File "C:\Anaconda2\lib\site-packages\pandas\io\data.py", line 2, in
    "The pandas.io.data module is moved to a separate package "
ImportError: The pandas.io.data module is moved to a separate package (pandas-datareader). After installing the pandas-datareader package (https://github.com/pydata/pandas-datareader), you can change the import `from pandas.io import data, wb` to `from pandas_datareader import data, wb`.
</code></pre>
<p>So I changed the import to:</p>
<pre><code>import pandas_datareader as web

stock = web.DataReader('IBM', data_source='yahoo', start='01/01/2011', end='01/01/2013')
</code></pre>
<p>which gives:</p>
<pre><code>Traceback (most recent call last):
  File "", line 1, in
    stock = web.DataReader('IBM', data_source='yahoo', start='01/01/2011', end='01/01/2013')
AttributeError: 'module' object has no attribute 'DataReader'
</code></pre>
<p>I changed <code>import pandas.io.data as web</code> to <code>import pandas_datareader as web</code>, but now I am not able to get the data. Please suggest how to fix the error <code>'module' object has no attribute 'DataReader'</code>.</p>
|
<p>Use the following:</p>
<pre><code>from pandas_datareader import data, wb
DAX = data.DataReader(name='^GDAXI', data_source='yahoo',start='2000-1-1')
</code></pre>
|
python-2.7
| 0 |
1,905,730 | 39,926,302 |
pygame screen failing to display
|
<p>I have the following code in Python 3 (and pygame), but the white surface fails to display and I don't understand why. Has it got something to do with where it has been placed? I tried de-indenting, but that didn't work either? The code is as below:</p>
<pre><code>import pygame
from pygame.locals import*
pygame.init()
screen=pygame.display.set_mode((800,600))
# Variable to keep our main loop running
running = True
# Our main loop!
while running:
# for loop through the event queue
for event in pygame.event.get():
# Check for KEYDOWN event; KEYDOWN is a constant defined in pygame.locals, which we imported earlier
if event.type == KEYDOWN:
# If the Esc key has been pressed set running to false to exit the main loop
if event.key == K_ESCAPE:
running = False
# Check for QUIT event; if QUIT, set running to false
elif event.type == QUIT:
running = False
# Create the surface and pass in a tuple with its length and width
surf = pygame.Surface((50, 50))
# Give the surface a color to differentiate it from the background
surf.fill((255, 255, 255))
rect = surf.get_rect()
screen.blit(surf, (400, 300))
pygame.display.flip()
</code></pre>
|
<p>So it does appear that your indentation is wrong.</p>
<p>You need to define the surface and update the screen etc. outside of the event loop.</p>
<p>At the very least you must move the <code>screen.blit(surf, (400, 300))</code> and <code>pygame.display.flip()</code> outside of the event loop.</p>
<p>This is it fixed:</p>
<pre><code># Our main loop!
while running:
# for loop through the event queue
for event in pygame.event.get():
# Check for KEYDOWN event; KEYDOWN is a constant defined in pygame.locals, which we imported earlier
if event.type == KEYDOWN:
# If the Esc key has been pressed set running to false to exit the main loop
if event.key == K_ESCAPE:
running = False
# Check for QUIT event; if QUIT, set running to false
elif event.type == QUIT:
running = False
# Create the surface and pass in a tuple with its length and width
surf = pygame.Surface((50, 50))
# Give the surface a color to differentiate it from the background
surf.fill((255, 255, 255))
rect = surf.get_rect()
screen.blit(surf, (400, 300))
pygame.display.flip()
</code></pre>
|
python|pygame|surface
| 0 |
1,905,731 | 8,744,849 |
SQLAlchemy Polymorphic Loading
|
<p>I have this model in SQLAlchemy:</p>
<pre><code>class User(Base):
    __tablename__ = 'users'
id = Column(Integer, primary_key=True, autoincrement=True)
type = Column(Text, nullable=False)
user_name = Column(Text, unique=True, nullable=False)
__mapper_args__ = {'polymorphic_on': type}
class Client(User):
__tablename__ = 'clients'
__mapper_args__ = {'polymorphic_identity': 'client'}
id = Column(Integer, ForeignKey('users.id'), primary_key=True)
client_notes = Column(Text)
</code></pre>
<p>This is a joined table inheritance. The problem is when I'm querying User:</p>
<pre><code>self.session.query(User).all()
</code></pre>
<p>all I get is records from clients, while what I want is all the records on User without Client. How do I solve this problem?</p>
<p>Edit: I'm using SQLAlchemy 0.7.4 and Pyramid 1.3a3</p>
|
<p><code>session.query(User)</code> does not perform any filtering based on the value of <code>type</code> column.<br>
However, the <code>SQL SELECT</code> statement it generates selects only data from the <code>users</code> table (unless <code>with_polymorphic(...)</code> is used). </p>
<p>But you can add the <code>filter</code> explicitely to achive the desired result:</p>
<pre><code>session.query(User).filter(User.type=='user').all()
</code></pre>
|
python|sqlalchemy|pyramid
| 2 |
1,905,732 | 58,875,973 |
How to store a value from database query result into a Variable in Robot Framework
|
<p>I've written Robot Framework Code to run database query and log the result of the query</p>
<pre><code>Connect To Database pymysql ${Database_name} ${UserName} ${Password} ${DatabaseHost} ${Port}
Check If Exists In Database SELECT cic.comboMenuItemId, mi1.zomatoName AS zomatoComboMenuItemName, cic.quantity, \ mi1.isVirtualCombo, cic.isItemVisible, cic.menuItemId, mi2.zomatoName AS zomatoMenuItemName, mi2.isVirtualCombo,micq.currentInventory FROM CombinationItemsComposition AS cic INNER JOIN MenuItem AS mi1 ON mi1.id = cic.comboMenuItemId INNER JOIN MenuItem AS mi2 ON mi2.id = cic.menuItemId INNER JOIN MenuItemCurrentQuantity AS micq ON micq.itemId = mi2.id WHERE mi1.isVirtualCombo NOT IN (0) AND micq.distributionId IN (7) AND mi1.isActive IN (1) AND mi2.isActive IN (1) AND micq.isAvailableOnZomato IN (1) ORDER BY cic.comboMenuItemId
@{QueryResult} Query SELECT cic.comboMenuItemId, mi1.zomatoName AS zomatoComboMenuItemName, cic.quantity, \ mi1.isVirtualCombo, cic.isItemVisible, cic.menuItemId, mi2.zomatoName AS zomatoMenuItemName, mi2.isVirtualCombo,micq.currentInventory FROM CombinationItemsComposition AS cic INNER JOIN MenuItem AS mi1 ON mi1.id = cic.comboMenuItemId INNER JOIN MenuItem AS mi2 ON mi2.id = cic.menuItemId INNER JOIN MenuItemCurrentQuantity AS micq ON micq.itemId = mi2.id WHERE mi1.isVirtualCombo NOT IN (0) AND micq.distributionId IN (7) AND mi1.isActive IN (1) AND mi2.isActive IN (1) AND micq.isAvailableOnZomato IN (1) ORDER BY cic.comboMenuItemId
Log Many @{QueryResult}
</code></pre>
<p><strong>Result of Query</strong></p>
<pre><code>(56, 'Party Pack (Serves 6-8)', 6, 1, 1, 1, 'Afghani Chicken Tikka Biryani (Heavy Eater)', 0, 11)
(58, 'Party Pack (Serves 6-8)', 6, 1, 1, 3, 'Chicken Tikka Biryani (Heavy Eater)', 0, 4)
(61, 'Party Pack (Serves 6-8)', 6, 1, 1, 5, 'Paneer Makhani Biryani (Heavy Eater)', 0, 18)
(79, 'Party Pack (Serves 6-8)', 6, 1, 1, 74, 'Afghani Veg Biryani (Heavy Eater)', 0, 10)
(90, 'Party Pack (Serves 6-8)', 6, 1, 1, 89, 'Butter Chicken Biryani (Heavy Eater)', 0, 0)
(253, 'Party Pack (Serves 6-8)', 6, 1, 1, 250, 'Classic Hyderabadi Chicken Biryani (Heavy Eater)', 0, 0)
(255, 'Party Pack (Serves 6-8)', 6, 1, 1, 252, 'Classic Hyderabadi Veg Biryani (Heavy Eater)', 0, 15)
(339, 'Party Pack (Serves 6-8)', 6, 1, 1, 325, 'Awadhi Veg Biryani (Heavy Eater)', 0, 26)
(340, 'Party Pack (Serves 6-8)', 6, 1, 1, 326, 'Awadhi Chicken Biryani (Heavy Eater)', 0, 0)
(381, 'Classic Chicken Tikka Roll ', 1, 1, 0, 408, 'Malabari Paratha', 0, 191)
(383, 'Signature Chicken Seekh Kebab Roll ', 1, 1, 0, 408, 'Malabari Paratha', 0, 191)
</code></pre>
<p>Can anyone please suggest how I can store the values of one particular row in variables?</p>
<p>Thank You</p>
|
<p>Each row from the DB is basically returned as a tuple. To access a particular row use indexing.</p>
<pre><code>Connect To Database pymysql ${Database_name} ${UserName} ${Password} ${DatabaseHost} ${Port}
Check If Exists In Database SELECT cic.comboMenuItemId, mi1.zomatoName AS zomatoComboMenuItemName, cic.quantity, \ mi1.isVirtualCombo, cic.isItemVisible, cic.menuItemId, mi2.zomatoName AS zomatoMenuItemName, mi2.isVirtualCombo,micq.currentInventory FROM CombinationItemsComposition AS cic INNER JOIN MenuItem AS mi1 ON mi1.id = cic.comboMenuItemId INNER JOIN MenuItem AS mi2 ON mi2.id = cic.menuItemId INNER JOIN MenuItemCurrentQuantity AS micq ON micq.itemId = mi2.id WHERE mi1.isVirtualCombo NOT IN (0) AND micq.distributionId IN (7) AND mi1.isActive IN (1) AND mi2.isActive IN (1) AND micq.isAvailableOnZomato IN (1) ORDER BY cic.comboMenuItemId
@{QueryResult} Query SELECT cic.comboMenuItemId, mi1.zomatoName AS zomatoComboMenuItemName, cic.quantity, \ mi1.isVirtualCombo, cic.isItemVisible, cic.menuItemId, mi2.zomatoName AS zomatoMenuItemName, mi2.isVirtualCombo,micq.currentInventory FROM CombinationItemsComposition AS cic INNER JOIN MenuItem AS mi1 ON mi1.id = cic.comboMenuItemId INNER JOIN MenuItem AS mi2 ON mi2.id = cic.menuItemId INNER JOIN MenuItemCurrentQuantity AS micq ON micq.itemId = mi2.id WHERE mi1.isVirtualCombo NOT IN (0) AND micq.distributionId IN (7) AND mi1.isActive IN (1) AND mi2.isActive IN (1) AND micq.isAvailableOnZomato IN (1) ORDER BY cic.comboMenuItemId
${firstRow} Set Variable ${QueryResult[0]}
${secondRow} Set Variable ${QueryResult[1]}
</code></pre>
<p>In the above example <code>${firstRow}</code> will contain the value <code>(56, 'Party Pack (Serves 6-8)', 6, 1, 1, 1, 'Afghani Chicken Tikka Biryani (Heavy Eater)', 0, 11)</code> and <code>${secondRow}</code> will contain the value<br>
<code>(58, 'Party Pack (Serves 6-8)', 6, 1, 1, 3, 'Chicken Tikka Biryani (Heavy Eater)', 0, 4)</code> </p>
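<p>Under the hood the keyword is just doing Python sequence indexing on a list of row tuples, so a double index (e.g. <code>${QueryResult[0][0]}</code>, using Robot Framework's extended variable syntax in recent versions) reaches individual columns. A plain-Python sketch of what is going on:</p>

```python
# Simplified stand-in for what the Query keyword returns: a list of row tuples
query_result = [
    (56, 'Party Pack (Serves 6-8)', 6, 1, 1, 1, 'Afghani Chicken Tikka Biryani (Heavy Eater)', 0, 11),
    (58, 'Party Pack (Serves 6-8)', 6, 1, 1, 3, 'Chicken Tikka Biryani (Heavy Eater)', 0, 4),
]
first_row = query_result[0]   # whole tuple, like ${QueryResult[0]}
combo_id = first_row[0]       # 56, the first column of the first row
inventory = first_row[-1]     # 11, the last column of the first row
print(combo_id, inventory)
```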
|
database|python-2.7|robotframework|pymysql
| 2 |
1,905,733 | 52,136,625 |
How to convert array from dtype=object to dtype=np.int
|
<p>Currently, the array I got is</p>
<pre><code>arr = array([array([ 2, 7, 8, 12, 14]), array([ 3, 4, 5, 6, 9, 10]),
array([0, 1]), array([11, 13])], dtype=object)
</code></pre>
<p>How can I convert it into <code>array([[ 2, 7, 8, 12, 14], [ 3, 4, 5, 6, 9, 10], [0, 1], [11, 13]])</code>?</p>
<p>I tried <code>arr.astype(np.int)</code>, but failed</p>
|
<p>The <code>dtype</code> for an array of arrays will always be <code>object</code>. This is unavoidable because with NumPy only non-jagged <em>n</em>-dimensional arrays can be held in a contiguous memory block.</p>
<p>Notice your constituent arrays are already of <code>int</code> dtype:</p>
<pre><code>arr[0].dtype # dtype('int32')
</code></pre>
<p>Notice also your logic will work for a <em>non-jagged</em> array of arrays:</p>
<pre><code>arr = np.array([np.array([ 2, 7, 8]),
np.array([ 3, 4, 5])], dtype=object)
arr = arr.astype(int)
arr.dtype # dtype('int32')
</code></pre>
<p>In fact, in this case, the array of arrays is collapsed into a <em>single</em> array:</p>
<pre><code>print(arr)
array([[2, 7, 8],
[3, 4, 5]])
</code></pre>
<p>For computations with a jagged array of arrays you may see <em>some</em> performance advantages relative to a list of lists, but the benefit may be limited. See also <a href="https://stackoverflow.com/questions/14916407/how-do-i-stack-vectors-of-different-lengths-in-numpy">How do I stack vectors of different lengths in NumPy?</a></p>
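<p>If you ultimately need a rectangular numerical array, padding the shorter rows is one common workaround. A sketch using the data from the question (zero-padding is an assumption; pick a fill value that makes sense for your computation):</p>

```python
import numpy as np

rows = [np.array([2, 7, 8, 12, 14]),
        np.array([3, 4, 5, 6, 9, 10]),
        np.array([0, 1]),
        np.array([11, 13])]

# Pad every row with zeros up to the longest row's length
width = max(len(r) for r in rows)
padded = np.zeros((len(rows), width), dtype=int)
for i, r in enumerate(rows):
    padded[i, :len(r)] = r

print(padded.shape)  # (4, 6)
print(padded.dtype)  # an integer dtype, no longer object
```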
|
python|arrays|numpy
| 2 |
1,905,734 | 52,223,069 |
Python Template Variables
|
<p>I'm looking for a wiki site for interchangeable Python variables.
I don't know what those are called, so I can't look them up.</p>
<p>An example:</p>
<pre><code>template = ['{sti}_Data', '{sti}_OtherData', '{sti}_MaxData']
si = name
</code></pre>
<p>What is this method called?</p>
|
<p>I think this is basically Python string formatting?</p>
<p>to use your example, </p>
<pre><code>template = ['{}_Data', '{}_OtherData', '{}_MaxData']
si = 'Sauron'
results = [x.format(si) for x in template]
print(results)
# ['Sauron_Data', 'Sauron_OtherData', 'Sauron_MaxData']
</code></pre>
<p>There's more here, but I think you'll find much more if you just google Python string formatting.</p>
<p><a href="https://pyformat.info/" rel="nofollow noreferrer">https://pyformat.info/</a></p>
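<p>Since the original template uses a named placeholder (<code>{sti}</code>), keyword arguments to <code>format</code> also work without stripping the name:</p>

```python
template = ['{sti}_Data', '{sti}_OtherData', '{sti}_MaxData']
si = 'Sauron'
# Named placeholders are filled with keyword arguments
results = [t.format(sti=si) for t in template]
print(results)
# ['Sauron_Data', 'Sauron_OtherData', 'Sauron_MaxData']
```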
|
python|variables
| 0 |
1,905,735 | 52,155,522 |
how can i solve this? Python def function
|
<pre><code>def LongestWord(sen):
# first we remove non alphanumeric characters from the string
# using the translate function which deletes the specified characters
sen = sen.translate(None, "~!@#$%^&*()-_+={}[]:;'<>?/,.|`")
# now we separate the string into a list of words
arr = sen.split(" ")
print(arr)
# the list max function will return the element in arr
# with the longest length because we specify key=len
return max(arr, key=len)
print LongestWord("Argument goes here")
</code></pre>
<p>What's wrong with this line? How can I change it? I can't understand it! It makes me really uneasy, because Coderbyte.com says that it is correct and it works!</p>
|
<p>I'm not exactly sure which line you're referring to; perhaps the last line.</p>
<p>If so, you need parentheses with the print statement in Python 3.x:</p>
<pre><code>print(LongestWord("Argument goes here"))
</code></pre>
<p>Additionally, string <code>translate</code> works differently in Python 3:</p>
<pre><code>def LongestWord(sen):
# first we remove non alphanumeric characters from the string
# using the translate function which deletes the specified characters
intab ="~!@#$%^&*()-_+={}[]:;'<>?/,.|`"
trantab= str.maketrans(dict.fromkeys(intab))
sen = sen.translate(trantab)
# now we separate the string into a list of words
arr = sen.split(" ")
print(arr)
# the list max function will return the element in arr
# with the longest length because we specify key=len
return max(arr, key=len)
print(LongestWord("Argument. 'Go' @goes here"))
</code></pre>
<p>The above worked for me on Python 3.6.2.</p>
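<p>An alternative that sidesteps the <code>str.translate</code> differences between Python 2 and 3 entirely is a regex; this is a sketch, not part of the original answer:</p>

```python
import re

def longest_word(sentence):
    # Keep only alphanumeric "words", then pick the longest one
    words = re.findall(r'[A-Za-z0-9]+', sentence)
    return max(words, key=len)

print(longest_word("Argument. 'Go' @goes here"))  # Argument
```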
|
python
| 1 |
1,905,736 | 51,708,356 |
Apache Airflow - DAG registers as success even when critical tasks fail
|
<p>I am new to Apache Airflow and I would like to write a DAG to move some data from a set of tables in a source database to a set of tables in a target database. I am attempting to engineer the DAG such that someone can simply write the <code>create table</code> and <code>insert into</code> SQL scripts for a new source table --> target table process and drop them into folders. Then, on the next DAG run, the DAG would pick up the scripts from the folders and run the new tasks. I set up my DAG like:</p>
<pre><code>source_data_check_task_1 (Check Operator or ValueCheckOperator)
source_data_check_task_2 (Check Operator or ValueCheckOperator, Trigger on ALL_SUCCESS)
source_data_check_task_3 (Check Operator or ValueCheckOperator, Trigger on ALL_SUCCESS)
source_data_check_task_1 >> source_data_check_task_2 >> source_data_check_task_3
for tbl_name in tbl_name_list:
tbl_exists_check (Check Operator, trigger on ALL_SUCCESS): check if `new_tbl` exists in database by querying `information_schema`
tbl_create_task (SQL Operator, trigger on ALL_FAILED): run the `create table` SQL script
tbl_insert_task (SQL Operator ,trigger on ONE_SUCCESS): run the `insert into` SQL script
source_data_check_task_3 >> tbl_exists_check
tbl_exists_check >> tbl_create_task
tbl_exists_check >> tbl_insert_task
    tbl_create_task >> tbl_insert_task
</code></pre>
<p>I am running into two problems with this setup: (1) If any data quality check task fails, the <code>tbl_create_task</code> still kicks off because it triggers on <code>ALL_FAILED</code> and (2) No matter which tasks fail, the DAG shows that the run was a <code>SUCCESS</code>. This is fine if the <code>tbl_exists_check</code> fails, because it's supposed to fail at least once, but not ideal if some critical task fails (like any data quality check tasks). </p>
<p>Is there a way to set up my DAG differently to address these problems?</p>
<p>Actual code below: </p>
<pre><code>from airflow import DAG
from airflow.operators.postgres_operator import PostgresOperator
from airflow.operators.check_operator import ValueCheckOperator, CheckOperator
from airflow.operators.bash_operator import BashOperator
from airflow.models import Variable
from datetime import datetime, timedelta
from airflow.utils.trigger_rule import TriggerRule
sql_path = Variable.get('sql_path')
default_args = {
'owner': 'enmyj',
'depends_on_past':True,
'email_on_failure': False,
'email_on_retry': False,
'retries': 0
}
dag = DAG(
'test',
default_args=default_args,
schedule_interval=None,
template_searchpath=sql_path
)
# check number of weeks in bill pay (made up example)
check_one = CheckOperator(
task_id='check_one',
conn_id='conn_name',
sql="""select count(distinct field) from dbo.table having count(distinct field) >= 4 """,
dag=dag
)
check_two = CheckOperator(
task_id='check_two',
conn_id='conn_name',
sql="""select count(distinct field) from dbo.table having count(distinct field) <= 100""",
dag=dag
)
check_one >> check_two
ls = ['foo','bar','baz','quz','apple']
for tbl_name in ls:
exists = CheckOperator(
task_id='tbl_exists_{}'.format(tbl_name),
conn_id='conn_name',
sql =""" select count(*) from information_schema.tables where table_schema = 'test' and table_name = '{}' """.format(tbl_name),
trigger_rule=TriggerRule.ALL_SUCCESS,
depends_on_past=True,
dag = dag
)
create = PostgresOperator(
task_id='tbl_create_{}'.format(tbl_name),
postgres_conn_id='conn_name',
database='triforcedb',
sql = 'create table test.{} (like dbo.source)'.format(tbl_name), # will be read from SQL file
trigger_rule=TriggerRule.ONE_FAILED,
depends_on_past=True,
dag = dag
)
insert = PostgresOperator(
task_id='tbl_insert_{}'.format(tbl_name),
postgres_conn_id='conn_name',
database='triforcedb',
sql = 'insert into test.{} (select * from dbo.source limit 10)'.format(tbl_name), # will be read from SQL file
trigger_rule=TriggerRule.ONE_SUCCESS,
depends_on_past=True,
dag = dag
)
check_two >> exists
exists >> create
create >> insert
exists >> insert
</code></pre>
|
<p>You have a perfect use case for leveraging the <a href="https://airflow.incubator.apache.org/code.html?highlight=branch#airflow.operators.BranchPythonOperator" rel="nofollow noreferrer">BranchPythonOperator</a> which will allow you to perform a check to see if the table exist and then either proceed with creating the table before inserting to that table without having to worry about TRIGGER_RULES and make your DAG logic much more clear from the UI.</p>
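<p>As a sketch, the callable you hand to a <code>BranchPythonOperator</code> returns the <code>task_id</code> to follow, so the create step runs only when the table is missing. The task ids below are made up to match the naming pattern in the question, and the existence check is stubbed out so the logic is testable without Airflow or a database:</p>

```python
# Hypothetical branch callable for a BranchPythonOperator.
# In a real DAG the existence check would run the information_schema
# query via a database hook; here it is a plain set membership test.
def choose_next_task(tbl_name, existing_tables):
    if tbl_name in existing_tables:
        return 'tbl_insert_{}'.format(tbl_name)   # table is there: skip create
    return 'tbl_create_{}'.format(tbl_name)       # otherwise create it first

print(choose_next_task('foo', {'foo', 'bar'}))    # tbl_insert_foo
print(choose_next_task('baz', {'foo', 'bar'}))    # tbl_create_baz
```

<p>The insert task can then use a trigger rule such as <code>none_failed</code> (available in newer Airflow versions) or <code>ONE_SUCCESS</code> so it runs on either branch, and a failed data-quality check upstream fails the DAG run instead of being absorbed by a failure-triggered rule.</p>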
|
python|etl|pipeline|airflow
| 3 |
1,905,737 | 51,865,836 |
kivy layout: different computer different display
|
<p>The same Python/Kivy layout code produces a different GUI on different computers. I'm terribly confused:</p>
<pre><code># ---------- VQCIA.kv ----------
VQCIA:
<VQCIA>:
orientation: "vertical"
goi: goi
padding: 10
spacing: 10
size: 400, 200
pos: 200, 200
size_hint:None,None
BoxLayout:
Label:
text: "Enter gene of interest with TAIR ID:"
font_size: '25sp'
BoxLayout:
TextInput:
hint_text: 'AT3G20770'
multiline: False
font_size: '25sp'
id: goi
BoxLayout:
Button:
text: "Submit"
size_hint_x: 15
on_press: root.submit_goi()
# ---------- vqcia.py ----------
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.properties import ObjectProperty
class VQCIA(BoxLayout):
# Connects the value in the TextInput widget to these
# fields
goi = ObjectProperty()
def submit_goi(self):
# Get the student name from the TextInputs
goi = self.goi.text
print goi
return
class VQCIAApp(App):
def build(self):
return VQCIA()
dbApp = VQCIAApp()
dbApp.run()
</code></pre>
<p>My lab computer is macOS Sierra 10.12.6 with Kivy==1.10.1 and has the ideal output:
<a href="https://i.stack.imgur.com/ZSXyk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZSXyk.png" alt="Img01 - macOS Sierra 10.12.6"></a></p>
<p>On the other hand, my personal Mac, macOS High Sierra 10.13.6 with Kivy==1.10.1, gives the wrong output:
<a href="https://i.stack.imgur.com/SmoKR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SmoKR.png" alt="Img02 - macOS high Sierra 10.13.6"></a> </p>
<p>What is happening?</p>
|
<ol>
<li>Try using Density-independent Pixels, <code>dp</code>.</li>
<li>In kv file, there are two roots. There is a root rule, <code>VQCIA:</code> and also a class rule, <code><VQCIA>:</code> for the root. Since in Python code, it is using <code>return VQCIA()</code>, which is associated to class rule, <code><VQCIA>:</code> in kv file. Therefore, remove root rule, <code>VQCIA:</code> to avoid confusion.</li>
</ol>
<h1>kv file - vqcia.kv</h1>
<pre><code><VQCIA>: # class rule for root widget
orientation: "vertical"
goi: goi
padding: dp(10)
spacing: dp(10)
size: dp(400), dp(200)
pos: dp(200), dp(200)
size_hint: None, None
</code></pre>
<p><a href="https://kivy.org/docs/api-kivy.metrics.html#dimensions" rel="nofollow noreferrer">Dimensions</a></p>
<blockquote>
<pre><code>dp
</code></pre>
<p>Density-independent Pixels - An abstract unit that is based on the
physical density of the screen. With a density of 1, 1dp is equal to
1px. When running on a higher density screen, the number of pixels
used to draw 1dp is scaled up a factor appropriate to the screen’s
dpi, and the inverse for a lower dpi. The ratio of dp-to-pixels will
change with the screen density, but not necessarily in direct
proportion. Using the dp unit is a simple solution to making the view
dimensions in your layout resize properly for different screen
densities. In others words, it provides consistency for the real-world
size of your UI across different devices.</p>
<pre><code>sp
</code></pre>
<p>Scale-independent Pixels - This is like the dp unit, but it is also
scaled by the user’s font size preference. We recommend you use this
unit when specifying font sizes, so the font size will be adjusted to
both the screen density and the user’s preference.</p>
</blockquote>
|
python|layout|kivy
| 0 |
1,905,738 | 69,109,295 |
Efficiently extract numbers from a column in python
|
<p>I have a column in pandas dataframe as below:</p>
<pre><code> Manufacture_Id Score Rank
0 S1 93 1
1 S1 91 2
2 S1 86 3
3 S2 88 1
4 S25 73 2
5 S100 72 3
6 S100 34 1
7 S100 24 2
</code></pre>
<p>I want to extract the ending numbers from the '<strong>Manufacture_Id</strong>' column into a new column as below:</p>
<pre><code> Manufacture_Id Score Rank Id
0 S1 93 1 1
1 S1 91 2 1
2 S1 86 3 1
3 S2 88 1 2
4 S25 73 2 25
5 S100 72 3 100
6 S100 34 1 100
7 S100 24 2 100
</code></pre>
<p>I have written the below code which gives the results but it is not efficient when the data becomes big.</p>
<pre><code>test['id'] = test.Manufacture_Id.str.extract(r'(\d+\.\d+|\d+)')
</code></pre>
<p>Is there a way to do it efficiently?</p>
<p>Data:</p>
<pre><code>#Ceate dataframe
data = [
["S1",93,1],
["S1",91,2],
["S1",86,3],
["S2",88,1],
["S25",73,2],
["S100",72,3],
["S100",34,1],
["S100",24,2],
]
#dataframe
test = pd.DataFrame(data, columns = ['Manufacture_Id', 'Score', 'Rank'])
</code></pre>
|
<p>The following code will be more efficient than the regex.</p>
<pre><code>test["id"] = test['Manufacture_Id'].str[1:].astype(int)
</code></pre>
<p>But if the <code>S</code> prefix is not constant, then try the following snippet.</p>
<pre><code>test["id"] = test.Manufacture_Id.str.extract('(\d+)').astype(int)
</code></pre>
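<p>The speedup comes from skipping the regex engine entirely when the prefix is fixed. A plain-Python sketch of the two approaches, which is roughly the per-element work the pandas <code>.str</code> accessor performs:</p>

```python
import re

ids = ['S1', 'S2', 'S25', 'S100']

# Fast path: if every id is a one-character prefix plus digits, slicing is enough
by_slice = [int(s[1:]) for s in ids]

# General path: a regex still works if the prefix varies in length
pattern = re.compile(r'(\d+)')
by_regex = [int(pattern.search(s).group(1)) for s in ids]

print(by_slice)  # [1, 2, 25, 100]
print(by_regex)  # [1, 2, 25, 100]
```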
|
python|pandas|string
| 1 |
1,905,739 | 62,338,119 |
Unable to render the data from models to view in django framework
|
<p>When I try to run the server, I get the error <code>TypeError: 'Organization' object is not iterable</code>. I am attaching the model and views for reference. Can you check and determine why I am getting this error?</p>
<p>models.py:</p>
<pre><code>from django.db import models
# Create your models here.
class Organization(models.Model):
name = models.CharField(max_length=200, null=True)
date_created = models.DateTimeField(auto_now_add=True, null=True)
def __str__(self):
return self.name
class Ticket_status(models.Model):
name = models.CharField(max_length=200, null=True)
def __str__(self):
return self.name
class Ticket(models.Model):
ticket_id = models.CharField(max_length=200, unique=True)
orgname = models.ForeignKey(Organization, null=True, on_delete=models.SET_NULL)
status = models.ForeignKey(Ticket_status, null=True, on_delete=models.SET_NULL)
date_created = models.DateTimeField(auto_now_add=True, null=True)
def __str__(self):
return self.ticket_id
</code></pre>
<p>views.py:</p>
<pre><code>from django.shortcuts import render
from django.http import HttpResponse
from .models import *
# Create your views here.
def organization(request, orgname):
organization_name=Organization.objects.get(id=orgname)
organization_count=Organization.objects.get(id=orgname).ticket_set.all()
return render(request,'ticket\organization1.html',{'organization_name':organization_name,' organization_count': organization_count})
</code></pre>
<p>This is the template file (ticket\organization1.html):</p>
<pre><code>{% extends 'ticket/main.html' %}
{% block content %}
<br>
<div class="container-fluid">
<div class="row">
<div class="col-sm-4">
<div class="card">
<div class="card-body">
<h5 class="card-title">Organization Name</h5>
<P> {% for t in organization_name %}
{{t.name}}
{% endfor %}
</P>
<p class="card-text"></p>
</div>
</div>
</div>
<div class="col-sm-4">
<div class="card">
<div class="card-body">
<h5 class="card-title">Active Ticket</h5>
<p class="card-text">
{{ organization_count.count }}
</p>
</div>
</div>
</div>
<div class="col-sm-4">
<div class="card">
<div class="card-body">
<h5 class="card-title">Closed Ticket</h5>
<p class="card-text">2.</p>
<!--a href="#" class="btn btn-primary">View</a-->
</div>
</div>
</div>
</div>
<div class="row">
<br>
<div>
<h2>On-Going Tickets</h2><br>
</div>
<table class="table table-hover">
<tr>
<th>S NO</th>
<th>Ticket ID</th>
<th>Organization</th>
<th>Ticket Status</th>
<br>
</tr>
{% for t in organization_name %}
<tr>
<td>{{t.Ticket_id}}</td>
<td>{{t.Ticket_id}}</td>
<td>{{t.organization}}</td>
<td>{{t.Ticket_Status}}</td>
</tr>
{% endfor %}
</table>
</div>
</div>
{% endblock %}
</code></pre>
|
<p>Your first </p>
<pre><code><P> {% for t in organization_name %}
{{t.name}}
</code></pre>
<p>should be</p>
<pre><code><P> {{ organization_name.name }} # organization_name is an object
</code></pre>
<p>your second</p>
<pre><code>{% for t in organization_name %}
</code></pre>
<p>should be </p>
<pre><code>{% for t in organization_count %} # organization_count is a list of tickets
<td>{{t.ticket_id}}</td>
<td>{{organization_name.name}}</td> # single object, same as above
<td>{{t.status}}</td>
</code></pre>
|
python|django|python-3.x|django-models|django-templates
| 0 |
1,905,740 | 19,491,936 |
Complex symmetric matrices in python
|
<p>I am trying to diagonalise a <em>complex</em> symmetric matrix in python. </p>
<p>I had a look at numpy and scipy linalg routines but they all seem to deal with either hermitian or real symmetric matrices.</p>
<p>What I am looking for is some way of obtaining the Takagi factorisation of my starting complex and symmetric matrix. This basically is the standard eigendecomposition
S = V D V^-1 but, as the starting matrix S is symmetric, the resulting V matrix should automatically be orthogonal, i.e. V.T = V^-1. </p>
<p>any help?</p>
<p>Thanks</p>
|
<p>Here is some code for calculating a Takagi factorization. It uses the eigenvalue decomposition of a Hermitian matrix. It is <strong>not</strong> intended to be efficient, fault tolerant, numerically stable, nor guaranteed correct for all possible matrices. An algorithm designed for this factorization is preferable, particularly if large matrices need to be factored. Even so, if you just need to factor some matrices and get on with your life, then using mathematical tricks such as this can be useful.</p>
<pre><code>import numpy as np
import scipy.linalg as la
def takagi(A):
    """Extremely simple and inefficient Takagi factorization of a
    symmetric, complex matrix A. Here we take this to mean A = U D U^T
    where D is a real, diagonal matrix and U is a unitary matrix. There
    is no guarantee that it will always work. """
    # Construct a Hermitian matrix.
    H = np.dot(A.T.conj(), A)
    # Calculate the eigenvalue decomposition of the Hermitian matrix.
    # The diagonal matrix in the Takagi factorization is the square
    # root of the eigenvalues of this matrix.
    (lam, u) = la.eigh(H)
    # The "almost" Takagi factorization. There is a conjugate here
    # so the final form is as given in the doc string.
    T = np.dot(u.T, np.dot(A, u)).conj()
    # T is diagonal but not real. That is easy to fix by a
    # simple transformation which removes the complex phases
    # from the resulting diagonal matrix. Note the conjugate on u:
    # it is needed when eigh returns complex eigenvectors.
    c = np.diag(np.exp(-1j*np.angle(np.diag(T))/2))
    U = np.dot(u.conj(), c)
    # Now A = np.dot(U, np.dot(np.diag(np.sqrt(lam)), U.T))
    return (np.sqrt(lam), U)
</code></pre>
<p>To understand the algorithm it is convenient to write out each step and see how it leads to the desired factorization. The code can then be made more efficient if need be.</p>
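<p>As a numerical sanity check (a self-contained sketch, assuming SciPy is available; note the conjugate on <code>u</code> when forming <code>U</code>, which matters when <code>eigh</code> returns complex eigenvectors), one can verify the factorization on a random complex symmetric matrix:</p>

```python
import numpy as np
import scipy.linalg as la

def takagi(A):
    # Same approach as above: eigendecompose A^H A, then fix the phases.
    H = np.dot(A.T.conj(), A)
    lam, u = la.eigh(H)
    T = np.dot(u.T, np.dot(A, u)).conj()
    c = np.diag(np.exp(-1j * np.angle(np.diag(T)) / 2))
    U = np.dot(u.conj(), c)
    return np.sqrt(lam), U

rng = np.random.RandomState(0)
M = rng.randn(4, 4) + 1j * rng.randn(4, 4)
A = M + M.T                       # complex symmetric: A == A.T, but not Hermitian

d, U = takagi(A)
print(np.allclose(A, U @ np.diag(d) @ U.T))    # factorization holds
print(np.allclose(U.conj().T @ U, np.eye(4)))  # U is unitary
```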
|
python|numpy|matrix|scipy|symmetric
| 2 |
1,905,741 | 16,598,914 |
How to print a for loop as a list
|
<p>So I have:</p>
<pre><code>s = (4,8,9), (1,2,3), (4,5,6)
for i, (a,b,c) in enumerate(s):
    k = [a,b,c]
    e = k[0]+k[1]+k[2]
    print e
</code></pre>
<p>It would print:</p>
<pre><code>21
6
15
</code></pre>
<p>But I want it to be:</p>
<pre><code>(21,6,15)
</code></pre>
<p>I tried using this but it's not what I wanted:</p>
<pre><code>print i,
</code></pre>
<p>So is this possible? </p>
|
<p>Here are a few options:</p>
<ul>
<li><p>Using tuple unpacking and a generator:</p>
<pre><code>print tuple(a+b+c for a, b, c in s)
</code></pre></li>
<li><p>Using <code>sum()</code> and a generator:</p>
<pre><code>print tuple(sum(t) for t in s)
</code></pre></li>
<li><p>Using <code>map()</code>:</p>
<pre><code>print tuple(map(sum, s))
</code></pre></li>
</ul>
|
python|for-loop
| 9 |
1,905,742 | 58,003,054 |
Text-based RPG Repeating Damage Done
|
<p>I'm a novice programmer working on a text-based RPG, and for some reason, when the enemy is hit with an arrow when you choose the bowman class, it does the same amount of damage multiple times when it is supposed to do random damage each time, and it doesn't subtract the enemy's health by the damage every time. My code is pretty messy, but please be patient. Also, if you have any advice for me in general, please let me know.</p>
<pre><code> while action == "attack":
if player_class == "warrior":
enemy_damage = 30
enemy_health = enemy_health - enemy_damage
elif player_class == "bowman":
target_general = randint(1, 3)
if target_general == 1:
action = input(f"You shot to the left of the {enemy}. What will you do now? Attack, evade or run?")
elif target_general == 2:
target_specific = randint(1, 5)
if target_specific == 5:
enemy_damage = "way too much"
enemy_health = 0
print("Critical hit!")
else:
enemy_damage = target_specific * 10
enemy_health = enemy_health - enemy_damage
else:
action = input(f"You shot to the right of the {enemy}. What will you do now? Attack, evade or run?")
if enemy_health < 0:
enemy_health = 0
if enemy_health == 0:
player_gold = player_gold + treasure
print(f"You defeated the {enemy}! You found {treasure} gold.")
action = leave()
elif action == "attack" and enemy_damage != 0:
action = input(f"You attacked the {enemy}, and did {str(enemy_damage)} damage. The enemy has {enemy_health} health left. What will you do now?Attack, evade or run?")
</code></pre>
|
<p>Found it.
You're not resetting <code>enemy_damage</code>, so after the first successful attack, even if you miss on the second shot, you will print the missed message, then the damage message.</p>
<pre><code>#This message is turn 1
You attacked the Thor, and did 10 damage. The enemy has 460 health left. What will you do now?Attack, evade or run?attack
#this message is turn 2
You shot to the left of the Thor. What will you do now? Attack, evade or run?attack
#this message is ALSO turn 2
You attacked the Thor, and did 10 damage. The enemy has 460 health left. What will you do now?Attack, evade or run?attack
</code></pre>
<p>Throw this in: </p>
<pre><code>while action == "attack":
enemy_damage = 0 # Reset the damage every turn
### The rest
</code></pre>
|
python
| 2 |
1,905,743 | 43,590,327 |
python pptx adding border to text
|
<p>I am using Python 2.7 with python-pptx
and need to add a border to some text I am writing.</p>
<p>I need it to be a simple box in a color that i specify around the text </p>
<p>I found <a href="http://python-pptx.readthedocs.io/en/latest/api/text.html" rel="nofollow noreferrer">here</a> the text related object but I don't find any mention to what I need, and I found <a href="https://stackoverflow.com/questions/42610829/python-pptx-changing-table-style-or-adding-borders-to-cells">here</a> a similar question about tables but with no answer... </p>
<p>Thanks </p>
|
<p>I'm not sure what objects in PowerPoint can have a border. I expect a paragraph can, but not sure a run of text can. You can determine this pretty quickly by experimenting with the PowerPoint UI.</p>
<p>In any case, that feature is not implemented in <code>python-pptx</code> yet.</p>
|
python|python-2.7|python-pptx
| 1 |
1,905,744 | 71,386,409 |
pandas dataframe conditional population of a new column
|
<p>I am working on manipulation of a column(Trend) in pandas DataFrame. Below is my source DataFrame. Currently I have set it to 0.</p>
<p><a href="https://i.stack.imgur.com/6bN12.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6bN12.png" alt="enter image description here" /></a></p>
<p>The logic I want to use to populate Trend column is below</p>
<ol>
<li><p>if df['Close'] > df.shift(1)['Down'] then 1</p>
</li>
<li><p>if df['Close'] < df.shift(1)['Up'] then -1</p>
</li>
<li><p>if any one of the above condition does not meet then, df.shift(1)['Trend']. if this value is NaN then set it to 1.</p>
</li>
</ol>
<p>Above code in plainText,</p>
<ol>
<li>if current close is greater then previous row value of <strong>Down</strong> column then 1</li>
<li>if current close is less than previous row value of <strong>Up</strong> column then -1</li>
<li>if any one of those conditions does not meet, then set previous row value of <strong>Trend</strong> column as long as its <strong>not NaN</strong>. if its NaN then set to 1</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Data as text</p>
<pre><code> Close Up Down Trend
3.138 NaN NaN 0
3.141 NaN NaN 0
3.141 NaN NaN 0
3.130 NaN NaN 0
3.110 NaN NaN 0
3.130 3.026432 3.214568 0
3.142 3.044721 3.214568 0
3.140 3.047010 3.214568 0
3.146 3.059807 3.214568 0
3.153 3.064479 3.214568 0
3.173 3.080040 3.214568 0
3.145 3.080040 3.214568 0
3.132 3.080040 3.214568 0
3.131 3.080040 3.209850 0
3.141 3.080040 3.209850 0
3.098 3.080040 3.205953 0
3.070 3.080040 3.195226 0
</code></pre>
<p><strong>Expected output</strong></p>
<p><a href="https://i.stack.imgur.com/wMqRt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wMqRt.png" alt="enter image description here" /></a></p>
|
<p>We could use <code>numpy.select</code> to select values depending on which condition is satisfied. Then pass the outcome of <code>numpy.select</code> to <code>fillna</code> to fill in missing "Trend" values with it (this is used to not lose existing "Trend" values). Then since NaN trend values must be filled with previous "Trend" value, we use <code>ffill</code> and fill the remaining NaN values with 1.</p>
<pre><code>import numpy as np
df['Trend'] = (df['Trend'].replace(0, np.nan)
                          .fillna(pd.Series(np.select([df['Close'] > df['Down'].shift(),
                                                       df['Close'] < df['Up'].shift()],
                                                      [1, -1], np.nan), index=df.index))
                          .ffill().fillna(1))
</code></pre>
<p>Output:</p>
<pre><code> Close Up Down Trend
0 3.138 NaN NaN 1.0
1 3.141 NaN NaN 1.0
2 3.141 NaN NaN 1.0
3 3.130 NaN NaN 1.0
4 3.110 NaN NaN 1.0
5 3.130 3.026432 3.214568 1.0
6 3.142 3.044721 3.214568 1.0
7 3.140 3.047010 3.214568 1.0
8 3.146 3.059807 3.214568 1.0
9 3.153 3.064479 3.214568 1.0
10 3.173 3.080040 3.214568 1.0
11 3.145 3.080040 3.214568 1.0
12 3.132 3.080040 3.214568 1.0
13 3.131 3.080040 3.209850 1.0
14 3.141 3.080040 3.209850 1.0
15 3.098 3.080040 3.205953 1.0
16 3.070 3.080040 3.195226 -1.0
</code></pre>
|
python|pandas|dataframe|numpy|fillna
| 1 |
1,905,745 | 9,363,670 |
Sort dictionary by number of values under each key
|
<p>Maybe this is obvious, but how do I sort a dictionary by the number of values in it?</p>
<p>like if this:</p>
<pre><code>{
"2010": [2],
"2009": [4,7],
"1989": [8]
}
</code></pre>
<p>would become this:</p>
<pre><code>{
"2009": [4,7],
"2010": [2],
"1989": [8]
}
</code></pre>
<p>How would I only return keys that have > 1 value? </p>
<pre><code> "2009": [4,7]
</code></pre>
|
<p>Dictionaries are unordered, so there is no way of sorting a dictionary itself. You can convert your dictionary to an ordered data type though. In Python 2.7 or above, you can use <code>collections.OrderedDict</code>:</p>
<pre><code>from collections import OrderedDict
d = {"2010": [2], "2009": [4,7], "1989": [8]}
ordered_d = OrderedDict(sorted(d.items(), key=lambda x: len(x[1])))
</code></pre>
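<p>For the second part of the question — returning only the keys that have more than one value — a dict comprehension is enough (shown with <code>items()</code>, which works on both Python 2.7 and Python 3):</p>

```python
d = {"2010": [2], "2009": [4, 7], "1989": [8]}

# Keep only the keys whose value list holds more than one item.
multi = {k: v for k, v in d.items() if len(v) > 1}
print(multi)  # {'2009': [4, 7]}
```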
|
python|dictionary
| 9 |
1,905,746 | 39,338,764 |
Finding the highest value with the given constraints
|
<pre><code>c = [416,585,464]
A0 = [100,50,200]
A1 = [100,100,200]
A2 = [100,150,100]
A3 = [100,200,0]
A4 = [100,250,0]
b = [300,300,300,300,300]
for num in A0,A1,A2,A3,A4:
    t0 = num[0]*1 + num[1]*1 + num[2]*1
    t1 = num[0]*0 + num[1]*1 + num[2]*0
    t2 = num[0]*0 + num[1]*0 + num[2]*0
    t3 = num[0]*0 + num[1]*0 + num[2]*1
    t4 = num[0]*1 + num[1]*0 + num[2]*0
    t5 = num[0]*0 + num[1]*1 + num[2]*1
    t6 = num[0]*1 + num[1]*1 + num[2]*0
    t7 = num[0]*1 + num[1]*0 + num[2]*1
</code></pre>
<p>Now check each of the values in <code>t0</code> against each of its corresponding values in the <code>b</code> array. If any of the values from <code>t0</code> are greater than <code>300</code>, then <code>t0</code> is discarded.</p>
<p>If not, then multiply each <code>t_</code> value by each corresponding <code>c</code> array value, and after that determine the highest value and print it.</p>
<p>For example: <code>t1</code> has <code>50,100,150,200,250</code>, all of which are equal to or below <code>300</code>, so we take <code>0*c[0] + 1*c[1] + 0*c[2]</code>, which gives us <code>585</code>. However, that isn't the highest value. The highest value is <code>1049</code>, which is acquired by <code>t5</code>. It has <code>250,300,250,200,250</code>. Taking <code>0*c[0] + 1*c[1] + 1*c[2]</code> gives <code>1049</code></p>
<p>I am stuck here.</p>
|
<p>I guess this does what you want—at least it produces sums from the data similar to those you mentioned in your question. I found your sample code is <em>very</em> misleading since it doesn't produce the kind of <code>t_</code> values you refer to in the written problem description below it.</p>
<pre><code>from itertools import compress
c = [416,585,464]
A0 = [100,50,200]
A1 = [100,100,200]
A2 = [100,150,100]
A3 = [100,200,0]
A4 = [100,250,0]
b = [300,300,300,300,300]
selectors = [(1, 1, 1), (0, 1, 0), (0, 0, 0), (0, 0, 1),
(1, 0, 0), (0, 1, 1), (1, 1, 0), (1, 0, 1)]
nums_limits = list(zip((A0, A1, A2, A3, A4), b))  # list() so it can be reused on every pass
maximum = None
for selector in selectors:
    if all(sum(compress(nums, selector)) <= limit for nums, limit in nums_limits):
        total = sum(compress(c, selector))
        if maximum is None or total > maximum:
            maximum = total

print(maximum)  # -> 1049
</code></pre>
<p>You can replace most of that with one (longish) <a href="https://docs.python.org/3/howto/functional.html#generator-expressions-and-list-comprehensions" rel="nofollow">generator expression</a> similar to the one in <a href="http://ideone.com/38FQTM" rel="nofollow">linked code</a> in one of @Stefan Pochmann's comments, so this does exactly the same thing:</p>
<pre><code>print(max(sum(compress(c, selector)) for selector in selectors
if all(sum(compress(nums, selector)) <= limit
for nums, limit in zip((A0, A1, A2, A3, A4), b))))
</code></pre>
|
python|arrays|list
| 0 |
1,905,747 | 55,473,806 |
Adding a column to dask dataframe, computing it through a rolling window
|
<p>Suppose I have the following code, to generate a dummy dask dataframe:</p>
<pre><code>import pandas as pd
import dask.dataframe as dd
pandas_dataframe = pd.DataFrame({'A' : [0,500,1000], 'B': [-100, 200, 300] , 'C' : [0,0,1.0] } )
test_data_frame = dd.from_pandas( pandas_dataframe, npartitions= 1 )
</code></pre>
<p>Ideally I would like to know what is the recommended way to add another column to the data frame, computing the column content through a rolling window, in a lazy fashion.</p>
<p>I came up with the following approach:</p>
<pre><code>import numpy as np
import dask.delayed as delay
@delay
def coupled_operation_example(dask_dataframe,
                              list_of_input_lbls,
                              fcn,
                              window_size,
                              init_value,
                              output_lbl):
    def preallocate_channel_data(vector_length, first_components):
        vector_out = np.zeros(len(dask_dataframe))
        vector_out[0:len(first_components)] = first_components
        return vector_out
    def create_output_signal(relevant_data, fcn, window_size, initiated_vec):
        pass  ## to be written; fcn would be a fcn accepting the sliding window
    initiated_vec = preallocate_channel_data(len(dask_dataframe), init_value)
    relevant_data = dask_dataframe[list_of_input_lbls]
    my_output_signal = create_output_signal(relevant_data, fcn, window_size, initiated_vec)
</code></pre>
<p>I was writing this, convinced that dask dataframe would allow me some slicing: they do not. So, my first option would be to extract the columns involved in the computations as numpy arrays, but so they would be eagerly evaluated. I think the penalty in performance would be significant. At the moment I create dask dataframes from h5 data, using h5py: so everything is lazy, until I write output files.</p>
<p>Up to now I was processing only data on a certain row; so I had been using:</p>
<pre><code> test_data_frame .apply(fcn, axis =1, meta = float)
</code></pre>
<p>I do not think there is an equivalent functional approach for rolling windows; am I right? I would like something like Seq.windowed in F# or Haskell. Any suggestion highly appreciated.</p>
|
<p>I have tried to solve it through a closure. I will post benchmarks on some data as soon as I have finalized the code. For now I have the following toy example, which seems to work, since the dask dataframe's apply method seems to preserve the row order.</p>
<pre><code>import numpy as np
import pandas as pd
import dask.dataframe as dd
number_of_components = 30
df = pd.DataFrame(np.random.randint(0,number_of_components,size=(number_of_components, 2)), columns=list('AB'))
my_data_frame = dd.from_pandas(df, npartitions = 1 )
def sumPrevious(previousState):
    def getValue(row):
        nonlocal previousState
        something = row['A'] - previousState
        previousState = row['A']
        return something
    return getValue

given_func = sumPrevious(1)
out = my_data_frame.apply(given_func, axis=1, meta=float)
df['computed'] = out.compute()
</code></pre>
<p>Now the bad news, I have tried to abstract it out, passing the state around and using a rolling window of any width, through this new function:</p>
<pre><code>def generalised_coupled_computation(previous_state, coupled_computation, previous_state_update):
    def inner_function(actual_state):
        nonlocal previous_state
        actual_value = coupled_computation(actual_state, previous_state)
        previous_state = previous_state_update(actual_state, previous_state)
        return actual_value
    return inner_function
</code></pre>
<p>Suppose we initialize the function with:</p>
<pre><code>init_state = df.loc[0]
coupled_computation = lambda act,prev : act['A'] - prev['A']
new_update = lambda act, prev : act
given_func3 = generalised_coupled_computation(init_state , coupled_computation, new_update )
out3 = my_data_frame.apply(given_func3, axis = 1 , meta = float)
</code></pre>
<p>Try to run it and be ready for surprises: the first element is wrong, possibly some pointer problems, given the odd result. Any insight? </p>
<p>Anyhow, if one passes primitive types, it seems to function.</p>
<hr>
<p>Update:</p>
<p>the solution is in using copy:</p>
<pre><code>import copy as copy
def new_update(act, previous):
    return copy.copy(act)
</code></pre>
<p>Now the function behaves as expected; of course it is necessary to adapt the update function and the coupled computation function if one needs more coupled logic</p>
|
python|pandas|numpy|dask|rolling-computation
| 0 |
1,905,748 | 47,628,870 |
Pyglet GL_QUADS and GL_POLYGON not working properly
|
<p>I'm trying to write a simple game and for some reason the graphics primitives aren't working properly on my machine (Win7/NVIDIA Quadro K2100M). I'm trying to draw a rectangle but whenever I use GL_QUADS or GL_POLYGON it comes with a weird bend in it. It works with GL_QUAD_STRIP, oddly, but that's really not ideal since I don't want the ones I"m drawing to be connected. I really have no idea what the problem could be...</p>
<p>Example code:
</p>
<pre><code>import pyglet
window = pyglet.window.Window(width=400, height=400)
batch = pyglet.graphics.Batch()
white = [255]*4
batch.add(4, pyglet.gl.GL_QUADS, None, ('v2i',[10,10,10,50,390,10,390,50]), ('c4B',white*4))
batch.add(4, pyglet.gl.GL_POLYGON, None, ('v2i',[10,60,10,110,390,60,390,110]), ('c4B',white*4))
batch.add(4, pyglet.gl.GL_QUAD_STRIP, None, ('v2i',[10,120,10,170,390,120,390,170]), ('c4B',white*4))
@window.event
def on_draw():
    batch.draw()
pyglet.app.run()
</code></pre>
<p><a href="https://i.stack.imgur.com/a9glp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a9glp.png" alt="How that looks on my computer"></a></p>
|
<p>I'm so stupid: I put the points in the wrong order so it drew (top left, bottom left, top right, bottom right) and of course this would be the result X_X</p>
|
python-3.x|opengl|pyglet
| 1 |
1,905,749 | 47,661,415 |
Matching two data frames' columns and store it in new column
|
<p>I have two data frames:</p>
<pre><code>df:
id id.1 weight
RoLu1976 Gr1969 50
MaRg1988 FuDa1989 10
FiKy1977 RoBa1983 12
MaTe1980 SeNd1998 23
Gr69 MaGe1977 72
</code></pre>
<p>And:</p>
<pre><code>df1:
id id.1
Gr1969 RoLu1976
FiKy1977 RoBa1983
</code></pre>
<p>I need to make a <code>weight</code> column in <code>df1</code> by matching <code>df1$id</code> and <code>df1$id.1</code> with <code>df$id</code> and <code>df$id.1</code>. </p>
<pre><code>df1:
id id.1 weight
Gr1969 RoLu1976 50
FiKy1977 RoBa1983 12
</code></pre>
<p>Sometimes the observations are <strong>exchanged</strong> in the columns, for example, <code>df's</code> first row and <code>df1's</code> first row:</p>
<pre><code>df:
id id.1 weight
Rolu1976 Gr1969 50
</code></pre>
<p>and</p>
<pre><code>df1:
id id.1
Gr1969 RoLu1976
</code></pre>
<p>It means that the column order doesn't matter for the matching.</p>
<p><strong>[EDIT]</strong></p>
<p>I tried to do it with the <code>inner_join</code> function from the <code>dplyr</code> package, and the <code>merge</code> function, but it is necessary to make all the combinations of the columns.
Is there a way to match them without specifying all the combinations?</p>
<p>Or is there a faster way to do it with the <strong>python pandas module</strong>?</p>
|
<p>Not sure if the interchange in the columns was intended or a mistake. Here is the solution.</p>
<hr>
<p>solution </p>
<pre><code>rbind(merge(df1, df2, by.x = c('id.1','id'), by.y = c('id','id.1')),
merge(df1, df2, by.x = c('id.1','id'), by.y = c('id.1','id')))
</code></pre>
<hr>
<p>output</p>
<pre><code> id.1 id weight
1 Gr1969 RoLu1976 50
2 RoBa1983 FiKy1977 12
</code></pre>
<hr>
<p>data</p>
<pre><code> df1 <- read.table(text='id id.1 weight
RoLu1976 Gr1969 50
MaRg1988 FuDa1989 10
FiKy1977 RoBa1983 12
MaTe1980 SeNd1998 23
Gr69 MaGe1977 72', header=TRUE)
df2 <- read.table(text='id id.1
Gr1969 RoLu1976
FiKy1977 RoBa1983', header=TRUE)
</code></pre>
|
r|python-3.x|pandas|dataframe|dplyr
| 3 |
1,905,750 | 37,429,728 |
Prediction is depending on the batch size in Keras
|
<p>I am trying to use keras for binary classification of an image.</p>
<p>My CNN model is well trained on the training data (giving ~90% training accuracy and ~93% validation accuracy). But during prediction, if I set batch size=15000 I get the Figure I output and if I set batch size=50000 I get Figure II as the output. Can someone please tell me what is wrong? The prediction should not depend on the batch size, right? </p>
<p>Code I am using for prediction :</p>
<p><code>y=model.predict_classes(patches, batch_size=50000,verbose=1)
y=y.reshape((256,256))
</code></p>
<p><a href="https://i.stack.imgur.com/Kiklb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Kiklb.png" alt="Figure 1"></a> <a href="https://i.stack.imgur.com/1oltb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1oltb.png" alt="Figure 2"></a></p>
<p>My model:-</p>
<pre><code>model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same',
input_shape=(img_channels, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
# let's train the model using SGD + momentum (how original).
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
optimizer=sgd,
metrics=['accuracy'])
</code></pre>
|
<p>Keras is standardizing input automatically in the <code>predict</code> function. The statistics needed for standardization are computed on a batch - that's why your outputs might depend on the batch size. You may solve this by:</p>
<ol>
<li>If Keras > 1.0 you could simply define your model in the functional API and apply a trained function to self-standardized data.</li>
<li>If you have your model trained - you could recover it as a Theano function and also apply it to self-standardized data.</li>
<li>If your data is not very big you could also simply set your batch size to the number of examples in your dataset.</li>
</ol>
<p><strong>UPDATE:</strong> here is a code for 2nd solution :</p>
<pre><code>import theano
input = model.layers[0].input # Gets input Theano tensor
output = model.layers[-1].output # Gets output Theano tensor
model_theano = theano.function(input, output) # Compiling theano function
# Now model_theano is a function which behaves exactly like your classifier
predicted_score = model_theano(example) # returns predicted_score for an example argument
</code></pre>
<p>Now if you want to use this new <code>theano_model</code> you should standardize the main dataset on your own (e.g. by subtracting the mean and dividing by the standard deviation of every pixel in your image) and apply <code>theano_model</code> to obtain scores for the whole dataset (you could do this in a loop iterating over examples or using <code>numpy.apply_along_axis</code> or <code>numpy.apply_over_axes</code> functions).</p>
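<p>As an illustration only (hypothetical data; your preprocessing should match however the training set was actually standardized), per-pixel self-standardization might look like:</p>

```python
import numpy as np

# Hypothetical dataset: 10 images, 3 channels, 32x32 pixels.
X = np.random.rand(10, 3, 32, 32).astype("float32")

# Use the whole dataset's per-pixel statistics, so the result no longer
# depends on which batch an example happens to land in.
mean = X.mean(axis=0, keepdims=True)
std = X.std(axis=0, keepdims=True)
X_std = (X - mean) / (std + 1e-8)
print(X_std.shape)  # (10, 3, 32, 32)
```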
<p><strong>UPDATE 2:</strong> in order to make this solution work, change </p>
<pre><code>model.add(Dense(nb_classes))
model.add(Activation('softmax'))
</code></pre>
<p>to: </p>
<pre><code>model.add(Dense(nb_classes, activation = "softmax"))
</code></pre>
|
python|machine-learning|neural-network|deep-learning|keras
| 7 |
1,905,751 | 34,287,941 |
Django multi table inheritance: Find and change parent model
|
<p>Suppose I have the following model structure in django: </p>
<pre><code>class A(models.Model):
    x = models.IntegerField()

    def copy(self):
        obj = self
        obj.pk = None
        obj.save()
        return obj


class B(A):
    y = models.IntegerField()

    def copy(self):
        # this method is what I am confused about
        new_parent = super(B, self).copy() # not sure about this
        obj = self
        obj.pk = None
        # how to set obj's parent model to 'new_parent'
        obj.save()
        return obj
</code></pre>
<p>I am not sure on how I can access the parent model's object, and how can I make this copy method work?</p>
<p>I have searched quite a bit and couldn't find any answer. Should I just use a one-to-one relation instead?</p>
|
<p>If you have normal parent-child models, you get an attribute in the child to access the parent. You can update this attribute with the new parent object. </p>
<p>Also, the way you create the parent object may not work; you need to call the method on that object.</p>
<p>So I will update child's <code>copy()</code> method as:</p>
<pre><code>class B(A):
    def copy(self):
        # this method is what I am confused about
        new_parent = self.a.copy() # change 'a' with appropriate attribute name
        obj = self
        obj.pk = None
        # set obj's parent model to 'new_parent'
        obj.a = new_parent
        obj.save()
        return obj
</code></pre>
|
python|django|inheritance
| 1 |
1,905,752 | 34,417,580 |
Gui for Particlefilter with Python
|
<p>I'm trying to implement a particle filter and I chose python for it because I kinda like python. By now I have written my GUI using tkinter and python 3.4.</p>
<p>I use the tkinter.canvas object to display a map (png image loaded with PIL) and then i create dots for each particle like:<br></p>
<blockquote>
<p>dot = canvas.create_oval(x, y, x + 1, y + 1)</p>
</blockquote>
<p>When the robot moves I calculate the new position of each particle from the robot's control command, the particle's position and the particle's alignment.
To move the particle tkinter.canvas has two methods:</p>
<blockquote>
<p>canvas.move()<br> canvas.coords()</p>
</blockquote>
<p>But both methods seem to update the GUI immediately, which is OK when there are about 100 particles but not if there are 200 - 5000 (what I actually should have in the beginning for the global localization). So my problem is the performance of the GUI.</p>
<p>So my actual question is: Is there a way in tkinter to stop the canvas from updating the gui, then change the gui and then update the gui again? Or can you recommend me a module that is better than tkinter for my use-case?</p>
|
<p>Your observation is incorrect. The canvas is <em>not</em> updated immediately. The oval isn't redrawn until the event loop is able to process events. It is quite possible to update thousands of objects before the canvas is redrawn. Though, the canvas isn't a high performance tool so moving thousands of objects at a high frame rate will be difficult.</p>
<p>If you are seeing the object being updated immediately it's likely because somewhere in your code you are either calling <code>update</code>, <code>update_idletasks</code>, or you are otherwise allowing the event loop to run.</p>
<p>The specific answer to your question, then, is to make sure that you don't call <code>update</code> or <code>update_idletasks</code>, or let the event loop process events, until you've changed the coordinates of all of your particles.</p>
<p>Following is a short example. When it runs, notice that all of the particles move at once in one second intervals. This is because all of the calculations are done before allowing the event loop to redraw the items on the canvas.</p>
<pre><code>import tkinter as tk
import random

class Example(tk.Frame):
    def __init__(self, parent):
        tk.Frame.__init__(self, parent)
        self.canvas = tk.Canvas(self, width=500, height=500, background="black")
        self.canvas.pack(fill="both", expand=True)

        self.particles = []
        for i in range(1000):
            x = random.randint(1, 499)
            y = random.randint(1, 499)
            particle = self.canvas.create_oval(x, y, x+4, y+4,
                                               outline="white", fill="white")
            self.particles.append(particle)

        self.animate()

    def animate(self):
        for i, particle in enumerate(self.particles):
            deltay = (2,4,8)[i%3]
            deltax = random.randint(-2,2)
            self.canvas.move(particle, deltax, deltay)
        self.after(30, self.animate)

if __name__ == "__main__":
    root = tk.Tk()
    Example(root).pack(fill="both", expand=True)
    root.mainloop()
</code></pre>
|
python|canvas|tkinter|particle-filter
| 1 |
1,905,753 | 7,521,297 |
sorting by a related item with ming on mongodb
|
<h2>setup</h2>
<p>A TurboGears2 project using ming as an ORM for mongodb. I'm used to working with relational databases and the Django ORM.</p>
<h2>question</h2>
<p>Ming claims to let to interact with mongodb like it's a relational database and a common thing to do in a relational database is sort a query by a property of a foreign key. With the Django ORM this is expressed with double underscores like so: <code>MyModel.objects.all().order_by('user__username')</code></p>
<p><strong>Is there an equivalent for this in ming?</strong></p>
|
<p>I have never used ming, but they seem to have a <code>sort</code> method that you can add to a query, check it out <a href="http://merciless.sourceforge.net/tour.html#querying-the-database" rel="nofollow">here</a>, not much in the form of documentation</p>
<p>I use <a href="http://mongoengine.org/" rel="nofollow">mongoengine</a>, it has great documentation and its very similar to the django ORM</p>
|
python|orm|mongodb|turbogears2|ming
| 1 |
1,905,754 | 7,034,498 |
Can descriptors be used on properties to provide some declarative information?
|
<p>I'm new to Python so forgive me if I'm not even using the right terminology... I'm using Python 3.2 and I'm trying to figure out whether I can decorate a class property with some declarative-style information.</p>
<p>In my mind it would look like this:</p>
<pre><code>class MyTestClass:
    def __init__(self, foo):
        self.foo = foo

    @property
    @somedeclarativeInfo("ABC",123)
    def radius(self):
        return self.__foo

    @radius.setter
    def radius(self, foo):
        self.__foo = foo
</code></pre>
<p>There are then two different things I'd want to do with the class:</p>
<p>A - Be able to interact with the foo property just like any other property (simple gets and sets)</p>
<p>B - Be able to dynamically find properties on a particular class that are decorated with this descriptor and be able to pull out the "ABC" and 123 values, etc.</p>
<p>I think maybe I should be creating a descriptor to accomplish what I want, but I'm not sure if I'm on the right track, or if this can be done.</p>
<p>Since my background is .Net I whipped up the following example to show what I want to do, in case that helps anyone understand my goal:</p>
<pre><code>using System;
using System.Reflection;

namespace SampleWithProperties
{
    public class MyCustomAttribute : Attribute
    {
        public string Val1;
        public string Val2;

        public MyCustomAttribute(string val1, string val2)
        {
            Val2 = val2;
            Val1 = val1;
        }
    }

    public class Foo
    {
        [MyCustomAttribute("abc","def")]
        public string PropertyA { get; set; }

        [MyCustomAttribute("xyz","X")]
        public int PropertyB { get; set; }

        public string PropertyC { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            // Show that we can figure out which properties have the custom attribute,
            // and that we can get the values for Val1 and Val2
            foreach(PropertyInfo propertyInfo in typeof(Foo).GetProperties())
            {
                Console.WriteLine("Found a property named "+propertyInfo.Name);
                foreach(Attribute attribute in propertyInfo.GetCustomAttributes(
                    attributeType:typeof(MyCustomAttribute),inherit:true))
                {
                    Console.WriteLine("Found a MyCustomAttribute on the property.");
                    MyCustomAttribute myCustomAttribute = attribute as MyCustomAttribute;
                    Console.WriteLine("Val1 = " + myCustomAttribute.Val1);
                    Console.WriteLine("Val2 = " + myCustomAttribute.Val2);
                }
                Console.WriteLine();
            }

            // Show that the properties can be used like normal
            Foo foo = new Foo {PropertyA = "X", PropertyB = 2, PropertyC = "Z"};
            Console.WriteLine("Created an instance of Foo just for fun. Its property values are "+
                foo.PropertyA+","+foo.PropertyB+","+foo.PropertyC);
        }
    }
}
</code></pre>
<p>Can this be done?</p>
|
<p>There is no simple way to do what you want with properties. You can't simply set attributes on or get attributes from items protected by a property.</p>
<pre><code>def declarativeInfo(*args, **kwargs):
    def wrapper(obj):
        for arg in args:
            setattr(obj, arg, arg)
        for k, v in kwargs.items():
            setattr(obj, k, v)
        return obj
    return wrapper

class MyTestClass:
    def __init__(self, foo):
        print(MyTestClass.__dict__)
        self.radius = self.Radius('foo')

    @declarativeInfo(bar="ABC", baz=123)
    class Radius(object):
        def __init__(self, foo):
            self.value = foo

a = MyTestClass('foo')
print(a.radius.value)
print(a.radius.bar)
<p>is the easiest way to do this. You can always, of course, make <code>value</code> a property.</p>
<p>If you really want <code>radius</code> to be a normal property, you can store the information elsewhere in a dict and retrieve it from <code>self.propdict</code> or something.</p>
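<p>For reference, a Python 3 sketch of the same decorator idea (the helper name <code>declarative_info</code> is hypothetical, not a library API; note that keyword arguments must be iterated with <code>kwargs.items()</code>):</p>

```python
# Hedged sketch of the class-decorator approach, Python 3 syntax.
def declarative_info(**kwargs):
    def wrapper(cls):
        for k, v in kwargs.items():  # .items() yields (key, value) pairs
            setattr(cls, k, v)       # attach each keyword as a class attribute
        return cls
    return wrapper

@declarative_info(bar="ABC", baz=123)
class Radius:
    def __init__(self, value):
        self.value = value

r = Radius(2.0)
```

<p>Instances then expose both the per-instance <code>value</code> and the class-level metadata (<code>r.bar</code>, <code>r.baz</code>).</p>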
|
python|python-3.x
| 1 |
1,905,755 | 16,573,802 |
Flask-SQLAlchemy how to delete all rows in a single table
|
<p>How do I delete all rows in a single table using Flask-SQLAlchemy?</p>
<p>Looking for something like this: </p>
<pre><code>>>> users = models.User.query.all()
>>> models.db.session.delete(users)
# but it errs out: UnmappedInstanceError: Class '__builtin__.list' is not mapped
</code></pre>
|
<p>Try <a href="http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.delete" rel="noreferrer"><code>delete</code></a>:</p>
<pre><code>models.User.query.delete()
</code></pre>
<p>From <a href="http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.delete" rel="noreferrer">the docs</a>: <code>Returns the number of rows deleted, excluding any cascades.</code></p>
|
python|sqlalchemy|flask-sqlalchemy
| 184 |
1,905,756 | 31,781,937 |
jupyter notebook jumps after evaluation of cell
|
<p>In my Jupyter notebook, every time I evaluate a cell at the bottom of the page, the notebook jumps up so that it hides part of the output.</p>
<p>Here is an example with pictures. In this picture I want to evaluate the last cell.
<a href="https://imgur.com/qzAB8Nw" rel="noreferrer">https://imgur.com/qzAB8Nw</a></p>
<p>After evaluation, the screen jumps up so I can't see the last part of the output.
<a href="https://imgur.com/GPQjRBt" rel="noreferrer">https://imgur.com/GPQjRBt</a></p>
<p>So every time I have to scroll up manually to get back the view. Is there a setting here I can tweak to eliminate this issue?</p>
<p>Here's information from the "about" page of Jupyter:</p>
<blockquote>
<p>The version of the notebook server is 3.2.1-2d95975 and is running on:
Python 2.7.5 |Anaconda 1.6.2 (32-bit)| (default, Jul 1 2013,
12:41:55) [MSC v.1500 32 bit (Intel)]</p>
<p>Current Kernel Information:</p>
<p>Python 2.7.5 |Anaconda 1.6.2 (32-bit)| (default, Jul 1 2013,
12:41:55) [MSC v.1500 32 bit (Intel)] Type "copyright", "credits" or
"license" for more information.</p>
<p>IPython 3.2.1</p>
</blockquote>
|
<p>You may add a few cells below your last <code>code cell</code> to avoid this issue.</p>
<p>When you press <code>ctrl+enter</code> to run the last cell (in fact, any cell), the DOM of the notebook removes the existing output. The browser therefore clears some content below it and scrolls up, but the Python output is displayed only after a short delay, so the output is only partly visible. I am unaware of any other method to overcome this issue.</p>
|
python|ipython|jupyter
| 8 |
1,905,757 | 40,503,037 |
how to combine three lists into one list in a specific format such as dictionary or JSON? Python
|
<p>I have three lists:</p>
<pre><code>imglist=['1.jpg', '12.jpg']
classlist=['class1', 'class5']
sentencelist=['Good for health.', 'Good luck.']
</code></pre>
<p>how to combine the list into following format?</p>
<pre><code>[ ['1.jpg','class1','Good for health.'],
['12.jpg','class5','Good luck.']]
</code></pre>
<p>Alternatively, if you know how, I'd like to combine the lists into dictionary or JSON format, such as: </p>
<pre><code>[{'img':'1.jpg','class':'class1','sentence':'Good for health.'},
{'img':'12.jpg','class':'class5','sentence':'Good luck'}]
</code></pre>
|
<p>Use <a href="https://docs.python.org/2/library/functions.html#zip" rel="nofollow noreferrer"><code>zip()</code></a> to combine the list as:</p>
<pre><code>>>> imglist=['1.jpg', '12.jpg']
>>> classlist=['class1', 'class5']
>>> sentencelist=['Good for health.', 'Good luck.']
# combining list
>>> zip(imglist, classlist, sentencelist)
[('1.jpg', 'class1', 'Good for health.'), ('12.jpg', 'class5', 'Good luck.')]
</code></pre>
<p>For converting it to <code>dict</code> of your format, use <code>zip</code> with <em>list comprehension</em> as:</p>
<pre><code>>>> key_list = ["img", "class", "sentence"]
>>> my_zipped_list = zip(imglist, classlist, sentencelist) # same list as above example
>>> [dict(zip(key_list, zipped_element)) for zipped_element in my_zipped_list]
[{'class': 'class1', 'img': '1.jpg', 'sentence': 'Good for health.'},
{'class': 'class5', 'img': '12.jpg', 'sentence': 'Good luck.'}]
</code></pre>
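<p>One note if you are on Python 3 (an assumption; the session above looks like Python 2): <code>zip()</code> returns an iterator there, so wrap it in <code>list()</code> if you need to reuse or print it:</p>

```python
imglist = ['1.jpg', '12.jpg']
classlist = ['class1', 'class5']
sentencelist = ['Good for health.', 'Good luck.']

# On Python 3, zip() is lazy; materialise it with list()
combined = list(zip(imglist, classlist, sentencelist))

key_list = ["img", "class", "sentence"]
records = [dict(zip(key_list, row)) for row in combined]
```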
|
python|json|list|dictionary
| 5 |
1,905,758 | 9,841,220 |
how to plot on a smaller scale
|
<p>I am using matplotlib and I'm running into problems when trying to plot large vectors:
sometimes I get a "MemoryError".
My question is whether there is any way to reduce the scale of the values that I need to plot.</p>
<p><img src="https://i.stack.imgur.com/Wt7Rr.png" alt="enter image description here"></p>
<p>In this example I'm plotting a vector with size 2647296!</p>
<p>is there any way to plot the same values on a smaller scale?</p>
|
<p>It is very unlikely that you have so much resolution on your display that you can see 2.6 million data points in your plot. A simple way to plot less data is to sample e.g. every 1000th point: <code>plot(x[::1000])</code>. If that loses too much and it is e.g. important to see the extremal values, you could write some code to split the long vector into suitably many parts and take the minimum and maximum of each part, and plot those:</p>
<pre><code>tmp = x[:len(x)-len(x)%1000]   # drop some points to make length a multiple of 1000
tmp = tmp.reshape((-1, 1000))  # split into pieces of 1000 consecutive points
# alternative: tmp = tmp.reshape((1000, -1)) to get 1000 pieces instead
figure(); hold(True)           # plot minimum and maximum in the same figure
plot(tmp.min(axis=1))          # one minimum per piece
plot(tmp.max(axis=1))          # one maximum per piece
</code></pre>
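<p>A quick NumPy-only check of the reshape trick (note the per-piece reduction runs along <code>axis=1</code> of the reshaped array; the data below is a made-up stand-in for the real vector):</p>

```python
import numpy as np

x = np.arange(10000)               # stand-in for the long data vector
tmp = x[:len(x) - len(x) % 1000]   # trim so the length is a multiple of 1000
tmp = tmp.reshape((-1, 1000))      # each row = 1000 consecutive points
mins = tmp.min(axis=1)             # one minimum per piece
maxs = tmp.max(axis=1)             # one maximum per piece
```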
|
python|matplotlib
| 9 |
1,905,759 | 68,295,523 |
How can I concatenate three layers in a Keras sequential model?
|
<p>I would like to concatenate three layers in my Keras sequential model. Is there a way to do so? I would like to concatenate them along <code>axis=2</code>.</p>
<p><a href="https://i.stack.imgur.com/6vqEN.png" rel="nofollow noreferrer">Here is how the model summary looks like for now.</a></p>
<p>Here is the code.</p>
<pre class="lang-py prettyprint-override"><code>model = Sequential()
model.add(keras.layers.InputLayer(input_shape=(seq_len, n_inputs)))
model.add(keras.layers.Conv1D(input_shape=(None, n_inputs,seq_len), filters=N_CONV_A, padding='same',kernel_size=F_CONV_A, strides=1, activation='relu'))
model.add(keras.layers.Conv1D(input_shape=(None, n_inputs,seq_len), filters=N_CONV_B, padding='same',kernel_size=F_CONV_B, strides=1, activation='relu'))
model.add(keras.layers.Conv1D(input_shape=(None, n_inputs,seq_len), filters=N_CONV_C, padding='same',kernel_size=F_CONV_C, strides=1, activation='relu'))
</code></pre>
|
<p>A sequential model cannot manage concatenation because the layers that are concatenated are computed <em>in parallel</em> with respect to the same input.</p>
<p>In a sequential model the output of a layer becomes the input of the next layer, while the concatenation requires to use the same input for many layers.</p>
<p>In this situation you should use the <a href="https://www.tensorflow.org/guide/keras/functional" rel="nofollow noreferrer">Functional API</a> or <a href="https://www.tensorflow.org/guide/keras/custom_layers_and_models" rel="nofollow noreferrer">Model subclassing</a>.</p>
<p>For example using the functional API the code becomes</p>
<pre><code>inputs = keras.layers.Input(shape=(n_inputs, seq_len))
convA = keras.layers.Conv1D(input_shape=(None, n_inputs, seq_len),filters=N_CONV_A, padding='same',kernel_size=F_CONV_A, strides=1, activation='relu')
convB = keras.layers.Conv1D(input_shape=(None, n_inputs, seq_len),filters=N_CONV_B, padding='same',kernel_size=F_CONV_B, strides=1, activation='relu')
convC = keras.layers.Conv1D(input_shape=(None, n_inputs, seq_len),filters=N_CONV_C, padding='same',kernel_size=F_CONV_C, strides=1, activation='relu')
concat = keras.layers.Concatenate(axis=-1)
out = concat([convA(inputs), convB(inputs), convC(inputs)])
</code></pre>
<p>Attention: the <code>input_shape</code> values in your code are not consistent with the expected ones.</p>
|
python|tensorflow|keras|concatenation
| 3 |
1,905,760 | 26,335,064 |
filtering in qtreeview in pyqt and to display only folders
|
<p>I'm working on a GUI which needs a qtreeview in the layout. I've written some sample code to get to know how qtreeview works, and I got stuck with a problem.</p>
<p>My problems are:
1. It should display only folders present in the given path.</p>
<p>2. Double-clicking a folder in the qtreeview should show the contents of the folder in the column view, with the owner as one of the columns (I mean: if I do an "ll" in the terminal I get a list with the owners of the folders; I want that information as well).</p>
<p>this is my code:</p>
<pre><code>from PyQt4 import QtCore, QtGui

try:
    _fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
    def _fromUtf8(s):
        return s

try:
    _encoding = QtGui.QApplication.UnicodeUTF8

    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig)

class Ui_Dialog(object):
    def setupUi(self, Dialog):
        Dialog.setObjectName(_fromUtf8("Dialog"))
        Dialog.resize(1150, 905)
        self.gridLayout_2 = QtGui.QGridLayout(Dialog)
        self.gridLayout_2.setObjectName(_fromUtf8("gridLayout_2"))
        self.groupBox = QtGui.QGroupBox(Dialog)
        self.groupBox.setObjectName(_fromUtf8("groupBox"))
        self.gridLayout = QtGui.QGridLayout(self.groupBox)
        self.gridLayout.setObjectName(_fromUtf8("gridLayout"))
        self.treeView = QtGui.QTreeView(self.groupBox)
        self.treeView.setObjectName(_fromUtf8("treeView"))
        self.gridLayout.addWidget(self.treeView, 0, 0, 1, 1)
        self.columnView = QtGui.QColumnView(self.groupBox)
        self.columnView.setObjectName(_fromUtf8("columnView"))
        self.gridLayout.addWidget(self.columnView, 0, 1, 1, 1)
        self.gridLayout_2.addWidget(self.groupBox, 0, 0, 1, 2)
        spacerItem = QtGui.QSpacerItem(1073, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Minimum)
        self.gridLayout_2.addItem(spacerItem, 1, 0, 1, 1)
        self.pushButton = QtGui.QPushButton(Dialog)
        self.pushButton.setObjectName(_fromUtf8("pushButton"))
        self.gridLayout_2.addWidget(self.pushButton, 1, 1, 1, 1)
        self.fileSystemModel = QtGui.QFileSystemModel(self.treeView)
        self.fileSystemModel.setFilter(QtCore.QDir.AllDirs | QtCore.QDir.NoDotAndDotDot | QtCore.QDir.AllEntries)
        self.fileSystemModel.setReadOnly(False)
        self.root = self.fileSystemModel.setRootPath('/home/hamanda/present_wrkng_python')
        self.treeView.setModel(self.fileSystemModel)
        self.treeView.setRootIndex(self.root)

        self.retranslateUi(Dialog)
        QtCore.QMetaObject.connectSlotsByName(Dialog)

    def retranslateUi(self, Dialog):
        Dialog.setWindowTitle(_translate("Dialog", "Dialog", None))
        self.groupBox.setTitle(_translate("Dialog", "List of folders", None))
        self.pushButton.setText(_translate("Dialog", "Quit", None))

if __name__ == "__main__":
    import sys
    app = QtGui.QApplication(sys.argv)
    Dialog = QtGui.QDialog()
    ui = Ui_Dialog()
    ui.setupUi(Dialog)
    Dialog.show()
    sys.exit(app.exec_())
</code></pre>
<p>Please help me with this I'm new to pyqt programming</p>
|
<p>To get directories only:</p>
<pre><code>self.filemodel.setFilter(QtCore.QDir.AllDirs|QtCore.QDir.NoDotAndDotDot)
</code></pre>
<p>to have a double-click in the tree reset the column view:</p>
<pre><code>ui.treeView.doubleClicked.connect(ui.columnView.setRootIndex)
</code></pre>
<p>The column view is not the one you want to display details, however; the treeView or listView would be the ones to use. As you can see, the treeView already gives some details by default. I don't know how to get the owner via this QFileSystemModel, you would probably have to subclass.</p>
<p>To display files on the one side and only folders on the other, you would either need two models, a proxy model, or a custom model.</p>
|
python|pyqt4|qtreeview
| 1 |
1,905,761 | 32,468,801 |
How to get href from an a tag inside a div
|
<p>I am using BeautifulSoup. Somehow I cannot extract the href inside the a tags; no matter what I do it returns errors to me. Here is the function I am using:</p>
<pre><code>import requests
from bs4 import BeautifulSoup

def scrape_a(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.content)
    news = soup.find_all("div", attrs={"class": "news"})
    return news
</code></pre>
<p>the html data structure is</p>
<pre><code><div class="news">
<a href="www.link.com">
<h2 class="heading">
Kenyan police foil potential bomb attack in Nairobi mall
</h2>
<div class="teaserImg">
<img alt="" border="0" height="124" src="/image">
</div>
<p> text </p>
</a>
</div>
</code></pre>
<p>What I want to extract from these is the href and the h2 with class='heading'. Whenever I try to get both, I get the error "'NoneType' object has no attribute '__getitem__'".</p>
|
<p>How about something like this?</p>
<pre><code>from bs4 import BeautifulSoup

def get_news_class_hrefs(html):
    """
    Finds all urls pointed to by all links inside
    'news' class div elements
    """
    soup = BeautifulSoup(html, 'html.parser')
    links = [a['href'] for div in soup.find_all("div", attrs={"class": "news"})
             for a in div.find_all('a')]
    return links
# example html copied from question
html="""<div class="news">
<a href="www.link.com">
<h2 class="heading">
Kenyan police foil potential bomb attack in Nairobi mall
</h2>
<div class="teaserImg">
<img alt="" border="0" height="124" src="/image">
</div>
<p> text </p>
</a>"""
get_news_class_hrefs(html)
# Output:
# [u'www.link.com']
</code></pre>
|
python|tags|beautifulsoup
| 0 |
1,905,762 | 32,589,348 |
Speed up Python/Numpy code
|
<p>The following code calculates local contrasts over the whole picture, and my version is really slow. I tried parallelising with 'Pool' from the 'multiprocessing' module, but it only sped things up by 10%.<br>
Can you help me speed it up more?</p>
<pre><code>#pic: gray value picture (large 2d-array)
#xvar,yvar: scalar values, e.g. 200
contrast = [[np.std(pic[stepx-xvar:stepx+xvar, stepy-yvar:stepy+yvar])*2
             for stepy in np.arange(yvar, np.int(pic.shape[1]-yvar), 1)]
            for stepx in np.arange(xvar, np.int(pic.shape[0]-xvar), 1)]
</code></pre>
|
<p>The most-straightforward way would be:</p>
<pre><code>from skimage.util import view_as_windows
windows = view_as_windows(pic, (xvar, yvar))
contrast = 2 * np.std(windows, axis=(2,3))
</code></pre>
<p>This blows up the picture to an array containing all the windows, but without additional memory overhead. The advantage is that you get rid of the Python loop/function-call overhead. It's based on a Numpy "stride-trick", see for the implementation <a href="https://github.com/scikit-image/scikit-image/blob/master/skimage/util/shape.py#L107" rel="nofollow noreferrer">here</a> (or alternatively, the <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/image.py#L242" rel="nofollow noreferrer">scikit-learn variant</a>).</p>
<p>There is a method which scales a lot better though, see the answers here: <a href="https://stackoverflow.com/q/18419871">improving code efficiency: standard deviation on sliding windows</a></p>
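<p>The same stride-trick can be sketched with NumPy alone via <code>sliding_window_view</code> (NumPy >= 1.20), if you'd rather not depend on scikit-image; the tiny array below is just a stand-in for the real picture:</p>

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view  # NumPy >= 1.20

pic = np.arange(25, dtype=float).reshape(5, 5)   # stand-in picture
windows = sliding_window_view(pic, (3, 3))       # shape (3, 3, 3, 3): every 3x3 window, no copy
contrast = 2 * windows.std(axis=(2, 3))          # 2*std over each window
```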
|
python|performance|numpy
| 0 |
1,905,763 | 28,292,996 |
Python: convert string to date was failed in float datatype using dict zip
|
<p>I would like to convert a string to a number of seconds. The converter uses 'm' for minutes, 'h' for hours and 'd' for days, for instance: '1d3h50m'.
It gives a correct answer if I put in exact whole numbers, but the problem is that it is wrong if I use a float number, for example: '1.5d3h50m'. </p>
<p>Here is my script:</p>
<pre><code>import re

def compute_time(hour_string):
    numbers = [int(i) for i in re.split('d|h|m|s', hour_string) if i != '']
    words = [i for i in re.split('\d', hour_string) if i != '']
    combined = dict(zip(words, numbers))
    return combined.get('d', 0) * 86400 + combined.get('h', 0) * 3600 + combined.get('m', 0) * 60 + combined.get('s', 0)

print compute_time('1.5h15m5s')
</code></pre>
<p>Could someone tell me how to make this work?</p>
|
<p>As pointed out, you can use <code>float</code> instead of <code>int</code>, but that leads to some weird combinations of what you can do. I'd also simplify to find stuff up until valid <code>dhms</code> as pairs, then sum over those, eg:</p>
<pre><code>import re

def compute_time(text):
    scale = {'d': 86400, 'h': 3600, 'm': 60, 's': 1}
    return sum(float(n) * scale[t] for n, t in re.findall('(.*?)([dhms])', text))
</code></pre>
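<p>A self-contained check of this approach (the function is repeated so the snippet runs on its own):</p>

```python
import re

def compute_time(text):
    scale = {'d': 86400, 'h': 3600, 'm': 60, 's': 1}
    # findall yields (number, unit) pairs, e.g. [('1.5', 'h'), ('15', 'm'), ('5', 's')]
    return sum(float(n) * scale[t] for n, t in re.findall(r'(.*?)([dhms])', text))
```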
|
python
| 4 |
1,905,764 | 34,834,850 |
How would I write NumPy argmode()?
|
<p>I know that <code>argmax()</code> returns the indices of the maximum values along an axis.</p>
<p>I also know that in the case of multiple occurrences of the maximum values, the index corresponding to the first occurrence is returned.</p>
<p><code>argmax()</code> works perfectly when you want to find the maximum value and its index. How would a numpy.argmode() function be written? </p>
<p>In other words how would a function that calculates the mode value in a numpy array and gets the index of the first occurrence be written?</p>
<p><em>Just so everyone knows there is no numpy.argmode but the functionality of such a function is what I seek.</em></p>
<p>I understand that the mode would have multiple occurrences. We should be able to get it to behave like argmax where if we have multiple occurrences, it simply returns the value and index of the first occurrence.</p>
<p>An example of what I would want is:</p>
<pre><code>a = numpy.array([ 6, 3, 4, 1, 2, 2, 2])
numberIWant = numpy.argmode(a)
print(numberIWant)
# should print 4 (the index of the first occurrence of the mode)
</code></pre>
<p>I tried using:</p>
<pre><code>stats.mode(a)[0][0]
numpy.argwhere(a==num)[0][0]
</code></pre>
<p>This did work but I'm looking for a more efficient and concise solution.
Any ideas? </p>
|
<p>If you want to stay within NumPy, you can use some of the extra returns of <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.unique.html" rel="nofollow"><code>np.unique</code></a> to get what you want:</p>
<pre><code>>>> _, idx, cnt = np.unique(a, return_index=True, return_counts=True)
>>> idx[np.argmax(cnt)]
4
</code></pre>
<p><strong>EDIT</strong></p>
<p>To provide some context on what is going on... <code>np.unique</code> always returns a sorted array of unique values. The optional <code>return_index</code> provides another output array, with the index in which the first occurrence of each unique value happens. And the optional <code>return_counts</code> provides an extra output with the number of occurrences of each unique value. With those building blocks, all you need to do is return the item of the index array at the position where the highest count happens.</p>
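<p>Putting the pieces together on the question's example array:</p>

```python
import numpy as np

a = np.array([6, 3, 4, 1, 2, 2, 2])
_, idx, cnt = np.unique(a, return_index=True, return_counts=True)
argmode = idx[np.argmax(cnt)]   # index of the first occurrence of the mode
```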
|
python|math|numpy
| 3 |
1,905,765 | 34,674,152 |
Python PHPSESSID missing
|
<p>I'm trying to get PHPSESSID. What am I doing wrong?</p>
<pre><code>import requests
k = requests.get('http://index.hu')
for cookie in k.cookies:
    print (cookie.name, cookie.value)
</code></pre>
<p>Results:</p>
<pre><code>('INX_CHECKER2', '1')
('mobile_detect', 'desktop')
</code></pre>
<p>When I open the page it has PHPSESSID in cookies.</p>
|
<p>The PHPSESSID is not set by the main page as was already discovered in the comments of your question.
It is set by making a request to <a href="http://index.hu/ident" rel="nofollow">http://index.hu/ident</a></p>
<p>So by altering your code to:
</p>
<pre><code>import requests
k = requests.get('http://index.hu/ident')
for cookie in k.cookies:
    print (cookie.name, cookie.value)
</code></pre>
<p>I get the following results:</p>
<pre><code>INX_CHECKER2 1
PHPSESSID 7hq1o1rqe9ft78h0d2agelcf90
inx_checker2 1
</code></pre>
|
python|cookies|sessionid
| 0 |
1,905,766 | 27,092,032 |
Check if a Non-ASCII word exists in a string List
|
<p>I have this list scraped from a web page</p>
<pre><code>scraped = ['24 hour front desk', 'Bar / Lounge', 'Business centre', 'Café']
</code></pre>
<p>I want to check whether <code>Cafe</code> exists in the list or not?</p>
<p>I am doing this:</p>
<pre><code>if "Caf" in scraped:
print("Yes")
else:
print("No")
</code></pre>
<p>It always prints <em>No</em>.
The reason I am searching for <code>Caf</code> is that, in the scraped list, the word always appears with a non-ASCII character: <code>Café</code>.</p>
<p>How can I perform this check without doing this <code># -*- coding: utf-8 -*-</code>?</p>
|
<pre><code>if "Caf" in scraped:
</code></pre>
<p>checks if the literal string "Caf" is in the list, which it is not.</p>
<p>To check if "Caf" is in any of the strings in the list, use:</p>
<pre><code>if any("Caf" in s for s in scraped)
</code></pre>
<p>To check for non-ascii characters, you can always use unicode escape sequences:</p>
<pre><code>>>> "Caf\xe9" == 'Café'
True
</code></pre>
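<p>Putting both ideas together on the question's data (runnable as-is):</p>

```python
scraped = ['24 hour front desk', 'Bar / Lounge', 'Business centre', 'Caf\xe9']

# substring test against every element, not against the list itself
found = any("Caf" in s for s in scraped)
print("Yes" if found else "No")
```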
|
python|python-3.x|unicode|beautifulsoup
| 3 |
1,905,767 | 27,030,233 |
In behave, how do you run a scenario only?
|
<p>I have a 'behave' feature that has a lot of tests on it.</p>
<p>I only need to run a specific scenario for development needs.</p>
<p>How do I do it? </p>
<p>(preferably on the command line)</p>
|
<p>To run only a single scenario you can use <code>-n</code> with the name of the scenario:</p>
<pre><code>$ behave -n 'clicking the button "foo" should bar the baz'
</code></pre>
<p>I'm using single quotes above to keep the name of the scenario as <em>one</em> argument for <code>-n</code>. Otherwise, the shell will pass each word of the scenario name as a separate argument.</p>
|
python|bdd|python-behave
| 35 |
1,905,768 | 23,188,191 |
Deploying Django/Python 3.4 to Heroku
|
<p>I am trying to deploy my first example app with Django/Heroku using the Django/Heroku Getting Started Tutorial. </p>
<p>My tools: Python 3.4 and Windows 7 PowerShell.</p>
<p>My challenge: deploying to Heroku fails and I am not sure why. Upon my first "git push" I saw that python-2.7.0 was used by default. I then added a <code>runtime.txt</code> (python-3.4.0) file in the app root.</p>
<p>Here is what happens when I run <code>git push heroku master</code></p>
<pre><code>-----> Python app detected
-----> Preparing Python runtime (python-3.4.0)
-----> Installing Setuptools (2.1)
-----> Installing Pip (1.5.4)
-----> Installing dependencies using Pip (1.5.4)
Exception:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.4/site-packages/pip-1.5.4-py3.4.egg/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/app/.heroku/python/lib/python3.4/site-packages/pip-1.5.4-py3.4.egg/pip/commands/install.py", line 262, in run
for req in parse_requirements(filename, finder=finder, options=options, session=session):
File "/app/.heroku/python/lib/python3.4/site-packages/pip-1.5.4-py3.4.egg/pip/req.py", line 1546, in parse_requirements
session=session,
File "/app/.heroku/python/lib/python3.4/site-packages/pip-1.5.4-py3.4.egg/pip/download.py", line 275, in get_file_content
content = f.read()
File "/app/.heroku/python/lib/python3.4/codecs.py", line 313, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
Storing debug log for failure in /app/.pip/pip.log
! Push rejected, failed to compile Python app
</code></pre>
<p>Here the content of my <code>requirements.txt</code> file (created with <code>pip freeze > requirements.txt</code>)</p>
<pre><code>Django==1.6.2
dj-database-url==0.3.0
dj-static==0.0.5
django-toolbelt==0.0.1
gunicorn==18.0
psycopg2==2.5.2
pystache==0.5.3
static==1.0.2
</code></pre>
<p>Here my <code>Procfile</code> (btw: gunicorn seems to be a Unix "only" command and does not work for Windows; <a href="http://www.swarley.me.uk/blog/2014/02/24/create-a-django-development-environment-on-64-bit-windows-for-heroku-deployment/">read here</a>):</p>
<pre><code>web: gunicorn mytodo.wsgi
</code></pre>
<p>The Heroku tutorial does not mention a <code>setup.py</code> file, <a href="https://github.com/heroku/heroku-buildpack-python/blob/master/bin/detect">but it seems that one is necessary</a>, so I simply copied a template.... not my preferred solution, but I did not know what else to do.</p>
<pre><code>setup(
    name='mysite',
    version='0.1.0',
    install_requires=[],  # Don't put anything here, just use requirements.txt
    packages=['mysite'],
    package_dir={'mysite': 'src/mysite'},
)
</code></pre>
<p>What could be going on:</p>
<ul>
<li>The unicode error message could stem from the <code>Procfile</code>. Somewhere online I read that it has to be an ASCII file, but I am not sure how to declare that, as the Procfile has no file extension.</li>
<li>The setup.py file is wrong.</li>
</ul>
<p>Any help is appreciated. Thanks!</p>
|
<p>I encountered this exact problem during my own attempt to deploy a Django app to Heroku on Windows 7. The cause turned out to be this: The command <code>pip freeze >requirements.txt</code> encodes the file in UTF-16 format. Heroku expects requirements.txt to be ansi-encoded.</p>
<p>To fix it, I opened requirements.txt in Notepad, went to File->Save As, and set the Encoding to ANSI before saving again. After git-committing the new requirements.txt, I was able to run <code>git push heroku master</code> and it worked as expected.</p>
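<p>If you prefer to avoid the Notepad dialog, a small Python sketch can do the same re-encoding (the temp-file setup below only simulates what <code>pip freeze</code> produces on PowerShell; on a real project you would point <code>path</code> at your own requirements.txt):</p>

```python
# Hedged sketch: re-encode a UTF-16 requirements.txt as ASCII.
import tempfile
from pathlib import Path

path = Path(tempfile.gettempdir()) / "requirements.txt"
path.write_text("Django==1.6.2\ngunicorn==18.0\n", encoding="utf-16")  # simulate PowerShell's UTF-16 pip freeze output

text = path.read_bytes().decode("utf-16")  # decode (handles the BOM)
path.write_text(text, encoding="ascii")    # rewrite without BOM, ANSI/ASCII encoded
```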
|
python|django|heroku|pip
| 4 |
1,905,769 | 8,046,142 |
Can I define a scope anywhere in Python?
|
<p>Sometimes I find that I have to use functions with long names such as <code>os.path.abspath</code> and <code>os.path.dirname</code> a <strong>lot</strong> in just a few lines of code. I don't think it's worth littering the global namespace with such functions, but it would be incredibly helpful to be able to define a scope around the lines where I need those functions. As an example, this would be perfect:</p>
<pre><code>import os, sys
closure:
abspath = os.path.abspath
dirname = os.path.dirname
# 15 lines of heavy usage of those functions
# Can't access abspath or dirname here
</code></pre>
<p>I'd love to know if this is doable somehow</p>
|
<p>Python doesn't have a temporary namespace tool like <em><a href="http://www.gnu.org/software/emacs/emacs-lisp-intro/html_node/let.html" rel="noreferrer">let</a></em> in Lisp or Scheme.</p>
<p>The usual technique in Python is to put names in the current namespace and then take them out when you're done with them. This technique is used heavily in the standard library:</p>
<pre><code>abspath = os.path.abspath
dirname = os.path.dirname
# 15 lines of heavy usage of those functions
a = abspath(somepath)
d = dirname(somepath)
...
del abspath, dirname
</code></pre>
<p>An alternative technique to reduce typing effort is to shorten the recurring prefix:</p>
<pre><code>>>> import math as m
>>> m.sin(x / 2.0) + m.sin(x * m.pi)
>>> p = os.path
...
>>> a = p.abspath(somepath)
>>> d = p.dirname(somepath)
</code></pre>
<p>Another technique commonly used in the standard library is to just not worry about contaminating the module namespace and just rely on <em>__all__</em> to list which names you intend to make public. The effect of <em>__all__</em> is discussed in the <a href="http://docs.python.org/reference/simple_stmts.html?highlight=__all__#the-import-statement" rel="noreferrer">docs for the import statement</a>.</p>
<p>Of course, you can also create your own namespace by storing the names in a dictionary (though this solution isn't common):</p>
<pre><code>d = dict(abspath = os.path.abspath,
         dirname = os.path.dirname)
...
a = d['abspath'](somepath)
d = d['dirname'](somepath)
</code></pre>
<p>Lastly, you can put all the code in a function (which has its own local namespace), but this has a number of disadvantages:</p>
<ul>
<li>the setup is awkward (an atypical and mysterious use of functions)</li>
<li>you need to declare as <em>global</em> any assignments you want to do that aren't temporary.</li>
<li>the code won't run until you call the function</li>
</ul>
<blockquote>
<pre><code> def temp():                  # disadvantage 1: awkward setup
     global a, d              # disadvantage 2: global declarations
     abspath = os.path.abspath
     dirname = os.path.dirname
     # 15 lines of heavy usage of those functions
     a = abspath(somepath)
     d = dirname(somepath)

 temp()                       # disadvantage 3: invoking the code
</code></pre>
</blockquote>
|
python|scope
| 20 |
1,905,770 | 1,318,736 |
Ctypes pro and con
|
<p>I have heard that Ctypes can cause crashes (or stop errors) in Python and windows. Should I stay away from their use? Where did I hear? It was back when I tried to control various aspects of windows, automation, that sort of thing.</p>
<p>I hear of swig, but I see Ctypes more often than not. Any danger here? If so, what should I watch out for?</p>
<p>I did search for ctype pro con python.</p>
|
<p>In terms of robustness, I still think swig is somewhat superior to ctypes, because it's possible to have a C compiler check things more thoroughly for you; however, this is pretty moot by now (while it loomed larger in earlier ctypes versons), thanks to the <code>argtypes</code> feature @Mark already mentioned. However, there is no doubt that the runtime overhead IS much more significant for ctypes than for swig (and sip and boost python and other "wrapping" approaches): so, I think of ctypes as a convenient way to reach for a few functions within a DLL when the calls happen outside of a key bottleneck, not as a way to make large C libraries available to Python in performance-critical situations.</p>
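<p>To make the <code>argtypes</code> point concrete, here is a small POSIX-oriented sketch (library name resolution is platform-dependent; on Windows the C runtime symbols live elsewhere, so treat this as illustrative):</p>

```python
import ctypes
import ctypes.util

# Load the C math library; find_library("m") is POSIX-oriented.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

libm.sqrt.argtypes = [ctypes.c_double]  # lets ctypes type-check each call
libm.sqrt.restype = ctypes.c_double     # otherwise the result is misread as an int

print(libm.sqrt(9.0))
```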
<p>For a nice middle way between the runtime performance of swig (&c) and the convenience of ctypes, with the added bonus of being able to add more code that can use a subset of Python syntax yet run at just about C-code speeds, also consider <a href="http://cython.org/" rel="noreferrer">Cython</a> -- a python-like language that compiles down to C and is specialized for writing Python-callable extensions and wrapping C libraries (including ones that may be available only as static libraries, not DLLs: ctypes wouldn't let you play with <em>those</em>;-).</p>
|
python|winapi|ctypes
| 13 |
1,905,771 | 70,751,659 |
Iteration using pandas dataframe values
|
<p>While iterating, if the condition is satisfied then the value is changed.
However, the original dataframe remains unchanged.
Is there a way to solve this?</p>
<p>(I know itertuples, iterrows and loc are available, but I want to use values, which is faster.)</p>
<pre><code>import pandas as pd

df = pd.read_csv(filename)
for value in df.values:
    if A:  # some condition on the row
        value[2] = 3
        print(value)  # changed
df.to_csv(newfilename)  # unchanged
</code></pre>
|
<p>The CSV should also be changed. I just tested this and it changed:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'A': [0,1,0,0,1,1,0,1,0],
    'B': [1,0,1,1,0,0,1,0,1],
})

for value in df.values:
    if value[0] == 0:
        value[1] = 5
        print(value)  # changed

df  # also changed
df.to_excel("output.xlsx")  # also changed
</code></pre>
|
python|pandas|csv|iteration
| 0 |
1,905,772 | 70,956,087 |
It's showing me a Module Not Found error but the module is installed on my computer
|
<p>Whenever I run my code I get a ModuleNotFoundError for the tkvideoplayer module:</p>
<pre><code>"Traceback (most recent call last):
File "C:\Code\Proj1\Video_Player.py", line 2, in <module>
from tkvideoplayer import TkinterVideo
ModuleNotFoundError: No module named 'tkvideoplayer'"
</code></pre>
<pre><code>from tkinter import *
from tkvideoplayer import TkinterVideo
from tkinter.filedialog import askopenfile

window = Tk()
window.title("My Video Player")
window.geometry("500x500")
window.config(bg="Turquoise")

heading = Label(window, text="My Video Player", bg="Orange Red", fg="white", font="4 none bold")
heading.config(anchor=CENTER)

def openFile():
    file = askopenfile(mode="r", filetypes=[('Video Files', '*.mp4', '*.mov')])
    if file is not None:
        global filename
        filename = file.name
        global videoPlayer
        videoPlayer = TkinterVideo(master=window, scaled=True, pre_load=False)
        videoPlayer.load(r"{}".format(filename))
        videoPlayer.pack(expand=True, fill="both")
        videoPlayer.play()

def playFile():
    videoPlayer.play()

def stopFile():
    videoPlayer.stop()

def pauseFile():
    videoPlayer.pause()

openbtn = Button(window, text="Open", command=lambda: openFile())
stopbtn = Button(window, text="Stop", command=lambda: stopFile())
playbtn = Button(window, text="Play", command=lambda: playFile())
pausebtn = Button(window, text="Pause", command=lambda: pauseFile())

openbtn.pack(side=TOP, pady=2)
stopbtn.pack(side=TOP, pady=4)
playbtn.pack(side=TOP, pady=3)
pausebtn.pack(side=TOP, pady=5)
heading.pack()

window.mainloop()
</code></pre>
|
<p>You need to install the library/module first.
Use this link:</p>
<p><a href="https://www.geeksforgeeks.org/how-to-install-tkinter-in-windows/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/how-to-install-tkinter-in-windows/</a></p>
<p>Then you need to install the tkvideoplayer module (typically <code>pip install tkvideoplayer</code>; check the README linked below for the exact package name):</p>
<p><a href="https://github.com/huskeee/tkvideo/blob/master/README.md" rel="nofollow noreferrer">https://github.com/huskeee/tkvideo/blob/master/README.md</a></p>
<p>Finally, you can run your code.</p>
|
python|error-handling
| 0 |
1,905,773 | 70,777,574 |
PyCharm Virtual Environment Setup
|
<p>I'm trying to get on with Python coding rather than fiddling about with settings getting in the way of development. Alas, it is not meant to be.</p>
<p>1 - I've installed Python 3.9 in Windows 10 from the Windows Store (by running python in PowerShell and following the instructions there).
2 - I'm using PyCharm IDE.</p>
<p>I am very familiar with JetBrains products and building php projects. The transition to Python is not as straightforward. When I setup a new project and want to run code I need to set my configuration for the virtual environment.</p>
<p>I range from not being able to run any code through to perfect success with a lack of being able to import libraries in between.</p>
<p>In my toolkit is:</p>
<p>a) Edit Configuration - selecting a python interpreter. This never seems to make any difference no matter which one I choose. The options are all the right version, but they are seemingly python.exe files from the project, from my Windows directory, and from all over the place.</p>
<p>b) Settings > Python Interpreter > Virtual Environment > Existing Location
When I get it to work it's usually the python.exe in the Windows directory that pip installs everything in. Selecting that interpreter then locates all the packages I have installed and I can import all the libraries I want.</p>
<p>I would like to (i) understand why I have so many options (don't I just use python.exe because I've installed it somewhere) rather than picking from a multitude of python.exe files every time I build a new project. I have clearly missed this aspect of Python.</p>
<p>I would like to (ii) understand how to get a project to reference the libraries I have installed on my machine such that I can access them properly in my project and spend less time trying to wire everything up and more time actually building the project out.</p>
|
<p>I had a similar issue initially.</p>
<p>My "solution" was to delete python from the Windows Store, install <a href="https://www.anaconda.com/" rel="nofollow noreferrer">Anaconda</a> and then <a href="https://stackoverflow.com/a/65160772/13132315">follow these steps from a SO answer</a> for enabling PowerShell to run python with conda</p>
<p>Details in creating projects with PyCharm <a href="https://docs.anaconda.com/anaconda/user-guide/tasks/pycharm/" rel="nofollow noreferrer">can be found here</a> (a webpage on Anaconda).</p>
<p>Then PyCharm just works... like a charm.</p>
|
python|pycharm|interpreter
| 1 |
1,905,774 | 58,227,923 |
How to flush Flask appication log output using Gunicorn on Python 2.7
|
<p>I would like to flush the log of my Flask application as it is ran by Gunicorn and Supervisord. I found a few options <a href="https://stackoverflow.com/questions/13934801/supervisord-logs-dont-show-my-output">here</a> for supervisord and <a href="https://stackoverflow.com/questions/27687867/is-there-a-way-to-log-python-print-statements-in-gunicorn">there</a> specifically for Gunicorn.</p>
<p>The ones for Gunicorn only work for Python 3; is there something that would work with Python 2? In particular, is there a way to pass the -u option to Python when Gunicorn is executing my app?</p>
|
<p>I switched from print() to app.logger.info() and I am using the method described in <a href="https://medium.com/@trstringer/logging-flask-and-gunicorn-the-manageable-way-2e6f0b8beb2f" rel="nofollow noreferrer">https://medium.com/@trstringer/logging-flask-and-gunicorn-the-manageable-way-2e6f0b8beb2f</a> to get the logs to be stored by supervisord.</p>
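<p>For reference, a minimal sketch of that approach (the logger name <code>'myapp'</code> is a stand-in for Flask's <code>app.logger</code>): under gunicorn, the <code>gunicorn.error</code> logger already has handlers attached to gunicorn's flushed error stream, so pointing the application logger at them sidesteps the buffering problem entirely:</p>

```python
import logging

# Reuse gunicorn's handlers so records reach its flushed error stream.
# Outside gunicorn, 'gunicorn.error' simply has no handlers/level set.
gunicorn_logger = logging.getLogger('gunicorn.error')
app_logger = logging.getLogger('myapp')  # stands in for Flask's app.logger
app_logger.handlers = gunicorn_logger.handlers
app_logger.setLevel(gunicorn_logger.level or logging.INFO)
```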
|
python|gunicorn|supervisord
| 0 |
1,905,775 | 58,378,374 |
Why does keras model predict slower after compile?
|
<p><a href="https://i.stack.imgur.com/cCXBx.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cCXBx.png" alt="prediction speed keras"></a></p>
<p>In theory, the prediction should be constant as the weights have a fixed size. How do I get my speed back after compile (without the need to remove optimizer)?</p>
<p>See associated experiment: <a href="https://nbviewer.jupyter.org/github/off99555/TensorFlowExperiments/blob/master/test-prediction-speed-after-compile.ipynb?flush_cache=true" rel="noreferrer">https://nbviewer.jupyter.org/github/off99555/TensorFlowExperiments/blob/master/test-prediction-speed-after-compile.ipynb?flush_cache=true</a></p>
|
<p><strong>UPDATE - 1/15/2020</strong>: the current best practice for small batch sizes should be to feed inputs to the model directly - i.e. <code>preds = model(x)</code>, and if layers behave differently at train / inference, <code>model(x, training=False)</code>. Per latest commit, this is now <a href="https://github.com/tensorflow/tensorflow/commit/42f469be0f3e8c36624f0b01c571e7ed15f75faf" rel="noreferrer">documented</a>.</p>
<p>I haven't benchmarked these, but per the <a href="https://github.com/tensorflow/tensorflow/issues/33340" rel="noreferrer">Git discussion</a>, it's also worth trying <code>predict_on_batch()</code> - especially with improvements in TF 2.1.</p>
<hr>
<p><strong>ULTIMATE CULPRIT</strong>: <code>self._experimental_run_tf_function = True</code>. It's <a href="https://stackoverflow.com/questions/58333491/what-does-experimental-in-tensorflow-mean/58333539#58333539"><em>experimental</em></a>. But it's not actually bad.</p>
<p>To any TensorFlow devs reading: <em>clean up your code</em>. It's a mess. And it violates important coding practices, such as <em>one function does one thing</em>; <code>_process_inputs</code> does a <em>lot</em> more than "process inputs", same for <code>_standardize_user_data</code>. "I'm not paid enough" - but you <em>do</em> pay, in extra time spent understanding your own stuff, and in users filling your Issues page with bugs easier resolved with a clearer code.</p>
<hr>
<p><strong>SUMMARY</strong>: it's only a <em>little</em> slower with <code>compile()</code>. </p>
<p><code>compile()</code> sets an internal flag which assigns a different prediction function to <code>predict</code>. This function <em>constructs a new graph</em> upon each call, slowing it down relative to uncompiled. However, the difference is only pronounced when <strong>train time is much shorter than data processing time</strong>. If we <em>increase</em> the model size to at least mid-sized, the two become equal. See code at the bottom.</p>
<p>This slight increase in data processing time is more than compensated by amplified graph capability. Since it's more efficient to keep only one model graph around, the one pre-compile is discarded. <em>Nonetheless</em>: if your model is small relative to data, you are better off without <code>compile()</code> for model inference. See my other answer for a workaround.</p>
<hr>
<p><strong>WHAT SHOULD I DO?</strong></p>
<p>Compare model performance compiled vs uncompiled as I have in code at the bottom.</p>
<ul>
<li><em>Compiled is faster</em>: run <code>predict</code> on a compiled model.</li>
<li><em>Compiled is slower</em>: run <code>predict</code> on an uncompiled model.</li>
</ul>
<p>Yes, <em>both</em> are possible, and it will depend on (1) data size; (2) model size; (3) hardware. Code at the bottom actually shows <em>compiled</em> model being faster, but 10 iterations is a small sample. See "workarounds" in my other answer for the "how-to". </p>
<hr>
<p><strong>DETAILS</strong>:</p>
<p>This took a while to debug, but was fun. Below I describe the key culprits I discovered, cite some relevant documentation, and show profiler results that led to the ultimate bottleneck.</p>
<p>(<code>FLAG == self._experimental_run_tf_function</code>, for brevity)</p>
<ol>
<li><code>Model</code> by default instantiates with <code>FLAG=False</code>. <code>compile()</code> sets it to <code>True</code>.</li>
<li><code>predict()</code> involves acquiring the prediction function, <code>func = self._select_training_loop(x)</code></li>
<li>Without any special kwargs passed to <code>predict</code> and <code>compile</code>, all other flags are such that:
<ul>
<li><strong>(A)</strong> <code>FLAG==True</code> --> <code>func = training_v2.Loop()</code></li>
<li><strong>(B)</strong> <code>FLAG==False</code> --> <code>func = training_arrays.ArrayLikeTrainingLoop()</code></li>
</ul></li>
<li>From <a href="https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/eager/def_function.py#L826" rel="noreferrer">source code docstring</a>, <strong>(A)</strong> is heavily graph-reliant, uses more distribution strategy, and ops are prone to creating & destroying graph elements, which "may" (do) impact performance. </li>
</ol>
<p><strong>True culprit</strong>: <code>_process_inputs()</code>, accounting for <strong>81% of runtime</strong>. Its major component? <strong><code>_create_graph_function()</code></strong>, <strong>72% of runtime</strong>. This method does not even <em>exist</em> for <strong>(B)</strong>. Using a mid-sized model, however, <code>_process_inputs</code> comprises <strong>less than 1% of runtime</strong>. Code at bottom, and profiling results follow.</p>
<hr>
<p><strong>DATA PROCESSORS</strong>:</p>
<p><strong>(A)</strong>: <code><class 'tensorflow.python.keras.engine.data_adapter.TensorLikeDataAdapter'></code>, used in <code>_process_inputs()</code> . <a href="https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/keras/engine/data_adapter.py#L645" rel="noreferrer">Relevant source code</a></p>
<p><strong>(B)</strong>: <code>numpy.ndarray</code>, returned by <code>convert_eager_tensors_to_numpy</code>. <a href="https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/keras/engine/training_arrays.py#L156" rel="noreferrer">Relevant source code</a>, and <a href="https://github.com/tensorflow/tensorflow/blob/1cf0898dd4331baf93fe77205550f2c2e6c90ee5/tensorflow/python/keras/engine/training_utils.py#L1806" rel="noreferrer">here</a></p>
<hr>
<p><strong>MODEL EXECUTION FUNCTION</strong> (e.g. predict)</p>
<p><strong>(A)</strong>: <a href="https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/keras/engine/training_v2_utils.py#L59" rel="noreferrer">distribution function</a>, and <a href="https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/keras/engine/training_v2.py#L254" rel="noreferrer">here</a></p>
<p><strong>(B)</strong>: <a href="https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/keras/distribute/distributed_training_utils.py#L839" rel="noreferrer">distribution function (different)</a>, and <a href="https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/keras/engine/training_arrays.py#L189" rel="noreferrer">here</a></p>
<hr>
<p><strong>PROFILER</strong>: results for code in my other answer, "tiny model", and in this answer, "medium model":</p>
<p><strong>Tiny model</strong>: 1000 iterations, <code>compile()</code></p>
<p><img src="https://i.stack.imgur.com/09umH.png" width="600"></p>
<p><strong>Tiny model</strong>: 1000 iterations, <em>no</em> <code>compile()</code></p>
<p><img src="https://i.stack.imgur.com/7pXMV.png" width="370"></p>
<p><strong>Medium model</strong>: 10 iterations</p>
<p><img src="https://i.stack.imgur.com/SBY7z.png" width="620"></p>
<hr>
<p><strong>DOCUMENTATION</strong> (indirectly) on effects of <code>compile()</code>: <a href="https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/eager/def_function.py#L826" rel="noreferrer">source</a></p>
<blockquote>
<p>Unlike other TensorFlow operations, we don't convert python
numerical inputs to tensors. Moreover, <strong>a new graph is generated for each
distinct python numerical value</strong>, for example calling <code>g(2)</code> and <code>g(3)</code> will
generate two new graphs</p>
<p><code>function</code> <strong>instantiates a separate graph for every unique set of input
shapes and datatypes</strong>. For example, the following code snippet will result
in three distinct graphs being traced, as each input has a different
shape</p>
<p>A single tf.function object might need to map to multiple computation graphs
under the hood. This should be visible only as <strong>performance</strong> (tracing graphs has
a <strong>nonzero computational and memory cost</strong>) but should not affect the correctness
of the program</p>
</blockquote>
<hr>
<p><strong>COUNTEREXAMPLE</strong>:</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.layers import Input, Dense, LSTM, Bidirectional, Conv1D
from tensorflow.keras.layers import Flatten, Dropout
from tensorflow.keras.models import Model
import numpy as np
from time import time
def timeit(func, arg, iterations):
t0 = time()
for _ in range(iterations):
func(arg)
print("%.4f sec" % (time() - t0))
batch_size = 32
batch_shape = (batch_size, 400, 16)
ipt = Input(batch_shape=batch_shape)
x = Bidirectional(LSTM(512, activation='relu', return_sequences=True))(ipt)
x = LSTM(512, activation='relu', return_sequences=True)(ipt)
x = Conv1D(128, 400, 1, padding='same')(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(128, activation='relu')(x)
x = Dense(64, activation='relu')(x)
out = Dense(1, activation='sigmoid')(x)
model = Model(ipt, out)
X = np.random.randn(*batch_shape)
timeit(model.predict, X, 10)
model.compile('adam', loss='binary_crossentropy')
timeit(model.predict, X, 10)
</code></pre>
<p><strong>Outputs</strong>:</p>
<pre class="lang-py prettyprint-override"><code>34.8542 sec
34.7435 sec
</code></pre>
|
python|performance|tensorflow|keras|jupyter-notebook
| 42 |
1,905,776 | 33,717,557 |
How to open a directory path with spaces in Python?
|
<p>I am trying to pass source and destination path from the command line as below</p>
<pre><code>python ReleaseTool.py -i C:\Users\Abdur\Documents\NetBeansProjects\Exam System -o C:\Users\Abdur\Documents\NetBeansProjects\Release
</code></pre>
<p>but it is throwing error</p>
<pre><code>WindowsError: [Error 3] The system cannot find the path specified: ''
</code></pre>
<p>due to 'Exam System' which has a space between.
Please suggest how to handle this.</p>
|
<p><strong>Cause</strong></p>
<p>Long filenames or paths with spaces are supported by NTFS in Windows NT. However, these filenames or directory names require quotation marks around them when they are specified in a command prompt operation. Failure to use the quotation marks results in the error message. </p>
<p><strong>Solution</strong></p>
<p>Use quotation marks when specifying long filenames or paths with spaces. For example, typing the following at the command prompt
<code>copy c:\my file name d:\my new file name</code>
results in the following error message:</p>
<pre>The system cannot find the file specified.</pre>
<p>The correct syntax is:
<code>copy "c:\my file name" "d:\my new file name"</code>
Note that the quotation marks must be used. </p>
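<p>The splitting the shell performs can be illustrated from Python with <code>shlex</code> (forward slashes are used here just to sidestep POSIX escape handling of backslashes):</p>

```python
import shlex

unquoted = shlex.split('python ReleaseTool.py -i C:/Projects/Exam System')
quoted = shlex.split('python ReleaseTool.py -i "C:/Projects/Exam System"')
print(len(unquoted))  # 5 -- the path broke into two separate arguments
print(len(quoted))    # 4 -- the quoted path stays one argument
print(quoted[3])      # C:/Projects/Exam System
```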
|
python|python-2.7
| 6 |
1,905,777 | 33,588,223 |
Reference to (module only) global function within a class method
|
<p>In python one can easily refer to a global method by means of it's name.</p>
<pre><code>def globalfoo(a):
print(a)
class myClass:
def test(self):
v = globalfoo
v("hey")
t = myClass()
t.test()
</code></pre>
<p>Now in python one can also "hide" a (class) method by prefixing it with two underscores. - And I thought(!) one could do the same for global functions - to make a global function module-only. (Similar to C++ where one can decide to put a function declaration not in the header so it's only visible to the current compilation unit).</p>
<p>I then tried to combine this:</p>
<pre><code>def __globalfoo(a):
print(a)
class myClass:
def test(self):
v = __globalfoo
v("hey")
t = myClass()
t.test()
</code></pre>
<p>However this doesn't seem to work, an error is thrown that "_myCLass__globalfoo" is undefined. So how would I make both things work: having a reference to a function. And hiding the function from the external scope?<br>
Is a static method in this case the only/best solution?</p>
|
<blockquote>
<p>And I thought(!) one could do the same for global functions - to make a global function module-only.</p>
</blockquote>
<p>You were incorrect. Using a double-underscore name for a module-level function will not prevent it from being used in other modules. Someone can still do <code>import yourmodule</code> and then call <code>yourmodule.__globalfoo</code>.</p>
<p>The double-underscore name mangling behavior is defined in <a href="https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references" rel="nofollow">the docs</a>:</p>
<blockquote>
<p>Any identifier of the form <code>__spam</code> (at least two leading underscores, at most one trailing underscore) is textually replaced with <code>_classname__spam</code>, where classname is the current class name with leading underscore(s) stripped. This mangling is done without regard to the syntactic position of the identifier, as long as it occurs within the definition of a class.</p>
</blockquote>
<p>It is purely a textual substitution and takes no account of what (if anything) the double-underscore name refers to. It just takes any identifier occurring anywhere inside the class definition that looks like <code>__blah</code> and changes it to <code>_class__blah</code>. It is specific to class definitions and does not occur for module-level functions.</p>
<p>The best thing to do is prefix your function with a single underscore and document that it is not part of the public API and users of the module should not rely on it. There is nothing to be gained by attempting to "enforce" this privacy.</p>
|
python|function-pointers|encapsulation
| 3 |
1,905,778 | 47,051,326 |
unable to read stata .dta file in python
|
<p>I am trying to read a Stata (<code>.dta</code>) file in Python with <code>pandas.read_stata</code>, But I'm getting this error:</p>
<blockquote>
<p>ValueError: Version of given Stata file is not 104, 105, 108, 111 (Stata 7SE), 113 (Stata 8/9), 114 (Stata 10/11), 115 (Stata 12), 117 (Stata 13), or 118 (Stata 14)</p>
</blockquote>
<p>Please advise.</p>
|
<p>Just use pandas' <strong>read_table()</strong>, and make sure to include <strong>delim_whitespace=True</strong> and <strong>header=None</strong>.</p>
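<p>Note that this only applies if the file is really whitespace-delimited text that was misnamed <code>.dta</code>; in that case the call looks like this (an in-memory buffer stands in for the file):</p>

```python
import io
import pandas as pd

text = io.StringIO("1 2 3\n4 5 6\n")  # stands in for the misnamed .dta file
# sep=r"\s+" is equivalent to delim_whitespace=True (deprecated in newer pandas)
df = pd.read_table(text, sep=r"\s+", header=None)
print(df.shape)  # (2, 3)
```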
|
python|pandas|stata
| 1 |
1,905,779 | 46,669,962 |
Is it important to use an "else" block after an "except" block?
|
<p>I know <a href="https://stackoverflow.com/questions/855759/python-try-else">what a <code>try-else</code> block is</a>, but consider the following two functions:</p>
<pre><code># Without else
def number_of_foos1(x):
try:
number = x['foo_count']
except:
return 0
return number
# With else
def number_of_foos2(x):
try:
number = x['foo_count']
except:
return 0
else:
return number
x_with_foo = dict(foo_count=5)
x_without_foo = 3
</code></pre>
<p>Unlike <a href="https://stackoverflow.com/q/14590146/2071807">this <code>try-else</code> question</a> we're not adding extra lines to the <code>try</code> block. In both cases the <code>try</code> block is a single line, and the principle of keeping error handling "close" to the errors that caused it is not violated.</p>
<p>The difference is in where we go <em>after</em> the successful <code>try</code> block.</p>
<p>In the first block, the code continues after the <code>except</code> block, and in the second, the code continues at the <code>else</code>.</p>
<p>They obviously give the same output:</p>
<pre><code>In [138]: number_of_foos1(x_with_foo)
Out[139]: 5
In [140]: number_of_foos1(x_without_foo)
Out[140]: 0
In [141]: number_of_foos2(x_with_foo)
Out[141]: 5
In [142]: number_of_foos2(x_without_foo)
Out[142]: 0
</code></pre>
<p>Is either preferred? Are they even any different as far as the interpreter is concerned? Should you <em>always</em> have an <code>else</code> when continuing after a successful <code>try</code> or is it OK just to carry on unindented, as in <code>number_of_foos1</code>?</p>
|
<p>I would say that the case where you enter the exception block must be <em>rare</em> (that's why we call it an <em>exception</em>). So using <code>else</code> gives too much importance to that block, which isn't supposed to run in normal operation.</p>
<p>So if an exception occurs, handle the error and return, and forget about it.</p>
<p>Using <code>else</code> here adds more complexity, and you can confirm this by disassembling both functions:</p>
<pre><code>>>> dis.dis(number_of_foos1)
4 0 SETUP_EXCEPT 14 (to 17)
5 3 LOAD_FAST 0 (x)
6 LOAD_CONST 1 ('foo_count')
9 BINARY_SUBSCR
10 STORE_FAST 1 (number)
13 POP_BLOCK
14 JUMP_FORWARD 12 (to 29)
6 >> 17 POP_TOP
18 POP_TOP
19 POP_TOP
7 20 LOAD_CONST 2 (0)
23 RETURN_VALUE
24 POP_EXCEPT
25 JUMP_FORWARD 1 (to 29)
28 END_FINALLY
8 >> 29 LOAD_FAST 1 (number)
32 RETURN_VALUE
>>> dis.dis(number_of_foos2)
<exactly the same beginning then:>
15 20 LOAD_CONST 2 (0)
23 RETURN_VALUE
24 POP_EXCEPT
25 JUMP_FORWARD 5 (to 33)
28 END_FINALLY
17 >> 29 LOAD_FAST 1 (number)
32 RETURN_VALUE
>> 33 LOAD_CONST 0 (None)
36 RETURN_VALUE
>>>
</code></pre>
<p>As you see in the second example, addresses 24, 25, 28, 33 and 36 aren't reachable; that's because Python inserts jumps to the end of the code, and also a default <code>return None</code> in the main branch. All this code is useless, which argues for snippet #1: it is simpler and returns the result in the main branch.</p>
|
python
| 1 |
1,905,780 | 37,679,553 |
increasing pandas dataframe imputation performance
|
<p>I want to impute a large datamatrix (90*90000) and later an even larger one (150000*800000) using pandas.
At the moment I am testing with the smaller one on my laptop (8gb ram, Haswell core i5 2.2 GHz, the larger dataset will be run on a server).</p>
<p>The columns have some missing values that I want to impute with the most frequent one over all rows.</p>
<p>My working code for this is: </p>
<pre><code>freq_val = pd.Series(mode(df.ix[:,6:])[0][0], df.ix[:,6:].columns.values) #most frequent value per column, starting from the first SNP column (second row of 'mode'gives actual frequencies)
df_imputed = df.ix[:,6:].fillna(freq_val) #impute unknown SNP values with most frequent value of respective columns
</code></pre>
<p>The imputation takes about 20 minutes on my machine. Is there another implementation that would increase performance?</p>
|
<p>I tried different approaches. The key learning is that the <code>mode</code> function is really slow. Alternatively, I implemented the same functionality using <code>np.unique</code> (<code>return_counts=True</code>) and <code>np.bincount</code>. The latter is supposedly faster, but it doesn't work with <code>NaN</code> values.</p>
<p>The optimized code now needs about 28 s to run. MaxU's answer needs ~48 s on my machine to finish.</p>
<p>The code:</p>
<pre><code>iter = range(np.shape(df.ix[:,6:])[1])
freq_val = np.zeros(np.shape(df.ix[:,6:])[1])
for i in iter:
    vals, count = np.unique(df.ix[:,i+6], return_counts=True)
    freq_val[i] = vals[count.argmax()]  # take the most frequent value itself, not its index
freq_val_series = pd.Series(freq_val, df.ix[:,6:].columns.values)
df_imputed = df.ix[:,6:].fillna(freq_val_series)
</code></pre>
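<p>As an aside (my addition, not part of the original answer), the same per-column fill can be written without an explicit loop via <code>DataFrame.mode</code>, which skips NaN; a small illustrative frame:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, 1.0, np.nan, 2.0],
                   'b': [np.nan, 3.0, 3.0, 4.0]})
freq_val = df.mode().iloc[0]      # most frequent value per column, NaN-aware
df_imputed = df.fillna(freq_val)  # fill values map to columns by name
print(df_imputed['a'].tolist())   # [1.0, 1.0, 1.0, 2.0]
```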
<p>Thanks for the input!</p>
|
python|pandas|optimization|runtime
| 2 |
1,905,781 | 37,866,736 |
Efficient way to convert dictionary of list to pair list of key and value
|
<p>I have a dictionary of lists as follows (it can have more than 1M elements; also assume the dictionary is sorted by key)</p>
<pre><code>import scipy.sparse as sp
d = {0: [0,1], 1: [1,2,3],
2: [3,4,5], 3: [4,5,6],
4: [5,6,7], 5: [7],
6: [7,8,9]}
</code></pre>
<p>I want to know the most efficient way (fastest for a large dictionary) to convert it into lists of row and column indices like:</p>
<pre><code>r_index = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 6, 6, 6]
c_index = [0, 1, 1, 2, 3, 3, 4, 5, 4, 5, 6, 5, 6, 7, 7, 7, 8, 9]
</code></pre>
<p>Here are some solutions that I have so far:</p>
<ol>
<li><p>Using iteration</p>
<pre><code>row_ind = [k for k, v in d.iteritems() for _ in range(len(v))] # or d.items() in Python 3
col_ind = [i for ids in d.values() for i in ids]
</code></pre></li>
<li><p>Using pandas library</p>
<pre><code>import pandas as pd
df = pd.DataFrame.from_dict(d, orient='index')
df = df.stack().reset_index()
row_ind = list(df['level_0'])
col_ind = list(df[0])
</code></pre></li>
<li><p>Using itertools</p>
<pre><code>import itertools
indices = [(x,y) for x, y in itertools.chain.from_iterable([itertools.product((k,), v) for k, v in d.items()])]
indices = np.array(indices)
row_ind = indices[:, 0]
col_ind = indices[:, 1]
</code></pre></li>
</ol>
<p>I'm not sure which way is the fastest way to deal with this problem if I have a lot of elements in my dictionary. Thanks!</p>
|
<p>The first rule of thumb for optimization in Python is to make sure that your innermost loop is outsourced to some library function. This only applies to CPython - PyPy is a completely different story.
In your case, using extend gives a significant speedup.</p>
<pre><code>import time
l = range(10000)
x = dict([(k, list(l)) for k in range(1000)])
def org(d):
row_ind = [k for k, v in d.items() for _ in range(len(v))]
col_ind = [i for ids in d.values() for i in ids]
def ext(d):
row_ind = [k for k, v in d.items() for _ in range(len(v))]
col_ind = []
for ids in d.values():
col_ind.extend(ids)
def ext_both(d):
row_ind = []
for k, v in d.items():
row_ind.extend([k] * len(v))
col_ind = []
for ids in d.values():
col_ind.extend(ids)
functions = [org, ext, ext_both]
for func in functions:
begin = time.time()
func(x)
elapsed = time.time() - begin
print(func.__name__ + ": " + str(elapsed))
</code></pre>
<p>Output when using python2:</p>
<pre><code>org: 0.512559890747
ext: 0.340406894684
ext_both: 0.149670124054
</code></pre>
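<p>Since the indices are presumably headed into numpy arrays anyway (e.g. for <code>scipy.sparse</code>), another option worth benchmarking is pushing the whole flattening into numpy with <code>np.repeat</code> / <code>np.concatenate</code> - a sketch on a small dict:</p>

```python
import numpy as np

d = {0: [0, 1], 1: [1, 2, 3], 2: [3, 4, 5]}
lens = [len(v) for v in d.values()]
row_ind = np.repeat(list(d.keys()), lens)   # each key repeated len(v) times
col_ind = np.concatenate(list(d.values()))  # all value lists joined in order
print(row_ind.tolist())  # [0, 0, 1, 1, 1, 2, 2, 2]
print(col_ind.tolist())  # [0, 1, 1, 2, 3, 3, 4, 5]
```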
|
python|dictionary|itertools
| 2 |
1,905,782 | 67,987,250 |
Return all strings from column in a table. [MySQL; pymysql]
|
<p>Is there a way to return all strings from a specific column in a MySQL database?</p>
<p>Note: I want to save those strings in a list; I'm also using pymysql.</p>
|
<p>These are the steps you need to follow (in words only, since you did not provide any code either):</p>
<ul>
<li>Establish a connection with the database</li>
<li>Create a cursor</li>
<li>With the cursor you created execute a query which should be type of "SELECT column_that_you_want FROM table_you_want"</li>
<li>The cursor will now hold the results.</li>
<li>You can collect the results into a list via a loop, for example. Each result row will typically be a one-item tuple.</li>
</ul>
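<p>The steps above can be sketched as follows; the table and column names are made up, and the stdlib <code>sqlite3</code> driver stands in for pymysql here (both follow the same DB-API cursor pattern, only the <code>connect()</code> arguments differ):</p>

```python
import sqlite3  # stand-in for pymysql; same DB-API cursor usage

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE players (name TEXT)")
cur.executemany("INSERT INTO players VALUES (?)", [("Sam",), ("Greg",)])
cur.execute("SELECT name FROM players")     # column_you_want FROM table_you_want
names = [row[0] for row in cur.fetchall()]  # unpack the one-item tuples
print(names)  # ['Sam', 'Greg']
conn.close()
```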
|
python|mysql|pymysql
| 0 |
1,905,783 | 29,999,510 |
Summarizing Dataframes with ambiguous columns with apply function
|
<p>I have a dataframe and a for loop with dictionary to define how to handle specific column names from my previous question: <a href="https://stackoverflow.com/questions/29964552/pandas-generating-dataframe-based-on-columns-being-present">Pandas Generating dataframe based on columns being present</a></p>
<pre><code>import pandas as pd
df=pd.DataFrame({'Players': [ 'Sam', 'Greg', 'Steve', 'Sam',
'Greg', 'Steve', 'Greg', 'Steve', 'Greg', 'Steve'],
'Wins': [10,5,5,20,30,20,6,9,3,10],
'Losses': [5,5,5,2,3,2,16,20,3,12],
'Type': ['A','B','B','B','A','B','B','A','A','B'],
})
p=df.groupby('Players')
sumdict = {'Total Games': (None, 'count'),
'Average Wins': ('Wins', 'mean'),
'Greatest Wins': ('Wins', 'max'),
'Unique games': ('Type', 'nunique'),
'Max Score': ('Score', 'max')}
summary = []
for key, (column, op) in sumdict.items():
if column is None:
res = p.agg(op).max(axis=1)
elif column not in df:
continue
else:
res = p[column].agg(lambda x: getattr(x, op)())
summary.append(pd.DataFrame({key: res}))
summary = pd.concat(summary, axis=1)
</code></pre>
<p>The code works for almost all cases except for <code>apply</code> functions that count specific cases inside a column:</p>
<pre><code>streak = pd.DataFrame({'Streak':p.Wins.apply(lambda x: (x > 5).sum())})
</code></pre>
<p>Is there a way to incorporate the apply function into the dictionary <code>sumdict</code>?</p>
|
<p>You have a couple of options here.</p>
<ol>
<li>check for a function and use that rather than getattr.</li>
<li>just use the string and let the function fall through...</li>
</ol>
<p>IMO 2. is a little cleaner (although perhaps lesser known): you can use <code>g.agg("max")</code> as an alias for <code>g.max()</code>.</p>
<pre><code>sumdict["Streak"] = "Wins", lambda x: (x > 5).sum()
</code></pre>
<p>and you do the following, the commented line is the only change:</p>
<pre><code>summary = []
for key, (column, op) in sumdict.items():
if column is None:
res = p.agg(op).max(axis=1)
elif column not in df:
continue
else:
res = p[column].agg(op) # just use the string (or it could be a func)
summary.append(pd.DataFrame({key: res}))
summary = pd.concat(summary, axis=1)
</code></pre>
<p>Then Streak works just perfect:</p>
<pre><code>In [23]: summary
Out[23]:
Greatest Wins Total Games Streak Average Wins Unique games
Players
Greg 30 4 2 11 2
Sam 20 2 2 15 2
Steve 20 4 3 11 2
</code></pre>
|
python|dictionary|pandas
| 0 |
1,905,784 | 61,351,551 |
TypeError: argument of type 'Object' is not iterable
|
<p>I've written what I thought was a simple python script to search through several lines of output and match (i.e. "grep") on a specific string. Listing all queues without the pattern match is simple enough:</p>
<pre><code>from qmf.console import Session
sess = Session()
broker = sess.addBroker("amqp://guest/guest@localhost")
queues = sess.getObjects(_class="queue", _package="org.apache.qpid.broker")
for q in queues:
print (q)
</code></pre>
<p>Running the script produces the following output (truncated):</p>
<pre><code>pmena@myhost=> python ./queue_stuff.py
org.apache.qpid.broker:queue[0-1-1-0-62] 0-1-1-0-3:queue01
org.apache.qpid.broker:queue[0-1-1-0-55] 0-1-1-0-3:queue02
org.apache.qpid.broker:queue[0-1-1-0-63] 0-1-1-0-3:queue03
org.apache.qpid.broker:queue[0-1-1-0-51] 0-1-1-0-3:queue04
.
.
org.apache.qpid.broker:queue[0-1-1-0-51] 0-1-1-0-3:queue99
</code></pre>
<p>However when I add an "if" statement to match on a particular string, like so:</p>
<pre><code>from qmf.console import Session
sess = Session()
broker = sess.addBroker("amqp://guest/guest@localhost")
queues = sess.getObjects(_class="queue", _package="org.apache.qpid.broker")
for q in queues:
if 'queue37' in q:
print (q)
</code></pre>
<p>I get the following error:</p>
<pre><code>pmena@myhost=> python ./queue_stuff.py
Traceback (most recent call last):
File "./queue_stuff.py", line 6, in <module>
if 'queue37' in q:
TypeError: argument of type 'Object' is not iterable
</code></pre>
<p>I feel like this is a simple python syntax issue, but wasn't able to glean the resolution from other posts.</p>
|
<p>The problem is that the returned queue object's class does not have an <em>__iter__</em> method, as the error suggests. That method (or <em>__contains__</em>) needs to be defined on a class for <code>'x' in obj</code> to work. <a href="https://stackoverflow.com/questions/21157739/how-to-iterate-through-a-python-queue-queue-with-a-for-loop-instead-of-a-while-l">This answer</a> goes over a variety of workarounds that people have used, so you can see which one best fits your needs.</p>
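<p>A direct workaround for the snippet in the question is to run the membership test against the object's string form, since that is what actually contains the queue name (a tiny stand-in class is used here in place of the real qmf object):</p>

```python
class BrokerObject:  # stand-in for the real qmf console object
    def __repr__(self):
        return "org.apache.qpid.broker:queue[0-1-1-0-62] 0-1-1-0-3:queue37"

queues = [BrokerObject()]
matches = [q for q in queues if 'queue37' in str(q)]  # substring test on str(q)
for q in matches:
    print(q)
```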
|
python|amqp|qpid
| 1 |
1,905,785 | 27,497,559 |
NDB Validator Prop Fields
|
<p>I rolled a custom validator for my ndb stringProperties to strip out malicious code for my website.</p>
<pre><code>def stringValidator(prop, value):
lowerValue = value.lower()
stripped = str(utils.escape(lowerValue))
if stripped != lowerValue:
raise datastore_errors.BadValueError(prop)
return stripped
</code></pre>
<p>Elsewhere, I'm catching that error and returning a failure to the client. I want to be able to return the type of the property that failed validation.</p>
<pre><code>except datastore_errors.BadValueError as err:
</code></pre>
<p>If I <code>print(err)</code> I get:</p>
<pre><code>StringProperty('email', validator=<function stringValidator at 0x1079e11b8>)
</code></pre>
<p>I see that this StringProperty contains the name of the property I want to return: <code>'email'</code>. How do I extract it?</p>
<p>EDIT: Dmitry gave me the most important half of the answer - in order to access the error object's value once I pass the <code>._name</code> property, I need to use: </p>
<pre><code>str(err.args[0])
</code></pre>
|
<p>You can get name of the property by <code>_name</code> attribute.</p>
<pre><code>from google.appengine.api import datastore_errors
from google.appengine.ext import ndb

def stringValidator(prop, value):
    lowerValue = value.lower()
    stripped = 'bla'
    if stripped != lowerValue:
        raise datastore_errors.BadValueError(prop._name)
    return stripped

class Foo(ndb.Model):
    email = ndb.StringProperty(validator=stringValidator)

Foo(email='blas')  # raises BadValueError: email
</code></pre>
<p><strong>Update</strong>: you can also use "human friendly" property name by setting </p>
<pre><code>email = ndb.StringProperty(validator=stringValidator, verbose_name='E-mail')
</code></pre>
<p>in the property definition, and then get it by <code>_verbose_name</code> attribute.</p>
|
python|google-app-engine|properties|error-handling|validation
| 3 |
1,905,786 | 65,853,960 |
If Statement Is Not Being Reached
|
<p>I'm making a simple <code>discord.py</code> command that is, for some reason, not working. I'm trying to make a <code>slowmode</code> command, and there seems to be a fault in it. This is the code in the command:</p>
<pre class="lang-py prettyprint-override"><code> @commands.command()
async def slowmode(self, ctx, seconds=5):
if seconds == 'off':
seconds = 0
elif seconds == 'on':
seconds = 5
seconds = int(seconds)
await ctx.channel.edit(slowmode_delay=seconds)
embed = discord.Embed(
title='Slowmode Changed',
description=f'The slowmode for {ctx.message.channel.mention} has been changed to: `{seconds}` seconds.',
color=0x15E700
)
await ctx.send(embed=embed)
</code></pre>
<p>The issue is, whenever I enter <code>$slowmode off</code> or <code>$slowmode on</code> (<code>$</code> is the prefix), I get the following error: <code>discord.ext.commands.errors.BadArgument: Converting to "int" failed for parameter "seconds".</code></p>
<blockquote>
<p>I am clearly stating that, if <code>seconds</code> is either <code>on</code> or <code>off</code>, it will be turned to <code>5</code> or <code>0</code> respectively.</p>
</blockquote>
<p>In addition, whenever I choose to enter an improper argument, as in a bunch of random letters, and I have a <code>try</code> and <code>except</code> block, the code immediately skips the <code>try</code> and <code>except</code> block and returns the exact error above. It is almost as if the code isn't there.</p>
|
<p>The behavior you're noticing is due to the Commands extension's <a href="https://discordpy.readthedocs.io/en/latest/ext/commands/commands.html#converters" rel="nofollow noreferrer">Converters</a>. Specifically, the problem is with how you've declared your command's parameters:</p>
<pre class="lang-py prettyprint-override"><code>@commands.command()
async def slowmode(self, ctx, seconds=5):
</code></pre>
<p><code>seconds</code> has an <code>int</code> default argument, so the Converter is casting it to an <code>int</code>. Since you also accept string values for this argument, it's going to raise <code>BadArgument</code> when it attempts to convert <code>'on'</code> to an <code>int</code>. There is a <a href="https://discordpy.readthedocs.io/en/latest/ext/commands/commands.html#typing-union" rel="nofollow noreferrer">special converter for <code>typing.Union</code></a> that you can use to annotate your <code>seconds</code> variable to correctly accept <code>str</code> and <code>int</code> inputs:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Union
@commands.command()
async def slowmode(self, ctx, seconds: Union[int, str] = 5):
</code></pre>
<p>In addition, you should check the type of <code>seconds</code> before attempting to check for string values:</p>
<pre class="lang-py prettyprint-override"><code>@commands.command()
async def slowmode(self, ctx, seconds: Union[int, str] = 5):
if isinstance(seconds, str):
if seconds == 'on':
seconds = 5
elif seconds == 'off':
seconds = 0
# no longer needed
# seconds = int(seconds)
</code></pre>
<p>This way, you also no longer need to explicitly cast <code>seconds</code> to <code>int</code>.</p>
|
python|python-3.x|discord.py
| 2 |
1,905,787 | 65,620,155 |
R's curve function, but in Python, for plotting a continuous distribution
|
<p>The following <a href="https://stackoverflow.com/questions/65606337/how-to-make-histograms-in-python-scipy-stats-look-as-good-as-r">example</a> of the <a href="https://www.rdocumentation.org/packages/graphics/versions/3.6.2/topics/curve" rel="nofollow noreferrer">curve function</a> in R,</p>
<p><code>curve(dgamma(x, 3, .1), add=T, lwd=2, col="orange")</code>,</p>
<p>plots the curve for the probability density function of the <code>dgamma</code> continuous distribution. The equivalent to <code>dgamma</code> in Python is <code>scipy.stats.dgamma</code>.</p>
<p>How can I plot the same curve for the same distribution in Python? I would prefer this over fitting a kernel density estimator (KDE), which tends to be inaccurate.</p>
|
<p>I don't think <code>matplotlib</code> (or <code>seaborn</code>, for that matter) has an equivalent of <code>curve</code>. You have to define a set of x-values yourself and plot over them on the same axes. In this case, since a histogram is being drawn, a convenient choice is a number of evenly spaced points between the data's min and max:</p>
<pre><code>from scipy import stats
import numpy as np
import matplotlib.pyplot as plt
x = stats.gamma.rvs(a=3,scale=1/0.1,size=1000)
plt.hist(x,density=True)
xl = np.linspace(x.min(),x.max(),1000)
plt.plot(xl,stats.gamma.pdf(xl,a=3,scale=1/0.1))
</code></pre>
<p><a href="https://i.stack.imgur.com/FeqAq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FeqAq.png" alt="enter image description here" /></a></p>
|
python|r|statistics|curve-fitting|curve
| 0 |
1,905,788 | 43,353,447 |
Issues installing python package Six (to install Pip)
|
<p>We recently uninstalled pip to do some cleanup on Mac OS X El Capitan. Now trying to re-install pip.</p>
<pre><code>$ sudo easy_install pip
Traceback (most recent call last):
File "/usr/local/bin/easy_install", line 11, in <module>
sys.exit(main())
File "/Library/Python/2.7/site-packages/setuptools/command/easy_install.py", line 2270, in main
**kw
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 111, in setup
_setup_distribution = dist = klass(attrs)
File "/Library/Python/2.7/site-packages/setuptools/dist.py", line 321, in __init__
_Distribution.__init__(self, attrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/Library/Python/2.7/site-packages/setuptools/dist.py", line 389, in finalize_options
ep.require(installer=self.fetch_build_egg)
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 2324, in require
items = working_set.resolve(reqs, env, installer, extras=self.extras)
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 859, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.VersionConflict: (six 1.4.1 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('six>=1.6.0'))
</code></pre>
<p>Looks like we need to upgrade Six. So:</p>
<pre><code>$ easy_install --upgrade six
Traceback (most recent call last):
File "/usr/local/bin/easy_install", line 11, in <module>
sys.exit(main())
File "/Library/Python/2.7/site-packages/setuptools/command/easy_install.py", line 2270, in main
**kw
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 111, in setup
_setup_distribution = dist = klass(attrs)
File "/Library/Python/2.7/site-packages/setuptools/dist.py", line 321, in __init__
_Distribution.__init__(self, attrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/Library/Python/2.7/site-packages/setuptools/dist.py", line 389, in finalize_options
ep.require(installer=self.fetch_build_egg)
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 2324, in require
items = working_set.resolve(reqs, env, installer, extras=self.extras)
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 859, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.VersionConflict: (six 1.4.1 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('six>=1.6.0'))
</code></pre>
<p>Now it looks like we need to upgrade Six in order to upgrade Six??? Maybe it's just a permissions issue:</p>
<pre><code>$sudo easy_install --upgrade six
Traceback (most recent call last):
File "/usr/local/bin/easy_install", line 11, in <module>
sys.exit(main())
File "/Library/Python/2.7/site-packages/setuptools/command/easy_install.py", line 2270, in main
**kw
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 111, in setup
_setup_distribution = dist = klass(attrs)
File "/Library/Python/2.7/site-packages/setuptools/dist.py", line 321, in __init__
_Distribution.__init__(self, attrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/Library/Python/2.7/site-packages/setuptools/dist.py", line 389, in finalize_options
ep.require(installer=self.fetch_build_egg)
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 2324, in require
items = working_set.resolve(reqs, env, installer, extras=self.extras)
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 859, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.VersionConflict: (six 1.4.1 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('six>=1.6.0'))
</code></pre>
<p>Nope. Same error.</p>
<p>I'm obviously missing something. Can someone shed some light on this?</p>
<p>Tried the first answer:</p>
<pre><code>$ python get-pip.py
Collecting pip
Using cached pip-9.0.1-py2.py3-none-any.whl
Collecting wheel
Using cached wheel-0.29.0-py2.py3-none-any.whl
Installing collected packages: pip, wheel
Exception:
Traceback (most recent call last):
File "/var/folders/23/49gg72xd4wb1qps4z5j9vbz80000gy/T/tmpz5ckOD/pip.zip/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/var/folders/23/49gg72xd4wb1qps4z5j9vbz80000gy/T/tmpz5ckOD/pip.zip/pip/commands/install.py", line 342, in run
prefix=options.prefix_path,
File "/var/folders/23/49gg72xd4wb1qps4z5j9vbz80000gy/T/tmpz5ckOD/pip.zip/pip/req/req_set.py", line 784, in install
**kwargs
File "/var/folders/23/49gg72xd4wb1qps4z5j9vbz80000gy/T/tmpz5ckOD/pip.zip/pip/req/req_install.py", line 851, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "/var/folders/23/49gg72xd4wb1qps4z5j9vbz80000gy/T/tmpz5ckOD/pip.zip/pip/req/req_install.py", line 1064, in move_wheel_files
isolated=self.isolated,
File "/var/folders/23/49gg72xd4wb1qps4z5j9vbz80000gy/T/tmpz5ckOD/pip.zip/pip/wheel.py", line 247, in move_wheel_files
prefix=prefix,
File "/var/folders/23/49gg72xd4wb1qps4z5j9vbz80000gy/T/tmpz5ckOD/pip.zip/pip/locations.py", line 140, in distutils_scheme
d = Distribution(dist_args)
File "/Library/Python/2.7/site-packages/setuptools/dist.py", line 321, in __init__
_Distribution.__init__(self, attrs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/Library/Python/2.7/site-packages/setuptools/dist.py", line 389, in finalize_options
ep.require(installer=self.fetch_build_egg)
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 2324, in require
items = working_set.resolve(reqs, env, installer, extras=self.extras)
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 859, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
VersionConflict: (six 1.4.1 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('six>=1.6.0'))
</code></pre>
|
<p>Python 2.7.9+ and 3.4+</p>
<p>Good news! Python 3.4 (released March 2014) and Python 2.7.9 (released December 2014) ship with Pip. This is the best feature of any Python release. It makes the community's wealth of libraries accessible to everyone. Newbies are no longer excluded from using community libraries by the prohibitive difficulty of setup. In shipping with a package manager, Python joins Ruby, Node.js, Haskell, Perl, Go--almost every other contemporary language with a majority open-source community. Thank you Python.</p>
<p>Of course, that doesn't mean Python packaging is a solved problem. The experience remains frustrating. I discuss this in the Stack Overflow question "Does Python have a package/module management system?".</p>
<p>And, alas for everyone using Python 2.7.8 or earlier (a sizable portion of the community): there's no plan to ship Pip to you. Manual instructions follow.</p>
<p>Python 2 ≤ 2.7.8 and Python 3 ≤ 3.3</p>
<p>Flying in the face of its 'batteries included' motto, Python ships without a package manager. To make matters worse, Pip was--until recently--ironically difficult to install.</p>
<p>Official instructions</p>
<p>Per <a href="https://pip.pypa.io/en/stable/installing/#do-i-need-to-install-pip" rel="nofollow noreferrer">https://pip.pypa.io/en/stable/installing/#do-i-need-to-install-pip</a>:</p>
<p>Download get-pip.py, being careful to save it as a .py file rather than .txt. Then, run it from the command prompt:</p>
<pre><code>python get-pip.py
</code></pre>
<p>You possibly need an administrator command prompt to do this. Follow Start a Command Prompt as an Administrator (Microsoft TechNet).</p>
<p>Alternative instructions</p>
<p>The official documentation tells users to install Pip and each of its dependencies from source. That's tedious for the experienced, and prohibitively difficult for newbies.</p>
<p>For our sake, Christoph Gohlke prepares Windows installers (.msi) for popular Python packages. He builds installers for all Python versions, both 32 and 64 bit. You need to</p>
<pre><code>Install setuptools
Install pip
</code></pre>
<p>For me, this installed Pip at C:\Python27\Scripts\pip.exe. Find pip.exe on your computer, then add its folder (for example, C:\Python27\Scripts) to your path (Start / Edit environment variables). Now you should be able to run pip from the command line. Try installing a package:</p>
<pre><code>pip install httpie
</code></pre>
<p>There you go (hopefully)! Solutions for common problems are given below:</p>
<p>Proxy problems</p>
<p>If you work in an office, you might be behind a HTTP proxy. If so, set the environment variables http_proxy and https_proxy. Most Python applications (and other free software) respect these. Example syntax:</p>
<p><a href="http://proxy_url:port" rel="nofollow noreferrer">http://proxy_url:port</a>
<a href="http://username:password@proxy_url:port" rel="nofollow noreferrer">http://username:password@proxy_url:port</a></p>
<p>If you're really unlucky, your proxy might be a Microsoft NTLM proxy. Free software can't cope. The only solution is to install a free software friendly proxy that forwards to the nasty proxy: <a href="http://cntlm.sourceforge.net/" rel="nofollow noreferrer">http://cntlm.sourceforge.net/</a></p>
<p>Unable to find vcvarsall.bat</p>
<p>Python modules can be part written in C or C++. Pip tries to compile from source. If you don't have a C/C++ compiler installed and configured, you'll see this cryptic error message.</p>
<pre><code>Error: Unable to find vcvarsall.bat
</code></pre>
<p>You can fix that by installing a C++ compiler such as MinGW or Visual C++. Microsoft actually ship one specifically for use with Python. Or try Microsoft Visual C++ Compiler for Python 2.7.</p>
<p>Often though it's easier to check Christoph's site for your package.</p>
|
python|macos|pip|osx-elcapitan
| -1 |
1,905,789 | 37,083,260 |
Cannot find “Grammar.txt” in lib2to3
|
<p>I am trying to get NetworkX running under IronPython on my machine. From other sources I think other people have made this work. (<a href="https://networkx.github.io/documentation/networkx-1.10/reference/news.html" rel="nofollow noreferrer">https://networkx.github.io/documentation/networkx-1.10/reference/news.html</a>)</p>
<p>I am running IronPython 2.7 2.7.5.0 on .NET 4.0.30319.42000 in VisualStudio 2015 Community Edition.</p>
<p>The problem is that when I </p>
<pre><code>import NetworkX as nx
</code></pre>
<p>I get this exception:</p>
<pre><code>Traceback (most recent call last):
File "C:\SourceModules\CodeKatas\IronPythonExperiment\ProveIronPython\ProveIronPython\ProveIronPython.py", line 1, in <module>
File "C:\Program Files (x86)\IronPython 2.7\lib\site-packages\networkx\__init__.py", line 87, in <module>
File "C:\Program Files (x86)\IronPython 2.7\lib\site-packages\networkx\readwrite\__init__.py", line 14, in <module>
File "C:\Program Files (x86)\IronPython 2.7\lib\site-packages\networkx\readwrite\gml.py", line 46, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\refactor.py", line 27, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\fixer_util.py", line 9, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\pygram.py", line 32, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\pgen2\driver.py", line 121, in load_grammar
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\pgen2\pgen.py", line 385, in generate_grammar
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\pgen2\pgen.py", line 15, in __init__
IOError: [Errno 2] Could not find file 'C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\Grammar.txt'.: C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\Grammar.txt
</code></pre>
<p>The bottom line seems to be that NetworkX wants Grammar.txt to be in the lib2to3 directory of my IronPython installation.</p>
<p>I have tried several things, but no success. Some are too dumb to admit to in public, but I did try</p>
<ul>
<li>running from command line: (ipy myExecutable.py) </li>
<li>pip installing another package (BeautifulSoup), but that package installed and
instantiated with no problems. </li>
<li>I also looked at
<a href="https://stackoverflow.com/questions/11649565/cannot-find-grammar-txt-in-python-sphinx">Cannot find "Grammar.txt" in python-sphinx</a>
, but it did not seem to have any explanation that helped my specific
case.</li>
</ul>
<p><strong>My Question:</strong>
How can I resolve this problem with 'import NetworkX' raising this exception?</p>
|
<p>A <code>lib2to3</code> import snuck into networkx-1.10 and networkx-1.11, which is the latest release. Try the development version from the GitHub site (that will soon become networkx-2.0); the <code>lib2to3</code> import has been removed there since the networkx-1.11 release: <a href="https://github.com/networkx/networkx/archive/master.zip" rel="nofollow">github.com/networkx/networkx/archive/master.zip</a></p>
|
visual-studio|python-2.7|pip|ironpython|networkx
| 2 |
1,905,790 | 67,174,064 |
How to match two lists of strings with regex in Python
|
<p>I have two lists of strings in Python. One of them is a list of desired strings, the other is a larger list of different strings. For example:</p>
<pre><code>desired = ["cat52", "dog64"]
buf = ["horse101", "elephant5", "dog64", "mouse90", "cat52"]
</code></pre>
<p>I need a True/False for whether the second list contains all the strings in the first list. So far I did this with:</p>
<pre><code>if all(element in buf for element in desired)
</code></pre>
<p>However, now I need the list of desired strings to have some regex properties. For example:</p>
<pre><code>desired = ["cat52", "dog[0-9]+"]
</code></pre>
<p>I've looked into the <code>re</code> and <code>regex</code> python libraries but I can't figure out a statement that gives me what I want. Any help would be appreciated.</p>
|
<p>You need to test whether <code>any</code> of the strings in <code>buf</code> match each regex in <code>desired</code>, and then return <code>True</code> if <code>all</code> of them do:</p>
<pre class="lang-py prettyprint-override"><code>import re
buf = ["horse101", "elephant5", "dog64", "mouse90", "cat52"]
desired = ["cat52", "dog[0-9]+"]
print(all(any(re.match(d + '$', b) for b in buf) for d in desired))
</code></pre>
<p>Output:</p>
<pre><code>True
</code></pre>
<p>Note that we add <code>$</code> to the regex so that (for example) <code>dog[0-9]+</code> will not match <code>dog4a</code> (adding <code>^</code> to the beginning is not necessary as <a href="https://docs.python.org/3/library/re.html?highlight=re%20compile#re.match" rel="nofollow noreferrer"><code>re.match</code></a> anchors matches to the start of the string).</p>
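<p>On Python 3.4+ you can also use <code>re.fullmatch</code>, which anchors both ends for you, so no manual <code>'$'</code> is needed:</p>

```python
import re

buf = ["horse101", "elephant5", "dog64", "mouse90", "cat52"]
desired = ["cat52", "dog[0-9]+"]

# fullmatch succeeds only if the entire string matches the pattern.
result = all(any(re.fullmatch(d, b) for b in buf) for d in desired)
print(result)  # True

# "dog4a" cannot slip through under fullmatch:
print(bool(re.fullmatch("dog[0-9]+", "dog4a")))  # False
```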
|
python|regex|string|list
| 2 |
1,905,791 | 4,248,413 |
Error with new version
|
<p>I have Arch linux and recently it's python packages was upgraded to the 3rd branch. Now I'm not able to run selenium-python bindings. When I run it (even with old-python version) I get:</p>
<pre><code> from selenium import selenium
File "/usr/lib/python2.7/site-packages/selenium-2.0a5-py2.7.egg/selenium/__init__.py", line 23, in <module>
from selenium.selenium import selenium
File "/usr/lib/python2.7/site-packages/selenium-2.0a5-py2.7.egg/selenium/selenium/selenium.py", line 193
raise Exception, result
</code></pre>
<p>What could it be? (Btw, looks like my selenium was built with 2.6 python).</p>
<hr>
<p><strong>UPD</strong> I tried to get selenium again but:</p>
<pre><code> easy_install-2.7 selenium
install_dir /usr/lib/python2.7/site-packages/
Searching for selenium
Best match: selenium 2.0a5
Processing selenium-2.0a5-py2.7.egg
selenium 2.0a5 is already the active version in easy-install.pth
Using /usr/lib/python2.7/site-packages/selenium-2.0a5-py2.7.egg
Processing dependencies for selenium
Finished processing dependencies for selenium
</code></pre>
|
<p>I've tried it and it works for me. The error doesn't make sense to me since line 193 in selenium.py is part of the Selenium object "start" method - it shouldn't be called at import time.</p>
<p>Maybe ask the <a href="http://groups.google.com/group/selenium-users" rel="nofollow">user group</a>?</p>
|
python|selenium
| 1 |
1,905,792 | 4,659,680 |
Matplotlib: simultaneous plotting in multiple threads
|
<p>I am trying to do some plotting in parallel to finish large batch jobs quicker. To this end, I start a thread for each plot I plan on making.</p>
<p>I had hoped that each thread would finish its plotting and close itself (as I understand it, Python closes threads when they get through all the statements in run()). Below is some code that shows this behavior.</p>
<p>If the line that creates a figure is commented out, it runs as expected. Another plausibly helpful tidbit is that it also runs as expected when you only spawn one thread.</p>
<pre><code>import matplotlib.pyplot as plt
import time
import Queue
import threading
def TapHistplots():
## for item in ['str1']:
# # it behaves as expected if the line above is used instead of the one below
for item in ['str1','str2']:
otheritem = 1
TapHistQueue.put((item, otheritem))
makeTapHist().start()
class makeTapHist(threading.Thread):
def run(self):
item, otheritem = TapHistQueue.get()
fig = FigureQueue.get()
FigureQueue.put(fig+1)
print item+':'+str(fig)+'\n',
time.sleep(1.3)
plt.figure(fig) # comment out this line and it behaves as expected
plt.close(fig)
TapHistQueue = Queue.Queue(0)
FigureQueue = Queue.Queue(0)
def main():
start = time.time()
"""Code in here runs only when this module is run directly"""
FigureQueue.put(1)
TapHistplots()
while threading.activeCount()>1:
time.sleep(1)
print 'waiting on %d threads\n' % (threading.activeCount()-1),
print '%ds elapsed' % (time.time()-start)
if __name__ == '__main__':
main()
</code></pre>
<p>Any help is duly appreciated.</p>
|
<p>Why not just use multiprocessing? As far as I can tell from your description, threading won't help you much, anyway... </p>
<p>Matplotlib already threads so that you can display and interact with multiple figures at once. If you want to speed up batch processing on a multicore machine, you're going to need multiprocessing regardless.</p>
<p>As a basic example (<strong>Warning: This will create 20 small .png files in whatever directory you run it in!</strong>)</p>
<pre><code>import multiprocessing
import matplotlib.pyplot as plt
import numpy as np

def main():
    pool = multiprocessing.Pool()
    num_figs = 20
    input = zip(np.random.randint(10, 1000, num_figs),
                range(num_figs))
    pool.map(plot, input)

def plot(args):
    num, i = args
    fig = plt.figure()
    data = np.random.randn(num).cumsum()
    plt.plot(data)
    plt.title('Plot of a %i-element brownian noise sequence' % num)
    fig.savefig('temp_fig_%02i.png' % i)
    plt.close(fig)  # free the figure once it is saved

# The guard keeps worker processes from re-running main() on import.
if __name__ == '__main__':
    main()
</code></pre>
|
python|multithreading|matplotlib|python-multithreading
| 31 |
1,905,793 | 48,245,879 |
How do I only check for existing form fields values if it was modified?
|
<p>I am trying to let users update their profile, but I can't figure out how to raise an error only if the two fields <code>username, email</code> were modified, or if the matching user is not the current user. As of now I can't save the updates, since the error keeps popping up because the user obviously already has those values. I've also tried <code>excludes</code> but couldn't get it to work right either. Here is my code:</p>
<p>forms.py</p>
<pre><code>class UpdateUserProfile(forms.ModelForm):
first_name = forms.CharField(
required=True,
label='First Name',
max_length=32,
)
last_name = forms.CharField(
required=True,
label='Last Name',
max_length=32,
)
email = forms.EmailField(
required=True,
label='Email (You will login with this)',
max_length=32,
)
username = forms.CharField(
required = True,
label = 'Display Name',
max_length = 32,
)
class Meta:
model = User
fields = ('username', 'email', 'first_name', 'last_name')
def clean_email(self):
email = self.cleaned_data.get('email')
username = self.cleaned_data.get('username')
if (User.objects.filter(username=username).exists() or User.objects.filter(email=email).exists()):
raise forms.ValidationError('This email address is already in use.'
'Please supply a different email address.')
return email
def save(self, commit=True):
user = super().save(commit=False)
user.email = self.cleaned_data['email']
user.username = self.cleaned_data['username']
if commit:
user.save()
return user, user.username
</code></pre>
<p>views.py</p>
<pre><code>def update_user_profile(request, username):
args = {}
if request.method == 'POST':
form = UpdateUserProfile(request.POST, instance=request.user)
if form.is_valid():
form.save()
return HttpResponseRedirect(reverse('user-profile', kwargs={'username': form.save()[1]}))
else:
form = UpdateUserProfile(instance=request.user)
args['form'] = form
return render(request, 'storytime/update_user_profile.html', args)
</code></pre>
|
<p>Just check if <em>another</em> user exists by excluding the current one:</p>
<pre><code>from django.db.models import Q
class UpdateUserProfile(forms.ModelForm):
# ...
def clean_email(self):
# ...
if User.objects.filter(
Q(username=username)|Q(email=email)
).exclude(pk=self.instance.pk).exists():
raise ...
# for checking if both were modified
if self.instance.email != email and self.instance.username != username:
raise ...
</code></pre>
<p>One could further argue that this code belongs in the form's <code>clean</code> method as it validates field interdependencies.</p>
|
python|django|django-forms
| 1 |
1,905,794 | 51,336,835 |
How to select items from a list based on probability
|
<p>I have lists <code>a</code> and <code>b</code></p>
<pre><code>a = [0.1, 0.3, 0.1, 0.2, 0.1, 0.1, 0.1]
b = ['apple', 'gun', 'pizza', 'sword', 'pasta', 'chicken', 'elephant']
</code></pre>
<p>Now I want to create a new list c of 3 items</p>
<p>the 3 items are chosen from list b based on the probabilities in list a </p>
<p>the items should not repeat in list c </p>
<p>For example, the output I am looking for:</p>
<pre><code>c = ['gun', 'sword', 'pizza']
</code></pre>
<p>or</p>
<pre><code>c = ['apple', 'pizza', 'pasta']
</code></pre>
<p><strong>Note:</strong> the values in list a sum to 1, and lists a and b have the same number of items. In reality I have a thousand items in both lists and want to select a hundred of them based on the probabilities assigned to them (Python 3).</p>
|
<p>Use <code>random.choices</code>:</p>
<pre><code>>>> import random
>>> print(random.choices(
... ['apple', 'gun', 'pizza', 'sword', 'pasta', 'chicken', 'elephant'],
... [0.1, 0.3, 0.1, 0.2, 0.1, 0.1, 0.1],
... k=3
... ))
['gun', 'pasta', 'sword']
</code></pre>
<p>Edit: To avoid replacement, you can remove the selected item from the population:</p>
<pre><code>def choices_no_replacement(population, weights, k=1):
    # Work on copies so the caller's lists are not mutated.
    population = list(population)
    weights = list(weights)
    result = []
    for _ in range(k):
        pos = random.choices(
            range(len(population)),
            weights,
            k=1
        )[0]
        result.append(population[pos])
        del population[pos], weights[pos]
    return result
</code></pre>
<p>Testing:</p>
<pre><code>>>> print(choices_no_replacement(
... ['apple', 'gun', 'pizza', 'sword', 'pasta', 'chicken', 'elephant'],
... [0.1, 0.3, 0.1, 0.2, 0.1, 0.1, 0.1],
... k=3
... ))
['gun', 'pizza', 'sword']
</code></pre>
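<p>If NumPy is available, <code>numpy.random.choice</code> supports weighted sampling without replacement directly; note the probabilities must sum to 1, as they do here:</p>

```python
import numpy as np

a = [0.1, 0.3, 0.1, 0.2, 0.1, 0.1, 0.1]
b = ['apple', 'gun', 'pizza', 'sword', 'pasta', 'chicken', 'elephant']

# size=3 distinct items, weighted by p.
c = list(np.random.choice(b, size=3, replace=False, p=a))
print(c)
```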
|
python|python-3.x|probability
| 7 |
1,905,795 | 51,258,363 |
Python memory leak with strings
|
<p>Here's an extremely simple example that's causing me endless grief:</p>
<pre><code>import gc
def test(str1, str2):
a = str1
b = str2
#del a
#del b
#gc.collect()
for i in range(10000000000):
test('\t\t\t\t\t\t\t\t\t', '\t\t\t\t\t\t\t\t\t')
</code></pre>
<p><s>If I remove the <code>del</code>'s and the <code>gc.collect</code></s>, memory goes up forever at about 5MB a second on my system. I'm using Python 3.6.5 with Visual Studio. Is there some facet of Python I'm missing? I'm relatively new to the language. </p>
<p>Edit: It looks like the gc made it so slow I couldn't see if memory was going up or not. It still goes up very fast without it.</p>
|
<p>Thanks to @shmee, it looks like this was caused by Visual Studio itself: when the above code is run from the shell, there is no runaway memory consumption. </p>
|
python|memory|memory-leaks|garbage-collection
| 0 |
1,905,796 | 17,242,267 |
How can I convert print output to a csv and then upload it to a Google Spreadsheet?
|
<p>I have a python script that prints lines in a comma delimited format.</p>
<p>The output currently looks like this:</p>
<pre><code>a,b,c,d
e,f,g,h
i,j,k,l
</code></pre>
<p>I want to be able to take this output and append it to a Google spreadsheet. If using Excel is an easier route, I would be open to that solution too. </p>
<p>Is creating a CSV and then importing to a Google/Excel the right way to go or should I look into another way of doing this?</p>
|
<p>Your question is quite confusing. I'm taking it you mean 'print' as in the Python statement or function (3+) which writes to the stdout for your OS.</p>
<p>I suggest you take a look at <a href="http://docs.python.org/2/library/csv.html" rel="nofollow">13.1.csv</a> </p>
<p>This documentation will allow you to write CSV's.</p>
<p>If you want to write to Google Sheets, it will require using a Google API (e.g. the Drive or Sheets API) over the network.</p>
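<p>As a hedged sketch (the filename and rows here are assumptions standing in for your script's output, not taken from your code), writing those comma-delimited lines with the <code>csv</code> module looks like this:</p>

```python
import csv

# Hypothetical rows, standing in for your script's print output
rows = [['a', 'b', 'c', 'd'],
        ['e', 'f', 'g', 'h'],
        ['i', 'j', 'k', 'l']]

# 'output.csv' is an assumed filename; open with newline='' on Python 3
# so the csv module controls line endings itself
with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerows(rows)
```

<p>The resulting file can then be imported into Google Sheets or Excel via their normal CSV import.</p>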
|
python|excel|google-sheets
| 0 |
1,905,797 | 17,457,792 |
Can't run Python code computing a Fourier transform on a .tiff image? How to manage memory?
|
<p>My Python code, designed to compute the Fourier transform, can't complete the task.</p>
<pre><code> def fouriertransform(result): #function for FTM computation
for filename in glob.iglob('*.tif'):
imgfourier = scipy.misc.imread(filename, flatten = True)
image = np.array([imgfourier])#make an array as np
arr = np.abs(np.fft.fftshift(np.fft.fft2(image)))**2
with open('сomput.csv', 'wb') as csvfile:
for elem in arr.flat[:50]:
writer = csv.writer(csvfile, .....)
writer.writerow([('{}\t'.format(elem))])
</code></pre>
<p>Traceback (most recent call last):</p>
<pre><code> File "C:\Python27\lib\site-packages\numpy\fft\fftpack.py", line 524, in _raw_fftnd
a = function(a, n=s[ii], axis=axes[ii])
File "C:\Python27\lib\site-packages\numpy\fft\fftpack.py", line 164, in fft
return _raw_fft(a, n, axis, fftpack.cffti, fftpack.cfftf, _fft_cache)
File "C:\Python27\lib\site-packages\numpy\fft\fftpack.py", line 75, in _raw_fft
r = work_function(a, wsave)
</code></pre>
<p>MemoryError </p>
<p>The image is large (90 MB). How can I solve the problem, given that the code works on 1-5 MB images?</p>
<p>Thank you</p>
|
<p>Some suggestions:</p>
<ul>
<li><p>The <code>scipy.fftpack</code> functions allow you the option of overwriting your input array (<code>overwrite_x=True</code>), which may buy you some memory savings.</p></li>
<li><p>You could also try <a href="https://code.google.com/p/anfft/" rel="nofollow"><code>anfft</code></a> (or the newer <a href="http://hgomersall.github.io/pyFFTW/" rel="nofollow">pyFFTW</a>), which is just a Python wrapper around the FFTW C libraries. It's definitely much faster than the numpy and scipy FFT functions, and at least in my hands it seems to also be a bit more memory-efficient.</p></li>
<li><p>Could you maybe cast your array to a lower bit depth (float64->float32, uint16->uint8)?</p></li>
<li><p>You could always downsample your image first (e.g. using <code>scipy.ndimage.zoom</code>). Reducing the spatial resolution of the image will of course reduce the spectral resolution of the FFT, but it might not matter that much to you depending on what exactly you want to do with it.</p></li>
<li><p>Buy some more RAM?</p></li>
</ul>
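<p>A minimal sketch of the first three suggestions combined, using only numpy (the image here is random data standing in for your 90 MB TIFF, and plain slicing is a crude stand-in for <code>scipy.ndimage.zoom</code>):</p>

```python
import numpy as np

# Hypothetical large grayscale image standing in for the 90 MB TIFF
img = np.random.rand(1024, 1024)      # float64, 8 MB

# Cast to a lower bit depth before transforming
img32 = img.astype(np.float32)        # halves the memory

# Downsample first; slicing keeps every second pixel in each axis
small = img32[::2, ::2]               # 512 x 512

# The FFT of the smaller array needs far less working memory
power = np.abs(np.fft.fftshift(np.fft.fft2(small))) ** 2
print(power.shape)
```

<p>With <code>scipy.fftpack</code> you would additionally pass <code>overwrite_x=True</code> to let the transform reuse the input buffer.</p>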
|
python|numpy|fft
| 1 |
1,905,798 | 70,699,987 |
OpenCV : How to clear Image Background of ID Card without losing quality
|
<p>I want to remove the background of an ID card image without losing quality, keeping only the text on a white background.</p>
<p>The following code is not effective; it produces a lot of noise and distortion:</p>
<pre><code> img = cv2.imread(imge)
# Convert into grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Convert BGR to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of black color in HSV
lower_val = np.array([0,0,0])
upper_val = np.array([179,255,135])
# Threshold the HSV image to get only black colors
mask = cv2.inRange(hsv, lower_val, upper_val)
# invert mask to get black symbols on white background
mask_inv1 = cv2.bitwise_not(mask)
mask_inv = cv2.blur(mask_inv1,(5,5))
</code></pre>
<p>How I can achieve clean background with these images</p>
<h2>Samples</h2>
<p><a href="https://i.stack.imgur.com/7nKaR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7nKaR.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/wKziQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wKziQ.jpg" alt="enter image description here" /></a></p>
<h2>Output</h2>
<p><a href="https://i.stack.imgur.com/9uiNY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9uiNY.jpg" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/70KpE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/70KpE.jpg" alt="enter image description here" /></a></p>
|
<p>Use <a href="https://github.com/danielgatis/rembg" rel="nofollow noreferrer">Rembg</a> to get clean output like the examples below.</p>
<p><a href="https://github.com/danielgatis/rembg" rel="nofollow noreferrer">Rembg</a> is an offline tool for removing image backgrounds. You can install the Rembg package with pip:</p>
<pre><code>pip install rembg
</code></pre>
<p>After that, run:</p>
<pre><code>rembg i path/to/input.png path/to/output.png
</code></pre>
<p><a href="https://i.stack.imgur.com/vZtcB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vZtcB.jpg" alt="output1" /></a></p>
<p><a href="https://i.stack.imgur.com/rYJ7B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rYJ7B.png" alt="output2" /></a></p>
|
python-3.x|opencv|image-processing
| 2 |
1,905,799 | 69,898,206 |
How do I connect to Salesforce with Python requests?
|
<p>I get this error while connecting to the Salesforce API: [{'message': 'INVALID_HEADER_TYPE', 'errorCode': 'INVALID_AUTH_HEADER'}]</p>
<p>What is the problem?</p>
<p>My Python code is as follows:</p>
<pre><code>client_id = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
client_secret = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
redirect_url = 'http://localhost/'
cm_user = 'XXXXXXXXXXXXXXXXXXXXXX'
cm_pass = 'XXXXXXXXXXXXXXXXXXXXXX'
auth_url = 'https://login.salesforce.com/services/oauth2/token'
response = requests.post(auth_url, data = {
'client_id': client_id,
'client_secret': client_secret,
'grant_type':'password',
'username': cm_user,
'password': cm_pass
})
json_res = response.json()
access_token = json_res['access_token']
auth = {'Authorization': 'Bearer' + access_token}
instance_url = json_res['instance_url']
url = instance_url + '/services/data/v45.0/sobjects/contact/describe'
res = requests.get(url, headers=auth)
r = res.json()
print(r)
</code></pre>
|
<p>You are missing a space after the word Bearer, which renders your authorization header invalid.</p>
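<p>The fix is a one-character change when building the header (the token value below is a placeholder, not a real Salesforce token):</p>

```python
access_token = "EXAMPLE_TOKEN"  # placeholder, not a real token

# Broken: 'Bearer' runs straight into the token -> "BearerEXAMPLE_TOKEN"
bad_auth = {'Authorization': 'Bearer' + access_token}

# Fixed: note the space after "Bearer"
auth = {'Authorization': 'Bearer ' + access_token}

print(auth['Authorization'])  # Bearer EXAMPLE_TOKEN
```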
|
python|salesforce
| 0 |