Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, 10 to 150 chars) | question (string, 21 to 64.2k chars) | answer (string, 19 to 59.4k chars) | tags (string, 5 to 112 chars) | score (int64, -10 to 17.3k) |
---|---|---|---|---|---|---|
1,907,500 | 28,788,430 |
Print to an empty file in python 3
|
<p>Can someone tell me why in Python 3.4.2, when I try</p>
<pre><code>import codecs
f = codecs.open('/home/filename', 'w', 'utf-8')
print ('something', file = f)
</code></pre>
<p>it gives me an empty file?</p>
<p>Previously it was working well, but suddenly it stopped printing to the file.</p>
|
<p>File writing is buffered to avoid hitting the performance drain that is writing to a disk. Flushing the buffer takes place when you reach a threshold, flush explicitly or close the file.</p>
<p>You have not <em>closed</em> the file, did not flush the buffer, and haven't written enough to the file to auto-flush the buffer.</p>
<p>Do one of the following:</p>
<ul>
<li><p>Flush the buffer:</p>
<pre><code>f.flush()
</code></pre>
<p>This can be done with the <code>flush</code> argument to <code>print()</code> as well:</p>
<pre><code>print('something', file=f, flush=True)
</code></pre>
<p>but the argument requires Python 3.3 or newer. </p></li>
<li><p>Close the file:</p>
<pre><code>f.close()
</code></pre>
<p>or use the file as a context manager (using the <code>with</code> statement):</p>
<pre><code>with open('/home/filename', 'w', encoding='utf-8') as f:
print('something', file=f)
</code></pre>
<p>and the file will be closed automatically when the block is exited (on completion, or an exception).</p></li>
<li><p>Write more data to the file; how much depends on the buffering configuration (a short sketch follows this list).</p></li>
</ul>
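<p>For that last option, the flush threshold itself can be tuned: the built-in <code>open()</code> takes a <code>buffering</code> argument. A minimal sketch (same hypothetical path as above; <code>buffering=1</code> selects line buffering in text mode, so every printed line is flushed immediately):</p>
<pre><code>with open('/home/filename', 'w', encoding='utf-8', buffering=1) as f:
    print('something', file=f)   # flushed as soon as the trailing newline is written
</code></pre>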
|
python|python-3.x|file-io
| 3 |
1,907,501 | 68,587,055 |
how to convert string into object in python odoo
|
<p>I have 2 methods.
In method 1:</p>
<pre><code>def print_xls(self):
    record = self.env['sale.order'].search([('id','in',filtered_list)])
    data = {
        'model_recs': record
    }
    return self.env.ref('module_name.report_name').report_action(self, data=data)
</code></pre>
<p>in method 2:</p>
<pre><code>def generate_xlsx_report(self, workbook, data, lines):
    h1 = workbook.add_format({'font_size': 16, 'align': 'center', 'valign': 'vcenter', 'bold': True, 'underline': True})
    sheet = workbook.add_worksheet('sale dept')
    sheet.merge_range(1, 0, 3, 17, "Sale Report", h1)
    print(data['model_recs'])
</code></pre>
<p>I am getting <code>data['model_recs'] = 'sale.order(1,2,3)'</code>.</p>
<p>My question is how to convert the string 'sale.order(1,2,3)' into the model/object sale.order(1,2,3), so that I can get its field data like sale.order[0].some_field in the second method.</p>
|
<p>You can use <code>eval("sale.order(1,2,3)")</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> eval("str(1)")
'1'
>>> eval("type(str(1))")
<class 'str'>
>>> class x:
...     def gg(self, x, y, z):
...         print(x, y, z)
...
>>> eval('x().gg(1,2,3)')
1 2 3
</code></pre>
<p>From here <a href="https://stackoverflow.com/a/42227653/7035448">https://stackoverflow.com/a/42227653/7035448</a></p>
<p>While eval is a very powerful method, it is also worthwhile to note the problems with it: <a href="https://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html" rel="nofollow noreferrer">eval_problems</a></p>
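<p>A hedged alternative sketch (not part of the original answer): instead of <code>eval</code>-ing the string, you can pass only the record IDs through <code>data</code> and rebuild the recordset with the ORM's <code>browse()</code> in the second method. The names below mirror the question's code and are otherwise illustrative:</p>
<pre><code># method 1: send plain IDs instead of the recordset
data = {
    'model_ids': record.ids
}

# method 2: rebuild the recordset from the IDs
records = self.env['sale.order'].browse(data['model_ids'])
for rec in records:
    print(rec.name)  # any field of the sale.order records is available here
</code></pre>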
|
python|odoo|odoo-12|odoo-13|odoo-14
| 0 |
1,907,502 | 6,328,340 |
how to get the math operators strings from module `operator` in python
|
<p>Take <code>operator.add</code> for example:</p>
<pre><code>>>>import operator as op
>>>op.add(1,2) #means 1 + 2
3
>>>op.add.__name__
'add'
</code></pre>
<p>I want sort of:</p>
<pre><code>>>>op.add.math_str
"+"
</code></pre>
<p>Can I get all those math strings <code>"+", "-", ">"...</code> that the <code>operator</code> module supports at runtime?</p>
<p>EDIT:</p>
<pre><code>>>> [eval(x) for x in [".".join(("op",x,"__doc__")) for x in dir(op)]]
['abs(a) -- Same as abs(a).',
'add(a, b) -- Same as a + b.',
'and_(a, b) -- Same as a & b.',
'concat(a, b) -- Same as a + b, for a and b sequences.',
'contains(a, b) -- Same as b in a (note reversed operands).',
'delitem(a, b) -- Same as del a[b].',
'delslice(a, b, c) -- Same as del a[b:c].',
'div(a, b) -- Same as a / b when __future__.division is not in effect.',
'str(object) -> string\n\nReturn a nice string representation of the object.\nIf the argument is a string, the return value is the same object.',
</code></pre>
<p>The above code can list most operator doc strings; does that mean I can extract the symbol strings with the re module?</p>
<p>Thanks!</p>
|
<p>You can use the <a href="http://docs.python.org/library/operator.html?highlight=operator#mapping-operators-to-functions" rel="nofollow">following table from the documentation</a> to build your own mapping dictionary once, and then simply use it whenever you need it.</p>
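<p>For illustration, a minimal hand-built mapping based on that table might look like this (only a few operators shown; extend as needed):</p>
<pre><code>import operator as op

# map functions from the operator module to their math symbols
math_str = {
    op.add: '+',
    op.sub: '-',
    op.mul: '*',
    op.truediv: '/',
    op.lt: '<',
    op.gt: '>',
    op.eq: '==',
}

print(math_str[op.add])  # '+'
</code></pre>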
|
python|string|math|operators
| 3 |
1,907,503 | 56,978,481 |
extracting parameter values from one large string
|
<p>I have the following string (loaded from a .txt file to a Matlab cell):</p>
<pre><code>text = 'u1 @ t=0, K=3.1416, gamma=0.1, A=-0.1 u1 @ t=0.01, K=3.1416,
gamma=0.1, A=-0.1 u1 @ t=0.02, K=3.1416, gamma=0.1, A=-0.1 u1 @ t=0.03,
K=3.1416, gamma=0.1, A=-0.1'
</code></pre>
<p>The whole string variable is very long (from t=0 to t=1 and for different parameter values). I want to separate them into multiple cells so that</p>
<ul>
<li><p><code>A(1)='u1 @ t=0, K=3.1416, gamma=0.1, A=-0.1'</code>,</p>
</li>
<li><p><code>A(2)='u1 @ t=0.01, K=3.1416, gamma=0.1, A=-0.1'</code>,</p>
</li>
<li><p>etc.</p>
<p>Or even better is to extract the variables of the parameters <code>t</code>, <code>K</code>, <code>gamma</code>, <code>A</code> and store them in an array.</p>
</li>
</ul>
<p>How to do this in Matlab? (or in Python)</p>
<p>Edit:</p>
<p>Turns out the first few entries in my data are in the form <code>... t=1E-4, ... t=2E-4, ...... t=9E-4, ... t=0.001</code>, and some of the answers skip the first few time steps, which are in scientific notation. How can those numbers be handled as well?</p>
|
<p>You've gotten a lot of Python answers, so here's a MATLAB one. You can use the function <a href="https://www.mathworks.com/help/matlab/ref/regexp.html" rel="nofollow noreferrer"><code>regexp</code></a> to parse the string, then <a href="https://www.mathworks.com/help/matlab/ref/vertcat.html" rel="nofollow noreferrer"><code>vertcat</code></a>, <a href="https://www.mathworks.com/help/matlab/ref/cellfun.html" rel="nofollow noreferrer"><code>cellfun</code></a>, and <a href="https://www.mathworks.com/help/matlab/ref/str2double.html" rel="nofollow noreferrer"><code>str2double</code></a> to reshape and convert the resulting cell array of strings into an N-by-4 matrix of values. Starting with this sample data (4 sets of entries in one string):</p>
<pre><code>str = 'u1 @ t=0, K=3.1416, gamma=0.1, A=-0.1 u1 @ t=0.01, K=3.1416, gamma=0.1, A=-0.1 u1 @ t=0.02, K=3.1416, gamma=0.1, A=-0.1 u1 @ t=0.03, K=3.1416, gamma=0.1, A=-0.1';
</code></pre>
<p>The code is just 2 lines:</p>
<pre><code>vals = regexp(str, 'u1 @ t=([-\.\dE]+), K=([-\.\dE]+), gamma=([-\.\dE]+), A=([-\.\dE]+)', 'tokens');
vals = cellfun(@str2double, vertcat(vals{:}));
</code></pre>
<p>And the result:</p>
<pre><code>vals =
0 3.141600000000000 0.100000000000000 -0.100000000000000
0.010000000000000 3.141600000000000 0.100000000000000 -0.100000000000000
0.020000000000000 3.141600000000000 0.100000000000000 -0.100000000000000
0.030000000000000 3.141600000000000 0.100000000000000 -0.100000000000000
</code></pre>
<p>Each column contains the values for <code>t</code>, <code>K</code>, <code>gamma</code>, and <code>A</code>.</p>
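<p>Since the question also mentions Python, a rough hedged equivalent with the <code>re</code> module (not part of this MATLAB answer) could be:</p>
<pre><code>import re

text = 'u1 @ t=0, K=3.1416, gamma=0.1, A=-0.1 u1 @ t=0.01, K=3.1416, gamma=0.1, A=-0.1'
pattern = r'u1 @ t=([-\.\dE]+), K=([-\.\dE]+), gamma=([-\.\dE]+), A=([-\.\dE]+)'
# each inner list holds [t, K, gamma, A]; float() also accepts 1E-4 style values
vals = [[float(v) for v in group] for group in re.findall(pattern, text)]
</code></pre>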
|
python|string|matlab|text|cell
| 1 |
1,907,504 | 56,910,206 |
Can you check if two objects instances are same
|
<p>In the LinkedList class defined here, I wanted to check whether you can compare <code>self.head == node</code> directly, or whether you need to compare the node on all its attributes and define an <strong>equals</strong> method explicitly. I saw code where someone was doing this without an <strong>equals</strong> method.</p>
<pre><code>class Node(object):
    def __init__(self,key=None,value=None):
        self.key = key
        self.value = value
        self.previous = None
        self.next = None


class LinkedList(object):
    def __init__(self):
        self.head = None
        self.tail = None
        self.count = 0

    def prepend(self,value):
        node = Node(value)
        if self.head is None:
            self.head = node
            self.tail = self.head
            self.count = 1
            return
        self.head.previous = node
        node.next = self.head
        self.head = node
        self.count += 1
</code></pre>
|
<p>To see if the address of <code>obj1</code> matches the address of <code>obj2</code>, use the <code>is</code> operator.</p>
<p>You are doing that already in this test:</p>
<pre><code> if self.head is None:
</code></pre>
<p>There is exactly one object (a singleton) in the <code>NoneType</code> class,
and you are essentially asking if <code>id(self.head)</code> matches the <code>id()</code>, or address, of <code>None</code>.</p>
<p>Feel free to do that with other linked list node objects.</p>
<p>If, OTOH, you were to ask if <code>self.head == some_node</code>,
that might well be asking if node attribute <code>a</code> matches in both,
and attribute <code>b</code> matches in both,
depending on your class methods,
e.g. using <code>def __eq__</code>.
A node created by a shallow copy might be <code>==</code> equal to original,
but <code>is</code> will reveal that separate storage is allocated for it.</p>
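<p>To make the distinction concrete, here is a small hedged sketch; the <code>__eq__</code> shown is an illustrative addition, not part of the original Node class:</p>
<pre><code>class Node(object):
    def __init__(self, key=None, value=None):
        self.key = key
        self.value = value

    def __eq__(self, other):
        # value equality: compare the attributes
        return isinstance(other, Node) and (self.key, self.value) == (other.key, other.value)

a = Node('k', 1)
b = Node('k', 1)
print(a == b)   # True  -- same attribute values
print(a is b)   # False -- two distinct objects in memory
print(a is a)   # True  -- the very same object
</code></pre>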
|
python|python-3.x
| 1 |
1,907,505 | 53,876,205 |
What's wrong in this code? Unknown attribute 'array' of type Module(<module 'numpy' from filename __init__.py'>
|
<p>I'm trying to create an array inside a function using @vectorize, I don't know why I keep receiving this error:</p>
<pre><code>Unknown attribute 'array' of type Module( < module 'numpy' from 'filename.... /lib/python3.6/site-packages/numpy/ __ init __ .py'>)
</code></pre>
<p>Code:</p>
<pre><code>from numba import vectorize, float32
import numpy as np
@vectorize([float32(float32[:,:], float32[:])], target='cuda')
def fitness(vrp_data, individual):
    # The first distance is from depot to the first node of the first route
    depot = np.array([0.0, 0.0, 30.0, 40.0], dtype=np.float32)
    firstnode = np.array([0.0, 0.0, 0.0, 0.0], dtype=np.float32)
    firstnode = vrp_data[vrp_data[:,0] == individual[0]][0] if individual[0] != 0 else depot
    x1 = depot[2]
    x2 = firstnode[2]
    y1 = depot[3]
    y2 = firstnode[3]
    dx = x1 - x2
    dy = y1 - y2
    totaldist = math.sqrt(dx * dx + dy * dy)
    return totaldist
</code></pre>
<p>The code works fine without the function decoration.</p>
|
<h1>The problem</h1>
<p><code>numpy.array</code> is not supported by Numba. Numba only supports a subset of the Numpy top-level functions (ie any function you call like <code>numpy.foo</code>). Here's an <a href="https://github.com/numba/numba/issues/1646" rel="nofollow noreferrer">identical issue from the Numba bug tracker</a>.</p>
<h1>The "solution"</h1>
<p>Here's the <a href="http://numba.pydata.org/numba-doc/latest/reference/numpysupported.html#other-functions" rel="nofollow noreferrer">list of Numpy functions that Numba actually supports</a>. <code>numpy.zeros</code> is supported, so in an ideal world you could just change the lines in your code that use <code>np.array</code> to:</p>
<pre><code>depot = np.zeros(4, dtype=np.float32)
depot[2:] = [30, 40]
firstnode = np.zeros(4, dtype=np.float32)
</code></pre>
<p>and it would work. However, when targeting <code>cuda</code> <a href="http://numba.pydata.org/numba-doc/latest/cuda/cudapysupported.html#numpy-support" rel="nofollow noreferrer">all Numpy functions that allocate memory (including <code>np.zeros</code>) are disabled</a>. So you'll have to come up with a solution that doesn't involve any array allocation.</p>
<h1>Issues with use of <code>vectorize</code></h1>
<p>Also, it looks like <code>vectorize</code> is not the wrapper function you should be using. Instead, a function like the one you've written <a href="https://numba.pydata.org/numba-doc/dev/user/vectorize.html#the-guvectorize-decorator" rel="nofollow noreferrer">requires the use of <code>guvectorize</code></a>. Here's the closest thing to your original code that I was able to get to work:</p>
<pre><code>import math
from numba import guvectorize, float32
import numpy as np
@guvectorize([(float32[:,:], float32[:], float32[:])], '(m,n),(p)->()')
def fitness(vrp_data, individual, totaldist):
    # The first distance is from depot to the first node of the first route
    depot = np.zeros(4, dtype=np.float32)
    depot[2:] = [30, 40]
    firstnode = np.zeros(4, dtype=np.float32)
    firstnode = vrp_data[vrp_data[:,0] == individual[0]][0] if individual[0] != 0 else depot
    x1 = depot[2]
    x2 = firstnode[2]
    y1 = depot[3]
    y2 = firstnode[3]
    dx = x1 - x2
    dy = y1 - y2
    totaldist[0] = math.sqrt(dx * dx + dy * dy)
</code></pre>
<p>The third argument in the signature is actually the return value, so you call the function like:</p>
<pre><code>vrp_data = np.arange(100, 100 + 4*4, dtype=np.float32).reshape(4,4)
individual = np.arange(100, 104, dtype=np.float32)
fitness(vrp_data, individual)
</code></pre>
<p>Output:</p>
<pre><code>95.67131
</code></pre>
<h1>Better error message in latest Numba</h1>
<p>You should probably upgrade your version of Numba. In the current version, your original code raises a somewhat more specific error message:</p>
<pre><code>TypingError: Failed in nopython mode pipeline (step: nopython frontend). Use of unsupported NumPy function 'numpy.array' or unsupported use of the function.
</code></pre>
|
python|arrays|numpy|numba
| 2 |
1,907,506 | 25,851,779 |
Robot Framework : How to know whether a test library function is being executed from setup/test/teardown
|
<p>I have my own test library function run_test_routine()</p>
<p>I am calling the <strong>same</strong> function "Run Test Routine" as an RF function from my RF suite in the setup section, test section and teardown section, like this:</p>
<pre><code>my RF test case
[Setup] Run Test Routine setup_input
Run Test Routine test_input
[Teardown] Run Test Routine teardown_input
</code></pre>
<p>Now, when this run_test_routine() gets invoked in the RF Python library, how do I get to know where it was called from?
I.e. was it called from the setup section, the test section or the teardown section?</p>
<p>I would like to stress that this is required in the Python code of the RF library and not in the text-based RF suite.</p>
|
<p>I don't believe there is a reliable way to determine the context in which a keyword is called. The only thing I can think of is to examine the stack to see if the internal routine <code>_run_setup</code> or <code>_run_teardown</code> was called. This could easily break in future versions of robot since it depends on the name of some private internal functions. </p>
<p>If you really want to do that, it might look something like this:</p>
<pre><code>import traceback

def _is_setup():
    for tb in reversed(traceback.extract_stack()):
        if (tb[2] == "_run_setup"):
            return True
    return False

def _is_teardown():
    for tb in reversed(traceback.extract_stack()):
        if (tb[2] == "_run_teardown"):
            return True
    return False
</code></pre>
<p>I think the better solution is to have three keywords. Keep the one you have, and then create two more called <code>Run Setup Test Routine</code> and <code>Run Teardown Test Routine</code>. They could both call the <code>Run Test Routine</code> function in addition to whatever special processing you need to do. Or, they could simply pass an extra argument to <code>Run Test Routine</code> to tell it the context.</p>
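<p>A hedged sketch of that last suggestion (the names are illustrative, not from the original library):</p>
<pre><code>def run_test_routine(input_data, context="test"):
    # 'context' tells the routine where it was called from
    if context == "setup":
        pass  # setup-specific handling
    elif context == "teardown":
        pass  # teardown-specific handling
    # ... the common routine ...

# Robot Framework resolves "Run Setup Test Routine" / "Run Teardown Test Routine"
# to these wrappers, which just forward the context.
def run_setup_test_routine(input_data):
    return run_test_routine(input_data, context="setup")

def run_teardown_test_routine(input_data):
    return run_test_routine(input_data, context="teardown")
</code></pre>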
|
python|automation|robotframework
| 1 |
1,907,507 | 25,733,112 |
Plone Migrate Blob Data to "bushy" layout IOError Errno 21
|
<p>I am trying to migrate our blobstorage (using Plone 4.3.2 and ZODB3 3.10.5) from 'lawn' to 'bushy' layout. While running the script, I get the following traceback:</p>
<pre><code>(11719) Blob directory `var/blobstorage-lawn/` has layout marker set. Selected `lawn` layout.
(11719) The `lawn` blob directory layout is deprecated due to scalability issues on some file systems, please consider migrating to the `bushy` layout.
Migrating blob data from `var/blobstorage-lawn/` (lawn) to `var/blobstorage` (bushy)
Traceback (most recent call last):
File "bin/migrateblobs", line 19, in <module>
sys.exit(ZODB.scripts.migrateblobs.main())
File "/var/db/zope/plone43_dev/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux- x86_64.egg/ZODB/scripts/migrateblobs.py", line 77, in main
migrate(source, dest, options.layout)
File "/var/db/zope/plone43_dev/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-x86_64.egg/ZODB/scripts/migrateblobs.py", line 52, in migrate
link_or_copy(source_file, dest_file)
File "/var/db/zope/plone43_dev/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-x86_64.egg/ZODB/scripts/migrateblobs.py", line 30, in link_or_copy
shutil.copy(f1, f2)
File "/var/db/zope/plone43_dev/Python-2.7/lib/python2.7/shutil.py", line 119, in copy
copyfile(src, dst)
File "/var/db/zope/plone43_dev/Python-2.7/lib/python2.7/shutil.py", line 82, in copyfile
with open(src, 'rb') as fsrc:
IOError: [Errno 21] Is a directory: '/var/db/zope/plone43_dev/zeocluster/var/blobstorage-lawn/0x00/0x00'
</code></pre>
<p>I don't understand why it is trying to copy a directory. Is this a bug in the product? Or could my blobstorage be corrupt? It is a dev environment, and I have been having some other issues with the blobstorage, which is why I'm trying to migrate to bushy in hopes that it will resolve some issues. </p>
<p>Thoughts or solutions?</p>
|
<p>You appear to have a <em>bushy</em> layout intermixed with your <em>lawn</em> layout.</p>
<p>The <em>lawn</em> layout uses a flat structure; directories are named after the OID, with in each directory <em>just</em> the revisions of the blob files. The <em>bushy</em> layout uses one directory <em>per byte in the OID</em>, leading to a tree of directories.</p>
<p>When moving from <em>lawn</em> to <em>bushy</em> the script takes <em>directories</em> assuming they are valid OIDs, and on each assumes that all it'll find in the directory is revision files. </p>
<p>You, however, already have a <em>bushy</em> layout structure. The script is trying to move the directory <code>0x00</code> out of a top-level directory <code>0x00</code>. That's exactly the kind of directories you'd find in a <em>bushy</em> layout, not a <em>lawn</em> layout. Your structure is corrupted indeed.</p>
<p>It may be that all that is wrong is the marker file; if all you have at the top level are <code>0xhh</code> 2 digit hex numbered directories, then you have <em>just</em> a <em>bushy</em> layout disguised as a <em>lawn</em>. You can then try changing the <code>.layout</code> file in the <code>var/blobstorage-lawn</code> directory from <code>lawn</code> to <code>bushy</code> and see if your ZODB still works. If not, it probably is beyond repair.</p>
<p>If you have a mix of <code>0xhh</code> and longer <code>0xhhhhhhhhh</code> hex directories (the latter only containing files, no directories) then you managed to put both a <em>lawn</em> and a <em>bushy</em> layout into the one blob storage. If the layout is marked as <em>lawn</em>, the <em>bushy</em> part is most likely out of date. You can try and move all directories with just 2 hex digits out to a new <code>blobstorage</code> directory (and add a new <code>.layout</code> file with the content <code>bushy</code> to it), but I am not too confident that anything useful is contained in it.</p>
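<p>A small hedged sketch of how you might check what is actually sitting at the top level of the blob directory before changing the marker (the path is hypothetical, taken from the traceback):</p>
<pre><code>import os
import re

blobdir = 'var/blobstorage-lawn'
bushy = re.compile(r'^0x[0-9a-f]{2}$')   # bushy layout: one directory per OID byte
lawn = re.compile(r'^0x[0-9a-f]{3,}$')   # lawn layout: one long OID-named directory per blob

entries = [e for e in os.listdir(blobdir) if e.startswith('0x')]
print('bushy-style entries:', sum(1 for e in entries if bushy.match(e)))
print('lawn-style entries: ', sum(1 for e in entries if lawn.match(e)))
</code></pre>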
|
python|plone|zodb
| 5 |
1,907,508 | 20,472,484 |
Python: How to build a dictionary with multiple split strings
|
<p>In my admin page I have a list of usernames and IDs. I'm trying to build a dictionary to use later to automate some jobs.
The code that I've built is:</p>
<pre><code>def nameID_tag(src):
    nameID = {}
    try:
        id = src.split('<option value="')[1].split('" >')[0]
        names = src.split(id+'\" >')[1].split('</option>')[0]
        nameID[names] = id
    except:
        print "Could not retrieve data."
    return nameID
</code></pre>
<p>The HTML code:</p>
<pre><code><option value="1" >admin</option>
<option value="2" >viktor</option>
<option value="3" >ana</option>
</code></pre>
<p>Then I call it:</p>
<pre><code>s=br.open(url).read()
nameID_tag(s)
</code></pre>
<p>It's working fine for one user, but I don't know how to get all the users + IDs and build the dictionary.</p>
|
<p>Now, I'm pretty bad with regular expressions, so there are probably better ways to do this, but this seems to work:</p>
<pre><code>import re

# 'html' is the page source string (the `s` returned by br.open(url).read())
IDs = re.findall(r'(?<=<option value=")\w+', html)
names = re.findall(r'(?<=>)\w+(?=</option>)', html)
# map each username to its ID, matching the question's nameID[names] = id
nameID = dict(zip(names, IDs))
</code></pre>
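<p>Since regular expressions on HTML are brittle, a hedged alternative sketch with an HTML parser such as BeautifulSoup (a different library than the one used above):</p>
<pre><code>from bs4 import BeautifulSoup

def nameID_tag(src):
    soup = BeautifulSoup(src, 'html.parser')
    # map each option's text (the username) to its value attribute (the ID)
    return {opt.get_text(strip=True): opt['value'] for opt in soup.find_all('option')}

# nameID_tag(s) -> {'admin': '1', 'viktor': '2', 'ana': '3'}
</code></pre>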
|
python|parsing|dictionary|html-parsing|mechanize
| 2 |
1,907,509 | 29,675,185 |
How to read only a specific range of lines out of a csv file with python?
|
<p>How can I only read from line 5000 to 6000 in this csv file for example? At this moment "for row in reader:" loops through all lines of course.</p>
<p>So I have the lines:</p>
<pre><code>with open('A.csv', 'rt') as f:
    reader = csv.reader(f, delimiter=';')
    for row in reader:
        response = urllib2.urlopen(row[12])
</code></pre>
<p>This code is used to open specific url links. </p>
|
<p>Because csv <a href="https://docs.python.org/2/library/csv.html#csv.reader" rel="nofollow noreferrer"><code>reader</code></a> object supports iteration, you can simply use <a href="https://docs.python.org/2/library/itertools.html#itertools.islice" rel="nofollow noreferrer"><code>itertools.islice</code></a> to slice any specific part that you want.</p>
<pre><code>from itertools import islice

with open('A.csv', 'rt') as f:
    reader = csv.reader(f, delimiter=';')
    for row in islice(reader, 5000, 6000):
        response = urllib2.urlopen(row[12])
</code></pre>
|
python|python-3.x|csv
| 9 |
1,907,510 | 29,523,067 |
Python: Reading Ftp file list with UTF-8?
|
<p>Hi, I am using the ftplib module and list my files with this code:</p>
<pre><code>files=[]
files = ftp.nlst()
</code></pre>
<p>And write them to text file with this code:</p>
<pre><code>for item in files:
    filenames.write(item + '\n')
</code></pre>
<p>But there is an encoding problem: if my file names have 'ı, ğ, ş' characters, it can't read them and writes '?' to the file instead.</p>
<p>How can I read them properly?</p>
|
<p>Python 3.x uses ISO-8859-1 as the default encoding for file names.</p>
<p>To use UTF-8 encoding for file names with the server, you need to add the following line:</p>
<pre><code>ftpConnector = ftplib.FTP(host, user, password)  # connection
ftpConnector.encoding = 'utf-8'  # force UTF-8 for file names rather than the default ISO-8859-1
</code></pre>
<p>then you can use:</p>
<pre><code>ftpConnector.storbinary( 'STOR '+fileName, file) # filename will be utf-8 encoded
</code></pre>
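<p>Applied to the listing code from the question, a hedged sketch could look like this (host, user and password are placeholders):</p>
<pre><code>import ftplib

ftp = ftplib.FTP(host, user, password)
ftp.encoding = 'utf-8'   # decode file names sent by the server as UTF-8

files = ftp.nlst()
with open('filenames.txt', 'w', encoding='utf-8') as filenames:
    for item in files:
        filenames.write(item + '\n')
</code></pre>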
|
python|python-3.x|utf-8|ftp|ftplib
| 15 |
1,907,511 | 61,121,949 |
Change python3 command directory in terminal
|
<p><strong>Problem:</strong>
I am using Mac Catalina 10.15. I know that Catalina installed Python 2.7 already and I installed Python 3.7.3. Then I also installed Anaconda which contained Conda, Python 3.7.3. Now I have 3 Pythons:</p>
<p>A. /usr/bin/python -> python 2.7<br>
B. /usr/bin/python3 -> python 3.7.3<br>
C. /Users/david/anaconda3/python.app/Contents/MacOS/python -> python 3.7.6</p>
<p>When I type "python3" in the terminal, it runs B. But I want the "python3" command to open C.</p>
<p><strong>What I tried:</strong>
I found "/Users/david/.bash_profile" and added</p>
<pre><code>alias python3="/Users/david/opt/anaconda3/python.app/Contents/MacOS/python"
</code></pre>
<p>at the end of the file, but "python3" still opens B. How can I open the Anaconda Python by typing "python3" in the terminal?</p>
|
<p>If you have not so already, try running:</p>
<pre><code>$ source ~/.bash_profile
</code></pre>
<p>That will load all your settings for the current terminal session. However, this will not load automatically when you start a new terminal session. For this to happen, you first need to know what shell you are running.<br>
Run:</p>
<pre><code>$ echo $SHELL
/bin/zsh
</code></pre>
<p>If it returns <code>/bin/zsh</code> like mine does(which it should since this is MacOS Catalina), you must copy your alias to the bottom of <code>~/.zshrc</code>. Then your alias will automatically be loaded when you start a new terminal session.<br>
If for some reason <code>echo $SHELL</code> returns something other than <code>/bin/zsh</code>, run:</p>
<pre><code>$ chsh -s /bin/zsh
</code></pre>
<p>which will change your shell to <code>zsh</code>. Then your alias settings in <code>~/.zshrc</code> will be loaded in every new terminal session.</p>
|
python|python-3.x|macos|shell
| 2 |
1,907,512 | 60,767,809 |
How to count len of strings in a list without built-in function?
|
<p>How can I create a function <code>count_word</code> in order to get the result like this: </p>
<pre><code>x = ['Hello', 'Bye']
print(count_word(x))
# Result must be [5, 3]
</code></pre>
<p>without using <code>len(x[index])</code> or any built-in function?</p>
|
<p>Since you're not allowed to use built-in functions, you have to iterate over each string in the list and over all characters of each word as well. You also have to keep track of the current length of each word and reset the counter when the next word is taken. This is done by re-assigning the counter value to 0 (<code>length = 0</code>) before the next inner iteration is started:</p>
<pre class="lang-py prettyprint-override"><code>def count_word(x):
    result = []
    for word in x:
        length = 0
        for char in word:
            length += 1
        result.append(length)
    return result
</code></pre>
<p>Please note that this is probably the no-brainer par excellence. However, Python offers some interesting other approaches to solve this problem. <a href="https://stackoverflow.com/q/3992192/2648551">Here</a> are some other interesting examples, which of course need to be adapted.</p>
<hr>
<p>While this should answer your questions, I would like to add some notes about performance and why it is better to use built-in functions:</p>
<p>Generally spoken, built-in functions are doing the iteration under the hood for you or are even faster by e.g. <a href="https://docs.python.org/3/faq/design.html#how-are-lists-implemented-in-cpython" rel="nofollow noreferrer">simply getting the array’s length from the CPython list head structure</a> <em>(emphasis mine)</em>:</p>
<blockquote>
<p><strong>How are lists implemented in CPython?</strong></p>
<p>CPython’s lists are really variable-length arrays, not Lisp-style linked lists. The implementation uses a contiguous array of references to other objects, and <strong>keeps a pointer to</strong> this array and <strong>the array’s length in a list head structure</strong>.</p>
<p>This makes indexing a list <code>a[i]</code> an operation whose cost is independent of the size of the list or the value of the index.</p>
<p>When items are appended or inserted, the array of references is resized. Some cleverness is applied to improve the performance of appending items repeatedly; when the array must be grown, some extra space is allocated so the next few times don’t require an actual resize.</p>
</blockquote>
<p><em>(Credits also to Ken Y-N, see <a href="https://stackoverflow.com/q/42942135/2648551">How does len(array) work under the hood</a>)</em></p>
<p>Generally, it is better to use built-in functions whenever you can, because you seldom can beat the performance of the underlying implementation (e.g. for C-based Python installations):</p>
<pre class="lang-py prettyprint-override"><code>def count_word_2(x):
    return [len(word) for word in x]
</code></pre>
<p>You can see that if you time the two given functions:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: from timeit import timeit
In [2]: statement = 'count_word(["Hello", "Bye"])'
In [3]: count_word_1 = """
   ...: def count_word(x):
   ...:     result = []
   ...:     for word in x:
   ...:         length = 0
   ...:         for char in word:
   ...:             length += 1
   ...:         result.append(length)
   ...:     return result
   ...: """
In [4]: count_word_2 = """
   ...: def count_word(x):
   ...:     return [len(word) for word in x]
   ...: """
In [5]: timeit(stmt=statement, setup=count_word_1, number=10000000)
Out[5]: 4.744415309000033
In [6]: timeit(stmt=statement, setup=count_word_2, number=10000000)
Out[6]: 2.7576589090022026
</code></pre>
<p>If also a little bit of <em>cheating</em> is allowed (using the string dunder method <code>__len__()</code> instead of the built-in function <code>len()</code>), you can get some performance back (credits to HeapOverflow):</p>
<pre class="lang-py prettyprint-override"><code>In [7]: count_word_3 = """
...: def count_word(x):
...: return [word.__len__() for word in x]
...: """
In [8]: timeit(stmt=statement, setup=count_word_3, number=10000000)
Out[8]: 3.313732603997778
</code></pre>
<p>So a good rule of thumb is: Do whatever you can with built-in functions. They are more readable and faster.</p>
|
python
| 1 |
1,907,513 | 49,496,091 |
Why are braces allowed in this Python code?
|
<p>Python has never used braces to define code blocks, it relies on indentation instead; this is one of the defining features of the language. There's even a little cookie that CPython gives you to show how strongly they feel about this:</p>
<pre><code>>>> from __future__ import braces
SyntaxError: not a chance
</code></pre>
<p>When I saw this little snippet posted to a forum (since deleted) I thought it cannot possibly work. But it does!</p>
<pre><code>>>> def hi(): {
print('Hello')
}
>>> hi()
Hello
</code></pre>
<p>Why does this code work, when it appears to violate the language syntax?</p>
|
<p>The braces aren't defining a code block as they would in other languages - they're defining a <a href="https://docs.python.org/3.6/library/stdtypes.html#set-types-set-frozenset" rel="nofollow noreferrer"><code>set</code></a>. The <code>print</code> function is being evaluated and its return value (<code>None</code>) is being placed in the set. Once the set is created it is immediately discarded since it isn't being assigned to anything.</p>
<p>There are a couple of Python syntax features that are being exploited here. First, Python allows a single-statement code block to come immediately after a <code>:</code>. Second, an expression is allowed to span multiple lines under certain circumstances.</p>
<p>This code wouldn't have worked if the body of the block were more than one line, or if an assignment or statement other than a function call were attempted.</p>
<p>Here's a redoing of the function to make it clearer what's happening:</p>
<pre><code>>>> def hi2(): print(
{ print('Hello') }
)
>>> hi2()
Hello
{None}
</code></pre>
|
python|curly-braces
| 4 |
1,907,514 | 53,771,470 |
Resample different behaviour with agg and calling the function
|
<p>I have a <code>pandas.DataFrame</code> called <code>data</code> with this structure:</p>
<pre><code> id action
date
1900-11-01 00:00:00 10.0 starts_game
1900-11-01 00:05:00 10.0 team_a_scores
1900-11-01 00:25:00 10.0 team_a_scores
1900-11-01 00:30:00 10.0 team_a_scores
1900-11-01 00:55:00 10.0 team_b_scores
1900-11-01 23:58:00 99.0 starts_game
1900-11-02 00:40:00 99.0 team_b_scores
1900-11-02 00:50:00 99.0 team_b_scores
1900-11-03 00:05:00 10.0 starts_game
1900-11-03 00:24:00 10.0 team_b_scores
</code></pre>
<p>I want to resample it minute by minute and take different upsampling strategies: the <code>id</code> column I will forward-fill, and for <code>action</code> I will fill the upsampled values only with 'playing'.</p>
<p>The problem is that the result is different when I apply ffill directly to the resampled DataFrame and when I use the agg function; let's look at it:</p>
<pre><code>data.resample('T').ffill().head()
id action
date
1900-11-01 00:00:00 10.0 starts_game
1900-11-01 00:01:00 10.0 starts_game
1900-11-01 00:02:00 10.0 starts_game
1900-11-01 00:03:00 10.0 starts_game
1900-11-01 00:04:00 10.0 starts_game
</code></pre>
<p>But remember, I wanted the <code>action</code> column to be only the string 'playing', so:</p>
<pre><code>data.resample('T').agg(dict(id='ffill', action=lambda _: 'playing')).head()
id action
date
1900-11-01 00:00:00 10.0 playing
1900-11-01 00:01:00 NaN playing
1900-11-01 00:02:00 NaN playing
1900-11-01 00:03:00 NaN playing
1900-11-01 00:04:00 NaN playing
</code></pre>
<p>I don't understand why the id doesn't upsample correctly, any idea?</p>
<p>For easy reproducibility this is the csv:</p>
<pre><code>date,id,action
1900-11-01 00:00:00,10.0,starts_game
1900-11-01 00:05:00,10.0,team_a_scores
1900-11-01 00:25:00,10.0,team_a_scores
1900-11-01 00:30:00,10.0,team_a_scores
1900-11-01 00:55:00,10.0,team_b_scores
1900-11-01 23:58:00,99.0,starts_game
1900-11-02 00:40:00,99.0,team_b_scores
1900-11-02 00:50:00,99.0,team_b_scores
1900-11-03 00:05:00,10.0,starts_game
1900-11-03 00:24:00,10.0,team_b_scores
</code></pre>
<p>And the code:</p>
<pre><code>import pandas as pd
filename = 'your_custom_name.csv'
data = pd.read_csv(filename)
data = data.set_index('date')
</code></pre>
|
<p>The reason why <code>agg</code> isn't working is that <code>resample('T')</code> returns a <code>groupby</code>-like structure with groups being the minute-by-minute rows</p>
<pre><code>>>> data.resample('T').groups
{Timestamp('1900-11-01 00:00:00', freq='T'): 1,
Timestamp('1900-11-01 00:01:00', freq='T'): 1,
Timestamp('1900-11-01 00:02:00', freq='T'): 1,
Timestamp('1900-11-01 00:03:00', freq='T'): 1,
Timestamp('1900-11-01 00:04:00', freq='T'): 1, ...
</code></pre>
<p><code>agg</code> is applied over a group which in this case is just a single row meaning that the lambda would happily return a scalar and <code>ffill</code> would take the only element available to it.</p>
<p>Had you resampled it by e.g. a day</p>
<pre><code>>>> data.resample('D').groups
{Timestamp('1900-11-01 00:00:00', freq='D'): 6,
Timestamp('1900-11-02 00:00:00', freq='D'): 8,
Timestamp('1900-11-03 00:00:00', freq='D'): 10}
</code></pre>
<p>it would have been the other way around. Your lambda would return just a single value for the entire 6 elements of the first group but the <code>'ffill'</code> method would work as expected propagating the first encountered non-<code>NaN</code> value forward</p>
<pre><code>>>> data.resample('D').agg({'id': 'ffill', 'action': lambda _: 'playing'})
id action
date
1900-11-01 00:00:00 10.0 playing
1900-11-01 00:05:00 10.0 NaN
1900-11-01 00:25:00 10.0 NaN
1900-11-01 00:30:00 10.0 NaN
1900-11-01 00:55:00 10.0 NaN
1900-11-01 23:58:00 99.0 NaN
1900-11-02 00:00:00 NaN playing
1900-11-02 00:40:00 99.0 NaN
1900-11-02 00:50:00 99.0 NaN
1900-11-03 00:00:00 NaN playing
1900-11-03 00:05:00 10.0 NaN
1900-11-03 00:24:00 10.0 NaN
</code></pre>
<p>I'm not sure if the entire operation can be done in one go but the following should work</p>
<pre><code>df = data.resample('T').first()
df['id'] = df['id'].ffill()
df['action'] = df['action'].fillna('playing')
</code></pre>
<p>giving you</p>
<pre><code> id action
date
1900-11-01 00:00:00 10.0 starts_game
1900-11-01 00:01:00 10.0 playing
1900-11-01 00:02:00 10.0 playing
1900-11-01 00:03:00 10.0 playing
1900-11-01 00:04:00 10.0 playing
1900-11-01 00:05:00 10.0 team_a_scores
1900-11-01 00:06:00 10.0 playing
1900-11-01 00:07:00 10.0 playing
</code></pre>
<hr>
<p><strong>UPDATE</strong></p>
<p>Instead of <code>resample</code> you can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.asfreq.html" rel="nofollow noreferrer"><code>asfreq</code></a> which returns a plain DataFrame and behaves the way you expect it to</p>
<pre><code>>>> data.asfreq('T').agg({'id': 'ffill', 'action': lambda _: 'playing'})
id action
date
1900-11-01 00:00:00 10.0 playing
1900-11-01 00:01:00 10.0 playing
1900-11-01 00:02:00 10.0 playing
1900-11-01 00:03:00 10.0 playing
1900-11-01 00:04:00 10.0 playing
</code></pre>
<p>which would change the above solution to</p>
<pre><code>df = data.asfreq('T')
df['id'] = df['id'].ffill()
df['action'] = df['action'].fillna('playing')
</code></pre>
|
pandas
| 1 |
1,907,515 | 45,724,147 |
How to turn 1D array into symmetrical 3D array?
|
<p>I have a symmetrical 1D numpy array, for example, something like this:</p>
<pre><code>0 1 2 1 0
</code></pre>
<p>How could I turn this into a 3D array (kinda similar to a gaussian kernel), with the value 2 at the center?</p>
<p>As an example of what I mean (though the math is likely not right), in 2D this would be something like this (though I need it to be 3D):</p>
<pre><code>0 0 0 0 0
0 0.5 1 0.5 0
0 1 2 1 0
0 0.5 1 0.5 0
0 0 0 0 0
</code></pre>
|
<p>Acknowledging that this is <em>not</em> a Gaussian kernel, here's how you calculate <em>it</em>:</p>
<pre><code>import numpy as np

a = np.array([0, 1, 2, 1, 0])  # the symmetric 1D input from the question
center = a[a.size // 2]
(a[:, np.newaxis].repeat(a.size, axis=1) * a)\
    [:, :, np.newaxis].repeat(a.size, axis=2) * a \
    / center ** 2
</code></pre>
<p>(Not gonna paste the whole output here.)</p>
|
python|arrays|numpy|gaussian|gaussianblur
| 1 |
1,907,516 | 54,999,320 |
creating a loop for a method that will extract all the subgraph with a triangle relationship
|
<p>I am trying to make a for loop in Python that applies this method to every node in graph G and returns a set/list of subgraphs:</p>
<p>H = G.subgraph(nodes_in_triangle(G, n))</p>
<p>Thank you.</p>
|
<p>To find all triangles in the graph you can use the function <code>enumerate_all_cliques()</code>, which returns all cliques in the graph. You can filter out all triangles by the number of nodes in cliques.</p>
<pre><code>import networkx as nx
G = nx.house_x_graph()
%matplotlib inline # jupyter notebook
nx.draw(G, with_labels = True, node_color='pink', node_size=1000)
tri = filter(lambda x: len(x) == 3, nx.enumerate_all_cliques(G))
tri_subraphs = [G.subgraph(nodes) for nodes in tri]
for graph in tri_subraphs:
    print(graph.nodes())
</code></pre>
<p>Output:</p>
<pre><code>[0, 1, 2]
[0, 1, 3]
[0, 2, 3]
[1, 2, 3]
[2, 3, 4]
</code></pre>
<p><a href="https://i.stack.imgur.com/Cb84q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cb84q.png" alt="enter image description here"></a></p>
|
python|networking|graph|networkx|subgraph
| 0 |
1,907,517 | 73,724,740 |
How to scrape text file from children of children sublinks with Beautifulsoup?
|
<p>I am trying to scrape all the rap lyrics text documents and place them into an array from the following webpage (<a href="https://ohhla.com/all.html" rel="nofollow noreferrer">https://ohhla.com/all.html</a>).</p>
<p>I am having a challenge figuring out how to write a script that will go to the lowest sublink and pull the text into an array.</p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>import requests
from bs4 import BeautifulSoup

base_site = "https://ohhla.com/all.html"
response = requests.get(base_site)
soup = BeautifulSoup(response.text, "html.parser")  # needed before soup.select() below
relative_urls = [a['href'] for a in soup.select('a[href]')]
relative_urls = relative_urls[31:]

from urllib.parse import urljoin
full_urls = [urljoin(base_site, url) for url in relative_urls]
</code></pre>
<p>Each URL in full_urls has sublinks that contain the text file I want to pull.</p>
<p>Does anyone have ideas on how to best approach writing a list comprehension?</p>
|
<p>First you need to separate anonymous and famous artists; parsing for them is different. I will show the approach for the anonymous ones. But you will need to figure out how to store this information yourself. The easiest way seems to me to be a folder hierarchy.</p>
<pre><code>response = requests.get('https://ohhla.com/all.html')
soup = BeautifulSoup(response.text, 'lxml')
links = ['https://ohhla.com/' + link.get('href') for link in soup.find('pre').find_all('a', {'href': True})]
anonymous_links = [link for link in links if 'anonymous' in link]
famous_links = [link for link in links if 'anonymous' not in link]
</code></pre>
<p>Output for famous:</p>
<pre><code>['https://ohhla.com/YFA_natedogg.html', 'https://ohhla.com/YFA_2pac.html', 'https://ohhla.com/YFA_goodie.html', 'https://ohhla.com/YFA_liks.html', 'https://ohhla.com/YFA_asaprocky.html', 'https://ohhla.com/YFA_azealia.html', 'https://ohhla.com/YFA_eminem.html', 'https://ohhla.com/YFA_beastieboys.html', 'https://ohhla.com/YFA_otk.html', 'https://ohhla.com/YFA_kane.html', 'https://ohhla.com/YFA_goodie.html/', 'https://ohhla.com/YFA_bigkrit.html', 'https://ohhla.com/YFA_bigl.html', 'https://ohhla.com/YFA_littlebrother.html', 'https://ohhla.com/YFA_bigpun.html', 'https://ohhla.com/YFA_bigsean.html', 'https://ohhla.com/YFA_bizmark.html', 'https://ohhla.com/YFA_blackeyedpeas.html', 'https://ohhla.com/YFA_talib.html', 'https://ohhla.com/YFA_BOB.html', 'https://ohhla.com/YFA_icet.html', 'https://ohhla.com/YFA_krs.html', 'https://ohhla.com/YFA_brand.html', 'https://ohhla.com/YFA_cypress.html', 'https://ohhla.com/YFA_cypress.html#misc', 'https://ohhla.com/YFA_ugk.html', 'https://ohhla.com/YFA_getoboys.html', 'https://ohhla.com/YFA_busta.html', 'https://ohhla.com/YFA_camron.html', 'https://ohhla.com/YFA_can_i_bus.html', 'https://ohhla.com/YFA_cnn.html', 'https://ohhla.com/YFA_goodie.html', 'https://ohhla.com/YFA_j5.html', 'https://ohhla.com/YFA_chiefkeef.html', 'https://ohhla.com/YFA_childishgambino.html', 'https://ohhla.com/YFA_koolkeith_two.html', 'https://ohhla.com/YFA_E40.html', 'https://ohhla.com/YFA_common.html', 'https://ohhla.com/YFA_coflow.html', 'https://ohhla.com/YFA_breeze.html', 'https://ohhla.com/YFA_coup.html', 'https://ohhla.com/anonymoys/cymphoni/', 'https://ohhla.com/YFA_cypress.html', 'https://ohhla.com/YFA_eminem.html', 'https://ohhla.com/YFA_dabrat.html', 'https://ohhla.com/YFA_danjamowf.html', 'https://ohhla.com/YFA_danny.html', 'https://ohhla.com/YFA_dpg.html', 'https://ohhla.com/YFA_dls.html', 'https://ohhla.com/YFA_devinthedude.html', 'https://ohhla.com/YFA_camron.html', 'https://ohhla.com/YFA_ludacris.html', 'https://ohhla.com/YFA_willsmith.html', 'https://ohhla.com/YFA_djkhaled.html', 'https://ohhla.com/YFA_three6.html', 'https://ohhla.com/YFA_rundmc.html', 'https://ohhla.com/YFA_dmx.html', 'https://ohhla.com/YFA_dpg.html', 'https://ohhla.com/YFA_oddfuture.html', 'https://ohhla.com/YFA_mfdoom.html', 'https://ohhla.com/YFA_drake.html', 'https://ohhla.com/YFA_koolkeith_two.html', 'https://ohhla.com/YFA_drdre.html', 'https://ohhla.com/YFA_koolkeith_two.html', 'https://ohhla.com/YFA_df.html', 'https://ohhla.com/YFA_E40.html', 'https://ohhla.com/YFA_oddfuture.html', 'https://ohhla.com/YFA_coflow.html', 'https://ohhla.com/YFA_eminem.html', 'https://ohhla.com/YFA_epmd.html', 'https://ohhla.com/YFA_eve.html', 'https://ohhla.com/YFA_houseofpain.html']
</code></pre>
<p>Now you need function to get links to txt:</p>
<pre><code>def get_tracks(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'lxml')
    links = [url + link.get('href') for link in soup.find_all('a')][5:]
    artist_name = url[:-1].split('/')[-1]
    albums = {}
    for link in links:
        response = requests.get(link)
        soup = BeautifulSoup(response.text, 'lxml')
        album = link[:-1].split('/')[-1]
        tracks = [link + _link.get('href') for _link in soup.find_all('a')][5:]
        albums[album] = tracks
    return {artist_name: albums}
</code></pre>
<p>Now everything is ready to get all the albums/lyrics. I will display the first 100 artists as an example:</p>
<pre><code>artists = []
for artist in anonymous_links[:100]:
    artists.append(get_tracks(artist))
print(artists)
</code></pre>
<p>The output is too long to paste, so here is just an image:
<a href="https://i.stack.imgur.com/LjPlg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LjPlg.png" alt="enter image description here" /></a></p>
<p>Now that you have links to texts, you can get them using:</p>
<pre><code>BeautifulSoup(requests.get(url).text, 'lxml').find('pre').get_text()
</code></pre>
|
python|beautifulsoup
| 1 |
1,907,518 | 12,977,134 |
Using PUT to receive an xml file in webpy
|
<p>I am trying to receive an XML file through PUT in web.py, but it is not working. Can anyone explain what the issue is in the code below?</p>
<pre><code>import web
urls = (
    '/', 'index'
)

class index:
    def PUT(self):
        postdata = web.data().read()
        fout = open('/home/test/Desktop/e.xml','w')
        fout.write(postdata)
        fout.close()
        return "Hello, world!"

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()
</code></pre>
<p>I am getting this in the terminal:</p>
<pre><code>"HTTP/1.1 PUT /doc.xml" - 404 Not Found
</code></pre>
<p>I use curl to upload the XML:</p>
<pre><code>curl -o log.out -H "Content-type: text/xml; charset=utf-8" -T doc.xml "http://0.0.0.0:8760"
</code></pre>
|
<p>You are using the wrong <code>curl</code> option.</p>
<p>If you want your file content in the request body, you should use <code>-d</code> instead of <code>-T</code></p>
<pre><code>curl -o log.out -H "Content-type: text/xml; charset=utf-8" -d doc.xml "http://0.0.0.0:8760"
</code></pre>
<p>EDIT:</p>
<p>Anyway, this will transform your <code>curl</code> into a POST request. To keep it as PUT, use <code>-X PUT</code></p>
<pre><code>curl -X PUT -o log.out -H "Content-type: text/xml; charset=utf-8" -d doc.xml "http://0.0.0.0:8760"
</code></pre>
|
python|web.py
| 0 |
1,907,519 | 24,706,883 |
API to creating a new user in django
|
<p>I'm trying to implement an API which allows creating a new user in Django.
I'm also using django-rest-framework, if that helps.</p>
<p>I've tried the following with an admin user:</p>
<pre><code>curl -X POST -H 'Authorization: Token aa294c745214d18f392f5f96f2d2278921e11d74' -H 'Content-Type: application/json' -d '{"username":"dan"}' http://localhost:8000/api/users/
</code></pre>
<p>But I'm getting the response <code>{"detail": "Method 'POST' not allowed."}.</code></p>
<p>Is this the right approach?</p>
<p>I might go for OAuth later, but right now I'm just looking to implement a simple API which will allow registering new users on the fly from multiple devices. I'd like to add the user's token to the response when the user is created, so each device can store the generated token for use later.</p>
|
<p>Turns out this was the right approach, but I had the <code>User</code> viewset defined incorrectly:</p>
<pre><code>class UserViewSet(viewsets.ReadOnlyModelViewSet):
    model = User
    serializer_class = UserSerializer
</code></pre>
<p>with the correct definition being</p>
<pre><code>class UserViewSet(viewsets.ModelViewSet):
    model = User
    serializer_class = UserSerializer
</code></pre>
|
python|django|django-rest-framework
| 0 |
1,907,520 | 24,455,176 |
Pylint rule is unknown to sonar
|
<p>I am trying to use SonarQube with Eclipse and Python.
The quality profile was Sonar way, and it had only 11 rules to start with. So I added the Pylint rules and they are marked as activated. But when I run analysis on the project I don't get any more issues compared to before (when I used 11 rules). The console looks something like this:</p>
<pre><code>16:38:49.091 INFO - Sensor org.sonar.plugins.python.pylint.PylintSensor@1603ae07...
16:38:50.079 WARN - Pylint rule 'C' is unknown in Sonar
16:38:50.079 WARN - Pylint rule 'C' is unknown in Sonar
16:38:50.079 WARN - Pylint rule 'C' is unknown in Sonar
16:38:50.079 WARN - Pylint rule 'C' is unknown in Sonar
16:38:50.079 WARN - Pylint rule 'C' is unknown in Sonar
16:38:50.079 WARN - Pylint rule 'C' is unknown in Sonar
16:38:50.079 WARN - Pylint rule 'C' is unknown in Sonar
16:38:50.079 WARN - Pylint rule 'C' is unknown in Sonar
</code></pre>
|
<p>I'm getting this kind of warning for every Pylint rule that is disabled in the current quality profile, so it seems that this is a feature, not a bug (SonarQube 5.1.1 + Python plugin 1.6-SNAPSHOT).</p>
|
python|eclipse|sonarqube|pylint
| 2 |
1,907,521 | 38,095,423 |
Send and receive packet to and from nodes using pycore library in python script
|
<pre><code>#!/usr/bin/python
from core import pycore
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
from scapy.all import *
session = pycore.Session(persistent=True)
node1 = session.addobj(cls=pycore.nodes.CoreNode, name="node1")
node2 = session.addobj(cls=pycore.nodes.CoreNode, name="node2")
hub1 = session.addobj(cls=pycore.nodes.HubNode, name="hub")
node1.newnetif(hub1, ["10.0.0.1/24"])
node2.newnetif(hub1, ["10.0.0.2/24"])
packet = IP(src="10.0.0.1",dst="10.0.0.2")/ICMP()/"Hello World"
</code></pre>
<p>Here I have created two nodes, i.e. <code>node1</code> and <code>node2</code>, which are connected to a hub named <code>hub1</code>. <code>node2</code> is pingable from <code>node1</code>, but I want to send the packet (built in the last line of the code) from <code>node1</code> to <code>node2</code> and process that packet after receiving it at <code>node2</code>. Kindly help me out!</p>
|
<p>To send a file over network you can use <a href="http://man.openbsd.org/OpenBSD-current/man1/nc.1" rel="nofollow noreferrer">NetCat</a> on Linux. And in CORE, you can send shell commands to these nodes. Hence you can do something like the following.</p>
<pre><code>node1.shcmd("nc -l 4444 > download.txt")
node2.shcmd("nc -w3 10.0.0.1 4444 < upload.txt")
</code></pre>
<p>The file <em>upload.txt</em> is assumed to be present on the Virtual node, <em>node2</em>, and is copied over to Virtual node <em>node1</em> under the file name <em>download.txt</em></p>
<p>To create a file, you can again use <em>shcmd</em> to pass a shell command to create the file.</p>
<pre><code>node2.shcmd("dd if=/dev/urandom of=upload.txt bs=1024 count=100")
</code></pre>
<p>This will create a 102kB file on <em>node2</em>. Make sure to use this command before you use NetCat.</p>
|
python
| 1 |
1,907,522 | 39,990,844 |
gunicorn "configuration cannot be imported"
|
<p>I'm migrating a project that has been on Heroku to a DO droplet. Install went smoothly, and everything is working well when I <code>python manage.py runserver 0.0.0.0:8000</code>.</p>
<p>I'm now setting up gunicorn using these instructions:
<a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-14-04" rel="nofollow">https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-14-04</a></p>
<p>I activate the virtual environment, and then try <code>gunicorn --bind 0.0.0.0:3666 myproject.wsgi:application</code>. I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/var/www/myproject/venv/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 515, in spawn_worker
worker.init_process()
File "/var/www/myproject/venv/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 122, in init_process
self.load_wsgi()
File "/var/www/myproject/venv/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 130, in load_wsgi
self.wsgi = self.app.wsgi()
File "/var/www/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/var/www/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
return self.load_wsgiapp()
File "/var/www/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
return util.import_app(self.app_uri)
File "/var/www/myproject/venv/local/lib/python2.7/site-packages/gunicorn/util.py", line 357, in import_app
__import__(module)
File "/var/www/myproject/myproject/wsgi.py", line 6, in <module>
from configurations.wsgi import get_wsgi_application
File "/var/www/myproject/venv/local/lib/python2.7/site-packages/configurations/wsgi.py", line 3, in <module>
importer.install()
File "/var/www/myproject/venv/local/lib/python2.7/site-packages/configurations/importer.py", line 54, in install
importer = ConfigurationImporter(check_options=check_options)
File "/var/www/myproject/venv/local/lib/python2.7/site-packages/configurations/importer.py", line 73, in __init__
self.validate()
File "/var/www/myproject/venv/local/lib/python2.7/site-packages/configurations/importer.py", line 122, in validate
raise ImproperlyConfigured(self.error_msg.format(self.namevar))
ImproperlyConfigured: Configuration cannot be imported, environment variable DJANGO_CONFIGURATION is undefined.
</code></pre>
<p>My <code>wsgi.py</code> looks like this:</p>
<pre><code># -*- coding: utf-8 -*-
import os
from configurations.wsgi import get_wsgi_application
application = get_wsgi_application()
</code></pre>
<p>I didn't set up the project initially, so I'm not sure what is different, or where to look.</p>
|
<p>Use this link and set the DJANGO_CONFIGURATION environment variable to the name of the class you just created, e.g. in bash:</p>
<pre><code>export DJANGO_CONFIGURATION=Dev
</code></pre>
<p><a href="https://github.com/jazzband/django-configurations" rel="nofollow">Read further here.</a></p>
<p><a href="https://django-configurations.readthedocs.io/en/stable/" rel="nofollow">And/or here.</a></p>
|
python|django|gunicorn
| 1 |
1,907,523 | 52,194,266 |
Activate Ecc for Browsermob/Selenium
|
<p>I have the problem that the testing with Selenium and browsermob becomes very slow for certain websites. Here is my current code for setting up the server and proxy:</p>
<pre><code> server = Server(path_browsermob)
server.start()
proxy = server.create_proxy()
co = webdriver.ChromeOptions()
co.add_argument('--proxy-server={host}:{port}'.format(host='localhost', port=proxy.port))
driver = webdriver.Chrome(path_driver, chrome_options=co)
</code></pre>
<p>I already read that one way to speed up testing is to use EC certificates instead of RSA. However, how do I activate ECC with the code above?</p>
|
<p>I had this same question after learning about this "issue" with browsermob-proxy and SSL certs. </p>
<p>After digging around in the browsermob-proxy python library it looks as if any <a href="https://github.com/AutomatedTester/browsermob-proxy-py/blob/6e9a29326a8ca5d0cf970fb33fcd3e03501f938e/browsermobproxy/client.py#L31" rel="nofollow noreferrer">extra parameters</a> are passed into the URL when creating the proxy. </p>
<p>With that you should be able to pass any of the parameters outlined in <a href="https://github.com/lightbody/browsermob-proxy/blob/master/README.md#rest-api" rel="nofollow noreferrer">API documentation</a> into create_proxy().</p>
<p>Here's my code snippet (although I'm not sure how to query the proxy to see if it's actually set).</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from browsermobproxy import Server
#Create proxy server
bmp_server_opts = {"port": 8080}
bmp_server = Server("browsermob-proxy-2.1.4/bin/browsermob-proxy", options = bmp_server_opts)
bmp_server.start()
time.sleep(1)
proxy_server = bmp_server.create_proxy({"useEcc": True, "trustAllServers": True})
time.sleep(1)
selenium_proxy = proxy_server.selenium_proxy()
#Create Firefox options
firefox_opts = webdriver.FirefoxOptions()
firefox_profile = webdriver.FirefoxProfile()
firefox_opts.set_headless()
firefox_profile.set_proxy(selenium_proxy)
#Fire up a Firefox browser
firefox_browser = webdriver.Firefox(firefox_profile = firefox_profile, firefox_options = firefox_opts)
wait_load = WebDriverWait(firefox_browser, 10)
proxy_server.new_har("103398", options = {'captureHeaders': True, "captureContent": True})
</code></pre>
<p>There were still some issues even after setting useEcc to true, and I ended up adding trustAllServers, which ignores SSL checking altogether; I'm not sure this would be the right way to go if you need something close to a true user experience. In either case I still have fairly slow SSL/TLS connections.</p>
|
python|selenium|browsermob-proxy|browsermob
| 0 |
1,907,524 | 51,762,406 |
What is the Tensorflow loss equivalent of "Binary Cross Entropy"?
|
<p>I'm trying to rewrite a Keras graph into a Tensorflow graph, but wonder which loss function is the equivalent of "Binary Cross Entropy". Is it tf.nn.softmax_cross_entropy_with_logits_v2?</p>
<p>Thanks a lot!</p>
|
<p>No, the implementation of the <code>binary_crossentropy</code> with tensorflow backend is defined <a href="https://github.com/tensorflow/tensorflow/blob/r1.10/tensorflow/python/keras/backend.py" rel="nofollow noreferrer">here</a> as</p>
<pre class="lang-py prettyprint-override"><code>@tf_export('keras.backend.binary_crossentropy')
def binary_crossentropy(target, output, from_logits=False):
  """Binary crossentropy between an output tensor and a target tensor.
  Arguments:
      target: A tensor with the same shape as `output`.
      output: A tensor.
      from_logits: Whether `output` is expected to be a logits tensor.
          By default, we consider that `output`
          encodes a probability distribution.
  Returns:
      A tensor.
  """
  # Note: nn.sigmoid_cross_entropy_with_logits
  # expects logits, Keras expects probabilities.
  if not from_logits:
    # transform back to logits
    epsilon_ = _to_tensor(epsilon(), output.dtype.base_dtype)
    output = clip_ops.clip_by_value(output, epsilon_, 1 - epsilon_)
    output = math_ops.log(output / (1 - output))
  return nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output)
</code></pre>
<p>Therefore, it uses <code>sigmoid_crossentropy</code> and not <code>softmax_crossentropy</code>.</p>
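<p>So the closest raw TensorFlow call is a sketch like the following, assuming <code>labels</code> are 0/1 targets and <code>logits</code> are the raw (pre-sigmoid) model outputs:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_mean(loss)  # reduce to a scalar, roughly matching Keras' mean reduction
</code></pre>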
|
tensorflow|keras|loss-function
| 4 |
1,907,525 | 51,918,012 |
Get print() realtime output with subprocess
|
<p>I want to execute a Python file from another Python file and show all <code>print()</code> outputs and error outputs without waiting (realtime).</p>
<p>The simplified version of my code is as follows and I would like to show "start" and an error message without waiting for "end" (the end of the script).</p>
<pre><code>def main():
# Function that takes a long time (in my actual code)
x += 1 # this raises an error
if __name__ == "main":
print("start")
main()
print("end")
</code></pre>
<p>I also have <code>run.py</code>:</p>
<pre><code>import subprocess
def run():
subprocess.run(["python", "main.py"])
if __name__ == '__main__':
run()
</code></pre>
<p>I tried <a href="https://www.endpoint.com/blog/2015/01/28/getting-realtime-output-using-python" rel="nofollow noreferrer">this blog post</a> and several other similar answers on stackoverflow, but none of them worked, so I decided to put my original code here, which is above.</p>
|
<p>Is this line a mistake?</p>
<pre><code>if __name__ == "main":
</code></pre>
<p>The value is <code>__main__</code> (set by the interpreter), not <code>main</code>. Because of this typo it is possible that no code from the main script runs at all. Try executing the main script directly from the command shell first.</p>
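<p>Once that is fixed, if the goal is to stream the child's output as it is produced rather than after it finishes, a minimal sketch using <code>subprocess.Popen</code> (the file names simply mirror the question's):</p>
<pre><code>import subprocess

def run():
    # "-u" disables the child's output buffering so prints arrive immediately;
    # stderr is merged into stdout so errors appear in order as well
    proc = subprocess.Popen(
        ["python", "-u", "main.py"],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        universal_newlines=True,
    )
    for line in proc.stdout:  # yields lines as soon as the child emits them
        print(line, end="")
    proc.wait()

if __name__ == "__main__":
    run()
</code></pre>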
|
python|subprocess
| 1 |
1,907,526 | 13,695,229 |
Django calling function in URL for views
|
<p>I am confused with these two different syntaxes of using views in the URL. For generic views we use this</p>
<pre><code>views.myview.as_view()
</code></pre>
<p>But if i need to use my own custom function for view then i need to use</p>
<pre><code>views.myview().myfunction
</code></pre>
<p>Why is there a difference between the two?</p>
<p>Why doesn't <code>views.myview.myfunction</code> work?</p>
|
<p>Views can be written as either classes or functions. If you're not worried about re-using code, then functions are probably easier. Have a look at the <a href="https://docs.djangoproject.com/en/1.4/topics/http/views/" rel="nofollow">docs for writing views</a>. Then maybe have a quick look at the <a href="https://docs.djangoproject.com/en/1.4/topics/class-based-views/" rel="nofollow">docs for class based views</a>. Lastly check the docs for the <a href="https://docs.djangoproject.com/en/1.4/topics/http/urls/" rel="nofollow">URL dispatcher</a>.</p>
<p>View functions are written like this -</p>
<pre><code>def my_view(request, *args, **kwargs):
...
return HttpResponse()
</code></pre>
<p>A view function is called by passing the function into urlpatterns as follows -</p>
<pre><code>from django.conf.urls import patterns
from views import my_view
urlpatterns = patterns('',
(r'^my_page/$', my_view)
)
</code></pre>
<p>Class based views allow you to reuse functionality through inheritance. </p>
<pre><code>from django.views.generic import DetailView
class MySpecialDetailView(DetailView):
...
# add functionality here
</code></pre>
<p>The problem is that the url setup is expecting a function, not a class. That's where the <code>as_view()</code> function comes in. Class based views are called in the url conf as -</p>
<pre><code>from django.conf.urls import patterns
from views import MySpecialDetailView
urlpatterns = patterns('',
(r'^my_special_page/$', MySpecialDetailView.as_view())
)
</code></pre>
<p>Apologies if I've mis-understood your question</p>
|
python|django
| 2 |
1,907,527 | 22,119,244 |
pygame.key.get_pressed() - doesn't work - pygame.error: video system not initialized
|
<p>I have two problems with my program:</p>
<ol>
<li>When I close my program it has error: <code>keys = pygame.key.get_pressed() pygame.error: video system not initialized</code></li>
<li>Square moves while I'm pressing 'd' and when I press something (or move mouse)</li>
</ol>
<p>Important part of code is:</p>
<pre><code>import pygame
from pygame.locals import*
pygame.init()
screen = pygame.display.set_mode((1200, 700))
ticket1 = True
# ...
c = 550
d = 100
# ...
color2 = (250, 20, 20)
while ticket1 == True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
ticket1 = False
pygame.quit()
pygame.display.quit()
keys = pygame.key.get_pressed()
if keys[pygame.K_d]:
c += 1
# ...
screen.fill((255, 250, 245))
pygame.draw.rect(screen, color2, pygame.Rect(c, d, 50, 75))
pygame.display.flip()
</code></pre>
<p>If I put <code>keys = pygame.key.get_pressed()</code> directly in the while loop it doesn't raise the error, but it seems slower.</p>
<p>I also have another error: <code>pygame.error: display Surface quit</code>, but I always get it in all my pygame programs and it isn't so important; the other things are.</p>
|
<p>1.--------------</p>
<p>After <code>pygame.quit()</code> you don't need <code>pygame.display.quit()</code>, you need <code>sys.exit()</code>.
<code>pygame.quit()</code> doesn't exit the program, so the program still tries to call <code>screen.fill()</code> and the other functions below <code>pygame.quit()</code>.</p>
<p>Or you can put <code>pygame.quit()</code> outside <code>while ticket1 == True:</code> (and then you don't need <code>sys.exit()</code>).</p>
<p>You can use <code>while ticket1:</code> in place of <code>while ticket1 == True:</code> - it is more pythonic.</p>
<pre><code>while ticket1: # it is more pythonic
for event in pygame.event.get():
if event.type == pygame.QUIT:
ticket1 = False
keys = pygame.key.get_pressed()
if keys[pygame.K_d]:
c += 1
# ...
screen.fill((255, 250, 245))
pygame.draw.rect(screen, color2, pygame.Rect(c, d, 50, 75))
pygame.display.flip()
pygame.quit()
</code></pre>
<p>2.--------------</p>
<p><code>if keys[pygame.K_d]: c += 1</code> is inside the <code>for event</code> loop, so it is executed only when an event happens - when the mouse moves, or when a key is pressed or released. Move it outside of the <code>for event</code> loop.</p>
<pre><code>while ticket1: # it is more pythonic
for event in pygame.event.get():
if event.type == pygame.QUIT:
ticket1 = False
keys = pygame.key.get_pressed()
# outside of `for event` loop
if keys[pygame.K_d]:
c += 1
# ...
screen.fill((255, 250, 245))
pygame.draw.rect(screen, color2, pygame.Rect(c, d, 50, 75))
pygame.display.flip()
pygame.quit()
</code></pre>
<p>Some people do it without <code>get_pressed()</code></p>
<pre><code># clock = pygame.time.Clock()
move_x = 0
while ticket1 == True:
# events
for event in pygame.event.get():
if event.type == pygame.QUIT:
ticket1 = False
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
ticket1 = False
elif event.key == pygame.K_d:
move_x = 1
elif event.type == pygame.KEYUP:
if event.key == pygame.K_d:
move_x = 0
# variable modification
c += move_x
# ...
# draws
screen.fill((255, 250, 245))
pygame.draw.rect(screen, color2, pygame.Rect(c, d, 50, 75))
pygame.display.flip()
# 60 FPS (Frame Per Second) to make CPU cooler
# clock.tick(60)
pygame.quit()
</code></pre>
<hr>
<p>BTW: use pygame.time.Clock() to get the same FPS on fast and slow computers. Without an FPS limit the program refreshes the screen thousands of times per second, so the CPU is busy and hot. </p>
<p>If you limit the FPS you have to add a bigger value to <code>c</code> to get the same speed as before.</p>
|
python|python-3.x|error-handling|key|pygame
| 3 |
1,907,528 | 58,149,378 |
Simple Python question with dictionaries and lists
|
<p>The goal of the function is to make a grade adjustment based off of a dictionary and list. For instance </p>
<pre><code>def adjust_grades(roster, grade_adjustment)
adjust_grades({'ann': 75, 'bob': 80}, [5, -5])
</code></pre>
<p>will return </p>
<pre><code>{'ann': 80, 'bob': 75}
</code></pre>
<p>I just need a nudge in the right direction. I'm new to Python, so I thought of using a nested for loop over each num in grade_adjustment, but it's not the right way. </p>
|
<p>Assuming Python 3.7+ (where dicts preserve insertion order) or an otherwise ordered dict, and that the dict and the adjustment list have equal length:</p>
<pre><code>def adjust_grades(roster, grade_adjustment):
return {key:value + adjustment for (key, value), adjustment in
zip(roster.items(), grade_adjustment)}
print(adjust_grades({'ann': 75, 'bob': 80}, [5, -5]))
</code></pre>
|
python|python-3.x
| 1 |
1,907,529 | 43,859,750 |
How to connect broken lines in a binary image using Python/Opencv
|
<p>How can I make these lines connect at the target points? The image is a result of a skeletonization process.</p>
<p><img src="https://i.stack.imgur.com/Jpu2c.jpg" alt="Resulting Image From Skeletonization"></p>
<p><img src="https://i.stack.imgur.com/2iSVu.jpg" alt="Points That I Need To Connect"></p>
<p>I'm trying to segment each line as a region using Watershed Transform.</p>
|
<p><a href="https://stackoverflow.com/a/43862917/1714410">MikeE</a>'s answer is quite good: using dilation and erosion morphological operations can help a lot in this context.<br>
I want to suggest a little improvement, taking advantage of the specific structure of the image at hand. Instead of using dilation/erosion with a general kernel, I suggest using a <strong>horizontal kernel</strong> that will connect the endpoints of the horizontal lines, but will not connect adjacent lines to one another.</p>
<p>Here's a sketch of code (assuming the input image is stored in <code>bw</code> numpy 2D array):</p>
<pre><code>import cv2, numpy as np
kernel = np.ones((1,20), np.uint8) # note this is a horizontal kernel
d_im = cv2.dilate(bw, kernel, iterations=1)
e_im = cv2.erode(d_im, kernel, iterations=1)
</code></pre>
<p>What you get is the dilated image:<br>
<a href="https://i.stack.imgur.com/KRdy4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KRdy4.png" alt="enter image description here"></a></p>
<p>Note how the gaps are closed, while maintaining the distinct horizontal lines</p>
<p>And the eroded image:<br>
<a href="https://i.stack.imgur.com/UmUne.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UmUne.png" alt="enter image description here"></a></p>
<p>To remove artifacts created by dilate/erode, I suggest to extract the skeleton again.<br>
If you further apply skeleton morphological operation to the eroded image you can get this result:<br>
<a href="https://i.stack.imgur.com/RpyAc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RpyAc.png" alt="enter image description here"></a></p>
<p>Once you have the curves connected you do not need to use watershed segmentation, but rather use connected components to label each curve.</p>
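<p>For that last step, a minimal sketch of labelling each curve with connected components, assuming the final skeleton is stored in a binary <code>uint8</code> array called <code>skel</code>:</p>
<pre><code>import cv2, numpy as np

# skel: binary uint8 image (0 = background, 255 = curve pixels)
num_labels, labels = cv2.connectedComponents(skel)

# labels is an int32 image where each curve has its own id (0 is the background)
for i in range(1, num_labels):
    curve_mask = np.uint8(labels == i) * 255
    # process each curve separately here
</code></pre>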
|
python|opencv|image-processing|image-segmentation|edge-detection
| 21 |
1,907,530 | 54,594,056 |
Python, Float Recording Text File, Loading Floats and average them
|
<p>I have biological float values and want to save them in a text file, then load them back and compute their average. The floats look like this:</p>
<pre><code>0.12
0.23
0.30
0.21
..
..
..
</code></pre>
<p>These will be saved in a text file.</p>
<p>The average of the floats will be shown in a label.</p>
|
<p>In order to save the floats to a text file, you need to convert them to a string. We convert the list of floats to a list of strings, join them with a space character (which acts as the separator) and then save the file.
In order to read the text file back into a new list of floats we do the same operations in reverse.</p>
<p>About the label, I don't know what GUI framework you are using.</p>
<p>code:</p>
<pre><code>list_of_floats=[0.12, 0.23, 0.30, 0.21]
def save(path,l):
with open(path,'w') as file:
file.write(' '.join(map(str,l)))
def load(path):
with open(path,'r') as file:
return list(map(float,file.read().split()))
save('file.txt',list_of_floats)
new_list=load('file.txt')
print(sum(new_list)/len(new_list))
</code></pre>
|
python|floating
| 1 |
1,907,531 | 54,645,359 |
removing sequences from the data Pandas Python Numpy
|
<p>I have tried the following: </p>
<pre><code>>>> import pandas as pd
>>> import numpy as np
>>> df = pd.read_csv("training.csv")
>>> data_raw = df.values
>>> data = []
>>> seq_len = 5
>>> for index in range(len(data_raw) - seq_len):
... data.append(data_raw[index: index + seq_len])
...
>>> len(data)
1994
>>> len(data_raw)
1999
>>> del data[0]
</code></pre>
<p>The data is available here: <a href="https://gist.github.com/JafferWilson/3ab8ee88f3fc32e78579a1054aac757d" rel="nofollow noreferrer">training.csv</a><br>
I have seen that <code>del</code> removes the first element from the list and rearranges the values, so what was at position 1 is now at position 0, and so on.<br>
I want to remove the values at indices: <code>0,4,5,9,10,14,</code> and so on.<br>
But this is not possible with the current <code>del</code> statement alone, as each deletion rearranges the values.<br>
Please help me find the missing part.</p>
|
<p>you can do it like this</p>
<p><strong>example code:</strong></p>
<pre><code>index = [0, 4, 5, 9, 10, 14]
# shift each index down by the number of elements deleted before it,
# so the positions stay valid while the list shrinks
for i, x in enumerate(index):
    index[i] -= i
print(index)
for i in index:
    del data[i]
</code></pre>
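<p>As a side note, a sketch that avoids the index-shifting bookkeeping entirely is to rebuild the list, keeping only the positions you want (assuming <code>data</code> is the list of windows from the question):</p>
<pre><code>drop = {0, 4, 5, 9, 10, 14}  # extend this pattern as far as you need
data = [window for i, window in enumerate(data) if i not in drop]
</code></pre>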
|
python|python-3.x|pandas|numpy|slice
| 2 |
1,907,532 | 71,357,851 |
Uploading data with psycopg2 and python
|
<p>With the next cmds I am trying to upload a csv file where columns are separated by tabs and sometimes null values can be assigned to a column.</p>
<pre><code>conn = psycopg2.connect(host="localhost",
port="5432",
user="postgres",
password="somepwd",
database="mydb",
options="-c search_path=dbo")
</code></pre>
<p>...</p>
<pre><code>cur = conn.cursor()
with open(opath, "r") as opath_file:
next(opath_file) # skip the header row
cur.copy_from(opath_file, table_name[3:], null='', columns=cols.split(','))
</code></pre>
<ul>
<li>cols has a string with the column names separated by ','</li>
<li>the table with name table_name[3:] belongs to the dbo schema</li>
</ul>
<p>This code runs, no error is reported but no data is uploaded. The owner of the db is postgres.</p>
<p>Any ideas?</p>
|
<p>Would you believe me if the problem was I needed to run</p>
<p>conn.commit()</p>
<p>after the cur.copy_from cmd?</p>
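<p>For completeness, a minimal sketch of the fixed upload, using the same objects as in the question:</p>
<pre><code>with open(opath, "r") as opath_file:
    next(opath_file)  # skip the header row
    cur.copy_from(opath_file, table_name[3:], null='', columns=cols.split(','))
conn.commit()  # without this the COPY stays in an open transaction and is never persisted
</code></pre>
<p>Alternatively, psycopg2 commits the transaction for you when the connection is used as a context manager (<code>with conn: ...</code>) and the block exits without an exception.</p>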
|
python|postgresql|psycopg2|psql
| 0 |
1,907,533 | 9,218,208 |
Propellerheads' NN-XT fileformat: Problems with REFE chunk
|
<p>I'm reading Propellerheads' NN-XT file-format, but I'm having problems with the <em>REFE</em> chunk. The NN-XT specifications says, the chunk is structured as follows:</p>
<hr>
<blockquote>
<p>There is a REFE chunk for every sample referenced by the NN-XT patch. (If a patch has no samples it does not have any REFE chunks.)</p>
</blockquote>
<ul>
<li>Chunk name</li>
<li>Chunk size</li>
<li>Version</li>
<li>Relative Path to sample</li>
<li>Database Path to sample</li>
<li>Absolute Path to sample</li>
<li>Sample name</li>
<li>ReFill name</li>
<li>ReFill URL</li>
<li>Reserved (Checkpoint)</li>
<li>Package Name</li>
</ul>
<blockquote>
<p>The Refill Name is the name of the ReFill as it appears in the Reason Browser, not the file name.
A Package is a ReFill, a REX file or a SoundFont file. If the REX or SoundFont file is inside a ReFill, the Package Name should contain the name of the REX or SoundFont file.</p>
</blockquote>
<hr>
<p>Reading this chunk <em>does run fine</em>, but after the <em>Package name</em> has been read, there are still bytes that belong to this chunk (I know this because this chunk appears multiple times and the next "REFE" is about 378 bytes away from the current position in the file [in this particular case of course]).</p>
<p>The documentation does not say anything about bytes that eventually follow, etc. Do you have an Idea of what theese additional bytes may be?</p>
<p>I'm processing the NN-XT file using Python. The following is an example output of the above mentioned chunk-structure.</p>
<pre><code>size: 832
version: NNXTVersion(1, 3, 0)
relative path: NNXTRelativePath(NNXTVersion(1, 1, 0), False)
database path: NNXTDatabasePath(NNXTVersion(1, 2, 0), True, 'Reason Factory Sound Bank')
absolute path: NNXTAbsolutePath(NNXTVersion(1, 4, 0), True, 11, NNXTVolume(, 15), True)
sample name: PianoC23.wav
ReFill name: Reason Factory Sound Bank
ReFill Url: www.propellerheads.se
Project name: Reason Factory Sound Bank
</code></pre>
<p>The bytes that I cannot process are as follows (Escaped, max-line width: 80):</p>
<pre><code> \x00\x00\x00\x0cPianoC23.wav\xbc\x01\x05\x00\x00\x00\xbc\x01\x05\x00\x00\x01\x00\x00\x00\x19Reason Factory Sound Bank\x00\x00\x00
\x06\x00\x00\x00\x19Reason Factory Sound Bank\x00\x00\x00\x15NN-XT Sampler Patches\x00\x00\x00\x05Piano\x00\x00\x00\rPiano samples
\x00\x00\x00\nGrandPiano\x00\x00\x00\x0cPianoC23.wav\x01\xbc\x01\t\x00\x00\x01\x00\x00\x00\x00\x0f\x00\x00\x00\x0b\x00\x00\x00\x08
Computer\x00\x00\x00\x0cwindows (C:)\x00\x00\x00\x0fProgramme (x86)\x00\x00\x00\rPropellerhead\x00\x00\x00\x06Reason\x00\x00\x00\x
16Factory Sound Bank.rfl\x00\x00\x00\x15NN-XT Sampler Patches\x00\x00\x00\x05Piano\x00\x00\x00\rPiano samples\x00\x00\x00\nGrandPi
ano\x00\x00\x00\x0cPianoC23.wav\x01G\x00\x00\x00\x00\x00
</code></pre>
<p>After this bytestream, the next <em>REFE</em> chunk begins.</p>
<p>I don't think I am allowed to distribute the specifications, but if you really are interested, you can download it after a <a href="http://www.propellerheads.se/developer/index.cfm?fuseaction=get_article&article=nnxtinfo" rel="nofollow">simple registration</a> at the Propellerheads' homepage.</p>
<p>Thank you very very much,
Niklas R</p>
<p><em>PS: If you don't have any idea, maybe you know about an email address developers can email to, I couldn't find one.</em></p>
|
<p>I've received an answer from the official developer support <em>(development@proppellerheads.se)</em>. They told me I was obviously using an outdated file-format specification, even though I downloaded it from their servers only recently. They told me they will fix this issue and make the newest documentation available.</p>
<p>In Reason 4, new data was added to this chunk. If the <em>Version</em> is equal to or later than <code>(1, 3, 0)</code>, 4 additional values follow; if not, the end of the chunk has been reached.</p>
<ul>
<li>Physical Sample name (String)</li>
<li>Relative Path to sample (Relative Path)</li>
<li>Relative Path to sample (Database Path)</li>
<li>Relative Path to sample (Absolute Path)</li>
</ul>
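<p>Judging from the byte dump in the question (for example <code>\x00\x00\x00\x0cPianoC23.wav</code>, where <code>0x0c</code> = 12 is the length of the name), the strings appear to be stored as a 4-byte big-endian length followed by that many bytes. A hypothetical helper for reading such a string; this is purely an assumption based on the dump, not something taken from the official spec:</p>
<pre><code>import struct

def read_string(f):
    # assumption: 4-byte big-endian length prefix, then the raw bytes
    (length,) = struct.unpack(">I", f.read(4))
    return f.read(length).decode("latin-1")

# hypothetical use for the Reason 4 additions, only if version >= (1, 3, 0):
# physical_sample_name = read_string(f)
# ... followed by the three relative/database/absolute path structures
</code></pre>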
|
python|c|file-format|chunks
| 0 |
1,907,534 | 52,674,120 |
How to access the id of a different slider widget?
|
<pre><code>Slider:
id:slider_id4
min: -90
max: 90
value: 0
step: 1
pos: root.width/2+0.3*root.width/2,0.90*root.height
size_hint:0.7,0.05
canvas:
PushMatrix
Rotate:
angle: slider_id4.value
origin: 30,65
Color:
rgb: [.5,1,.5]
Rectangle:
pos: 25+slider_id5.value,65
size: 10,25
PopMatrix
Slider:
id:slider_id5
min: 0
max: 50
value: 0
step: 1
pos: root.width/2+0.3*root.width/2,0.80*root.height
size_hint:0.7,0.05
canvas:
Color:
rgb: [.5,.5,.5]
Rectangle:
id:r1
pos: 0+slider_id5.value,30
size: 60,20
Color:
rgb: [1,.5,.5]
Ellipse:
pos: 15+slider_id5.value,35
angle_start: 270
angle_end: 450
size: 30,30
</code></pre>
<p>I am trying to access <code>slider_id5.value</code> in the slider widget canvas that has an <code>id: slider_id4</code> but I get an error saying :</p>
<blockquote>
<p>The name, slider_id5 is not defined</p>
</blockquote>
<p>I need to access the value of <code>slider_id5</code> in the <code>slider_id4</code> widget (under Rectangle). Any Suggestions?</p>
|
<p>When Kivy parses the kv file, <em>slider_id5</em> is not yet defined at the point where it is referenced in <em>slider_id4</em>.</p>
<h1>Solution</h1>
<p>Check <code>app.root</code> for <code>None</code> and </p>
<p>replace </p>
<p><code>25+slider_id5.value,65</code> </p>
<p>with </p>
<p><code>(25, 65) if app.root is None else (app.root.ids.slider_id5.value+25,65)</code></p>
<h1>Example</h1>
<h2>test.kv</h2>
<pre><code>#:kivy 1.11.0
<RootWidget>:
orientation: 'vertical'
Slider:
id: slider_id4
min: -90
max: 90
value: 0
step: 1
pos: root.width/2+0.3*root.width/2,0.90*root.height
size_hint:0.7, 0.05
canvas:
PushMatrix
Rotate:
angle: slider_id4.value
origin: 30,65
Color:
rgb: [.5,1,.5]
Rectangle:
pos: (25, 65) if app.root is None else (app.root.ids.slider_id5.value+25,65)
size: 10,25
PopMatrix
Slider:
id: slider_id5
min: 0
max: 50
value: 0
step: 1
pos: root.width/2+0.3*root.width/2,0.80*root.height
size_hint:0.7,0.05
canvas:
Color:
rgb: [.5,.5,.5]
Rectangle:
id:r1
pos: 0+slider_id5.value,30
size: 60,20
Color:
rgb: [1,.5,.5]
Ellipse:
pos: 15+slider_id5.value,35
angle_start: 270
angle_end: 450
size: 30,30
</code></pre>
<h2>main.py</h2>
<pre><code>from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
class RootWidget(BoxLayout):
pass
class TestApp(App):
def build(self):
return RootWidget()
if __name__ == "__main__":
TestApp().run()
</code></pre>
<h1>Output</h1>
<p><a href="https://i.stack.imgur.com/sQ2tE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sQ2tE.png" alt="Img01 - Moved Slider4"></a>
<a href="https://i.stack.imgur.com/bsQJ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bsQJ0.png" alt="Img02 - Moved Slider5"></a></p>
|
python|kivy
| 0 |
1,907,535 | 52,835,053 |
Changing arguments of function inside iter(function, sentinel)
|
<p>After watching Raymond Hettinger's lecture from PyCon, where he demonstrates a better way of writing a "do while" loop:</p>
<pre><code>blocks = []
while True:
block = f.read(32)
if block == '':
break
blocks.append(block)
</code></pre>
<p>is equal to:</p>
<pre><code>blocks = []
for block in iter(partial(f.read, 32), ''):
blocks.append(block)
</code></pre>
<p>I have the same structure in my code, but here the arguments of the function inside <code>iter</code> need to change on every iteration, and then it doesn't work "correctly".</p>
<pre><code>def get_data_from_user(user, type, token):
data = []
url = f'https://api.github.com/users/{user}/{type}?access_token={token}&page='
i = 1
while True:
a = get_json_from(url + str(i))
if not a:
break
data.extend(a)
i += 1
return data
i = 1
data = []
for piece in iter(partial(get_json_from, url+str(i)), False):
data.append(piece)
i += 1
</code></pre>
<p>Is there a way to make it work?</p>
|
<p>You can use <code>lambda</code> instead of <code>partial</code> to allow re-evaluation of the variables inside every time it's called:</p>
<pre><code>for piece in iter(lambda: get_json_from(url+str(i)), False):
</code></pre>
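<p>Putting it together, a minimal sketch of the paging loop, assuming (as in the question) that <code>get_json_from</code> returns something that compares equal to the sentinel <code>False</code> when the pages run out:</p>
<pre><code>i = 1
data = []
for piece in iter(lambda: get_json_from(url + str(i)), False):
    data.append(piece)
    i += 1  # the lambda re-reads i on the next call
</code></pre>
<p>Note that <code>iter(callable, sentinel)</code> stops only when the returned value compares equal to the sentinel, so if <code>get_json_from</code> returns an empty list at the end you would use <code>[]</code> as the sentinel instead of <code>False</code>.</p>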
|
python|python-3.x
| 1 |
1,907,536 | 47,896,881 |
Progressbar with Percentage Label?
|
<p>How can I put a label in the middle of a progressbar that shows the percentage?
The problem is that python doesn't support transparency for label backgrounds, so I don't know how I can solve that.</p>
|
<p>This is possible using a <code>ttk.Style</code>. The idea is to modify the layout of the <code>Horizontal.TProgressbar</code> style (do the same with <code>Vertical.TProgressbar</code> for a vertical progressbar) to add a label inside the bar:</p>
<p>Usual <code>Horizontal.TProgressbar</code> layout:</p>
<pre><code>[('Horizontal.Progressbar.trough',
{'children': [('Horizontal.Progressbar.pbar',
{'side': 'left', 'sticky': 'ns'})],
'sticky': 'nswe'})]
</code></pre>
<p>With an additional label:</p>
<pre><code>[('Horizontal.Progressbar.trough',
{'children': [('Horizontal.Progressbar.pbar',
{'side': 'left', 'sticky': 'ns'})],
'sticky': 'nswe'}),
('Horizontal.Progressbar.label', {'sticky': 'nswe'})]
</code></pre>
<p>Then, the text of the label can be changed with <code>style.configure</code>.</p>
<p>Here is the code:</p>
<pre><code>import tkinter as tk
from tkinter import ttk
root = tk.Tk()
style = ttk.Style(root)
# add label in the layout
style.layout('text.Horizontal.TProgressbar',
[('Horizontal.Progressbar.trough',
{'children': [('Horizontal.Progressbar.pbar',
{'side': 'left', 'sticky': 'ns'})],
'sticky': 'nswe'}),
('Horizontal.Progressbar.label', {'sticky': 'nswe'})])
# set initial text
style.configure('text.Horizontal.TProgressbar', text='0 %', anchor='center')
# create progressbar
variable = tk.DoubleVar(root)
pbar = ttk.Progressbar(root, style='text.Horizontal.TProgressbar', variable=variable)
pbar.pack()
def increment():
pbar.step() # increment progressbar
style.configure('text.Horizontal.TProgressbar',
text='{:g} %'.format(variable.get())) # update label
root.after(200, increment)
increment()
root.mainloop()
</code></pre>
<p><a href="https://i.stack.imgur.com/8Lul2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Lul2.png" alt="screenshot of the result" /></a></p>
<h3>Styling</h3>
<p>The font, color and position of the label can be changed using <code>style.configure</code>. For instance,</p>
<pre><code>style.configure('text.Horizontal.TProgressbar', foreground="red",
font='Arial 20', anchor='w')
</code></pre>
<p>gives <a href="https://i.stack.imgur.com/L1cVh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L1cVh.png" alt="screenshot of result" /></a></p>
<h3>Multiple progressbars</h3>
<p>The text is set through the style therefore to have multiple progressbars with different labels, one needs to use a different style for each. However, there is no need to set the layout for each style: create the layout <code>'text.Horizontal.TProgressbar'</code> like in the above code and then use substyles <code>'pb1.text.Horizontal.TProgressbar'</code>, <code>'pb2.text.Horizontal.TProgressbar'</code>, ... for each progressbar. Then the text of a single progressbar can be changed with</p>
<pre><code>style.configure('pb1.text.Horizontal.TProgressbar', text=...)
</code></pre>
|
python|tkinter|label|transparency
| 16 |
1,907,537 | 47,850,132 |
How to preserve the resolution when adding axis using matplotlib.pyplot?
|
<p>If the following code is run</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
a=np.random.random((1000,1000))
plt.imshow(a, cmap='Reds', interpolation='nearest')
plt.savefig('fig.png',bbox_inches='tight')
</code></pre>
<p>I got the picture below, with all the cells representing each random number.</p>
<p><a href="https://i.stack.imgur.com/G0Hjm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G0Hjm.png" alt="enter image description here"></a></p>
<p>However, when the axis is added as the code shown below:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
a=np.random.random((1000,1000))
plt.imshow(a, cmap='Reds', interpolation='nearest')
plt.xlim(0, 10)
plt.xticks(list(range(0, 10)))
plt.ylim(0, 10)
plt.yticks(list(range(0, 10)))
plt.savefig('fig3.png',bbox_inches='tight')
</code></pre>
<p>I got the picture with less resolution:</p>
<p><a href="https://i.stack.imgur.com/Ucsrt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ucsrt.png" alt="enter image description here"></a></p>
<p>So how can I add axis ticks without affecting the resolution? If this is related to the font size of axis markers, how to automatically adjust them so as to keep the original resolution?</p>
|
<p>Application to your problem:</p>
<pre><code>from matplotlib.ticker import FuncFormatter
from matplotlib.pyplot import show
import matplotlib.pyplot as plt
import numpy as np
a=np.random.random((1000,1000))
# create scaled formatters / for Y with Atom prefix
formatterY = FuncFormatter(lambda y, pos: 'Atom {0:g}'.format(y*0.01))
formatterX = FuncFormatter(lambda x, pos: '{0:g}'.format(x*0.01))
# apply formatters
fig, ax = plt.subplots()
ax.yaxis.set_major_formatter(formatterY)
ax.xaxis.set_major_formatter(formatterX)
plt.imshow(a, cmap='Reds', interpolation='nearest')
# create labels
plt.xlabel('nanometer')
plt.ylabel('measure')
plt.xticks(list(range(0, 1001,100)))
plt.yticks(list(range(0, 1001,100)))
plt.show()
</code></pre>
<h2><a href="https://i.stack.imgur.com/lhTGU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lhTGU.png" alt="Y with Atoms, X with scalen numbers, both with titles"></a></h2>
<p><strong>Sources:</strong></p>
<p>A possible solution is to format the ticklabels according to some function as seen in below example code from the matplotlib page.</p>
<blockquote>
<pre><code>from matplotlib.ticker import FuncFormatter
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(4)
money = [1.5e5, 2.5e6, 5.5e6, 2.0e7]
def millions(x, pos):
'The two args are the value and tick position'
return '$%1.1fM' % (x * 1e-6)
formatter = FuncFormatter(millions)
fig, ax = plt.subplots()
ax.yaxis.set_major_formatter(formatter)
plt.bar(x, money)
plt.xticks(x, ('Bill', 'Fred', 'Mary', 'Sue'))
plt.show()
</code></pre>
<p><a href="https://matplotlib.org/gallery/ticks_and_spines/custom_ticker1.html#sphx-glr-gallery-ticks-and-spines-custom-ticker1-py" rel="nofollow noreferrer">matplotlib.org Example</a></p>
</blockquote>
<hr>
<p>A similar solution is shown in <a href="https://stackoverflow.com/a/17816809/7505395">this answer</a>, where
you can set a function to label the axis for you and scale it down:</p>
<blockquote>
<pre><code>ticks = ticker.FuncFormatter(lambda x, pos: '{0:g}'.format(x*scale))
ax.xaxis.set_major_formatter(ticks)
</code></pre>
</blockquote>
<p>Here, you would need to do <code>/100</code> instead of <code>*scale</code></p>
<p>The easier way for yours would probably be:</p>
<blockquote>
<pre><code>locs, labels = plt.xticks()
plt.gca().set_xticklabels((locs/100).astype(int))
</code></pre>
<p>(adapted from <a href="https://stackoverflow.com/a/10171851/7505395">https://stackoverflow.com/a/10171851/7505395</a>)</p>
</blockquote>
|
python|matplotlib|resolution
| 1 |
1,907,538 | 37,591,645 |
Getting information of a panoram in google street view python
|
<p>I'm using the following url to get important information about a latitude-longitude coordinate point in google street view.</p>
<p><a href="http://maps.google.com/cbk?output=xml&ll=" rel="nofollow">http://maps.google.com/cbk?output=xml&ll=</a>....</p>
<p>Specifically, I need to know what the real coordinates in google street view are for a GIVEN pair of coordinates.</p>
<p>The Python API offers no way to access this information.</p>
<p>For example:</p>
<p><a href="http://maps.google.com/cbk?output=xml&ll=46.414382,10.013988" rel="nofollow">http://maps.google.com/cbk?output=xml&ll=46.414382,10.013988</a>
For (latitude, longitude) = (46.414382, 10.013988)</p>
<p>This is the only way I found to do it in Python.
My question is, is it legal to use this url to get this information?</p>
<p>Thank you so much</p>
|
<p>From the <a href="https://developers.google.com/maps/terms" rel="nofollow">Google Maps API TOS</a>: 10.4d</p>
<blockquote>
<p>No use of Content without a Google map. Unless the Maps APIs Documentation expressly permits you to do so, you will not use the Content in a Maps API Implementation without a corresponding Google map. For example, you may display Street View imagery without a corresponding Google map because the Maps APIs Documentation expressly permits this use.</p>
</blockquote>
<p>I'm not sure whether the endpoint you mention is explicitly documented and whether use without a map is allowed somewhere in the API docs; if not, I think 10.4d applies and you are not allowed to use it.</p>
|
python|google-street-view
| 0 |
1,907,539 | 34,143,233 |
PyQt MainWindow not showing widgets
|
<p>I am making a GUI with PyQt, and I am having issues with my MainWindow class. The window doesn't show widgets that I define in other classes, or it will show a small portion of the widgets in the top left corner, then cut off the rest of the widget.
Can someone please help me with this issue?</p>
<p>Here is some example code showing my issue.</p>
<pre><code>import sys
from PyQt4 import QtGui, QtCore
class MainWindow(QtGui.QMainWindow):
def __init__(self, parent=None):
super(MainWindow, self).__init__(parent=parent)
self.resize(300, 400)
self.centralWidget = QtGui.QWidget(self)
self.hbox = QtGui.QHBoxLayout(self.centralWidget)
self.setLayout(self.hbox)
names = ['button1', 'button2', 'button3']
testButtons = buttonFactory(names, parent=self)
self.hbox.addWidget(testButtons)
class buttonFactory(QtGui.QWidget):
def __init__(self, names, parent=None):
super(buttonFactory, self).__init__(parent=parent)
self.vbox = QtGui.QVBoxLayout()
self.setLayout(self.vbox)
for name in names:
btn = QtGui.QPushButton(name)
self.vbox.addWidget(btn)
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
gui = MainWindow()
gui.show()
app.exec_()
</code></pre>
|
<p>A QMainWindow has a central widget, a container to which you should add your widgets, and that container has its own layout. The layout of the QMainWindow itself is for toolbars and such. The central widget must be set with the <code>setCentralWidget</code> method; it isn't enough to just name an attribute <code>self.centralWidget</code>.</p>
<p>Use the following three lines instead.</p>
<pre><code>self.setCentralWidget(QtGui.QWidget(self))
self.hbox = QtGui.QHBoxLayout()
self.centralWidget().setLayout(self.hbox)
</code></pre>
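<p>For reference, a sketch of the corrected <code>__init__</code> with those changes applied, using the same widgets as in the question:</p>
<pre><code>class MainWindow(QtGui.QMainWindow):
    def __init__(self, parent=None):
        super(MainWindow, self).__init__(parent=parent)
        self.resize(300, 400)

        # give the main window a central widget and lay children out inside it
        self.setCentralWidget(QtGui.QWidget(self))
        self.hbox = QtGui.QHBoxLayout()
        self.centralWidget().setLayout(self.hbox)

        names = ['button1', 'button2', 'button3']
        testButtons = buttonFactory(names, parent=self)
        self.hbox.addWidget(testButtons)
</code></pre>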
|
python|qt|pyqt
| 5 |
1,907,540 | 34,415,191 |
while loop not working and wont print list-python
|
<p>So I feel pretty stupid, but I've been stuck on this task forever and I honestly don't know how to fix it or what's wrong with it. I've changed it so much in the process of trial and error that I think only errors are left, and I'm more confused now than when I started. I'm supposed to get a user's input and, as long as the input isn't equal to 'John', keep asking; if it is, the loop should stop and print out all the incorrect input. But I'M STUCK. Please be patient and help this idiot... I know I didn't define a list because I don't know how I should in this example. What do I do next?</p>
<pre><code>name='John'
your_name=''
while (your_name!= name):
your_name=(raw_input("Enter your name?"))
if your_name==name:
</code></pre>
|
<p>The <code>if</code> inside the loop is wrong: you should break out of the loop if the names are equal, and store the bad name in the <code>incorrectNames</code> list if not:</p>
<pre><code>incorrectNames = []
while your_name != name:
    your_name = raw_input("That's not the name I'm looking for, try again")
    if your_name == name:
        break
    else:
        incorrectNames.append(your_name)
    print "Incorrect names so far : " + ', '.join(incorrectNames)
</code></pre>
|
python|list
| 1 |
1,907,541 | 65,988,673 |
PyPDF2 extracts blank text
|
<p>I am trying to extract text with PyPDF2, but it's extracting blank text from the PDF. The PDF is textual and not image-based.
Is there any way to handle this PDF so that the text gets extracted? I don't want to change the library, because my whole code depends upon it; otherwise, I'd have to rewrite the entire 2000+ lines of code.
Find the pdf here: <a href="https://drive.google.com/file/d/1aoWtxNhOKwFw2xbBZgv3gzZPOvt5Ovhc/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1aoWtxNhOKwFw2xbBZgv3gzZPOvt5Ovhc/view?usp=sharing</a></p>
<pre><code>import PyPDF2
pdf_file = open('sample.pdf', 'rb')
read_pdf = PyPDF2.PdfFileReader(pdf_file)
number_of_pages = read_pdf.getNumPages()
page = read_pdf.getPage(0)
page_content = page.extractText()
</code></pre>
|
<p><code>extractText()</code> still has problems extracting the text properly. You can use another library called <code>slate</code>:</p>
<p>Install slate:</p>
<pre><code>pip install slate3k
</code></pre>
<p>extract text:</p>
<pre><code>import slate3k as slate

with open('G10.pdf', 'rb') as f:
    extracted_text = slate.PDF(f)
print(extracted_text)
</code></pre>
<p>you can go through this answer <a href="https://stackoverflow.com/questions/4203414/pypdf-unable-to-extract-text-from-some-pages-in-my-pdf">here</a> too.</p>
|
python|pdf|pypdf2
| 1 |
1,907,542 | 72,549,322 |
pip install does not work with python3 on windows
|
<p>I am getting acquainted with python development and it's been some time since I wrote python code. I am setting up my IDE (Pycharm) and Python3 binaries on Windows 10.</p>
<p>I want to start working with the pytorch library and I am used to before in python2 just typing pip install and it works fine.</p>
<p>Now it seems pip is not installed by default and has been replaced by something called conda?</p>
<p>What's the best way to install the pytorch package from the command line?</p>
<p>here is a screenshot <a href="https://imgur.com/a/AdMiI1j" rel="nofollow noreferrer">link</a></p>
|
<p><a href="https://pytorch.org/get-started/locally/" rel="nofollow noreferrer">PyTorch Documentation</a> does have a selector that will give you the commands for pip and conda.</p>
<p>However, Python3 should have pip installed, it may just be pip3 (that's what it was for me).</p>
|
python|python-3.x|pytorch
| 1 |
1,907,543 | 72,532,448 |
java.lang.OutOfMemoryError: Unable to acquire 36 bytes of memory, got 0
|
<p>I am executing my script with pyspark in local mode with the following command:</p>
<p><code>pyspark --executor-memory 16g --driver-memory 16g</code></p>
<p>In my pyspark code, on every iteration I join a new column onto an existing spark dataframe: at the start df is empty, then in the first iteration column A is added, in the second iteration column B, in the third iteration C, and so on. On the 27th iteration I get this memory error when pyspark tries to execute df.count().</p>
<p>Total number of columns to join are 52.</p>
<pre> (142 + 2) / 200]22/06/07 09:41:06 WARN TaskSetManager: Lost task 137.0 in stage 2618.0 (TID 18826, XXXX, executor 2): org.apache.spark.memory.SparkOutOfMemoryError: Unable to acquire 36 bytes of memory, got 0
at org.apache.spark.memory.MemoryConsumer.throwOom(MemoryConsumer.java:157)
at org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:119)
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.acquireNewPageIfNecessary(UnsafeExternalSorter.java:383)
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.insertRecord(UnsafeExternalSorter.java:407)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.insertRow(UnsafeExternalRowSorter.java:135)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage84.sort_addToSorter_0$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage84.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage138.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage138.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage139.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage139.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage140.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage140.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage141.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage141.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage142.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage142.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage143.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage143.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage144.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage144.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage145.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage145.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage146.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage146.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage147.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage147.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage148.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage148.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage149.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage149.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage150.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage150.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage151.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage151.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage152.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage152.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage153.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage153.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage154.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage154.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage155.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage155.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage156.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage156.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage157.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage157.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage158.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage158.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage159.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage159.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage160.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage160.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage161.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage161.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage162.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage162.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage163.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage163.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage164.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage164.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage165.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage165.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage166.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage166.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage167.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage167.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage168.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage168.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage169.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage169.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage170.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage170.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage171.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage171.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage172.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage172.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage173.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage173.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage174.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage174.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage175.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage175.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage176.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage176.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage177.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage177.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage178.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage178.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage179.findNextInnerJoinRows$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage179.agg_doAggregateWithoutKey_0$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage179.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$12$$anon$2.hasNext(WholeStageCodegenExec.scala:633)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)</pre>
|
<p>I was able to solve this problem by adding the following configuration to the PySpark script, setting the number of shuffle partitions to 1000.</p>
<pre><code>sqlContext.setConf("spark.sql.shuffle.partitions", "1000")
</code></pre>
|
python|apache-spark|pyspark
| 0 |
1,907,544 | 39,794,747 |
TypeError: cannot concatenate 'str' and 'NoneType' objects when placing the custom url in scrapy.Request()
|
<p>I get a url that cannot be used on its own to fetch data from the next page, so I created a <code>base_url = 'http://www.marinetraffic.com'</code> variable and prepended it before passing the result to the Scrapy request: <code>port_homepage_url = base_url + port_homepage_url</code>. It works fine when I yield the result like this: <code>yield {'a': port_homepage_url, 'b':item['port_name']}</code>
I get the result I wanted. </p>
<p><a href="http://www.marinetraffic.com/en/ais/index/ships/range/port_id:20585/port_name:FUJAIRAH%20ANCH,FUJAIRAH" rel="nofollow">http://www.marinetraffic.com/en/ais/index/ships/range/port_id:20585/port_name:FUJAIRAH%20ANCH,FUJAIRAH</a> ANCH</p>
<p>However, if I place it in the Scrapy request <code>yield scrapy.Request(port_homepage_url, callback=self.parse, meta={'item': item})</code> I get this error: </p>
<pre><code>port_homepage_url = base_url + port_homepage_url
TypeError: cannot concatenate 'str' and 'NoneType' objects
</code></pre>
<p>here is code</p>
<pre><code>class GetVessel(scrapy.Spider):
name = "getvessel"
allowed_domains = ["marinetraffic.com"]
start_urls = [
'http://www.marinetraffic.com/en/ais/index/ports/all/flag:AE',
]
def parse(self, response):
item = VesseltrackerItem()
base_url = 'http://www.marinetraffic.com'
for ports in response.xpath('//table/tr[position()>1]'):
item['port_name'] = ports.xpath('td[2]/a/text()').extract_first()
port_homepage_url = ports.xpath('td[7]/a/@href').extract_first()
port_homepage_url = base_url + port_homepage_url
yield scrapy.Request(port_homepage_url, callback=self.parse, meta={'item': item})
</code></pre>
|
<p>The problem does not happen on the initial start URL page, but happens later on when subsequent requests are processed. Take for example <a href="http://www.marinetraffic.com/en/ais/index/ships/range/port_id:1699/port_name:HAMRIYA" rel="nofollow">this page</a>. There are no links in the 7-th <code>td</code> element and, hence, <code>ports.xpath('td[7]/a/@href').extract_first()</code> returns <code>None</code> which results in a failure on the <code>port_homepage_url = base_url + port_homepage_url</code> line.</p>
<p>How to approach the problem depends on what you were planning to do on the "port" pages. From what I understand, you did not mean to actually handle the "port" page requests with <code>self.parse</code> and need to have a separate callback with different logic inside.</p>
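<p>A minimal sketch of that idea (the callback name <code>parse_port</code> and the choice to simply skip rows without a link are assumptions — adapt as needed): guard against the missing href and send the port pages to their own callback:</p>
<pre><code>    def parse(self, response):
        base_url = 'http://www.marinetraffic.com'
        for ports in response.xpath('//table/tr[position()>1]'):
            item = VesseltrackerItem()
            item['port_name'] = ports.xpath('td[2]/a/text()').extract_first()
            port_homepage_url = ports.xpath('td[7]/a/@href').extract_first()
            if port_homepage_url is None:
                continue  # no link in the 7th column, nothing to follow
            yield scrapy.Request(base_url + port_homepage_url,
                                 callback=self.parse_port,
                                 meta={'item': item})

    def parse_port(self, response):
        # handle the "port" pages with their own logic here
        ...
</code></pre>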
|
python|scrapy
| 2 |
1,907,545 | 16,121,170 |
K-Fold Cross Validation for Naive Bayes
|
<p>I am trying to do a k-fold validation for my naive bayes classifier using sklearn</p>
<pre><code>train = csv_io.read_data("../Data/train.csv")
target = np.array( [x[0] for x in train] )
train = np.array( [x[1:] for x in train] )
#In this case we'll use a random forest, but this could be any classifier
cfr = RandomForestClassifier(n_estimators=100)
#Simple K-Fold cross validation. 10 folds.
cv = cross_validation.KFold(len(train), k=10, indices=False)
#iterate through the training and test cross validation segments and
#run the classifier on each one, aggregating the results into a list
results = []
for traincv, testcv in cv:
probas = cfr.fit(train[traincv], target[traincv]).predict_proba(train[testcv])
results.append( myEvaluationFunc(target[testcv], [x[1] for x in probas]) )
#print out the mean of the cross-validated results
print "Results: " + str( np.array(results).mean() )
</code></pre>
<p>I found a code from this website, <a href="https://www.kaggle.com/wiki/GettingStartedWithPythonForDataScience/history/969" rel="nofollow">https://www.kaggle.com/wiki/GettingStartedWithPythonForDataScience/history/969</a>. In the example the classifier is RandomForestClassifier, I would like to use my own naive bayes classifer, but I not very sure what the fit method do at this line probas = cfr.fit(train[traincv], target[traincv]).predict_proba(train[testcv])</p>
|
<p>Seems like you just need to change cfr, for example:</p>
<pre><code>cfr = sklearn.naive_bayes.GaussianNB()
</code></pre>
<p>and it should work the same.</p>
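<p>As for what <code>fit</code> does on that line: it trains the classifier on the training fold, and <code>predict_proba</code> then returns the per-class probabilities for the held-out fold. A minimal sketch of the same loop with a naive Bayes classifier (import path as in current scikit-learn):</p>
<pre><code>from sklearn.naive_bayes import GaussianNB

cfr = GaussianNB()
results = []
for traincv, testcv in cv:
    cfr.fit(train[traincv], target[traincv])        # learn from the training fold
    probas = cfr.predict_proba(train[testcv])       # class probabilities for the test fold
    results.append(myEvaluationFunc(target[testcv], [x[1] for x in probas]))
</code></pre>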
|
python|cross-validation
| 0 |
1,907,546 | 16,318,551 |
Thrift python - TApplicationException: Invalid method name
|
<p>I have 2 services that are defined in the same thrift file and share a port. I can use any method from ServiceA with no problem, but whenever I try to call any of ServiceB's methods I get the exception.
This is my thrift file (service-a.thrift):</p>
<pre><code>service ServiceA extends common.CommonService {
list<i64> getByIds(1: list<i64> ids)
...
}
service ServiceB extends common.CommonService {
list<i64> getByIds(1: list<i64> ids)
...
}
</code></pre>
<p>notes:</p>
<ul>
<li>I'm working with a python client</li>
<li>Thrift version 0.8.0</li>
</ul>
<p>Any ideas?</p>
|
<p>We had this need as well and solved by writing a new implementation of TProcessor that creates a map of multiple processors. The only gotcha is that with this implementation you need to ensure no method names overlap - i.e. don't use nice generic names like Run() in different servers. Apologies on not converting C# to Python...</p>
<p>Example Class:</p>
<pre><code>using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Reflection;
using Thrift;
using Thrift.Protocol;
/// <summary>
/// Processor that allows for multiple services to run under one roof. Requires no method name conflicts across services.
/// </summary>
public class MultiplexProcessor : TProcessor {
public MultiplexProcessor(IEnumerable<TProcessor> processors) {
ProcessorMap = new Dictionary<string, Tuple<TProcessor, Delegate>>();
foreach (var processor in processors) {
var processMap = (IDictionary) processor.GetType().GetField("processMap_", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(processor);
foreach (string pmk in processMap.Keys) {
var imp = (Delegate) processMap[pmk];
try {
ProcessorMap.Add(pmk, new Tuple<TProcessor, Delegate>(processor, imp));
}
catch (ArgumentException) {
throw new ArgumentException(string.Format("Method already exists in process map: {0}", pmk));
}
}
}
}
protected readonly Dictionary<string, Tuple<TProcessor, Delegate>> ProcessorMap;
internal protected Dictionary<string, Tuple<TProcessor, Delegate>> GetProcessorMap() {
return new Dictionary<string, Tuple<TProcessor, Delegate>>(ProcessorMap);
}
public bool Process(TProtocol iprot, TProtocol oprot) {
try {
TMessage msg = iprot.ReadMessageBegin();
Tuple<TProcessor, Delegate> fn;
ProcessorMap.TryGetValue(msg.Name, out fn);
if (fn == null) {
TProtocolUtil.Skip(iprot, TType.Struct);
iprot.ReadMessageEnd();
var x = new TApplicationException(TApplicationException.ExceptionType.UnknownMethod, "Invalid method name: '" + msg.Name + "'");
oprot.WriteMessageBegin(new TMessage(msg.Name, TMessageType.Exception, msg.SeqID));
x.Write(oprot);
oprot.WriteMessageEnd();
oprot.Transport.Flush();
return true;
}
Console.WriteLine("Invoking service method {0}.{1}", fn.Item1, fn.Item2);
fn.Item2.Method.Invoke(fn.Item1, new object[] {msg.SeqID, iprot, oprot});
}
catch (IOException) {
return false;
}
return true;
}
}
</code></pre>
<p>Example Usage:</p>
<pre><code> Processor = new MultiplexProcessor(
new List<TProcessor> {
new ReportingService.Processor(new ReportingServer()),
new MetadataService.Processor(new MetadataServer()),
new OtherService.Processor(new OtherService())
}
);
Server = new TThreadPoolServer(Processor, Transport);
</code></pre>
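<p>A rough Python translation of the same idea (only a sketch — it assumes the generated Python processors keep their handlers in a <code>_processMap</code> dict mapping method name to <code>process_xxx(self, seqid, iprot, oprot)</code> functions, which is how the Python generator lays them out; check your generated code before relying on it):</p>
<pre><code>from thrift.Thrift import TApplicationException, TMessageType, TType

class MultiplexProcessor(object):
    """Dispatches each incoming call to the processor that owns that method name."""
    def __init__(self, processors):
        self.process_map = {}
        for processor in processors:
            for name, fn in processor._processMap.items():
                if name in self.process_map:
                    raise ValueError("Method already exists in process map: %s" % name)
                self.process_map[name] = (processor, fn)

    def process(self, iprot, oprot):
        name, mtype, seqid = iprot.readMessageBegin()
        entry = self.process_map.get(name)
        if entry is None:
            # unknown method: drain the message and report back to the client
            iprot.skip(TType.STRUCT)
            iprot.readMessageEnd()
            exc = TApplicationException(TApplicationException.UNKNOWN_METHOD,
                                        "Invalid method name: '%s'" % name)
            oprot.writeMessageBegin(name, TMessageType.EXCEPTION, seqid)
            exc.write(oprot)
            oprot.writeMessageEnd()
            oprot.trans.flush()
            return True
        processor, fn = entry
        fn(processor, seqid, iprot, oprot)
        return True
</code></pre>
<p>It would then be handed to the server exactly like a single generated processor.</p>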
|
python|thrift
| 1 |
1,907,547 | 32,083,358 |
How do I print all my results on one line with the spaces
|
<p>My code takes a line of text from the user and attempts to encode or decode the text using a function I have created. However, I am trying to print all the results on one line and also include the spaces that the user inputted, so that it clearly shows that each word has been encoded. It currently just prints all the results underneath one another and does not include the spaces that the user inputted. </p>
<pre><code>print ""
# To God be the Glory
text = raw_input("Please enter a line of text: ")
text = text.lower()
print ""
key = int(input("Please enter a key: "))
def ascii_func (text) :
for charc in text:
if charc in ['-', '+', '*', '/', '!' , "@"]:
print "Error input is not correct"
for charc in text:
if charc != " " :
charc = ord(charc)
charc = (charc - 97) + key
charc = (charc % 26)
charc = charc + 97
charc = chr(charc)
print charc
ascii_func(text)
</code></pre>
|
<p>Build a string instead of printing one character at a time, and let spaces pass through unchanged so the word breaks are kept:</p>
<pre><code>result = ''
for charc in text:
    if charc != " ":
        charc = ord(charc)
        charc = (charc - 97) + key
        charc = (charc % 26)
        charc = charc + 97
        charc = chr(charc)
    # shifted letters and untouched spaces both end up in the result
    result += charc
print result
</code></pre>
|
python
| 1 |
1,907,548 | 38,752,665 |
Python Tkinter GUI Calculator
|
<p>So I am currently in the process of making a GUI Calculator, but am unsure on how to write code that will perform the operations of the calculator. Right now I currently have setup the window, entry box, and calculator buttons, but none of them actually do anything at the moment. </p>
<p>I am just confused on how these buttons are represented in code and so I am not sure how to write a block of code that will be able to read in these button inputs and perform addition,subtraction, etc. </p>
<p>Here is my code so far </p>
<pre><code>class Calculator(Frame):
def __init__(self,master):
Frame.__init__(self,master)
self.grid()
self.dataEnt = Entry(self)
self.dataEnt.grid(row = 0, column = 1, columnspan = 4)
labels =[['AC','%','/'],
['7','8','9','*'],
['4','5','6','-'],
['1','2','3','+'],
['0','.','=']]
label = Button(self,relief = RAISED, padx = 10, text = labels[0][0]) #AC
label.grid(row = 1, column = 0, columnspan = 2)
label = Button(self,relief = RAISED, padx = 10, text = labels[0][1]) # %
label.grid(row = 1, column = 3)
label = Button(self,relief = RAISED, padx = 10, text = labels[0][2]) # /
label.grid(row = 1, column = 4)
for r in range(1,4):
for c in range(4):
#create label for row r and column c
label = Button(self,relief = RAISED,
padx = 10,
text = labels[r][c]) # 789* 456- 123+
# place label in row r and column c
label.grid(row = r+1, column = c+1)
label = Button(self,relief = RAISED, padx = 10, text = labels[4][0]) #0
label.grid(row = 5, column = 0, columnspan = 2)
label = Button(self,relief = RAISED, padx = 10, text = labels[4][1]) # .
label.grid(row = 5, column = 3)
label = Button(self,relief = RAISED, padx = 10, text = labels[4][2]) # =
label.grid(row = 5, column = 4)
def operations(self,num ):
def main():
root = Tk()
root.title('Calculator')
obj = Calculator(root)
root.mainloop()
</code></pre>
<p><a href="http://i.stack.imgur.com/Oq7sV.png" rel="nofollow">and here is what the calculator looks like so far</a></p>
<p>My guess is that I need to somehow be able to read the input as a string and then have python evaluate that string as a mathematical expression but I am not sure how to go about it. </p>
<p>Thanks for any help!</p>
|
<p>You can simply use pyautogui for a basic GUI calculator:</p>
<pre><code>import math
import pyautogui
print( '+ =add |- =subtraction |* =multiplication | / =division |w =remainder of division |i =percentage |s =product of a number by itself \n'
'|p =prime number |r =square root')
def p(num):
for i in range(2, int(num)):
if int(num) % i == 0:
return i
return False
while True:
try:
try:
#----------------------------------------------------------#
num=float(pyautogui.prompt('1st','pcalculator'))
op=pyautogui.prompt('sign','pcalculator')
if not op == 'p':
nun=float(pyautogui.prompt('2nd','pcalculator'))
if op=='+':
x=pyautogui.confirm(num+nun,'pcalculator ',buttons=['Ok','Cancel'])
if x=='Cancel':
break
elif op=='-':
x=pyautogui.confirm(num-nun,'pcalculator ',buttons=['Ok','Cancel'])
if x=='Cancel':
break
elif op=='*':
x=pyautogui.confirm(num*nun,'pcalculator ',buttons=['Ok','Cancel'])
if x=='Cancel':
break
elif op=='/':
x=pyautogui.confirm(num/nun,'pcalculator ',buttons=['Ok','Cancel'])
if x=='Cancel':
break
elif op=='w':
x=pyautogui.confirm(num%nun,'pcalculator ',buttons=['Ok','Cancel'])
if x=='Cancel':
break
elif op=='i':
x=pyautogui.confirm(num/nun*100,'pcalculator ',buttons=['Ok','Cancel'])
if x=='Cancel':
break
elif op=='s':
x=pyautogui.confirm(num**nun,'pcalculator ',buttons=['Ok','Cancel'])
if x=='Cancel':
break
elif op=='p':
if p(num):
x=pyautogui.confirm((int(num),'is a composite number'),'pcalculator',buttons=['Ok','Cancel'])
if x=='Cancel':
break
else:
x=pyautogui.confirm((int(num),'is a prime number'),'pcalculator ',buttons=['Ok','Cancel'])
if x=='Cancel':
break
elif op=='r':
x=pyautogui.confirm(math.sqrt(num),'pcalculator ',buttons=['Ok','Cancel'])
if x=='Cancel':
break
#-------------------------------------------------------------------#
except ZeroDivisionError:
x=pyautogui.confirm("Error: you can not divide by 0",'pcalculator ',buttons=['Ok','Cancel'])
if x=='Cancel':
break
except ValueError:
x=pyautogui.confirm('Invalid input,use only integers,decimals and symbols.','pcalculator ',buttons=['Ok','Cancel'])
if x=='Cancel':
break
</code></pre>
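<p>If you would rather stay with Tkinter (which is what the question uses), the usual pattern is to give every button a <code>command</code> callback that appends its character to the entry, and to evaluate the accumulated string when <code>=</code> is pressed. A minimal sketch of that wiring (Python 2 syntax to match the question; <code>eval</code> is tolerable here only because the input comes from the calculator's own buttons):</p>
<pre><code>from Tkinter import *

class SimpleCalculator(Frame):
    def __init__(self, master):
        Frame.__init__(self, master)
        self.grid()
        self.entry = Entry(self)
        self.entry.grid(row=0, column=0, columnspan=4)
        for r, row in enumerate(['789/', '456*', '123-', '0.=+']):
            for c, char in enumerate(row):
                # the default argument freezes the current char for each button's callback
                Button(self, text=char, padx=10,
                       command=lambda ch=char: self.press(ch)).grid(row=r + 1, column=c)

    def press(self, char):
        if char == '=':
            try:
                result = str(eval(self.entry.get()))
            except Exception:
                result = 'Error'
            self.entry.delete(0, END)
            self.entry.insert(0, result)
        else:
            self.entry.insert(END, char)

root = Tk()
SimpleCalculator(root)
root.mainloop()
</code></pre>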
|
python|user-interface|tkinter|calculator|operator-keyword
| 0 |
1,907,549 | 38,917,113 |
Join two lists into one dictionary
|
<p>These are my lists (both lists have the same length - 43 indices):</p>
<pre><code>list1 = [u'UMTS', u'UMTS', u'UMTS', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'LTE']
list2 = [u'60000', u'60000', u'60000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'120512', u'120512', u'120512', u'120512', u'120512', u'120512', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'300000']
</code></pre>
<p>I would like to join them into one dictionary:</p>
<pre><code>dictionary = {
'UMTS' : 'indices from 0 to 2'
'GSM' : 'indices from 3 to 42'
'LTE' : 'index 43'
}
</code></pre>
<p>Does anyone know how to do it? Is it possible at all? Thanks in advance !!!</p>
|
<p>You can use <code>collections.defaultdict()</code> and <code>zip()</code> function:</p>
<pre><code>>>> from collections import defaultdict
>>>
>>> d = defaultdict(list)
>>>
>>> for i, j in zip(list1, list2):
... d[i].append(j)
...
>>> d
defaultdict(<type 'list'>, {u'UMTS': [u'60000', u'60000', u'60000'], u'GSM': [u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'120512', u'120512', u'120512', u'120512', u'120512', u'120512', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629'], u'LTE': [u'300000']})
</code></pre>
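<p>If what you actually want is the index ranges from the desired output rather than the values, the same pattern works by appending the index instead of the value — a small sketch:</p>
<pre><code>>>> d = defaultdict(list)
>>> for i, key in enumerate(list1):
...     d[key].append(i)
...
>>> {k: (v[0], v[-1]) for k, v in d.items()}
{u'GSM': (3, 41), u'LTE': (42, 42), u'UMTS': (0, 2)}
</code></pre>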
|
python|python-2.7
| 2 |
1,907,550 | 40,596,231 |
How to turn off 'new command window spawning' in the newest spyder when running another program from python
|
<p>I spend a lot of time running programs from python using the <code>subprocess</code> module. One of my scripts uses the <code>check_call</code> command to run a program from the command line around 600 times. Today I updated to Spyder 3 and when I run this script I get a pop up which looks like this
<a href="https://i.stack.imgur.com/4dAx1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4dAx1.png" alt="enter image description here"></a></p>
<p>This stays for the duration of the program (a few seconds) then disappears but then another appears to replace it as my programs uses the <code>check_call</code> command again. This behavior is very disruptive as it means I can't just run a long program in the background on my machine whilst working on something else. Also this was never a problem on the old version of Spyder I had. Does anybody know how to turn this very annoying behavior off? </p>
|
<p>(<em>Spyder dev here</em>) If I'm not mistaken, now you need to pass the parameter <code>shell=True</code> to all <code>subprocess</code> commands you're using to avoid this problem.</p>
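<p>For example (the command string here is just a placeholder):</p>
<pre><code>import subprocess

# per the note above, passing shell=True avoids the extra console window in Spyder 3;
# with shell=True the command is given as a single string
subprocess.check_call('myprogram --input data.txt', shell=True)
</code></pre>
<p>Keep in mind that <code>shell=True</code> should only be used with command strings you build yourself, never with untrusted input.</p>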
|
python|subprocess|spyder
| 1 |
1,907,551 | 68,121,306 |
check if nested list is empty
|
<p>My values array looks like this:</p>
<pre><code>values = [array(0., dtype=float32), array(0., dtype=float32)]
</code></pre>
<p>How to check if the array is empty?</p>
<p>I tried following, but it does not work</p>
<pre><code>if not any(values):
print("Empty list!")
</code></pre>
|
<p>If we want to search for arrays containing exactly one zero and then print 'empty' we can do the following (<code>value.size</code> is used instead of <code>len(value)</code>, because <code>len()</code> raises an error on 0-d arrays like the ones shown in the question):</p>
<p>If it should return 'empty' only when both are empty you can do this:</p>
<pre><code>if not any([bool(value) if value.size == 1 else True for value in values]):
    print('empty')
</code></pre>
<p>If it should return 'empty' if one of the nested lists is empty:</p>
<pre><code>if not all([bool(value) if value.size == 1 else True for value in values]):
    print('empty')
</code></pre>
|
python|list
| 1 |
1,907,552 | 59,939,343 |
SQL query generates undefined charmap
|
<p>This is the code I'm running, a SELECT command in a firebird database. </p>
<p>I wish to write the contents of a certain table in a .txt file:</p>
<pre class="lang-py prettyprint-override"><code>import fdb
con = fdb.connect(dsn=defaultDSN, user=defaultUser, password=defaultPassword)
cur = con.cursor()
cur.execute("SELECT * FROM TableName")
#I'm aware this erases everything in the file, this is intended
file = open(file="FirebirdMirror.txt", mode="w+", encoding='utf-8', errors='ignore')
file.write('')
file = open(file="FirebirdMirror.txt", mode="a+", encoding='utf-8', errors='ignore')
for fieldDesc in cur.description:
file.write(fieldDesc[fdb.DESCRIPTION_NAME] + ', ')
file.write("\n")
for x in list(list(str(cur.fetchall()))):
for y in x:
file.write(str(y) + ', ')
file.write('\n')
file.close()
</code></pre>
<p>I have no clue as to why, but my cur.fetchall() returns something alien...</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "C:/Users/graciele.davince/PycharmProjects/helloworld/venv/firebirdSQL.py", line 205, in <module>
generate_text_file()
File "C:/Users/graciele.davince/PycharmProjects/helloworld/venv/firebirdSQL.py", line 166, in generate_text_file
for x in list(list(str(cur.fetchall()))):
File "C:\Users\graciele.davince\PycharmProjects\helloworld\venv\lib\site-packages\fdb\fbcore.py", line 3807, in fetchall
return [row for row in self]
File "C:\Users\graciele.davince\PycharmProjects\helloworld\venv\lib\site-packages\fdb\fbcore.py", line 3807, in <listcomp>
return [row for row in self]
File "C:\Users\graciele.davince\PycharmProjects\helloworld\venv\lib\site-packages\fdb\fbcore.py", line 3542, in next
row = self.fetchone()
File "C:\Users\graciele.davince\PycharmProjects\helloworld\venv\lib\site-packages\fdb\fbcore.py", line 3759, in fetchone
return self._ps._fetchone()
File "C:\Users\graciele.davince\PycharmProjects\helloworld\venv\lib\site-packages\fdb\fbcore.py", line 3412, in _fetchone
return self.__xsqlda2tuple(self._out_sqlda)
File "C:\Users\graciele.davince\PycharmProjects\helloworld\venv\lib\site-packages\fdb\fbcore.py", line 2843, in __xsqlda2tuple
value = b2u(value, self.__python_charset)
File "C:\Users\graciele.davince\PycharmProjects\helloworld\venv\lib\site-packages\fdb\fbcore.py", line 486, in b2u
return st.decode(charset)
File "C:\Users\graciele.davince\AppData\Local\Programs\Python\Python38-32\lib\encodings\cp1252.py", line 15, in decode
return codecs.charmap_decode(input,errors,decoding_table)
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 3234: character maps to <undefined>
</code></pre>
<p>My database might have certain characters from brazillian portuguese, namely:</p>
<p>ç, â, ê, ô, ã, õ, á, é, í, ó, ú, à, and their capitalized cousins.</p>
<p>From what I googled, it has something to do with how text is stored in form of bits, and the character represented by the 0x9d bit seems to be the issue.</p>
<p>I'm using errors='ignore' but the error still shows up, using encoding='utf-8' and I have also tried latin-1, ISOnumbersnumbers, windows1252 and some others, but to no avail.</p>
<ul>
<li><strong>Constraints:</strong> I can't use any commands that change the contents of my table, and I necessairily have to store everything inside it in a .txt file. </li>
</ul>
<p><strong>ps.: columns in the same row must be separated by commas, and each row must be separated by \n</strong></p>
<hr>
<p>edit.:</p>
<p>Mark's solution worked - but I'd like to add that it's a good idea to check what dialect your database is in.</p>
|
<p>The error occurs when communicating with Firebird, not when reading from your text file, so the <code>errors="ignore"</code> instruction isn't applied.</p>
<p>Probably you need to explicitly specify the connection character set (eg UTF8), so FDB doesn't use connection character set NONE, which applies the default encoding (Cp1252 in your case).</p>
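<p>For example, a sketch of the connect call (assuming the database text columns really are stored as UTF8):</p>
<pre><code>con = fdb.connect(dsn=defaultDSN, user=defaultUser, password=defaultPassword,
                  charset='UTF8')
</code></pre>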
|
python|sql|python-3.x|firebird|codepages
| 1 |
1,907,553 | 2,264,488 |
Permission problem of .egg of easy_install under windows7/vista
|
<p>I use easy_install to install python packages in a virtualenv under Windows 7. Due to UAC, I have to run the CMD as administrator to install packages. Here comes the problem: I notice that I can't import the package from a normal user account.</p>
<pre><code>>>> import tempita
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named tempita
</code></pre>
<p>But tempita-0.4-py2.6 is just right there in the site-package. Also, run python as administrator, import works correctly. That's the problem of permission. It's strange, I don't know why, but only .egg files are installed with restricted permissions setting. I find there is an article about this problem:</p>
<p><a href="http://catherinedevlin.blogspot.com/2009/09/easyinstall-no-longer-easy-on-vista.html" rel="nofollow noreferrer">easy_install no longer easy on Vista</a></p>
<p>It doesn't work to change the owner or permissions of the parent folder; the only solution I know is to modify the permissions of those egg files one by one. This is really annoying. Why does easy_install set such restricted permissions only on .egg files rather than .py files? And how can I solve this problem without shutting UAC off or running as a superuser?</p>
|
<p>I've started using <a href="http://pypi.python.org/pypi/distribute" rel="nofollow noreferrer">distribute</a> in lieu of setuptools, because the distribute team has been much more proactive in tracking down problems. Curiously, it appears as if distribute no longer creates zip eggs on my Windows 7 system, perhaps for the permissions issues you've encountered. Switching to distribute might be a solution for you, although I would understand if that seems like more of a hack than a fix.</p>
|
python|windows|virtualenv|easy-install
| 0 |
1,907,554 | 1,789,254 |
Clustering text in Python
|
<p>I need to cluster some text documents and have been researching various options. It looks like LingPipe can cluster plain text without prior conversion (to vector space etc), but it's the only tool I've seen that explicitly claims to work on strings.</p>
<p>Are there any Python tools that can cluster text directly? If not, what's the best way to handle this?</p>
|
<p>The quality of text-clustering depends mainly on two factors:</p>
<ol>
<li><p>Some notion of similarity between the documents you want to cluster. For example, it's easy to distinguish between newsarticles about sports and politics in vector space via tfidf-cosine-distance. It's a lot harder to cluster product-reviews in "good" or "bad" based on this measure.</p></li>
<li><p>The clustering method itself. You know how many cluster there'll be? Ok, use kmeans. You don't care about accuracy but want to show a nice tree-structure for navigation of search-results? Use some kind of hierarchical clustering.</p></li>
</ol>
<p>There is no text-clustering solution, that would work well under any circumstances. And therefore it's probably not enough to take some clustering software out of the box and throw your data at it.</p>
<p>Having said that, here's some experimental code i used some time ago to play around with text-clustering. The documents are represented as normalized tfidf-vectors and the similarity is measured as cosine distance. The clustering method itself is <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.85.3073&rep=rep1&type=pdf" rel="noreferrer">majorclust</a>.</p>
<pre><code>import sys
from math import log, sqrt
from itertools import combinations
def cosine_distance(a, b):
cos = 0.0
a_tfidf = a["tfidf"]
for token, tfidf in b["tfidf"].iteritems():
if token in a_tfidf:
cos += tfidf * a_tfidf[token]
return cos
def normalize(features):
norm = 1.0 / sqrt(sum(i**2 for i in features.itervalues()))
for k, v in features.iteritems():
features[k] = v * norm
return features
def add_tfidf_to(documents):
tokens = {}
for id, doc in enumerate(documents):
tf = {}
doc["tfidf"] = {}
doc_tokens = doc.get("tokens", [])
for token in doc_tokens:
tf[token] = tf.get(token, 0) + 1
num_tokens = len(doc_tokens)
if num_tokens > 0:
for token, freq in tf.iteritems():
tokens.setdefault(token, []).append((id, float(freq) / num_tokens))
doc_count = float(len(documents))
for token, docs in tokens.iteritems():
idf = log(doc_count / len(docs))
for id, tf in docs:
tfidf = tf * idf
if tfidf > 0:
documents[id]["tfidf"][token] = tfidf
for doc in documents:
doc["tfidf"] = normalize(doc["tfidf"])
def choose_cluster(node, cluster_lookup, edges):
new = cluster_lookup[node]
if node in edges:
seen, num_seen = {}, {}
for target, weight in edges.get(node, []):
seen[cluster_lookup[target]] = seen.get(
cluster_lookup[target], 0.0) + weight
for k, v in seen.iteritems():
num_seen.setdefault(v, []).append(k)
new = num_seen[max(num_seen)][0]
return new
def majorclust(graph):
cluster_lookup = dict((node, i) for i, node in enumerate(graph.nodes))
count = 0
movements = set()
finished = False
while not finished:
finished = True
for node in graph.nodes:
new = choose_cluster(node, cluster_lookup, graph.edges)
move = (node, cluster_lookup[node], new)
if new != cluster_lookup[node] and move not in movements:
movements.add(move)
cluster_lookup[node] = new
finished = False
clusters = {}
for k, v in cluster_lookup.iteritems():
clusters.setdefault(v, []).append(k)
return clusters.values()
def get_distance_graph(documents):
class Graph(object):
def __init__(self):
self.edges = {}
def add_edge(self, n1, n2, w):
self.edges.setdefault(n1, []).append((n2, w))
self.edges.setdefault(n2, []).append((n1, w))
graph = Graph()
doc_ids = range(len(documents))
graph.nodes = set(doc_ids)
for a, b in combinations(doc_ids, 2):
graph.add_edge(a, b, cosine_distance(documents[a], documents[b]))
return graph
def get_documents():
texts = [
"foo blub baz",
"foo bar baz",
"asdf bsdf csdf",
"foo bab blub",
"csdf hddf kjtz",
"123 456 890",
"321 890 456 foo",
"123 890 uiop",
]
return [{"text": text, "tokens": text.split()}
for i, text in enumerate(texts)]
def main(args):
documents = get_documents()
add_tfidf_to(documents)
dist_graph = get_distance_graph(documents)
for cluster in majorclust(dist_graph):
print "========="
for doc_id in cluster:
print documents[doc_id]["text"]
if __name__ == '__main__':
main(sys.argv)
</code></pre>
<p>For real applications, you would use a decent tokenizer, use integers instead of token-strings and don't calc a O(n^2) distance-matrix... </p>
|
python|cluster-analysis|nlp
| 45 |
1,907,555 | 2,241,891 |
How to initialize a dict with keys from a list and empty value in Python?
|
<p>I'd like to get from this:</p>
<pre><code>keys = [1,2,3]
</code></pre>
<p>to this:</p>
<pre><code>{1: None, 2: None, 3: None}
</code></pre>
<p>Is there a pythonic way of doing it?</p>
<p>This is an ugly way to do it:</p>
<pre><code>>>> keys = [1,2,3]
>>> dict([(1,2)])
{1: 2}
>>> dict(zip(keys, [None]*len(keys)))
{1: None, 2: None, 3: None}
</code></pre>
|
<p><a href="https://docs.python.org/3/library/stdtypes.html#dict.fromkeys" rel="noreferrer"><code>dict.fromkeys</code></a> directly solves the problem:</p>
<pre><code>>>> dict.fromkeys([1, 2, 3, 4])
{1: None, 2: None, 3: None, 4: None}
</code></pre>
<p>This is actually a classmethod, so it works for dict-subclasses (like <code>collections.defaultdict</code>) as well.</p>
<p>The optional second argument, which defaults to <code>None</code>, specifies the value to use for the keys. Note that the <em>same object</em> will be used for each key, which can cause problems with mutable values:</p>
<pre><code>>>> x = dict.fromkeys([1, 2, 3, 4], [])
>>> x[1].append('test')
>>> x
{1: ['test'], 2: ['test'], 3: ['test'], 4: ['test']}
</code></pre>
|
dictionary|python
| 502 |
1,907,556 | 62,921,213 |
Is it possible to use more than one framework at the backend(Spring boot + Django)?
|
<p><strong>tl;dr: Is Spring + Django back-end possible?</strong></p>
<p>When I was new to industry and was still working my way around the office, I got interested in Django and created a very small, basic-level application using the framework. When I got to meet my team after a few weeks, they said to go for Spring framework. After spending half a year on the framework and the main proj, I finally started to get time to start working off-hours. But, I don't want to lose both the skills - My teammate(when we were still in office ;) ) once told me that they worked on a project that started with python code, and then later added features using Java. And I am unable to find any helpful google searches(mostly showing Spring vs Django).</p>
<p>How should I go about it? Is it too much to ask for? Is it worthwhile? Will I learn some new concepts of application architecture a noob like me would have missed. Please provide me with some insight.</p>
<p>Are there resources(docs) I can go through?</p>
<p>P.S. I'm not a diehard fan of either of the frameworks right now, just another coder testing waters.</p>
|
<p>You can't write java in python.</p>
<p>You can extend Python with C/C++ which is quite common: <a href="https://docs.python.org/3/extending/extending.html" rel="nofollow noreferrer">Extending Python with C or C++</a></p>
<p>And about the part that they told that they added features with java:</p>
<p>It's common to create different parts of a project using different languages and tools. Microservice architecture is a common architecture for these kinds of use cases. You basically code different parts of the project in a language you want and then you connect all the parts using different methods like REST APIs, gRPC and etc.</p>
<p>Imagine you are creating a website like youtube that lets others upload videos. There is a form that users upload their files and you store them in your storage and then you have to encode the video file for different qualities. You can code the form handler using Python and Django to store the files in your storage. Then you can code another service using java that handles the encoding part which is a heavy process. When an upload is completed, you send the file or file path to your java service using an internal REST API and tell the service to start encoding the video and notify the Django service and then the Django service will publish the video on the feed that can itself be written in another language.</p>
|
java|python|django|spring|backend
| 2 |
1,907,557 | 44,284,297 |
Python regex, keep alphanumeric but remove numeric
|
<p>I want to remove numbers in my string but keep alphanumeric as is using regex in python.</p>
<pre><code>" How to remove 123 but keep abc123 from this question?"
</code></pre>
<p>I want result to be like:</p>
<pre><code>"How to remove but keep abc123 from this question?"
</code></pre>
<p>I tried</p>
<pre><code>spen=re.sub('[0-9]+', '', que)
</code></pre>
<p>but it removes all numbers.
I want <code>abc123</code> to be as is.</p>
|
<p>You could use the <a href="http://www.regular-expressions.info/wordboundaries.html" rel="noreferrer">word boundary symbol</a> <code>\b</code>, something like this:</p>
<pre><code>re.sub(r'\b[0-9]+\b', '', que)
</code></pre>
<p>That will not match numbers that are part of a longer word.</p>
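<p>Applied to the string from the question:</p>
<pre><code>>>> import re
>>> re.sub(r'\b[0-9]+\b', '', " How to remove 123 but keep abc123 from this question?")
' How to remove  but keep abc123 from this question?'
</code></pre>
<p>(The double space left behind is where the standalone number used to be.)</p>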
|
python|regex
| 8 |
1,907,558 | 44,186,151 |
Python 2.7 - Differences when writing to a file on Win7 vs OSX
|
<p>I've been writing a program that reads in a file, and changes specific characters before writing it again. When I run the program under OSX, the output is exactly as I would like. However, when trying to run it under Windows, the written file has a number of unintended characters sprinkled in the file. If I check the length of the output in Python before I write it, it's the intended size, so I assume something is different with the Python writing protocol on Windows. Here's a snippet of the code if you're interested. </p>
<pre><code> rom = open(rom_name, 'rb').read()
rom_list = list(rom)
for item in ability_locations:
address = int(item, 16)
rand_ind = random.randint(0,len(ability_values) - 1)
new_enemy = ability_values[rand_ind]
new_enemy = chr(int(new_enemy,16))
rom_list[address] = new_enemy
rom = "".join(rom_list)
new_rom = open(rom_name.split(".")[0] + "_" + str(KA_seed) + ".nes", 'w')
new_rom.write(rom)
new_rom.close()
</code></pre>
<p>It might be worth noting that I'm attempting to modify a hex file, so many of the characters are "unusual". I don't know if writing some of these characters might be the problem.</p>
<p>I'd appreciate any help you can give me. Thanks!</p>
<p>Edit: For future people having the same issue, writing in binary mode fixed my issue ('wb' instead of 'w').</p>
|
<p>Windows inserts carriage-return chars when writing in text mode. Write your file in binary mode <code>"wb"</code>:</p>
<pre><code>with open(rom_name, 'rb') as rom:
rom = rom.read()
rom_list = list(rom)
for item in ability_locations:
address = int(item, 16)
new_enemy = random.choice(ability_values)
rom_list[address] = chr(int(new_enemy, 16))
with open('{}_{}.new'.format(rom_name.split(".")[0], KA_seed), 'wb') as new_rom:
new_rom.write("".join(rom_list))
</code></pre>
|
python|windows|macos|python-2.7
| 2 |
1,907,559 | 44,017,834 |
How can I get the attribute in androidviewclient
|
<p>How can I get the dump attribute in androidviewclient?
For example, I want to get the attribute value of 'selected'. Please help!! </p>
<p><a href="https://i.stack.imgur.com/fqMih.png" rel="nofollow noreferrer">android attribute</a></p>
|
<p>If you run <code>dump -a</code> to dump all attributes you can see something like this (filtering <code>selected</code>):</p>
<pre><code>$ dump -a | grep selected
View[ class=android.widget.TextView index=2 selected=false checked=false clickable=false package=com.android.deskclock text=Tomorrow long-clickable=false enabled=true bounds=((36, 1455), (254, 1520)) content-desc= focusable=true focused=false uniqueId=id/no_id/29 checkable=false resource-id=com.android.deskclock:id/upcoming_instance_label password=false class=android.widget.TextView scrollable=false ] parent=android.widget.LinearLayout
</code></pre>
<p>then, you can find any View using whatever attribute is appropriate for you case and invoke <code>View.getSelected()</code> as in this example:</p>
<pre><code>com_android_deskclock___id_upcoming_instance_label = vc.findViewWithTextOrRaise(u'Tomorrow')
print "selected=", com_android_deskclock___id_upcoming_instance_label.getSelected()
</code></pre>
<p>this was generated using <a href="https://github.com/dtmilano/AndroidViewClient/wiki/culebra" rel="nofollow noreferrer">culebra</a>, which is an excellent way of creating your tests or scripts automatically. I just added the <code>print</code> line.</p>
|
android|python|androidviewclient
| 0 |
1,907,560 | 32,954,522 |
Xcode / mod_pbxproj: How to set ENABLE_BITCODE
|
<p>I am trying to modify a Unity3D generated Xcode project with <a href="https://github.com/kronenthaler/mod-pbxproj" rel="nofollow noreferrer">mod_pbxproj.py</a> called from a python script which is triggered by a <a href="http://docs.unity3d.com/ScriptReference/Callbacks.PostProcessBuildAttribute.html" rel="nofollow noreferrer">PostProcessBuild</a> attribute. I need to set <em>ENABLE_BITCODE = NO</em> due to the problem described in <a href="https://stackoverflow.com/questions/30848208/new-warnings-in-ios9">New warnings in iOS 9</a>.</p>
<p>I am a Python newbie and don't know much about the Xcode PBX internals. I tried a number of calls like </p>
<pre><code> project.add_flags ('ENABLE_BITCODE=NO')
</code></pre>
<p>or array, dictionary, etc. variants. Everything I tried either did not do the job or threw an error in the system logs. Finally I ended up with a patch in mod_pbxproj.py which does what I want: </p>
<pre><code>def add_other_buildsetting(self, flag, value):
build_configs = [b for b in self.objects.values() if b.get('isa') == 'XCBuildConfiguration']
for b in build_configs:
if b.add_other_buildsetting(flag, value):
self.modified = True
</code></pre>
<p>and </p>
<pre><code>def add_other_buildsetting(self, flag, value):
modified = False
base = 'buildSettings'
key = flag
if not self.has_key(base):
self[base] = PBXDict()
self[base][key] = value
modified = True
return modified
</code></pre>
<p>Now calling <code>project.add_other_buildsetting ('ENABLE_BITCODE', 'NO')</code> works almost as expected. I got 5 entries in the pbxproj file instead of the 2 changes I noticed when setting the option manually in Xcode. Anyway it seems to work so far.</p>
<p><strong>But:</strong> Patching a well known piece of software feels pretty strange and I cannot believe that it is not possible to add (or modify) an option in the root of the <em>buildSettings</em> tree using the standard mod_pbxproj.py.</p>
<p>How can this be achieved?</p>
<p><strong>Edit:</strong> <a href="https://github.com/kayy/mod-pbxproj" rel="nofollow noreferrer">My fork</a> of mod_pbxproj</p>
|
<p>As long as you have latest <code>mod_pbxproj.py</code> this works fine:</p>
<pre><code>project.add_flags({'ENABLE_BITCODE':'NO'})
</code></pre>
<p>You can get mod_pbxproj.py from here: <a href="https://github.com/kronenthaler/mod-pbxproj/blob/master/mod_pbxproj/mod_pbxproj.py" rel="nofollow">https://github.com/kronenthaler/mod-pbxproj/blob/master/mod_pbxproj/mod_pbxproj.py</a></p>
|
python|xcode|unity3d|bitcode|unity3d-editor
| 0 |
1,907,561 | 13,976,809 |
Abstract classes (with pure virtual methods) in Cython
|
<p><strong>Quick version:</strong>
How to declare an abstract class in Cython? The goal is to declare interface only, so that other classes can inherit from it, there must be <em>no implementation</em> of this class.</p>
<p>interface.pxd:</p>
<pre><code>cdef class IModel:
cdef void do_smth(self)
</code></pre>
<p>impl.pyx:</p>
<pre><code>from interface cimport IModel
cdef class A(IModel):
cdef void do_smth(self):
pass
</code></pre>
<p>Everything <strong>nicely compiles</strong>, but when I'm importing <code>impl.so</code> in python I get following:</p>
<pre><code>ImportError: No module named interface
</code></pre>
<p>Apparently the method wasn't really virtual and python wants <code>IModel</code>'s instance</p>
<p><strong>More details:</strong> </p>
<p>I have a cython extension class (<code>cdef class Integrator</code>) which should operate on any instance, implementing the <code>IModel</code> interface. The interface simply ensures that the instance have a method <code>void get_dx(double[:] x, double[:] dx)</code>, so that integrator can call it every integration step in order to, well, integrate the model. The idea is that one can implement different models in cython and then interactively integrate them and plot the reults in <strong>python</strong> scripts. Like that:</p>
<pre><code>from integrator import Integrator # <-- pre-compiled .so extension
from models import Lorenz # <-- also pre-compiled one, which inherits
# from IModel
mod = Lorenz()
i = Inegrator(mod)
i.integrate() # this one's really fast cuz no python is used inside
# do something with data from i
</code></pre>
<p>The <code>lorenz.pyx</code> class should look something like:</p>
<pre><code>from imodel cimport IModel
cdef class Lorenz(IModel):
cdef void get_dx(double[:] x, double[:] dx)
# implementation
</code></pre>
<p>And the <code>integrator.pyx</code>:</p>
<pre><code>from imodel cimport IModel
cdef class Integrator:
cdef IModel model
def __init__(self, IModel model):
self.model = model
# rest of the implementation
</code></pre>
<p>Ideally, IModel should <em>only</em> exist in the form of a class definition in a <em>cython header</em> file (i.e. imodel.pxd), but so far I could only achieve the desired functionality by writing an ugly dummy implementation class in <code>imodel.pyx</code>. The worst thing is that this useless dummy implementation has to be compiled and linked so that other cython classes can inherit from it.</p>
<p><strong>PS:</strong> I think this is a perfect use case for abstract classes, however, if it in fact looks bad to you, dear OOP coders, please tell me which other approach shall I use. </p>
|
<p>It turns out that it's not quite possible (<a href="https://groups.google.com/forum/?fromgroups=#!topic/cython-users/-PrYBVm7lDE" rel="noreferrer">discussion</a>). Currently, interfaces are not supported, apparently because they are not of critical importance: usual inheritance works quite well.</p>
|
c++|python|oop|cython
| 6 |
1,907,562 | 34,744,852 |
How to Create Custom Permission and Group model in django?
|
<p>I am trying to create my custom Permission and Group models with the following code, but when I try to migrate I get the error "django.db.utils.ProgrammingError: relation "auth_permission" already exists"</p>
<pre><code>class Role(models.Model):
def __unicode__(self):
return self.name
# slug = models.CharField(max_length=50, primary_key=True)
name = models.CharField(max_length=50, blank=True)
class Meta:
db_table = 'auth_group'
# ROLE_CHOICES = (('superuser', 'Super User'),('user', 'User'))
class Permission(models.Model):
def __unicode__(self):
return self.name
codename = models.CharField(max_length=50, blank=False)
name = models.CharField(max_length=50, blank=False)
class Meta:
db_table = 'auth_permission'
</code></pre>
<p>---------------------------Settings.py----------------------------</p>
<pre><code>INSTALLED_APPS = (
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'rest_framework.authtoken',
'django_filters',
'sparkAuth',
# Uncomment the next line to enable the admin:
# 'django.contrib.admin',
# Uncomment the next line to enable admin documentation:
# 'django.contrib.admindocs',
)
</code></pre>
|
<pre><code>class Permission(models.Model):
def __unicode__(self):
return self.name
codename = models.CharField(max_length=50, blank=False)
name = models.CharField(max_length=50, blank=False)
class Meta:
db_table = 'auth_permission'
</code></pre>
<p>You expect your table to be named <code>auth_permission</code> which is already being used by the <code>Permission</code> model in the <code>django.contrib.auth</code> app. This is why the error says: </p>
<pre><code>django.db.utils.ProgrammingError: relation "auth_permission" already exists
</code></pre>
<p>Solution:</p>
<ul>
<li>Choose a different table name (see the sketch below)</li>
<li>Do not specify the table name, then it will be in the form of <code><app>_<modelclass></code></li>
<li>Another very bad idea would be to remove the <code>django.contrib.auth</code> from <code>INSTALLED_APPS</code> in <code>settings.py</code> but then you risk breaking a lot of stuff and you probably really don't want that. </li>
</ul>
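<p>A minimal sketch of the first option (the table name below is just an example — anything that does not collide with the built-in <code>auth_permission</code>):</p>
<pre><code>class Permission(models.Model):
    codename = models.CharField(max_length=50, blank=False)
    name = models.CharField(max_length=50, blank=False)

    class Meta:
        db_table = 'sparkauth_permission'  # avoids the clash with django.contrib.auth

    def __unicode__(self):
        return self.name
</code></pre>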
|
python|django
| 1 |
1,907,563 | 34,568,985 |
Generate fractal squares using recursion
|
<p>I'm learning recursion, and want to achieve this in Python (turtle): </p>
<p><a href="https://i.stack.imgur.com/7a76N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7a76N.png" alt="Fractal cubes"></a></p>
<p>I made a recursive function, in which I draw a square starting from the bottom-left corner, facing 'east'. I can get the squares on one of the sides correct, but not on the other.</p>
<p>Moving backwards before drawing the smaller square gives odd results:</p>
<p><a href="https://i.stack.imgur.com/pnXVp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pnXVp.png" alt="Odd results"></a></p>
<pre class="lang-py prettyprint-override"><code>from turtle import *
delay(0)
speed(10)
def square(length, level):
if level == 0:
return
else:
# Start from the bottom-left corner
forward(length)
# Right square
square(length // 2, level - 1)
lt(90)
forward(length)
lt(90)
forward(length)
lt(90)
forward(length)
lt(90)
### Try moving backward before drawing
##backward(length / 2)
# Left square
square(length // 2, level - 1)
square(110, 4)
</code></pre>
<p>Any tips or good examples for learning these kinds of fractals?</p>
|
<p>When drawing fractal with <code>turtle</code>, you should be careful regarding the following points:</p>
<ol>
<li>Where the function must starts (in your case, you specify "bottom-left corner").</li>
<li>Where it stops (?) - Position and orientation! This points is not clear in your code and it is why it is not working.</li>
</ol>
<p>There are two problems in your code:</p>
<ul>
<li>You should move <code>backward(length // 2)</code> to start correctly the drawing of the left square (as you did in the comment)</li>
<li>You should come back where you started the square (bottom-left corner of the big square)</li>
</ul>
<p>Here is the code with some comment:</p>
<pre><code>def square(length, level):
# Start from the bottom-left corner
if level == 0:
return
else:
# Draw the bottom side
forward(length)
# Draw the right square
square(length // 2, level - 1)
# Assume we ended at the same position
# Draw the right side
lt(90); forward(length)
# Draw the upper side
lt(90); forward(length)
# Draw the left side
lt(90); forward(length)
# Go backward
lt(90); backward(length // 2) ;
# Draw the left square
square(length // 2, level - 1)
# Go back to the original position
forward(length // 2)
</code></pre>
<p>Basically, you were missing the last <code>forward(length // 2)</code> which moves the <em>turtle</em> to its original position.</p>
|
python|recursion|turtle-graphics|fractals
| 1 |
1,907,564 | 27,279,152 |
Need to find which capital letter occurs most often
|
<p>I'm wondering what the best way to write a function in Python would be to find which capitalized letter occurs most often in a string, and then tell me how many times that letter occurs. </p>
<p>I'm messing around with using for loops, the first one going through the string, and then a nested one to go through all capitalized characters. Just trying to find out the best way to count each letter separately.</p>
|
<p>This is a very straight forward application for <code>collections.Counter</code></p>
<pre><code>>>> from collections import Counter
>>> s = 'This is A TesT String With CAPITALS'
>>> c = Counter(i for i in s if i.isupper())
>>> c
Counter({'T': 4, 'A': 3, 'S': 2, 'I': 1, 'W': 1, 'P': 1, 'L': 1, 'C': 1})
</code></pre>
<p>To do this in a more step-by-step manner</p>
<pre><code>>>> uniqueCaps = set(filter(str.isupper, s))
>>> uniqueCaps
{'S', 'C', 'P', 'I', 'L', 'A', 'T', 'W'}
counts = dict()
for letter in uniqueCaps:
counts[letter] = s.count(letter)
>>> counts
{'I': 1, 'S': 2, 'A': 3, 'L': 1, 'T': 4, 'C': 1, 'P': 1, 'W': 1}
</code></pre>
<p>Meeting somewhere in the middle</p>
<pre><code>>>> counts = {letter: s.count(letter) for letter in uniqueCaps}
>>> counts
{'I': 1, 'S': 2, 'A': 3, 'L': 1, 'T': 4, 'C': 1, 'P': 1, 'W': 1}
</code></pre>
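<p>To get just the most frequent capital letter and how many times it occurs, <code>Counter.most_common</code> does it directly:</p>
<pre><code>>>> c.most_common(1)
[('T', 4)]
>>> letter, count = c.most_common(1)[0]
>>> letter, count
('T', 4)
</code></pre>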
|
python
| 3 |
1,907,565 | 23,171,169 |
Transparent tkinter frame?
|
<p>Dear Stackoverflow community;</p>
<p>I am working on a Touchscreen application.
For this I need to change the window when a user clicks on it.
My code:</p>
<pre><code>def init(win):
def getclick(event):
parent.destroy()
openSubWindow(dialog);
frame = Frame(win, width=650, height=550)
frame.bind("", getclick)
frame.pack()
win.title("Ausgangsposition")
win.minsize(650, 550)
</code></pre>
<p>I used the following idea:
<a href="http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm" rel="nofollow">http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm</a>
The idea:
Create a Frame which overlays the whole window; a click on the frame means changing the window.
Is there any way to make the frame transparent, or is my idea completely wrong? At the moment the frame hides all my Labels in this window.</p>
|
<p>Could you evaluate the following approaches?</p>
<ol>
<li><p>remove the frame</p>
<pre><code>frame.pack_forget() # remove
frame.pack() # place it there again
</code></pre></li>
<li><p>create a toplevel widget that is above the frame all the time. It can have a transparency from 0.0 to 1.0 via the window's <code>-alpha</code> attribute (sketched below)</p></li>
<li><p>Try to put the frame behind the other elements and raise the frame above the other elements (also sketched below).</p></li>
</ol>
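<p>A minimal sketch of options 2 and 3 (the <code>-alpha</code> window attribute is supported on Windows and macOS; support on X11 depends on the compositor):</p>
<pre><code># option 2: a click-catching Toplevel with partial transparency
overlay = Toplevel(win)
overlay.attributes('-alpha', 0.3)      # 0.0 = invisible, 1.0 = opaque
overlay.bind("<Button-1>", getclick)

# option 3: push the frame behind the other widgets, or bring it back on top
frame.lower()
frame.lift()
</code></pre>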
|
python|tkinter|frame|python-2.x|transparent
| 0 |
1,907,566 | 257,563 |
Python MySQL Statement returning Error
|
<p>hey, I'm very new to all this so please excuse stupidity :)</p>
<pre><code>import os
import MySQLdb
import time
db = MySQLdb.connect(host="localhost", user="root", passwd="********", db="workspace")
cursor = db.cursor()
tailoutputfile = os.popen('tail -f syslog.log')
while 1:
x = tailoutputfile.readline()
if len(x)==0:
break
y = x.split()
if y[2] == 'BAD':
timestring = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time()))
cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
if y[2] == 'GOOD':
print y[4] + '\t' + y[7]
</code></pre>
<p>so i run the program and this is the error message I am getting</p>
<pre><code>user@machine:~/$ python reader.py
Traceback (most recent call last):
File "reader.py", line 17, in ?
cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
File "/usr/lib/python2.4/site-packages/MySQLdb/cursors.py", line 163, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/python2.4/site-packages/MySQLdb/connections.py", line 35, in defaulterrorhandler
raise errorclass, errorvalue
_mysql_exceptions.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[4], y[7]' at line 1")
user@machine:~/$
</code></pre>
<p>So i'm assuming that the error is obviously coming from the SQL Statement </p>
<pre><code>cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
</code></pre>
<p>Here is an example of what y[4] and y[7] will look like. </p>
<pre><code>YES Mail.Sent.To.User:user@work.com.11.2.2008:23.17
</code></pre>
<p>Is this error happening because I should be escaping those values before I try and Insert them into the Database?
Or am I completely missing the point??</p>
<p>Any help would be appreciated!
thanks in advance. </p>
|
<p>As pointed out, you're failing to copy the Python variable values into the query, only their names, which mean nothing to MySQL.</p>
<p>However the direct string concatenation option:</p>
<pre><code>cursor.execute("INSERT INTO releases (date, cat, name) VALUES ('%s', '%s', '%s')" % (timestring, y[4], y[7]))
</code></pre>
<p>is dangerous and should never be used. If those strings have out-of-bounds characters like ' or \ in, you've got an SQL injection leading to possible security compromise. Maybe in your particular app that can never happen, but it's still a very bad practice, which beginners' SQL tutorials really need to stop using.</p>
<p>The solution using MySQLdb is to let the DBAPI layer take care of inserting and escaping parameter values into SQL for you, instead of trying to % it yourself:</p>
<pre><code>cursor.execute('INSERT INTO releases (date, cat, name) VALUES (%s, %s, %s)', (timestring, y[4], y[7]))
</code></pre>
|
python|sql
| 10 |
1,907,567 | 267,436 |
How do I treat an ASCII string as unicode and unescape the escaped characters in it in python?
|
<p>For example, if I have a <em>unicode</em> string, I can encode it as an <em>ASCII</em> string like so:</p>
<pre><code>>>> u'\u003cfoo/\u003e'.encode('ascii')
'<foo/>'
</code></pre>
<p>However, I have e.g. this <em>ASCII</em> string:</p>
<pre><code>'\u003foo\u003e'
</code></pre>
<p>... that I want to turn into the same <em>ASCII</em> string as in my first example above:</p>
<pre><code>'<foo/>'
</code></pre>
|
<p>It took me a while to figure this one out, but <a href="http://www.egenix.com/www2002/python/unicode-proposal.txt" rel="noreferrer">this page</a> had the best answer:</p>
<pre><code>>>> s = '\u003cfoo/\u003e'
>>> s.decode( 'unicode-escape' )
u'<foo/>'
>>> s.decode( 'unicode-escape' ).encode( 'ascii' )
'<foo/>'
</code></pre>
<p>There's also a 'raw-unicode-escape' codec to handle the other way to specify Unicode strings -- check the "Unicode Constructors" section of the linked page for more details (since I'm not that Unicode-saavy).</p>
<p>EDIT: See also <a href="http://www.python.org/doc/2.5.2/lib/standard-encodings.html" rel="noreferrer">Python Standard Encodings</a>.</p>
|
python|unicode|ascii
| 53 |
1,907,568 | 41,907,704 |
How would you make it so a option goes back to a different menu?
|
<p>It's not perfect, I know, but for the choices inbox, sent and spam I would like to make it so it returns to that menu instead of the main one.</p>
<p>Please help and thank you </p>
<pre><code>def menu():
print("""
<<<<<<<<<<<<<<<<<<<<>>>>>>>>
1.Sign In||2.Create Account|
3.Exit || |
<<<<<<<<<<<<<<<<<<<<>>>>>>>>
""")
ans=input("What would you like to do? ")
if ans=="1":
email()
elif ans=="2":
sign()
elif ans=="3":
print("\n Goodbye")
ans = None
else:
print("\n Not Valid Choice Try again")
import re
def email():
email = input("Enter Your email: ")
match = re.search(r'[\w.-]+@[\w.-]+.\w+', email)
if match:
print ("Valid email, Your are now logged in :D "), match.group()
print("""
<<<<<<<<<<<<<<<<<<<<>>>>>>>>
1.Inbox||2.Spam |
3.Sent || |
<<<<<<<<<<<<<<<<<<<<>>>>>>>>
""")
else:
print ("Not valid,Try Again! ")
choice=input("Pick Your Choice:")
if choice=="1":
inbox()
elif choice=="2":
spam()
elif choice=="3":
sent()
choice = None
else:
print("Not valid,Try Again! ")
def sign():
user = input('Create Email: ')
password = input('Create Password: ')
print("You have now created your account, go back to the menu to sign in!")
anykey=input("Enter anything to return to the menu")
menu()
store_user = [brandonbiba@gmail.com]
store_pass = [brandon]
def inbox():
print("You have no mail, come back later.")
anykey=input("Enter anything to return to the menu")
menu()
def sent():
print(" You have sent 1 piece of mail to m.mazi@gmail.com ")
anykey=input("Enter anything to return to the menu")
menu()
import random
def spam():
random.randrange(1,1000)
print("This is how much spam you have!")
menu()
</code></pre>
|
<p>You can give control back to any function that called <code>inbox()</code> or <code>sent()</code>. Use <code>return</code> statements instead of calling <code>menu()</code> in the 2 functions above. Like this:</p>
<pre><code>def inbox():
print("You have no mail, come back later.")
anykey=input("Enter anything to return to the menu")
return
def sent():
print(" You have sent 1 piece of mail to m.mazi@gmail.com ")
anykey=input("Enter anything to return to the menu")
return
</code></pre>
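<p>If you also want the inbox/spam/sent menu to be shown again after each choice, one option (a minimal sketch, not the original code; it reuses the <code>inbox()</code>, <code>spam()</code> and <code>sent()</code> functions above) is to put the sub-menu prompt in its own loop, so control comes back to it after each choice:</p>
<pre><code>while True:
    choice = input("Pick Your Choice:")
    if choice == "1":
        inbox()
    elif choice == "2":
        spam()
    elif choice == "3":
        sent()
    elif choice == "":
        break  # empty input goes back to the main menu
    else:
        print("Not valid, Try Again!")
</code></pre>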
|
python|python-3.x
| 0 |
1,907,569 | 41,760,225 |
Concatenate row values using python
|
<p>I am new to Python and have been stuck on this topic for 2 days; I tried looking for a basic answer but couldn't find one, so finally I decided to ask my own question.</p>
<p>I want to concatenate the values of only the first two rows of my csv file (if possible with the help of built-in modules).
Any kind of help would be appreciated. Thanks in advance.</p>
<h2>Below is my sample csv file without headers:</h2>
<pre><code>1,Suraj,Bangalore
2,Ahuja,Karnataka
3,Rishabh,Bangalore
</code></pre>
<h2>Desired Output:</h2>
<pre><code>1 2,Suraj Ahuja,Bangalore Karnataka
3,Rishabh,Bangalore
</code></pre>
|
<p>Just create a <code>csv.reader</code> object (and a <code>csv.writer</code> object). Then use <code>next()</code> on the first 2 rows and zip them together (using list comprehension) to match the items.</p>
<p>Then process the rest of the file normally.</p>
<pre><code>import csv
with open("file.csv") as fr, open("output.csv","w",newline='') as fw:
cr=csv.reader(fr)
cw=csv.writer(fw)
title_row = [" ".join(z) for z in zip(next(cr),next(cr))]
cw.writerow(title_row)
# dump the rest as-is
    cw.writerows(cr)
</code></pre>
<p>(you'll get an exception if the file has only 1 row of course)</p>
|
python|csv|join|row|concatenation
| 2 |
1,907,570 | 70,821,485 |
Tournament lab CS50: list index out of range
|
<p>I am new to Python and am doing the lab 6 problem in CS50 but ran into the error: list index out of range. Even though I have rechecked many times and also watched the hint video of the course, I still couldn't find a way to fix this problem. Please help me.</p>
<p>Here is my code:</p>
<pre><code># Simulate a sports tournament
import csv
import sys
import random
# Number of simluations to run
N = 1000
def main():
# Ensure correct usage
if len(sys.argv) != 2:
sys.exit("Usage: python tournament.py FILENAME")
teams = []
# TODO: Read teams into memory from file
with open(sys.argv[1], "r") as file:
reader = csv.DictReader(file)
# The line 22 is to skip the first line of the csv file
next (reader)
for row in reader:
row["rating"] = int(row["rating"])
teams.append(row)
counts = {}
# TODO: Simulate N tournaments and keep track of win counts
for i in range(0,N,1):
winner = simulate_tournament(teams)
if winner in counts:
counts[winner] += 1
else:
counts[winner] = 1
# Print each team's chances of winning, according to simulation
for team in sorted(counts, key=lambda team: counts[team], reverse=True):
print(f"{team}: {counts[team] * 100 / N:.1f}% chance of winning")
def simulate_game(team1, team2):
"""Simulate a game. Return True if team1 wins, False otherwise."""
rating1 = team1["rating"]
rating2 = team2["rating"]
probability = 1 / (1 + 10 ** ((rating2 - rating1) / 600))
return random.random() < probability
def simulate_round(teams):
"""Simulate a round. Return a list of winning teams."""
winners = []
# Simulate games for all pairs of teams
for i in range(0, len(teams), 2):
if simulate_game(teams[i], teams[i + 1]):
winners.append(teams[i])
else:
winners.append(teams[i + 1])
return winners
def simulate_tournament(teams):
"""Simulate a tournament. Return name of winning team."""
while len(teams) > 1:
teams = simulate_round(teams)
# "team" because the name "team" is define in the csv file. Because each element in teams list is a dictionary therefore
# we need to add ["team"] meaning we will only return the name of the final winning team
return teams[0]["team"]
if __name__ == "__main__":
main()
</code></pre>
<p>and when running, I got this error:</p>
<pre><code>Traceback (most recent call last):
File "/workspaces/86940196/world-cup/tournament.py", line 70, in <module>
main()
File "/workspaces/86940196/world-cup/tournament.py", line 29, in main
winner = simulate_tournament(teams)
File "/workspaces/86940196/world-cup/tournament.py", line 64, in simulate_tournament
teams = simulate_round(teams)
File "/workspaces/86940196/world-cup/tournament.py", line 53, in simulate_round
if simulate_game(teams[i], teams[i + 1]):
IndexError: list index out of range
</code></pre>
<p>I did try adding i = 0 before every for loop using variable i but it didn't work either :(</p>
|
<p>I just realized that when using csv.DictReader, the reader automatically treats the first line as the “keys”, so there is no need to add next(reader). After deleting that line, everything works perfectly now.</p>
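<p>So the reading block reduces to something like this (a sketch of just that part of <code>main()</code>):</p>
<pre><code>with open(sys.argv[1], "r") as file:
    reader = csv.DictReader(file)  # the first CSV line is used as the field names
    for row in reader:
        row["rating"] = int(row["rating"])
        teams.append(row)
</code></pre>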
|
python|cs50|tournament
| 0 |
1,907,571 | 71,066,450 |
How to remove previous outputs and execute Python commands through Python's input()?
|
<p>I have a piece of code that stops the loop when nothing is entered, but is there a way to make it look cleaner and delete the last line that has nothing? It will give me an output like:</p>
<blockquote>
<p>**Line 1: print("Hello World"),</p>
</blockquote>
<blockquote>
<p>Line 2:**</p>
</blockquote>
<p>How can I delete that last line(<em>Line 1: print("hello world"</em>)? Note, I tried using the \r\033[F thing but it doesn't seem to work for inputs.</p>
<pre class="lang-py prettyprint-override"><code>line_number=1
command=None
command2=None
command3=None
command4=None
letter_start=0
letter_end=0
command_end=0
textbox_input=None
print_type=1 #1 = string: 2 = variable
commands=[]
variables={}
def execute_command(textbox):
global print_type
global commands
if textbox.startswith('p'):
if textbox.startswith("pr"):
if textbox.startswith("print("):
command=print #You can change what command is used
if print_type == 1:
letter_start=7 #"letter_start" is the variable that holds the position of the first letter (not including parentheses "" or brackets())
command_end=5
elif print_type == 2:
letter_start=6
command_end=5
else:
print("Only Print")
else:
print("Only Print")
elif "=" in textbox:
equals=textbox.index("=")
variables[textbox(0,equals)]=textbox(equals,-1)
if textbox.startswith('(', command_end):
if textbox.startswith('("', command_end):
if textbox.endswith(')'):
if textbox.endswith('")'):
print_type=1
letter_end=textbox.index(')')-1
textbox_input=textbox[letter_start:letter_end]
else:
print(f"-->{textbox}<--: Missing quotes")
else:
print(f"-->{textbox}<--: Missing parenthesis") ;
elif textbox.endswith(')'):
if textbox.endswith('")'):
print(f"-->{textbox}<--: Missing quotes")
else:
print_type=2
letter_end=textbox.index(')')
else:
print(f"-->{textbox}<--: Missing parenthesis")
else:
print(f"-->{textbox}<--: Missing parenthesis")
#PROGRAM BEGGINING
while True:
textbox1=input(f" Line {line_number}: ")
line_number+=1
if textbox1:
commands.append(textbox1)
else: #if the line is empty finish inputting commands
break
print("--------------")
print(commands)
for cmd in commands:
execute_command(cmd)
</code></pre>
<p>Edit:</p>
<p>According to the comments, here's what the questioner wants to achieve:</p>
<ul>
<li>Want to remove or edit the previous <code>input</code>.</li>
<li>Execution of <code>Python</code> commands through <code>input</code> function.</li>
<li>Saving the command typed in the <code>input</code> function in a file.</li>
</ul>
|
<p>What I understood is that you want to create a program which executes <code>Python</code> commands like <code>print('Hello World')</code>, shows you the output and also saves it in a document or text file.</p>
<p>For executing Python commands, you can use the <code>subprocess</code> module.</p>
<pre><code># Code to execute python command
import subprocess
def executor(cmd):
process=subprocess.Popen(['python.exe'],
stdin=subprocess.PIPE, # Input
stdout=subprocess.PIPE,# Output
stderr=subprocess.PIPE,# Error
shell=True)
results=process.communicate(input=bytes(cmd,'utf-8')) # We can only send data to subprocess in bytes.
output=results[0].decode('utf-8') # Decoding or converting output from bytes to string.
error=results[1].decode('utf-8')# Decoding or converting output from bytes to string.
return (output,error)
a=executor('''print('hellow')''')
print(a)
</code></pre>
<p>To execute commands from <code>input</code>, you can just change <code>a=executor('''print('hellow')''')</code> to <code>a=executor(input())</code>.</p>
<p>and in order to save the previous command in a text file, you can do the following:</p>
<ol>
<li>Change <code>return (output,error)</code> to <code>return (output,error,cmd)</code> so you can have an instance of text passed in <code>input</code>.</li>
<li>Write it in a file.</li>
</ol>
<p>To clear the screen after the execution of previous code, just use <code>os.system('cls') # on windows</code></p>
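<p>A minimal sketch tying those steps together (the <code>save_history</code> helper and file name are placeholders of mine, not part of the original code):</p>
<pre><code>import os

def save_history(cmd, path="history.txt"):
    # append every executed command to a plain text file
    with open(path, "a") as f:
        f.write(cmd + "\n")

cmd = input()
output, error = executor(cmd)[:2]  # executor() as defined above
save_history(cmd)
os.system('cls' if os.name == 'nt' else 'clear')  # clear the previous output
print(output or error)
</code></pre>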
<hr />
<p>Here are the site's I used for references:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/10673185/python-execute-command-line-sending-input-and-reading-output">Execution from python 1</a></li>
<li><a href="https://www.phillipsj.net/posts/executing-powershell-from-python/" rel="nofollow noreferrer">Execution from python 2</a></li>
<li><a href="https://stackoverflow.com/questions/4810537/how-to-clear-the-screen-in-python">To clear Python Screen</a></li>
<li><a href="https://docs.python.org/3/library/threading.html" rel="nofollow noreferrer">I would suggest you to know about Treading too, i am just guessing that it would be use full for you</a></li>
</ul>
<hr />
<p>You might have to make some changes to the code for your usage; here on Stack Overflow we just give you hints, we cannot create the full program for you.</p>
|
python|python-3.x
| 0 |
1,907,572 | 33,818,007 |
Error building/installing mod_wsgi on AWS elasticbeanstalk for Django deployment
|
<p>I'm getting the following error trying to deploy a Django app. I've been following the tutorial in the documentation at <a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html" rel="nofollow">http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html</a>. My current hunch is it's because pip is trying to build mod_wsgi from source instead of using a .whl, but that's the extent of my troubleshooting expertise. How can I fix it?</p>
<pre><code>[2015-11-20T02:11:22.515Z] INFO [23704] - [Application update/AppDeployStage0/AppDeployPreHook/03deploy.py] : Activity execution failed, because: Collecting Django==1.8.6 (from -r /opt/python/ondeck/app/requirements.txt (line 1))
Using cached Django-1.8.6-py2.py3-none-any.whl
Collecting mod-wsgi==4.4.7 (from -r /opt/python/ondeck/app/requirements.txt (line 2))
Using cached mod_wsgi-4.4.7.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "/tmp/pip-build-EAwCnb/mod-wsgi/setup.py", line 140, in <module>
'missing Apache httpd server packages.' % APXS)
RuntimeError: The 'apxs' command appears not to be installed or is not executable. Please check the list of prerequisites in the documentation for this package and install any missing Apache httpd server packages.
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-EAwCnb/mod-wsgi
2015-11-20 02:11:22,510 ERROR Error installing dependencies: Command '/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1
Traceback (most recent call last):
File "/opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py", line 22, in main
install_dependencies()
File "/opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py", line 18, in install_dependencies
check_call('%s install -r %s' % (os.path.join(APP_VIRTUAL_ENV, 'bin', 'pip'), requirements_file), shell=True)
File "/usr/lib64/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1 (ElasticBeanstalk::ExternalInvocationError)
caused by: Collecting Django==1.8.6 (from -r /opt/python/ondeck/app/requirements.txt (line 1))
Using cached Django-1.8.6-py2.py3-none-any.whl
Collecting mod-wsgi==4.4.7 (from -r /opt/python/ondeck/app/requirements.txt (line 2))
Using cached mod_wsgi-4.4.7.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "/tmp/pip-build-EAwCnb/mod-wsgi/setup.py", line 140, in <module>
'missing Apache httpd server packages.' % APXS)
RuntimeError: The 'apxs' command appears not to be installed or is not executable. Please check the list of prerequisites in the documentation for this package and install any missing Apache httpd server packages.
</code></pre>
|
<p>Sorted it. </p>
<p>apxs is supplied by the httpd-devel package. Use eb ssh to connect, then</p>
<pre><code>yum list installed|grep httpd
</code></pre>
<p>to see what version of httpd-devel matches the environment. Install with</p>
<pre><code>sudo yum install httpd24-devel
</code></pre>
<p>and then</p>
<pre><code>sudo /opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt
</code></pre>
<p>to check that it's fixed. If that works then create or edit a (local) file </p>
<pre><code>.ebextensions/01_packages.config
</code></pre>
<p>or similar to add that as part of the deployment:</p>
<pre><code>packages:
yum:
httpd24-devel: []
</code></pre>
|
python|django|amazon-web-services
| 1 |
1,907,573 | 47,011,889 |
Sentry for Python: Add "git blame" like prefix to each source code line
|
<p>It would be really great if exceptions in sentry would contain info like <code>git blame</code> does.</p>
<p>If every line of source code which I see in an exception in sentry would have a prefix like <code>git blame</code> (date, commit hash, author) you could find the relevant commit faster.</p>
<p>AFAIK sentry can't do this out of the box. Where and how could I hook into sentry to get this?</p>
<p>Please leave a comment why you down-vote this question. I am curious and willing to learn.</p>
<p>Just for the records. The sentry team is working on something like this. Not exactly, but it does solve the same use case: <a href="https://github.com/getsentry/sentry/issues/6547" rel="nofollow noreferrer">https://github.com/getsentry/sentry/issues/6547</a></p>
|
<p>A traceback consists of multiple lines of code. You can extract info from <code>git blame</code> for each of these lines using the <a href="https://pypi.python.org/pypi/GitPython/" rel="nofollow noreferrer">GitPython</a> library:</p>
<pre><code>import sys
import traceback
from git import Repo
def commit_info(file_path, line_number):
for commit, lines in Repo().blame('HEAD', file_path):
line_number -= len(lines)
if line_number <= 0:
return commit.hexsha, commit.committed_datetime, commit.author.name
try:
raise Exception('error')
except Exception:
for filename, line_number, _, _ in traceback.extract_tb(sys.exc_info()[2]):
print filename, line_number, commit_info(filename, line_number)
</code></pre>
<p>Then, it's up to you how you want to send this information to Sentry. One of the possible solutions is choosing one commit of the list above and using the <code>extra</code> keyword and let your logger do the job for you:</p>
<pre><code>try:
raise Exception
except Exception:
commit = choose_one_commit()
    logger.exception('Error', extra={'author': commit.author.name, 'sha': commit.hexsha})
</code></pre>
<p>Also, you can use your own logger which would add this <code>extra</code> parameter to all <code>.exception()</code>, <code>.error()</code> and <code>.critical()</code> calls.</p>
<p>Overall, it's quite vague what behaviour you want to achieve, and everything is possible. But calling <code>git blame</code> is expensive and may hurt the performance of your application a lot.</p>
|
python|sentry|raven
| 1 |
1,907,574 | 67,803,041 |
Error in connecting with pyspark to SQL Server
|
<p>I am trying to connect to a SQL Server from local with pyspark. I've downloaded the last <a href="https://docs.microsoft.com/es-es/sql/connect/jdbc/release-notes-for-the-jdbc-driver?view=sql-server-2016#a-id72-722" rel="nofollow noreferrer">driver version</a>, but when running the code below this error shows:</p>
<blockquote>
<p>Error: "The server selected protocol version TLS10 is not accepted by client preferences [TLS12]"</p>
</blockquote>
<p>The code is as following (Spark session was initiated before and the driver is located in a supported path):</p>
<pre><code>jdbcDF = spark.read \
.format("jdbc") \
.option("url","jdbc:sqlserver://<host>\\<port>;database=<database>;") \
.option("dbtable", <table>) \
.option("user", <username>) \
.option("password", <password>) \
.option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver") \
.load()
</code></pre>
<p>I read that one possible solution would be to change some configuration from the SQL Server, but this is not possible for me. Is there any way of fixing it without changing anything in the SQL Server?</p>
|
<p>I think your question is similar to <a href="https://stackoverflow.com/questions/67246010/the-server-selected-protocol-version-tls10-is-not-accepted-by-client-preferences">this one</a></p>
<p>From the above link:</p>
<p>TLS10 is disabled in Java 1.8 and above. You can enable it by removing <code>TLSv1</code> from <code>jdk.tls.disabledAlgorithms</code> in the <code>java.security</code> file (this edit is done on your client machine, not on SQL Server).</p>
<p>This worked in my case.</p>
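<p>For reference, the edit looks roughly like the following (the exact list in <code>jdk.tls.disabledAlgorithms</code> varies by JDK version, so treat this as an illustrative sketch of the <code>java.security</code> file shipped with your JRE/JDK):</p>
<pre><code># before
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, ...

# after (TLSv1 removed, so the client may negotiate TLS 1.0)
jdk.tls.disabledAlgorithms=SSLv3, TLSv1.1, RC4, DES, MD5withRSA, ...
</code></pre>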
|
python|sql-server|jdbc|pyspark
| 2 |
1,907,575 | 61,287,333 |
django.db.utils.OperationalError: no such table: Homepage_generalsettings
|
<p>I am setting up a git project on my local server.</p>
<p>When I try to makemigrations, migrate, and run,</p>
<p>I get the following error:</p>
<p>django.db.utils.OperationalError: no such table: Homepage_generalsettings</p>
<p>I have installed sqlite as well. I am using Django version 3.</p>
<p>Please help me to solve this problem. <a href="https://i.stack.imgur.com/N9oit.png" rel="nofollow noreferrer">Screenshot of error message</a></p>
|
<p>Based on the screenshot, you have code that's accessing the database outside of a view, during import time:</p>
<pre><code>general_settings = GeneralSettings.objects.all()[0]
</code></pre>
<p>That's disallowed; that table doesn't necessarily exist while things are being imported.</p>
<p>You need to refactor things so this doesn't happen; one easy option is to make <code>general_settings</code> there a property:</p>
<pre><code>@property
def general_settings(self):
return GeneralSettings.objects.get() # assumes only one `GeneralSettings` row
</code></pre>
|
python|django
| -1 |
1,907,576 | 57,035,509 |
Want create a new list from existing list using if else statement
|
<blockquote>
<p>I want to create a new list using the existing list "dta", where I do not want a value for "banana" but a numeric value for the others.
The desired list is below:</p>
</blockquote>
<pre><code>[1,3,4,5,6]
</code></pre>
<p>But when I try to print my final list "d", I get only a single value.</p>
<pre><code>dta=list(["apple","banana","pine","cucumber","Guava","Coconut"])
d=[]
def cont_list(x):
for i in x:
if i=="banana":
continue
if i=="pine":
d.append(3)
elif i=="apple":
d.append(1)
elif i=="cucumber":
d.append(4)
elif i=="Guava":
d.append(5)
else:
d.append(6)
return d
cont_list(dta)
print(d)
</code></pre>
|
<p>You can do it with a list comprehension:</p>
<pre><code>dta = ["apple","banana","pine","cucumber","Guava","Coconut"]
d = [idx for idx in range(1,len(dta)+1) if dta[idx-1] != "banana"]
print (d)
</code></pre>
<p>In a function:</p>
<pre><code>dta = list(["apple","banana","pine","cucumber","Guava","Coconut"])

def cont_list(x):
    d = [idx for idx in range(1, len(x)+1) if x[idx-1] != "banana"]
    return d

print(cont_list(dta))
</code></pre>
<p>output:</p>
<pre><code>[1, 3, 4, 5, 6]
</code></pre>
<p>NOTE: your code will be OK if you fix the INDENTATION:</p>
<pre><code>d=[]
def cont_list(x):
for i in x:
if i=="banana":
continue
if i =="pine":
d.append(3)
elif i=="apple":
d.append(1)
elif i=="cucumber":
d.append(4)
elif i=="Guava":
d.append(5)
else:
d.append(6)
return d
cont_list(dta)
print(d)
</code></pre>
<p>or you can do it like:</p>
<pre><code>d=[]
def cont_list(x):
for i in range(len(x)):
if x[i]=="banana":
continue
else:
d.append(i+1)
return d
cont_list(dta)
print(d)
</code></pre>
|
python|list|if-statement|jupyter-notebook|list-comprehension
| 2 |
1,907,577 | 27,674,289 |
The complexity of Python issubset()
|
<p>Given two sets A and B and their lengths a=len(A) and b=len(B), where a>=b. What is the complexity of Python 2.7's issubset() function, i.e., B.issubset(A)? There are two conflicting answers I can find on the Internet:</p>
<p>1, O(a) or O(b)</p>
<p>found from:<a href="https://wiki.python.org/moin/TimeComplexity" rel="noreferrer">https://wiki.python.org/moin/TimeComplexity</a>
and bit.ly/1AWB1QU</p>
<p>(Sorry that I can not post more http links so I have to use shorten url instead.)</p>
<p>I downloaded the source code from Python offical website and found that:</p>
<pre><code>def issubset(self, other):
"""Report whether another set contains this set."""
self._binary_sanity_check(other)
if len(self) > len(other): # Fast check for obvious cases
return False
for elt in ifilterfalse(other._data.__contains__, self):
return False
return True
</code></pre>
<p>There is only one loop here.</p>
<p>2, O(a*b)</p>
<p>found from: bit.ly/1Ac7geK</p>
<p>I also found some codes look like source codes of Python from: bit.ly/1CO9HXa as following:</p>
<pre><code>def issubset(self, other):
for e in self.dict.keys():
if e not in other:
return False
return True
</code></pre>
<p>There are two loops here.</p>
<p>So which one is right? Could someone give me a detailed answer about the difference between the above two explanations? Great thanks in advance.</p>
|
<p><strong>The complexity of <code>B.issubset(A)</code> is <code>O(len(B))</code></strong>, assuming that <code>e in A</code> is constant-time.</p>
<p>This is a reasonable assumption generally, but it can easily be violated with a bad hash function. If, for example, all elements of <code>A</code> had the same hash code, the time complexity of <code>B.issubset(A)</code> would deteriorate to <code>O(len(B) * len(A))</code>.</p>
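<p>As a quick illustration of that worst case (a made-up class, not from the question), elements whose hashes all collide force every membership test into a linear scan:</p>
<pre><code>class BadHash(object):
    def __init__(self, x):
        self.x = x
    def __hash__(self):
        return 1  # every element lands in the same hash bucket
    def __eq__(self, other):
        return self.x == other.x

A = set(BadHash(i) for i in range(1000))
B = set(BadHash(i) for i in range(100))
B.issubset(A)  # each 'in A' test now degrades towards O(len(A))
</code></pre>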
<p>In your second code snippet, the complexity is the same as above. If you look closely, there is only one loop; the other is an <code>if</code> statement (<code>if e not in other:</code>).</p>
|
python|performance|loops|time-complexity
| 8 |
1,907,578 | 72,181,057 |
what is the ideal parameters for spectrogram of eeg signal?
|
<p>I am trying to plot a spectrogram of an EEG signal whose sampling rate is 1000 Hz and which is filtered with a bandpass of 14 - 70 Hz; the length of the signal is 440 samples (and I can't increase the length of the signal). The signal (data link <a href="https://drive.google.com/file/d/1qFR_3G7pJ8pEM1wQWdZM8DLRYCosnuhi/view?usp=sharing" rel="nofollow noreferrer">here</a>) looks like this:</p>
<p><a href="https://i.stack.imgur.com/7aB5z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7aB5z.png" alt="image_orginal_signal" /></a></p>
<p>I have tried the following parameter values to plot the spectrogram:</p>
<pre><code>#plot spectrogram for a single channel
fs = 1000
nperseg = 200
plt.figure(figsize=(10,10))
f, t, Sxx = signal.spectrogram(Oz[0], fs, nperseg=nperseg, noverlap=nperseg-1)
plt.pcolormesh(t, f, Sxx, shading='gouraud')
plt.ylim([0,70])
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
</code></pre>
<p>which gave a plot like this:</p>
<p><a href="https://i.stack.imgur.com/m65HT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m65HT.png" alt="spectrogram_of_signal" /></a></p>
<p>Can anyone suggest ideal parameter settings to improve the spectrogram resolution?</p>
<p><strong>EDIT:</strong>
To clarify my question, I want to know how the values of parameters like <strong>nperseg</strong>, <strong>nfft</strong>, <strong>window_size</strong> and <strong>noverlap</strong> are decided, and how they would be related if I have the <strong>sampling rate (fs)</strong> and the <strong>length of the signal</strong>.</p>
|
<p>I don't know what output you expect, but you can try to change the shading of the pcolormesh. Also you can add a window to the computation of the spectrogram. You can also change the colors used to represent the spectrogram with cmap.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from scipy import signal
my_data = np.genfromtxt('signal_value_spectro.csv', delimiter=',',skip_header=0)
Oz=my_data[1:,0]
fs = 1000
t = np.arange(len(Oz))/fs
# nperseg = len(Oz[0])-1
nperseg=50
f50, t50, Sxx_50 = signal.spectrogram(Oz, fs, nperseg=nperseg , noverlap=nperseg-1,window=signal.get_window('hann',nperseg))
nperseg=150
f150, t150, Sxx_150 = signal.spectrogram(Oz, fs, nperseg=nperseg , noverlap=nperseg-1,window=signal.get_window('hann',nperseg))
nperseg=350
f350, t350, Sxx_350 = signal.spectrogram(Oz, fs, nperseg=nperseg , noverlap=nperseg-1,window=signal.get_window('hann',nperseg))
plt.figure(figsize=(10,10))
plt.subplot(211)
plt.plot(np.arange(len(Oz))/fs,Oz)
plt.subplot(234)
plt.pcolormesh(t50, f50, Sxx_50, shading='auto',cmap = 'inferno')
plt.ylim([0,70])
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec], nperseg = 50')
plt.subplot(235)
plt.pcolormesh(t150, f150, Sxx_150, shading='auto',cmap = 'inferno')
plt.ylim([0,70])
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec], nperseg = 150')
plt.subplot(236)
plt.pcolormesh(t350, f350, Sxx_350, shading='auto',cmap = 'inferno')
plt.ylim([0,70])
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec], nperseg = 350')
plt.show()
</code></pre>
<p>But your big issue is the length of your signal. You are limited by the time-frequency resolution. Basically, the number of frequency bands you get is nperseg divided by 2, spread over the interval [0, FS/2]. As your signal is 440 samples, nperseg should be low. But if you increase nperseg too much, you will lose the time resolution. For instance, if nperseg = 50 there will be about 390 points in time, but if nperseg = 350 there will only be about 90 points in time.</p>
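<p>A quick way to see that trade-off numerically (assuming the 440-sample signal and noverlap = nperseg - 1 as above):</p>
<pre><code>n = 440  # signal length in samples
for nperseg in (50, 150, 350):
    freq_bins = nperseg // 2 + 1  # rows of Sxx, spread over [0, fs/2]
    time_pts = n - nperseg + 1    # columns of Sxx when noverlap = nperseg - 1
    print(nperseg, freq_bins, time_pts)
# gives (50, 26, 391), (150, 76, 291), (350, 176, 91)
</code></pre>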
<p><a href="https://i.stack.imgur.com/9ynm5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9ynm5.png" alt="enter image description here" /></a></p>
<p>Info are avaialable in the doc <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.spectrogram.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.spectrogram.html</a></p>
|
python|scipy|signal-processing|spectrogram|time-frequency
| 3 |
1,907,579 | 72,197,815 |
HTML Output in Pyscript
|
<p>I was experimenting in Pyscript and I tried to print an HTML table, but it didn't work. It seems to delete the tags and maintain just the plain text.</p>
<p>Why is that? I tried to search online, but since it is a new technology I didn't find much.</p>
<p>This is my code:</p>
<pre class="lang-py prettyprint-override"><code><py-script>
print("<table>")
for i in range (2):
print("<tr>")
for j in range (2):
print("<td>test</td>")
print("</tr>")
print("</table>")
</py-script>
</code></pre>
<p>And this is the output I get:
<a href="https://i.stack.imgur.com/Qp9ja.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qp9ja.png" alt="pyscript test output" /></a></p>
<p>I tried to replace the <code>print()</code> method with the <code>pyscript.write()</code> method, but it didn't work either.</p>
|
<p>I dug into the source code <a href="https://github.com/pyscript/pyscript/blob/main/pyscriptjs/src/pyscript.py" rel="nofollow noreferrer">pyscript.py</a>,
and at the moment only code similar to JavaScript works for me.</p>
<p>For example this adds <code><h1>Hello</h1></code></p>
<pre><code><div id="output"></div>
<py-script>
element = document.createElement('h1')
element.innerText = "Hello"
document.getElementById("output").append(element)
</py-script>
</code></pre>
<hr />
<p>Full working code</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>PyScript Demo</title>
<!--<link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" />-->
<script defer src="https://pyscript.net/alpha/pyscript.js"></script>
</head>
<body>
<div id="output"></div>
<py-script>
element = document.createElement('h1')
element.innerText = "Hello"
document.getElementById("output").append(element)
</py-script>
</body>
</html>
</code></pre>
<hr />
<p><strong>EDIT:</strong></p>
<p>After digging in the source code I found that <code>pyscript.js</code> runs a function <code>htmlDecode()</code> which removes all tags from the code in <code>&lt;py-script&gt;</code> (and probably it also removes tags when you load the code from a file), and this causes the problem.</p>
<p>See Pyscript issue: <a href="https://github.com/pyscript/pyscript/issues/347" rel="nofollow noreferrer">[BUG] print() doesn't output HTML tags. · Issue #347 · pyscript/pyscript</a></p>
<p>One workaround is to use a replacement - i.e. <code>{{ }}</code> instead of <code>&lt; &gt;</code> in the code - and later use code to replace it back to <code>&lt; &gt;</code>.</p>
<pre class="lang-py prettyprint-override"><code>print( "{{h1}}Hello{{/h1}}".replace("{{", "<").replace("}}", ">") )
</code></pre>
<p>or more universal - using function for this</p>
<pre class="lang-py prettyprint-override"><code>def HTML(text):
return text.replace("{{", "<").replace("}}", ">")
print( HTML("{{h1}}Hello{{/h1}}") )
pyscript.write(some_id, HTML("{{h1}}Hello{{/h1}}") )
document.getElementById(some_id).innerHTML = HTML("{{h1}}Hello{{/h1}}")
</code></pre>
<hr />
<p>Sometimes the problem can also be <code>pyscript.css</code>, which redefines some items so that e.g. <code>&lt;h1&gt;</code> looks like normal text.</p>
<p>One solution is to remove <code>pyscript.css</code>.</p>
<p>Another solution is to use classes from <code>pyscript.css</code>, like in <a href="https://github.com/pyscript/pyscript/blob/main/pyscriptjs/examples/index.html" rel="nofollow noreferrer">examples/index.html</a>:</p>
<pre><code><h1 class="text-4xl font-bold">Hello World</h1>
</code></pre>
<p>which means</p>
<pre><code>print( HTML('{{h1 class="text-4xl font-bold"}}Hello{{/h1}}') )
</code></pre>
|
python|html|pyscript
| 2 |
1,907,580 | 70,693,284 |
Raw Query with SQL function in SQLAlchemy/encode/databases
|
<p>I'm a complete beginner at Python and FastAPI.
I'm using FastAPI and I have a table where the requirement is to encrypt the personal information of the user using the <code>pgcrypto</code> module of <code>PostgreSQL</code>.
The raw query would be something like the following, which can be executed in any database client and runs without any error:</p>
<pre><code>insert into customers (email, gender) values (pgm_sym_encrypt('hello@gmail.com', 'some_secret_key'), 'male')
</code></pre>
<p>How can I execute this query using SQLAlchemy core or <a href="https://github.com/encode/databases" rel="nofollow noreferrer">encode/databases</a>?
I've tried this:</p>
<pre><code>from sqlalchemy import func
query = f"""
insert into customers (email, gender) values
(:email, :gender)
"""
await database.execute(query=query, values={'email': func.pgm_sys_encrypt('hello@gmail.com', 'secret_key'), 'gender': 'male'})
</code></pre>
<p>It didn't work.
I also tried</p>
<pre><code>query = f"""
insert into customers (email, gender) values
(pgm_sys_encrypt('hello@gmail.com', 'secret_key'), :gender)
"""
await database.execute(query=query, values={'gender': 'male'})
</code></pre>
<p>This didn't work either. I've no idea how to execute a function in the raw query. Please help. I've tried so much but I'm totally clueless on this one still now. Thank you in advance for your help.</p>
|
<p>As it's a raw query, you should be able to specify it as you would in raw SQL, so this should work:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.sql import text
query = """
insert into customers (email, gender) values
(pgm_sys_encrypt(:email, :secret_key), :gender)
"""
await database.execute(query=text(query), values={'email': 'hello@gmail.com', 'secret_key': 's3cr37', 'gender': 'male'})
</code></pre>
|
python|sqlalchemy|flask-sqlalchemy|psycopg2|fastapi
| 2 |
1,907,581 | 66,660,868 |
I converted my .py into .exe using pyinstaller and py-to-exe but i am getting error while executing
|
<p><a href="https://i.stack.imgur.com/BDBxn.jpg" rel="nofollow noreferrer">enter image description here</a></p>
<p>The error image and code snippet are attached. I have created the exe using py-to-exe and pyinstaller but am not able to run it. Please help me make this exe executable.</p>
<pre><code>if os.path.exists("ResultCRMDailyFile.csv"):
    os.remove("ResultCRMDailyFile.csv")
else:
    time.sleep(5)

data = pd.read_csv("CRMDailyFile.csv", names=col_names, skiprows=[0], encoding='utf-8', errors='ignore')
data.insert(3, column="UTCDclDate_c", value=data["UTCAppDate_c"])
data["UTCAppDate_c"] = np.where(data["ApprovedDeclined_c"] == 'Approved', data["UTCAppDate_c"], '')
data["UTCDclDate_c"] = np.where(data["ApprovedDeclined_c"] == 'Declined', data["UTCDclDate_c"], '')
data = data.to_csv("ResultCRMDailyFile.csv", index=False)
print(data.head())
</code></pre>
|
<p>Make sure that python is added to the <code>PATH</code>.</p>
<p>And run the following in your terminal</p>
<pre><code>pyinstaller --onefile pythonScriptName.py -w --icon=iconFileName.ico
</code></pre>
<p><code>-w</code> and <code>icon</code> are optional.</p>
<p><code>-w</code> is to remove the black screen (console/ terminal window) while launching the <code>.exe</code> file</p>
<p><code>icon</code> is used to set an icon for your <code>.exe</code> file</p>
|
python|python-3.x|automation|pyinstaller
| 0 |
1,907,582 | 64,865,813 |
handle_info/2 doesn't get the message sent by Python when cast a message using ErlPort and GenServer
|
<p>I am trying to mix Elixir with Python using <a href="http://erlport.org/" rel="nofollow noreferrer">ErlPort</a>, so I decided to read some tutorials about it and the related documentation about everything involved. I understand how the logic works and what each function does. However, I am having problems casting a message and receiving the Python response.</p>
<p>Based on what I read and what I have done, I understand that when I cast a message with <code>cast_count/1</code>, it is handled by <code>handle_cast/2</code> and then by the Python function <code>handle_message()</code>, which casts the message back with the function <code>cast_message()</code> and the imported <code>cast()</code> from <code>erlport.erlang</code>. Finally, Elixir should handle the message received from Python with <code>handle_info/2</code>. I think this function is not being executed but I don't know the reason, although I have investigated this a lot in different sources and in the documentation of GenServer and ErlPort.</p>
<p>In my case I have the next structure: <code>lib/python_helper.ex</code> to make ErlPort works and <code>lib/server.ex</code> to call and cast the Python functions.</p>
<p><code>lib/python_helper.ex</code></p>
<pre><code>defmodule WikiElixirTest.PythonHelper do
def start_instance do
path =
[:code.priv_dir(:wiki_elixir_test), "python"]
|> Path.join()
|> to_charlist()
{:ok, pid} = :python.start([{:python_path, path}])
pid
end
def call(pid, module, function, arguments \\ []) do
pid
|> :python.call(module, function, arguments)
end
def cast(pid, message) do
pid
|> :python.cast(message)
end
def stop_instance(pid) do
pid
|> :python.stop()
end
end
</code></pre>
<p><code>lib/server.ex</code></p>
<pre><code>defmodule WikiElixirTest.Server do
use GenServer
alias WikiElixirTest.PythonHelper
def start_link() do
GenServer.start_link(__MODULE__, [])
end
def init(_args) do
session = PythonHelper.start_instance()
PythonHelper.call(session, :counter, :register_handler, [self()])
{:ok, session}
end
def cast_count(count) do
{:ok, pid} = start_link()
GenServer.cast(pid, {:count, count})
end
def call_count(count) do
{:ok, pid} = start_link()
GenServer.call(pid, {:count, count}, :infinity)
end
def handle_call({:count, count}, _from, session) do
result = PythonHelper.call(session, :counter, :counter, [count])
{:reply, result, session}
end
def handle_cast({:count, count}, session) do
PythonHelper.cast(session, count)
{:noreply, session}
end
def handle_info({:python, message}, session) do
IO.puts("Received message from Python: #{inspect(message)}")
{:stop, :normal, session}
end
def terminate(_reason, session) do
PythonHelper.stop_instance(session)
:ok
end
end
</code></pre>
<p><code>priv/python/counter.py</code></p>
<pre class="lang-py prettyprint-override"><code>import time
import sys
from erlport.erlang import set_message_handler, cast
from erlport.erlterms import Atom
message_handler = None
def cast_message(pid, message):
cast(pid, (Atom('python', message)))
def register_handler(pid):
global message_handler
message_handler = pid
def handle_message(count):
try:
print('Received message from Elixir')
print(f'Count: {count}')
result = counter(count)
if message_handler:
cast_message(message_handler, result)
except Exception as e:
print(e)
pass
def counter(count=100):
i = 0
data = []
while i < count:
time.sleep(1)
data.append(i+1)
i = i + 1
return data
set_message_handler(handle_message)
</code></pre>
<p><strong>Note</strong>: I removed <code>@doc</code> to lighten the code snippets. And yes, I know <code>sys</code> isn't being used at this moment and that catching <code>Exception</code> in a Python <code>try</code> block is not the best approach; it is just temporary.</p>
<p>If I test it in <code>iex</code> (<code>iex -S mix</code>), I get the next:</p>
<pre><code>iex(1)> WikiElixirTest.Server.cast_count(19)
Received message from Elixir
Count: 19
:ok
</code></pre>
<p>I want to note that <code>call_count/1</code> and <code>handle_call/1</code> works fine:</p>
<pre><code>iex(3)> WikiElixirTest.Server.call_count(10)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
</code></pre>
<p>What am I doing wrong, such that the communication from Elixir to Python is successful but the communication from Python back to Elixir is not when I cast a message?</p>
|
<p>Well, @Everett pointed out in the question that I closed the atom in the wrong place in <code>counter.py</code>.</p>
<pre class="lang-py prettyprint-override"><code>def cast_message(pid, message):
cast(pid, (Atom('python', message)))
</code></pre>
<p>This must be:</p>
<pre class="lang-py prettyprint-override"><code>def cast_message(pid, message):
cast(pid, (Atom('python'), message))
</code></pre>
<p>Although this alone doesn't solve the problem, it helped me get to the solution. After fixing the first part (<code>(Atom('python'), message)</code>), when I execute <code>cast_count/1</code> I get a message from Python:</p>
<pre><code>iex(1)> WikiElixirTest.Server.cast_count(2)
Received message from Elixir
Count: 2
:ok
bytes object expected
</code></pre>
<p>It confused me a bit at first because I expected something like "Received message from Python: bytes object expected", due to <code>handle_info/2</code>. However, I decided to inspect the ErlPort code and I found the error in <a href="https://github.com/hdima/erlport/blob/master/priv/python3/erlport/erlterms.py#L66" rel="nofollow noreferrer">line 66 of <code>erlterms.py</code></a>, part of the <code>Atom</code> class. Thus the error was in the first part of the message, the atom, so I specified it as binary:</p>
<pre class="lang-py prettyprint-override"><code>def cast_message(pid, message):
    cast(pid, (Atom(b'python'), message))
</code></pre>
<p>Then I checked it:</p>
<pre><code>iex(2)> WikiElixirTest.Server.cast_count(2)
:ok
Received message from Elixir
Count: 2
Received message from Python: [[1, 2]]
</code></pre>
<p>And it works nicely! So the problem there, even taking into account my mistyping of the message tuple, was that <code>Atom()</code> needs a binary.</p>
<p>This misunderstanding is probably because the first <a href="https://medium.com/hackernoon/mixing-python-with-elixir-ii-async-e8586f9b2d53" rel="nofollow noreferrer">ErlPort tutorial</a> I followed uses Python 2, a version in which <code>str</code> is a string of bytes; in Python 3, <code>str</code> is a string of text, so it is necessary to convert the string to bytes.</p>
|
python|elixir|gen-server
| 0 |
1,907,583 | 68,534,661 |
Is there a way to get the index of an element and store it to update an XPATH in Python?
|
<p>I have a list that looks like this in my UI:
<a href="https://i.stack.imgur.com/Ts6G0.png" rel="nofollow noreferrer">UI image</a></p>
<p>The XPATH for the first row's "..." button is <code>//tbody/tr[1]/td[2]/a[1]/i[1]</code>, and each subsequent row's XPATH has the <code>tr[]</code> portion updated depending on the index of the row.</p>
<p>Is it possible to store the xpath of the button as:
<code>button = //tbody/tr[i]/td[2]/a[1]/i[1] </code>?</p>
<p>And if so, if I create a new rule, is there a way I can update <code>tr[i]</code> to reflect the index of the newly added rule so I can call <code>button.click()</code>?</p>
|
<p>It looks like you want to make the indexing a bit dynamic.</p>
<p>So, for this <strong>xpath</strong>:</p>
<pre><code>//tbody/tr[1]/td[2]/a[1]/i[1]
</code></pre>
<p>you can do the following:</p>
<pre><code>i = 1
button = f"//tbody/tr[{i}]/td[2]/a[1]/i[1]"
</code></pre>
<p>and use it in Selenium.</p>
<p>You can also use it in a loop with different indices, as shown below.</p>
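<p>For example, a minimal sketch (assuming a Selenium <code>driver</code> is already set up and <code>row_count</code> holds the number of rows; both are placeholders of mine):</p>
<pre><code>from selenium.webdriver.common.by import By

for i in range(1, row_count + 1):
    button_xpath = f"//tbody/tr[{i}]/td[2]/a[1]/i[1]"
    driver.find_element(By.XPATH, button_xpath).click()
</code></pre>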
|
python|selenium|selenium-webdriver|automation
| 0 |
1,907,584 | 71,549,279 |
How to generate a series containing each date for the following month relative to a given date in pandas
|
<p>My start point is the variable <code>data_published_date</code></p>
<pre><code>data_published_date
Out[47]: "DatetimeIndex(['2022-03-18'], dtype='datetime64[ns]', freq=None)"
</code></pre>
<p>as it's a date in March, I wish to generate EACH day for the NEXT month as timestamp, like</p>
<pre><code>['2022-04-01', '2022-04-02', '2022-04-03'.....'2022-04-30']
</code></pre>
<p>I tried</p>
<pre><code>index = pd.date_range(data_published_date + pd.offsets.MonthBegin(n=1), data_published_date + pd.offsets.MonthEnd(n=1))
</code></pre>
<p>and received</p>
<blockquote>
<p>TypeError: Cannot convert input [DatetimeIndex(['2022-04-01'],
dtype='datetime64[ns]', freq=None)] of type <class
'pandas.core.indexes.datetimes.DatetimeIndex'> to Timestamp</p>
</blockquote>
<p>tried then 2 steps</p>
<pre><code>start_date = data_published_date + pd.offsets.MonthBegin(n=1)
end_date = data_published_date + pd.offsets.MonthEnd(n=2)
</code></pre>
<p>but can't find a solution to convert these 2 Datetimeindex to timestamp so that I can use <code>pd.date_range</code> to reach my objective.</p>
<p>Any idea?</p>
|
<p>I think the problem is that <code>data_published_date</code> is an Index object, but <code>date_range</code> is expecting a singleton value. Since it contains only a single element, we could index it and use that instead:</p>
<pre><code>out = pd.date_range(data_published_date[0] + pd.offsets.MonthBegin(n=1),
data_published_date[0] + pd.offsets.MonthEnd(n=2))
</code></pre>
<p>Output:</p>
<pre><code>DatetimeIndex(['2022-04-01', '2022-04-02', '2022-04-03', '2022-04-04',
'2022-04-05', '2022-04-06', '2022-04-07', '2022-04-08',
'2022-04-09', '2022-04-10', '2022-04-11', '2022-04-12',
'2022-04-13', '2022-04-14', '2022-04-15', '2022-04-16',
'2022-04-17', '2022-04-18', '2022-04-19', '2022-04-20',
'2022-04-21', '2022-04-22', '2022-04-23', '2022-04-24',
'2022-04-25', '2022-04-26', '2022-04-27', '2022-04-28',
'2022-04-29', '2022-04-30'],
dtype='datetime64[ns]', freq='D')
</code></pre>
|
python|python-3.x|pandas|time-series|timestamp
| 1 |
1,907,585 | 71,696,157 |
Move ticks and labels to the top of a pyplot figure
|
<p>As per <a href="https://stackoverflow.com/questions/14406214/moving-x-axis-to-the-top-of-a-plot-in-matplotlib">this question</a>, moving the xticks and labels of an <code>AxesSubplot</code> object can be done with <code>ax.xaxis.tick_top()</code>. However, I cannot get this to work with multiple axes inside a figure.</p>
<p>Essentially, I want to move the xticks to the very top of the figure (only displayed at the top for the subplots in the first row).</p>
<p>Here's a silly example of what I'm trying to do:</p>
<pre class="lang-py prettyprint-override"><code>fig, axs = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True)
fig.set_figheight(5)
fig.set_figwidth(10)
for ax in axs.flatten():
ax.xaxis.tick_top()
plt.show()
</code></pre>
<p>Which shows</p>
<p><a href="https://i.stack.imgur.com/VLbiB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VLbiB.png" alt="Close try" /></a></p>
<p>My desired result is this same figure but with the <code>xticks</code> and <code>xticklabels</code> at the top of the two plots in the first row.</p>
|
<p>Credits to @BigBen for the <code>sharex</code> comment. It is indeed what's preventing <code>tick_top</code> from working.</p>
<p>To get your results, you can combine using <code>tick_top</code> for the two top plots and use <code>tick_params</code> for the bottom two:</p>
<pre><code>fig, axs = plt.subplots(2, 2, sharex=False) # Do not share xaxis
for ax in axs.flatten()[0:2]:
ax.xaxis.tick_top()
for ax in axs.flatten()[2:]:
ax.tick_params(axis='x',which='both',labelbottom=False)
</code></pre>
<p>See a live implementation <a href="https://1000words-hq.com/n/zRGDjshUglK" rel="nofollow noreferrer">here</a>.</p>
|
python|matplotlib|subplot
| 1 |
1,907,586 | 5,200,911 |
Grouping CheckboxSelectMultiple Options in Django
|
<p>In my Django App I have the following model:</p>
<pre><code>class SuperCategory(models.Model):
name = models.CharField(max_length=100,)
slug = models.SlugField(unique=True,)
class Category(models.Model):
name = models.CharField(max_length=100,)
slug = models.SlugField(unique=True,)
super_category = models.ForeignKey(SuperCategory)
</code></pre>
<p>What I'm trying to accomplish in Django's Admin Interface is the rendering of <strong>Category</strong> using widget CheckboxSelectMultiple but with <strong>Category</strong> somehow grouped by <strong>SuperCategory</strong>, like this:</p>
<hr>
<blockquote>
<p><strong>Category:</strong> </p>
<p>Sports: <- Item of SuperCategory<br>
[ ] Soccer <- Item of Category<br>
[ ] Baseball <- Item of Category<br>
[ ] ...</p>
<p>Politics: <- Another item of SuperCategory<br>
[ ] Latin America<br>
[ ] North america<br>
[ ] ... </p>
</blockquote>
<hr>
<p>Does anybody have a nice suggestion on how to do this?</p>
<p>Many thanks.</p>
|
<p>After some struggle, here is what I got.</p>
<p>First, make ModelAdmin call a ModelForm: </p>
<pre><code>class OptionAdmin(admin.ModelAdmin):
form = forms.OptionForm
</code></pre>
<p>Then, in the form, use use a custom widget to render:</p>
<pre><code>category = forms.ModelMultipleChoiceField(queryset=models.Category.objects.all(),widget=AdminCategoryBySupercategory)
</code></pre>
<p>Finally, the widget:</p>
<pre><code>class AdminCategoryBySupercategory(forms.CheckboxSelectMultiple):
def render(self, name, value, attrs=None, choices=()):
if value is None: value = []
has_id = attrs and 'id' in attrs
final_attrs = self.build_attrs(attrs, name=name)
output = [u'<ul>']
# Normalize to strings
str_values = set([force_unicode(v) for v in value])
supercategories = models.SuperCategory.objects.all()
for supercategory in supercategories:
output.append(u'<li>%s</li>'%(supercategory.name))
output.append(u'<ul>')
del self.choices
self.choices = []
categories = models.Category.objects.filter(super_category=supercategory)
for category in categories:
self.choices.append((category.id,category.name))
for i, (option_value, option_label) in enumerate(chain(self.choices, choices)):
if has_id:
final_attrs = dict(final_attrs, id='%s_%s' % (attrs['id'], i))
label_for = u' for="%s"' % final_attrs['id']
else:
label_for = ''
cb = forms.CheckboxInput(final_attrs, check_test=lambda value: value in str_values)
option_value = force_unicode(option_value)
rendered_cb = cb.render(name, option_value)
option_label = conditional_escape(force_unicode(option_label))
output.append(u'<li><label%s>%s %s</label></li>' % (label_for, rendered_cb, option_label))
output.append(u'</ul>')
output.append(u'</li>')
output.append(u'</ul>')
return mark_safe(u'\n'.join(output))
</code></pre>
<p>Not the most elegant solution, but hey, it worked.</p>
|
python|django|django-admin|django-widget
| 5 |
1,907,587 | 61,755,900 |
Django render dynamic image in template
|
<p>In a Django view, I can generate a dynamic image (a graph in PNG format), and it creates a response that is an image of my graph. I can get it to display in the browser, but there is no web page with it - it's just the image.
Now I'd like to embed this image in an HTML template and render it. How can I do that?
(This is my first Django project.)</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import io
# Initialize the matplotlib figure
f, ax = plt.subplots(figsize=(6, 15))
width = my_data['bar_length']
data_names = my_data['ObjectNames']
y_pos = np.arange(len(data_names))
norm = plt.Normalize(my_data['bar_length'].min(), my_data['bar_length'].max())
cmap = plt.get_cmap("RdYlGn")
# Create horizontal bars
plt.barh(y_pos, width, color=cmap(norm(my_data['bar_length'].values)))
# Create names on the y-axis
plt.yticks(y_pos, data_names)
FigureCanvasAgg(f)
buf = io.BytesIO()
plt.savefig(buf, format='png')
plt.close(f)
response = HttpResponse(buf.getvalue(), content_type='image/png')
# This works, but it generates just an image (no web page)
return response
# This ends up displaying the bytes of the image as text
#return render(request, 'my_app/graph.html', {'graph1': buf.getvalue()})
# This is what is in the graph.html template
# <div class="graph1">
# {{graph1 | safe}}
# </div>
</code></pre>
|
<p>Solved: I can pass the image as a base64 encoded string</p>
<pre><code>buf = io.BytesIO()
plt.savefig(buf, format='png')
plt.close(f)
img_b64 = base64.b64encode(buf.getvalue()).decode()
return render(request, 'my_app/graph.html', {'graph1b64': img_b64})
# in graph.html
<div class="graph1">
<img src="data:image/png;base64, {{ graph1b64 }}" alt="graph1" />
</div>
</code></pre>
|
django|python-3.x|matplotlib
| 1 |
1,907,588 | 61,974,667 |
Nearest neighbor algorithm
|
<p>I have my nearest-neighbor algorithm and I am trying to go through all the points in a cycle. The length of the path is calculated as the sum of Euclidean distances between adjacent points on the path, as I did in the function "dist". The problem is it doesn't print the last point for some reason.</p>
<pre><code>Input style:
n #number of points
x1, y2 #coordinates of the point
x2, y2
...
example:
4
0 0
1 0
1 1
0 1
it gives me output >> 1 2 3
the desired output should be >> 1 2 3 4
</code></pre>
<pre><code>from math import sqrt
n = int(input())
points = []
for i in range(0, n):
x, y = list(map(float, input().split()))
points.append([x,y])
def dist(ip1, ip2):
global points
p1 = points[ip1]
p2 = points[ip2]
return sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2)
circuit = set()
start_vertex = 0
dark_side = set(range(n)) - {start_vertex}
visited_islands = []
current_vertex = start_vertex
while len(dark_side) > 0:
min_distance = None
best_v = None
for v in dark_side:
if ((min_distance is None) or
(min_distance > dist(current_vertex, v))):
min_distance = dist(current_vertex, v)
best_v = v
visited_islands.append(current_vertex+1)
circuit.add((current_vertex, best_v))
dark_side.remove(best_v)
current_vertex = best_v
# visited_islands.append(visited_islands[0]) #going to the start when done
print(*visited_islands)
# print(len(visited_islands))
</code></pre>
|
<p>I modified the last piece of your code, assuming that you are looking for the nearest neighbor for each point in the input.</p>
<pre><code>circuit = set()
visited_islands = []
for current_vertex in range(n):
min_distance = None
best_v = None
for v in range(n):
if current_vertex == v:
continue
if ((min_distance is None) or
(min_distance > dist(current_vertex, v))):
min_distance = dist(current_vertex, v)
best_v = v
circuit.add((current_vertex, best_v))
visited_islands.append(current_vertex+1)
print(*visited_islands)
print(circuit)
</code></pre>
<p>Output:</p>
<p>1 2 3 4</p>
<p>{(0, 1), (1, 0), (2, 1), (3, 0)}</p>
|
python|algorithm|nearest-neighbor
| 0 |
1,907,589 | 67,252,385 |
How can I group a pandas dataframe by time with a minimal amount of rows for each group?
|
<p>I have a dataframe that looks like this;</p>
<pre><code> created_at value1 value2 value3
2021-04-25 11:38:33 1 1 5
2021-04-25 11:38:47 4 3 6
2021-04-25 11:39:36 1 1 8
2021-04-25 11:39:47 6 5 5
2021-04-25 11:40:50 8 7 3
</code></pre>
<p>I am trying to create groups with the mean values within timeframes of 2 minutes.</p>
<p>I am using the following code;</p>
<pre><code>pd.DataFrame(df.groupby([pd.Grouper(key='created_at', freq='2Min')]).mean())
</code></pre>
<p>This works but at the moment I am trying to add a requirement that the Grouper needs at least 20 rows within that timeframe in order to aggregate the mean values but I can't find a solution to this.</p>
|
<p>One-liner:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby(
[pd.Grouper(key='created_at', freq='2Min')]
).agg(
    lambda x: x.mean() if len(x) >= 20 else None  # get None if there are not at least 20 rows in the group
).dropna(
how='all', axis=0 # remove all the rows with all na values
)
</code></pre>
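<p>An alternative sketch (my own variant, not from the one-liner above) that first drops the 2-minute bins with fewer than 20 rows via <code>filter</code> and then averages the rest:</p>
<pre class="lang-py prettyprint-override"><code>g = df.groupby(pd.Grouper(key='created_at', freq='2Min'))
big_enough = g.filter(lambda x: len(x) >= 20)  # keep only rows from large enough bins
out = big_enough.groupby(pd.Grouper(key='created_at', freq='2Min')).mean()
</code></pre>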
|
python|pandas
| 2 |
1,907,590 | 60,607,842 |
Python-telegram-bot issue
|
<p>I have deployed a Telegram bot (with Django) on Heroku with Python version 3.6.9.
It worked well without any issues.
After some months I made some changes, and while trying to deploy it again I get issues.
Heroku doesn't support Python 3.6.9 anymore; it supports 3.6.10.
So I set up a venv with Python 3.6.10 and I'm still getting the same issue after running the server.
Short issue:</p>
<pre><code>    from .callbackcontext import CallbackContext
  File "/home/usr/bot-name/venv/lib/python3.6/site-packages/telegram/ext/callbackcontext.py", line 21, in <module>
    from telegram import Update
ImportError: cannot import name 'Update' from 'telegram'
</code></pre>
<p>(/home/usr/bot-name/venv/lib/python3.6/site-packages/telegram/<strong>init</strong>.py)</p>
<p>Also I have tried python 3.7.6 (it's also supported by Heroku) but I get the same issue after running the server:</p>
<pre><code>python manage.py runserver
Watching for file changes with StatReloader
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/usr/bot-name/venv/lib/python3.7/site-packages/django/utils/autoreload.py", line 54, in wrapper
fn(*args, **kwargs)
File "/home/usr/bot-name/venv/lib/python3.7/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
autoreload.raise_last_exception()
File "/home/usr/bot-name/venv/lib/python3.7/site-packages/django/utils/autoreload.py", line 77, in raise_last_exception
raise _exception[1]
File "/home/usr/bot-name/venv/lib/python3.7/site-packages/django/core/management/init.py", line 337, in execute
autoreload.check_errors(django.setup)()
File "/home/usr/bot-name/venv/lib/python3.7/site-packages/django/utils/autoreload.py", line 54, in wrapper
fn(*args, **kwargs)
File "/home/usr/bot-name/venv/lib/python3.7/site-packages/django/init.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/usr/bot-name/venv/lib/python3.7/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/home/usr/bot-name/venv/lib/python3.7/site-packages/django/apps/config.py", line 116, in create
mod = import_module(mod_path)
File "/home/usr/bot-name/venv/lib/python3.7/importlib/init.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 1006, in _gcd_import
File "", line 983, in _find_and_load
File "", line 967, in _find_and_load_unlocked
File "", line 677, in _load_unlocked
File "", line 728, in exec_module
File "", line 219, in call_with_frames_removed
File "/home/usr/bot-name/venv/lib/python3.7/site-packages/django_telegrambot/apps.py", line 9, in
from telegram.ext import Dispatcher
File "/home/usr/bot-name/venv/lib/python3.7/site-packages/telegram/ext/init.py", line 25, in
from .callbackcontext import CallbackContext
File "/home/usr/bot-name/venv/lib/python3.7/site-packages/telegram/ext/callbackcontext.py", line 21, in
from telegram import Update
ImportError: cannot import name 'Update' from 'telegram' (/home/usr/bot-name/venv/lib/python3.7/site-packages/telegram/init.py)
</code></pre>
<p>Here you can find my requirements :</p>
<pre><code>cffi==1.13.2
cryptography==2.8
dj-database-url==0.5.0
Django==2.2.7
django-heroku==0.3.1
django-telegrambot==1.0.1
et-xmlfile==1.0.1
future==0.18.2
gunicorn==20.0.4
jdcal==1.4.1
mysqlclient==1.4.5
openpyxl==3.0.2
pipenv==2018.11.26
psycopg2==2.8.4
pycparser==2.19
python-telegram-bot==12.2.0
pytz==2019.3
six==1.13.0
sqlparse==0.3.0
telegram==0.0.1
tornado==6.0.3
virtualenv==16.7.9
virtualenv-clone==0.5.3
whitenoise==5.0.1
</code></pre>
|
<p>For me the solution was to uninstall both the python-telegram-bot and telegram packages and install only the python-telegram-bot package</p>
<pre><code>pip uninstall python-telegram-bot telegram
pip install python-telegram-bot --upgrade
</code></pre>
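<p>If you deploy from the requirements file shown in the question, apply the same fix there: the stray <code>telegram==0.0.1</code> entry shadows the real package, so keep only <code>python-telegram-bot</code>. For example:</p>
<pre><code># requirements.txt (excerpt)
python-telegram-bot==12.2.0
# telegram==0.0.1   (remove this line)
</code></pre>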
|
python|django|heroku|telegram|python-telegram-bot
| 0 |
1,907,591 | 71,392,343 |
Compare two dataframes and find rows based on a value with condition
|
<p>I have two DataFrames. The column "video_path" is common to both. I need to extract the details from df1 where it matches df2, and also mark each row of df2 with yes/no depending on whether a match exists.</p>
<p>df1</p>
<p><a href="https://i.stack.imgur.com/TvOFx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TvOFx.png" alt="enter image description here" /></a></p>
<p>df2</p>
<p><a href="https://i.stack.imgur.com/GFZAH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GFZAH.png" alt="enter image description here" /></a></p>
<p>Expected result:</p>
<p><a href="https://i.stack.imgur.com/L7y8m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L7y8m.png" alt="enter image description here" /></a></p>
<p>What I tried:</p>
<pre><code>newdf= df1.merge(frame, left_on='video_path', right_on='Video_path', how='inner')
</code></pre>
<p>But I'm sure it's not correct.</p>
<p>code to create data frames</p>
<pre><code>df1 = {'video_path': {0: 'video/file_path/1.mp4', 1: 'video/file_path/1.mp4', 2: 'video/file_path/1.mp4', 3: 'video/file_path/2.mp4', 4: 'video/file_path/2.mp4', 5: 'video/file_path/2.mp4', 6: 'video/file_path/2.mp4', 7: 'video/file_path/2.mp4', 8: 'video/file_path/3.mp4', 9: 'video/file_path/3.mp4', 10: 'video/file_path/3.mp4', 11: 'video/file_path/4.mp4', 12: 'video/file_path/4.mp4', 13: 'video/file_path/4.mp4', 14: 'video/file_path/4.mp4', 15: 'video/file_path/5.mp4', 16: 'video/file_path/5.mp4', 17: 'video/file_path/5.mp4', 18: 'video/file_path/5.mp4', 19: 'video/file_path/6.mp4', 20: 'video/file_path/6.mp4', 21: 'video/file_path/6.mp4', 22: 'video/file_path/6.mp4', 23: 'video/file_path/6.mp4'}, 'frame_details': {0: 'frame_1.jpg', 1: 'frame_2.jpg', 2: 'frame_3.jpg', 3: 'frame_1.jpg', 4: 'frame_2.jpg', 5: 'frame_3.jpg', 6: 'frame_4.jpg', 7: 'frame_5.jpg', 8: 'frame_1.jpg', 9: 'frame_2.jpg', 10: 'frame_3.jpg', 11: 'frame_1.jpg', 12: 'frame_2.jpg', 13: 'frame_3.jpg', 14: 'frame_4.jpg', 15: 'frame_1.jpg', 16: 'frame_2.jpg', 17: 'frame_3.jpg', 18: 'frame_4.jpg', 19: 'frame_1.jpg', 20: 'frame_2.jpg', 21: 'frame_3.jpg', 22: 'frame_4.jpg', 23: 'frame_5.jpg'}, 'width': {0: 520, 1: 520, 2: 520, 3: 120, 4: 120, 5: 120, 6: 120, 7: 120, 8: 720, 9: 720, 10: 720, 11: 1080, 12: 1080, 13: 1080, 14: 1080, 15: 480, 16: 480, 17: 480, 18: 480, 19: 640, 20: 640, 21: 640, 22: 640, 23: 640}, 'height': {0: 225, 1: 225, 2: 225, 3: 120, 4: 120, 5: 120, 6: 120, 7: 120, 8: 480, 9: 480, 10: 480, 11: 1920, 12: 1920, 13: 1920, 14: 1920, 15: 640, 16: 640, 17: 640, 18: 640, 19: 480, 20: 480, 21: 480, 22: 480, 23: 480}, 'hasAudio': {0: 'yes', 1: 'yes', 2: 'yes', 3: 'yes', 4: 'yes', 5: 'yes', 6: 'yes', 7: 'yes', 8: 'yes', 9: 'yes', 10: 'yes', 11: 'no', 12: 'no', 13: 'no', 14: 'no', 15: 'no', 16: 'no', 17: 'no', 18: 'no', 19: 'yes', 20: 'yes', 21: 'yes', 22: 'yes', 23: 'yes'}}
</code></pre>
<pre><code>df2 = {'Video_path': {0: 'video/file_path/1.mp4',
1: 'video/file_path/2.mp4',
2: 'video/file_path/4.mp4',
3: 'video/file_path/6.mp4',
4: 'video/file_path/7.mp4',
5: 'video/file_path/8.mp4',
6: 'video/file_path/9.mp4'},
 'isPresent': {0: nan, 1: nan, 2: nan, 3: nan, 4: nan, 5: nan, 6: nan}}
</code></pre>
|
<p>Swap <code>df1</code> and <code>df2</code> with left join and <code>indicator</code> parameter, last set column <code>isPresent</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a>:</p>
<pre><code>newdf= df2.merge(df1.rename(columns={'video_path':'Video_path'}),
on='Video_path',
how='left',
indicator=True)
newdf['isPresent'] = newdf.pop('_merge').map({'both':'yes', 'left_only':'no'})
print (newdf)
Video_path isPresent frame_details width height hasAudio
0 video/file_path/1.mp4 yes frame_1.jpg 520.0 225.0 yes
1 video/file_path/1.mp4 yes frame_2.jpg 520.0 225.0 yes
2 video/file_path/1.mp4 yes frame_3.jpg 520.0 225.0 yes
3 video/file_path/2.mp4 yes frame_1.jpg 120.0 120.0 yes
4 video/file_path/2.mp4 yes frame_2.jpg 120.0 120.0 yes
5 video/file_path/2.mp4 yes frame_3.jpg 120.0 120.0 yes
6 video/file_path/2.mp4 yes frame_4.jpg 120.0 120.0 yes
7 video/file_path/2.mp4 yes frame_5.jpg 120.0 120.0 yes
8 video/file_path/4.mp4 yes frame_1.jpg 1080.0 1920.0 no
9 video/file_path/4.mp4 yes frame_2.jpg 1080.0 1920.0 no
10 video/file_path/4.mp4 yes frame_3.jpg 1080.0 1920.0 no
11 video/file_path/4.mp4 yes frame_4.jpg 1080.0 1920.0 no
12 video/file_path/6.mp4 yes frame_1.jpg 640.0 480.0 yes
13 video/file_path/6.mp4 yes frame_2.jpg 640.0 480.0 yes
14 video/file_path/6.mp4 yes frame_3.jpg 640.0 480.0 yes
15 video/file_path/6.mp4 yes frame_4.jpg 640.0 480.0 yes
16 video/file_path/6.mp4 yes frame_5.jpg 640.0 480.0 yes
17 video/file_path/7.mp4 no NaN NaN NaN NaN
18 video/file_path/8.mp4 no NaN NaN NaN NaN
19 video/file_path/9.mp4 no NaN NaN NaN NaN
</code></pre>
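<p>A variant without renaming (same data, same result) uses <code>left_on</code>/<code>right_on</code> and drops the duplicated key column afterwards:</p>
<pre><code>newdf = df2.merge(df1, left_on='Video_path', right_on='video_path',
                  how='left', indicator=True)
newdf['isPresent'] = newdf.pop('_merge').map({'both':'yes', 'left_only':'no'})
newdf = newdf.drop(columns='video_path')
</code></pre>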
|
python|pandas|dataframe
| 1 |
1,907,592 | 71,300,038 |
write csv in one cell
|
<p>I want the following script to write its output into a single CSV cell. Currently the script creates 2 columns, but I only want to write it in one cell. See below for the desired output format.</p>
<p>Code:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import csv
URL = 'https://www.electrive.com/2022/02/20/byd-planning-model-3-like-800-volt-sedan-called-seal/'
(response := requests.get(URL)).raise_for_status()
soup = BeautifulSoup(response.text, 'lxml')
paragraphs = soup.find('section', class_='content').find_all('p')
# the sources in the last paragraph
sources = paragraphs[-1].find_all('a')
# put the sources name and link in a dict
sources_links = []
for source in sources:
sources_links.append(f"{source.text}({source['href']})")
# write in csv
with open('electrive_scrape_source.csv', 'w') as csv_file:
csv_writer = csv.writer(csv_file)
csv_writer.writerow(['Source'])
csv_writer.writerow(sources_links)
</code></pre>
<p>Desired output in one csv cell - separated with a comma:</p>
<pre><code>xchuxing.com (https://xchuxing.com/article/45850), cnevpost.com (https://cnevpost.com/2022/02/18/byd-seal-set-to-become-new-tesla-model-3-challenger/)
</code></pre>
|
<p>First to get the comma separated string we can use the <a href="https://docs.python.org/3/library/stdtypes.html#str.join" rel="nofollow noreferrer"><code>'some delimiter'.join</code></a> function to combine everything in the list into a single string separated by that delimiter. For the delimiter you'd use either <code>','</code> or use <code>', '</code> if you would like to automatically add spaces after the commas.</p>
<p>Then secondly, because <a href="https://docs.python.org/3/library/csv.html#csv.csvwriter.writerow" rel="nofollow noreferrer"><code>csv_writer.writerow</code></a> expects a list (or some other iterable) we need to wrap our comma separated string in a list, thus creating a list that only has one item in it. Like so.</p>
<pre><code>csv_writer.writerow( [ ', '.join(sources_links) ] )
</code></pre>
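<p>Putting it together, a minimal sketch of the writing part (reusing the <code>sources_links</code> list built earlier in your script) could look like this:</p>
<pre><code>with open('electrive_scrape_source.csv', 'w', newline='') as csv_file:
    csv_writer = csv.writer(csv_file)
    csv_writer.writerow(['Source'])
    csv_writer.writerow([', '.join(sources_links)])  # one row, one cell
</code></pre>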
|
python|csv|web-scraping
| 0 |
1,907,593 | 64,489,249 |
Generating better help from argparse when nargs='*'
|
<p>Like many command line tools, mine accepts optional filenames. Argparse seems to support this via <code>nargs='*'</code>, which is working for me as expected:</p>
<pre class="lang-py prettyprint-override"><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
'files',
help='file(s) to parse instead of stdin',
nargs='*')
parser.parse_args()
</code></pre>
<p>However, the help output is bizarre:</p>
<pre><code>$ ./help.py -h
usage: help.py [-h] [files [files ...]]
</code></pre>
<p>How can I avoid the nested optional and repeated parameter name? The repetition adds no information beyond [files ...], which is the traditional way optional parameter lists are indicated on Unix:</p>
<pre><code>$ grep --help
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
$ ls --help
Usage:
exa [options] [files...]
$ vim --help
Usage:
nvim [options] [file ...] Edit file(s)
</code></pre>
<p>Any help is appreciated. I'm trying argparse because using it seems to be a Python best practice, but this help output is a dealbreaker for me.</p>
|
<p>This was fixed in Python 3.9, see <a href="https://bugs.python.org/issue38438" rel="nofollow noreferrer">https://bugs.python.org/issue38438</a> and <a href="https://github.com/python/cpython/commit/a0ed99bca8475cbc82e9202aa354faba2a4620f4" rel="nofollow noreferrer">commit <code>a0ed99bc</code></a> that fixed it.</p>
<p>Your code produces the usage message you expect if run on 3.9:</p>
<pre class="lang-py prettyprint-override"><code>Python 3.9.0 (default, Oct 12 2020, 02:44:01)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('files', help='file(s) to parse instead of stdin', nargs='*')
_StoreAction(option_strings=[], dest='files', nargs='*', const=None, default=None, type=None, choices=None, help='file(s) to parse instead of stdin', metavar=None)
>>> parser.print_help()
usage: [-h] [files ...]
</code></pre>
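<p>If you cannot upgrade yet, one possible workaround (a sketch, not the only option) is to override the generated usage string yourself:</p>
<pre class="lang-py prettyprint-override"><code>import argparse

# On Python < 3.9, supply the usage line manually to avoid "[files [files ...]]"
parser = argparse.ArgumentParser(usage='%(prog)s [-h] [files ...]')
parser.add_argument('files', help='file(s) to parse instead of stdin', nargs='*')
parser.parse_args()
</code></pre>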
|
python|argparse
| 6 |
1,907,594 | 64,301,782 |
How to get text in `::before` section with selenium and python?
|
<p>I would like to reach the text stored after <code>::before</code> in the <code>variant__available-qty</code> class. In the example it is "14".</p>
<p>When I try with:</p>
<pre><code>variants = driver.find_elements_by_class_name("variant__available-qty")
</code></pre>
<p>or</p>
<pre><code>variants = driver.find_elements_by_class_name("variants__container-item")
</code></pre>
<p>It responds with an empty list, although there should be at least three elements.
I tried with executing some JS scripts (<code>driver.execute_script(...)</code>) but with no success.</p>
<pre class="lang-html prettyprint-override"><code><div class="variants__container">
<div class="variants__container-headers">
<div class="variants__header--item">Rozmiar</div>
<div class="variants__header--qty">Wybierz ilość</div>
</div>
<div class="variants__container-items">
<div id="variant__70224" class="variants__container-item">
<div class="variant__price">
<div class="price">71.27&nbsp;zł</div>
</div>
<div class="variant__item">
<div class="variant__attributes">35-37 </div>
<div class="variant__sku">610306143389</div>
</div>
<div class="variant__qty">
<form method="post" action="https://b2b.snapoutdoor.pl/checkout/cart/add/uenc/aHR0cHM6Ly9iMmIuc25hcG91dGRvb3IucGwvL3Jlc3QvVjEvZXh0ZW5kdmFyaWFudHN0b2NhcnQvODY1NDY_Xz0xNjAyMzE4NzI4Nzcz/product/86546/">
<div class="qty">
<div class="qty-down">-</div>
<input type="number" name="qty" data-productid="70224" onfocus="this.value=''" value="0" min="0">
<div class="qty-up">+</div>
</div>
<input type="hidden" name="super_attribute[233]" value="2028"><input type="hidden" name="form_key" value="OPk3fByYxbAMTgnu"><button disabled="" class="add-to-cart action primary" type="submit">Do koszyka</button>
</form>
</div>
<div class="variant__available-qty">12</div>
</div>
<div id="variant__70225" class="variants__container-item">
<div class="variant__price">
<div class="price">71.27&nbsp;zł</div>
</div>
<div class="variant__item">
<div class="variant__attributes">38-40 </div>
<div class="variant__sku">610306143396</div>
</div>
<div class="variant__qty">
<form method="post" action="https://b2b.snapoutdoor.pl/checkout/cart/add/uenc/aHR0cHM6Ly9iMmIuc25hcG91dGRvb3IucGwvL3Jlc3QvVjEvZXh0ZW5kdmFyaWFudHN0b2NhcnQvODY1NDY_Xz0xNjAyMzE4NzI4Nzcz/product/86546/">
<div class="qty">
<div class="qty-down">-</div>
<input type="number" name="qty" data-productid="70225" onfocus="this.value=''" value="0" min="0">
<div class="qty-up">+</div>
</div>
<input type="hidden" name="super_attribute[233]" value="2036"><input type="hidden" name="form_key" value="OPk3fByYxbAMTgnu"><button disabled="" class="add-to-cart action primary" type="submit">Do koszyka</button>
</form>
</div>
<div class="variant__available-qty">14</div>
</div>
<div id="variant__70226" class="variants__container-item">
<div class="variant__price">
<div class="price">71.27&nbsp;zł</div>
</div>
<div class="variant__item">
<div class="variant__attributes">41-43 </div>
<div class="variant__sku">610306143402</div>
</div>
<div class="variant__qty">
<form method="post" action="https://b2b.snapoutdoor.pl/checkout/cart/add/uenc/aHR0cHM6Ly9iMmIuc25hcG91dGRvb3IucGwvL3Jlc3QvVjEvZXh0ZW5kdmFyaWFudHN0b2NhcnQvODY1NDY_Xz0xNjAyMzE4NzI4Nzcz/product/86546/">
<div class="qty">
<div class="qty-down">-</div>
<input type="number" name="qty" data-productid="70226" onfocus="this.value=''" value="0" min="0">
<div class="qty-up">+</div>
</div>
<input type="hidden" name="super_attribute[233]" value="2042"><input type="hidden" name="form_key" value="OPk3fByYxbAMTgnu"><button disabled="" class="add-to-cart action primary" type="submit">Do koszyka</button>
</form>
</div>
<div class="variant__available-qty">6</div>
</div>
</div>
</div>
</code></pre>
<p><a href="https://i.stack.imgur.com/vCT89.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vCT89.png" alt="enter image description here" /></a></p>
|
<p>You can simply use <code>BeautifulSoup</code> for this:</p>
<pre><code>from bs4 import BeautifulSoup
html = """
<div class="variants__container">
<div class="variants__container-headers">
<div class="variants__header--item">Rozmiar</div>
<div class="variants__header--qty">Wybierz ilość</div>
</div>
<div class="variants__container-items">
<div id="variant__70224" class="variants__container-item">
<div class="variant__price">
<div class="price">71.27&nbsp;zł</div>
</div>
<div class="variant__item">
<div class="variant__attributes">35-37 </div>
<div class="variant__sku">610306143389</div>
</div>
<div class="variant__qty">
<form method="post" action="https://b2b.snapoutdoor.pl/checkout/cart/add/uenc/aHR0cHM6Ly9iMmIuc25hcG91dGRvb3IucGwvL3Jlc3QvVjEvZXh0ZW5kdmFyaWFudHN0b2NhcnQvODY1NDY_Xz0xNjAyMzE4NzI4Nzcz/product/86546/">
<div class="qty">
<div class="qty-down">-</div>
<input type="number" name="qty" data-productid="70224" onfocus="this.value=''" value="0" min="0">
<div class="qty-up">+</div>
</div>
<input type="hidden" name="super_attribute[233]" value="2028"><input type="hidden" name="form_key" value="OPk3fByYxbAMTgnu"><button disabled="" class="add-to-cart action primary" type="submit">Do koszyka</button>
</form>
</div>
<div class="variant__available-qty">12</div>
</div>
<div id="variant__70225" class="variants__container-item">
<div class="variant__price">
<div class="price">71.27&nbsp;zł</div>
</div>
<div class="variant__item">
<div class="variant__attributes">38-40 </div>
<div class="variant__sku">610306143396</div>
</div>
<div class="variant__qty">
<form method="post" action="https://b2b.snapoutdoor.pl/checkout/cart/add/uenc/aHR0cHM6Ly9iMmIuc25hcG91dGRvb3IucGwvL3Jlc3QvVjEvZXh0ZW5kdmFyaWFudHN0b2NhcnQvODY1NDY_Xz0xNjAyMzE4NzI4Nzcz/product/86546/">
<div class="qty">
<div class="qty-down">-</div>
<input type="number" name="qty" data-productid="70225" onfocus="this.value=''" value="0" min="0">
<div class="qty-up">+</div>
</div>
<input type="hidden" name="super_attribute[233]" value="2036"><input type="hidden" name="form_key" value="OPk3fByYxbAMTgnu"><button disabled="" class="add-to-cart action primary" type="submit">Do koszyka</button>
</form>
</div>
<div class="variant__available-qty">14</div>
</div>
<div id="variant__70226" class="variants__container-item">
<div class="variant__price">
<div class="price">71.27&nbsp;zł</div>
</div>
<div class="variant__item">
<div class="variant__attributes">41-43 </div>
<div class="variant__sku">610306143402</div>
</div>
<div class="variant__qty">
<form method="post" action="https://b2b.snapoutdoor.pl/checkout/cart/add/uenc/aHR0cHM6Ly9iMmIuc25hcG91dGRvb3IucGwvL3Jlc3QvVjEvZXh0ZW5kdmFyaWFudHN0b2NhcnQvODY1NDY_Xz0xNjAyMzE4NzI4Nzcz/product/86546/">
<div class="qty">
<div class="qty-down">-</div>
<input type="number" name="qty" data-productid="70226" onfocus="this.value=''" value="0" min="0">
<div class="qty-up">+</div>
</div>
<input type="hidden" name="super_attribute[233]" value="2042"><input type="hidden" name="form_key" value="OPk3fByYxbAMTgnu"><button disabled="" class="add-to-cart action primary" type="submit">Do koszyka</button>
</form>
</div>
<div class="variant__available-qty">6</div>
</div>
</div>
</div>
""" # The html code
soup = BeautifulSoup(html,'html5lib')
divs = soup.find_all('div',class_ = 'variant__available-qty')
for div in divs:
print(div.text)
</code></pre>
<p>Output:</p>
<pre><code>12
14
6
</code></pre>
<p>In my case, I have hardcoded the HTML, but you can extract the page's HTML with <code>driver.page_source</code>, convert it to a <code>BeautifulSoup</code> object and perform the same operations (a sketch of that follows below). If you still want to use Selenium directly, try using <code>xpaths</code> or <code>css selectors</code> instead of <code>class_names</code>. Hope this helps!</p>
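<p>A minimal sketch of that <code>page_source</code> approach (assuming <code>driver</code> is an already-initialised webdriver that has loaded the page):</p>
<pre><code>from bs4 import BeautifulSoup

soup = BeautifulSoup(driver.page_source, 'html5lib')  # reuse the DOM as rendered by the browser
quantities = [div.text for div in soup.find_all('div', class_='variant__available-qty')]
print(quantities)  # e.g. ['12', '14', '6']
</code></pre>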
|
python-3.x|selenium|selenium-webdriver|web-scraping
| 0 |
1,907,595 | 53,323,775 |
Database still in use after a selenium test in Django
|
<p>I have a Django project in which I'm starting to write Selenium tests. The first one looking like this:</p>
<pre><code>from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from core.models import User
from example import settings
BACH_EMAIL = "johann.sebastian.bach@classics.com"
PASSWORD = "password"
class TestImportCRMData(StaticLiveServerTestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.webdriver = webdriver.Chrome()
cls.webdriver.implicitly_wait(10)
@classmethod
def tearDownClass(cls):
cls.webdriver.close()
cls.webdriver.quit()
super().tearDownClass()
def setUp(self):
self.admin = User.objects.create_superuser(email=BACH_EMAIL, password=PASSWORD)
def test_admin_tool(self):
self.webdriver.get(f"http://{settings.ADMIN_HOST}:{self.server_thread.port}/admin")
self.webdriver.find_element_by_id("id_username").send_keys(BACH_EMAIL)
self.webdriver.find_element_by_id("id_password").send_keys(PASSWORD)
self.webdriver.find_element_by_id("id_password").send_keys(Keys.RETURN)
self.webdriver.find_element_by_link_text("Users").click()
</code></pre>
<p>When I run it, the test pass but still ends with this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\utils.py", line 83, in _execute
return self.cursor.execute(sql)
psycopg2.OperationalError: database "test_example" is being accessed by other users
DETAIL: There is 1 other session using the database.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pycharm\django_test_manage.py", line 168, in <module>
utility.execute()
File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pycharm\django_test_manage.py", line 142, in execute
_create_command().run_from_argv(self.argv)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\core\management\commands\test.py", line 26, in run_from_argv
super().run_from_argv(argv)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\core\management\base.py", line 316, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\core\management\base.py", line 353, in execute
output = self.handle(*args, **options)
File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pycharm\django_test_manage.py", line 104, in handle
failures = TestRunner(test_labels, **options)
File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pycharm\django_test_runner.py", line 255, in run_tests
extra_tests=extra_tests, **options)
File "C:\Program Files\JetBrains\PyCharm 2018.2.4\helpers\pycharm\django_test_runner.py", line 156, in run_tests
return super(DjangoTeamcityTestRunner, self).run_tests(test_labels, extra_tests, **kwargs)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\test\runner.py", line 607, in run_tests
self.teardown_databases(old_config)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\test\runner.py", line 580, in teardown_databases
keepdb=self.keepdb,
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\test\utils.py", line 297, in teardown_databases
connection.creation.destroy_test_db(old_name, verbosity, keepdb)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\base\creation.py", line 257, in destroy_test_db
self._destroy_test_db(test_database_name, verbosity)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\base\creation.py", line 274, in _destroy_test_db
% self.connection.ops.quote_name(test_database_name))
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "C:\Users\pupeno\Documents\Eligible\code\example\venv\lib\site-packages\django\db\backends\utils.py", line 83, in _execute
return self.cursor.execute(sql)
django.db.utils.OperationalError: database "test_example" is being accessed by other users
DETAIL: There is 1 other session using the database.
</code></pre>
<p>The problem, of course, is that on the next run of the tests the database still exists, so the tests don't run without confirming deletion of the database.</p>
<p>If I comment out the last line:</p>
<pre><code>self.webdriver.find_element_by_link_text("Users").click()
</code></pre>
<p>then I don't get this error. I guess just because the database connection is not established. Sometimes it's 1 other session, sometimes it's up to 4. In one of the cases of 4 sessions, these were the running sessions:</p>
<pre><code>select * from pg_stat_activity where datname = 'test_example';
100123 test_example 29892 16393 pupeno "" ::1 61967 2018-11-15 17:28:19.552431 2018-11-15 17:28:19.562398 2018-11-15 17:28:19.564623 idle SELECT "core_user"."id", "core_user"."password", "core_user"."last_login", "core_user"."is_superuser", "core_user"."email", "core_user"."is_staff", "core_user"."is_active", "core_user"."date_joined" FROM "core_user" WHERE "core_user"."id" = 1
100123 test_example 33028 16393 pupeno "" ::1 61930 2018-11-15 17:28:18.792466 2018-11-15 17:28:18.843383 2018-11-15 17:28:18.851828 idle SELECT "django_admin_log"."id", "django_admin_log"."action_time", "django_admin_log"."user_id", "django_admin_log"."content_type_id", "django_admin_log"."object_id", "django_admin_log"."object_repr", "django_admin_log"."action_flag", "django_admin_log"."change_message", "core_user"."id", "core_user"."password", "core_user"."last_login", "core_user"."is_superuser", "core_user"."email", "core_user"."is_staff", "core_user"."is_active", "core_user"."date_joined", "django_content_type"."id", "django_content_type"."app_label", "django_content_type"."model" FROM "django_admin_log" INNER JOIN "core_user" ON ("django_admin_log"."user_id" = "core_user"."id") LEFT OUTER JOIN "django_content_type" ON ("django_admin_log"."content_type_id" = "django_content_type"."id") WHERE "django_admin_log"."user_id" = 1 ORDER BY "django_admin_log"."action_time" DESC LIMIT 10
100123 test_example 14128 16393 pupeno "" ::1 61988 2018-11-15 17:28:19.767225 2018-11-15 17:28:19.776150 2018-11-15 17:28:19.776479 idle SELECT "core_firm"."id", "core_firm"."name", "core_firm"."host_name" FROM "core_firm" WHERE "core_firm"."id" = 1
100123 test_example 9604 16393 pupeno "" ::1 61960 2018-11-15 17:28:19.469197 2018-11-15 17:28:19.478775 2018-11-15 17:28:19.478788 idle COMMIT
</code></pre>
<p>I've been trying to find the minimum reproducible example of this problem, but so far I haven't succeeded.</p>
<p>Any ideas what could be causing this or how to find out more about what the issue could be?</p>
|
<p>This error message...</p>
<pre><code>django.db.utils.OperationalError: database "test_example" is being accessed by other users
DETAIL: There is 1 other session using the database.
</code></pre>
<p>...implies that there is an existing session using the database and the new session can't access the database.</p>
<hr>
<p>Some more information regarding the <strong>Django</strong> <em>version</em>, <strong>Database</strong> <em>type</em> and <em>version</em> along with <em>Selenium</em>, <em>ChromeDriver</em> and <em>Chrome</em> versions would have helped us to debug this issue in a better way. </p>
<p>However, you need to take care of a couple of things from <strong>Selenium</strong> perspective as follows:</p>
<ul>
<li><p>As you are initiating a new session on the next run of the tests you need to remove the line of code <code>cls.webdriver.close()</code> as the next line of code <code>cls.webdriver.quit()</code> will be enough to terminate the existing session. As per the best practices you should invoke the <strong><code>quit()</code></strong> method within the <code>tearDown() {}</code>. Invoking <strong><code>quit()</code></strong> <code>DELETE</code>s the current browsing session through sending <strong>"quit"</strong> command with <strong>{"flags":["eForceQuit"]}</strong> and finally sends the <strong>GET</strong> request on <strong>/shutdown</strong> <code>EndPoint</code>. Here is an example below :</p>
<pre><code>1503397488598 webdriver::server DEBUG -> DELETE /session/8e457516-3335-4d3b-9140-53fb52aa8b74
1503397488607 geckodriver::marionette TRACE -> 37:[0,4,"quit",{"flags":["eForceQuit"]}]
1503397488821 webdriver::server DEBUG -> GET /shutdown
</code></pre></li>
<li><p>So on invoking <strong><code>quit()</code></strong> method the <code>Web Browser</code> session and the <code>WebDriver</code> instance gets killed completely. Hence you don't have to incorporate any additional steps which will be an overhead.</p></li>
</ul>
<blockquote>
<p>You can find a detailed discussion in <a href="https://stackoverflow.com/questions/47999568/selenium-how-to-stop-geckodriver-process-impacting-pc-memory-without-calling/48003289#48003289">Selenium : How to stop geckodriver process impacting PC memory, without calling driver.quit()?</a></p>
</blockquote>
<ul>
<li>The currently build <em>Web Applications</em> are gradually moving towards dynamically rendered <a href="https://www.w3schools.com/js/js_htmldom.asp" rel="nofollow noreferrer">HTML DOM</a> so to interact with the elements within the <a href="https://javascript.info/dom-nodes" rel="nofollow noreferrer">DOM Tree</a>, <a href="https://stackoverflow.com/questions/45672693/using-implicit-wait-in-selenium/45674706#45674706">ImplicitWait</a> is no more that effective and you need to use <a href="https://stackoverflow.com/questions/45712431/replace-implicit-wait-with-explicit-wait-selenium-webdriver-java/45715759#45715759">WebDriverWait</a> instead. At this point it is worth to mention that, mixing up <a href="https://stackoverflow.com/questions/53588966/python-selenium-difference-between-driver-implicitly-wait-and-time-sleep/53589205#53589205">Implicit Wait</a> and <a href="https://stackoverflow.com/questions/50518467/how-to-properly-configure-implicit-explicit-waits-and-pageloadtimeout-through/50524225#50524225">Explicit Wait</a> can cause <a href="https://stackoverflow.com/questions/52706693/how-to-combine-implicit-and-explicit-timeouts-in-selenium/52707885#52707885"><strong>unpredictable wait times</strong></a></li>
<li><p>So you need to:</p>
<ul>
<li><p>Remove the <code>implicitly_wait(10)</code>:</p>
<pre><code>cls.webdriver.implicitly_wait(10)
</code></pre></li>
<li><p>Induce <em>WebDriverWait</em> while interacting with elements:</p>
<pre><code># assumes: from selenium.webdriver.support.ui import WebDriverWait
#          from selenium.webdriver.support import expected_conditions as EC
#          from selenium.webdriver.common.by import By
WebDriverWait(self.webdriver, 20).until(EC.element_to_be_clickable((By.ID, "id_username"))).send_keys(BACH_EMAIL)
self.webdriver.find_element_by_id("id_password").send_keys(PASSWORD)
self.webdriver.find_element_by_id("id_password").send_keys(Keys.RETURN)
WebDriverWait(self.webdriver, 20).until(EC.element_to_be_clickable((By.LINK_TEXT, "Users"))).click()
</code></pre></li>
</ul></li>
</ul>
<hr>
<p>Now, as per the discussion <a href="https://code.djangoproject.com/ticket/22414#no1" rel="nofollow noreferrer">Persistent connections not closed by LiveServerTestCase, preventing dropping test databases</a> this issue was observed, reported, discussed within <strong>Djangov1.6</strong> and fixed. The main issue was:</p>
<blockquote>
<p>Whenever a PostgreSQL connection is marked as persistent (<code>CONN_MAX_AGE = None</code>) and a LiveServerTestCase is executed, the connection from the server thread is never closed, leading to inability to drop the test database.</p>
</blockquote>
<p>This is exactly the reason why you see:</p>
<pre><code>select * from pg_stat_activity where datname = 'test_example';
100123 test_example 29892 16393 pupeno "" ::1 61967 2018-11-15 17:28:19.552431 2018-11-15 17:28:19.562398 2018-11-15 17:28:19.564623 idle SELECT "core_user"."id", "core_user"."password", "core_user"."last_login", "core_user"."is_superuser", "core_user"."email", "core_user"."is_staff", "core_user"."is_active", "core_user"."date_joined" FROM "core_user" WHERE "core_user"."id" = 1
100123 test_example 33028 16393 pupeno "" ::1 61930 2018-11-15 17:28:18.792466 2018-11-15 17:28:18.843383 2018-11-15 17:28:18.851828 idle SELECT "django_admin_log"."id", "django_admin_log"."action_time", "django_admin_log"."user_id", "django_admin_log"."content_type_id", "django_admin_log"."object_id", "django_admin_log"."object_repr", "django_admin_log"."action_flag", "django_admin_log"."change_message", "core_user"."id", "core_user"."password", "core_user"."last_login", "core_user"."is_superuser", "core_user"."email", "core_user"."is_staff", "core_user"."is_active", "core_user"."date_joined", "django_content_type"."id", "django_content_type"."app_label", "django_content_type"."model" FROM "django_admin_log" INNER JOIN "core_user" ON ("django_admin_log"."user_id" = "core_user"."id") LEFT OUTER JOIN "django_content_type" ON ("django_admin_log"."content_type_id" = "django_content_type"."id") WHERE "django_admin_log"."user_id" = 1 ORDER BY "django_admin_log"."action_time" DESC LIMIT 10
100123 test_example 14128 16393 pupeno "" ::1 61988 2018-11-15 17:28:19.767225 2018-11-15 17:28:19.776150 2018-11-15 17:28:19.776479 idle SELECT "core_firm"."id", "core_firm"."name", "core_firm"."host_name" FROM "core_firm" WHERE "core_firm"."id" = 1
100123 test_example 9604 16393 pupeno "" ::1 61960 2018-11-15 17:28:19.469197 2018-11-15 17:28:19.478775 2018-11-15 17:28:19.478788 idle COMMIT
</code></pre>
<p>It was further observed that, even with <code>CONN_MAX_AGE=None</code>, after <code>LiveServerTestCase.tearDownClass()</code>, querying PostgreSQL's <code>pg_stat_activity</code> shows a lingering connection in state <strong>idle</strong> (which was the connection created by the previous test in your case). So it was pretty evident that the idle connections are not closed when the thread terminates, and the needle of suspicion pointed at:</p>
<ul>
<li><p><a href="https://github.com/django/django/blob/418658f453bed7fe7949dda26651aab370003e6a/django/test/testcases.py#L1230" rel="nofollow noreferrer"><strong><code>LiveServerThread(threading.Thread)</code></strong></a> which control the threads for running a live http server while the tests are running:</p>
<pre><code>class LiveServerThread(threading.Thread):
def __init__(self, host, static_handler, connections_override=None):
self.host = host
self.port = None
self.is_ready = threading.Event()
self.error = None
self.static_handler = static_handler
self.connections_override = connections_override
super(LiveServerThread, self).__init__()
def run(self):
"""
Sets up the live server and databases, and then loops over handling
http requests.
"""
if self.connections_override:
# Override this thread's database connections with the ones
# provided by the main thread.
for alias, conn in self.connections_override.items():
connections[alias] = conn
try:
# Create the handler for serving static and media files
handler = self.static_handler(_MediaFilesHandler(WSGIHandler()))
self.httpd = self._create_server(0)
self.port = self.httpd.server_address[1]
self.httpd.set_app(handler)
self.is_ready.set()
self.httpd.serve_forever()
except Exception as e:
self.error = e
self.is_ready.set()
def _create_server(self, port):
return WSGIServer((self.host, port), QuietWSGIRequestHandler, allow_reuse_address=False)
def terminate(self):
if hasattr(self, 'httpd'):
# Stop the WSGI server
self.httpd.shutdown()
self.httpd.server_close()
</code></pre></li>
<li><p><a href="https://github.com/django/django/blob/418658f453bed7fe7949dda26651aab370003e6a/django/test/testcases.py#L1276" rel="nofollow noreferrer"><strong><code>LiveServerTestCase(TransactionTestCase)</code></strong></a> which basically does the same as <code>TransactionTestCase</code> but also launches a live http server in a separate thread so that the tests may use another testing framework to be used by <strong>Selenium</strong>, instead of the built-in dummy client:</p>
<pre><code>class LiveServerTestCase(TransactionTestCase):
host = 'localhost'
static_handler = _StaticFilesHandler
@classproperty
def live_server_url(cls):
return 'http://%s:%s' % (cls.host, cls.server_thread.port)
@classmethod
def setUpClass(cls):
super(LiveServerTestCase, cls).setUpClass()
connections_override = {}
for conn in connections.all():
# If using in-memory sqlite databases, pass the connections to
# the server thread.
if conn.vendor == 'sqlite' and conn.is_in_memory_db(conn.settings_dict['NAME']):
# Explicitly enable thread-shareability for this connection
conn.allow_thread_sharing = True
connections_override[conn.alias] = conn
cls._live_server_modified_settings = modify_settings(
ALLOWED_HOSTS={'append': cls.host},
)
cls._live_server_modified_settings.enable()
cls.server_thread = cls._create_server_thread(connections_override)
cls.server_thread.daemon = True
cls.server_thread.start()
# Wait for the live server to be ready
cls.server_thread.is_ready.wait()
if cls.server_thread.error:
# Clean up behind ourselves, since tearDownClass won't get called in
# case of errors.
cls._tearDownClassInternal()
raise cls.server_thread.error
@classmethod
def _create_server_thread(cls, connections_override):
return LiveServerThread(
cls.host,
cls.static_handler,
connections_override=connections_override,
)
@classmethod
def _tearDownClassInternal(cls):
# There may not be a 'server_thread' attribute if setUpClass() for some
# reasons has raised an exception.
if hasattr(cls, 'server_thread'):
# Terminate the live server's thread
cls.server_thread.terminate()
cls.server_thread.join()
# Restore sqlite in-memory database connections' non-shareability
for conn in connections.all():
if conn.vendor == 'sqlite' and conn.is_in_memory_db(conn.settings_dict['NAME']):
conn.allow_thread_sharing = False
@classmethod
def tearDownClass(cls):
cls._tearDownClassInternal()
cls._live_server_modified_settings.disable()
super(LiveServerTestCase, cls).tearDownClass()
</code></pre></li>
</ul>
<p>The solution was to <em>close only the non-overridden connections</em> and was incorporated from this <a href="https://github.com/django/django/pull/7096/files" rel="nofollow noreferrer">pull request</a> / <a href="https://github.com/django/django/commit/f6cd669ff203192c29495174e53da6b16883b039" rel="nofollow noreferrer">commit</a>. The changes were:</p>
<ul>
<li><p>In <code>django/test/testcases.py</code> add:</p>
<pre><code>finally:
connections.close_all()
</code></pre></li>
<li><p>Add a new file <code>tests/servers/test_liveserverthread.py</code>:</p>
<pre><code>from django.db import DEFAULT_DB_ALIAS, connections
from django.test import LiveServerTestCase, TestCase
class LiveServerThreadTest(TestCase):
def run_live_server_thread(self, connections_override=None):
thread = LiveServerTestCase._create_server_thread(connections_override)
thread.daemon = True
thread.start()
thread.is_ready.wait()
thread.terminate()
def test_closes_connections(self):
conn = connections[DEFAULT_DB_ALIAS]
if conn.vendor == 'sqlite' and conn.is_in_memory_db():
self.skipTest("the sqlite backend's close() method is a no-op when using an in-memory database")
# Pass a connection to the thread to check they are being closed.
connections_override = {DEFAULT_DB_ALIAS: conn}
saved_sharing = conn.allow_thread_sharing
try:
conn.allow_thread_sharing = True
self.assertTrue(conn.is_usable())
self.run_live_server_thread(connections_override)
self.assertFalse(conn.is_usable())
finally:
conn.allow_thread_sharing = saved_sharing
</code></pre></li>
<li><p>In <code>tests/servers/tests.py</code> remove:</p>
<pre><code>finally:
TestCase.tearDownClass()
</code></pre></li>
<li><p>In <code>tests/servers/tests.py</code> add:</p>
<pre><code>finally:
if hasattr(TestCase, 'server_thread'):
TestCase.server_thread.terminate()
</code></pre></li>
</ul>
<hr>
<h2>Solution</h2>
<p>Steps:</p>
<ul>
<li>Ensure you have updated to the latest released version of the <strong>Django</strong> package.</li>
<li>Ensure you are using the latest released version of <strong>selenium v3.141.0</strong>.</li>
<li>Ensure you are using the latest released version of <strong>Chrome v76</strong> and <strong>ChromeDriver 76.0</strong>.</li>
</ul>
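<p>If upgrading Django is not immediately possible, a user-side workaround that mirrors the upstream fix (a sketch, not an official API) is to close any lingering connections yourself when the test class tears down:</p>
<pre><code>from django.db import connections

class TestImportCRMData(StaticLiveServerTestCase):
    # ...
    @classmethod
    def tearDownClass(cls):
        cls.webdriver.quit()
        connections.close_all()  # ensure no test connection survives before the test DB is dropped
        super().tearDownClass()
</code></pre>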
<hr>
<h2>Outro</h2>
<p>You can find a similar discussion in <a href="https://stackoverflow.com/questions/54163537/django-db-utils-integrityerror-foreign-key-constraint-failed-while-executing-li/54286535#54286535">django.db.utils.IntegrityError: FOREIGN KEY constraint failed while executing LiveServerTestCases through Selenium and Python Django</a></p>
|
python|django|selenium|selenium-webdriver|liveservertestcase
| 3 |
1,907,596 | 70,503,507 |
How to stop visual studio 2022 from referencing .NETFramework, version=4.0? Because latest version of VS 2022 doesn't support .NET earlier than 4.5
|
<p>I am making a simple Python command-line application. I also tried to change this from the project's application tab, but the problem is that the "application tab" doesn't exist (shown in the screenshot below).
I just want it to reference the latest version of .NET. How do I fix it?</p>
<pre><code>[Error: The reference assemblies for .NETFramework,Version=v4.0 were not found. To resolve this, install the Developer Pack (SDK/Targeting Pack) for this framework version or retarget your application. You can download .NET Framework Developer Packs at https://aka.ms/msbuild/developerpacks PythonApplication6 R:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Current\Bin\amd64\Microsoft.Common.CurrentVersion.targets 1217
]
</code></pre>
<p><a href="https://i.stack.imgur.com/vO0so.png" rel="nofollow noreferrer">error</a>
<a href="https://i.stack.imgur.com/pKjsy.png" rel="nofollow noreferrer">no application tab</a></p>
|
<p>Given the timeframe of your original post, I believe it is a defect reported here: <a href="https://github.com/microsoft/PTVS/issues/6882" rel="nofollow noreferrer">https://github.com/microsoft/PTVS/issues/6882</a></p>
<p>It has been marked as fixed, and the fix should ship in the next release of Visual Studio 2022.</p>
<p>The workaround is to target a newer framework (given in the following link <a href="https://github.com/microsoft/PTVS/issues/6747" rel="nofollow noreferrer">https://github.com/microsoft/PTVS/issues/6747</a>)</p>
<pre><code><PropertyGroup>
<TargetFrameworkVersion>v4.7.2</TargetFrameworkVersion>
<TargetFrameworkMoniker>.NETFramework,Version=$(TargetFrameworkVersion)</TargetFrameworkMoniker>
</PropertyGroup>
</code></pre>
|
python|.net|visual-studio|visual-studio-2022|.net-4.8
| 0 |
1,907,597 | 70,463,683 |
How to get the top 5 percentile values in pandas series for each class?
|
<p>I was solving a <a href="https://platform.stratascratch.com/coding/10303-top-percentile-fraud?python=" rel="nofollow noreferrer">practice question</a> where I wanted to get the top 5 percentile of frauds for each state. I was able to solve it in SQL, but pandas gives me a different answer than SQL.</p>
<p>Full Question</p>
<pre><code>Top Percentile Fraud
ABC Corp is a mid-sized insurer in the US
and in the recent past their fraudulent claims have increased significantly for their personal auto insurance portfolio.
They have developed a ML based predictive model to identify
propensity of fraudulent claims.
Now, they assign highly experienced claim adjusters for top 5 percentile of claims identified by the model.
Your objective is to identify the top 5 percentile of claims from each state.
Your output should be policy number, state, claim cost, and fraud score.
</code></pre>
<h1>Question: How to get the same answer in pandas that I obtained from SQL?</h1>
<h1>My attempt</h1>
<ul>
<li>I break the fraud score in 100 equal parts using pandas cut and get categorical codes for each bins, then I took values above or equal to 95, but this gives different result.</li>
<li>I am trying to get same answer that I got from SQL query.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
url = "https://raw.githubusercontent.com/bpPrg/Share/master/data/fraud_score.tsv"
df = pd.read_csv(url,delimiter='\t')
print(df.shape) # (400, 4)
df.head(2)
policy_num state claim_cost fraud_score
0 ABCD1001 CA 4113 0.613
1 ABCD1002 CA 3946 0.156
</code></pre>
<h1>Problem</h1>
<ul>
<li>Group by each state, and find top 5 percentile fraud scores.</li>
</ul>
<h1>My attempt</h1>
<pre class="lang-py prettyprint-override"><code>df['state_ntile'] = df.groupby('state')['fraud_score']\
.apply(lambda ser: pd.cut(ser,100).cat.codes+1) # +1 makes 1 to 100 including.
df.query('state_ntile >=95')\
.sort_values(['state','fraud_score'],ascending=[True,False]).reset_index(drop=True)
</code></pre>
<h1>Postgres SQL code ( I know SQL, I want answer in pandas)</h1>
<pre class="lang-sql prettyprint-override"><code>SELECT policy_num,
state,
claim_cost,
fraud_score,
a.percentile
FROM
(SELECT *,
ntile(100) over(PARTITION BY state
ORDER BY fraud_score DESC) AS percentile
FROM fraud_score)a
WHERE percentile <=5
</code></pre>
<h1>The output I want</h1>
<pre><code>
policy_num state claim_cost fraud_score percentile
0 ABCD1027 CA 2663 0.988 1
1 ABCD1016 CA 1639 0.964 2
2 ABCD1079 CA 4224 0.963 3
3 ABCD1081 CA 1080 0.951 4
4 ABCD1069 CA 1426 0.948 5
5 ABCD1222 FL 2392 0.988 1
6 ABCD1218 FL 1419 0.961 2
7 ABCD1291 FL 2581 0.939 3
8 ABCD1230 FL 2560 0.923 4
9 ABCD1277 FL 2057 0.923 5
10 ABCD1189 NY 3577 0.982 1
11 ABCD1117 NY 4903 0.978 2
12 ABCD1187 NY 3722 0.976 3
13 ABCD1196 NY 2994 0.973 4
14 ABCD1121 NY 4009 0.969 5
15 ABCD1361 TX 4950 0.999 1
16 ABCD1304 TX 1407 0.996 1
17 ABCD1398 TX 3191 0.978 2
18 ABCD1366 TX 2453 0.968 3
19 ABCD1386 TX 4311 0.963 4
20 ABCD1363 TX 4103 0.960 5
</code></pre>
|
<p>Thanks to Emma, I got a partial solution.
I could not get the ranks like 1,2,3,...,100, but the resulting table is at least the same as the output of SQL. I am still learning how to use pandas.</p>
<p>Logic:</p>
<ul>
<li>To get the top 5 percentile, we can use quantile values >= 0.95 as shown below:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
url = "https://raw.githubusercontent.com/bpPrg/Share/master/data/fraud_score.tsv"
df = pd.read_csv(url,delimiter='\t')
print(df.shape)
df['state_quantile'] = df.groupby('state')['fraud_score'].transform(lambda x: x.quantile(0.95))
dfx = df.query("fraud_score >= state_quantile").reset_index(drop=True)\
.sort_values(['state','fraud_score'],ascending=[True,False])
dfx
</code></pre>
<h1>Result</h1>
<pre><code> policy_num state claim_cost fraud_score state_quantile
1 ABCD1027 CA 2663 0.988 0.94710
0 ABCD1016 CA 1639 0.964 0.94710
3 ABCD1079 CA 4224 0.963 0.94710
4 ABCD1081 CA 1080 0.951 0.94710
2 ABCD1069 CA 1426 0.948 0.94710
11 ABCD1222 FL 2392 0.988 0.91920
10 ABCD1218 FL 1419 0.961 0.91920
14 ABCD1291 FL 2581 0.939 0.91920
12 ABCD1230 FL 2560 0.923 0.91920
13 ABCD1277 FL 2057 0.923 0.91920
8 ABCD1189 NY 3577 0.982 0.96615
5 ABCD1117 NY 4903 0.978 0.96615
7 ABCD1187 NY 3722 0.976 0.96615
9 ABCD1196 NY 2994 0.973 0.96615
6 ABCD1121 NY 4009 0.969 0.96615
16 ABCD1361 TX 4950 0.999 0.96000
15 ABCD1304 TX 1407 0.996 0.96000
20 ABCD1398 TX 3191 0.978 0.96000
18 ABCD1366 TX 2453 0.968 0.96000
19 ABCD1386 TX 4311 0.963 0.96000
17 ABCD1363 TX 4103 0.960 0.96000
</code></pre>
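<p>For the missing rank column, a rough emulation of SQL's <code>ntile(100)</code> per state can be sketched like this (not guaranteed to be byte-for-byte identical to PostgreSQL's bucketing when a state's row count is not a multiple of 100, but close enough to select the top 5 buckets):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def ntile_desc(s, n=100):
    # rank by fraud_score descending, then map the ranks onto n roughly equal buckets
    r = s.rank(method='first', ascending=False)
    return np.ceil(r * n / len(s)).astype(int)

df['percentile'] = df.groupby('state')['fraud_score'].transform(ntile_desc)
top5 = df.query('percentile <= 5').sort_values(
    ['state', 'fraud_score'], ascending=[True, False]).reset_index(drop=True)
</code></pre>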
|
python|sql|pandas|postgresql
| 0 |
1,907,598 | 63,568,214 |
QPainter delete previously drawn shapes
|
<p>I am trying to code a simple image editor like paint.</p>
<p>I implemented drawing lines rectangles and ellipses.</p>
<p>what i want is to see an animation (a foreshadowing ?) of how the rectangle will look like, just like in paint when you draw a shape you can see what it actually looks like without really drawing on the canvas.</p>
<p>here is a shortened version of the code</p>
<pre><code>class Canvas(QWidget):
def __init__(self):
super().__init__()
self.initUI()
self.initLogic()
def initUI(self):
self.image = QImage(self.size(), QImage.Format_RGB32)
self.image.fill(Qt.white)
def initLogic(self):
self.brushSize = 1
self.brushStyle = Qt.SolidLine
self.brushColor = QColor(0, 0, 0)
self.shapeMode = None
self.drawing = False
self.mousePointer = None
def mousePressEvent(self, event):
self.drawing = True
self.mousePointer = event.pos()
def mouseMoveEvent(self, event):
#if no pen mode set draw lines from event to event
if self.drawing:
painter = QPainter(self.image)
painter.setPen(QPen(self.brushColor,
self.brushSize,
self.brushStyle))
shape = None #i try later to assign the method Qpainter.draw<someShape> to this variable
# hoping it works like in tkinter.
if self.shapeMode == None:#free shape
painter.drawLine(self.mousePointer, event.pos())
self.mousePointer = event.pos()
else:
#previous x and previous y
ox, oy = self.mousePointer.x(), self.mousePointer.y()
#current x and current y
dx, dy = event.pos().x(), event.pos().y()
width, height = dx - ox, dy - oy
#self.shapeMode is a string corresponding to a QPainter method
#we get the corresponding method using getattr builtin function
drawMethod = getattr(painter, self.shapeMode)# = painter.someFunc this works fine
shape = drawMethod(ox, oy, width, height) #assigning the method call to a variable
self.update()
if shape != None:
painter.eraseRect(shape)
"""
if self.drawing and self.shapeMode == None:
painter = QPainter(self.image)
painter.setPen(QPen(self.brushColor,
self.brushSize,
self.brushStyle))
painter.drawLine(self.mousePointer, event.pos())
self.mousePointer = event.pos()
self.update()"""
#otherwise if pen mode set draw shape at event then delete until release
def mouseReleaseEvent(self, event):
if self.shapeMode != None:
painter = QPainter(self.image)
painter.setPen(QPen(self.brushColor,
self.brushSize,
self.brushStyle))
#previous x and previous y
ox, oy = self.mousePointer.x(), self.mousePointer.y()
#current x and current y
dx, dy = event.pos().x(), event.pos().y()
width, height = dx - ox, dy - oy
#self.shapeMode is a string corresponding to a QPainter method
#we get the corresponding method using getattr builtin function
drawMethod = getattr(painter, self.shapeMode)# = painter.someFunc
shape = drawMethod(ox, oy, width, height)
self.update()
self.mousePointer = event.pos()
self.drawing = False
#TODO end registering the action
def paintEvent(self, event):
widgetPainter = QPainter(self)
widgetPainter.drawImage(self.rect(), self.image, self.rect())
</code></pre>
<p>the canvas keeps drawing rectangles as long as i hold the mouse, what i want is the rectangle to resize and only definitively be drawn after mouse release.</p>
|
<p>You can achieve this by taking advantage of which paint device to pass to QPainter. During <code>mouseMoveEvent</code> keep a reference to the points and sizes calculated so in <code>paintEvent</code> you can draw onto the main widget. This way anything painted will only last until the next update. Then in <code>mouseReleaseEvent</code> you can paint on the QImage to permanently draw the rectangle.</p>
<pre><code># imports assumed by this example
from PyQt5.QtCore import Qt, QRect
from PyQt5.QtGui import QImage, QColor, QPainter, QPen
from PyQt5.QtWidgets import QWidget


class Canvas(QWidget):
def __init__(self):
super().__init__()
self.initUI()
self.initLogic()
def initUI(self):
self.image = QImage(self.size(), QImage.Format_RGB32)
self.image.fill(Qt.white)
def initLogic(self):
self.brushSize = 1
self.brushStyle = Qt.SolidLine
self.brushColor = QColor(0, 0, 0)
self.shapeMode = 'drawRect'
self.temp_rect = QRect()
self.drawing = False
self.mousePointer = None
def mousePressEvent(self, event):
self.drawing = True
self.mousePointer = event.pos()
def mouseMoveEvent(self, event):
#if no pen mode set draw lines from event to event
if self.drawing:
painter = QPainter(self.image)
painter.setPen(QPen(self.brushColor,
self.brushSize,
self.brushStyle))
if self.shapeMode == None:#free shape
painter.drawLine(self.mousePointer, event.pos())
self.mousePointer = event.pos()
else:
#previous x and previous y
ox, oy = self.mousePointer.x(), self.mousePointer.y()
#current x and current y
dx, dy = event.pos().x(), event.pos().y()
width, height = dx - ox, dy - oy
self.temp_rect = QRect(ox, oy, width, height)
self.update()
def mouseReleaseEvent(self, event):
if self.shapeMode != None:
painter = QPainter(self.image)
painter.setPen(QPen(self.brushColor,
self.brushSize,
self.brushStyle))
#self.shapeMode is a string corresponding to a QPainter method
#we get the corresponding method using getattr builtin function
drawMethod = getattr(painter, self.shapeMode)# = painter.someFunc
drawMethod(self.temp_rect)
self.update()
self.mousePointer = event.pos()
self.drawing = False
#TODO end registering the action
def paintEvent(self, event):
widgetPainter = QPainter(self)
widgetPainter.drawImage(self.rect(), self.image, self.rect())
if self.drawing:
drawMethod = getattr(widgetPainter, self.shapeMode)
drawMethod(self.temp_rect)
</code></pre>
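<p>For completeness, a minimal harness to try the widget out (standard PyQt boilerplate, not specific to this answer):</p>
<pre><code>import sys
from PyQt5.QtWidgets import QApplication

if __name__ == '__main__':
    app = QApplication(sys.argv)
    canvas = Canvas()
    canvas.show()
    sys.exit(app.exec_())
</code></pre>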
<p>Outcome:</p>
<p><a href="https://i.stack.imgur.com/r3qnI.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r3qnI.gif" alt="enter image description here" /></a></p>
|
python|pyqt5|qpainter
| 1 |
1,907,599 | 69,756,067 |
Create a function that makes a smallest number from the digits of its input parameter
|
<p><strong>Create a function that makes the greatest number from the digits of its input parameter</strong></p>
<p><strong>I'm a beginner in python so I need some help.</strong></p>
<pre><code>
n = int(input("Enter a number: "))
def large(n):
a = n % 10
b = (n // 10) % 10
c = (n // 100) %10
d = (n // 1000) % 10
</code></pre>
|
<p>I would just sort the string in descending order, then convert to an <code>int</code></p>
<pre><code>def largest(s):
return int(''.join(sorted(s, reverse=True)))
</code></pre>
<p>Some examples</p>
<pre><code>>>> largest('123')
321
>>> largest('321')
321
>>> largest('102030')
321000
</code></pre>
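<p>Note that the question reads the input with <code>int(input(...))</code>; since this function works on a string of digits, convert the integer to a string first:</p>
<pre><code>>>> n = int(input("Enter a number: "))  # e.g. 1402
>>> largest(str(n))
4210
</code></pre>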
|
python|function
| 1 |