Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, 10 to 150 chars) | question (string, 21 to 64.2k chars) | answer (string, 19 to 59.4k chars) | tags (string, 5 to 112 chars) | score (int64, -10 to 17.3k)
---|---|---|---|---|---|---
1,905,200 | 33,089,381 |
"undefined symbol: __xmlStructuredErrorContext" importing etree from lxml
|
<pre><code>>>> import lxml
>>> from lxml import etree
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /usr/local/lib/python3.4/site-packages/lxml/etree.cpython-34m.so: undefined symbol: __xmlStructuredErrorContext
</code></pre>
<p>I do have libxml2 and libxslt installed, and I have tried uninstalling and reinstalling them too, but it didn't help.
lxml version: 3.4.4,
Python: 3.4.2,
OS: RHEL 5.5.
Please help me resolve this issue.</p>
<p>Thanks</p>
|
<p>Your version of <code>lxml.etree</code> was compiled against a different version of <code>libxml2</code> than the one you have actually installed. Reinstalling libxml2 doesn't help because you're just reinstalling the same code. Reinstalling the binaries that bundle your existing <code>etree.cpython-34m.so</code> binary won't work either, because that binary itself is inherently broken (<a href="http://stackoverflow.com/questions/26488797/failed-to-linked-symbol-in-so-file-while-the-symbol-exists">it refers to a symbol that isn't exported in all versions of libxml2</a>).</p>
<p>Uninstall the Python module -- not the C library -- and reinstall it <em>from source</em>. (<code>pip</code> should be able to do this automatically, assuming that you have -devel headers for libxml2 and libxslt installed and an appropriate compiler).</p>
|
python|lxml|libxml2|importerror|libxslt
| 3 |
1,905,201 | 73,618,753 |
python tkinter image "..." doesn't exist error
|
<pre><code>from tkinter import *
from PIL import ImageTk, Image
window = Tk()
window.geometry("350x670")
topBar = Frame(window, bg= "black", width=350, height=70).pack()
middleBar = Frame(window, bg= "grey", width=350, height=530).pack()
botBar = Frame(window, bg= "black", width=350, height=70).pack()
imgLabel1 = Label(topBar, image="profile-pic.jpg").place(x=50,y=50)
window.mainloop()
</code></pre>
<p>This is my code. I want to set an image in topBar frame. When I run the code, I get this error:</p>
<blockquote>
<p>_tkinter.TclError: image "profile-pic.jpg" doesn't exist</p>
</blockquote>
<p>How can I solve this error? Thank you</p>
|
<p>You need to pass an instance of <code>ImageTk.PhotoImage()</code> to the <code>image</code> option of <code>Label</code> widget:</p>
<pre class="lang-py prettyprint-override"><code>...
image = ImageTk.PhotoImage(file="profile-pic.jpg")
imgLabel1 = Label(topBar, image=image)
imgLabel1.place(x=50, y=50)
...
</code></pre>
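<p>One common follow-up gotcha worth noting (not part of the original answer, but a well-known Tkinter behaviour): if the <code>PhotoImage</code> is created inside a function, keep a Python reference to it yourself, otherwise it can be garbage-collected and the label shows nothing. A minimal sketch:</p>
<pre class="lang-py prettyprint-override"><code>image = ImageTk.PhotoImage(file="profile-pic.jpg")
imgLabel1 = Label(topBar, image=image)
imgLabel1.image = image  # keep a reference so the image is not garbage-collected
imgLabel1.place(x=50, y=50)
</code></pre>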
|
python|tkinter
| 0 |
1,905,202 | 73,690,795 |
How to check if email was sent using smtp in python 3.7
|
<p>I'm using <code>smtplib</code> to send emails with the <code>sendmail()</code> function. How can I tell whether an email was sent or not, given that the function doesn't return a response code?</p>
<pre><code>import smtplib
s = smtplib.SMTP('smtp-mail.outlook.com', 587)
s.starttls()
s.login('myemail@outlook.com', 'mypassword')
s.sendmail('myemail@outlook.com', 'theotheremail@domain.com', 'message')
s.quit()
</code></pre>
|
<p>From the <a href="https://docs.python.org/3/library/smtplib.html#smtplib.SMTP.sendmail" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>This method will return normally if the mail is accepted for at least one recipient. Otherwise it will raise an exception. That is, if this method does not raise an exception, then someone should get your mail. If this method does not raise an exception, it returns a dictionary, with one entry for each recipient that was refused. Each entry contains a tuple of the SMTP error code and the accompanying error message sent by the server.</p>
</blockquote>
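<p>A minimal sketch of how that contract can be used in practice (assuming the same connection object <code>s</code> as in the question):</p>
<pre><code>try:
    refused = s.sendmail('myemail@outlook.com', 'theotheremail@domain.com', 'message')
except smtplib.SMTPException as exc:
    print('Sending failed entirely:', exc)
else:
    if refused:
        # dict of {recipient: (smtp_code, smtp_error)} for rejected recipients
        print('Some recipients were refused:', refused)
    else:
        print('Mail accepted for all recipients')
</code></pre>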
|
python|smtp|smtplib
| 1 |
1,905,203 | 73,696,645 |
How to group a json file and extract data
|
<p>I have the response below from an API call. I want to group
this JSON result by <code>userRole</code>.</p>
<pre><code> {
"greatusers": [
{
"userEmail": "jango.kusaba@yahoo.com",
"userRole": "Admin"
},
{
"userEmail": "juma.egola@yahoo.com",
"userRole": "Admin"
},
{
"userEmail": "sule.oma@yahoo.com",
"userRole": "Pricing2"
},
{
"userEmail": "abass.johnson@yahoo.com",
"userRole": "Products"
},
{
"userEmail": "sima.fatai@yahoo.com",
"userRole": "Products"
},
{
"userEmail": "peach.ogaju@yahoo.com",
"userRole": "User"
},
{
"userEmail": "yusuf.kusaba@yahoo.com",
"userRole": "Pricing2"
},
{
"userEmail": "jaguda.obika@yahoo.com",
"userRole": "Pricing2"
}
]
}
</code></pre>
<p>At the end of the day I want the output to look like this for each userRole</p>
<p>For products</p>
<pre><code> {
"greatusers": [
"abass.johnson@yahoo.com",
"sima.fatai@yahoo.com"
],
"cover": {
"name": "Products",
"id": 123456789
}
}
</code></pre>
<p>For pricing2</p>
<pre><code> {
"greatusers": [
"sule.oma@yahoo.com",
"yusuf.kusaba@yahoo.com",
"jaguda.obika@yahoo.com"
],
"cover": {
"name": "Pricing2",
"id": 4747474747
}
}
</code></pre>
<p>How can I perform this grouping so that I can get the output listed above? I am using Python to process this JSON result.</p>
|
<p>You can iterate the <code>greatusers</code> array, creating an entry for each <code>userRole</code> in an output dictionary when it doesn't exist already, and then appending the <code>userEmail</code> value to that entry's <code>greatusers</code> array:</p>
<pre class="lang-py prettyprint-override"><code>result = {}
for user in dd['greatusers']:
role = user['userRole']
if role not in result:
result[role] = { 'greatusers' : [], 'cover' : { 'name' : role, 'id' : 0 } }
result[role]['greatusers'].append(user['userEmail'])
</code></pre>
<p>Output:</p>
<pre><code>{
"Admin": {
"greatusers": [
"jango.kusaba@yahoo.com",
"juma.egola@yahoo.com"
],
"cover": {
"name": "Admin",
"id": 0
}
},
"Pricing2": {
"greatusers": [
"sule.oma@yahoo.com",
"yusuf.kusaba@yahoo.com",
"jaguda.obika@yahoo.com"
],
"cover": {
"name": "Pricing2",
"id": 0
}
},
"Products": {
"greatusers": [
"abass.johnson@yahoo.com",
"sima.fatai@yahoo.com"
],
"cover": {
"name": "Products",
"id": 0
}
},
"User": {
"greatusers": [
"peach.ogaju@yahoo.com"
],
"cover": {
"name": "User",
"id": 0
}
}
}
</code></pre>
<p>If you had a dict of appropriate <code>id</code> values, you could add that during the iteration e.g.</p>
<pre class="lang-py prettyprint-override"><code>role_ids = { 'Admin' : 1234, 'Pricing2' : 4567, 'Products' : 9999, 'User' : 4 }
result = {}
for user in dd['greatusers']:
role = user['userRole']
if role not in result:
result[role] = { 'greatusers' : [], 'cover' : { 'name' : role, 'id' : role_ids[role] } }
result[role]['greatusers'].append(user['userEmail'])
</code></pre>
|
json|python-3.x
| 0 |
1,905,204 | 24,552,950 |
Labelling text using Notepad++ or any other tool
|
<pre><code>I have several .dat, containing information about hotel reviews as below
/*
<Author> simmotours
<Content> review......goes here
<Date>Nov 18, 2008
<No. Reader>-1
<No. Helpful>-1
<Overall>4
<Value>4
<Rooms>3
<Location>4
<Cleanliness>4
<Check in / front desk>4
<Service>4
<Business service>-1
</code></pre>
<p>*/
I want to classify the reviews into two classes, pos and neg, i.e. have two folders, pos and neg, containing several files, with reviews rated above 3 classified as positive and below 3 classified as negative. </p>
<pre><code>How can I quickly and efficiently automate this process?
</code></pre>
|
<p>Notepad++ can do replacements with regular expressions. And allows the definition of macros. Use them to convert the file to an XML file. Check out the help file.</p>
<p>Then you can read it with any scripting language and do what you want.</p>
<p>Alternatively you could change the file to a form where you can load it into Excel and do the analysis there.</p>
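<p>Since the question is tagged python-3.x, here is a rough sketch of the scripting-language route (the field names and the pos/neg folder layout come from the question; the exact parsing of the .dat format is an assumption):</p>
<pre><code>import os
import re

def split_reviews(dat_path, pos_dir='pos', neg_dir='neg'):
    os.makedirs(pos_dir, exist_ok=True)
    os.makedirs(neg_dir, exist_ok=True)
    with open(dat_path, encoding='utf-8', errors='ignore') as f:
        text = f.read()
    # Assume each review block starts with an <Author> line.
    reviews = [r for r in re.split(r'(?=<Author>)', text) if r.strip()]
    for i, review in enumerate(reviews):
        m = re.search(r'<Overall>\s*(-?\d+)', review)
        if not m:
            continue
        # Reviews rated above 3 go to pos, the rest to neg (a rating of exactly 3 is ambiguous in the question).
        target = pos_dir if int(m.group(1)) > 3 else neg_dir
        with open(os.path.join(target, 'review_%d.txt' % i), 'w', encoding='utf-8') as out:
            out.write(review)
</code></pre>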
|
python-3.x|notepad++|classification|text-processing|sentiment-analysis
| 0 |
1,905,205 | 8,806,530 |
Accessing the default argument values in Python
|
<p>How can I programmatically access the default argument values of a method in Python? For example, in the following</p>
<pre><code>def test(arg1='Foo'):
pass
</code></pre>
<p>how can I access the string <code>'Foo'</code> inside <code>test</code>?</p>
|
<p>They are stored in <code>test.func_defaults</code> (python 2) and in <code>test.__defaults__</code> (python 3).</p>
<p>As @Friedrich reminds me, Python 3 has "keyword only" arguments, and for those the defaults are stored in <code>function.__kwdefaults__</code></p>
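<p>For example, a small sketch combining both attributes with the higher-level <code>inspect</code> API (Python 3):</p>
<pre><code>import inspect

def test(arg1='Foo', *, flag=True):
    pass

print(test.__defaults__)     # ('Foo',)        positional defaults
print(test.__kwdefaults__)   # {'flag': True}  keyword-only defaults
print(inspect.signature(test).parameters['arg1'].default)  # 'Foo'
</code></pre>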
|
python
| 17 |
1,905,206 | 52,430,947 |
python csv write error while writing a float value as a column
|
<p>I want to write the values fetched from a URL to a CSV file, which includes some float values. The code below raises a "float found" error.</p>
<pre><code>import urllib2
import json
import csv
url = 'https://earthquake.usgs.gov/fdsnws/event/1/query?format=geojson&starttime=2016-10-01&endtime=2016-10-02'
i=0
csvfile = csv.writer(open('earthquakedet.csv', 'w'))
csvfile.writerow(["Latitude", "Longitude ","Title","Place","Mag"])
json_string = urllib2.urlopen(url).read()
j = json.loads(json_string)
names = [d['properties'] for d in j['features']]
names1 = [d['geometry'] for d in j['features']]
while i <= len(names):
print names[i]['title']
print names[i]['place']
print names[i]['mag']
print names1[i]['coordinates'][0]
print names1[i]['coordinates'][1]
i=i+1
finalstr=float(names1[i]['coordinates'][0]) + float(names1[i]['coordinates'][1]) + names[i]['title'] + names[i]['place'] + names[i]['mag']
csvfile.writerow(finalstr)
csvfile.close()
</code></pre>
|
<p><a href="https://docs.python.org/3/library/csv.html#csv.csvwriter.writerow" rel="nofollow noreferrer"><code>writerow</code></a> takes a list of values to put on the row, not a string. So, instead of concatenating the values yourself, just put them in a list to pass to <code>writerow</code>:</p>
<pre><code># ...
i = i + 1
csvfile.writerow([names1[i]['coordinates'][0], names1[i]['coordinates'][1], names[i]['title'], names[i]['place'], names[i]['mag']])
</code></pre>
|
python
| 0 |
1,905,207 | 59,692,619 |
Generate weighted graph from OSMnx for NetworkX
|
<p>I want to generate a NetworkX graph with weighted edges so the weight of each edge will be its <code>distance * driving speed on this road (if it exists)</code> or, if the driving speed is unknown, <code>100*distance</code> for highways and <code>60*distance</code> for city roads.</p>
<p>I couldn't find a post similar to my needs except <a href="https://stackoverflow.com/questions/56308298/how-to-get-the-weight-of-the-smallest-path-between-two-nodes">this one</a>, but there has to be a way to do it automatically.</p>
<p>My goal is to find the path with the shortest driving time (with Dijkstra) between points A and B, and this is what I have done so far:</p>
<pre><code>l1 = (A_lat,A_lon)
G = ox.graph_from_point(l1,distance= 100)
l1_node_id = ox.get_nearest_node(G,l1) # Find closest node ID
l2 = (B_lat,B_lon)
G = ox.graph_from_point(l2,distance = 100)
l2_node_id = ox.get_nearest_node(G,l2) # Find closest node ID
dist = vincenty(l1, l2).meters # The distance between l1 and l2
p1 = ((l1[0] + l2[0])/2,(l1[1]+l2[1])/2) #The mid point between l1 and l2
dist = vincenty(l1, l2).meters #The distance between l1 and l2
G = ox.graph_from_point(p1,distance = dist)
path = nx.shortest_path(G, l1_node_id, l2_node_id) #Find the shortest path for cutoff
for path in nx.all_simple_paths(G, source=l1_node_id, target=l2_node_id,cutoff = len(path)):
#Here I want to checke if "path" is the shortest path but right now it is without weight
</code></pre>
<p>In the <a href="https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.shortest_paths.generic.shortest_path.html?highlight=shortest_path#networkx.algorithms.shortest_paths.generic.shortest_path" rel="nofollow noreferrer">documentation</a> they wrote weight should be a string but how can I do it?</p>
<p>TIA</p>
|
<p>Functionality to <a href="https://osmnx.readthedocs.io/en/stable/osmnx.html#osmnx.speed.add_edge_travel_times" rel="nofollow noreferrer">calculate edge travel times</a> is available as of OSMnx v0.13.0. You can then use these edge travel time attributes to solve network shortest paths by travel time rather than distance. By default it imputes speed on edges missing <code>maxspeed</code> data from OSM, then calculates travel time as a function of length and speed. But you can pass in your own custom speeds for different road types:</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
import osmnx as ox
ox.config(use_cache=True, log_console=True)
G = ox.graph_from_place('Piedmont, CA, USA', network_type='drive')
# Impute speeds on edges missing data.
G = ox.add_edge_speeds(G)
# Or assign speeds to edges missing data based on dict values.
# For edges with highway type not in dict, impute speeds.
hwy_speeds = {'motorway': 100,
'trunk': 100,
'residential': 60,
'tertiary': 60} #etc
G = ox.add_edge_speeds(G, hwy_speeds)
# Calculate shortest path by travel time.
G = ox.add_edge_travel_times(G)
orig, dest = list(G)[0], list(G)[-1]
route = nx.shortest_path(G, orig, dest, weight='travel_time')
</code></pre>
|
python|networkx|shortest-path|osmnx
| 2 |
1,905,208 | 36,648,554 |
Numpy.nonzero behaves strangely for values 1 or -1
|
<p>I have a very odd problem with numpy.nonzero(). It behaves well for values that are not 1 or -1, but for those two it seems to yield odd results.</p>
<p>For example,</p>
<pre><code>goalmat = np.matrix( [[2, 0, 1], [-1, 0, -1]])
</code></pre>
<p>makes</p>
<pre><code>matrix([[ 2, 0, 1],
[-1, 0, -1]])
</code></pre>
<p>Now, using numpy.nonzero(goalmat == x) only works partially:</p>
<pre><code>>>> np.nonzero(goalmat == 1)
(matrix([[0]]), matrix([[2]]))
>>> np.nonzero(goalmat == -1)
(matrix([[1, 1]]), matrix([[0, 2]]))
</code></pre>
<p>And</p>
<pre><code>>>> goalmat = np.matrix( [[2, 2, 1], [-1, 1, -1]])
>>> goalmat
matrix([[ 2, 2, 1],
[-1, 1, -1]])
>>> np.nonzero(goalmat == 1)
(matrix([[0, 1]]), matrix([[2, 1]]))
>>> np.nonzero(goalmat == -1)
(matrix([[1, 1]]), matrix([[0, 2]]))
</code></pre>
<p>So it seems to give the correct locations for 1 if I ask for -1...</p>
<p>Am I misusing/misunderstanding numpy.nonzero()?</p>
|
<p>It does work correctly: you did not actually pass the values -1 to <code>nonzero()</code>, but the True/False array produced by the expression <code>goalmat == -1</code>.
You can check the <code>nonzero()</code> result by:</p>
<pre><code>>>> index1, index2 = np.nonzero(goalmat == -1)
>>> goalmat[index1, index2]
matrix([[-1, -1]])
</code></pre>
|
python|python-2.7|numpy
| 0 |
1,905,209 | 19,485,969 |
How to determine the source of an import?
|
<p>Forgive my ignorance here. I don't read Python well and I can't write it at all.</p>
<p>I'm trying to audit a python project for CVE-2013-1445. I believe I found a source file that might need attention (among other opportunities for improvement). The file is <a href="https://github.com/openstack/keystone/blob/master/keystone/openstack/common/crypto/utils.py" rel="nofollow">utils.py</a>, and it has the lines:</p>
<pre><code>import base64
from Crypto.Hash import HMAC
from Crypto import Random
...
</code></pre>
<p>When I look at the <a href="http://docs.python.org/3.3/library/crypto.html" rel="nofollow">Python crypto docs</a>, I don't see mention of a <code>Random</code> class. Only <code>hashlib</code> and <code>hmac</code>:</p>
<pre><code>The modules described in this chapter implement various algorithms of a
cryptographic nature. They are available at the discretion of the
installation. On Unix systems, the crypt module may also be available.
Here’s an overview:
15.1. hashlib — Secure hashes and message digests
15.2. hmac — Keyed-Hashing for Message Authentication
...
</code></pre>
<p>Where precisely is <code>Random</code> coming from? Is it native or third party?</p>
<p>Or should my question be, where is <code>Crypto</code> coming from? If <code>Crypto</code> its third party, how do I determine how/where third party libraries and classes are included (versus native libraries and classes)?</p>
<p>For completeness, I tried to understand Python's Scopes and Namespaces, but it makes no sense to me at the moment (as this question probably demonstrates). For example, there is no obvious Scope or Namespace for <code>Crypto</code> or <code>Random</code> (other than <code>Random</code> is part of <code>Crypto</code>).</p>
<p>Thanks in advance.</p>
|
<p><code>Crypto</code> is not part of any standard Python distribution. That's why the Python docs don't mention it ;-) You can download the source here:</p>
<p><a href="https://www.dlitz.net/software/pycrypto/" rel="nofollow">https://www.dlitz.net/software/pycrypto/</a></p>
|
python|class|import
| 2 |
1,905,210 | 19,571,366 |
How do I get this output to sort by date in Pandas
|
<p>I have a date field Datetime and I want a simple count of the items, but I'd like it in date order. What I have now...</p>
<pre><code>plot_data.Quradate.value_counts() # of respondents by survey date
2011-07-15 702
2011-04-15 696
2011-10-15 661
2010-01-15 636
2011-01-15 587
2010-10-15 570
2012-01-15 534
2010-07-15 525
2010-04-15 384
dtype: int64
</code></pre>
<p>Should be simple but not yet for me...</p>
|
<p>As Andy points out (and @TomAugspurger above), this is the right solution:</p>
<pre><code>plot_data.Quradate.value_counts().sort_index()
</code></pre>
<p>The alternative below is ugly, but it gets the job done; I would like to see a better solution.</p>
<pre><code>resp=pd.DataFrame(plot_data.Quradate.value_counts()) # of respondents by survey date
resp.sort_index()
</code></pre>
|
python|pandas
| 6 |
1,905,211 | 19,358,278 |
DFS algorithm in Python with generators
|
<h3>Background:</h3>
<p>I was working on a project where I needed to write some rules for text processing. After working on this project for a couple of days and implementing some rules, I realized I needed to determine the order of the rules. No problem, we have topological sorting to help. But then I realized that I can't expect the graph to always be full. So I came up with this idea, that given a single rule with a set of dependencies (or a single dependency) I need to check the dependencies of the dependencies. Sounds familiar? Yes. This subject is very similar to depth-first searching of a graph.<br />
I am not a mathematician, nor did I study C.S. Hence, Graph Theory is a new field for me. Nevertheless, I implemented something (see below) which works (inefficiently, I suspect).</p>
<h3>The code:</h3>
<p>This is my search-and-yield algorithm. If you run it on the examples below, you will see it visits some nodes more than once. Hence, the suspected inefficiency.<br />
A word about the input: the rules I wrote are basically Python classes, which have a class property <code>depends</code>. I was criticized for not using <code>inspect.getmro</code>, but that would complicate things terribly because the classes would need to inherit from each other (<a href="http://codepad.org/k7iGWYFf" rel="noreferrer">see example here</a>)</p>
<pre><code>def _yield_name_dep(rules_deps):
global recursion_counter
recursion_counter = recursion_counter +1
# yield all rules by their named and dependencies
for rule, dep in rules_deps.items():
if not dep:
yield rule, dep
continue
else:
yield rule, dep
for ii in dep:
i = getattr(rules, ii)
instance = i()
if instance.depends:
new_dep={str(instance): instance.depends}
for dep in _yield_name_dep(new_dep):
yield dep
else:
yield str(instance), instance.depends
</code></pre>
<p>OK, now that you stared in the code, here is some input you can test:</p>
<pre><code>demo_class_content ="""
class A(object):
depends = ('B')
def __str__(self):
return self.__class__.__name__
class B(object):
depends = ('C','F')
def __str__(self):
return self.__class__.__name__
class C(object):
depends = ('D', 'E')
def __str__(self):
return self.__class__.__name__
class D(object):
depends = None
def __str__(self):
return self.__class__.__name__
class F(object):
depends = ('E')
def __str__(self):
return self.__class__.__name__
class E(object):
depends = None
def __str__(self):
return self.__class__.__name__
"""
with open('demo_classes.py', 'w') as clsdemo:
clsdemo.write(demo_class_content)
import demo_classes as rules
rule_start={'A': ('B')}
def _yield_name_dep(rules_deps):
# yield all rules by their named and dependencies
for rule, dep in rules_deps.items():
if not dep:
yield rule, dep
continue
else:
yield rule, dep
for ii in dep:
i = getattr(rules, ii)
instance = i()
if instance.depends:
new_dep={str(instance): instance.depends}
for dep in _yield_name_dep(new_dep):
yield dep
else:
yield str(instance), instance.depends
if __name__ == '__main__':
# this is yielding nodes visited multiple times,
# list(_yield_name_dep(rule_start))
# hence, my work around was to use set() ...
rule_dependencies = list(set(_yield_name_dep(rule_start)))
print rule_dependencies
</code></pre>
<h3>The questions:</h3>
<ul>
<li>I tried classifying my work, and I think what I did is similar to DFS. Can you really classify it like this?</li>
<li>How can I improve this function to skip visited nodes, and still use generators ?</li>
</ul>
<h3>update:</h3>
<p>Just to save you the trouble running the code, the output of the above function is:</p>
<pre><code>>>> print list(_yield_name_dep(rule_wd))
[('A', 'B'), ('B', ('C', 'F')), ('C', ('D', 'E')), ('D', None), ('E', None), ('F', 'E'), ('E', None)]
>>> print list(set(_yield_name_dep(rule_wd)))
[('B', ('C', 'F')), ('E', None), ('D', None), ('F', 'E'), ('C', ('D', 'E')), ('A', 'B')]
</code></pre>
<p>In the meanwhile I came up with a better solution; the questions above still remain. So feel free to criticize my solution:</p>
<pre><code>visited = []
def _yield_name_dep_wvisited(rules_deps, visited):
# yield all rules by their name and dependencies
for rule, dep in rules_deps.items():
if not dep and rule not in visited:
yield rule, dep
visited.append(rule)
continue
elif rule not in visited:
yield rule, dep
visited.append(rule)
for ii in dep:
i = getattr(grules, ii)
instance = i()
if instance.depends:
new_dep={str(instance): instance.depends}
for dep in _yield_name_dep_wvisited(new_dep, visited):
if dep not in visited:
yield dep
elif str(instance) not in visited:
visited.append(str(instance))
yield str(instance), instance.depends
</code></pre>
<p>The output of the above is:</p>
<pre><code>>>>list(_yield_name_dep_wvisited(rule_wd, visited))
[('A', 'B'), ('B', ('C', 'F')), ('C', ('D', 'E')), ('D', None), ('E', None), ('F', 'E')]
</code></pre>
<p>So as you can see now the node E is visited only once.</p>
|
<p>Using the feedback from Gareth and other kind users of Stackoverflow, here is what I came up with. It is clearer, and also more general:</p>
<pre><code>def _dfs(start_nodes, rules, visited):
"""
Depth First Search
start_nodes - Dictionary of Rule with dependencies (as Tuples):
start_nodes = {'A': ('B','C')}
rules - Dictionary of Rules with dependencies (as Tuples):
e.g.
rules = {'A':('B','C'), 'B':('D','E'), 'C':('E','F'),
'D':(), 'E':(), 'F':()}
The above rules describe the following DAG:
A
/ \
B C
/ \ / \
D E F
usage:
>>> rules = {'A':('B','C'), 'B':('D','E'), 'C':('E','F'),
'D':(), 'E':(), 'F':()}
>>> visited = []
>>> list(_dfs({'A': ('B','C')}, rules, visited))
[('A', ('B', 'C')), ('B', ('D', 'E')), ('D', ()), ('E', ()),
('C', ('E', 'F')), ('F', ())]
"""
for rule, dep in start_nodes.items():
if rule not in visited:
yield rule, dep
visited.append(rule)
for ii in dep:
new_dep={ ii : rules[ii]}
for dep in _dfs(new_dep, rules, visited):
if dep not in visited:
yield dep
</code></pre>
|
python|algorithm|generator|depth-first-search
| 2 |
1,905,212 | 13,301,877 |
Efficent algorithm for creating the convex layers from a set of points
|
<p>I've got a list of points that I'm trying to generate the convex layers for in python.</p>
<p>Currently I'm simply using the following:</p>
<pre><code>def convex_layers(points):
points = sorted(set(points))
layers = []
while points:
#Create the next convex hull
hull = convex_hull(points)
#Create the new list of points
for point in hull:
points.remove(point)
#Update the list of layers
layers.append(hull)
return layers
</code></pre>
<p>Which is just to create the convex hulls one at a time. While it works, it seems a lot like trying to multiply simply by repeated addition. So what I'm asking is if there is a more efficient algorithm specifically for creating convex layers from a set of points</p>
|
<p>If you use the <a href="http://en.wikibooks.org/wiki/Algorithm_Implementation/Geometry/Convex_hull/Monotone_chain" rel="nofollow">monotone chain algorithm</a>, you will have to do the lexicographic sorting only once. Then each successive layer can be found in O(n) time. This ought to be faster than sorting for each layer. </p>
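<p>A rough sketch of that idea (my own illustration, not a drop-in library routine): sort the points lexicographically once, run monotone chain on the already-sorted list for each layer, and note that removing the hull points keeps the remaining list sorted.</p>
<pre><code>def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_of_sorted(points):
    # Monotone chain, assuming the points are already lexicographically sorted.
    if len(points) <= 2:
        return list(points)
    lower, upper = [], []
    for p in points:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(points):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convex_layers(points):
    points = sorted(set(points))  # lexicographic sort, done only once
    layers = []
    while points:
        hull = hull_of_sorted(points)
        hull_set = set(hull)
        points = [p for p in points if p not in hull_set]  # remains sorted
        layers.append(hull)
    return layers
</code></pre>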
|
python|convex-hull
| 3 |
1,905,213 | 16,614,558 |
Legend using PathCollections in matplotlib
|
<p>I'm plotting groups of circles using collections and I am not able to generate the legend of the three categories. I want:</p>
<ul>
<li>Cat 1: red circles</li>
<li>Cat 2: blue circles</li>
<li>Cat 3: yellow circles</li>
</ul>
<pre class="lang-python prettyprint-override"><code>import matplotlib
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Circle
import numpy as np
# (modified from one of the matplotlib gallery examples)
resolution = 50 # the number of vertices
N = 50
Na = 25
Nb = 10
x = np.random.random(N)
y = np.random.random(N)
radii = 0.1*np.random.random(30)
xa = np.random.random(Na)
ya = np.random.random(Na)
radiia = 0.1*np.random.random(50)
xb = np.random.random(Nb)
yb = np.random.random(Nb)
radiib = 0.1*np.random.random(60)
patches = []
patchesa = []
patchesb = []
for x1,y1,r in zip(x, y, radii):
circle = Circle((x1,y1), r)
patches.append(circle)
for x1,y1,r in zip(xa, ya, radiia):
circle = Circle((x1,y1), r)
patchesa.append(circle)
for x1,y1,r in zip(xb, yb, radiib):
circle = Circle((x1,y1), r)
patchesb.append(circle)
fig = plt.figure()
ax = fig.add_subplot(111)
colors = 100*np.random.random(N)
p = PatchCollection(patches, cmap=matplotlib.cm.jet, alpha=0.4, label= "Cat 1", facecolor="red")
pa = PatchCollection(patchesa, cmap=matplotlib.cm.jet, alpha=0.3, label= "Cat 2", facecolor="blue")
pb = PatchCollection(patchesb, cmap=matplotlib.cm.jet, alpha=0.4, label= "Cat 3", facecolor="yellow")
#p.set_array(colors)
ax.add_collection(p)
ax.add_collection(pa)
ax.add_collection(pb)
ax.legend(loc = 2)
plt.colorbar(p)
print p.get_label()
plt.show()
</code></pre>
<p><code>PathCollection</code>s are not iterable objects, so trying to generate the legend the following way:</p>
<pre><code>legend([p, pa, pb], ["cat 1", "2 cat", "cat 3"])
</code></pre>
<p>does not work.</p>
<p>How can I get the legend to appear?</p>
<p>My system runs on Python 2.7 and Matplotlib 1.2.0_1.</p>
<p>Note that the command <code>print p.get_label()</code> shows that the object has an associated label, but matplotlib is unable to mount the legend.</p>
|
<p>One possible solution is to add <code>Line2D</code> objects to use in the legend, also known as using proxy artists. To achieve this you have to add <code>from matplotlib.lines import Line2D</code> to your script, and then you can replace this code:</p>
<pre><code>ax.legend(loc = 2)
plt.colorbar(p)
print p.get_label()
</code></pre>
<p>with this:</p>
<pre><code>circ1 = Line2D([0], [0], linestyle="none", marker="o", alpha=0.4, markersize=10, markerfacecolor="red")
circ2 = Line2D([0], [0], linestyle="none", marker="o", alpha=0.3, markersize=10, markerfacecolor="blue")
circ3 = Line2D([0], [0], linestyle="none", marker="o", alpha=0.4, markersize=10, markerfacecolor="yellow")
plt.legend((circ1, circ2, circ3), ("Cat 1", "Cat 2", "Cat 3"), numpoints=1, loc="best")
</code></pre>
<p><img src="https://i.stack.imgur.com/1ESUv.png" alt="enter image description here"></p>
|
python|matplotlib
| 14 |
1,905,214 | 54,332,899 |
pyserial EOT as terminator
|
<p>I read some RFID Tags with <code><STX>RFID String<EOT></code></p>
<p>How can I use <code>read_until</code> with this EOT character? I tried this:</p>
<pre><code>serResponse = self.ser.read_until(chr(4))
</code></pre>
<p>This didn't work; I only got the string after a timeout.</p>
<p>[EDIT]</p>
<pre><code>while True:
for c in ser.read():
line.append(c)
if c == '\n':
print("Line: " + ''.join(line))
line = []
break
</code></pre>
<p>How can I change the <code>'\n'</code> check to look for the EOT or STX char?</p>
<p>This is the <code>print(c)</code> output of one tag: 2 82 51 48 52 50 70 65 50 49 65 49 4</p>
<p>I thought I could check for <code>c == '4'</code> or <code>c == 4</code>, but it didn't work.</p>
|
<p>This snippet worked for me: define the EOL as a bytearray, read one byte at a time into a bytearray, and check whether its tail matches the EOL.</p>
<pre><code>eol = bytearray([4])
leneol = len(eol)
line = bytearray()
while True:
c = self.ser.read(1)
if c:
line += c
if line[-leneol:] == eol:
break
else:
break
</code></pre>
|
python|pyserial
| 1 |
1,905,215 | 9,162,030 |
Storing and evaluating nested string elements
|
<p>Given the <code>exampleString = "[9+[7*3+[1+2]]-5]"</code>
How does one extract and store elements enclosed by [] brackets, and then evaluate them in order?</p>
<pre><code>1+2 --+
|
7*3+3 --+
|
9+24-5
</code></pre>
<p>Does one have to create some kind of nested list? Sorry for this somewhat <em>broad</em> question and bad English.</p>
<p>I see, this question is really too broad... Is there a way to create a nested list from that string? Or maybe I should simply do a regex search for every element and evaluate each? The nested list option (if it exists) would IMO be a "cleaner" approach than looping over the same string and evaluating until there are no [] brackets.</p>
|
<p>Have a look at <a href="http://pyparsing.wikispaces.com/" rel="nofollow">pyparsing</a> module and some examples they have (<a href="http://pyparsing.wikispaces.com/file/view/fourFn.py" rel="nofollow">four function calculator</a> is something you want and more).</p>
<p>PS. In case the size of that code worries you, look again: most of this can be stripped. The lower half are just tests. The upper part can be stripped from things like supporting e/pi/... constants, trigonometric funcitons, etc. I'm sure you can cut it down to 10 lines for what you need.</p>
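<p>If you do want to stay dependency-free, the "evaluate the innermost brackets first" idea from the question can be sketched in a few lines (this uses <code>eval</code>, so it is only safe for trusted input):</p>
<pre><code>import re

def evaluate_nested(s):
    # Repeatedly replace the innermost [...] group with its evaluated value.
    innermost = re.compile(r'\[([^\[\]]+)\]')
    while '[' in s:
        s = innermost.sub(lambda m: str(eval(m.group(1))), s, count=1)
    return eval(s)

print(evaluate_nested("[9+[7*3+[1+2]]-5]"))  # 28
</code></pre>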
|
python|parsing
| 3 |
1,905,216 | 39,291,136 |
OpenCV Python bindings for cv::ml::SVM::trainAuto
|
<p>I want to estimate the optimal C and gamma parameters for my SVM training in OpenCV. If I understand the <a href="http://docs.opencv.org/master/d1/d2d/classcv_1_1ml_1_1SVM.html#a7691fe53ff9b30f77afac2c50c29609d" rel="nofollow">master (3.1-dev) docs</a> correctly, <code>cv::ml::SVM::trainAuto</code> would fit my needs perfectly (it uses cross-validation to estimate the best parameters). </p>
<p>But I can't find the Python bindings for trainAuto. I tried to find them using:</p>
<pre><code>>>> import cv2
>>> cv2.__version__
'3.1.0-dev'
>>> help(cv2.ml)
</code></pre>
<p>or in the ml_SVM object</p>
<pre><code>>>> help(cv2.ml.SVM_create())
</code></pre>
<p>But I only found </p>
<pre><code>SVM_create(...)
SVM_create() -> retval
SVM_load(...)
SVM_load(filepath) -> retval
</code></pre>
<p>in the cv2.ml module and </p>
<pre><code>train(...)
| train(trainData[, flags]) -> retval or train(samples, layout, responses) -> retval
</code></pre>
<p>in the ml_SVM object. Is there another "python way" for trainAuto or are the bindings moved/missing? I'm using python 3.4 on Ubuntu 15.10.</p>
|
<p>It is a current open issue with OpenCV for Python: see here <a href="https://github.com/opencv/opencv/issues/7224" rel="nofollow noreferrer">https://github.com/opencv/opencv/issues/7224</a>.</p>
|
opencv|svm|python-bindings
| -1 |
1,905,217 | 39,299,488 |
TensorFlow: Understanding the parameters returned by evaluate
|
<p>I've created a linear classifier model using TensorFlow. When I evaluate the model, the following is returned:</p>
<pre><code>accuracy: 0.975183
eval_auc: 0.534855
loss: 0.115239
</code></pre>
<p>Could somebody please explain to me how eval_auc and loss are calculated? Thanks!</p>
|
<p><code>eval_auc</code> must be the AUC = Area Under the ROC Curve.
See explanation, for example, <a href="https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it">here</a></p>
<p><code>loss</code> must be logloss = logarithmic loss.
See explanation, for example, <a href="https://www.r-bloggers.com/making-sense-of-logarithmic-loss/" rel="nofollow noreferrer">here</a></p>
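<p>For intuition, logarithmic loss for binary labels can be computed by hand roughly like this (a sketch of the standard definition, not of TensorFlow's exact internal implementation):</p>
<pre><code>import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    # -mean( y*log(p) + (1-y)*log(1-p) ), with predictions clipped away from 0 and 1
    p = np.clip(y_pred, eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

print(log_loss([1, 0, 1], [0.9, 0.2, 0.8]))  # roughly 0.184
</code></pre>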
|
machine-learning|tensorflow
| 3 |
1,905,218 | 39,024,795 |
EasyGui fileopenbox() Error. Has TKinter Changed? [Python]
|
<p>Keeping it relatively simple. I'm trying to open a fileopenbox to select a file using easygui.</p>
<pre><code>easygui.fileopenbox()
</code></pre>
<p>And easyGUI throws this error</p>
<pre><code>'module' object has no attribute 'askopenfilename'
</code></pre>
<p>The Stack Trace</p>
<p>Traceback (most recent call last):</p>
<pre><code>File "C:\Users\Administrator\Desktop\test.py", line 377, in <module>
easygui.fileopenbox()
File "C:\Python27\lib\site-packages\easygui\boxes\fileopen_box.py", line 103, in fileopenbox
func = ut.tk_FileDialog.askopenfilenames if multiple else ut.tk_FileDialog.askopenfilename
AttributeError: 'module' object has no attribute 'askopenfilename'
</code></pre>
<p>What's going on here?</p>
<p>Nothing has changed on my system at all, but it almost looks like for some reason Python can't find this Tkinter function.</p>
<p>Has anyone come across this?
Thanks!</p>
<p>Edit: An additional screenshot showing that the method is not found</p>
<p><a href="https://gyazo.com/8b9ba0f6c23561d13babe7ce4c8b67a1" rel="nofollow">https://gyazo.com/8b9ba0f6c23561d13babe7ce4c8b67a1</a></p>
|
<p>Try uninstalling your <code>easygui</code> package and installing the latest version.
Also try updating your <code>Python</code> version.</p>
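<p>For the reinstall step, assuming your copy of EasyGui is managed by pip, something like:</p>
<pre><code>pip uninstall easygui
pip install --upgrade easygui
</code></pre>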
|
python|tkinter|easygui
| 0 |
1,905,219 | 52,703,161 |
mysql_config not found with MySQL in Docker
|
<p>I have MySQL 5.7 installed on Docker running perfectly and python 3.7 installed locally.</p>
<p>I tried to install the flask-mysqldb using the command </p>
<p><code>pip install flask-mysqldb</code> </p>
<p>and I received an error </p>
<p><code>OSError: mysql_config not found</code></p>
<p>I never had to install a MySQL client connector on my machine and never had any problem connecting to any system. </p>
<p>Is this related to my Docker config?
How can I solve this issue?</p>
|
<p>Because the official image of mysql:5.7 does not contain <strong>libmysqlclient-dev</strong>,</p>
<p>just install this package and try again.</p>
<pre><code>docker exec -it my_db bash
apt-get update
apt-get install libmysqlclient-dev
</code></pre>
<p>If there is an issue with pip, like the one I faced in testing, then run:</p>
<pre><code>pip install --upgrade setuptools
</code></pre>
<p><a href="https://hub.docker.com/_/mysql/" rel="nofollow noreferrer">https://hub.docker.com/_/mysql/</a></p>
<p><a href="https://github.com/docker-library/mysql/blob/9d1f62552b5dcf25d3102f14eb82b579ce9f4a26/5.7/Dockerfile" rel="nofollow noreferrer">https://github.com/docker-library/mysql/blob/9d1f62552b5dcf25d3102f14eb82b579ce9f4a26/5.7/Dockerfile</a></p>
|
python|mysql|python-3.x|docker|pip
| 1 |
1,905,220 | 47,773,485 |
How can I find visited nodes called in functions?
|
<p>Here's my program for a binary search tree. All the functions work in the uploading system except the last one: I somehow have to find out which of the nodes I visited while calling the previous functions. Any ideas?</p>
<pre><code>class Node:
def __init__(self, value):
self.left = None
self.right = None
self.data = value
class BinarySearchTree:
def __init__(self):
self.root = None
def insert(self, value):
if self.root is None:
self.root = Node(value)
else:
self._insert(value, self.root)
def _insert(self, value, curNode):
if value < curNode.data:
if curNode.left is None:
curNode.left = Node(value)
else:
self._insert(value, curNode.left)
else:
if curNode.right is None:
curNode.right = Node(value)
else:
self._insert(value, curNode.right)
def fromArray(self, array):
for i in range(len(array)-1):
value = array[i]
self.insert(value)
i += 1
def search(self, value):
if self.root is not None:
return self._search(value, self.root)
else:
return False
def _search(self, value, curNode):
if value == curNode.data:
return True
elif value < curNode.data and curNode.left is not None:
self._search(value, curNode.left)
elif value > curNode.data and curNode.right is not None:
self._search(value, curNode.right)
else:
return False
def min(self):
curNode = self.root
while curNode.left is not None:
curNode = curNode.left
return curNode
def max(self):
curNode = self.root
while curNode.right is not None:
curNode = curNode.right
return curNode
def visitedNodes(self):
pass
</code></pre>
<p>And it has to return the values of the visited nodes in a list.</p>
|
<p>The straight forward answer would be to add a <code>visited</code> flag to each node, that is explicitly flipped in each of your functions when a <code>Node</code> is visited:</p>
<pre><code>class Node:
def __init__(self, value):
self.left = None
self.right = None
self.data = value
self.visited = False
</code></pre>
<p>and then:</p>
<pre><code>def _search(self, value, curNode):
curNode.visited = True
if value == curNode.data:
return True
# etc., unchanged
</code></pre>
<p>Same for <code>min</code> and <code>max</code>, and finally:</p>
<pre><code>def visitedNodes(self, current=None, accumulator=None):
    # On the outermost call, start at the root with a fresh list.
    if accumulator is None:
        accumulator = []
        current = self.root
    if current is None:
        return accumulator
    if current.visited:
        accumulator.append(current.data)  # collect the visited node's value
    self.visitedNodes(current.left, accumulator)
    self.visitedNodes(current.right, accumulator)
    return accumulator
</code></pre>
<p>This is just one implementation, there are many other ways to do this. I also assume this function, which traverses the whole tree, should <strong>not</strong> set the <code>visited</code> flag.</p>
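<p>A small usage sketch under those assumptions (the sample values are arbitrary):</p>
<pre><code>bst = BinarySearchTree()
bst.fromArray([8, 3, 10, 1, 6, 14, 0])  # note: fromArray as written skips the last element
bst.search(6)
print(bst.visitedNodes())               # values of the nodes touched while searching for 6
</code></pre>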
|
python|binary-search-tree
| 0 |
1,905,221 | 47,966,310 |
Django use dynamic choices in model by foreignkey
|
<p>I have a model; for the purposes of this question only two fields matter: one is a ForeignKey to another model, Plan, and the other is a choice field, as shown below:</p>
<pre><code>class MyModel(models.Model):
CHOICES = (
(1, 'A1'),
(2, 'A2'),
(3, 'B1'),
(4, 'B2'),
)
category = models.IntegerField(choices=CHOICES, default=3)
has_plan = models.ForeignKey(Plan, on_delete=models.CASCADE)
</code></pre>
<p>Below is my Plan model:</p>
<pre><code>class Plan(models.Model):
PLAN_CHOICES = [(1, "Individual"), (2, "Company")]
plan_name = models.IntegerField(choices=PLAN_CHOICES, default=2)
plan_validity = models.IntegerField(default=180, help_text="Days after plan expires")
</code></pre>
<p>I want to update the <code>CHOICES</code> available in the <code>category</code> field of <code>MyModel</code> depending on the selection of <code>has_plan</code>.
For instance, if <code>has_plan</code> points to a <code>Plan</code> object whose <code>plan_name</code> is (2, "Company"), then <code>CHOICES</code> should be updated to:</p>
<pre><code>CHOICES = (
(1, 'A1'),
(2, 'A2'),
(3, 'A3'),
(4, 'B1'),
(5, 'B2'),
)
</code></pre>
<p>I can achieve this in views with the help of form fields, but in that case I have to handle it for both the view and the admin, hence I am looking for a better and simpler way to achieve this.</p>
<p>I am able to raise an error with the <code>clean()</code> method in the model, but I want to update <code>CHOICES</code> instead of just raising an exception.</p>
<hr />
<h1>Update:</h1>
<p>During creation of the first entry I have set up a multi-part form and achieved a solution for creation, but for editing in the Django admin and in a custom view it seems that I have to handle both separately.
Instead of doing that, I want a way to update it once, so that for create and edit in either the Django admin or a custom view I just have to override a single method.</p>
|
<p>If you want it to be interactive (i.e. when user changes <code>has_plan</code> in UI, category available choices change) you need to implement some client side logic. If its the case I suggest that you to just add a <code>clean</code> method to your model to check correctness of <code>category</code>, <code>has_plan</code> pair. <code>clean</code> method will be called in Django admin model forms too.</p>
<p>Update question/comment if somehow <code>has_plan</code> has a fixed value and you need another solution.</p>
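<p>A minimal sketch of that <code>clean()</code> approach, added to the <code>MyModel</code> from the question (the mapping of plan names to allowed category values is a made-up placeholder):</p>
<pre class="lang-py prettyprint-override"><code>from django.core.exceptions import ValidationError
from django.db import models

# Hypothetical mapping: plan_name value -> allowed category values
ALLOWED_CATEGORIES = {1: {1, 2, 3, 4}, 2: {1, 2, 3, 4, 5}}

class MyModel(models.Model):
    # ... fields as in the question ...

    def clean(self):
        allowed = ALLOWED_CATEGORIES.get(self.has_plan.plan_name, set())
        if self.category not in allowed:
            raise ValidationError({'category': 'Invalid category for the selected plan.'})
</code></pre>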
|
python|django|python-3.x|django-models|django-2.0
| 0 |
1,905,222 | 47,575,469 |
AttributeError: 'Graph' object has no attribute 'density'
|
<p>I'm trying to learn how to work with NetworkX and I've run into a problem.
Although the functions for nodes and edges work fine, the ones for the whole graph don't, resulting in an AttributeError. Am I using them wrong, or can you see some other problem?</p>
<p>The first two works but the third doesn't.</p>
<pre><code>num_of_nodes = 0
num_of_nodes = graph.number_of_nodes()
print num_of_nodes
num_of_edges = 0
num_of_edges = graph.number_of_edges()
print num_of_edges
density = 0
density = graph.density()
print density
</code></pre>
<p>Thanks.</p>
<hr>
<p>imports:</p>
<pre><code>import networkx as nx
from IPython.display import HTML
import numpy as np
import urllib3
import time
import operator
import socket
import cPickle
import re # regular expressions
from pandas import Series
import pandas as pd
from pandas import DataFrame
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
</code></pre>
|
<p><strong>edit</strong>(this answer is basically the same as glibdud put in his comment - @glibdud, feel free to add your own version of this answer, and I'll delete mine)</p>
<hr>
<p>So let's look first at <code>number_of_nodes</code>. Here's the <a href="https://networkx.github.io/documentation/stable/reference/generated/networkx.classes.function.number_of_nodes.html?highlight=number_of_nodes#networkx.classes.function.number_of_nodes" rel="nofollow noreferrer">documentation</a>. You call it like <code>nx.number_of_nodes(G)</code>. If you check the <a href="https://networkx.github.io/documentation/stable/_modules/networkx/classes/function.html#number_of_nodes" rel="nofollow noreferrer">source</a>, it simply calls <code>G.number_of_nodes()</code>. So notice - these are two different things (though they use the same name), and produce the same output. In one, <code>G</code> is the argument of the function <code>number_of_nodes</code>, in the other, <code>number_of_nodes</code> is a method of <code>G</code>.</p>
<p>However, <code>density</code> does not exist as a method of <code>G</code>. It is simply a function of networkx. Here's the <a href="https://networkx.github.io/documentation/stable/reference/generated/networkx.classes.function.density.html?highlight=density#networkx.classes.function.density" rel="nofollow noreferrer">documentation</a>. You call it like <code>nx.density(G)</code>.</p>
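<p>So the minimal fix to the snippet in the question is (keeping its Python 2 print style):</p>
<pre><code>density = nx.density(graph)  # module-level function, not a Graph method
print density
</code></pre>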
|
python|networkx
| 2 |
1,905,223 | 37,188,623 |
Ubuntu, how to install OpenCV for python3?
|
<p>I want to install OpenCV for python3 in ubuntu 16.04. Fist I tried running <code>sudo apt-get install python3-opencv</code> which is how I pretty much install all of my python software. This could not find a repository. The install does work however if I do <code>sudo apt-get install python-opencv</code> this issue with this is that by not adding the three to python it installs for python 2 which I do not use. I would really perfer not to have to build and install from source so is there a way I can get a repository? I also tried installing it with pip3 and it could not find it either.</p>
|
<p>Well, this will be a lengthy answer, so let's start:</p>
<p><strong>Step 1: Install prerequisites:</strong>
Upgrade any pre-installed packages:</p>
<pre><code>$ sudo apt-get update
$ sudo apt-get upgrade
</code></pre>
<p>Install developer tools used to compile OpenCV 3.0:</p>
<pre><code>$ sudo apt-get install build-essential cmake git pkg-config
</code></pre>
<p>Install libraries and packages used to read various image and videos formats from disk:</p>
<pre><code>$ sudo apt-get install libjpeg8-dev libtiff5-dev libpng-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
</code></pre>
<p>Install GTK so we can use OpenCV’s GUI features:</p>
<pre><code>$ sudo apt-get install libgtk2.0-dev
</code></pre>
<p>Install packages that are used to optimize various functions inside OpenCV, such as matrix operations:</p>
<pre><code>$ sudo apt-get install libatlas-base-dev gfortran
</code></pre>
<p><strong>Step 2: Setup Python (Part 1)</strong></p>
<p>Let’s download pip , a Python package manager, installed for Python 3:</p>
<pre><code>$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py
</code></pre>
<p>Let’s use our fresh pip3 install to setup virtualenv and virtualenvwrapper :</p>
<pre><code>$ sudo pip3 install virtualenv virtualenvwrapper
</code></pre>
<p>Now we can update our ~/.bashrc file (place at the bottom of the file):</p>
<pre><code># virtualenv and virtualenvwrapper
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
export WORKON_HOME=$HOME/.virtualenvs
source /usr/local/bin/virtualenvwrapper.sh
$ source ~/.bashrc
$ mkvirtualenv cv
</code></pre>
<p><strong>Step 2: Setup Python (Part 2)</strong></p>
<p>we’ll need to install the Python 3.4+ headers and development files:</p>
<pre><code>$ sudo apt-get install python3.4-dev
</code></pre>
<p>OpenCV represents images as NumPy arrays, so we need to install NumPy into our cv virtual environment:</p>
<pre><code>$ pip install numpy
</code></pre>
<p><strong>Step 3: Build and install OpenCV 3.0 with Python 3.4+ bindings</strong></p>
<pre><code>$ cd ~
$ git clone https://github.com/opencv/opencv.git
$ cd opencv
$ git checkout 3.0.0
$ cd ~
$ git clone https://github.com/opencv/opencv_contrib.git
$ cd opencv_contrib
$ git checkout 3.0.0
</code></pre>
<p>Time to setup the build:</p>
<pre><code>$ cd ~/opencv
$ mkdir build
$ cd build
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_C_EXAMPLES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON ..
</code></pre>
<p>Let's start OpenCV compile process :</p>
<pre><code>$ make -j4
</code></pre>
<p>Assuming OpenCV 3.0 compiled without error, you can now install it on your system:</p>
<pre><code>$ sudo make install
$ sudo ldconfig
</code></pre>
<p><strong>Step 4: Sym-link OpenCV 3.0</strong></p>
<p>If you’ve reached this step, OpenCV 3.0 should now be installed in <code>/usr/local/lib/python3.4/site-packages/</code>.</p>
<p>Here, our OpenCV bindings are stored under the name <code>cv2.cpython-34m.so</code></p>
<p>However, in order to use OpenCV 3.0 within our cv virtual environment, we first need to sym-link OpenCV into the site-packages directory of the cv environment, like this: (Be sure to take note of <code>cv2.cpython-34m.so</code>)</p>
<pre><code>$ cd ~/.virtualenvs/cv/lib/python3.4/site-packages/
$ ln -s /usr/local/lib/python3.4/site-packages/cv2.cpython-34m.so cv2.so
</code></pre>
<p>Notice how I am changing the name from cv2.cpython-34m.so to cv2.so — this is so Python can import our OpenCV bindings using the name cv2 .</p>
<p><strong>Step 5: Test out the OpenCV 3.0 and Python 3.4+ install</strong></p>
<pre><code>$ workon cv
$ python
>>> import cv2
>>> cv2.__version__
'3.0.0'
</code></pre>
<p>Hope that helps. Also, credit to Adrian Rosebrock on his <a href="http://www.pyimagesearch.com/2015/07/20/install-opencv-3-0-and-python-3-4-on-ubuntu/" rel="noreferrer">post</a>. It worked for me as a charm.</p>
|
python|opencv|ubuntu
| 47 |
1,905,224 | 34,418,667 |
Get table names from POSTGIS database with PyQGIS
|
<p>How can I access the table names inside a PostGIS database with PyQGIS?
I am trying to load a layer from a PostGIS database. I can do it if I know the name of the table I am going to use.</p>
|
<p>If you want a list of table names from the current database:</p>
<pre><code>from PyQt4.QtSql import *
db = QSqlDatabase.addDatabase("QPSQL")
db.setHostName("localhost")
db.setDatabaseName("postgres")
db.setUserName("postgres")
db.setPassword("postgres")
db.open()
names = db.tables(QSql.Tables)
print names
</code></pre>
|
python|postgresql|postgis|qgis|database-table
| 4 |
1,905,225 | 66,141,510 |
Python Script, Can't Create or Open Text File
|
<p>I'm trying to open a file and write some text to it. I'm using Windows, and Python 3.8.7 (I tried 3.9.1 as well). When I am in a Windows command prompt, and run my script: "python filewrite.py", I get a print statement to let me know it actually ran, but no file is created.</p>
<p>When I open the interpreter by calling "python" and then copy/paste my code from my script, a file is created with the appropriate text in it.</p>
<p>I've tried opening and closing files both ways, using "with" and just "f.open", but neither has worked. I believe it is related to my setup on Windows, but I've tried uninstalling every version of Python and reinstalling, and no luck.</p>
<p>Tried it this way</p>
<pre><code>f = open("F:\\Coding\\file.txt", 'w+')
f.write('Python loves you!')
f.close()
print("We tried")
</code></pre>
<p>and this way:</p>
<pre><code>with open("F:\\Coding\\file.txt", 'w+') as f:
f.write('Python loves you!')
print("We tried")
</code></pre>
<p>Neither work. I've also tried it without the absolute path, which has the same behavior. I've tried it with and without the '+', I've tried creating the file first and appending, with no luck. The fact that it works in the interpreter, and not when run as a script is my biggest clue as to what is wrong. I actually want to incorporate writing to a file in a more complicated script, but I can't even do this simple thing first. Any help would be much appreciated.</p>
|
<p>I've solved the issue. Turns out my Comodo Antivirus was auto-containing the created files. I told the antivirus to ignore that area for auto-containment, and my scripts can now create files.</p>
|
python|file-io
| 0 |
1,905,226 | 7,239,185 |
Swig-generated Constants in Python
|
<p>I'm using SWIG to create a Python interface to my C++ class library. </p>
<p><strong>I can't work out how to utilise the constants created by SWIG in Python</strong>. I can't even print their value.</p>
<p>For example, both these print statements in Python fail silently...</p>
<pre><code>print CONST1
print repr(CONST1)
</code></pre>
<p>In <strong>C++</strong>, I have this</p>
<pre><code>#define CONST1 0x20000
const int CONST2 = 0x20000; // No different to #define in SWIG-generated code.
</code></pre>
<p>If I look at the <strong>Python module</strong> created by SWIG it has something like this...</p>
<pre><code>CONST1 = _theCPPlibrary.CONST1
</code></pre>
<p>Additionally, I tried using the SWIG %constant directive as an experiment (I don't really want to use this if I can avoid it, as it involves duplicating my constants in the SWIG input file). The %constant directive also gives the same results.</p>
<p>I'm a C++ programmer, and a noob in Python.</p>
|
<p>After the build, you will get a Python source file, theCPPlibrary.py, and a pyd file, _theCPPlibrary.pyd. You must import the Python module first:</p>
<pre><code>import theCPPlibrary
</code></pre>
<p>CONST1 is defined by #define; it can be accessed by:</p>
<pre><code>print theCPPlibrary.CONST1
</code></pre>
<p>CONST2 is defined by const, is't a global variable, access it by:</p>
<pre><code>print theCPPlibrary.cvar.CONST2
</code></pre>
|
python|swig
| 1 |
1,905,227 | 38,611,742 |
Google app engine: object has no attribute ToMessage
|
<p>I am trying to implement a service which checks whether the logged-in user is in the datastore; if yes it returns True, if not it returns False.
Here is the code I am using:</p>
<pre><code>import endpoints
from google.appengine.ext import ndb
from protorpc import remote
from protorpc import messages
from endpoints_proto_datastore.ndb import EndpointsModel
from google.appengine.api import users
class AuthRes(messages.Message):
message = messages.StringField(1)
class UserModel(EndpointsModel):
user = ndb.UserProperty()
@endpoints.api(name='myapi', version='v1', description='My Little API')
class MyApi(remote.Service):
@UserModel.method(path='myuser', http_method='GET', name='myuser.check')
def UserCheck(self, cls):
user = users.get_current_user()
if user:
myuser = cls.query().filter(cls.user.user_id() == user.user_id()).get()
if not myuser:
return AuthRes(message="False")
else:
return AuthRes(message="True")
else:
return AuthRes(message="False")
application = endpoints.api_server([MyApi], restricted=False)
</code></pre>
<p>I always get <code>'AuthRes' object has no attribute 'ToMessage'</code></p>
|
<p>I believe instead of this:</p>
<pre><code>@UserModel.method(path='myuser', http_method='GET', name='myuser.check')
</code></pre>
<p>you want this:</p>
<pre><code>from protorpc import message_types # add at the top
@endpoints.method(message_types.VoidMessage, AuthRes, path='myuser', http_method='GET', name='myuser.check')
</code></pre>
|
python-2.7|google-app-engine|google-cloud-endpoints
| 0 |
1,905,228 | 68,196,476 |
Function: make argument executable
|
<p>So I have the following code. It should export an Excel file, but the naming does not work, though the contents are alright.
The file is named <strong>"Empty DataFrame
Columns: [Column1, Column2]
Index: [].xlsx"</strong></p>
<p>It should be named testDF.xlsx. Is there an easy solution?</p>
<pre><code>import pandas as pd
ExportExcelParam = 1;
testDF = pd.DataFrame(columns=['Column1','Column2'], dtype=object)
def ExcelExportDF(Dataframe, ExportExcelParam):
if ExportExcelParam == 1:
Filename = str(Dataframe) + ".xlsx"
Dataframe.to_excel(Filename)
ExcelExportDF(testDF, ExportExcelParam)
</code></pre>
|
<p>That is because <code>testDF</code> is not a string; it is just the name of your variable. That variable contains a <code>DataFrame</code>, so when you use <code>str()</code> on it, it will try to provide a reasonable string that represents the <code>DataFrame</code> itself.</p>
<p>An easy solution is to pass the name as a string in an additional parameter. Yes, it requires typing the name twice, but it is probably the safest and most straightforward option, unless you can retrieve the name from somewhere else.</p>
<pre class="lang-python prettyprint-override"><code>def ExcelExportDF(Dataframe, Filename, ExportExcelParam):
if ExportExcelParam == 1:
Dataframe.to_excel(Filename + ".xlsx")
ExcelExportDF(testDF, 'testDF', ExportExcelParam)
</code></pre>
<p><strong>Edit:</strong> Just saw the comments. If you know what you are doing, maybe <a href="https://stackoverflow.com/a/50620134/12661819">this</a> could work.</p>
|
python|function|arguments
| 1 |
1,905,229 | 25,942,092 |
Unique Salt per User using Flask-Security
|
<p>After reading here a bit about salting passwords, it seems that it's best to use a unique salt for each user. I'm working on implementing Flask-Security at the moment, and from the documentation it appears you can only set a global salt, i.e. SECURITY_PASSWORD_SALT = 'thesalt'</p>
<p>Question: How would one go about making a unique salt for each password? </p>
<p>Thanks!</p>
<p>edit: from the docs on Flask-Security, I found this, which seems to again suggest that this module only uses a single salt for all passwords out of the box. </p>
<pre><code>flask_security.utils.get_hmac(password)
Returns a Base64 encoded HMAC+SHA512 of the password signed with the salt
specified by SECURITY_PASSWORD_SALT.
</code></pre>
|
<p>Yes, Flask-Security does use per-user salts by design if using bcrypt (and other schemes such as des_crypt, pbkdf2_sha256, pbkdf2_sha512, sha256_crypt, sha512_crypt).</p>
<p>The config for 'SECURITY_PASSWORD_SALT' is only used for HMAC encryption. If you are using bcrypt as the hashing algorithm Flask-Security uses passlib for hashing and it generates a random salt during hashing. This confustion is noted in issue 268: <a href="https://github.com/mattupstate/flask-security/issues/268">https://github.com/mattupstate/flask-security/issues/268</a></p>
<p>It can be verified in the code, walking from encrypt to passlib:</p>
<p>flask_security/utils.py (lines 143-151, 39, and 269)</p>
<pre><code>def encrypt_password(password):
...
return _pwd_context.encrypt(signed)
_pwd_context = LocalProxy(lambda: _security.pwd_context)
</code></pre>
<p>flask_security/core.py (269, 244-251, and 18)</p>
<pre><code>pwd_context=_get_pwd_context(app)
def _get_pwd_context(app):
...
return CryptContext(schemes=schemes, default=pw_hash, deprecated=deprecated)
from passlib.context import CryptContext
</code></pre>
<p>and finally from: <a href="https://pythonhosted.org/passlib/password_hash_api.html#passlib.ifc.PasswordHash.encrypt">https://pythonhosted.org/passlib/password_hash_api.html#passlib.ifc.PasswordHash.encrypt</a></p>
<blockquote>
<p>note that each call to encrypt() generates a new salt,</p>
</blockquote>
|
python|encryption|flask|salt|flask-security
| 15 |
1,905,230 | 60,166,248 |
How to get Cartesian product of two iterables when one of them is infinite
|
<p>Let's say I have two iterables, one finite and one infinite:</p>
<pre><code>import itertools
teams = ['A', 'B', 'C']
steps = itertools.count(0, 100)
</code></pre>
<p>I was wondering if I can avoid the nested for loop and use one of the infinite iterators from the <a href="https://docs.python.org/3/library/itertools.html" rel="nofollow noreferrer"><code>itertools</code></a> module like <code>cycle</code> or <code>repeat</code> to get the Cartesian product of these iterables.</p>
<p><strong>The loop should be infinite because the stop value for <code>steps</code> is unknown upfront.</strong></p>
<p>Expected output:</p>
<pre><code>$ python3 test.py
A 0
B 0
C 0
A 100
B 100
C 100
A 200
B 200
C 200
etc...
</code></pre>
<p>Working code with nested loops:</p>
<pre><code>from itertools import count, cycle, repeat
STEP = 100
LIMIT = 500
TEAMS = ['A', 'B', 'C']
def test01():
for step in count(0, STEP):
for team in TEAMS:
print(team, step)
if step >= LIMIT: # Limit for testing
break
test01()
</code></pre>
|
<p>Try <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow noreferrer"><code>itertools.product</code></a></p>
<pre><code>from itertools import product
for i, j in product(range(0, 501, 100), 'ABC'):
print(j, i)
</code></pre>
<p>As the docs say, <code>product(A, B)</code> is equivalent to <code>((x,y) for x in A for y in B)</code>.
As you can see, <code>product</code> yields tuples one at a time, which means it behaves like a generator and does not create the full result list in memory in order to work properly.</p>
<blockquote>
<p>This function is roughly equivalent to the following code, except that the actual implementation does not build up intermediate results in memory:</p>
<pre><code>def product(*args, **kwds):
# product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy
# product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111
pools = map(tuple, args) * kwds.get('repeat', 1)
result = [[]]
for pool in pools:
result = [x+[y] for x in result for y in pool]
for prod in result:
yield tuple(prod)
</code></pre>
</blockquote>
<p>But you can't use <code>itertools.product</code> for infinite loop due to a <a href="https://bugs.python.org/issue10109" rel="nofollow noreferrer">known issue</a>:</p>
<blockquote>
<p>According to the documentation, itertools.product is equivalent to
nested for-loops in a generator expression. But,
itertools.product(itertools.count(2010)) is not.</p>
<pre><code>>>> import itertools
>>> (year for year in itertools.count(2010))
<generator object <genexpr> at 0x026367D8>
>>> itertools.product(itertools.count(2010))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError
</code></pre>
<p>The input to itertools.product must be a finite sequence of finite
iterables.</p>
</blockquote>
<p>For infinite loop, you can use <a href="https://stackoverflow.com/a/60166626/6251742">this code</a>.</p>
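<p>If you only need this particular pairing (rather than a general product of arbitrary infinite iterables), a plain generator expression is lazy and handles the infinite outer iterator fine. A minimal sketch:</p>
<pre><code>from itertools import count, islice

TEAMS = ['A', 'B', 'C']
pairs = ((team, step) for step in count(0, 100) for team in TEAMS)

# never exhaust an infinite generator; slice off only what you need
for team, step in islice(pairs, 9):
    print(team, step)
</code></pre>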
|
python|itertools|cartesian-product
| 5 |
1,905,231 | 44,330,548 |
How to deal with Imbalanced Dataset for Multi Label Classification
|
<p>I was wondering how to penalize less represented classes more than other classes when dealing with a really imbalanced dataset (10 classes over about 20000 samples, but here is the number of occurrences for each class: [10868 26 4797 26 8320 26 5278 9412 4485 16172]).</p>
<p>I read about the Tensorflow function : weighted_cross_entropy_with_logits (<a href="https://www.tensorflow.org/api_docs/python/tf/nn/weighted_cross_entropy_with_logits" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/nn/weighted_cross_entropy_with_logits</a>) but I am not sure I can use it for a multi label problem.</p>
<p>I found a post that sum up perfectly the problem I have (<a href="https://stackoverflow.com/questions/43152660/neural-network-for-imbalanced-multi-class-multi-label-classification?noredirect=1&lq=1">Neural Network for Imbalanced Multi-Class Multi-Label Classification</a>) and that propose an idea but it had no answers and I thought the idea might be good :)</p>
<p>Thank you for your ideas and answers !</p>
|
<p>So I am not entirely sure that I understand your problem given what you have written. The post you link to writes about multi-label AND multi-class, but that doesn't really make sense given what is written there either. So I will approach this as a multi-class problem where for each sample, you have a single label.</p>
<p>In order to penalize the classes, I implemented a weight Tensor based on the labels in the current batch. For a 3-class problem, you could eg. define the weights as the inverse frequency of the classes, such that if the proportions are [0.1, 0.7, 0.2] for class 1, 2 and 3, respectively, the weights will be [10, 1.43, 5]. Defining a weight tensor based on the current batch is then</p>
<pre><code>weight_per_class = tf.constant([10, 1.43, 5])  # shape (num_classes,)
onehot_labels = tf.one_hot(labels, depth=3) # shape (batch_size, num_classes)
weights = tf.reduce_sum(
    tf.multiply(onehot_labels, weight_per_class), axis=1)  # shape (batch_size,)
reduction = tf.losses.Reduction.MEAN # this ensures that we get a weighted mean
loss = tf.losses.softmax_cross_entropy(
onehot_labels=onehot_labels, logits=logits, weights=weights, reduction=reduction)
</code></pre>
<p>Using softmax ensures that the classification problem is not 3 independent classifications.</p>
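<p>If you prefer to derive <code>weight_per_class</code> from the label counts in your question rather than hard-coding it, one sketch (assuming those are the training-set counts) is:</p>
<pre><code>import numpy as np

counts = np.array([10868, 26, 4797, 26, 8320, 26, 5278, 9412, 4485, 16172],
                  dtype=np.float32)
# inverse-frequency weights, scaled so a perfectly balanced class gets weight 1
weight_per_class = tf.constant(counts.sum() / (len(counts) * counts))  # shape (num_classes,)
</code></pre>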
|
tensorflow|deep-learning|multilabel-classification
| 0 |
1,905,232 | 44,091,199 |
Grabbing the text between tags using BeautifulSoup
|
<p>I'm trying to grab every piece of individual text between every tag (that is in my list) in a .txt file using beautiful soup and store them into a dictionary. This code works but is terribly slow if I run big files, so is there another way I could go about making this code faster?</p>
<pre><code>from bs4 import BeautifulSoup
words_dict = dict()
# these are all of the tags in the file I'm looking for
tags_list = ['title', 'h1', 'h2', 'h3', 'b', 'strong']
def grab_file_content(file : str):
with open(file, encoding = "utf-8") as file_object:
# entire content of the file with tags
content = BeautifulSoup(file_object, 'html.parser')
# if the content has content within the <body> tags...
if content.body:
for tag in tags_list:
for tags in content.find_all(tag):
text_list = tags.get_text().strip().split(" ")
for words in text_list:
if words in words_dict:
words_dict[words] += 1
else:
words_dict[words] = 1
else:
print('no body')
</code></pre>
|
<p>The following code is functionally equivalent to your code:</p>
<pre><code>from collections import Counter
from itertools import chain
words_dict = Counter() # An empty counter further used as an accumulator
# Probably a loop
# Create the soup here, as in your original code
content = BeautifulSoup(file_object, 'html.parser')
words_dict += Counter(chain.from_iterable(tag.string.split()
for tag in content.find_all(tags_list) if tag.string))
</code></pre>
|
python|html
| 1 |
1,905,233 | 44,330,055 |
unable to read .mp4 from opencv
|
<p>The following is the code used to read the <code>.mp4</code> video from python. The code <code>cap.isOpened()</code> is returning false.</p>
<p>FYI:
I installed the relevant codecs and copied <code>opencv_ffmpeg_64.dll</code> into the python folder (/usr/local/lib/python2.7)</p>
<p>-Opencv version: 3.0</p>
<p>-Python:2.7 </p>
<pre><code>import numpy as np
import cv2
import gtk
import pygtk
import gobject
count=0
loop=0
cascPath = 'haarcascade_frontalface_default.xml'
faceCascade = cv2.CascadeClassifier(cascPath)
cap = cv2.VideoCapture('sample.mp4')
print (cap.isOpened())
</code></pre>
<p>Please suggest what best can be done?</p>
|
<p>You're probably missing FFMPEG. OpenCV needs the codec information to decode the videos, which ffmpeg provides. </p>
<p>Download FFMPEG from <a href="http://ffmpeg.zeranoe.com/builds/" rel="nofollow noreferrer">http://ffmpeg.zeranoe.com/builds/</a> by hitting the 'Download FFmpeg' button. Ensure you have selected the correct version, architecture and 'Static' build.
Unzip the downloaded file, rename it as 'ffmpeg' and move it to C:\ (for example). Now add the path C:\ffmpeg\bin to your PATH system variable.</p>
<p>The steps are detailed here with pictures: <a href="http://www.wikihow.com/Install-FFmpeg-on-Windows" rel="nofollow noreferrer">http://www.wikihow.com/Install-FFmpeg-on-Windows</a></p>
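<p>Once FFmpeg is in place (or if you want to check an existing install first), a quick sketch to verify that your OpenCV build can decode video at all:</p>
<pre><code>import cv2

# the "FFMPEG" line in the build information should say YES
print(cv2.getBuildInformation())

cap = cv2.VideoCapture('sample.mp4')
print(cap.isOpened())
</code></pre>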
|
python|image|opencv
| 1 |
1,905,234 | 43,942,710 |
Python: Socket.timeout not handled by except
|
<p>Sometimes I can effectively handle the socket.timeout, although at other times I get that socket timeout error and my script stops abruptly... Is there something I'm missing in my exception handling? How come it goes right through it?</p>
<p>Happens randomly in either one of the following pieces of code:</p>
<p>First snippet:</p>
<pre><code>for _ in range(max_retries):
try:
req = Request(url, headers={'User-Agent' :'Mozilla/5.0'})
response = urlopen(req,timeout=5)
break
except error.URLError as err:
print("URL that generated the error code: ", url)
print("Error description:",err.reason)
except error.HTTPError as err:
print("URL that generated the error code: ", url)
print("Error code:", err.code)
print("Error description:", err.reason)
except socket.timeout:
print("URL that generated the error code: ", url)
print("Error description: No response.")
except socket.error:
print("URL that generated the error code: ", url)
print("Error description: Socket error.")
if response.getheader('Content-Type').startswith('text/html'):
htmlBytes = response.read()
htmlString = htmlBytes.decode("utf-8")
self.feed(htmlString)
</code></pre>
<p>Second snippet</p>
<pre><code>for _ in range(max_retries):
try:
req = Request(i, headers={'User-Agent' :'Mozilla/5.0'})
with urlopen(req,timeout=5) as response, open(aux, 'wb') as out_file:
shutil.copyfileobj(response, out_file)
            with open(os.path.join(path, fname), 'a') as f:
f.write(("link" + str(intaux) + "-" + auxstr + str(index) + i[-4:] + " --- " + metadata[index%batch] + '\n'))
break
except error.URLError as err:
print("URL that generated the error code: ", i)
print("Error description:",err.reason)
except error.HTTPError as err:
print("URL that generated the error code: ", i)
print("Error code:", err.code)
print("Error description:", err.reason)
except socket.timeout:
print("URL that generated the error code: ", i)
print("Error description: No response.")
except socket.error:
print("URL that generated the error code: ", i)
print("Error description: Socket error.")
</code></pre>
<p>The error:</p>
<pre><code>Traceback (most recent call last):
File "/mydir/crawler.py", line 202, in <module>
spider("urls.txt", maxPages=0, debug=1, dailyRequests=9600)
File "/mydir/crawler.py", line 142, in spider
parser.getLinks(url + "?start=" + str(currbot) + "&tab=" + auxstr,auxstr)
File "/mydir/crawler.py", line 81, in getLinks
htmlBytes = response.read()
File "/usr/lib/python3.5/http/client.py", line 455, in read
return self._readall_chunked()
File "/usr/lib/python3.5/http/client.py", line 561, in _readall_chunked
value.append(self._safe_read(chunk_left))
File "/usr/lib/python3.5/http/client.py", line 607, in _safe_read
chunk = self.fp.read(min(amt, MAXAMOUNT))
File "/usr/lib/python3.5/socket.py", line 575, in readinto
return self._sock.recv_into(b)
File "/usr/lib/python3.5/ssl.py", line 929, in recv_into
return self.read(nbytes, buffer)
File "/usr/lib/python3.5/ssl.py", line 791, in read
return self._sslobj.read(len, buffer)
File "/usr/lib/python3.5/ssl.py", line 575, in read
v = self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
</code></pre>
<p>EDIT:</p>
<p>I noticed I missed a few lines of code thanks to @tdelaney I added them to the code above and I'm posting the solution I wrote if you post the solution or if you have a better approach to solve it I will mark the answer as correct</p>
<p>Solution:</p>
<pre><code>for _ in range(max_retries):
try:
req = Request(url, headers={'User-Agent' :'Mozilla/5.0'})
response = urlopen(req,timeout=5)
break
except error.URLError as err:
print("URL that generated the error code: ", url)
print("Error description:",err.reason)
except error.HTTPError as err:
print("URL that generated the error code: ", url)
print("Error code:", err.code)
print("Error description:", err.reason)
except socket.timeout:
print("URL that generated the error code: ", url)
print("Error description: No response.")
except socket.error:
print("URL that generated the error code: ", url)
print("Error description: Socket error.")
if response.getheader('Content-Type').startswith('text/html'):
for _ in range(max_retries):
try:
htmlBytes = response.read()
htmlString = htmlBytes.decode("utf-8")
self.feed(htmlString)
break
except error.URLError as err:
print("URL that generated the error code: ", url)
print("Error description:",err.reason)
except error.HTTPError as err:
print("URL that generated the error code: ", url)
print("Error code:", err.code)
print("Error description:", err.reason)
except socket.timeout:
print("URL that generated the error code: ", url)
print("Error description: No response.")
except socket.error:
print("URL that generated the error code: ", url)
print("Error description: Socket error.")
</code></pre>
|
<p>The python "Requests" library uses its own set of exceptions to handle errors pertaining to the HTTP protocol as well as the socket. It automatically maps exceptions returned from it's embedded socket() functions to custom ones defined in requests.exceptions. </p>
<p>So the exceptions raised from this...</p>
<pre><code>import requests
from requests.exceptions import Timeout

try:
    requests.get("http://stackoverflow.com",
                 headers={'User-Agent': 'Mozilla/5.0'}, timeout=5)
except Timeout:
    print("Session Timed Out!")
</code></pre>
<p>Are equivalent to the exceptions raised by this...</p>
<pre><code>import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)  # without a timeout set, socket.timeout is never raised
try:
    s.connect(("127.0.0.1", 80))
except socket.timeout:
    print("Session Timed Out")
</code></pre>
<p>Your fixed code...</p>
<pre><code>for _ in range(max_retries):
try:
req = Request(url, headers={'User-Agent' :'Mozilla/5.0'})
response = urlopen(req,timeout=5)
break
except error.URLError as err:
print("URL that generated the error code: ", url)
print("Error description:",err.reason)
except error.HTTPError as err:
print("URL that generated the error code: ", url)
print("Error code:", err.code)
print("Error description:", err.reason)
except Timeout:
print("URL that generated the error code: ", url)
print("Error description: Session timed out.")
except ConnectionError:
print("URL that generated the error code: ", url)
print("Error description: Socket error timed out.")
</code></pre>
|
python|sockets|exception|timeout|urllib
| 0 |
1,905,235 | 44,039,491 |
Overly large .exe file when using pyinstaller
|
<p>I've searched a little bit about this problem where people complained about the executable file size being 30mb ~ 100mb, but for some reason mine is 300mb. I may be wrong, but I don't think this is normal. I tried using other alternatives like cx_Freeze, but I get the same result. Here's my includes in my project : </p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
from pyplot import functions as plot
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
window = QtWidgets.QDialog()
ui = Ui_Dialog()
ui.setupUi(window)
window.show()
sys.exit(app.exec_())
</code></pre>
<p>pyplot is another python file for my project which include : </p>
<pre><code>from numpy import power, cbrt, sin, cos, arange
from matplotlib import pyplot as plt
from matplotlib import patches as pts
from scipy import integrate as intg
</code></pre>
<p>I use this command to create my executable :</p>
<pre><code>pyinstaller --onefile --windowed montecarlo.py
</code></pre>
<p>Thanks for helping</p>
|
<p>This is normal, as the packages you import have some large transitive dependencies.</p>
<p>To quantify each package's contribution, simply comment out all the imports, run pyinstaller, then add them back one by one, noting the size of pyinstaller's output after each one. You probably won't find an action item in the stats, though, since your app needs each of those imports anyway.</p>
|
python|python-3.x|executable|pyinstaller
| 3 |
1,905,236 | 32,775,894 |
Python regex to find and regex to remove from list
|
<p>I built this little RSS reader a while ago for myself and I felt inspired to update it to exclude junk from the description tags. I'm busy testing it out now to remove &lt; (all content) &gt; from the description tags and I'm having trouble getting this right.</p>
<p>So far my code looks something like this</p>
<pre><code>from re import findall
from Tkinter import *
from urllib import urlopen
disc = []
URL = 'http://feeds.sciencedaily.com/sciencedaily/matter_energy/engineering?format=xml'
O_W = urlopen(URL).read()
disc_ex = findall('<description>(.*)</description>',O_W)
for i in disc_ex:
new_disc = i.replace(findall('&lt;(.*)&gt;',i),'')
disc.extend([new_disc])
</code></pre>
<p>So prior to the new_disc line of code on my attempt to remove some of the rubbish text I would normally get my text to come through looking like this</p>
<pre><code>"Tailored DNA structures could find targeted cells and release their molecular payload selectively into the cells.&lt;img src="http://feeds.feedburner.com/~r/sciencedaily/matter_energy/engineering/~4/J1bTggGxFOY" height="1" width="1" alt=""/&gt;"
</code></pre>
<p>What I want is just the text without the rubbish, so essentially just:</p>
<pre><code>"Tailored DNA structures could find targeted cells and release their molecular payload selectively into the cells."
</code></pre>
<p>Any suggestions for me?</p>
|
<p>There are several solutions, BeautifulSoup for example. To follow your idea of dropping the text within the '<' ... '>' brackets, change the last lines of your loop. Note that <code>str.replace</code> expects a string, not the list returned by <code>findall</code>, so use <code>re.sub</code> instead (and add <code>import re</code> at the top):</p>

<pre><code>...
for i in disc_ex:
    new_disc = re.sub(r'&lt;.*?&gt;', '', i)
    disc.append(new_disc)
</code></pre>
|
python|regex
| 1 |
1,905,237 | 34,606,355 |
Find values for a given mean
|
<p>I'm pretty new to Python and trying to generate a defined number of numbers (e.g. 3 numbers) whose mean is equal to a given value.
For example, let's say I'm trying to get different lists of 3 numbers whose mean equals 10, which would give lists such as:
(5,10,15) & (0, 0, 30) & (5,5,20).</p>
<p>Since the number of elements in the list is fixed, I know I could work with the sum alone, but even then I can't find how to compute different lists with the same sum in a pythonic way.</p>
<p>Edit:
I want to generate a defined number of lists, not all the possible combinations, and now that I think about it, they should contain only integers.</p>
|
<p>Here you go.
This is only for positive integers with no duplicates</p>
<pre><code>def make_lists(mean):
'without duplicates'
for i in range(0, mean+1):
for j in range(i, mean+1):
k = mean * 3 - i - j
assert (k+i+j) / 3.0 == float(mean), ((k+i+j) / 3.0) #just testing
yield (i,j, k)
if __name__ == '__main__':
print( list(make_lists(10)))
</code></pre>
|
python
| 0 |
1,905,238 | 34,634,555 |
Executing remote python script in background over SSH
|
<p>I have a python file "run.py" like below on my remote server.</p>
<pre><code>import subprocess
subprocess.Popen(["nohup", "python", "/home/admin/Packet/application.py", "&"])
</code></pre>
<p>I want to run that file from my local computer using SSH. I'm trying like the below. However, my local terminal got stuck there. It seems it isn't being run in the background.</p>
<p><code>ssh -n -f -i /Users/aws/aws.pem admin@hello_world.com 'python /home/admin/run.py'</code></p>
<p>After running that command, my terminal got stuck. </p>
|
<p>The following is an example I'm using, you can try something like this, customizing the ssh_options.</p>
<pre><code>import subprocess
ssh_options = '-o ConnectTimeout=10 -o PasswordAuthentication=no -o PreferredAuthentications=publickey -o StrictHostKeyChecking=no'
server_name = 'remote_server.domain'
cmd = 'ssh ' + ssh_options + ' ' + server_name + ' "/usr/bin/nohup /usr/bin/python /home/admin/run.py 2>&1 &"'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
</code></pre>
<p>Later you can redirect the output to a flat file, changing :</p>
<pre><code>2>&1 &
</code></pre>
<p>for:</p>
<pre><code>>> /path/lo/log_file.txt 2>&1 &
</code></pre>
|
python|ssh
| 1 |
1,905,239 | 23,317,818 |
Summing up every alternate 4 elements in list
|
<p>I am new to Python and I have a very long list (around 100 elements). I would like to add every 4th element of this list and store the results as <code>sum1</code>, <code>sum2</code>, <code>sum3</code>, and <code>sum4</code>.
I used the following code:</p>
<pre><code>while y in range(0,104):
if y%4==0:
sum=last_indexes[y]+sum
y=y+4
</code></pre>
<p>Questions:</p>
<ol>
<li>If the lists increases the compilation time increases. Is there any function in python by which I can fasten this process?</li>
<li>Also if the list is not a multiple of 4 and still I want to store it in sum1,sum2,sum3,sum4 then what modification should be done.</li>
</ol>
<p>Thanks for your time and consideration</p>
|
<p>No, you cannot make code run (not compile) at the same speed if the input is larger.</p>
<p>The more Pythonic way to handle this problem, which should work even if the length of the list is not a multiple of <code>4</code> is to use list slicing:</p>
<pre><code>sum1 = sum(last_indices[::4])
sum2 = sum(last_indices[1::4])
sum3 = sum(last_indices[2::4])
sum4 = sum(last_indices[3::4])
</code></pre>
<p>The slice notation of <code>a:b:c</code> says "Start at index <code>a</code>, go to index <code>b</code>, in increments of <code>c</code>". When <code>a</code> is omitted, it is assumed to be <code>0</code>. when <code>b</code> is omitted, it is assumed to be the end of the list.</p>
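<p>If you want all four sums at once, the same slicing fits in a single comprehension, and it works for any list length (not just multiples of 4):</p>
<pre><code>sums = [sum(last_indices[i::4]) for i in range(4)]
sum1, sum2, sum3, sum4 = sums
</code></pre>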
|
python|for-loop
| 4 |
1,905,240 | 263,690 |
launching VS2008 build from python
|
<p>If I paste this into the command prompt by hand, it works, but if I run it from python, I get <code>The filename, directory name, or volume label syntax is incorrect</code>.</p>
<pre><code>os.system('%comspec% /k ""C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat"" x86')
os.system('devenv Immersica.sln /rebuild Debug /Out last-build.txt')
</code></pre>
|
<p>I think the backslashes are messing you up. You need to use an R string (raw)</p>
<p>r"string"</p>
<p>See <a href="https://docs.python.org/2/reference/lexical_analysis.html#string-literals" rel="nofollow noreferrer">https://docs.python.org/2/reference/lexical_analysis.html#string-literals</a> for reference</p>
|
python|windows|visual-studio-2008|build-automation
| 1 |
1,905,241 | 832,140 |
How to set up Python in a web server?
|
<p>Not exactly about programming, but I need help with this.</p>
<p>I'm running a development sever with WampServer. I want to install Python (because I prefer to use Python over PHP), but it seems there isn't an obvious choice. I've read about mod_python and WSGI, and about how the latter is better.</p>
<p>However, from what I gathered (I may be wrong) you have to do more low-level stuff with WSGI than with PHP. So I researched about Django, but it seems too complex for what I want.</p>
<p>So, what recommendations would you give to a newbie in this area?</p>
<p>Again, sorry if this isn't about programming, but it's related, and this seems like a nice place to ask.</p>
|
<p>Django is not a web server, but a web application framework.</p>
<p>If you want a bare-bones Python webserver capable of some dynamic and some static content, have a look at <a href="http://www.cherrypy.org/" rel="noreferrer">CherryPy</a>.</p>
|
python|windows|django
| 5 |
1,905,242 | 11,589,933 |
How to find if a list contains multiple of the same element in python?
|
<p>so I just started learning python and I get mini-lessons from my friend every week. This week was to make a simple slot machine game. There are 6 items in the slot machine, and when 3 or more of the same item show up, the user wins. I've tried the below code:</p>
<pre><code>for i in slotScreen:
if slotScreen.count(i) == 3:
print('You got 3 of the same! You win!')
</code></pre>
<p>The code works when the first item in the list is part of the 3 of a kind, but the code doesn't work if none of the three elements are first on the list, like below:</p>
<pre><code>slotScreen = ['lemon', 'cherry', 'lemon', 'lemon', 'pirate', 'bar'] # works
slotScreen = ['cherry', 'lemon', 'lemon', 'lemon', 'pirate', 'bar'] # not work
</code></pre>
<p>Any idea why this happens?</p>
<p>EDIT: More code. I get the You Lose message when I should be getting the You win 3x message.</p>
<pre><code> for i in slotScreen:
if slotScreen.count(i) == 6:
print('You win 10x your bet!!!')
x = x + int(bet) * 10
break
elif slotScreen.count(i) == 5:
print('You win 5x your bet!')
x = x + int(bet) * 5
break
elif slotScreen.count(i) == 4:
print('You win 4x your bet!')
x = x + int(bet) * 4
break
elif slotScreen.count(i) == 3:
print('You win 3x your bet!')
x = x + int(bet) * 3
break
elif slotScreen.count(i) <= 2:
print('Sorry you lose')
break
</code></pre>
|
<p>Your program always does <code>break</code> in the first <code>for</code> iteration, so it only evaluates the first element of the list.</p>
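<p>One way to restructure it (a sketch reusing the names from your snippet) is to compute the highest symbol count for the whole screen first, so no <code>break</code> is needed inside a loop:</p>
<pre><code># highest number of identical symbols on the screen
best = max(slotScreen.count(item) for item in set(slotScreen))

if best == 6:
    print('You win 10x your bet!!!')
    x = x + int(bet) * 10
elif best >= 3:
    print('You win %dx your bet!' % best)
    x = x + int(bet) * best
else:
    print('Sorry you lose')
</code></pre>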
|
python|python-3.x
| 4 |
1,905,243 | 46,999,337 |
Python regex match last word of a string
|
<p>I have the following string:</p>
<pre><code>"crypto map OUTSIDEMAP 540 match address 3P-DC-CRYPTO"
</code></pre>
<p>And, I am trying to match with a regex only <code>3P-DC-CRYPTO</code></p>
<p>So far, I have managed to write the below regex :</p>
<pre><code>crypto_acl = re.findall("address [-\w+]*",output)
</code></pre>
<p>However, it matches <code>address 3P-DC-CRYPTO</code></p>
<p>Any suggestion?</p>
|
<p>No regex needed, actually:</p>
<pre><code>string = "crypto map OUTSIDEMAP 540 match address 3P-DC-CRYPTO"
# check for address as well
words = string.split()
if words[-2] == 'address':
last_word = words[-1]
print(last_word)
</code></pre>
<p>This checks for <code>address</code> and then captures the last word.</p>
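<p>If you do want to keep a regex, a capturing group keeps the word "address" out of the result:</p>
<pre><code>import re

output = "crypto map OUTSIDEMAP 540 match address 3P-DC-CRYPTO"
crypto_acl = re.findall(r"address\s+([-\w]+)", output)
print(crypto_acl)   # ['3P-DC-CRYPTO']
</code></pre>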
|
python|regex
| 2 |
1,905,244 | 37,761,416 |
Easy way to create GUI in python
|
<p>I have tried using Tkinter, but it is not so easy to create an attractive GUI application with it. Is there an application or online website where I can use a drag-and-drop approach to create a GUI interface in Python, or some other easy way? Please also reply with some intuitive references or tutorials if possible.</p>
|
<p>PyQt seems like a good option here. It's a cross-platform GUI framework that comes with Qt Designer, an application that lets you build GUIs by dragging and dropping widgets on a canvas.</p>
<p>Here's a basic <a href="https://pythonprogramming.net/basic-gui-pyqt-tutorial/" rel="nofollow">tutorial</a> you can follow. There's also a series of videos included with information on how to download and install PyQt.</p>
|
python|user-interface|tkinter|drag-and-drop
| 0 |
1,905,245 | 67,615,886 |
Pip successfully installed module not found: ImportError: No module named xlwt
|
<p>My OS: win 10 ,</p>
<p>installed:</p>
<ul>
<li>python 2.7 ( command is <code>python</code>)</li>
<li>python 3.9.5 ( command is <code>python3</code>)</li>
<li>pip , pip3 ( both for python3, seems )</li>
</ul>
<p>pip command:</p>
<pre><code>c:\>pip3 config list -v
For variant 'global', will try loading 'C:\ProgramData\pip\pip.ini'
For variant 'user', will try loading 'C:\Users\luelue\pip\pip.ini'
For variant 'user', will try loading 'C:\Users\luelue\AppData\Roaming\pip\pip.ini'
For variant 'site', will try loading 'c:\users\luelue\appdata\local\programs\python\python39\pip.ini'
c:\>pip config list -v
For variant 'global', will try loading 'C:\ProgramData\pip\pip.ini'
For variant 'user', will try loading 'C:\Users\luelue\pip\pip.ini'
For variant 'user', will try loading 'C:\Users\luelue\AppData\Roaming\pip\pip.ini'
For variant 'site', will try loading 'c:\users\luelue\appdata\local\programs\python\python39\pip.ini'
</code></pre>
<p>I installed xlwt via <code>pip</code>, and I can see it's installed :</p>
<pre><code>c:\>pip install xlwt
Requirement already satisfied: xlwt in c:\users\luelue\appdata\local\programs\python\python39\lib\site-packages\xlwt-1.3.0-py3.9.egg (1.3.0)
</code></pre>
<p><a href="https://i.stack.imgur.com/EDnJF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EDnJF.png" alt="enter image description here" /></a></p>
<p>However, when I try to import it, got error: <code>No module named xlwt</code></p>
<pre><code>c:\>python
Python 2.7.15 (v2.7.15:ca079a3ea3, Apr 30 2018, 16:30:26) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import xlwt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named xlwt
>>> exit()
</code></pre>
<p>Also, I used <code>pip3 install xlwt</code> successfully, but run <code>python3 ... import xlwt</code> failed. full log:</p>
<pre><code>C:\files\dong_tai_pai_fang_ji_suan\python_code>pip3 install xlwt
Requirement already satisfied: xlwt in c:\users\luelue\appdata\local\programs\python\python39\lib\site-packages\xlwt-1.3.0-py3.9.egg (1.3.0)
C:\files\dong_tai_pai_fang_ji_suan\python_code>python3
Python 3.9.5 (tags/v3.9.5:0a7dcbd, May 3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import xlwt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'xlwt'
>>>
</code></pre>
<p>how to resolve this?</p>
<p>should I configure the windows PATH or something for python/pip ?</p>
<p>thanks</p>
<p>edit:</p>
<p>maybe I installed multiple python3:
<a href="https://i.stack.imgur.com/FRCAd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FRCAd.png" alt="enter image description here" /></a></p>
|
<p>The interpreter you opened in the console is Python 2.7, while pip installed xlwt into the Python 3.9 site-packages.
Run your code with the matching Python 3.9 interpreter (and make sure the pip you call belongs to it).</p>
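<p>One way to make sure pip installs into the interpreter you will actually run is to invoke pip through that interpreter and then verify the import with it, for example:</p>
<pre><code>C:\>python3 -m pip install xlwt

C:\>python3 -c "import xlwt; print(xlwt.__file__)"
</code></pre>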
|
python|pip
| 2 |
1,905,246 | 29,938,028 |
How to check if a neighbourhood of at least five elements of an array satisfies a criterion?
|
<p>I have a numpy array, and I check for local minima which are lower than a threshold (mean value - 3 * standard deviation). Out of those minima I want to select those which are in the neighbourhood of at least five points which are all below the threshold value. If a certain neighbourhood contains multiple minima, I want to know which minimum has the lowest value. How to do this and make it run relatively fast?</p>
<p>Code similar to the one suggested by B.M. doesn't quite do what I need.</p>
<pre><code>from numpy import *
a=random.rand(10)
n = ones(7)
threshold=0.5
u=convolve(a<threshold,n,'same')
</code></pre>
<p>This is what it produced:</p>
<pre><code>>>> x
array([ 0.6034448 , 0.16098872, 0.39563129, 0.33611677, 0.90138981,
        0.26088853, 0.45720198, 0.100786  , 0.47705187, 0.15514734])
>>> u
array([ 3., 3., 4., 5., 6., 6., 6., 5., 5., 4.])
</code></pre>
<p>Which suggests that element at index 6 is part of a neighbourhood of 6 points below the threshold value. I guess it also counted element with index 3, which is not desirable behaviour, as there is value > 0.9 at position 4. And element at position 9 claims to be in a group of 4 elements, while I would say it is a group of 5.</p>
<p>This is my current solution to the problem:</p>
<pre><code> layer = Xa
while layer > overlap:
if d[layer] > d[layer+1] and d[layer] > d[layer-1]:
if layer > 300:
threshold = threshold_free
else:
threshold = threshold_PBL
if d[layer] <= threshold:
upper_limit = layer
lower_limit = layer
k = 1
kp = 0
while kp < k and layer + kp < Xa:
kp = k
if d[layer+k] <= threshold:
upper_limit = layer + k
k += k
k = 1
kp = 0
while kp < k and layer - kp > overlap:
kp = k
if d[layer-k] <= threshold:
lower_limit = layer - k
k += k
transition_interval = upper_limit - lower_limit
if transition_interval >= 5:
print layer, upper_limit, lower_limit, upper_limit - lower_limit
layer = lower_limit
if valid_time in layers:
layers[valid_time].append(layer)
else:
layers[valid_time] = [layer]
layer -= 1
</code></pre>
|
<p>Some tricks to do that:</p>
<pre><code>from numpy import *
from matplotlib.pyplot import *
from scipy.signal import convolve2d
from scipy.ndimage.filters import minimum_filter as mini
a=random.rand(100,100)
neighbours = ones((3,3))
threshold=0.2
u=convolve2d(a<threshold,neighbours,'same')
mins=((u>=6)*(a<threshold))
minis=mini(choose(mins,(1,a)),size=(3,3))==a
subplot(121);imshow(mins,cmap=cm.gray_r,interpolation='none')
subplot(122);imshow(minis,cmap=cm.gray_r,interpolation='none')
</code></pre>
<p>This script produces:
<img src="https://i.stack.imgur.com/RDPDY.png" alt="minimums"></p>
<p>On the left figure those who have 5 neighbours, on the right only the min is selected. If you want the indices and values, use <code>inds=mask_indices(100,lambda x,k: minis)</code> and <code>a[inds]</code> . </p>
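<p>For the 1-D array from the question, the same idea can be written with <code>convolve</code> and <code>minimum_filter1d</code>. A sketch only; the window size and threshold are examples, and a uniform random array may select nothing:</p>
<pre><code>import numpy as np
from scipy.ndimage.filters import minimum_filter1d

a = np.random.rand(100)
threshold = a.mean() - 3 * a.std()

below = a < threshold
# number of below-threshold points in the 5-wide window centred on each position
window = np.convolve(below, np.ones(5), 'same')
candidates = below & (window >= 5)

# among the candidates, keep only the points that are the minimum of their window
minima = candidates & (minimum_filter1d(a, size=5) == a)
print(np.nonzero(minima)[0], a[minima])
</code></pre>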
|
python|numpy|signal-processing
| 2 |
1,905,247 | 27,582,998 |
Python: Saved pickled Counter has data, but cannot load the file with a function
|
<p>I'm trying to build a foreign language frequency dictionary/vocab learner. </p>
<p>I want the program to:</p>
<ol>
<li>Process a book/text-file, breaking up the text into individual unique words and ordering them by frequency (I do this using <code>Counter()</code> )</li>
<li>Save the <code>Counter()</code> to a pickle file so that I don't have to process the book every time I run the program</li>
<li>Access the pickle file and pull out Nth most frequent words (easily done using <code>most_common()</code> function)</li>
</ol>
<p>Here is the problem, once I process a book and save it to a pickle file, I cannot access it again. The function that does so, loads an empty dictionary even though, when I check the pickle file, I can see that it does have data. </p>
<p>Further more, if I load the pickle file manually (using <code>pickle.load()</code>) and pull the Nth most common word manually (using <code>most_common()</code> manually instead of a custom function which loads the pickle and pulls the Nth most common word) it will work perfectly.</p>
<p>I suspect there is something wrong with the custom function that loads pickle files, but I can't figure out what it is. </p>
<p>Here is the code:</p>
<pre><code>import string
import collections
import pickle
freq_dict = collections.Counter()
dfn_dict = dict()
def save_dict(name, filename):
pickle.dump(name, open('{0}.p'.format(filename), 'wb'))
#Might be a problem with this
def load_dict(name, filename):
name = pickle.load(open('{0}.p'.format(filename), 'rb'))
def cleanedup(fh):
for line in fh:
word = ''
for character in line:
if character in string.ascii_letters:
word += character
else:
yield word
word = ''
#Opens a foreign language textfile and adds all unique
#words in it, to a Counter, ordered by frequency
def process_book(textname):
with open (textname) as doc:
freq_dict.update(cleanedup(doc))
save_dict(freq_dict, 'svd_f_dict')
#Shows the Nth most frequent word in the frequency dict
def show_Nth_word(N):
load_dict(freq_dict, 'svd_f_dict')
return freq_dict.most_common()[N]
#Shows the first N most frequent words in the freq. dictionary
def show_N_freq_words(N):
load_dict(freq_dict, 'svd_f_dict')
return freq_dict.most_common(N)
#Presents a word to the user, allows user to define it
#adds the word and its definition to another dictionary
#which is used to store only the word and its definition
def define_word(word):
load_dict(freq_dict, 'svd_f_dict')
load_dict(dfn_dict, 'svd_d_dict')
if word in freq_dict:
definition = (input('Please define ' + str(word) + ':'))
dfn_dict[word] = definition
else:
return print('Word not in dictionary!')
save_dict(dfn_dict, 'svd_d_dict')
</code></pre>
<p>And here is an attempt to pull Nth common words out, using both methods (manual and function):</p>
<pre><code>from dictionary import *
import pickle
#Manual, works
freq_dict = pickle.load(open('svd_f_dict.p', 'rb'))
print(freq_dict.most_common()[2])
#Using a function defined in the other file, doesn't work
word = show_Nth_word(2)
</code></pre>
<p>Thanks for your help!</p>
|
<p>Your load_dict function stores the result of unpickling into a local variable 'name'. This will not modify the object that you passed as a parameter to the function.</p>
<p>Instead, you need to return the result of calling pickle.load() from your load_dict() function:</p>
<pre><code>def load_dict(filename):
return pickle.load(open('{0}.p'.format(filename), 'rb'))
</code></pre>
<p>And then assign it to your variable:</p>
<pre><code>freq_dict = load_dict('svd_f_dict')
</code></pre>
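<p>Each helper that reads the pickle then assigns the return value instead of passing the dict in, for example:</p>
<pre><code>def show_Nth_word(N):
    freq_dict = load_dict('svd_f_dict')
    return freq_dict.most_common()[N]
</code></pre>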
|
python|dictionary|pickle
| 3 |
1,905,248 | 72,252,937 |
NLP using XML dataset
|
<p>I am trying to do NLP on the dataset consisting of the following row</p>
<pre><code>00001 B 74457
00002 C 12804123 16026213 14627885
00004 A 15329425 9058342 11279767
</code></pre>
<p>where the 1st element in the row is an identifier, the 2nd is a label (it can only take the three values A, B, C), and the remaining numbers, for example 12804123, are the IDs of XML files which contain data such as text, location, etc. Based on this I need to extract the data from the XML files and use it to build a model. So first of all I want to extract some of the data from the XML file and make a data frame of structured data. An example of the XML file is below.
When I run the command pd.read_xml(xml) it gives</p>
<pre><code> medlinecitation pubmeddata
0 NaN NaN
</code></pre>
<p>Is there any example from Kaggle or any other source that I can follow to do the analysis?</p>
<pre><code>74457.xml = '''
<pubmedarticleset>
<pubmedarticle>
<medlinecitation owner="NLM" status="MEDLINE">
<pmid version="1"> 74457 </pmid>
<datecreated>
<year> 1978 </year>
<month> 03 </month>
<day> 21 </day>
</datecreated>
<datecompleted>
<year> 1978 </year>
<month> 03 </month>
<day> 21 </day>
</datecompleted>
<daterevised>
<year> 2007 </year>
<month> 11 </month>
<day> 15 </day>
</daterevised>
<article pubmodel="Print">
<journal>
<issn issntype="Print"> 0140-6736 </issn>
<journalissue citedmedium="Print">
<volume> 1 </volume>
<issue> 7984 </issue>
<pubdate>
<year> 1976 </year>
<month> Sep </month>
<day> 4 </day>
</pubdate>
</journalissue>
<title> Lancet </title>
<isoabbreviation> Lancet </isoabbreviation>
</journal>
<articletitle>
Prophylactic treatment of alcoholism by lithium carbonate. A controlled study.
</articletitle>
<pagination>
<medlinepgn> 481-2 </medlinepgn>
</pagination>
<abstract>
<abstracttext>
Lithium therapy has been shown to have a therapeutic influence in reducing the drinking and incapacity by alcohol in depressive alcoholics in a prospective double-blind placebo-controlled trial conducted over one year, but it had no significant effect on non-depressed patients. Patients in the trial treated by placebo had significantly greater alcoholic morbidity if they were depressive than if they were non-depressive.
</abstracttext>
</abstract>
<authorlist completeyn="Y">
<author validyn="Y">
<lastname> Merry </lastname>
<forename> J </forename>
<initials> J </initials>
</author>
<author validyn="Y">
<lastname> Reynolds </lastname>
<forename> C M </forename>
<initials> CM </initials>
</author>
<author validyn="Y">
<lastname> Bailey </lastname>
<forename> J </forename>
<initials> J </initials>
</author>
<author validyn="Y">
<lastname> Coppen </lastname>
<forename> A </forename>
<initials> A </initials>
</author>
</authorlist>
<language> eng </language>
<publicationtypelist>
<publicationtype> Clinical Trial </publicationtype>
<publicationtype> Comparative Study </publicationtype>
<publicationtype> Journal Article </publicationtype>
<publicationtype> Randomized Controlled Trial </publicationtype>
</publicationtypelist>
</article>
<medlinejournalinfo>
<country> ENGLAND </country>
<medlineta> Lancet </medlineta>
<nlmuniqueid> 2985213R </nlmuniqueid>
<issnlinking> 0140-6736 </issnlinking>
</medlinejournalinfo>
<chemicallist>
<chemical>
<registrynumber> 0 </registrynumber>
<nameofsubstance> Placebos </nameofsubstance>
</chemical>
<chemical>
<registrynumber> 7439-93-2 </registrynumber>
<nameofsubstance> Lithium </nameofsubstance>
</chemical>
</chemicallist>
<citationsubset> AIM </citationsubset>
<citationsubset> IM </citationsubset>
<meshheadinglist>
<meshheading>
<descriptorname majortopicyn="N"> Adult </descriptorname>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Alcohol Drinking </descriptorname>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Alcoholism </descriptorname>
<qualifiername majortopicyn="Y"> drug therapy </qualifiername>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Clinical Trials as Topic </descriptorname>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Depression </descriptorname>
<qualifiername majortopicyn="N"> chemically induced </qualifiername>
<qualifiername majortopicyn="Y"> prevention & control </qualifiername>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Double-Blind Method </descriptorname>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Drug Evaluation </descriptorname>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Female </descriptorname>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Humans </descriptorname>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Lithium </descriptorname>
<qualifiername majortopicyn="Y"> therapeutic use </qualifiername>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Male </descriptorname>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Middle Aged </descriptorname>
</meshheading>
<meshheading>
<descriptorname majortopicyn="N"> Placebos </descriptorname>
</meshheading>
</meshheadinglist>
</medlinecitation>
<pubmeddata>
<history>
<pubmedpubdate pubstatus="pubmed">
<year> 1976 </year>
<month> 9 </month>
<day> 4 </day>
</pubmedpubdate>
<pubmedpubdate pubstatus="medline">
<year> 1976 </year>
<month> 9 </month>
<day> 4 </day>
<hour> 0 </hour>
<minute> 1 </minute>
</pubmedpubdate>
<pubmedpubdate pubstatus="entrez">
<year> 1976 </year>
<month> 9 </month>
<day> 4 </day>
<hour> 0 </hour>
<minute> 0 </minute>
</pubmedpubdate>
</history>
<publicationstatus> ppublish </publicationstatus>
<articleidlist>
<articleid idtype="pubmed"> 74457 </articleid>
</articleidlist>
</pubmeddata>
</pubmedarticle>
</pubmedarticleset>'''
</code></pre>
<p>Please help me understand what is happening. And how can I make it a data frame?</p>
|
<p>Here is one way to do it:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
try:
medlinecitation = pd.read_xml("74457.xml", xpath=".//medlinecitation").dropna(
axis=1
)
except ValueError:
medlinecitation = pd.DataFrame()
try:
pubmedpubdate = pd.read_xml("74457.xml", xpath=".//pubmedpubdate")
except ValueError:
pubmedpubdate = pd.DataFrame()
df = pd.merge(
left=medlinecitation,
right=pubmedpubdate,
how="outer",
left_index=True,
right_index=True,
).fillna(method="ffill")
</code></pre>
<pre class="lang-py prettyprint-override"><code>print(df)
# Output
owner status pmid citationsubset pubstatus year month day hour \
0 NLM MEDLINE 74457.0 IM pubmed 1976 9 4 NaN
1 NLM MEDLINE 74457.0 IM medline 1976 9 4 0.0
2 NLM MEDLINE 74457.0 IM entrez 1976 9 4 0.0
minute
0 NaN
1 1.0
2 0.0
</code></pre>
|
python|pandas|xml|jupyter-notebook|nlp
| 0 |
1,905,249 | 72,372,754 |
How to convert JSON file to CSV
|
<p>I have the following JSON file:</p>
<pre><code>[
{
"Names": {
"0": "Nat",
"1": "Harry",
"2": "Joe"
},
"Marks": {
"0": 78.22,
"1": 32.54,
"2": 87.23
}
}
]
</code></pre>
<p>I have written the following code for conversion:</p>
<pre class="lang-py prettyprint-override"><code>import csv, json
def conversion(Jsonfile,Csvfile):
readfile=open(Jsonfile,"r")
print(readfile)
jsondata=json.load(readfile)
print(jsondata)
readfile.close()
data_file=open(Csvfile,'w')
csv_writer=csv.writer(data_file)
count=0
for data in jsondata:
if count==0:
header=data.keys()
print(header)
csv_writer.writerow(header)
count=count+1
csv_writer.writerow(data.values())
print(data.values())
data_file.close()
Jsonfile="Series.json"
Csvfile="convertedfile.csv"
conversion(Jsonfile,Csvfile)
</code></pre>
<p>I am getting the following output in CSV</p>
<pre><code>Names,Marks
"{'0': 'Nat', '1': 'Harry', '2': 'Joe'}","{'0': 78.22, '1': 32.54, '2': 87.23}"
</code></pre>
<p>My question is how to correct the code to get the following output (that is, each name with its marks on a separate line):</p>
<pre><code>Names,Marks
0,Nat,78.22
1,Harry,32.54
2,Joe,87.23
</code></pre>
|
<p><code>pandas</code> has the utility for both, reading json and writing csv.</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
j = '[{"Names":{"0":"Nat","1":"Harry","2":"Joe"},"Marks":{"0":78.22,"1":32.54,"2":87.23}}]'
df = pd.read_json(j[1:-1], orient='records') # 1:-1 because we need to remove the square brackets
df.to_csv("output.csv")
</code></pre>
<p>Output:</p>
<pre><code>,Names,Marks
0,Nat,78.22
1,Harry,32.54
2,Joe,87.23
</code></pre>
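<p>If you prefer not to slice the JSON string, you can also let the <code>json</code> module unwrap the outer list first (a sketch doing the same thing):</p>
<pre class="lang-python prettyprint-override"><code>import json
import pandas as pd

j = '[{"Names":{"0":"Nat","1":"Harry","2":"Joe"},"Marks":{"0":78.22,"1":32.54,"2":87.23}}]'
records = json.loads(j)[0]        # the single object inside the list
df = pd.DataFrame(records)
df.to_csv("output.csv")
</code></pre>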
|
python|json|csv
| 1 |
1,905,250 | 72,221,515 |
Flask_SQLAlchemy is claiming my value is not boolean?
|
<p>I ran into the following error:</p>
<pre><code> File "/home/sandbox/.local/lib/python3.6/site-packages/sqlalchemy/sql/sqltypes.py", line 1973, in _strict_as_bool
raise TypeError("Not a boolean value: %r" % (value,))
sqlalchemy.exc.StatementError: (builtins.TypeError) Not a boolean value: 'True'
[SQL: INSERT INTO projects (status) VALUES (?)]
[parameters: [{'status': 'True'}]]
127.0.0.1 - - [12/May/2022 21:53:22] "POST / HTTP/1.1" 500 -
</code></pre>
<p>As boolean input I tried everything ranging from 0|1, FALSE|TRUE, False|True on my main route. I have also tried to put the boolean values in between quotation marks.
What am I doing wrong?</p>
<pre><code>import os
from flask import Flask
from flask import render_template
from flask import request
from flask import redirect
from flask_sqlalchemy import SQLAlchemy
database_file = "sqlite:///DATA/DATA.db"
app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = database_file
db = SQLAlchemy(app)
class Projects(db.Model):
__tablename__="projects"
status = db.Column(db.Boolean, default=False, nullable=False, primary_key=True)
def __repr__(self):
return f"projects('{self.status}')"
db.create_all()
@app.route("/", methods=["GET", "POST"])
def home():
if request.form:
status = Projects(status=request.form.get("status"))
db.session.add(status)
db.session.commit()
return render_template("home.html")
</code></pre>
<p>My base route being as follows</p>
<pre><code>{% extends "layout.html" %}
{% block body %}
<h1> Add new project </h1>
<form method="POST" action="/">
<select name="status" placeholder="Project Status">
<option value=False> Not Active </option>
<option value=True> Active </option>
</select>
<input type="submit" value="Register Data">
</form>
{% endblock %}
</code></pre>
|
<p>The problem you have is that the form submission is returning the selection value as a string - literally <code>"True"</code> or <code>"False"</code> - while the SQL driver expects a boolean type.</p>
<p>There is a Python standard library function <a href="https://docs.python.org/3/distutils/apiref.html#distutils.util.strtobool" rel="nofollow noreferrer">distutils.util.strtobool</a> which can safely convert a representation of a true or false value into a boolean type, raising a ValueError if someone puts something naughty into your API (this is much preferred to using <code>eval()</code> which <a href="https://stackoverflow.com/questions/661084/security-of-pythons-eval-on-untrusted-strings">shouldn't be used on untrusted input</a>).</p>
<p>I would update your route to something like the following:</p>
<pre><code># At the top
from distutils.util import strtobool
@app.route("/", methods=["GET", "POST"])
def home():
if request.form:
try:
form_status = strtobool(request.form.get("status").lower())
status = Projects(status=form_status)
db.session.add(status)
db.session.commit()
except ValueError:
# Handle the error - e.g. flash a message to the user
flash("Invalid input")
return render_template("home.html")
</code></pre>
<p>One thing to note with <code>strtobool</code> is that <code>distutils</code> is now deprecated as of Python 3.10, and will be removed in 3.12. <a href="https://stackoverflow.com/a/71133268/1960180">This answer</a> shows the implementation of it as a function, which is quite trivial, so it's worth including in your own utility functions for any code expected to last beyond Python 3.12.</p>
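<p>If you would rather not depend on <code>distutils</code> at all, a small inline version (returning real booleans instead of the 0/1 integers the original returns) could look like this:</p>
<pre><code>def strtobool(value):
    """Minimal replacement for distutils.util.strtobool."""
    value = value.lower()
    if value in ("y", "yes", "t", "true", "on", "1"):
        return True
    if value in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError("invalid truth value %r" % (value,))
</code></pre>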
|
python|flask|sqlalchemy|flask-sqlalchemy
| 2 |
1,905,251 | 43,073,065 |
How long it take to train English/Russian/... model from scratch with SyntaxNet/DragNN?
|
<p>I want to retrain existing models for SyntaxNet/DragNN and I am looking for some real numbers on how long it takes to train models for any language (it will give me a good baseline for my languages).
What hardware have you used during this process?</p>
<p>Thank you in advance!</p>
|
<p>It took about 24 hours on my Mac Pro, CPU only (10000 iterations).
<a href="https://github.com/dsindex/syntaxnet" rel="nofollow noreferrer">https://github.com/dsindex/syntaxnet</a></p>
|
tensorflow|syntaxnet
| 3 |
1,905,252 | 43,303,664 |
No module named <app_name>
|
<p>So, I made a chat app using django-channels in a separate project and now I am copying it into the main project.</p>
<p>This is what is happening. when I run <code>./manage.py runserver</code></p>
<pre><code>Unhandled exception in thread started by <function wrapper at 0x7f695c1cfde8>
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/channels/management/commands/runserver.py", line 40, in inner_run
self.channel_layer = channel_layers[DEFAULT_CHANNEL_LAYER]
File "/usr/local/lib/python2.7/dist-packages/channels/asgi.py", line 53, in __getitem__
self.backends[key] = self.make_backend(key)
File "/usr/local/lib/python2.7/dist-packages/channels/asgi.py", line 48, in make_backend
routing=routing,
File "/usr/local/lib/python2.7/dist-packages/channels/asgi.py", line 80, in __init__
self.router = Router(self.routing)
File "/usr/local/lib/python2.7/dist-packages/channels/routing.py", line 25, in __init__
self.root = Include(routing)
File "/usr/local/lib/python2.7/dist-packages/channels/routing.py", line 201, in __init__
self.routing = Router.resolve_routing(routing)
File "/usr/local/lib/python2.7/dist-packages/channels/routing.py", line 75, in resolve_routing
raise ImproperlyConfigured("Cannot import channel routing %r: %s" % (routing, e))
django.core.exceptions.ImproperlyConfigured: Cannot import channel routing 'website.routing.channel_routing': No module named myWebsite
</code></pre>
<p><strong>I know this is some error with Django not recognising my module, but it recognises it in other places, so why not here?</strong></p>
<p>The culprit code, the reason why I am stuck here for 2 days :</p>
<p><strong>website/website/routing.py</strong></p>
<pre><code>from channels import include
from myWebsite.routing import websocket_routing, custom_routing
channel_routing = [
# Include sub-routing from an app.
include(websocket_routing, path=r"^/chat/stream"),
include(custom_routing),
]
</code></pre>
<p><strong>website/myWebsite/routing.py</strong></p>
<pre><code>from channels import route
from .consumers import ws_connect, ws_receive, ws_disconnect, chat_join, chat_leave, chat_send
websocket_routing = [
route("websocket.connect", ws_connect),
route("websocket.receive", ws_receive),
route("websocket.disconnect", ws_disconnect),
]
custom_routing = [
# Handling different chat commands (websocket.receive is decoded and put
# onto this channel) - routed on the "command" attribute of the decoded
# message.
route("chat.receive", chat_join, command="^join$"),
route("chat.receive", chat_leave, command="^leave$"),
route("chat.receive", chat_send, command="^send$"),
]
</code></pre>
<p><strong>Later I added this in website/myWebsite/__init__.py:</strong></p>
<pre><code>default_app_config='myWebsite.apps.MywebsiteConfig'
</code></pre>
<p><strong>website/website/settings.py</strong></p>
<pre><code>INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'myWebsite',
'django_countries',
'social_django',
'channels',
]
ROOT_URLCONF = 'website.urls'
</code></pre>
<h3>The directory structure :</h3>
<pre><code>website
├── __init__.py
├── manage.py
├── myWebsite
│ ├── admin.py
│ ├── apps.py
│ ├── backends.py
│ ├── constants.py
│ ├── consumers.py
│ ├── exceptions.py
│ ├── forms.py
│ ├── __init__.py
│ ├── media
│ ├── migrations
│ │ ├── 0001_initial.py
~~ SNIP ~~
│ ├── models.py
│ ├── routing.py
│ ├── settings.py
│ ├── signals.py
│ ├── static
~~ SNIP ~~
│ ├── templates
~~ SNIP ~~
│ ├── tests.py
│ ├── urls.py
│ ├── utils.py
│ └── views.py
└── website
├── config.py
├── __init__.py
├── routing.py
├── settings.py
├── urls.py
├── views.py
└── wsgi.py
</code></pre>
<h3>So as you can see well above I do have the <code>__init__.py</code> in <em>website/myWebsite</em> directory.</h3>
<p>Any help would be greatly appreciated. It has stalled my work for the last 2 days as I have tried it all.</p>
<h3>Thanks</h3>
<h2>Update As per comments</h2>
<p><strong>New website/website/routing.py</strong></p>
<pre><code>from channels import include
import sys
from myWebsite.routing import websocket_routing, custom_routing
print(sys.path)
channel_routing = [
include(websocket_routing, path=r"^/chat/stream"),
include(custom_routing),
]
</code></pre>
<p><strong>website/website/settings.py</strong></p>
<pre><code>INSTALLED_APPS = [
~~ SNIP ~~
'channels',
'myWebsite',
'django_countries',
'social_django',
]
</code></pre>
Since neither of these helped, I reverted to the original code.
|
<p>There was an error in the utils.py file </p>
<p>I found this error when I used the Django shell on my friend's recommendation.</p>
<p>It popped up when I executed this <code>from myWebsite.routing import websocket_routing, custom_routing</code></p>
<p>and there was no error while just executing <code>import myWebsite</code></p>
<p>Here are the screenshot :</p>
<p><a href="https://ibb.co/m8zQ85" rel="nofollow noreferrer">https://ibb.co/m8zQ85</a></p>
<p><a href="https://ibb.co/eAY7MQ" rel="nofollow noreferrer">https://ibb.co/eAY7MQ</a></p>
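<p>For anyone hitting the same message, the check can be reproduced from the Django shell; the traceback of the failing import points at the module that is actually broken (utils.py in my case):</p>
<pre><code>$ python manage.py shell
>>> import myWebsite                                  # works, so the package is found
>>> from myWebsite.routing import websocket_routing   # raises and shows the real culprit
</code></pre>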
|
python|django|django-channels
| 0 |
1,905,253 | 48,740,543 |
Python Printing Quotation Marks on Two Lines Instead of One
|
<p>I am reading from an external text file, named 'greeting.txt' where the contents of the text file are simply:</p>
<pre><code>HELLO
</code></pre>
<p>However, when I attempt to print the contents of the text file enclosed in quotes the terminal prints out:</p>
<pre><code>"HELLO
"
</code></pre>
<p>I am using the following code:</p>
<pre><code>for line in open('greeting.txt', "r"): print ('"%s"' % line)
</code></pre>
<p>I want the quoted string to be printed on a single line.
I have never encountered this problem before despite using Python for similar purposes; any help would be appreciated.</p>
|
<p>There is an end-of-line character in your text file after HELLO. That newline is also getting enclosed in the double quotes, causing the second quote to be printed on the second line. You should strip it using rstrip():</p>
<pre><code>for line in open('greeting.txt', "r"): print ('"%s"' % line.rstrip())
</code></pre>
|
python
| 1 |
1,905,254 | 48,529,032 |
Why the web scraping python program is giving an error?
|
<p>The following is a web scraping program that I have written to download the ID card photos of students in my college given their URL. The URL of the images is the same for all students; we only have to replace the ID number in the URL, and the IDs are provided in a text file "ID.txt".</p>
<pre><code>from selenium import webdriver
driver=webdriver.Chrome(executable_path=r'C:\Users\user1712\Downloads\Chrome Downloads\chromedriver_win32\chromedriver.exe')
driver.get('https://swd.bits-goa.ac.in/student_pagetemp1?PHPSESSID=ecm2utnjvml8kpkpp8dh2dvnq0')
# ID.txt contains id card numbers of students. Each ID in a separate row
filename = 'ID.txt'
with open(filename) as f:
data = f.readlines()
import csv
import urllib.request
reader = csv.reader(data)
for row in reader:
# url of each student is almost same. Only thing is that we have to change the ID in the url to get the image address of a student
url="https://swd.bits-goa.ac.in/css/studentImg/"+str(row)+".jpg"
fullname=str(row)+".jpg"
urllib.request.urlretrieve(url, fullname)
</code></pre>
<p>Following is the error that I am getting-</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 1318, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 1400, in connect
server_hostname=server_hostname)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\ssl.py", line 407, in wrap_socket
_context=self, _session=session)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\ssl.py", line 814, in __init__
self.do_handshake()
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\ssl.py", line 1068, in do_handshake
self._sslobj.do_handshake()
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\ssl.py", line 689, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\KAUSTUBH\Downloads\Web scraping\swd trial.py", line 19, in <module>
urllib.request.urlretrieve(url, fullname)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 248, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 526, in open
response = self._open(req, data)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 544, in _open
'_open', req)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 504, in _call_chain
result = func(*args)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 1361, in https_open
context=self._context, check_hostname=self._check_hostname)
File "C:\Users\KAUSTUBH\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 1320, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)>
</code></pre>
|
<p>In order to skip the SSL error, you need to add an option <code>--ignore-certificate-errors</code> when you init the chromedriver.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument("--ignore-certificate-errors")
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.get('https://swd.bits-goa.ac.in/student_pagetemp1?PHPSESSID=ecm2utnjvml8kpkpp8dh2dvnq0')
</code></pre>
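<p>Note that the traceback in the question is actually raised by <code>urllib.request.urlretrieve</code>, not by the browser, so the certificate check may also need to be bypassed on the urllib side. A minimal sketch (the student ID below is hypothetical, and disabling verification should only be done if you trust the host):</p>
<pre><code>import ssl
import urllib.request

# Build an SSL context that skips certificate verification (use with care)
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

url = "https://swd.bits-goa.ac.in/css/studentImg/2015XYZ0001.jpg"   # hypothetical ID
with urllib.request.urlopen(url, context=context) as resp, open("2015XYZ0001.jpg", "wb") as out:
    out.write(resp.read())
</code></pre>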
|
python|selenium|web-scraping|selenium-chromedriver|urllib
| 1 |
1,905,255 | 48,593,124 |
Subclass Axes in matplotlib
|
<p>The title pretty much says it.
However, the way <code>matplotlib</code> is set up, it's not possible to simply inherit from <code>Axes</code> and have it work.
The <code>Axes</code> object is never instantiated directly; typically it's only returned from calls to <code>subplot</code> or other functions.</p>
<p>There are a couple of reasons I want to do this.
First, to avoid reproducing plots with similar parameters over and over.
Something like this:</p>
<pre><code>class LogTemp(plt.Axes):
""" Axes to display temperature over time, in logscale """
def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
self.set_xlabel("Time (s)")
self.set_ylabel("Temperature(C)")
self.set_yscale('log')
</code></pre>
<p>It wouldn't be hard to write a custom function for this, although it wouldn't be as flexible.
The bigger reason is that I want to override some of the default behavior.
As a very simple example, consider</p>
<pre><code>class Negative(plt.Axes):
""" Plots negative of arguments """
def plot(self, x, y, *args, **kwargs):
super().plot(x, -y, *args, **kwargs)
</code></pre>
<p>or </p>
<pre><code>class Outliers(plt.Axes):
""" Highlight outliers in plotted data """
def plot(self, x, y, **kwargs):
out = y > 3*y.std()
        super().plot(x, y, **kwargs)
super().plot(x[out], y[out], marker='x', linestyle='', **kwargs)
</code></pre>
<p>Trying to modify more than one aspect of behavior will very quickly become messy if using functions.</p>
<p>However, I haven't found a way to have <code>matplotlib</code> easily handle new <code>Axes</code> classes.
The docs don't mention it anywhere that I've seen.
<a href="https://stackoverflow.com/questions/27470732/how-do-i-subclass-matplotlibs-figure-class">This</a> question addresses inheriting from the <code>Figure</code> class.
The custom class can then be passed into some <code>matplotlib</code> functions.
An unanswered question <a href="https://stackoverflow.com/questions/27473683/can-i-get-the-axis-that-will-be-returned-by-pyplot-subplots-inside-the-constru">here</a> suggests the Axes aren't nearly as straightforward. </p>
<p>Update: It's possible to monkey patch <code>matplotlib.axes.Axes</code> to override the default behavior, but this can only be done once when the program is first executed. Using multiple custom <code>Axes</code> is not possible with this approach.</p>
|
<p>I've found a good approach explained on <a href="https://gist.github.com/btel/a6b97e50e0f26a1a5eaa" rel="noreferrer">github</a>.
Their goal was an <code>Axes</code> object without ticks or markers, which speeds up creation time significantly.
It's possible to register "projections" with <code>matplotlib</code>, then use those.</p>
<p>For the example I've given, it can be done by </p>
<pre><code>class Outliers(plt.Axes):
""" Highlight outliers in plotted data """
name = 'outliers'
def plot(self, x, y, **kwargs):
out = abs(y - y.mean()) > 3*y.std()
super().plot(x, y, **kwargs)
super().plot(x[out], y[out], marker='x', linestyle='', **kwargs)
import matplotlib.projections as proj
proj.register_projection(Outliers)
</code></pre>
<p>And it can then be used by</p>
<pre><code>ax = f.add_subplot(1, 1, 1, projection='outliers')
</code></pre>
<p>or</p>
<pre><code>fig, axes = plt.subplots(20,20, subplot_kw=dict(projection='outliers'))
</code></pre>
<p>The only changes required are a <code>name</code> variable in the class definition, then passing that name in the <code>projection</code> argument of subplot.</p>
|
python-3.x|matplotlib
| 6 |
1,905,256 | 48,464,861 |
numpy vectorize dimension increasing function
|
<p>I would like to create a function that has input: <code>x.shape==(2,2)</code>, and outputs <code>y.shape==(2,2,3)</code>.</p>
<p>For example:</p>
<pre><code>@np.vectorize
def foo(x):
#This function doesn't work like I want
return x,x,x
a = np.array([[1,2],[3,4]])
print(foo(a))
#desired output
[[[1 1 1]
[2 2 2]]
[[3 3 3]
[4 4 4]]]
#actual output
(array([[1, 2],
[3, 4]]), array([[1, 2],
[3, 4]]), array([[1, 2],
[3, 4]]))
</code></pre>
<p>Or maybe:</p>
<pre><code>@np.vectorize
def bar(x):
#This function doesn't work like I want
return np.array([x,2*x,5])
a = np.array([[1,2],[3,4]])
print(bar(a))
#desired output
[[[1 2 5]
[2 4 5]]
[[3 6 5]
[4 8 5]]]
</code></pre>
<p>Note that <code>foo</code> is just an example. I want a way to <code>map</code> over a numpy array (which is what vectorize is supposed to do), but have that <code>map</code> take a 0d object and shove a 1d object in its place. It also seems to me that the dimensions here are arbitrary, as one might wish to take a function that takes a 1d object and returns a 3d object, vectorize it, call it on a 5d object, and get back a 7d object.... However, my specific use case only requires vectorizing a 0d to 1d function, and mapping it appropriately over a 2d array.</p>
|
<p>It would help, in your question, to show both the actual result and your desired result. As written that isn't very clear.</p>
<pre><code>In [79]: foo(np.array([[1,2],[3,4]]))
Out[79]:
(array([[1, 2],
[3, 4]]), array([[1, 2],
[3, 4]]), array([[1, 2],
[3, 4]]))
</code></pre>
<p>As indicated in the <code>vectorize</code> docs, this has returned a tuple of arrays, corresponding to the tuple of values that your function returned.</p>
<p>Your <code>bar</code> returns an array, where as <code>vectorize</code> expected it to return a scalar (or single value):</p>
<pre><code>In [82]: bar(np.array([[1,2],[3,4]]))
ValueError: setting an array element with a sequence.
</code></pre>
<p><code>vectorize</code> takes an <code>otypes</code> parameter that sometimes helps. For example if I say that <code>bar</code> (without the wrapper) returns an object, I get:</p>
<pre><code>In [84]: f=np.vectorize(bar, otypes=[object])
In [85]: f(np.array([[1,2],[3,4]]))
Out[85]:
array([[array([1, 2, 5]), array([2, 4, 5])],
[array([3, 6, 5]), array([4, 8, 5])]], dtype=object)
</code></pre>
<p>A (2,2) array of (3,) arrays. The <code>(2,2)</code> shape matches the shape of the input.</p>
<p><code>vectorize</code> has a relatively new parameter, <code>signature</code></p>
<pre><code>In [90]: f=np.vectorize(bar, signature='()->(n)')
In [91]: f(np.array([[1,2],[3,4]]))
Out[91]:
array([[[1, 2, 5],
[2, 4, 5]],
[[3, 6, 5],
[4, 8, 5]]])
In [92]: _.shape
Out[92]: (2, 2, 3)
</code></pre>
<p>I haven't used this much, so am still getting a feel for how it works. When I've tested it, it is slower than the original scalar version of <code>vectorize</code>. Neither offers any speed advantage of explicit loops. However <code>vectorize</code> does help when 'broadcasting', allowing you to use a variety of input shapes. That's even more useful when your function takes several inputs, not just one as in this case.</p>
<pre><code>In [94]: f(np.array([1,2]))
Out[94]:
array([[1, 2, 5],
[2, 4, 5]])
In [95]: f(np.array(3))
Out[95]: array([3, 6, 5])
</code></pre>
<hr>
<p>For best speed, you want to use existing numpy whole-array functions where possible. For example your <code>foo</code> case can be done with:</p>
<pre><code>In [97]: np.repeat(a[:,:,None],3, axis=2)
Out[97]:
array([[[1, 1, 1],
[2, 2, 2]],
[[3, 3, 3],
[4, 4, 4]]])
</code></pre>
<p><code>np.stack([a]*3, axis=2)</code> also works.</p>
<p>And your <code>bar</code> desired result:</p>
<pre><code>In [100]: np.stack([a, 2*a, np.full(a.shape, 5)], axis=2)
Out[100]:
array([[[1, 2, 5],
[2, 4, 5]],
[[3, 6, 5],
[4, 8, 5]]])
</code></pre>
<p><code>2*a</code> takes advantage of the whole-array multiplication. That's true 'numpy-onic' thinking.</p>
|
python|numpy
| 3 |
1,905,257 | 20,244,774 |
django models - conditionally set blank=True
|
<p>I am trying to build an app where users can customize forms. The following example contains classes for creating fields (<code>QuestionField</code>, <code>AnswerField</code>), which are used by the admin, and the <code>BoolAnswer</code>, which is filled in by the user. This way an admin can create a form with questions and possible answers.</p>
<p>According to the Django documentation, <code>blank=True</code> is related to validation. The problem is that it is set at the class level rather than at the object level.</p>
<p>How can I set <code>blank=True</code> depending on the related model, so that I do not have to implement my own validator? (See the pseudo code in <code>BoolAnswer</code>.)</p>
<p>My <code>models.py</code>:</p>
<pre><code>class QuestionField(models.Model):
question = models.TextField(max_length=200)
models.ForeignKey(Sheet)
class BoolAnswerField(AnswerField):
question = models.ForeignKey(models.Model)
if_true_field = models.TextField(max_length=100, null=True)
class BoolAnswer(models.Model):
bool_answer_field = models.ForeignKey(BoolAnswerField)
result = models.BooleanField()
if_true = models.TextField(max_length=100, null=True,
blank=True if self.bool_answer_field.if_true_field)
</code></pre>
<p><strong>Short explanation</strong>:
If the answer to a <code>BoolAnswerField</code> question is True, the <code>if_true</code> field should explain why.</p>
|
<p>Don't hate me, but validation is the way to go, see <a href="https://docs.djangoproject.com/en/1.6/ref/models/instances/#django.db.models.Model.clean_fields" rel="nofollow noreferrer">here</a></p>
<pre><code>from django.core.exceptions import ValidationError

class BoolAnswer(models.Model):
    bool_answer_field = models.ForeignKey(BoolAnswerField)
    result = models.BooleanField()
    if_true = models.TextField(max_length=100, null=True, blank=True)

    def clean(self):
        if self.bool_answer_field.if_true_field and not self.if_true:
            raise ValidationError('BAF is True without a reason')
</code></pre>
<p>In case you want your error message to be displayed next to the field, not at the beginning of the form, you've got to pass a <code>dict</code> to <code>ValidationError</code>, like:</p>
<pre><code>from django.utils.translation import gettext_lazy as _
...
raise ValidationError({
'field_name': _('This field is required.')})
</code></pre>
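<p>One caveat: <code>clean()</code> only runs automatically as part of <code>full_clean()</code>, which <code>ModelForm</code> validation calls for you; a plain <code>save()</code> does not trigger it. If you create instances in code, a minimal sketch (with a hypothetical <code>baf</code> instance) would be:</p>
<pre><code>ba = BoolAnswer(bool_answer_field=baf, result=True)  # if_true left empty on purpose
ba.full_clean()   # raises ValidationError because if_true is missing
ba.save()
</code></pre>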
|
python|django|django-models
| 7 |
1,905,258 | 19,936,347 |
Pygame- window and sprite class - python
|
<p>I'm trying to build a window class that represents a window which sprites live in. In the window class I want to have the following things:</p>
<ul>
<li>set_background()</li>
<li>set_size()</li>
<li>add_sprite()</li>
<li>remove_sprite()</li>
</ul>
<p>In the sprite class i want the following methods:</p>
<ul>
<li>draw_sprite()</li>
</ul>
<p>For now I will have one sprite, but I would eventually like to have a list of sprites.</p>
<p>I've tried running what I have in a main class by calling these methods on its instances.</p>
<pre><code>window = Window(200,200)
sprite = Sprite(Window)
window.set_Background()
sprite.draw_sprite()
</code></pre>
<p>Heres my code:</p>
<p><strong>Sprite class:</strong></p>
<pre><code>import pygame
pygame.init()
class Sprite(object):
def __init__(self, World =None, sprite=[]):
self.Window = window
def draw_sprite(self,sprite,x,y):
sprite=pygame.image.load(sprite)
self.World.window.blit(sprite,(x,y))
pygame.display.update()
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
</code></pre>
<p><strong>Window class:</strong> </p>
<pre><code>import pygame,sys,os from pygame.locals import *
from Window import *
class Panel:
def __init__(self, width=None,height=None):
self.width= width
self.height = height
self.foreground=pygame.Surface((width, height))
self.background= pygame.Surface((width, height))
self.rect= self.foreground.get_rect()
def clear(self):
self.foreground.blit(self.background,(0,0))
def set_background(self, image=None):
if sky is not None:
bg = Window.draw_world(self,image)
self.background.blit(bg,(width,height))
class Window(Panel):
pygame.init()
def __init__(self,width,height):
self.window = pygame.display.set_mode((width,height))
self.width=width
self.height=height
Panel.__init__(self, width, height)
self._foreground = self.window
self.set_background()
def draw_world(self,image):
image=pygame.image.load(image)
for x in range(0,(290/image.get_width()+1)):
for y in range(0,(230/image.get_height()+1)):
self.background.blit(image,(x*200,y*200))
pygame.display.update()
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
def window_setSize(width,height):
self.width=width
self.heght =height
</code></pre>
<p><strong>main class:</strong></p>
<pre><code>from Sprite import *
from window import *
sprite = Sprite()
window=Window()
sprite.draw_sprite("sprite.png",100,200)
window.set_background("bg.png")
</code></pre>
<p>Has anyone got any idea why the sprite is displaying but my background isn't? It just shows a black background.</p>
<p>I'm using Python 3 and pygame 3.3.
Thanks</p>
|
<p>You have an event loop inside <code>draw_sprite()</code>, so it runs until the end of the game and <code>window.set_background</code> is never executed.</p>
<p>Your code is incorrectly structured.</p>
<p>I'll try to correct it and post the code later.</p>
<hr>
<p>by the way: see: <a href="http://pygame.org/docs/ref/sprite.html" rel="nofollow noreferrer">pygame.sprite.Sprite</a>.</p>
<hr>
<p><strong>EDIT:</strong></p>
<p>Here is a simple example of how to organize the code.</p>
<p>Now it is in one file. In Pygame there is always only one window, so there is no need for a separate Panel + Window. You have one event loop, in <code>run()</code>. All code is inside <code>Window</code> (creating sprites, changing the background, drawing, etc.).</p>
<p>I add sprites to a (Python) list and draw all of them from that list (the player is an exception), so I can only remove the last sprite from the list :/ If you need something better, see <a href="http://pygame.org/docs/ref/sprite.html#pygame.sprite.Sprite" rel="nofollow noreferrer">pygame.sprite.Sprite()</a> and <a href="http://pygame.org/docs/ref/sprite.html#pygame.sprite.Group" rel="nofollow noreferrer">pygame.sprite.Group()</a>.</p>
<p>Use <code>Arrows</code> to move red ball, <code>Space</code> to pause game, <code>ESC</code> to quit. </p>
<p>At the end I attached my bitmaps.</p>
<pre><code>import pygame
#from pygame.locals import *
#--------------------------------------------------------------------
# class for single sprite
#--------------------------------------------------------------------
class MySprite():
def __init__(self, image, x, y):
self.image = pygame.image.load(image)
image_rect = self.image.get_rect()
# Rect class to use "Sprite collision detect" - in the future
# In rect you have sprite position and size
# You can use self.rect.x, self.rect.y, self.rect.width, self.rect.height
# and self.rect.center, self.rect.centerx, self.rect.top, self.rect.bottomright etc.
self.rect = pygame.rect.Rect(x, y, image_rect.width, image_rect.height)
def draw(self, screen):
screen.blit(self.image, self.rect)
#--------------------------------------------------------------------
# class for player
#--------------------------------------------------------------------
class MyPlayer(MySprite):
def __init__(self, image, x, y):
# parent constructor always as a first in __init__
MySprite.__init__(self, image, x, y)
self.speed_x = self.speed_y = 0
#-----------------------------
def set_speed(self, x, y):
self.speed_x = x
self.speed_y = y
#-----------------------------
def update(self):
self.rect.x += self.speed_x
self.rect.y += self.speed_y
if self.rect.centerx < 0 :
self.rect.centerx = 800
elif self.rect.centerx > 800 :
self.rect.centerx = 0
if self.rect.centery < 0 :
self.rect.centery = 600
elif self.rect.centery > 600 :
self.rect.centery = 0
#--------------------------------------------------------------------
class Window():
def __init__(self, width, height):
#--------------------
self.rect = pygame.Rect(0, 0, width, height)
# or
self.width, self.height = width, height
#--------------------
pygame.init()
# most users and tutorials call it "screen"
self.screen = pygame.display.set_mode(self.rect.size)
#############################################################
self.foreground = None
self.background = None
self.set_background("background.jpg")
self.set_foreground("ball3.png")
#################################################
self.player = MyPlayer("ball1.png", 100, 200)
self.sprites_list = []
self.add_sprite(MySprite("ball2.png", 100, 400))
self.add_sprite(MySprite("ball2.png", 300, 500))
self.add_sprite(MySprite("ball2.png", 300, 200))
self.remove_last_sprite()
#-----------------------------
# red text "PAUSE"
font = pygame.font.SysFont("", 72)
self.text_pause = font.render("PAUSE", True, (255, 0, 0))
# center text on screen
screen_center = self.screen.get_rect().center
self.text_pause_rect = self.text_pause.get_rect(center=screen_center)
#--------------------------
def add_sprite(self, sprite):
self.sprites_list.append(sprite)
#--------------------------
def remove_last_sprite(self):
if self.sprites_list:
del self.sprites_list[-1]
#--------------------------
def draw_sprites(self, screen):
for sprite in self.sprites_list:
sprite.draw(screen)
#--------------------------
def draw_background(self, screen):
screen.fill((0,64,0)) # clear screen to green
if self.background:
screen.blit(self.background, (0,0))
#--------------------------
def draw_foreground(self, screen):
if self.foreground:
screen.blit(self.foreground, (0,0))
#--------------------------
def draw_world(self, image):
temp = pygame.Surface(self.rect.size, pygame.SRCALPHA, 32).convert_alpha()
image_rect = image.get_rect()
for x in range(0, self.rect.width, 60):
            for y in range(0, self.rect.height, 60):
temp.blit(image,(x,y))
return temp
#--------------------------
def set_foreground(self, image=None):
if image:
img = pygame.image.load(image)
self.foreground = self.draw_world(img)
#--------------------------
def set_background(self, image=None):
if image:
self.background = pygame.image.load(image)
#--------------------------
def run(self):
clock = pygame.time.Clock()
RUNNING = True
PAUSED = False
while RUNNING:
#--- events ---
for event in pygame.event.get():
if event.type == pygame.QUIT:
RUNNING = False
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
RUNNING = False
elif event.key == pygame.K_SPACE:
PAUSED = not PAUSED
if event.key == pygame.K_UP:
self.player.set_speed(0,-10)
elif event.key == pygame.K_DOWN:
self.player.set_speed(0,10)
elif event.key == pygame.K_LEFT:
self.player.set_speed(-10,0)
elif event.key == pygame.K_RIGHT:
self.player.set_speed(10,0)
if event.type == pygame.KEYUP:
if event.key in (pygame.K_UP, pygame.K_DOWN, pygame.K_LEFT, pygame.K_RIGHT):
self.player.set_speed(0,0)
#--- changes ----
if not PAUSED:
# change elements position
self.player.update()
#--- draws ---
self.draw_background(self.screen)
self.draw_foreground(self.screen)
self.draw_sprites(self.screen)
self.player.draw(self.screen)
if PAUSED:
# draw pause string
self.screen.blit(self.text_pause, self.text_pause_rect.topleft)
pygame.display.update()
#--- FPS ---
clock.tick(25) # 25 Frames Per Seconds
#--- finish ---
pygame.quit()
#----------------------------------------------------------------------
Window(800,600).run()
</code></pre>
<p>ball1.png
<img src="https://i.stack.imgur.com/eS09b.png" alt="enter image description here">
ball2.png
<img src="https://i.stack.imgur.com/drMXl.png" alt="enter image description here">
ball3.png
<img src="https://i.stack.imgur.com/VB0KR.png" alt="enter image description here"></p>
<p>background.jpg
<img src="https://i.stack.imgur.com/TiPS6.jpg" alt="enter image description here"></p>
<p>screenshot
<img src="https://i.stack.imgur.com/HWxly.png" alt="enter image description here"></p>
|
python|pygame
| 4 |
1,905,259 | 4,777,764 |
Unicode error trying to call Google search API
|
<p>I need to perform google search to retrieve the number of results for a query. I found the answer here - <a href="https://stackoverflow.com/questions/1657570/google-search-from-a-python-app/1657597#1657597">Google Search from a Python App</a></p>
<p>However, for a few queries I am getting the error below. I think the query has Unicode characters.</p>
<p><code>UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 28: ordinal not in range(128)</code></p>
<p>I searched Google and found that I need to convert Unicode to ASCII, using the code below.</p>
<pre><code>def convertToAscii(text, action):
    try:
        temp = unicode(text, "utf-8")
        fixed = unicodedata.normalize('NFKD', temp).encode('ASCII', action)
        return fixed
    except Exception, errorInfo:
        print errorInfo
        print "Unable to convert the Unicode characters to xml character entities"
        raise errorInfo
</code></pre>
<p>If I use the action ignore, it removes those characters, but if I use other actions, I am getting exceptions.</p>
<p>Any idea, how to handle this?</p>
<p>Thanks</p>
<p><strong>Edit:</strong>
I am using the code below to encode and then perform the search, and this is what throws the error.</p>
<p><code>query = urllib.urlencode({'q': searchfor})</code></p>
|
<p>You cannot <code>urlencode</code> raw Unicode strings. You need to first encode them to UTF-8 and then feed to it: </p>
<p><code>query = urllib.urlencode({'q': u"München".encode('UTF-8')})</code></p>
<p>This returns <code>q=M%C3%BCnchen</code> which Google happily accepts.</p>
|
python|unicode|ascii|non-ascii-characters
| 2 |
1,905,260 | 4,568,588 |
Is there a commandline flag to set PYTHONHOME?
|
<p>I'm attempting to run python on a system that doesn't allow me to set environment variables. Is there a commandline flag to python that will set PYTHONHOME? I looked here: <a href="http://docs.python.org/release/2.3.5/inst/search-path.html" rel="nofollow">http://docs.python.org/release/2.3.5/inst/search-path.html</a> but didn't see anything.</p>
<p>So, hopefully something like this:</p>
<pre><code>python -magical_path_flag /my/python/install test.py
</code></pre>
<p><strong>EDIT</strong></p>
<p>Thanks for the responses everyone. I'm embarrassed to say I actually meant PYTHONHOME, not PYTHONPATH. (That's what I deserve for asking a question at 1:30 AM.) I've edited my quesiton.</p>
<p>Here's some more info. I'm trying to get python running on Android. I can run python -V no problem, but if I try and execute a script, I get:</p>
<pre><code>I/ControlActivity(18340): Could not find platform independent libraries <prefix>
I/ControlActivity(18340): Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
</code></pre>
<p>Unfortunately when using the ProcessBuilder and changing the environment variables on Android, it says that they're not modifiable and throws an exception. I'm able to pass all the command line flags I want, so I was hoping I could set PYTHONHOME that way.</p>
<p>I've tried creating a wrapping shell script which exports PYTHONHOME and then calls python but that didn't work. (Got the same error as before.)</p>
<p>Thanks,</p>
<p>Gabe</p>
|
<p>You could simply set it in your script -- <code>sys.path</code> is a regular, modifiable list. Something like:</p>
<pre><code>import sys
sys.path.append("/path/to/libraries")
</code></pre>
<p>should do the trick</p>
|
python|android
| 5 |
1,905,261 | 69,528,141 |
How to fix the hosting address in Flask ngrok
|
<p>I'm serving an API from my computer via flask-ngrok, which generates an address where users can remotely use my API. However, every time I run my API it generates a random address like this: <code>http://1b1c-187-121-198-62.ngrok.io</code>.
How do I generate a fixed address?</p>
<p>This is my main code:</p>
<pre><code>from flask_ngrok import run_with_ngrok
from flask import Flask, flash, request, redirect, url_for, render_template
import os
from werkzeug.utils import secure_filename
from PIL import Image, ImageOps
#app = Flask(__name__)
app = Flask(__name__, template_folder='./templates')
run_with_ngrok(app) #starts ngrok when the app is run
UPLOAD_FOLDER = './static/'
app.secret_key = "secret key"
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024
ALLOWED_EXTENSIONS = set(['png', 'jpg', 'jpeg', 'gif'])
@app.route('/')
def index():
return render_template('index.html')
########################################
def allowed_file(filename):
return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS
@app.route('/', methods=['POST'])
def upload_image():
if 'file' not in request.files:
flash('No file part')
#return redirect(request.url)
file = request.files['file']
if file.filename == '':
flash('No image selected for uploading')
#return redirect(request.url)
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
#file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
file.save(os.path.join(app.config['UPLOAD_FOLDER'], 'foto.jpeg'))
        #### Standardizing the photo size
basewidth = 400
# My image is a 200x374 jpeg that is 102kb large
foo = Image.open('./static/foto.jpeg')
wpercent = (basewidth/float(foo.size[0]))
# I downsize the image with an ANTIALIAS filter (gives the highest quality)
hsize = int((float(foo.size[1])*float(wpercent)))
foo = foo.resize((basewidth,hsize), Image.ANTIALIAS)
foo = ImageOps.exif_transpose(foo)
foo.save("./static/fase1/foto.jpeg",optimize=True,quality=95)
#foo.save("./static/foto2.jpeg", optimize=True,quality=50)
#########
        #### My code
#os.remove('./static/foto.png')
#os.remove('./static/uploads/foto2.png')
os.system('python evala.py --trained_model=./weights/yolact_plus_resnet50_meat_3700_495900.pth --config=yolact_resnet50_meat_config --score_threshold=0.8 --top_k=100 --images=./static/fase1:./static/mask')
####
# get directory path where you want to save the images
#print('upload_image filename: ' + filename)
flash('Image successfully uploaded and displayed below')
return render_template('index.html', filename=filename)
else:
flash('Allowed image types are - png, jpg, jpeg, gif')
return redirect(request.url)
@app.route('/display/<filename>')
def display_image(filename):
return redirect(url_for('static', filename='uploads/foto2.jpeg'), code=301)
app.run()
</code></pre>
<p>Note: my code won't run as posted; it's just there to show the structure of my code.</p>
|
<p>As mentioned by others before, that requires a <a href="https://ngrok.com/pricing" rel="nofollow noreferrer">paid plan</a>, which gives you custom domains and reserved domains.</p>
<p>For a free alternative, you can request a subdomain (<code>-s</code>) with <a href="http://localtunnel.github.io/www/" rel="nofollow noreferrer">localtunnel</a>.</p>
|
python|flask|ngrok
| 2 |
1,905,262 | 48,025,282 |
Saving/Retrieving igraph Graph attributes
|
<p>I am trying to save and then retrieve an igraph Graph with the graph attributes. Specifically, I have a two-terminal graph, and I am storing the source and sink as graph attributes so I can retrieve them in constant time. Note, the vertices are not in any specific order (e.g., the first vertex is the source and the last is the sink).</p>
<p>I have searched the documentation but I can't see that any of the formats support storing/retrieving graph attributes. Am I missing anything?</p>
<p>My fallback is to use boolean source/sink vertex attributes, but that takes more space and requires linear time to retrieve the right vertices.</p>
|
<p><a href="http://graphml.graphdrawing.org/" rel="nofollow noreferrer">GraphML</a> supports numeric and string attributes that can be attached to the entire graph, to individual vertices or to individual edges (actually, it supports even more complex ones but igraph's GraphML implementation is limited to numeric and string attributes). So, you could use <code>Graph.write_graphml()</code> and <code>Graph.Read_GraphML()</code>. Also, you can simply save an igraph graph using Python's <code>pickle</code> module (i.e. use <code>pickle.dump()</code> and <code>pickle.load()</code>) and you will get all the graph/vertex/edge attributes back (even complex Python objects) -- the only catch is that the <code>pickle</code> format is not interoperable with other tools outside the Python world.</p>
|
python|attributes|igraph|file-format|read-write
| 2 |
1,905,263 | 51,228,122 |
How to change the font family/style when printing to the console?
|
<p>The question is straightforward, is it possible to change the font family of text in a Python <code>print()</code> output? Like Times New Roman, Arial, or Comic Sans?</p>
<p>I only want to change some of the output. Not all of the text like in <a href="https://stackoverflow.com/questions/3592673/change-console-font-in-windows/13940780#13940780">this question</a>.</p>
<p>I'm using Python 3 and Jupyter Notebook on a Mac.</p>
<p>I know it's possible to make certain text bold like so:</p>
<pre><code>bold_start = '\033[1m'
bold_end = '\033[0m'
print(bold_start, "Hello", bold_end, "World")
</code></pre>
<p>This outputs "<strong>Hello</strong> World" instead of "Hello World" or "<strong>Hello World</strong>"</p>
|
<p>Python strings are just strings of Unicode characters, they don't say anything about font one way or another. The font is determined by whatever is rendering the characters, e.g. the terminal program you're using, or the browser you're using. The <code>print</code> function just spits out the resulting string.</p>
<p>As you pointed out, if you're in a terminal that understands those escape sequences, then you can use those to affect the output. If your output is a web page, then you can embed html code to specify whatever you like, but all the python interpreter sees is a string of characters, not a string of characters in any particular font.</p>
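<p>Since you are in a Jupyter notebook, one hedged workaround is to bypass <code>print</code> for the pieces you want styled and render a small HTML snippet instead (the font names here are only examples and must be available in the browser):</p>
<pre><code>from IPython.display import HTML, display

# Only the span inside the HTML gets the custom font; the rest renders normally
styled = "<span style='font-family: Comic Sans MS, cursive;'>Hello</span> World"
display(HTML(styled))
</code></pre>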
|
python|python-3.x|fonts
| 4 |
1,905,264 | 51,395,847 |
Understanding Meta classes in python
|
<p>I am trying to learn <code>metaclasses</code> in Python; from my research I found an example like the following.</p>
<p>I have <code>Base</code> and <code>Derived</code> classes like the following:</p>
<pre><code>class Base():
def foo(self):
return self.bar()
class Derived(Base):
def foo2(self):
return "i am foo2"
</code></pre>
<p>Now I want to make sure that whoever extends the <code>Base</code> class must implement the <code>bar()</code> method, so I created a metaclass to hook the construction of the derived class. The <code>Base</code> class now looks like the following, with the <code>BaseMeta</code> metaclass:</p>
<pre><code>class BaseMeta(type):
def __new__(cls, name, bases, body):
if not "bar" in body:
raise TypeError("bar not implemented")
return super().__new__(cls, name, bases, body)
class Base(metaclass=BaseMeta):
def foo(self):
return self.bar()
</code></pre>
<p>The problem is that when I inspect <code>body</code>, I get two records, one for the <code>Base</code> class and one for the <code>Derived</code> class, like the following:</p>
<pre><code> {'__module__': '__main__', '__qualname__': 'Base', 'foo': <function
Base.foo at 0x7ffbaae436a8>}
{'__module__': '__main__', '__qualname__': 'Derived', 'bar': <function
Derived.bar at 0x7ffbaae437b8>}
</code></pre>
<p>My code in <code>__new__</code> breaks since <code>Base</code> does not have <code>bar</code>, but I want to check only the derived classes, so I rewrote my <code>metaclass</code> like the following:</p>
<pre><code>def __new__(cls, name, bases, body):
if name !="Base" and not "bar" in body:
raise TypeError("bar not implemented")
return super().__new__(cls, name, bases, body)
</code></pre>
<p>I am checking <code>name != Base</code> in my <code>__new__</code> method. </p>
<blockquote>
  <p>Is that the right way to do it, or is there a better way?</p>
</blockquote>
|
<p>You can use the <code>abc</code> module in the stdlib, which has tools for doing exactly this.
<a href="https://docs.python.org/3/library/abc.html" rel="nofollow noreferrer">https://docs.python.org/3/library/abc.html</a></p>
<pre><code>import abc
class Base(abc.ABC):
@abc.abstractmethod
def bar(self):
pass
class Derived(Base):
pass
# This will raise an error because bar is not implemented
# >>> Derived()
# TypeError: Can't instantiate abstract class Derived with abstract methods bar
</code></pre>
<p>Another strategy would be to have a <code>bar</code> method on your Base class that raises a NotImplementedError. The main difference is that no error is raised until you actually call something that requires <code>bar</code>. e.g.</p>
<pre><code>class Base():
def foo(self):
return self.bar()
def bar(self):
raise NotImplementedError
</code></pre>
|
python|python-3.6|metaclass
| 2 |
1,905,265 | 73,838,060 |
what is wrong with my Sum of Digits recursive function?
|
<pre><code>def digital_root(n):
if n > 0:
a.append(n%10)
if n/10 > 0:
digital_root(n/10)
else:
if len(a) > 1:
b = a
a.clear()
z = 0
for i in range(len(b)):
z += b[i]
digital_root(z)
else:
return a[0]
</code></pre>
<p>Why does it return None?</p>
<p>The task is: given n, take the sum of the digits of n. If that value has more than one digit, continue reducing in this way until a single-digit number is produced. The input will be a non-negative integer.</p>
|
<p>You got <code>None</code> because you are missing <code>return</code> statements on the recursive calls to <code>digital_root()</code>. They should be:</p>
<pre><code> return digital_root(n/10)
</code></pre>
<p>and</p>
<pre><code> return digital_root(z)
</code></pre>
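<p>If you only need the digit-sum reduction itself and don't mind departing from the recursive approach with the external list, a compact iterative sketch would be:</p>
<pre><code>def digital_root(n):
    while n >= 10:                        # keep reducing until a single digit remains
        n = sum(int(d) for d in str(n))
    return n

print(digital_root(493193))  # 2
</code></pre>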
|
python-3.x
| 0 |
1,905,266 | 73,593,640 |
Updating an open .txt file in python
|
<p>I want to run a script in the background that prints useful information I need to see every ~5 min. I thought I could write text into a text file while having Notepad open, so I could see live updates. Is that possible to do?</p>
<p>Is there an alternative solution?</p>
|
<p><strong>Printing</strong></p>
<p>As @esqew suggested just use <code>print</code> or <code>stdout</code>. You can even print in color with some super easy libraries <a href="https://pypi.org/project/colorama/" rel="nofollow noreferrer">like this one</a>.</p>
<p>Example Code:</p>
<pre><code>from time import sleep
for _ in range(10):
print('I am a noob') # state the obvious
sleep(5 * 60) # every 5 mins
</code></pre>
<p><strong>Writing To File</strong></p>
<p>You can also write to a file. An easy way is to append:</p>
<p>Example Code:</p>
<pre><code>from time import sleep
for _ in range(10):
with open("myfile.txt", "w") as file:
file.write('I am a noob') # state the obvious
sleep(5 * 60) # every 5 mins
</code></pre>
|
python|text
| 0 |
1,905,267 | 17,576,615 |
Pandas date_range from DatetimeIndex to Date format
|
<p>Pandas <code>date_range</code> returns a <code>pandas.DatetimeIndex</code> which has the indexes formatted as a timestamps (date plus time). For example:</p>
<pre><code>In [114] rng=pandas.date_range('1/1/2013','1/31/2013',freq='D')
In [115] rng
Out [116]
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-01-01 00:00:00, ..., 2013-01-31 00:00:00]
Length: 31, Freq: D, Timezone: None
</code></pre>
<p>Given I am not using timestamps in my application, I would like to convert this index to a date such that:</p>
<pre><code>In [117] rng[0]
Out [118]
<Timestamp: 2013-01-02 00:00:00>
</code></pre>
<p>Will be in the form <code>2013-01-02</code>.</p>
<p>I am using pandas version 0.9.1</p>
|
<p>For me the current answer is not satisfactory because internally it is still stored as a timestamp with hours, minutes, seconds.</p>
<p>Pandas version : 0.22.0</p>
<p>My solution has been to convert it to <code>datetime.date</code>:</p>
<pre class="lang-py prettyprint-override"><code>In[30]: import pandas as pd
In[31]: rng = pd.date_range('1/1/2013','1/31/2013', freq='D')
In[32]: date_rng = rng.date # Here it becomes date
In[33]: date_rng[0]
Out[33]: datetime.date(2013, 1, 1)
In[34]: print(date_rng[0])
2013-01-01
</code></pre>
|
python|pandas
| 15 |
1,905,268 | 55,615,204 |
Can't insert new lines on json
|
<p>I am using the Freshdesk API to create tickets from my Django application. The integration is working perfectly; however, when I try to use <code>\n</code> to create new lines in the ticket, nothing appears on the support page at Freshdesk.</p>
<p>An example is shown below:</p>
<pre><code>items_changed = 'Nome do Item: T-Shirt Masculina Long. Cubo Mágico \n Tamanho: P / Branco \n SKU: 1913511271 - Branco - P \n Tipo: Troca \n Motivo: Não gostei \n Preço: R$79.90 \n Quantidade: 1 \n \n Nome do Item: T-Shirt Feminina Gola Choker Cansei \n Tamanho: G / Branco \n SKU: 1916211244 - Branco - G \n Tipo: Troca \n Motivo: O tamanho não serviu \n Preço: R$79.90 \n Quantidade: 1'
payload = {"description": items_changed + ' Dados do cliente: ' + client_data,
"subject": "Troca/Devolução de itens",
"email": user_email,
"priority": priority['high'],
"status": status['open'],
"group_id": group['Atendimento'],
"type": "Troca",
"product_id": client[client_id]
}
headers = {
'Content-Type': "application/json",
'Cache-Control': "no-cache"
}
response = requests.request("POST", url, data=json.dumps(payload), headers=headers, auth=('****', 'X'))
</code></pre>
<p>The problem is that the output is not what was expected.</p>
<p>The final ticket is presented below:</p>
<blockquote>
<p>Nome do Item: T-Shirt Masculina Long. Cubo Mágico Tamanho: P / Branco
SKU: 1913511271 - Branco - P Tipo: Troca Motivo: Não gostei Preço:
R$79.90 Quantidade: 1 Nome do Item: T-Shirt Feminina Gola Choker
Cansei Tamanho: G / Branco SKU: 1916211244 - Branco - G Tipo: Troca
Motivo: O tamanho não serviu Preço: R$79.90 Quantidade: 1 Dados do
cliente: Nome: Erico Scorpioni, CPF: 06734142990, Telefone:
456543456765, Endereço: Rua 1 / Fpolis - SC</p>
</blockquote>
<p>How can I make new lines appear in the final ticket?</p>
|
<p>All control characters in valid JSON need to be escaped. So, you need to escape your newline characters with an extra <code>\</code>. </p>
<p><code>items_changed = 'Nome do Item: T-Shirt Masculina Long. Cubo Mágico \\n Tamanho: P / Branco \\n SKU: 1913511271 - Branco - P \\n Tipo: Troca \\n Motivo: Não gostei \\n Preço: R$79.90 \\n Quantidade: 1 \\n \\n Nome do Item: T-Shirt Feminina Gola Choker Cansei \\n Tamanho: G / Branco \\n SKU: 1916211244 - Branco - G \\n Tipo: Troca \\n Motivo: O tamanho não serviu \\n Preço: R$79.90 \\n Quantidade: 1'</code></p>
|
python|json|freshdesk
| 0 |
1,905,269 | 50,026,954 |
Combine columns of different types in Pandas Dataframe
|
<p>Let's say I have a DataFrame like this:</p>
<pre><code>df = pd.DataFrame({'col1':[0.2, 0.3, .5], 'col2':['a', 'b', 'c']})
</code></pre>
<p>And I want to obtain a third column col3 which would be something like:</p>
<pre><code>{'col3':['20% a', '30% b', '50% c']}
</code></pre>
<p>Is there anyway of solving this without iterating each row of the DataFrame ?</p>
|
<p>This is one way.</p>
<pre><code>df = pd.DataFrame({'col1':[0.2, 0.3, .5], 'col2':['a', 'b', 'c']})
df['col3'] = (df['col1']*100).astype(int).apply(str) + '% ' + df['col2']
print(df)
col1 col2 col3
0 0.2 a 20% a
1 0.3 b 30% b
2 0.5 c 50% c
</code></pre>
<p>As @JonClements points out, you can use <code>lambda</code> with string formatting, but I have an allergy to them... only good <a href="https://stackoverflow.com/questions/47749018/why-is-pandas-apply-lambda-slower-than-loop-here">in small doses</a>:</p>
<pre><code>df['col3'] = df.apply(lambda r: f'{r.col1 * 100:.0f}% {r.col2}', 1)
</code></pre>
|
python|string|pandas|dataframe
| 1 |
1,905,270 | 53,021,610 |
python gspread - How to get a spreadsheet URL path in after i create it?
|
<p>I'm trying to create a new spreadsheet using the <code>gspread</code> python package, then get its URL path (inside the google drive) and send it to other people so they could go in as well.</p>
<p>I tried to find an answer <a href="https://www.twilio.com/blog/2017/02/an-easy-way-to-read-and-write-to-a-google-spreadsheet-in-python.html" rel="nofollow noreferrer">here</a> and <a href="https://github.com/burnash/gspread" rel="nofollow noreferrer">here</a>, with no luck.</p>
<p>I created a brand new Spreadsheet:</p>
<p><code>import gspread
from gspread_dataframe import get_as_dataframe, set_with_dataframe
gc = gspread_connect()
spreadsheet = gc.create('TESTING SHEET')</code></p>
<p>Then I shared it with my account:
<code>
spreadsheet.share('my_user@my_company.com', perm_type='user', role='writer')</code></p>
<p>Then I wrote some random stuff into it:
<code>
worksheet = gc.open('TESTING SHEET').sheet1
df = pd.DataFrame.from_records([{'a': i, 'b': i * 2} for i in range(100)])
set_with_dataframe(worksheet, df)
</code></p>
<p>Now when I go to my Google Drive I can find this sheet by looking for its name ("TESTING SHEET").</p>
<p>But I couldn't figure out how to get the URL in my Python code, so that I could pass it right away to other people.</p>
<p>Thanks!</p>
|
<p>You can generate the URL by using <a href="https://gspread.readthedocs.io/en/latest/api.html#gspread.models.Spreadsheet.id" rel="nofollow noreferrer"><code>Spreadsheet.id</code></a>. Here's an example that uses the <code>spreadsheet</code> variable from your code:</p>
<pre><code>spreadsheet_url = "https://docs.google.com/spreadsheets/d/%s" % spreadsheet.id
</code></pre>
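<p>Depending on the gspread version you have installed, the <code>Spreadsheet</code> object may also expose a <code>url</code> property directly; if it is available in your version, it avoids building the string by hand:</p>
<pre><code>spreadsheet = gc.create('TESTING SHEET')
print(spreadsheet.url)   # e.g. https://docs.google.com/spreadsheets/d/<spreadsheet id>
</code></pre>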
|
python|url|google-sheets|google-drive-api|gspread
| 4 |
1,905,271 | 65,184,846 |
How to manipulate list
|
<p>I don't know if my title is correct or makes sense, but it's the only thing I could think of, since the split() method splits a string input into a list.</p>
<p>This is my code</p>
<pre><code>import re
fruits = "apple,orange,mango*banana"
listOfFruits = re.split("[,*]",fruits)
storage = ""
for i in range(0, len(listOfFruits)):
storage = storage + ("({}) \n({})\n".format(listOfFruits[i], listOfFruits[i]))
finalStorage = storage + "\n"
print(finalStorage)
</code></pre>
<p>And the output looks like this</p>
<pre><code>(apple)
(apple)
(orange)
(orange)
(mango)
(mango)
(banana)
(banana)
</code></pre>
<p>What I want is that whenever the code sees an asterisk (*), the word after it is automatically indented one level inside the word that came before it.</p>
<p>What I would like my output</p>
<pre><code>(apple)
(apple)
(orange)
(orange)
(mango)
(banana)
(banana)
(mango)
</code></pre>
<p>Other example</p>
<pre><code>fruits = "mango+banana+grapes,orange+apple
</code></pre>
<p>The expected output should look like this</p>
<pre><code>(mango)
(banana)
(grapes)
(grapes)
(banana)
(mango)
(orange)
(apple)
(apple)
(orange)
</code></pre>
|
<p>You could use a recursive method to do the hard work, something like this:</p>
<pre class="lang-py prettyprint-override"><code>def get_levels(section, tab_num=0):
if not section:
return ''
sublevels = get_levels(section[1:], tab_num + 1)
return '\t'*tab_num + '(' + section[0] + ')\n' + \
sublevels + ( '\n' if sublevels else '') + \
'\t'*tab_num + '(' + section[0] + ')'
def print_fruits(fruits):
listOfFruits = fruits.split(',')
storage = ""
for fruit in listOfFruits:
storage += get_levels(fruit.split('*'), 0) + '\n'
print(storage)
</code></pre>
<p>After calling <code>print_fruits</code> with your sample the output is the following:</p>
<pre><code>>>> print_fruits("apple,orange,mango*banana")
>>> print_fruits("mango*banana*grapes,orange*apple")
</code></pre>
<pre><code>(apple)
(apple)
(orange)
(orange)
(mango)
(banana)
(banana)
(mango)
(mango)
(banana)
(grapes)
(grapes)
(banana)
(mango)
(orange)
(apple)
(apple)
(orange)
</code></pre>
|
python|string|list
| 1 |
1,905,272 | 62,835,466 |
Create a separate logger for each process when using concurrent.futures.ProcessPoolExecutor in Python
|
<p>I am cleaning up a massive CSV data dump. I was able to split the single large file into smaller ones using <code>gawk</code>, initially following a <a href="https://unix.stackexchange.com/questions/597593/create-file-name-based-on-csv-column-data-using-gawk/597596#597596">Unix SE query</a>, with the following flow:</p>
<pre><code> BIG CSV file -> use gawk script + bash -> Small CSV files based on columns
</code></pre>
<p>I have about 12 split csv files that are created using the above mentioned flow and each with ~170K lines in them.</p>
<p>I am using <code>python3.7.7</code> on a <strong>Windows 10</strong> machine.</p>
<h2>Code</h2>
<pre class="lang-py prettyprint-override"><code>
def convert_raw_data(incoming_line, f_name, line_counter):
# do some decoding magic
    # catch exceptions and try to log them into a log file named `f_name.log`
def convert_files(dir_name, f_name, dest_dir_name):
# Open the CSV file
# Open the Destination CSV file to store decoded data
line_counter = 1
for line in csv_reader:
# convert raw HEX to Floating point values using `convert_raw_data` function call
line_counter = line_counter + 1
status = convert_raw_data(csv)
if status:
return f'All good for {f_name}.'
else:
return f'Failed for {f_name}'
def main():
# Parse Arguments Logic here
# get CSV Files and their respective paths
csv_files = get_data_files_list(args.datasets)
# decode raw data from each split csv file as an individual process
with concurrent.futures.ProcessPoolExecutor() as executor:
results = [ executor.submit(convert_files, dir_name, f_name, dest_dir) for dir_name, f_name in csv_files ]
for f in concurrent.futures.as_completed(results):
print(f.result())
</code></pre>
<h2>Requirements</h2>
<p>I wish to set a <code>logging</code> logger with the name <code>f_name.log</code> within each process spawned by the <code>ProcessPoolExecutor</code> and want to store the logs with the respective parsed file name. I am not sure if I should use something like:</p>
<pre class="lang-py prettyprint-override"><code>
def convert_raw_data(...., logger):
logger.exception(raw_data_here)
def convert_files(....):
logger = logging.basicConfig(filename=f_name, level=logging.EXCEPTION)
</code></pre>
<p>or are there caveats for using logging modules in a multiprocessing environment?</p>
|
<p>Found out a simple way to achieve this task:</p>
<pre class="lang-py prettyprint-override"><code>import logging
def create_log_handler(fname):
logger = logging.getLogger(name=fname)
logger.setLevel(logging.ERROR)
fileHandler = logging.FileHandler(fname + ".log")
fileHandler.setLevel(logging.ERROR)
logger.addHandler(fileHandler)
formatter = logging.Formatter('%(name)s %(levelname)s: %(message)s')
fileHandler.setFormatter(formatter)
return logger
</code></pre>
<p>I called <code>create_log_handler</code> within my <code>convert_files(.....)</code> function and then used <code>logger.info</code> and <code>logger.error</code> accordingly.</p>
<p>By passing the <code>logger</code> as a parameter to <code>convert_raw_data</code>, I was able to log even the erroneous data points from each of my CSV files in each process.</p>
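<p>For illustration, here is a rough sketch of how the handler might be wired into the <code>convert_files</code> worker from the question (the body is elided; only the logging calls are the point):</p>
<pre><code>def convert_files(dir_name, f_name, dest_dir_name):
    logger = create_log_handler(f_name)   # separate log file per input CSV / worker process
    try:
        # ... open the CSV and decode each raw line here ...
        return f'All good for {f_name}.'
    except Exception:
        logger.exception(f'failed while converting {f_name}')  # logged at ERROR level
        return f'Failed for {f_name}'
</code></pre>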
|
python|python-3.x|logging|concurrent.futures
| 1 |
1,905,273 | 59,075,109 |
Sending a txt file to a server via TCP on Python Socket
|
<p>I am trying to send a <code>.txt</code> file from client-side to a server via TCP. The server should be able to count the words and characters of the text file.</p>
<p>But I am getting an error when sending the text file to the server:</p>
<pre><code> "CLIENT_SOCKET.sendto(str(FILENAME).encode(4096)), (SERVER_HOST, SERVER_PORT)
TypeError: encode() argument 'encoding' must be str, not int"
</code></pre>
<p>I don't really understand how the error is occurring. </p>
<pre><code>import socket  # This is the client
SERVER_HOST = '127.0.0.1'
SERVER_PORT = 54321
CLIENT_SOCKET = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
CLIENT_SOCKET.connect((SERVER_HOST, SERVER_PORT))
FILENAME = input(str("What file would you like to upload?"))
f = open(FILENAME, "r")
FILE_DATA = f.read(4096)
f.close()
print(FILENAME)
CLIENT_SOCKET.sendto(str(FILENAME).encode(4096)), (SERVER_HOST, SERVER_PORT)
RECEIVED_WORDS, SERVER_HOST = CLIENT_SOCKET.recvfrom(4096)
RECEIVED_CHAR, SERVER_HOST = CLIENT_SOCKET.recvfrom(4096)
print(RECEIVED_WORDS.decode())
print(RECEIVED_CHAR.decode())
CLIENT_SOCKET.close()
</code></pre>
|
<p>In <code>CLIENT_SOCKET.sendto(str(FILENAME).encode(4096)), (SERVER_HOST, SERVER_PORT)</code> you are passing the argument <code>4096</code> (an integer) to a method <code>encode</code> that expects a string.</p>
<p>Based on the fact that you are sending text, I would suggest that you use UTF-8 encoding; since the socket is already connected, an example of how to do this follows:</p>
<p><code>CLIENT_SOCKET.sendall(str(FILENAME).encode('utf-8'))</code></p>
<p>See <a href="https://docs.python.org/3/library/stdtypes.html#str.encode" rel="nofollow noreferrer">here</a> for details on how to use python string encoding and the various possible values that can be passed to <code>encode</code></p>
|
python|sockets
| 0 |
1,905,274 | 62,931,985 |
Will the generator be closed automatically after fully iteration?
|
<p>Do I have to write</p>
<pre><code>def count10():
for i in range(10):
yield i
gen = count10()
for j in gen:
print(j)
gen.close()
</code></pre>
<p>to save memory, or just</p>
<pre><code>def count10():
for i in range(10):
yield i
for j in count10():
print(j)
</code></pre>
<p>In fact, I would like to learn the details of the lifecycle of a Python generator, but I failed to find relevant resources.</p>
|
<p>You don't need to <code>close</code> that generator.</p>
<p><code>close</code>-ing a generator isn't about saving memory. (<code>close</code>-ing things is almost never about saving memory.) The idea behind the <code>close</code> method on a generator is that you might stop iterating over a generator while it's still in the middle of a <code>try</code> or <code>with</code>:</p>
<pre><code>def gen():
with something_important():
yield from range(10)
for i in gen():
if i == 5:
break
</code></pre>
<p><code>close</code>-ing a suspended generator throws a <code>GeneratorExit</code> exception into the generator, with the intent of triggering <code>finally</code> blocks and context manager <code>__exit__</code> methods. Here, <code>close</code> would cause the generator to run the <code>__exit__</code> method of <code>something_important()</code>. If you don't abandon a generator in the middle like this (or if your generator doesn't have any <code>finally</code> or <code>with</code> blocks, including in generators it delegates to with <code>yield from</code>), then <code>close</code> is unnecessary (and does nothing).</p>
<p>The memory management system usually runs <code>close</code> for you, but to really ensure prompt closure across Python implementations, you'd have to replace code like</p>
<pre><code>for thing in gen():
...
</code></pre>
<p>with</p>
<pre><code>with contextlib.closing(gen()) as generator:
for thing in generator:
...
</code></pre>
<p>I've never seen anyone do this.</p>
|
python|generator
| 5 |
1,905,275 | 58,674,121 |
How to Show Maximum Frequency Value from a groupBy Table
|
<pre><code> Customer Name Segment Discount Profit
1 Jane Waco Corporate 0.2 1906.4850
2 Joseph Holt Consumer 0.4 -1862.3124
3 Greg Maxwell Corporate 0.0 83.2810
4 Thomas Boland Corporate 0.0 517.4793
5 Sue Ann Reed Consumer 0.2 341.9940
6 Karen Ferguson Home Office 0.2 363.9048
7 Joel Eaton Consumer 0.3 -350.4900
8 Nora Preis Consumer 0.2 135.4068
</code></pre>
<p>Jane Waco has made many purchases, and each purchase has a different discount. How do I show the discount amount that appears most frequently among her purchases? With the code I have written, the discount column only shows the highest value, but I want the most frequent one.</p>
<pre><code> from collections import Counter
L = data["Discount"]
data.groupby('Customer Name')['Discount'].nunique()
maxi = Counter(data['Discount']).most_common(1)
data.iloc[1:24,[6,7,maxi,21,24,25]]
</code></pre>
<p>Discount is at index 20, but I don't know how to show the most frequent discount that Jane Waco received.</p>
|
<p>If I understand your question correctly...
It boils down to a question of how to get the most common item in a list.</p>
<pre><code>l = [1, 2, 2, 3, 5, 7, 7, 7, 9, 9, 11, 12]
dic = dict([(str(i), 0) for i in l])
for value in l:
dic[str(value)] += 1
values = sorted(dic.items(), key=lambda x: x[1])
most_common = values[-1][0]
</code></pre>
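<p>Since the data is already in a pandas DataFrame (the question's code calls it <code>data</code>), a hedged pandas alternative, reusing the column names from the question, is to compute the per-customer mode directly:</p>
<pre><code># For each customer, pick the discount value that occurs most often
most_common = (data.groupby('Customer Name')['Discount']
                   .agg(lambda s: s.value_counts().idxmax()))
print(most_common['Jane Waco'])
</code></pre>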
|
python
| 0 |
1,905,276 | 59,623,532 |
Can't open lib 'SQL Server Native Client 10.0' Python3, Linux ubuntu
|
<p>I'm trying to connect to an MSSQL server from the Python console to test the connection and get the tables of a database. Here is my code:</p>
<pre><code>>>>from sqlalchemy import create_engine
>>>engine = create_engine("mssql+pyodbc://username:password@host:port/databasename?driver=SQL+Server+Native+Client+10.0")
>>> connection = engine.connect()
</code></pre>
<p>it returns this error to me</p>
<pre><code>sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'SQL Server Native Client 10.0' : file not found (0) (SQLDriverConnect)")
</code></pre>
<p>I've installed pyodbc.
I tried changing the driver to 'SQL Server Native Client 11.0' and even 'SQL Server', but it still returns the same error.</p>
<p>What should I do? I can't figure out what I've missed or done wrong.</p>
|
<p>So I did some more research and discovered I wasn't using the right driver; my pyodbc did not have that driver, probably because I had installed both.</p>
<p>Here is how I checked the drivers available to me:</p>
<pre><code>>>>import pyodbc
>>> for driver in pyodbc.drivers():
... print(driver)
</code></pre>
<p>and the output was:</p>
<pre><code>ODBC Driver 17 for SQL Server
ODBC Driver 13 for SQL Server
</code></pre>
<p>So I simply changed the code to:</p>
<pre><code>>>>engine = create_engine("mssql+pyodbc://username:password@host:port/databasename?driver=ODBC Driver 17 for SQL Server")
>>> connection = engine.connect()
</code></pre>
<p>and it went through.</p>
|
python|sql-server|linux|sqlalchemy|flask-sqlalchemy
| 0 |
1,905,277 | 30,384,874 |
Python Data Uploading Error
|
<p>Using Python, I am trying to read data from JSON files and upload it through an API. However, I am getting an HTTP Error 500. Following is my code:</p>
<pre><code> url = 'http://sipdev1.vbi.vt.edu:8080/EpiViewer/epiviewer/services/uploadGraphData'
for i in json_file_name:
json_data = open (i, 'r')
lines=json_data.readlines()
req = urllib2.Request(url)
req.add_header('Content-Type','application/json')
data = json.dumps(lines)
response = urllib2.urlopen(req,data)
</code></pre>
<p>Here is the Error:</p>
<pre><code>raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 500: Internal Server Error
</code></pre>
<p>Input file Example:</p>
<pre><code>{
"username": "xxxxx",
"password": "yyyyy",
"timeSeriesName": "Liberia_01-18-2015",
"dataType": "Cases",
"plotType": "Cumulative",
"filename": "C_C.csv",
"dateFormat": "MM-dd-yy",
"forecastedOn": "01/18/2015",
"visibility": "Public",
"data": {
"01-25-2015":"26 38 14",
"02-01-2015":"22 33 11",
"02-08-2015":"19 32 6",
"02-15-2015":"17 32 2",
"02-22-2015":"15 18 12",
"03-01-2015":"14 26 2"
}
}
</code></pre>
<p>I think the code is not able to parse my input files properly. Do you have any idea about the solution?</p>
|
<p>Your file is <em>already encoded JSON</em>. You do not need to encode it again. Send the file unchanged:</p>
<pre><code>for name in json_file_name:
with open(name) as json_data:
data = json_data.read()
req = urllib2.Request(url, data, {'Content-Type': 'application/json'})
response = urllib2.urlopen(req)
</code></pre>
|
python|json|python-2.7|api|urllib2
| 1 |
1,905,278 | 42,685,270 |
Scale widget and button with command, prevent command until scale value changes
|
<p>I have a GUI which contains a scale and a button widget. When the button is clicked, it calls a function to delete the scale widget. I'd like this to happen only if the scale value has changed; that is, I want people to move the scale cursor before the button command can be activated. The default value of the scale is 0, and people are allowed to move the cursor and then come back to 0. I've tried many things but couldn't figure out how to do it in a simple way.</p>
<p>Thank you in advance! </p>
<p>Here's a simplified version of my code :</p>
<pre><code>from tkinter import *
def action(widget):
widget.destroy()
window = Tk()
value = DoubleVar()
scale = Scale(window, variable=value, resolution=1)
button = Button(window, command = lambda: action(scale))
scale.pack()
button.pack()
window.mainloop()
</code></pre>
<hr>
<p>Here's a new version using the <code>.trace</code> method, as suggested by @Sun Bear . It still doesn't work, the "action" function doesn't seem to get the updated state variable. </p>
<pre><code>from tkinter import *
def scalestate(*arg):
scale_activate = True
print("scale_activate is", scale_activate)
def action(widget):
if scale_activate:
widget.destroy()
window = Tk()
scale_activate = False
print("scale_activate is initially", scale_activate)
value = DoubleVar()
value.trace('w', scalestate)
scale = Scale(window, variable=value, orient=HORIZONTAL)
button = Button(window, command = lambda: action(scale))
scale.pack()
button.pack()
window.mainloop()
</code></pre>
|
<p>You can compare the values before destroying. Also, a DoubleVar isn't needed for a Scale, since Scale has a get() method:</p>
<pre><code>from tkinter import *
def action(widget):
    if widget.get() != original:
        widget.destroy()
window = Tk()
scale = Scale(window, resolution=1)
original = scale.get()
button = Button(window, command = lambda: action(scale))
scale.pack()
button.pack()
window.mainloop()
</code></pre>
|
python-3.x|tkinter|widget|command
| 1 |
1,905,279 | 50,659,623 |
Concat DataFrames diagonally
|
<p>This is a self answered question. Given two dataFrames,</p>
<pre><code>x
0 1
0 1 2
1 3 4
y
0 1 2
0 5 6 7
1 8 9 X
2 Y Z 0
</code></pre>
<p>The diagonal concatenation of x and y is given by:</p>
<pre><code> 0 1 3 4 5
0 1.0 2.0 NaN NaN NaN
1 3.0 4.0 NaN NaN NaN
2 NaN NaN 5 6 7
3 NaN NaN 8 9 X
4 NaN NaN Y Z 0
</code></pre>
<p>What is the easiest and simplest way of doing this? I would like to consider two cases:</p>
<ol>
<li>concatenating two dataFrames</li>
<li>concatenating an unspecified number of dataFrames (list of DataFrames)</li>
</ol>
|
<p>First, the simple case. Assuming both the headers and indexes are monotonically numeric, you can just modify <code>y</code>'s indexers as offsets from <code>x</code>:</p>
<pre><code>y.index += x.index[-1] + 1
y.columns += x.columns[-1] + 1
pd.concat([x, y])
0 1 2 3 4
0 1.0 2.0 NaN NaN NaN
1 3.0 4.0 NaN NaN NaN
2 NaN NaN 5 6 7
3 NaN NaN 8 9 X
4 NaN NaN Y Z 0
</code></pre>
<p>Now, to generalise this to multiple DataFrames, we iterate over a loop:</p>
<pre><code>df_list = [x, y]
offset_x = offset_y = 0
for df in df_list:
    df.index = np.arange(len(df)) + offset_x
    df.columns = np.arange(len(df.columns)) + offset_y
    # move the offsets past this frame so the next one starts a fresh diagonal block
    offset_x = df.index[-1] + 1
    offset_y = df.columns[-1] + 1
pd.concat(df_list)
0 1 2 3 4
0 1.0 2.0 NaN NaN NaN
1 3.0 4.0 NaN NaN NaN
2 NaN NaN 5 6 7
3 NaN NaN 8 9 X
4 NaN NaN Y Z 0
</code></pre>
<p>If either your index/columns are not monotonically increasing, I strongly suggest resetting them before concatenating, or look into the option below.</p>
<hr>
<p>If you're okay with 0s instead of NaNs, you can use <code>scipy</code>'s <code>block_diag</code> without having to modify either the indices or columns:</p>
<pre><code>from scipy.linalg import block_diag
pd.DataFrame(block_diag(*df_list))
0 1 2 3 4
0 1 2 0 0 0
1 3 4 0 0 0
2 0 0 5 6 7
3 0 0 8 9 X
4 0 0 Y Z 0
</code></pre>
<p>Credit to <a href="https://stackoverflow.com/a/50659544/4909087">this answer</a> for this solution.</p>
|
python|pandas|dataframe|concatenation
| 5 |
1,905,280 | 35,237,364 |
Numpy combine two 2d matrices
|
<p>I am working on something like a puzzle in Python.</p>
<p>What I am trying to do is place a piece onto a map.</p>
<p>For example :</p>
<pre><code> gameMap = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [0, 1, 1]])

 piece = np.array([[0, 1],
                   [1, 1]])
</code></pre>
<p>How can I put the piece on the map that i can get a result like </p>
<pre><code>[[1 1 0]
[1 2 0]
[0 1 1]]
</code></pre>
<p>Or </p>
<pre><code>[[1 0 0]
[0 1 1]
[0 2 2]]
</code></pre>
<p>Thanks in advance .</p>
|
<p>One way to "add" your piece to the map is to use slicing. The key is selecting a slice of gameMap that is the same shape as the piece.</p>
<pre><code>gameMap[0:2, 0:2] += piece
</code></pre>
<p>Output:</p>
<pre><code>[[1 1 0]
[1 2 0]
[0 1 1]]
</code></pre>
<p>OR</p>
<pre><code>gameMap[1:3, 1:3] += piece
</code></pre>
<p>Output:</p>
<pre><code>[[1 0 0]
[0 1 1]
[0 2 2]]
</code></pre>
|
python|numpy
| 3 |
1,905,281 | 57,869,186 |
Problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443):
|
<p>so I am using <strong>Pycharm 2019.2</strong> with <strong>Python 2.7</strong>
and I can't download any package.</p>
<p>I have tried:</p>
<ul>
<li>hard copying packages to Python2.7 directory</li>
<li>pip install urllib3[secure] </li>
<li>pip install --trusted-host=pypi.python.org --trusted-host=pypi.org
--trusted-host=files.pythonhosted.org xlrd </li>
</ul>
<p>all refused to connect with errors like this:</p>
<pre><code>Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '_ssl.c:499: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version'),)': /simple/xlrd/
C:\Python27\lib\site-packages\pip-19.0.3-py2.7.egg\pip\_vendor\urllib3\util\ssl_.py:150: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecurePlatformWarning
</code></pre>
<p>in Pycharm, I am getting the following error when trying to install a package:</p>
<pre><code>Could not fetch URL https://pypi.org/simple/geopy/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/geopy/ (Caused by SSLError(SSLError(1, '_ssl.c:499: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version'),)) - skipping
Proposed Solution:
Try to run this command from the system terminal. Make sure that you use the correct version of 'pip' installed for your Python interpreter located at 'C:\Python27\python.exe'.
</code></pre>
<p>Everything is working perfectly fine using Python 3.4, but with 2.7 I'm getting the SSL error.<br>
I can't change to 3.4 due to technical reasons at my workplace.</p>
|
<p>It is an interpreter issue; the code was originally written using Python 3.7.</p>
|
python-2.7
| 0 |
1,905,282 | 58,395,897 |
How can I make the legend of a plot show each variable I'm working on?
|
<p>I'm making an overlay plot, but when I add a legend to the graph it only shows one date repeated several times, like this:</p>
<pre><code>imagen = plt.figure(figsize=(25,10))
for day in [1,2,3,4,5,6,8,11,12,13,14,15,16,17,18,19,20,23,26,27,28,30]:
dia = datos[datos['Fecha'] == "2019-06-"+(f"{day:02d}")]
tiempo= pd.to_datetime(dia['Hora'], format=' %H:%M:%S').dt.time
temp= dia['TEMP']
plt.plot(tiempo, temp) #, color = 'red' )#
plt.xlabel("Tiempo (H:M:S)(Formato 24 Horas)")
plt.ylabel("Temperatura (K)")
plt.title("Temperatura Jun 2019")
plt.legend(datos['Fecha'])
plt.show()
imagen.savefig('TEMPJUN2019')
</code></pre>
<p>The image that I get is the following:</p>
<p><a href="https://i.stack.imgur.com/bJ32T.jpg" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>Try something like this : </p>
<pre class="lang-py prettyprint-override"><code>
imagen = plt.figure(figsize=(25,10))
dia_lst = [] # <======================================================= Here
for day in [1,2,3,4,5,6,8,11,12,13,14,15,16,17,18,19,20,23,26,27,28,30]:
dia = datos[datos['Fecha'] == "2019-06-"+(f"{day:02d}")]
dia_lst.append(f"2019-06-{day:02d}") # <=========================== Here
tiempo= pd.to_datetime(dia['Hora'], format=' %H:%M:%S').dt.time
temp= dia['TEMP']
plt.plot(tiempo, temp) #, color = 'red' )#
plt.xlabel("Tiempo (H:M:S)(Formato 24 Horas)")
plt.ylabel("Temperatura (K)")
plt.title("Temperatura Jun 2019")
plt.legend(dia_lst) # <================================================ Here
plt.show()
imagen.savefig('TEMPJUN2019')
</code></pre>
<p>So it seems that your datos['Fecha'] contains only one date; this is the value that you should update according to your needs.</p>
|
python|python-3.x|matplotlib|overlay|legend
| 0 |
1,905,283 | 45,516,458 |
python: detecting a cycle in networkX
|
<p>As the title implies, I'm trying to write a function that will calculate the number of cycles any given node is part of. I found a helpful <a href="https://www.educative.io/page/11000001/60001" rel="nofollow noreferrer">video</a> which explains the theory behind an algorithm to find cycles, but I'm having trouble implementing it with networkX rather than the data structure that site uses. I also couldn't follow the white/grey/etc. set concept used to traverse the network and find cycles.</p>
<p>My function parameters/structure:</p>
<pre><code>def feedback_loop_counter(G, node):
c = 0
calculate all cycles in the network
for every cycle node is in, increment c by 1
return c
</code></pre>
<p>The network has input and output nodes too, and I'm unclear how those play into calculating cycles.</p>
<p>This is my input network:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
G=nx.DiGraph()
molecules = ["CD40L", "CD40", "NF-kB", "XBP1", "Pax5", "Bach2", "Irf4", "IL-4", "IL-4R", "STAT6", "AID", "Blimp1", "Bcl6", "ERK", "BCR", "STAT3", "Ag", "STAT5", "IL-21R", "IL-21", "IL-2", "IL-2R"]
Bcl6 = [("Bcl6", "Bcl6"), ("Bcl6", "Blimp1"), ("Bcl6", "Irf4")]
STAT5 = [("STAT5", "Bcl6")]
IL_2R = [("IL-2R", "STAT5")]
IL_2 = [("IL-22", "IL-2R")]
BCR = [("BCR", "ERK")]
Ag = [("Ag", "BCR")]
CD40L = [("CD40L", "CD40")]
CD40 = [("CD40", "NF-B")]
NF_B = [("NF-B", "Irf4"), ("NF-B", "AID")]
Irf4 = [("Irf4", "Bcl6"), ("Irf4", "Pax5"), ("Irf4", "Irf4"), ("Irf4", "Blimp1")]
ERK = [("ERK", "Bcl6"), ("ERK", "Blimp1"), ("ERK", "Pax5")]
STAT3 = [("STAT3", "Blimp1")]
IL_21 = [("IL-21", "IL-21R")]
IL_21R = [("IL-21R", "STAT3")]
IL_4R = [("IL-4R", "STAT6")]
STAT6 = [("STAT6", "AID"), ("STAT6", "Bcl6")]
Bach2 = [("Bach2", "Blimp1")]
IL_4 = [("IL-4", "IL-4R")]
Blimp1 = [("Blimp1", "Bcl6"), ("Blimp1", "Bach2"), ("Blimp1", "Pax5"), ("Blimp1", "AID"), ("Blimp1", "Irf4")]
Pax5 = [("Pax5", "Pax5"), ("Pax5", "AID"), ("Pax5", "Bcl6"), ("Pax5", "Bach2"), ("Pax5", "XBP1"), ("Pax5", "ERK"), ("Pax5", "Blimp1")]
edges = (Bcl6 + STAT5 + IL_2R + IL_2 + BCR + Ag + CD40L + CD40 + NF_B + Irf4 +
         ERK + STAT3 + IL_21 + IL_21R + IL_4R + STAT6 + Bach2 + IL_4 + Blimp1 + Pax5)
G.add_nodes_from(molecules)
G.add_edges_from(edges)
sources = ["Ag", "CD40L", "IL-2", "IL-21", "IL-4"]
targets = ["XBP1", "AID"]
</code></pre>
|
<p>The idea to find cycles is to do a <a href="https://en.wikipedia.org/wiki/Depth-first_search" rel="nofollow noreferrer">Depth-first search</a> and while you do it, remember which nodes you already saw and the path to them. If you happen to visit a node you already saw, then there is a cycle, and you can find it by concatenating paths.</p>
<p>Try writing some code to do that, and open a new question with that code if you get stuck</p>
|
python|loops|networkx|cycle|feedback
| 1 |
1,905,284 | 35,888,189 |
Drop duplicate in multiindex dataframe in pandas
|
<p>I am looking for an efficient method to drop duplicate columns in a multiindex dataframe with Pandas.</p>
<p>My data :</p>
<pre><code>TypePoint TIME Test ... T1 T1
- S Unit1 ... unit unit
(POINT, -) ...
24001 90.00 100.000 ... 303.15 303.15
24002 390.00 101.000 ... 303.15 303.15
... ... ... ... ...
24801 10000 102.000 ... 303.15 303.15
24802 10500 103.000 ... 303.15 303.15
</code></pre>
<p>The header contains two pieces of information: the variable's name and its unit.
I would like to drop the duplicate "T1" column.</p>
<ul>
<li><p><strong>.drop_duplicates()</strong> doesn't work. I get a "Buffer has wrong number of dimensions (expected 1, got 2)" error.</p></li>
<li><p><strong>.drop(Data('T1','unit'),axis=1)</strong> doesn't work either. That drops both columns and not just one of them.</p></li>
</ul>
<p>Thanks for your help</p>
|
<p>I think you can use double <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow"><code>T</code></a>:</p>
<pre><code>print df
TypePoint TIME Test T1
- S Unit1 unit unit
0 24001 90 100 303.15 303.15
1 24002 390 101 303.15 303.15
2 24801 10000 102 303.15 303.15
3 24802 10500 103 303.15 303.15
print df.T.drop_duplicates().T
TypePoint TIME Test T1
- S Unit1 unit
0 24001 90 100 303.15
1 24002 390 101 303.15
2 24801 10000 102 303.15
3 24802 10500 103 303.15
</code></pre>
|
python|pandas|multi-index
| 1 |
1,905,285 | 33,128,681 |
How to unit-test code that uses python-multiprocessing
|
<p>I have some code that uses multiprocessing.Pool to fork workers and perform a task in parallel. I'm trying to find the <em>right way</em> to run unit tests of this code.</p>
<p>Note I am <em>not</em> trying to test serial code test cases in parallel which I know packages like nose support.</p>
<p>If I write a test function that tests said parallel code, and attempt to run the tests with nose via <code>nosetests tests/test_function.py</code>, the non-parallel tests execute properly but the parallel tests fail when multiprocessing tries to fork, because main is not importable:</p>
<pre><code> File "C:\python-2.7.10.amd64\lib\multiprocessing\forking.py", line 488, in prepare
assert main_name not in sys.modules, main_name
AssertionError: __main__
assert main_name not in sys.modules, main_name
AssertionError: _assert main_name not in sys.modules, main_name
_main__AssertionError
: __main__
</code></pre>
<p>Which just repeats until I terminate the task. I can run the tests successfully if I modify <code>tests/test_function.py</code> to include:</p>
<pre><code>if __name__ == '__main__':
import nose
nose.main()
</code></pre>
<p>and then execute with <code>python tests\test_function.py</code></p>
<p>So what is the "right" way to do this that will integrate with a unit test package (doesn't have to be nose)?</p>
<p>Environ: Python 2.7.10 amd64 on Windows 7 64-bit</p>
<p>Update 2020: With python 3 and pytest, this is not an issue, suggest upgrades!</p>
|
<p>I prefer to mock multiprocessing in unit tests using <a href="https://pypi.python.org/pypi/mock" rel="noreferrer">python mock</a>, because unit tests should be <strong>independent</strong> and <strong>repeatable</strong>. That's why I usually create mock versions of the multiprocessing classes (<code>Process</code> and <code>Pool</code>), just to be sure that my tests execute in a deterministic manner.</p>
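<p>As a rough sketch of that idea (the module name <code>my_module</code>, the function <code>run_parallel</code>, and the expected result are hypothetical placeholders, and the code under test is assumed to call <code>Pool(...).map(...)</code> directly rather than via a context manager):</p>
<pre><code>import mock  # the pypi "mock" package; on Python 3 use: from unittest import mock
import my_module  # hypothetical module that does "from multiprocessing import Pool"

def fake_map(func, iterable, *args, **kwargs):
    # run the work serially so the test stays deterministic and repeatable
    return list(map(func, iterable))

def test_run_parallel():
    with mock.patch('my_module.Pool') as MockPool:
        MockPool.return_value.map.side_effect = fake_map
        result = my_module.run_parallel([1, 2, 3])
    assert result == [2, 4, 6]  # whatever run_parallel is expected to return
</code></pre>
<p>Patching <code>Pool</code> where it is used keeps the test free of real forking, which also sidesteps the Windows <code>__main__</code> import problem from the question.</p>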
|
python|unit-testing|nose|python-multiprocessing
| 5 |
1,905,286 | 40,329,763 |
POST between Python and PHP not working?
|
<p>I'm trying to communicate between my website and a client-side script. The Python client sends a POST to a PHP file, which then prints it to show me that the POST is working, so I can move on to the next step of my development. However, for some reason, I can only post to it with an HTML5 form, and not with Python 3.5.2's requests module.</p>
<p>This is the PHP:</p>
<pre><code><?php
$data = ((!empty($_POST['data'])) ? $_POST['data'] : 'N/A');
echo $data;
?>
</code></pre>
<p>The Python:</p>
<pre><code> >>> import requests
>>> url = "www.example.com/coords.php"
>>> data = "This is a string"
>>> r = requests.post(url, data)
>>> r.content
b'N/A'
</code></pre>
<p>The HTML5 that works:</p>
<pre><code><html>
<head>
</head>
<body>
<form method="post" action="www.example.com/coords.php">
<input type="text" name="data">
<input type="submit" name="submit">
</form>
</body>
</html>
</code></pre>
|
<p>Your data variable is not correct. Try the following:</p>
<pre><code> >>> import requests
>>> url = "www.example.com/coords.php"
>>> data = {'data':'This is a string'}
>>> r = requests.post(url, data)
>>> r.content
b'This is a string'
</code></pre>
|
php|python|post
| 0 |
1,905,287 | 52,123,409 |
Sqlalchemy best practice for creating and naming sql tables based on new table creations
|
<p>As the title suggests I need help understanding how to link new tables to an existing sqlalchemy class, if that's even the proper understanding.</p>
<pre><code>dbstring = 'sqlite:///db.db'
engine = create_engine(dbstring)
Session = sessionmaker(bind=engine)
session = Session()
Base = declarative_base()
metadata = MetaData
class SomeTable(Base):
__tablename__ = 'somename'
table_id = Column(Integer, primary_key=True)
value_a = Column(Float())
value_b = Column(String())
value_c = Column(Float())
</code></pre>
<p>I'd like to use the class as a way to control values inserted into future tables. Is there a way to use sqlalchemy to issue a create table command in a format similar to:</p>
<pre><code>table_named_foo = SomeTable(value_a = 12.3, value_b = 'bar', value_c = 45.6)
session.commit(table_named_foo)
</code></pre>
|
<p>IIUC, the question is asking why you are able to insert an object of type string into a column that was defined as <code>Float</code>, or vice versa. This is because SQLite uses dynamic typing and will let you put any type into a column regardless of how it was defined.</p>
<p>See here for more info: <a href="https://www.sqlite.org/faq.html#q3" rel="nofollow noreferrer">https://www.sqlite.org/faq.html#q3</a></p>
<p>Try this in PostgreSQL and you will likely get the behavior you are expecting.</p>
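<p>A quick way to see the dynamic typing for yourself, reusing the <code>SomeTable</code> model and <code>session</code> from the question (purely an illustrative sketch):</p>
<pre><code>Base.metadata.create_all(engine)  # make sure the table exists first

foo = SomeTable(value_a='not a float', value_b=123, value_c=45.6)
session.add(foo)
session.commit()  # SQLite accepts this despite the declared column types
</code></pre>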
|
database|python-3.x|sqlite|sqlalchemy
| 1 |
1,905,288 | 52,486,303 |
Is it possible to have creator and updater separately in Django admin?
|
<p>In my <code>Store</code> model, I have <code>author</code> attribute so I can tack who wrote and updated a store. However, I would like to have who created and who updated separately. I think I have to get who created in my <code>save_model</code> in <code>admin.py</code>.</p>
<p>Is there a way to get a creator and updater separately in Django admin?</p>
<p><strong>models.py</strong></p>
<pre><code>class Store(models.Model):
...
author = ForeignKey(settings.AUTH_USER_MODEL, editable=False,
related_name='promotions_of_author', null=True, blank=True)
</code></pre>
<p><strong>admin.py</strong></p>
<pre><code>class StoreAdmin(SummernoteModelAdmin):
...
def save_model(self, request, obj, form, change):
if getattr(obj, 'author', None) is None:
obj.author = request.user
obj.save()
</code></pre>
|
<p>Do you have two separate fields in the model to keep track of who updated it and who created it?</p>
<p>I usually keep two fields to track this. In that case you may have something like this:</p>
<pre><code> class StoreAdmin(SummernoteModelAdmin):
def save_model(self, request, obj, form, change):
# adding the entry for the first time
if not change:
obj.created_by = request.user
# updating already existing record
else:
obj.updated_by = request.user
obj.save()
</code></pre>
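<p>For completeness, a minimal sketch of the two tracking fields the snippet above assumes, mirroring the model from the question (field names and <code>related_name</code> values are just placeholders):</p>
<pre><code>class Store(models.Model):
    ...
    created_by = ForeignKey(settings.AUTH_USER_MODEL, editable=False, null=True,
                            blank=True, related_name='stores_created')
    updated_by = ForeignKey(settings.AUTH_USER_MODEL, editable=False, null=True,
                            blank=True, related_name='stores_updated')
</code></pre>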
|
python|django|django-admin
| 2 |
1,905,289 | 34,193,198 |
Web2Py error: invalid function blog/thanks
|
<p>I am building a simple blog using Web2Py on Debian Linux.
I have a controller called blog.py, to which I added the following function, along with an if block:</p>
<pre><code>def display_form():
form = SQLFORM(db.blog)
if form.process().accepted:
session.flash = 'form accepted'
redirect(URL('thanks'))
elif form.errors:
response.flash = 'form has errors'
else:
response.flash = 'please fill out the form'
return locals()
</code></pre>
<p>I proceeded to add a "view" html file called <strong>blog/display_form.html</strong>, with a basic template, as follows:</p>
<pre><code>{{extend 'layout.html'}}
<h1>Display Form</h1>
{{=form}}
</code></pre>
<p>I load the <strong>"display_form"</strong> blog page just fine, and it accepts all the input successfully, but it does not redirect to a <em>thank you</em> page. Instead, the browser generates an <em>"invalid function blog/thanks"</em> error.</p>
<p>I tried removing the compiled app via the Web2Py admin interface, and recompiled everything. Still does not work. I added a "view" for the "Thanks" page, but that does not change anything. I restarted the Web2Py framework and the web server, but still no go.
Some web sites refer to a possible routes.py issue, but I am confused as to why that would be pertinent at all.</p>
<p>Please help,
I am hitting a brick wall here.</p>
|
<p>So, after tweaking a number of things, and removing all of the compiled files, and starting from scratch again, the solution turned out to be way more simple than I was trying to make it.
I simply defined a function called <strong>thanks</strong> in the aforementioned <strong>blog.py</strong> controller, and returned the local variables, like so:</p>
<pre><code>def thanks():
return locals()
</code></pre>
<p>I then added a <strong>blog/thanks</strong> <em>view</em> file, with a basic html header, stating: </p>
<pre><code>Thank you for submitting the form!
</code></pre>
<p>And it finally redirected the <strong>display_form</strong> blog page as intended to a <strong>thanks</strong> page, thereby flashing the <em>form accepted</em> message (also as expected).</p>
<p>Thanks for your help, Anthony!
Cheers.</p>
|
python|web2py|debian-based
| 3 |
1,905,290 | 33,012,201 |
How to display rows or select rows in pandas where any of its column contains NAN
|
<p>My table:</p>
<pre><code>Ram Shyam Kamal
2 nan 4
1 2 5
8 7 10
</code></pre>
<p>I want to select or display the first row. How should I do that?</p>
<pre><code>Ram Shyam Kamal
2 nan 4
</code></pre>
|
<p>Let <code>df</code> be your dataframe, you can:</p>
<pre><code>df = df[df.isnull().any(axis=1)]
</code></pre>
<p>This returns:</p>
<pre><code> Ram Shyam Kamal
0 2 NaN 4
</code></pre>
|
python-2.7|pandas
| 0 |
1,905,291 | 48,614,654 |
Different Celery instance objects using same broker - Is that a good practice?
|
<p>I was wondering: is it good practice to have different Celery instance objects using the same broker?</p>
<p>Currently, I have a rabbitmq, acted as single broker shared among 3 instances of Celery. My Celery instances are as follow</p>
<ul>
<li><code>insider_transaction</code> - Fixed schedule worker. Run every minute</li>
<li><code>earning</code> - Worker created by web server.</li>
<li><code>stock_price</code> - Worker created by web server.</li>
</ul>
<p>I designed it so that every worker runs in its own Docker container. I expect the 3 workers to run independently of each other.</p>
<p>However, I realize that is not the case!</p>
<p><strong>For instance, <code>earning</code> worker will mistakenly receive messages which are suppose to be received only by <code>stock_price</code> or <code>insider_transaction</code>.</strong></p>
<p>You will see this kind of message received by <code>earning</code> worker.</p>
<pre><code>earning_1 | The message has been ignored and discarded.
earning_1 |
earning_1 | Did you remember to import the module containing this task?
earning_1 | Or maybe you're using relative imports?
earning_1 |
earning_1 | Please see
earning_1 | http://docs.celeryq.org/en/latest/internals/protocol.html
earning_1 | for more information.
earning_1 |
earning_1 | The full contents of the message body was:
earning_1 | '[[], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]' (77b)
earning_1 | Traceback (most recent call last):
earning_1 | File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 561, in on_task_received
earning_1 | strategy = strategies[type_]
earning_1 | KeyError: 'insider_transaction.run'
</code></pre>
<p>and this</p>
<pre><code>earning_1 | The message has been ignored and discarded.
earning_1 |
earning_1 | Did you remember to import the module containing this task?
earning_1 | Or maybe you're using relative imports?
earning_1 |
earning_1 | Please see
earning_1 | http://docs.celeryq.org/en/latest/internals/protocol.html
earning_1 | for more information.
earning_1 |
earning_1 | The full contents of the message body was:
earning_1 | '[[2, 3], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]' (81b)
earning_1 | Traceback (most recent call last):
earning_1 | File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 561, in on_task_received
earning_1 | strategy = strategies[type_]
earning_1 | KeyError: 'stock_price.mul'
</code></pre>
<p>I don't expect such to happen. In my web server side code (Flask). I wrote</p>
<pre><code>celery0 = Celery('earning',
broker=CELERY_BROKER_URL,
backend=CELERY_RESULT_BACKEND)
celery1 = Celery('stock_price',
broker=CELERY_BROKER_URL,
backend=CELERY_RESULT_BACKEND)
@app.route('/do_work/<int:param1>/<int:param2>')
def do_work(param1,param2):
task0 = celery0.send_task('earning.add', args=[param1, param2], kwargs={})
task1 = celery1.send_task('stock_price.mul', args=[param1, param2], kwargs={})
</code></pre>
<p>Hence, I expect the <code>earning</code> worker to receive only <code>earning</code> messages, not <code>stock_price</code> or <code>insider_transaction</code> messages.</p>
<p>May I know why this problem occurs? Is it not possible for different instances of Celery to share a single broker?</p>
<p>A project which demonstrates this problem can be checkout from <a href="https://github.com/yccheok/celery-hello-world" rel="nofollow noreferrer">https://github.com/yccheok/celery-hello-world</a></p>
<pre><code>docker-compose build
docker-compose up -d
http://localhost:5000/do_work/2/3
docker-compose up earning
</code></pre>
|
<p>Are you using routing keys? You can use <a href="http://docs.celeryproject.org/en/latest/userguide/routing.html#exchanges-queues-and-routing-keys" rel="nofollow noreferrer">routing keys</a> to tell the exchange which tasks to handle with which queues. Setting these in your celery configs may help to prevent the wrong messages from being consumed by the wrong workers.</p>
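<p>A minimal sketch of that idea, continuing from the code in the question (queue names and the Celery 4 lowercase setting names are assumptions; on Celery 3 the setting is <code>CELERY_ROUTES</code>):</p>
<pre><code>celery0.conf.task_routes = {'earning.*': {'queue': 'earning'}}
celery1.conf.task_routes = {'stock_price.*': {'queue': 'stock_price'}}
</code></pre>
<p>Then start each worker so that it consumes only its own queue, e.g. <code>celery -A earning worker -Q earning</code> and <code>celery -A stock_price worker -Q stock_price</code>. That way a worker never picks up another app's messages from the shared default <code>celery</code> queue.</p>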
|
python|celery|celery-task
| 1 |
1,905,292 | 64,217,140 |
cx_Oracle Get Boolean Return Value
|
<p>I have been working hard all day attempting to get a boolean value from a PL/SQL function using cx_Oracle. I've seen posts talking about using some other data type like char or integer to store the return value, but when I attempt to use such solutions, I get an incorrect data type error. First, let me show the code.</p>
<pre><code>def lives_on_campus(self):
cursor = conn.cursor()
ret = cursor.callfunc('students_api.lives_on_campus', bool, [self.pidm])
return ret
</code></pre>
<p>If I use the 11.2.0.4 database client, I get the following error.</p>
<pre><code>File "student-extracts.py", line 134, in <module>
if student.lives_on_campus():
File "student-extracts.py", line 58, in lives_on_campus
ret = cursor.callfunc('students_api.lives_on_campus', bool, [self.pidm])
cx_Oracle.DatabaseError: DPI-1050: Oracle Client library is at version 11.2 but version 12.1 or higher is needed
</code></pre>
<p>If I use the 12.1.0.2 database client or later, I get this error.</p>
<pre><code>Traceback (most recent call last):
File "student-extracts.py", line 134, in <module>
if student.lives_on_campus():
File "student-extracts.py", line 58, in lives_on_campus
ret = cursor.callfunc('students_api.lives_on_campus', bool, [self.pidm])
cx_Oracle.DatabaseError: ORA-03115: unsupported network datatype or representation
</code></pre>
<p>Basically, it errors out no matter which version of the SQL client I use. Now, I know the above code will work if the database version is 12c R2. Unfortunately, we only have that version in our TEST environment and PROD uses only the 11g database. Is there any way I can make that function work with an 11g database? There must be a workaround.</p>
<p>~ Bob</p>
|
<p>Try a wrapper anonymous block like:</p>
<pre><code>with connection.cursor() as cursor:
outVal = cursor.var(int)
sql="""
begin
:outVal := sys.diutil.bool_to_int(students_api.lives_on_campus(:pidm));
end;
"""
cursor.execute(sql, outVal=outVal, pidm='123456')
print(outVal.getvalue())
</code></pre>
|
python|oracle11g|oracle12c|cx-oracle
| 0 |
1,905,293 | 70,424,885 |
Scrapy is ignoring part of the text
|
<p>I'm trying to scrape text from websites using Scrapy and build a dataset of text and some of its features. For each element containing text I'm saving the text itself, element type and some other things. It works fine for the most part but it's not scraping part of the text which follows nested element(s).</p>
<p>Input example:</p>
<pre><code><p>
First part of text
<b>
Nested text
</b>
Second part of text
</p>
</code></pre>
<p>Output (just an example, in reality the output is saved to csv):</p>
<pre><code>text: First part of text, element: p
text: Nested text, element: b
</code></pre>
<p>Expected output (just an example, in reality the output is saved to csv):</p>
<pre><code>text: First part of text, element: p
text: Nested text, element: b
text: Second part of text, element: p
</code></pre>
<p>Part of my code responsible for scraping text:</p>
<pre><code>for element in response.xpath('//*[normalize-space(text())]'):
...
text_normalized = element.xpath('normalize-space(./text())').get()
...
</code></pre>
<p>How do I get the second part of the text? Note that an element can contain multiple nested elements, and the text itself can be split into more than just 2 parts.</p>
|
<p>If you use <code>//</code> with the <code>text()</code> node, it will return all the text as a list, and afterwards you can use the <code>.join</code> method or list slicing.</p>
<pre><code>text_normalized = element.xpath('.//p//text()').getall()
</code></pre>
<h1>Implementation on scrapy shell</h1>
<pre><code>In [1]: from scrapy.selector import Selector
In [2]: %paste
doc='''
<p>
First part of text
<b>
Nested text
</b>
Second part of text
</p>
'''
## -- End pasted text --
In [3]: sel = Selector(text=doc)
In [4]: sel.xpath('//p//text()').getall()
Out[4]:
['\n First part of text\n ',
'\n Nested text\n ',
'\n Second part of text\n']
In [5]: sel.xpath('//p//text()').get()
Out[5]: '\n First part of text\n '
In [6]:
In [6]: p_text=sel.xpath('//p//text()').getall()[0]
In [7]: p_text
Out[7]: '\n First part of text\n '
In [8]: p_text=sel.xpath('//p//text()').getall()[0].strip()
In [9]: p_text
Out[9]: 'First part of text'
In [10]: b_text=p_text=sel.xpath('//p//text()').getall()[1].strip()
In [11]: b_text
Out[11]: 'Nested text'
In [12]: p-text1=b_text=p_text=sel.xpath('//p//text()').getall()[2].strip()
File "<ipython-input-12-6baa2c054111>", line 1
p-text1=b_text=p_text=sel.xpath('//p//text()').getall()[2].strip()
^
SyntaxError: cannot assign to operator
In [13]: p_text1=b_text=p_text=sel.xpath('//p//text()').getall()[2].strip()
In [14]: p_text1
Out[14]: 'Second part of text'
</code></pre>
|
python|html|web-scraping|scrapy
| 0 |
1,905,294 | 72,952,998 |
Pynsist error: NoWheelError: No compatible wheels found for blinker 1.4
|
<p>I have built a web app by using streamlit. Now, I want to share my app to others without deploying in the cloud. I tried to create an executable file by using pynsist. I followed the steps mentioned in this <a href="https://stackoverflow.com/questions/69352179/package-streamlit-app-and-run-executable-on-windows">Package streamlit app and run executable on windows</a> and <a href="https://github.com/takluyver/pynsist/tree/master/examples/streamlit" rel="nofollow noreferrer">https://github.com/takluyver/pynsist/tree/master/examples/streamlit</a> but I'm getting this error.</p>
<p>File "C:\Users\user\anaconda3\lib\site-packages\nsist\wheels.py", line 144, in get_from_pypi
raise NoWheelError('No compatible wheels found for {0.name} {0.version}'.format(self))
nsist.wheels.NoWheelError: No compatible wheels found for blinker 1.4.</p>
<p>Project structure:</p>
<pre><code>|- src
|- main.py
|- run_app.py
|- wheels #empty folder
|- installer.cfg
</code></pre>
|
<p>This occurs because Blinker does not publish wheels on PyPI. Wheels are a common format for modern Python packages, but Blinker was last released in 2015, when it wasn't so normal to make wheels.</p>
<p>Blinker appears to be a simple, pure-Python package, so it should be easy to make a wheel of it locally, by running <code>pip wheel blinker==1.4</code>. Then you can tell Pynsist to use this with either the <code>extra_wheel_sources</code> option - giving it a directory of wheels to use in addition to PyPI for <code>pypi_wheels</code> - or the <code>local_wheels</code> option pointing to the specific <code>.whl</code> file.</p>
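<p>Concretely, a minimal sketch of the <code>local_wheels</code> route, assuming the empty <code>wheels</code> folder from your project structure is used to hold the wheel you build:</p>
<pre><code>pip wheel blinker==1.4 -w wheels/
</code></pre>
<p>and then in <code>installer.cfg</code>, under the <code>[Include]</code> section, add <code>local_wheels = wheels/*.whl</code>.</p>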
<p>See also <a href="https://pynsist.readthedocs.io/en/latest/faq.html#bundling-packages-which-don-t-have-wheels-on-pypi" rel="nofollow noreferrer">bundling packages which don't have wheels on PyPI</a> in Pynsist's FAQ.</p>
|
python|deployment|exe|streamlit|pynsist
| 1 |
1,905,295 | 55,801,208 |
Trend graph with Matplotlib
|
<p>I have the following lists:</p>
<pre><code>input = ['"25', '"500', '"10000', '"200000', '"1000000']
inComp = ['0.000001', '0.0110633', '4.1396405', '2569.270532', '49085.86398']
quickrComp=['0.0000001', '0.0003665', '0.005637', '0.1209121', '0.807273']
quickComp = ['0.000001', '0.0010253', '0.0318653', '0.8851902', '5.554448']
mergeComp = ['0.000224', '0.004089', '0.079448', '1.973014', '13.034443']
</code></pre>
<p>I need to create a trend graph to demonstrate the growth of the values of inComp, quickrComp, quickComp, mergeComp as the input values grow (input is the x-axis). I am using matplotlib.pyplot, and the following code:</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(input,quickrComp, label="QR")
ax.plot(input,mergeComp, label="merge")
ax.plot(input, quickComp, label="Quick")
ax.plot(input, inComp, label="Insrção")
ax.legend()
plt.show()
</code></pre>
<p>However, what is happening is this: the values on the y-axis are out of order; all the quickrComp values are placed on the y-axis first, then all the mergeComp values, and so on. I need the y-axis values to start at 0 and end at the highest value across the 4 lists. How can I do this?</p>
|
<p>Two things: First, your y-values are strings. You need to convert the data to numeric (<code>float</code>) type. Second, your y-values in one of the lists are huge as compared to the remaining three lists. So you will have to convert the y-scale to logarithmic to see the trend. You can, in principle, convert your x-values to float (integers) as well but in your example, you don't need it. In case you want to do that, you will also have to remove the <code>"</code> from the front of each x-value.</p>
<p>A word of caution: Don't name your variables the same as in-built functions. In your case, you should rename <code>input</code> to something else, <code>input1</code> for instance.</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
input1 = ['"25', '"500', '"10000', '"200000', '"1000000']
inComp = ['0.000001', '0.0110633', '4.1396405', '2569.270532', '49085.86398']
quickrComp=['0.0000001', '0.0003665', '0.005637', '0.1209121', '0.807273']
quickComp = ['0.000001', '0.0010253', '0.0318653', '0.8851902', '5.554448']
mergeComp = ['0.000224', '0.004089', '0.079448', '1.973014', '13.034443']
ax.plot(input1, list(map(float, quickrComp)), label="QR")
ax.plot(input1, list(map(float, mergeComp)), label="merge")
ax.plot(input1, list(map(float, quickComp)), label="Quick")
ax.plot(input1, list(map(float, inComp)), label="Insrção")
ax.set_yscale('log')
ax.legend()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/oiAr6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oiAr6.png" alt="enter image description here"></a></p>
|
python|matplotlib
| 0 |
1,905,296 | 73,218,380 |
python3 subprocess output enclosed in b' \n'
|
<p>I am trying to use the subprocess module in Python 3 to fetch the output of a shell command on macOS.</p>
<p>command I am using:</p>
<pre><code>read_key = ["binary", "arg1", "arg2", "arg3"]
proc = subprocess.Popen(read_key, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
</code></pre>
<p>Different output I got.</p>
<pre><code>>>> proc.communicate()
(b'MY_EXPECTED_OUTPUT_STRING\n', b'')
>>> proc.communicate()[0]
b'MY_EXPECTED_OUTPUT_STRING\n'
>>> proc.communicate()[0].strip()
b'MY_EXPECTED_OUTPUT_STRING'
>>>
</code></pre>
<p>But I am trying to get "MY_EXPECTED_OUTPUT_STRING" into a variable (without quotes). Is there any proper usage of subprocess which will allow that?
Or should I be using truncate, grep etc to get around this issue?</p>
<p>expectation:</p>
<pre><code>print(output)
>>>> MY_EXPECTED_OUTPUT_STRING
</code></pre>
<p>Am I doing this the wrong way?
Also, please point me to the simplest but most detailed article on the subprocess module that you have bookmarked :)</p>
|
<p>Your output is in <code>bytes</code> format; you can decode it into a string with <code>utf-8</code>:</p>
<pre><code>proc.communicate()[0].strip().decode('utf-8')
</code></pre>
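<p>Alternatively, if you are on Python 3.7 or later, you can let <code>subprocess.run</code> do the decoding for you (a sketch reusing the placeholder command from the question):</p>
<pre><code>import subprocess

proc = subprocess.run(["binary", "arg1", "arg2", "arg3"],
                      capture_output=True, text=True)
output = proc.stdout.strip()
print(output)
</code></pre>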
|
python|python-3.x|macos|subprocess
| 1 |
1,905,297 | 73,363,459 |
How to count the number of high score values by group in the most efficient way?
|
<p>I have two dataframes. Here are their samples. dt1:</p>
<pre><code>id val
1 smth11
1 smth12
2 smth21
2 smth22
2 smth23
... ...
</code></pre>
<p>dt2:</p>
<pre><code>id val
1 blabla
2 bla2
2 bla3
... ...
</code></pre>
<p>i have a function which calculates a similarity score between strings (like "smth11" and "blabla" in this example) from 0 to 1: <code>my_func</code>. For each value in the "val" column in the dt1 dataset, I want to count the number of values in the "val" column in the dt2 dataset that have a score greater than 0.7. Only the values that are in the same groups of the "id" column in both datasets are compared. So desired result should look like this:</p>
<pre><code>id val count
1 smth11 2
1 smth12 2
2 smth21 5
2 smth22 7
2 smth23 3
... ...
</code></pre>
<p>The problem is that my actual datasets are huge (several thousand rows each). I wanted to know how I could do this in the most efficient way (perhaps doing the calculations in parallel?)</p>
|
<p>I think that the following code should be pretty fast since all calculations are performed by numpy.</p>
<pre><code>import pandas as pd
import numpy as np
import random
# Since the similarity function was not given,
# we'll use random.random to generate values
# between 0 and 1
random.seed(1)
a1 = np.array([
[1, 'smth11'],
[1, 'smth12'],
[2, 'smth21'],
[2, 'smth23'],
[2, 'smth24'],
])
df1 = pd.DataFrame(a1, columns = ['id','val1'])
a2 = np.array([
[1, 'blabla'],
[2, 'bla2'],
[2, 'bla3'],
])
df2 = pd.DataFrame(a2, columns = ['id','val2'])
# matrix merges the df's in such a way as to include
# all (useful) combinations of df1 and df2
matrix = df1.merge(df2, left_on='id', right_on='id')
# Here we add the 'similarity' column to the matrix df.
# You will need to modify the (smilarity) lambda function below.
# I.e. something like lambda row: <some fn of row['val1'] and row(['val2']>
matrix['similarity'] = matrix.apply(lambda row: random.random(), axis=1)
print('------ matrix with scores')
print(matrix)
# Finally we count cases with similarities > .7
counts = matrix.query("similarity > .7").groupby("val1").size()
print('------ counts')
print(counts)
print('NOTE: the type of "counts" is', type(counts))
</code></pre>
<p>Output:</p>
<pre><code>------ matrix with scores
id val1 val2 similarity
0 1 smth11 blabla 0.134364
1 1 smth12 blabla 0.847434
2 2 smth21 bla2 0.763775
3 2 smth21 bla3 0.255069
4 2 smth23 bla2 0.495435
5 2 smth23 bla3 0.449491
6 2 smth24 bla2 0.651593
7 2 smth24 bla3 0.788723
------ counts
val1
smth12 1
smth21 1
smth24 1
dtype: int64
NOTE: the type of "counts" is <class 'pandas.core.series.Series'>
</code></pre>
<p>Please let us know how this code performs with your data.</p>
|
python|python-3.x|pandas|dataframe|function
| 1 |
1,905,298 | 50,212,104 |
Pyodbc + sqlalchemy fails for more than 2100 items
|
<p>In the below code, an error is thrown when employee_code_list is longer than 2000 items as mentioned below. However, it works perfectly when the list is under 2000 items.</p>
<pre><code>query = session.query(TblUserEmployee, TblUser).filter(
and_(
(TblUser.UserId == TblUserEmployee.EmployeeId),
(func.lower(TblUserEmployee.EmployeeCode).in_(employee_code_list)),
(TblUser.OrgnId == MIG_CONSTANTS.context.organizationid),
(TblUser.UserTypeId == user_type)
))
results = query.all()
</code></pre>
<p>This is the relevant part of the error that is thrown:</p>
<pre><code>File ""site-packages\sqlalchemy\util\compat.py"", line 203, in raise_from_cause
File ""site-packages\sqlalchemy\engine\base.py"", line 1193, in _execute_context
File ""site-packages\sqlalchemy\engine\default.py"", line 507, in do_execute
DBAPIError: (pyodbc.Error) ('07002', u'[07002] [Microsoft][ODBC Driver 17 for SQL Server]COUNT field incorrect or syntax error (0) (SQLExecDirectW)') [SQL: u'SELECT [tblTaxGroup].[TaxGroupId] AS [tblTaxGroup_TaxGroupId], [tblTaxGroup].[Code] AS [tblTaxGroup_Code], [tblTaxGroupCenter].[CenterId] AS [tblTaxGroupCenter_CenterId] \nFROM [tblTaxGroup], [tblTaxGroupCenter], [tblCenterTax] \nWHERE [tblTaxGroup].[OrganizationId] = ? AND [tblTaxGroup].[Void] = 0 AND [tblTaxGroup].[TaxGroupId] = [tblTaxGroupCenter].[TaxGroupId] AND [tblTaxGroup].[Code] IN (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) AND [tblCenterTax].[CenterId] = [tblTaxGroupCenter].[CenterId] AND [tblCenterTax].[TaxGroupId] = [tblTaxGroupCenter].[TaxGroupId] AND [tblCenterTax].[ApplyToVendors] = 1 AND [tblTaxGroupCenter].[Void] = 0'] [parameters: (u'A8D7DD91-152D-4042-8205-8EEB1DB2A283', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', u'SGST18', .......u'SGST18')] (Background on this error at: http://sqlalche.me/e/dbapi)
</code></pre>
<p>This is the connector that I am using:</p>
<pre><code> engine_meta = create_engine(
'mssql+pyodbc://' +
global_json["target_database_username"] + ':' +
global_json["target_database_password"] + '@' +
global_json["target_database_server_ip"] + "/" +
global_json["target_database_name"],
implicit_returning=False)
</code></pre>
<p>We have recently changed our connector from pymssql to pyodbc for some other reasons. This problem was not present when we were using pymssql</p>
<p>Edit: Copied a little more of the error message. I had cropped it as it was showing details of my organization's db. And since the same query works for a smaller number of items, I am sure it is not a syntax error.
Note: The error is not due to the same value being substituted repeatedly; I have tried a case with all unique values too, and it still gives the same error.</p>
|
<p>Your query is basically forcing SQLAlchemy to emit a query with 2000+ parameters (<code>SELECT * WHERE Y IN (list of 2000+ values)</code>). Different RDBMS's (and different drivers) have limits on the number of parameters you may have.</p>
<p>Although your stack trace doesn't cover the exact error, I notice that you're using SQL Server and the numbers you're talking about are suspiciously close to a 2100 parameter limit SQL Server imposes under certain circumstances (see <strong>Parameters per user-defined function</strong> on <a href="https://docs.microsoft.com/en-us/sql/sql-server/maximum-capacity-specifications-for-sql-server?view=sql-server-2017" rel="nofollow noreferrer">this</a> Microsoft knowledge article). I would be willing to bet that this is what you're running into.</p>
<p>The easiest approach you can take is to simply run your query in batches for each, say, 1000 items in <code>employee_code_list</code>:</p>
<pre><code>results = []
batch_size = 1000
batch_start = 0
while batch_start < len(employee_code_list):
batch_end = batch_start + batch_size
employee_code_batch = employee_code_list[batch_start:batch_end]
query = session.query(TblUserEmployee, TblUser).filter(
and_(
(TblUser.UserId == TblUserEmployee.EmployeeId),
(func.lower(TblUserEmployee.EmployeeCode).in_(employee_code_batch)),
(TblUser.OrgnId == MIG_CONSTANTS.context.organizationid),
(TblUser.UserTypeId == user_type)
))
results.append(query.all())
batch_start += batch_size
</code></pre>
<p>In this example we're creating an empty results list that we will append each batch of results to. We're setting a batch size of 1000 and a start position of 0 (the first item in <code>employee_code_list</code>). We're then running your query for each batch of 1000, and appending the results to <code>results</code>, until there are no records left to query in <code>employee_code_list</code>.</p>
<p>There are other approaches of course, but this is one that won't require you to use a different RDBMS, and might be easiest to work into your code.</p>
|
python|sqlalchemy|pyodbc
| 9 |
1,905,299 | 71,857,679 |
module 'webdriver_manager.driver' has no attribute 'find_element_by_id'
|
<p>Someone please help me with this error. It works when I run without the <code>pytest-bdd</code> additions (i.e. with the plain <code>pytest</code> framework). But when I create the <code>.feature</code> file and step definitions and access this, I'm facing this issue. Nothing changed in this file while integrating the test structure with <code>pytest-bdd</code>.</p>
<p>I am trying to execute the code below and am facing the <code>"module 'webdriver_manager.driver' has no attribute 'find_element_by_id'"</code> error.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>from selenium.webdriver import ActionChains
from selenium.webdriver.support.select import Select
from Utilities import configReader
import logging
from Utilities.LogUtil import Logger
log = Logger(__name__, logging.INFO)
class BasePage:
def __init__(self, driver):
self.driver = driver
def click(self, locator):
if str(locator).endswith("_XPATH"):
self.driver.find_element_by_xpath(configReader.readConfig("locators", locator)).click()
elif str(locator).endswith("_CSS"):
self.driver.find_element_by_css_selector(configReader.readConfig("locators", locator)).click()
elif str(locator).endswith("_ID"):
self.driver.find_element_by_id(configReader.readConfig("locators", locator)).click()
log.logger.info("Clicking on an element: " + str(locator))
def type(self, locator, value):
if str(locator).endswith("_XPATH"):
self.driver.find_element_by_xpath(configReader.readConfig("locators", locator)).send_keys(value)
elif str(locator).endswith("_CSS"):
self.driver.find_element_by_css_selector(configReader.readConfig("locators", locator)).send_keys(value)
elif str(locator).endswith("_ID"):
self.driver.find_element_by_id(configReader.readConfig("locators", locator)).send_keys(value)
log.logger.info("Typing in an element: " + str(locator) + " value entered as : " + str(value))
def select(self, locator, value):
global dropdown
if str(locator).endswith("_XPATH"):
dropdown = self.driver.find_element_by_xpath(configReader.readConfig("locators", locator))
elif str(locator).endswith("_CSS"):
dropdown = self.driver.find_element_by_css_selector(configReader.readConfig("locators", locator))
elif str(locator).endswith("_ID"):
dropdown = self.driver.find_element_by_id(configReader.readConfig("locators", locator))
select = Select(dropdown)
select.select_by_visible_text(value)
log.logger.info("Selecting from an element: " + str(locator) + " value selected as : " + str(value))
def moveTo(self, locator):
if str(locator).endswith("_XPATH"):
element = self.driver.find_element_by_xpath(configReader.readConfig("locators", locator))
elif str(locator).endswith("_CSS"):
element = self.driver.find_element_by_css_selector(configReader.readConfig("locators", locator))
elif str(locator).endswith("_ID"):
element = self.driver.find_element_by_id(configReader.readConfig("locators", locator))
action = ActionChains(self.driver)
action.move_to_element(element).perform()
log.logger.info("Moving to an element: " + str(locator))
</code></pre>
|
<p>Failed to pass driver correctly.</p>
<p>Updated my code by passing the driver . Its working fine now .</p>
|
python|selenium|selenium-webdriver
| 0 |