Unnamed: 0 | id | title | question | answer | tags | score |
---|---|---|---|---|---|---|
1,903,900 | 33,562,555 |
Regex to capture string between quotes especially when the string starts with a quote
|
<p>I have a string as given below.</p>
<pre><code>string= 'Sam007's Helsen007' is a 'good' boy's in 'demand6's6'.
</code></pre>
<p>I want to extract the string inside the quotes.</p>
<p>The output should look like:</p>
<pre><code>['Sam007's Helsen007', 'good', 'demand6's6']
</code></pre>
<p>The regex I have written is:</p>
<pre><code>re.findall("(?:[^a-zA-Z0-9]*')(.*?)(?:'[^a-zA-Z0-9*])", text)
</code></pre>
<p>But this gives output</p>
<pre><code>["Sam007's Helsen007", 'good', "s in 'demand6's6"]
</code></pre>
<p>When I modify the regex to:</p>
<pre><code>re.findall("(?:[^a-zA-Z0-9]')(.*?)(?:'[^a-zA-Z0-9*])", text)
</code></pre>
<p>It gives me an output:</p>
<pre><code>['good', "demand6's6"]
</code></pre>
<p>The second case seems more appropriate, but it can't handle a string that starts with a quote.</p>
<p>How can I handle this case?</p>
|
<pre><code>st= "'Sam007's Helsen007' is a 'good' boy's in 'demand6's6'"
print re.findall(r"\B'.*?'\B",st)
</code></pre>
<p>Use <code>\B</code>, i.e. a <em>non-word boundary</em>.</p>
<p>Output:<code>["'Sam007's Helsen007'", "'good'", "'demand6's6'"]</code></p>
<p>If you look carefully at your string, you want an opening <code>'</code> that has a non-word character before it and a closing <code>'</code> that has a non-word character after it.</p>
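<p>As a runnable check of this pattern (Python 3 syntax):</p>

```python
import re

st = "'Sam007's Helsen007' is a 'good' boy's in 'demand6's6'"

# \B only matches where the quote is NOT glued to a word character on its
# outer side, so apostrophes inside words (boy's) are skipped.
matches = re.findall(r"\B'.*?'\B", st)
print(matches)  # ["'Sam007's Helsen007'", "'good'", "'demand6's6'"]
```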
|
python|regex
| 6 |
1,903,901 | 46,946,204 |
How to Execute If Statements to Create a List Tensorflow?
|
<p>I am trying to execute this numpy code in TensorFlow. The reason for this is that I want to make binary predictions in a customized way (not using a softmax) and use them in the loss for my network later. <code>output1</code> is what the network outputs, an array of size (1, batch_size). Here is the numpy code:</p>
<pre><code>predictions = []
for j in range(batch_size):
    if output1[0, j] >= output2[0] and output1[0, j] <= output2[1]:
        predictions.append(1)
    else:
        predictions.append(0)
</code></pre>
<p>In Tensorflow, I have tried to do something like this, using <code>tf.cond</code> since I want to evaluate the value of the output of the network and do something based on that:</p>
<pre><code>predictions = []
for j in range(batch_size):
    condResult = tf.cond(output1[0, j] >= output2[0], lambda: predictions.append(1), lambda: predictions.append(0))
    condResultFalse = tf.cond(output1[0, j] <= output2[1], lambda: predictions.append(1), lambda: predictions(0))
</code></pre>
<p>However, this has some problems. First, if both conditions are true, it will append 1 to the list twice, which I don't want. Second, it throws an error saying <code>ValueError: true_fn must have a return value.</code> Apparently, I must return a tensor, but I'm not sure how to do this since I just want to append to a list. </p>
<p>Any help in translating this to Tensorflow would be great!</p>
<p>Thanks</p>
|
<p>A good solution would be to use logical functions directly, such as <code>tf.less_equal</code> (or the <code><=</code> operator), together with broadcasting:
the result will be <code>1</code> wherever your condition is <code>True</code>.</p>
<pre><code>import tensorflow as tf
import numpy as np
output1 = tf.constant(np.random.randn(1, 200), dtype='float32')
output2 = tf.constant([0.1, 0.5], dtype='float32')
a = output2[0] <= output1[0]
b = output1[0] <= output2[1]
c = tf.cast(tf.logical_and(a, b), tf.int64)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
res = sess.run(c)
print res
</code></pre>
<p>Edit: now casting the result to <code>int64</code>.</p>
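<p>The same vectorized comparison can be sketched in plain NumPy, which makes the broadcasting logic easy to verify without a session (shapes assumed from the question):</p>

```python
import numpy as np

rng = np.random.RandomState(0)
output1 = rng.randn(1, 200).astype('float32')    # network output, shape (1, batch_size)
output2 = np.array([0.1, 0.5], dtype='float32')  # [lower, upper] bounds

# elementwise AND of the two broadcast comparisons, cast to 0/1
predictions = ((output2[0] <= output1[0]) & (output1[0] <= output2[1])).astype(np.int64)
```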
|
python|numpy|tensorflow|deep-learning
| 2 |
1,903,902 | 46,925,633 |
Python fill string with formats with dictionary
|
<p>Say I have templates to fill with values in dict:</p>
<p>I have templates like this: </p>
<pre><code>templates = [
    "I have four {fruit} in {place}",
    "I have four {fruit} and {grain} in {place}",
    ...
]
</code></pre>
<p>With dictionary like this: </p>
<pre><code>my_dict = {'fruit': ['apple', 'banana', 'mango'],
           'place': ['kitchen', 'living room'],
           'grain': ['wheat', 'rice']
           }
</code></pre>
<p>Say I have a sentence like this: </p>
<pre><code>sentence = "I have four apple in kitchen"
</code></pre>
<p>Given this sentence, the templates, and the dictionary,
I would like to know that the sentence matched one of the templates and get back the values it matched, like this:</p>
<pre><code>{'fruit': 'apple', 'place': 'kitchen'}
</code></pre>
<p>And similar to above if:</p>
<pre><code>Input: "I have four apple and wheat in kitchen"
Output: {'fruit': 'apple', 'grain': 'wheat', 'place': 'kitchen'}
</code></pre>
<p>And it would be great if it can handle this too: </p>
<pre><code>Input: "I have four apple in bedroom"
Output: {'fruit': 'apple'}
</code></pre>
<p>Notice it only returns fruit and not bedroom since bedroom is not in the values of place. </p>
|
<p>Turn your formatted strings into regular expressions:</p>
<pre><code>import re
words = {k: '(?P<{}>{})'.format(k, '|'.join(map(re.escape, v))) for k, v in my_dict.items()}
patterns = [re.compile(template.format(**words)) for template in templates]
</code></pre>
<p>This produces patterns of the form <code>I have four (?P<fruit>apple|banana|mango) in (?P<place>kitchen|living room)</code>. Matching these then provides you with your expected output:</p>
<pre><code>for pattern in patterns:
    match = pattern.match(sentence)
    if match:
        matched_words = match.groupdict()
</code></pre>
<p>This is a very fast, O(N) approach to matching sentences exactly:</p>
<pre><code>>>> import re
>>> templates = [
...     "I have four {fruit} in {place}",
...     "I have four {fruit} and {grain} in {place}",
... ]
>>> my_dict = {'fruit': ['apple', 'banana', 'mango'],
...            'place': ['kitchen', 'living room'],
...            'grain': ['wheat', 'rice']
...            }
>>> words = {k: '(?P<{}>{})'.format(k, '|'.join(map(re.escape, v))) for k, v in my_dict.items()}
>>> patterns = [re.compile(template.format(**words)) for template in templates]
>>> def find_matches(sentence):
...     for pattern in patterns:
...         match = pattern.match(sentence)
...         if match:
...             return match.groupdict()
...
>>> find_matches("I have four apple in kitchen")
{'fruit': 'apple', 'place': 'kitchen'}
>>> find_matches("I have four apple and wheat in kitchen")
{'fruit': 'apple', 'grain': 'wheat', 'place': 'kitchen'}
</code></pre>
<p>If you need your templates to match <em>partial</em> sentences, wrap the optional parts in <code>(?...)</code> groups:</p>
<pre><code>"I have four {fruit} in (?{place})"
</code></pre>
<p>or add <code>\w+</code> to the words list (in addition to the valid words), then validate <code>groupdict()</code> result against <code>my_dict</code> after matching. For the <code>in bedroom</code> case, <code>\w+</code> will match the <code>bedroom</code> part but won't be found in the <code>my_dict</code> list for <code>place</code>, for example.</p>
|
python|dictionary|format|zip|itertools
| 6 |
1,903,903 | 38,022,329 |
Handling Xpaths and Submitting Webforms using Requests and lxml
|
<p>Hello, I am currently working on a program that will submit a phone number to a reverse phone lookup website and then follow the correct XPath to determine whether the phone is wireless or not.</p>
<p>The xpath of the element is </p>
<pre><code>//*[@id="content"]/fieldset/div/table/tbody/tr[3]/td[2]/strong
</code></pre>
<p>My code thus far is:</p>
<pre><code>def Phone_Checker(number):
    url = 'http://www.reversephonelookup.com/'
    data = {'Enter Number': number}
    r = requests.post(url, data=data)
    tree = html.fromstring(r.content)
    Service_type = tree.xpath('//fieldset[@id="content"]/text()')
    print(Service_type)
    if "wireless" in Service_type:
        print(True)
        return True
    else:
        print(False)
        return False
</code></pre>
<p>I was just wondering: am I writing my XPath wrong, and does my code submit the phone number correctly? I am a mediocre programmer and would like to know how to make this code work as intended.</p>
|
<p>Your approach is missing quite a lot of necessary data and steps. When I first looked, the page seemed to rely on a lot of JavaScript, but after monitoring the requests I saw you can actually get the data using requests. First we need to post to:</p>
<p><em><code>http://www.reversephonelookup.com/results.php</code></em>, with the correct post data:</p>
<p><a href="https://i.stack.imgur.com/yyB4N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yyB4N.png" alt="enter image description here"></a></p>
<p>Once we have done that we need to make a get request to <em><code>http://www.reversephonelookup.com/number/the_number</code></em>:</p>
<p><a href="https://i.stack.imgur.com/0vnky.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0vnky.png" alt="enter image description here"></a></p>
<p>So putting that all together:</p>
<pre><code>def Phone_Checker(number):
    head = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"}
    url = 'http://www.reversephonelookup.com/results.php'
    data = {"phone": number, "image.x": "26", "image.y": "37"}
    with requests.Session() as s:
        s.post(url, data=data, headers=head)
        r = s.get("http://www.reversephonelookup.com/number/{}/".format(number), headers=head)
        tree = html.fromstring(r.content)
        Service_type = tree.xpath('//*[@id="content"]//fieldset//text()')
        return "wireless" in Service_type

Phone_Checker("2068675309")
</code></pre>
<p><code>return "wireless" in Service_type</code> will only return <code>True</code> if <code>"wireless"</code> is an exact element of the list. I also tweaked your XPath to get all the text.</p>
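<p>One detail worth noting: <code>in</code> on a list checks for an exact element, not a substring, so if the page renders the word inside a longer text node the check can miss it. A small stdlib illustration (the sample list is hypothetical):</p>

```python
# Hypothetical text nodes, as xpath('//...//text()') might return them.
texts = ['Phone Number: ', '(206) 867-5309', 'Service Type: ', 'Wireless']

print("wireless" in texts)                          # False: exact element match required
print(any('wireless' in t.lower() for t in texts))  # True: substring search instead
```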
<p>A more useful way to use the function would be to return the lxml tree:</p>
<pre><code>def Phone_Checker(number):
    head = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"}
    url = 'http://www.reversephonelookup.com/results.php'
    data = {"phone": number, "image.x": "26", "image.y": "37"}
    with requests.Session() as s:
        s.post(url, data=data, headers=head)
        r = s.get("http://www.reversephonelookup.com/number/{}/".format(number), headers=head)
        return html.fromstring(r.content)
</code></pre>
<p>Then:</p>
<pre><code>xml = Phone_Checker(....)
</code></pre>
<p>An example:</p>
<pre><code>In [5]: xml = Phone_Checker("8598795756")
In [6]: print(xml.xpath("//fieldset//tr/td[text()='Original Service Type:']/following::strong/text()"))
['Landline', 'Independent Telephone Company', 'Versailles, KY', 'VRSLKYXADS0']
</code></pre>
<p>The first result is the type of connection, which if you just want that you can use:</p>
<pre><code>"//fieldset//tr/td[text()='Original Service Type:']/following::strong[1]/text()"
</code></pre>
|
xml|python-3.x|web-scraping|python-requests
| 0 |
1,903,904 | 37,692,974 |
How to print EC2 Tag name along with IP address?
|
<p>I have code which prints the public IPs for the running instances:</p>
<pre><code>regions = ['us-east-1','us-west-1','us-west-2','eu-west-1','sa-east-1','ap-southeast-1','ap-southeast-2','ap-northeast-1']

for region in regions:
    client = boto3.client('ec2', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY, region_name=region)
    addresses_dict = client.describe_addresses()
    for eip_dict in addresses_dict['Addresses']:
        if 'PrivateIpAddress' in eip_dict:
            print eip_dict['PublicIp']
</code></pre>
<p>This is fine. Now I also want to print the <code>tag name</code> and store it in another dict. I know this can be done by:</p>
<pre><code>regions = ['us-east-1','us-west-1','us-west-2','eu-west-1','sa-east-1','ap-southeast-1','ap-southeast-2','ap-northeast-1']

for region in regions:
    client = boto3.client('ec2', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY, region_name=region)
    dex_dict = client.describe_tags()
    for dexy_dict in dex_dict['Tags']:
        print dexy_dict['Value']
</code></pre>
<p>The problem is: how do I combine this into one function and use two dicts, one to store the IPs and another to store the tag names? Please help.</p>
|
<p>Try the following code, it will give you a dictionary where the key is the <code>InstanceId</code> and the value is a list of <code>[PublicIP, Name]</code>.</p>
<pre><code>import boto3

def instance_info():
    instance_information = {}
    ip_dict = {}
    client = boto3.client('ec2')
    addresses_dict = client.describe_addresses().get('Addresses')
    for address in addresses_dict:
        if address.get('InstanceId'):
            instance_information[address['InstanceId']] = [address.get('PublicIp')]
    dex_dict = client.describe_tags().get('Tags')
    for dex in dex_dict:
        if instance_information.get(dex['ResourceId']):
            instance_information[dex['ResourceId']].append(dex.get('Value'))
    for instance in instance_information:
        if len(instance_information[instance]) == 2:
            ip_dict[instance_information[instance][0]] = instance_information[instance][1]
        else:
            ip_dict[instance_information[instance][0]] = ''
    return instance_information, ip_dict
</code></pre>
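<p>The pairing logic can be tested without AWS access by substituting hypothetical response fragments for <code>describe_addresses()</code> and <code>describe_tags()</code>:</p>

```python
# Hypothetical API response fragments (same shapes boto3 returns).
addresses = [{'InstanceId': 'i-1', 'PublicIp': '1.2.3.4'},
             {'PublicIp': '5.6.7.8'}]  # an EIP not associated with any instance
tags = [{'ResourceId': 'i-1', 'Key': 'Name', 'Value': 'web-server'},
        {'ResourceId': 'i-2', 'Key': 'Name', 'Value': 'db-server'}]

# key: InstanceId, value: [PublicIp, Name] -- the same structure the answer builds
info = {a['InstanceId']: [a['PublicIp']] for a in addresses if a.get('InstanceId')}
for tag in tags:
    if tag['ResourceId'] in info:
        info[tag['ResourceId']].append(tag['Value'])

print(info)  # {'i-1': ['1.2.3.4', 'web-server']}
```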
|
python|amazon-ec2|boto3
| 0 |
1,903,905 | 67,758,243 |
Web application using Python3 not working when Dockerized
|
<h2>HelloWorld-1.py</h2>
<pre><code>app = Flask(__name__)

@app.route('/')
def printHelloWorld():
    print("+++++++++++++++++++++")
    print("+   HELLO WORLD-1   +")
    print("+++++++++++++++++++++")
    return '<h1>Bishwajit</h1>'
    # return '<h1>Hello %s!<h1>' %name

if name == '__main__':
    app.run(debug='true')
</code></pre>
<h2>Dockerfile</h2>
<pre><code>FROM python:3
ADD HelloWorld-1.py /HelloWorld-1.py
RUN pip install flask
EXPOSE 80
CMD [ "python", "/HelloWorld-1.py"]
</code></pre>
<p>Building the Docker image using the below command:</p>
<pre><code>docker build -t helloworld .
</code></pre>
<p>Running the Docker image using the below command:</p>
<pre><code>docker run -d --name helloworld -p 80:80 helloworld
</code></pre>
<p>When I run the below command:</p>
<pre><code>docker ps -a
</code></pre>
<p>I get the below output:</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cebfe8a22493 helloworld "python /home/HelloW…" 2 minutes ago Up 2 minutes (unhealthy) 0.0.0.0:80->80/tcp helloworld
</code></pre>
<p>If I hit 127.0.0.1:5000 in the browser, I get no response.
But when I run the Python file directly, it works properly in the browser.</p>
|
<p>I reproduced your problem and there were four main problems:</p>
<ol>
<li>Not importing <code>flask</code>.</li>
<li>Using <code>name</code> instead of <code>__name__</code></li>
<li>Not assigning the correct port.</li>
<li>Not assigning the host.</li>
</ol>
<p>This is how your <code>HelloWorld-1.py</code> should look:</p>
<pre><code>from flask import Flask

app = Flask(__name__)

@app.route('/')
def printHelloWorld():
    print("+++++++++++++++++++++")
    print("+   HELLO WORLD-1   +")
    print("+++++++++++++++++++++")
    return '<h1>Bishwajit</h1>'
    # return '<h1>Hello %s!<h1>' %name

if __name__ == '__main__':
    app.run(host='0.0.0.0')
</code></pre>
<p>This is how your <code>Dockerfile</code> should look:</p>
<pre><code>FROM python:3
ADD HelloWorld-1.py .
RUN pip install flask
CMD [ "python", "/HelloWorld-1.py"]
</code></pre>
<p>Then simply build and run:</p>
<pre><code>docker build . -t helloflask
docker run -dit -p 5000:5000 helloflask
</code></pre>
<p>Now go to <code>localhost:5000</code> and it should work.</p>
<p>Additionally: You could actually assign any other port, for example 4444, and then go to <code>localhost:4444</code>:</p>
<pre><code>docker run -dit -p 4444:5000 helloflask
</code></pre>
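<p>A quick way to see why the <code>name</code> vs <code>__name__</code> fix matters: at module scope, plain <code>name</code> is simply an undefined variable, so the original guard raises a <code>NameError</code> before Flask ever starts. A minimal stdlib check:</p>

```python
# In a fresh module, `name` was never assigned, so referencing it fails;
# the dunder variable __name__ is what Python defines automatically.
try:
    name  # noqa: F821 -- reproduces the bug in the original script
    guard_ok = True
except NameError:
    guard_ok = False

print(guard_ok)  # False: plain `name` does not exist
print(isinstance(__name__, str))
```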
|
python-3.x|docker|flask
| 1 |
1,903,906 | 61,181,825 |
Search within subdatasets in python
|
<p>I want to search within each subdataset in <code>df</code>:</p>
<p><code>df</code>:</p>
<pre><code> id timestamp data gradient Start
timestamp
2020-01-15 06:12:49.213 40250 2020-01-15 06:12:49.213 20.0 0.00373 NaN
2020-01-15 06:12:49.313 40251 2020-01-15 06:12:49.313 19.5 0.00354 0.0
2020-01-15 08:05:10.083 40256 2020-01-15 08:05:10.083 20.0 0.00020 1.0
2020-01-15 08:05:10.183 40257 2020-01-15 08:05:10.183 20.5 -0.00440 0.0
...
2020-01-31 09:01:50.993 40310 2020-01-31 09:01:50.993 21.0 0.55473 1.0
2020-01-31 09:01:51.093 40311 2020-01-31 09:01:51.093 21.5 0.00589 0.0
...
</code></pre>
<p>A sub-dataset starts with <code>Start==1</code> and ends with the next <code>Start==1</code>. Within each sub-dataset I want to find the time from <code>start==1</code> (<code>start_time</code>) until <code>gradient > 0.0003</code>, not inclusive (<code>end_time</code>), then calculate the average of <code>data</code> over that span, to obtain a table like this:</p>
<pre><code>start_time end_time Average
2020-01-15 08:05:10.083 2020-01-15 08:05:23.273 35(for example)
...
</code></pre>
<hr>
<p>Edit:
Reproducible dataframe:</p>
<pre><code>import numpy as np
import pandas as pd

d = {'timestamp': ["2020-01-15 06:12:49.213", "2020-01-15 06:12:49.313", "2020-01-15 08:05:10.083", "2020-01-15 08:05:10.183", "2020-01-15 09:01:50.993", "2020-01-15 09:01:51.093", "2020-01-15 09:51:01.890", "2020-01-15 09:51:01.990", "2020-01-15 10:40:59.657", "2020-01-15 10:40:59.757", "2020-01-15 10:42:55.693", "2020-01-15 10:42:55.793", "2020-01-15 10:45:35.767", "2020-01-15 10:45:35.867", "2020-01-15 10:45:46.770", "2020-01-15 10:45:46.870", "2020-01-15 10:47:19.783", "2020-01-15 10:47:19.883", "2020-01-15 10:47:22.787"],
     'data': [20.0, 19.5, 20.0, 20.5, 21.0, 21.5, 22.0, 22.5, 23.0, 23.5, 23.0, 22.5, 23.0, 23.5, 24.0, 24.5, 25.0, 25.5, 26],
     'gradient': [np.nan, np.nan, 0.000000, 0.000148, 0.000294, 0.000294, 0.000339, 0.000339, 0.000334, 0.000334, 0.000000, -0.008618, 0.000000, 0.006247, 0.090884, 0.090884, 0.010751, 0.010751, 0.332889],
     'Start': [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
     }
df = pd.DataFrame(d)
</code></pre>
<p>Expected output for reproducible dataframe:</p>
<pre><code>start_time end_time Average
2020-01-15 08:05:10.083 2020-01-15 09:01:51.093 20.75 = average of (20.0, 20.5, 21.0, 21.5)
2020-01-15 10:45:35.767 2020-01-15 10:45:35.767 23.00 = average of (23.0)
</code></pre>
|
<p>I believe you need:</p>
<pre><code>df['g'] = df['Start'].cumsum()
df['m'] = df['gradient'].gt(0.0003)
#filter first group - rows before first 1
df1 = df[df['g'].ne(0)].copy()
#filter rows to first True in column m
df1 = df1[df1.groupby('g')['m'].cumsum().eq(0)]
# named aggregation
df2 = df1.groupby('g').agg(start_time=('timestamp', 'first'),
                           end_time=('timestamp', 'last'),
                           Average=('data', 'mean')).reset_index(drop=True)
print (df2)
start_time end_time Average
0 2020-01-15 08:05:10.083 2020-01-15 09:01:51.093 20.75
1 2020-01-15 10:45:35.767 2020-01-15 10:45:35.767 23.00
</code></pre>
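<p>A minimal sketch of the same idea on a toy frame (timestamps replaced by short labels for readability):</p>

```python
import pandas as pd

# Tiny frame mimicking the question's structure: two sub-datasets, each
# started by Start==1, with gradient eventually exceeding the threshold.
df = pd.DataFrame({
    'timestamp': ['t0', 't1', 't2', 't3', 't4', 't5'],
    'data':      [20.0, 20.5, 21.0, 23.0, 24.0, 25.0],
    'gradient':  [0.0001, 0.0002, 0.001, 0.0001, 0.09, 0.01],
    'Start':     [1.0, 0.0, 0.0, 1.0, 0.0, 0.0],
})

df['g'] = df['Start'].cumsum()        # label each sub-dataset
df['m'] = df['gradient'].gt(0.0003)   # True once the gradient exceeds the threshold
df1 = df[df['g'].ne(0)].copy()        # drop rows before the first Start==1
df1 = df1[df1.groupby('g')['m'].cumsum().eq(0)]  # keep rows before the first exceedance

out = df1.groupby('g').agg(start_time=('timestamp', 'first'),
                           end_time=('timestamp', 'last'),
                           Average=('data', 'mean')).reset_index(drop=True)
print(out)
```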
|
python|pandas|numpy|select|group-by
| 0 |
1,903,907 | 43,190,074 |
Convert Numpy Array to Monotone Graph (networkx)
|
<p>I have a simple array of 1s and 0s, and I want to convert this array to a graph using NetworkX with the following conditions:</p>
<ul>
<li>monotone </li>
<li>Directional</li>
<li>Weighted graph (go/no go areas)</li>
<li>Starts in the lower left hand corner and works right</li>
</ul>
<p>There is a built in function called <code>from_numpy_matrix</code></p>
<p>See <a href="http://networkx.github.io/documentation/networkx-1.7/reference/generated/networkx.convert.from_numpy_matrix.html" rel="nofollow noreferrer">this</a></p>
<p>The goal is to take this graph and show that I can get from the lower left hand corner of the matrix (think raster dataset) to the upper right hand corner without moving backwards or down. </p>
<p>Example array:</p>
<pre><code>array = [[0,0,1,0,0],
[1,0,0,1,0],
[1,0,1,1,0],
[0,0,1,1,0]]
myarray = np.array(array)
</code></pre>
<p><code>0 means go area, 1 means blocked.</code></p>
|
<p>That was fun. </p>
<p><code>from_numpy_matrix</code> doesn't help as there is no simple transformation from your maze to an adjacency matrix. Instead it is much easier to iterate over allowed positions (i.e. "not wall") and check if there is an allowed position in the allowed directions (up, right, diagonal up-right). </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import networkx as nx

def maze_to_graph(is_wall, allowed_steps):
    """
    Arguments:
    ----------
    is_wall       -- 2D boolean array marking the position of walls in the maze
    allowed_steps -- list of allowed steps; e.g. [(0, 1), (1, 1)] signifies that
                     coming from tile (i, j) only tiles (i, j+1) and (i+1, j+1)
                     are reachable (iff there is no wall)

    Returns:
    --------
    g       -- networkx.DiGraph() instance
    pos2idx -- dict mapping (i, j) position to node idx (for testing if path exists)
    idx2pos -- dict mapping node idx to (i, j) position (for plotting)
    """

    # map array indices to node indices and vice versa
    node_idx = range(np.sum(~is_wall))
    node_pos = zip(*np.where(~is_wall))
    pos2idx = dict(zip(node_pos, node_idx))

    # create graph
    g = nx.DiGraph()
    for (i, j) in node_pos:
        for (delta_i, delta_j) in allowed_steps:   # try to step in all allowed directions
            if (i+delta_i, j+delta_j) in pos2idx:  # i.e. target node also exists
                g.add_edge(pos2idx[(i, j)], pos2idx[(i+delta_i, j+delta_j)])

    idx2pos = dict(zip(node_idx, node_pos))
    return g, idx2pos, pos2idx

def test():
    arr = np.array([[0,0,1,0,0],
                    [1,0,0,1,0],
                    [1,0,1,1,0],
                    [0,0,1,1,0]]).astype(np.bool)

    steps = [(0, 1),   # right
             (-1, 0),  # up
             (-1, 1)]  # diagonal up-right

    g, idx2pos, pos2idx = maze_to_graph(arr, steps)
    nx.draw(g, pos=idx2pos, node_size=1200, node_color='w', labels=idx2pos)

    start = (3, 0)
    stop = (0, 4)
    print "Has path: ", nx.has_path(g, pos2idx[start], pos2idx[stop])
    return
</code></pre>
<p><a href="https://i.stack.imgur.com/ZW7wU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZW7wU.png" alt="enter image description here"></a></p>
|
python|numpy|networkx
| 2 |
1,903,908 | 37,012,234 |
Pandas: reshape data frame
|
<p>I have the following data frame:</p>
<pre><code>url='https://raw.githubusercontent.com/108michael/ms_thesis/master/crsp.dime.mpl.df'
zz=pd.read_csv(url)
zz.head(5)
date feccandid feccandcfscore.dyn pacid paccfscore cid catcode type_x di amtsum state log_diff_unemployment party type_y bills years_exp disposition billsum
0 2006 S8NV00073 0.496 C00000422 0.330 N00006619 H1100 24K D 5000 NV -0.024693 Republican rep s22-109 12 support 3
1 2006 S8NV00073 0.496 C00375360 0.176 N00006619 H1100 24K D 4500 NV -0.024693 Republican rep s22-109 12 support 3
2 2006 S8NV00073 0.496 C00113803 0.269 N00006619 H1130 24K D 2500 NV -0.024693 Republican rep s22-109 12 support 2
3 2006 S8NV00073 0.496 C00249342 0.421 N00006619 H1130 24K D 5000 NV -0.024693 Republican rep s22-109 12 support 2
4 2006 S8NV00073 0.496 C00255752 0.254 N00006619 H1130 24K D 4000 NV -0.024693 Republican rep s22-109 12 support 2
</code></pre>
<p>I want to manipulate it such that the <code>date</code> column is an index, the <code>feccandid</code> values are the column headers (I will later make them a second index so I can send the frame to panel) and the other column headers become rows. Desired output would <em>look</em> something like this:</p>
<pre><code>date feccandid S8NV00072 S8NV00074 S8NV00075 S8NV00076 S8NV00077
2006 feccandcfscore.dyn 0.496 0.496 0.496 0.496 0.496
2006 pacid C00000422 C00375360 C00113803 C00249342 C00255752
2006 paccfscore 0.33 0.176 0.269 0.421 0.254
2006 cid N00006619 N00006619 N00006619 N00006619 N00006619
2006 catcode H1100 H1100 H1130 H1130 H1130
2006 type_x 24K 24K 24K 24K 24K
2006 di D D D D D
2006 amtsum 5000 4500 2500 5000 4000
2006 state NV NV NV NV NV
2006 log_diff_unemployment -0.024693 -0.024693 -0.024693 -0.024693 -0.024693
2006 party Republican Republican Republican Republican Republican
2006 type_y rep rep rep rep rep
2006 bills s22-109 s22-109 s22-109 s22-109 s22-109
2006 years_exp 12 12 12 12 12
2006 disposition support support support support support
2006 billsum 3 3 2 2 2
</code></pre>
<p>I have tried the following, as recommended by <em>jezrael</em>:</p>
<pre><code>zz=zz.pivot_table(index='date', columns='feccandid', aggfunc=np.mean)
zz.head()
feccandcfscore.dyn ... billsum
feccandid H0AL02087 H0AL07060 H0AR01083 H0AR02107 H0AR03055 H0AR04038 H0AZ01259 H0AZ03362 H0CA15148 H0CA19173 ... S8MI00158 S8MN00438 S8MS00055 S8MT00010 S8NC00239 S8NE00117 S8NM00010 S8NV00073 S8OR00207 S8WI00026
date
2005 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2006 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 2.125 NaN NaN
2007 NaN 0.016 NaN NaN NaN -0.151 NaN NaN -0.777 NaN ... 1.000000 NaN 1.666667 1.552632 NaN NaN 2.0 1.000 NaN 2.0
2008 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ... 1.285714 NaN NaN 5.431373 NaN NaN NaN NaN NaN NaN
2009 NaN NaN NaN NaN NaN -0.086 NaN NaN -0.790 NaN ... NaN NaN NaN 2.433333 NaN NaN NaN NaN 3.0 2.8
</code></pre>
<p>This is something close to what I would like, except that I'm trying to get <code>feccandid</code> as the only column headers and the original column headers (which, in this last example, are the topmost column headers) to be transposed as rows.</p>
|
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow noreferrer"><code>pivot_table</code></a> (default aggregate function is <code>np.mean</code>):</p>
<pre><code>df = zz.pivot_table(index='date', columns='feccandid', fill_value='0', aggfunc=np.mean)
df.columns = ['_'.join(col) for col in df.columns.values]
print df
</code></pre>
<p>If you need replace <code>NaN</code> to <code>0</code>:</p>
<pre><code>print zz.pivot_table(index='date', columns='feccandid', fill_value='0', aggfunc=np.mean)
</code></pre>
<p>EDIT:</p>
<p>I created a small sample <code>DataFrame</code>. As <a href="https://stackoverflow.com/questions/37012234/pandas-reshape-data-frame#comment61586900_37012234">ptrj</a> says, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow noreferrer"><code>T</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_panel.html" rel="nofollow noreferrer"><code>to_panel</code></a> for creating a <code>panel</code>. Then maybe you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Panel.transpose.html" rel="nofollow noreferrer"><code>transpose</code></a>:</p>
<pre><code>import numpy as np
import pandas as pd

zz = pd.DataFrame({'date': {0: 2001, 1: 2001, 2: 2002, 3: 2002},
                   'feccandid': {0: 'S8NV00072', 1: 'S8NV00074',
                                 2: 'S8NV00072', 3: 'S8NV00074'},
                   'pacid': {0: 0.3, 1: 0.1, 2: 0.7, 3: 0.4},
                   'billsum': {0: 1, 1: 2, 2: 5, 3: 6}})
print zz
   billsum  date  feccandid  pacid
0        1  2001  S8NV00072    0.3
1        2  2001  S8NV00074    0.1
2        5  2002  S8NV00072    0.7
3        6  2002  S8NV00074    0.4

zz = zz.pivot_table(index='date',
                    columns='feccandid',
                    fill_value=0,
                    aggfunc=np.mean)
print zz.T
date               2001  2002
        feccandid
billsum S8NV00072   1.0   5.0
        S8NV00074   2.0   6.0
pacid   S8NV00072   0.3   0.7
        S8NV00074   0.1   0.4
</code></pre>
<pre><code>wp = zz.T.to_panel()
print wp
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 2 (major_axis) x 2 (minor_axis)
Items axis: 2001 to 2002
Major_axis axis: billsum to pacid
Minor_axis axis: S8NV00072 to S8NV00074
print wp.transpose(2, 0, 1)
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 2 (major_axis) x 2 (minor_axis)
Items axis: S8NV00072 to S8NV00074
Major_axis axis: 2001 to 2002
Minor_axis axis: billsum to pacid
</code></pre>
|
python|pandas|pivot|melt
| 1 |
1,903,909 | 36,886,361 |
Django-Rest-Framework: Globally set pagination class in django settings.py
|
<p>I'm trying to set a default pagination for all API calls:</p>
<p><a href="http://www.django-rest-framework.org/api-guide/pagination/#modifying-the-pagination-style" rel="nofollow">http://www.django-rest-framework.org/api-guide/pagination/#modifying-the-pagination-style</a></p>
<p>And now I want to make my <code>CustomPagination</code> work globally:</p>
<pre><code>class CustomPagination(PageNumberPagination):
    """
    Custom paginator
    """
    page_size = 10
    page_size_query_param = 'page_size'
    max_page_size = 1000
</code></pre>
<p>I then register the class in <code>settings.py</code>:</p>
<pre><code># =========== REST Framework ==============
REST_FRAMEWORK = {
    'DEFAULT_PAGINATION_CLASS': 'football.views.CustomPagination',
    'DEFAULT_FILTER_BACKENDS': ('rest_framework.filters.DjangoFilterBackend',),
}
</code></pre>
<p>Still, it raised an error:</p>
<blockquote>
<p>ImportError: Could not import 'football.views.CustomPagination' for API setting 'DEFAULT_PAGINATION_CLASS'. AttributeError: module 'football.views' has no attribute 'CustomPagination'.</p>
</blockquote>
<p>How can I work around it?</p>
|
<p>I encountered the same problem, and finally I figured out it was because the module <code>views.py</code> was not correctly loaded: I didn't create the REST API folder with <code>manage.py startapp</code>, and there was no entry for it in <code>INSTALLED_APPS</code> of the project's <code>settings.py</code> file.<br>
I moved the <code>CustomPagination</code> paging class to <code>views.py</code> of my first app, which was created by <code>manage.py startapp</code>, and then it worked.</p>
<p>To debug, you can add following line to <a href="https://github.com/encode/django-rest-framework/blob/master/rest_framework/settings.py#L179" rel="nofollow noreferrer">rest_framework/settings.py source code</a> like this:</p>
<pre><code>module = import_module(module_path)             # Original code
if setting_name == "DEFAULT_PAGINATION_CLASS":  # Added code
    print(dir(module))                          # Added code
return getattr(module, class_name)              # Original code
</code></pre>
<p>If <code>AttributeError</code> raised, it should be like: (only builtin attributes in the list)</p>
<pre>
# ./manage.py runserver 0:8000
Performing system checks...
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__']
Unhandled exception in thread started by .wrapper at 0x7fd60a265510>
Traceback (most recent call last):
</pre>
<p>If it works, <code>CustomPagination</code> should be listed in the list:</p>
<pre>
# ./manage.py runserver 0:8000
Performing system checks...
['some-other-classes', 'PageNumberPagination', 'CustomPagination', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'models', 'random', 'reverse', 'settings']
System check identified no issues (0 silenced).
</pre>
|
python|django|rest|pagination|django-rest-framework
| 0 |
1,903,910 | 19,916,810 |
For loop issue in python while using regex for pattern matching in DNA analysis
|
<p>I am fairly new to Python and I have an issue with my <code>for</code> loop that I can't quite seem to figure out.</p>
<p>I am trying to read into a FASTA file which has the following example text:</p>
<pre><code>>seq1
AAACTACCGCGTTT
>seq2
AAACTGCAACTAGCGTTT
>seq3
AAACCGGAGTTACCTAGCGTTT
</code></pre>
<p>What I would like to do is read the file and print the FASTA header (e.g. the header is >seq1), then match two unique patterns (in this example, "AAA" and "TTT") present in the DNA sequence and print the DNA sequence that lies between these two patterns.</p>
<p>So I would like my output to look like this:</p>
<pre><code>>seq1
CTACCGCG
>seq2
CTGCAACTAGCG
>seq3
CCGGAGTTACCTAGCG
</code></pre>
<p>I have the following code:</p>
<pre><code>import re

def find_seq(filename):
    with open(filename) as file:
        seq = ''
        for line in file:
            header = re.search(r'^>\w+', line)
            if(header):
                print (header.group())
                seq = seq.replace('\n','')
                find_Lpattern = re.sub(r'.*AAA', '', seq)
                find_Rpattern = re.sub(r'TTT.*', '', find_Lpattern)
                if(find_Rpattern):
                    print (find_Rpattern)
                seq = ''
            else:
                seq += line

filename = 'test.txt'
print(find_seq(filename))
</code></pre>
<p>I keep getting this as my output:</p>
<pre><code>>seq1
>seq2
CTACCGCG
>seq3
CTGCAACTAGCG
</code></pre>
<p>Essentially my for loop skips over seq1 and then assigns the DNA sequence from seq1 to seq2, and the iteration on my for loop is off. Could anyone please point me in the right direction so I can fix this issue?</p>
|
<p>Even assuming your indentation is set in the way that would produce the results you describe, your logic is off. You're printing the header before you handle the accumulated <code>seq</code>.</p>
<p>When you read line 1 of your file, your <code>header</code> regexp matches. At that point, <code>seq</code> is the empty string. It therefore prints the match, and runs your replace and <code>re.sub</code> calls on the empty string.</p>
<p>Then it reads line 2, "AAACTACCGCGTTT", and appends that to <code>seq</code>.</p>
<p>Then it reads line 3, ">seq2". That matches your header regexp, so it prints the header. Then in runs your replace and sub calls on <code>seq</code> - which is still "AAACTACCGCGTTT" from line 2.</p>
<p>You need to move your <code>seq</code> handling to before you print the headers, and consider what will happen when you run off the end of the file without finding a final header - you will still have 'seq' contents that you want to parse and print after your for loop has ended.</p>
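<p>A minimal restructuring along those lines might look like this — it parses the accumulated sequence <em>before</em> printing each new header, and handles the final record after the loop (same regexes as the question):</p>

```python
import re

def find_seq(filename):
    with open(filename) as file:
        seq = ''
        for line in file:
            header = re.search(r'^>\w+', line)
            if header:
                # flush the previous record *before* printing the new header
                trimmed = re.sub(r'TTT.*', '',
                                 re.sub(r'.*AAA', '', seq.replace('\n', '')))
                if trimmed:
                    print(trimmed)
                seq = ''
                print(header.group())
            else:
                seq += line
        # the last record has no header after it, so flush once more
        trimmed = re.sub(r'TTT.*', '',
                         re.sub(r'.*AAA', '', seq.replace('\n', '')))
        if trimmed:
            print(trimmed)
```

<p>On the example file this prints each header followed by the sequence found between "AAA" and "TTT".</p>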
<p>Or maybe look into the third-party Biopython library, which has the <a href="http://biopython.org/wiki/SeqIO" rel="nofollow"><code>SeqIO</code></a> module to parse FASTA files.</p>
|
python|regex
| 2 |
1,903,911 | 67,096,624 |
Convert list of string to dict - Remove extra comma
|
<p>I am trying to create a dictionary from a list of strings. My attempt to convert this list of string to list of dictionary is as below:</p>
<pre><code>author_dict = [[dict(map(str.strip, s.split(':')) for s in author_transform.split(','))] for author_transform in list_of_strings]
</code></pre>
<p>Everything was working fine until I encountered this piece of string:</p>
<pre><code>[[country:United States,affiliation:University of Maryland, Baltimore County,name:tim oates,id:2217452330,gridid:grid.266673.0,affiliationid:79272384,order:2],........,[]]
</code></pre>
<p>As this string has an extra comma (,) in the middle of the intended value of the affiliation key, my list is getting split at the wrong place. Is there a way (or idea) I can use to avoid this kind of situation?
If it is not possible, any suggestions on how I can ignore this kind of list?</p>
|
<p>I would solve this by using a regular expression for splitting. This way you can split only on those commas that are followed by a colon without another comma in between.</p>
<p>In your code, replace</p>
<pre class="lang-py prettyprint-override"><code>author_transform.split(',')
</code></pre>
<p>with</p>
<pre class="lang-py prettyprint-override"><code>re.split(',(?=[^,]+:)', author_transform)
</code></pre>
<p>(And don’t forget to <code>import re</code>, of course.)</p>
<p>So, the whole code snippet becomes this:</p>
<pre><code>author_dict = [
[
dict(map(str.strip, s.split(':'))
for s in re.split(',(?=[^,]+:)', author_transform))
]
for author_transform in list_of_strings
]
</code></pre>
<p>I took the liberty of reformatting the code, so the structure of the list comprehensions becomes clear.</p>
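<p>To see why the lookahead helps, here is the split applied to the problematic record from the question (brackets stripped) — the comma inside the affiliation value survives because no <code>key:</code> pattern follows it:</p>

```python
import re

s = ("country:United States,affiliation:University of Maryland, Baltimore County,"
     "name:tim oates,id:2217452330,gridid:grid.266673.0,"
     "affiliationid:79272384,order:2")

# split only on commas that are followed by "something-without-a-comma:"
parts = re.split(',(?=[^,]+:)', s)
print(parts[1])  # affiliation:University of Maryland, Baltimore County
```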
|
python
| 2 |
1,903,912 | 66,807,311 |
Add Widget to Tkinter colorchooser
|
<p>As far as I understand, it is not possible to modify the <code>tkinter.colorchooser.askcolor</code> as it uses the systems colorpicker dialog. Is this true?</p>
<p>from the source code: <a href="https://github.com/python/cpython/blob/3.9/Lib/tkinter/colorchooser.py" rel="nofollow noreferrer">https://github.com/python/cpython/blob/3.9/Lib/tkinter/colorchooser.py</a></p>
<pre><code># this module provides an interface to the native color dialogue
# available in Tk 4.2 and newer.
</code></pre>
<p>The reason being is I wish to add an entry box to the dialog so that I would get the color code and user-entered text returned. Maybe it is possible to embed the dialog within a larger window? Is something like this possible, without using multiple windows?</p>
<p>I cannot find previous discussion anywhere else so I guess it is not a simple issue.</p>
|
<blockquote>
<p>As far as I understand, it is not possible to modify the tkinter.colorchooser.askcolor as it uses the systems colorpicker dialog. Is this true?</p>
</blockquote>
<p>Yes, that is true. At least on Windows and OSX. On Linux it's a <a href="https://github.com/tcltk/tk/blob/main/library/clrpick.tcl" rel="nofollow noreferrer">custom dialog written in tcl/tk</a>. You could start with that code and then make modifications to it, then write a tkinter wrapper around it. That wouldn't be particularly difficult if you know tcl/tk, but it's not exactly trivial either.</p>
<blockquote>
<p>Maybe it is possible to embed the dialog within a larger window?</p>
</blockquote>
<p>No, it's not.</p>
|
python|ubuntu|tkinter
| 1 |
1,903,913 | 48,418,396 |
Writing to json file using function return data in python
|
<blockquote>
<p>I need to add the dictionary returned by the function getvalues into the "data.update".<br>
I can add it as separate JSON, but I am unable to add it inside the fields key. Please check the output and desired output.</p>
</blockquote>
<p>This is the code i have written:</p>
<pre><code>import json
import csv
import glob
import os
csvfile = open('file.csv', 'r')
name = (os.path.splitext('file.csv')[0])
exampleReader = csv.reader(csvfile)
exampleData = list(exampleReader)
def getvalues():
for row in exampleData[:1]:
lis = {}
for r in row:
lis.update({r:r})
return lis
data = {}
data.update({
"pattern": name+'.csv',
"source_args": {
"encoding": "UTF-16"
},
"parser_args": {
"type": "csv",
"delimiter": ","
},
"outputs": [
{
"name": name,
"fields": {
}
}
]
})
result =json.dumps(data)
result1 =json.dumps(getvalues())
file = open("data.json","w")
file.write(result)
file.write(result1)
</code></pre>
<p>Here is the actual output and desired output:</p>
<pre><code>#Output : {"pattern": "file.csv",
"source_args":
{
"encoding": "UTF-16"
},
"parser_args": {
"type": "csv",
"delimiter": ","
}, "outputs":
[
{
"name": "file",
"fields": {}
}
]}
{
"facility_id": "facility_id",
"facility_type": "facility_type",
"facility_name": "facility_name",
"facility_branch": "facility_branch",
}
#Desired Output : {"pattern": "file.csv",
"source_args":
{
"encoding": "UTF-16"
},
"parser_args": {
"type": "csv",
"delimiter": ","
}, "outputs":
[
{
"name": "file",
"fields": {
"facility_id": "facility_id",
"facility_type": "facility_type",
"facility_name": "facility_name",
"facility_branch": "facility_branch",
}
}
]}
</code></pre>
<blockquote>
<p>Please let me know how I can accomplish this.</p>
<p>Update: ERROR<br>
If I add the function directly in the following way — <strong>fields { getvalues() }</strong> — I get the following error.</p>
</blockquote>
<pre><code>Traceback (most recent call last):
File "chej.py", line 50, in <module>
getvalues()
TypeError: unhashable type: 'dict'
</code></pre>
|
<p>You can try this:</p>
<pre><code>"outputs": [
    {
        "name": name,
        "fields": getvalues()
    }
]
</code></pre>
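<p>Putting it together, a minimal sketch (with <code>getvalues()</code> stubbed out and the field list shortened for illustration) — the dictionary goes in as the value of <code>"fields"</code>, and the whole structure is dumped once:</p>

```python
import json

def getvalues():
    # stand-in for the dict built from the CSV header row in the question
    return {r: r for r in ['facility_id', 'facility_type']}

name = 'file'
data = {
    "pattern": name + '.csv',
    "source_args": {"encoding": "UTF-16"},
    "parser_args": {"type": "csv", "delimiter": ","},
    "outputs": [
        {
            "name": name,
            "fields": getvalues(),  # the returned dict nests right here
        }
    ],
}

with open("data.json", "w") as f:
    json.dump(data, f, indent=2)
```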
|
python|json|python-3.x
| 0 |
1,903,914 | 51,463,146 |
python webrtc voice activity detection is wrong
|
<p>I need to do voice activity detection as a step to classify audio files.</p>
<p>Basically, I need to know with certainty if a given audio has spoken language.</p>
<p>I am using py-webrtcvad, which I found in git-hub and is scarcely documented:</p>
<p><a href="https://github.com/wiseman/py-webrtcvad" rel="nofollow noreferrer">https://github.com/wiseman/py-webrtcvad</a></p>
<p>Thing is, when I try it on my own audio files, it works fine with the ones that have speech but keeps yielding false positives when I feed it with other types of audio (like music or bird sound), even if I set aggressiveness at 3.</p>
<p>Audios are 8000 sample/hz</p>
<p>The only thing I changed to the source code was the way I pass the arguments to main function (excluding sys.args).</p>
<pre><code>def main(file, agresividad):
audio, sample_rate = read_wave(file)
vad = webrtcvad.Vad(int(agresividad))
frames = frame_generator(30, audio, sample_rate)
frames = list(frames)
segments = vad_collector(sample_rate, 30, 300, vad, frames)
for i, segment in enumerate(segments):
path = 'chunk-%002d.wav' % (i,)
print(' Writing %s' % (path,))
write_wave(path, segment, sample_rate)
if __name__ == '__main__':
file = 'myfilename.wav'
agresividad = 3 #aggressiveness
main(file, agresividad)
</code></pre>
|
<p>I'm seeing the same thing. I'm afraid that's just the extent to which it works. Speech detection is a difficult task and webrtcvad wants to be light on resources so there's only so much you can do. If you need more accuracy then you would need different packages/methods that will necessarily take more computing power.</p>
<p>On aggressiveness, you're right that even on 3 there are still a lot of false positives. I'm also seeing false negatives however so one trick I'm using is running three instances of the detector, one for each aggressiveness setting. Then instead of classifying a frame 0 or 1 I give it the value of the highest aggressiveness that still said it was speech. In other words each sample now has a score of 0 to 3 with 0 meaning even the least strict detector said it wasn't speech and 3 meaning even the strictest setting said it was. I get a little bit more resolution like that and even with the false positives it is good enough for me.</p>
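<p>A sketch of that scoring idea — the helper name and the <code>vads</code> argument are mine, not part of webrtcvad; the list is assumed to hold <code>webrtcvad.Vad</code> instances ordered from least to most aggressive:</p>

```python
def speech_score(frame, sample_rate, vads):
    """Score a frame from 0 to 3: the highest aggressiveness level whose
    detector still classified the frame as speech (0 = none did)."""
    score = 0
    for level, vad in enumerate(vads, start=1):
        if vad.is_speech(frame, sample_rate):
            score = level
    return score
```

<p>You can then threshold the scores however suits your data, e.g. treat only frames scoring 2 or 3 as speech.</p>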
|
python|audio|webrtc|speech-recognition|voice-recognition
| 1 |
1,903,915 | 17,651,232 |
python pretty printing simple if
|
<p>I have pretty printed some content using the code below. The code prints everything out; how do I print only a specific location, such as Upper Bukit Timah or West Coast, using an IF?</p>
<p>Area: Upper Bukit Timah
Summary: Cloudy
Latitude: 1.356084
Longitude: 103.768873</p>
<p>Area: West Coast
Summary: Cloudy
Latitude: 1.30039493
Longitude: 103.7504196</p>
<p>Area: Woodlands
Summary: Cloudy
Latitude: 1.44043052
Longitude: 103.7878418</p>
<p>Area: Yishun
Summary: Cloudy
Latitude: 1.42738834
Longitude: 103.8290405</p>
<pre><code>import urllib2
from BeautifulSoup import BeautifulStoneSoup #Using bs3
url="https://api.projectnimbus.org/neaodataservice.svc/NowcastSet"
request = urllib2.Request(url)
request.add_header("accept", "*/*")
request.add_header('AccountKey', "OSJeROQjTg4v7Ec3kiecjw==")
request.add_header('UniqueUserID', "00000000000000000000000000000001")
result = urllib2.urlopen(request)
xml_str = result.read()
soup = BeautifulStoneSoup(xml_str)
prop_list = []
for content in soup.findAll("m:properties"):
props = {}
for prop in content.findChildren():
props[prop.name[2:]] = prop.text
prop_list.append(props)
for prop in sorted(prop_list):
print "Area: %(area)s\nSummary: %(summary)s\nLatitude: %(latitude)s\nLongitude: %(longitude)s\n" % prop
</code></pre>
|
<p>Well, you'd have to add an <code>if</code> statement to your final <code>for</code> loop, checking whether the current entry is in some positive list. Something like this:</p>
<pre><code>areas_to_print = ["Upper Bukit Timah", "West Coast", "Woodlands", "Yishun"]
for prop in sorted(prop_list):
if prop["area"] in areas_to_print:
print "Area: %(area)s\nSummary: %(summary)s\nLatitude: %(latitude)s\nLongitude: %(longitude)s\n" % prop
</code></pre>
<p>Alternatively, you could just as well add that same <code>if</code> statement to your first <code>for</code> loop, so only those entries are added to the <code>prop_list</code> in the first place.</p>
|
python|if-statement|beautifulsoup
| 0 |
1,903,916 | 55,872,271 |
Split list into randomised ordered sub lists
|
<p>I would like to improve the below code to split a list of values into two sub lists, which have been randomised and sorted. The below code works, but I'm sure there is a better/cleaner way to do it.</p>
<pre><code>import random
data = list(range(1, 61))
random.shuffle(data)
Intervention = data[:30]
Control = data[30:]
Intervention.sort()
Control.sort()
f = open('Randomised_Groups.txt', 'w')
f.write('Intervention Group = ' + str(Intervention) + '\n' + 'Control Group = ' + str(Control))
f.close()
</code></pre>
<p>The expected output is:</p>
<pre><code>Intervention = [1,3,7,9]
Control = [2,4,5,6,8,10]
</code></pre>
|
<p>Something like this might be what you want: </p>
<pre><code>import random
my_rng = [random.randint(0,1) for i in range(60)]
Control = [i for i in range(60) if my_rng[i] == 0]
Intervention = [i for i in range(60) if my_rng[i] == 1]
print(Control)
</code></pre>
<p>The idea is to create 60 random 1s or 0s to use as indicators for which list to put each number in. This will only work if you do not need the two lists to be the same length. To get the same length would require changing how <code>my_rng</code> is created in this example.</p>
<p>I have tinkered a bit further and got the lists of the same length: </p>
<pre><code>import random
my_rng = [0 for i in range(30)]
my_rng.extend([1 for i in range(30)])
random.shuffle(my_rng)
Control = [i for i in range(60) if my_rng[i] == 0]
Intervention = [i for i in range(60) if my_rng[i] == 1]
</code></pre>
<p>Here, instead of adding randomly 1 or 0 to <code>my_rng</code> I get a list of 30 0s and 30 1s to shuffle, then continue like before. </p>
|
python-3.x
| 1 |
1,903,917 | 73,486,227 |
How do you terminate a running file in a Windows command prompt using Python?
|
<p>For example, I have a Test.exe file. Using a Python script, I opened CMD, did a cd (moved to the directory path) and started <strong>Test.exe</strong>. It keeps running until we force an exit; through the UI we use CTRL+C to stop the running file. But how can I exit it using a Python automation script? I also need to read the cmd data.</p>
|
<p>You can try</p>
<pre><code>import os
os.system("taskkill /im test.exe")
</code></pre>
<p>or</p>
<pre><code>os.system('wmic process where name="test.exe" delete')
</code></pre>
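<p>If you also need to read the program's output, the <code>subprocess</code> module is more flexible than <code>os.system</code>. A sketch (the child commands here are placeholders — substitute your <code>test.exe</code> invocation):</p>

```python
import subprocess
import sys

# run a program and capture everything it prints
result = subprocess.run(
    [sys.executable, "-c", "print('hello from child')"],  # e.g. ["test.exe"]
    capture_output=True, text=True, timeout=30,
)
print(result.stdout.strip())

# for a long-running process, start it with Popen and stop it on demand
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
proc.terminate()  # ask the process to stop (similar in spirit to CTRL+C)
proc.wait()
```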
|
python|windows|cmd|automation|command-prompt
| 1 |
1,903,918 | 49,807,789 |
Tensorflow installation: 'utf-8' codec can't decode byte
|
<p>I am trying to install Tensorflow on my computer. The python version is 3.6.5 64x and I believe all the prerequisites are satisfied. Below is the error I get. Do you know how to solve this problem?</p>
<pre><code> Building wheels for collected packages: absl-py
Running setup.py bdist_wheel for absl-py ... error
Failed building wheel for absl-py
Running setup.py clean for absl-py
Failed to build absl-py
Installing collected packages: absl-py, tensorflow
Running setup.py install for absl-py ... error
Exception:
Traceback (most recent call last):
File "c:\users\name\appdata\local\programs\python\python36\lib\site-packages\pip\compat\__init__.py", line 73, in console_to_str
return s.decode(sys.__stdout__.encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb9 in position 24: invalid start byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\name\appdata\local\programs\python\python36\lib\site-packages\pip\basecommand.py", line 215, in main
status = self.run(options, args)
File "c:\users\name\appdata\local\programs\python\python36\lib\site-packages\pip\commands\install.py", line 342, in run
prefix=options.prefix_path,
File "c:\users\name\appdata\local\programs\python\python36\lib\site-packages\pip\req\req_set.py", line 784, in install
**kwargs
File "c:\users\name\appdata\local\programs\python\python36\lib\site-packages\pip\req\req_install.py", line 878, in install
spinner=spinner,
File "c:\users\name\appdata\local\programs\python\python36\lib\site-packages\pip\utils\__init__.py", line 676, in call_subprocess
line = console_to_str(proc.stdout.readline())
File "c:\users\name\appdata\local\programs\python\python36\lib\site-packages\pip\compat\__init__.py", line 75, in console_to_str
return s.decode('utf_8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb9 in position 24: invalid start byte
</code></pre>
|
<p>Try this or try to get the different tfBinaryURL from <a href="https://www.tensorflow.org/install/install_linux#the_url_of_the_tensorflow_python_package" rel="nofollow noreferrer">https://www.tensorflow.org/install/install_linux#the_url_of_the_tensorflow_python_package</a></p>
<pre><code>pip3 install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.7.0-cp36-cp36m-linux_x86_64.whl
</code></pre>
|
python|tensorflow
| 0 |
1,903,919 | 62,033,838 |
Change x labels of matplotlib graph to particular words using ax object
|
<p>given the following data:</p>
<pre class="lang-py prettyprint-override"><code>mock_data_x = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3]
mock_data_y = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
mock_data_val = [
"a",
"b",
"c",
"d",
"e",
"d",
"e",
"a",
"b",
"c",
"d",
"a",
"b",
"e",
"c",
]
df_mock = pd.DataFrame(dict(x=mock_data_x, y=mock_data_y, v=mock_data_val,))
</code></pre>
<p>which looks as:</p>
<pre><code> x y v
0 1 1 a
1 1 2 b
2 1 3 c
3 1 4 d
4 1 5 e
5 2 1 d
6 2 2 e
7 2 3 a
8 2 4 b
9 2 5 c
10 3 1 d
11 3 2 a
12 3 3 b
13 3 4 e
14 3 5 c
</code></pre>
<p>I can create the following plot:</p>
<pre class="lang-py prettyprint-override"><code>fig, ax = plt.subplots(figsize=(8, 5))
x_axis_labels = ["one", "two", "three"]
values = df_mock["v"].unique()
for val in values:
dt = df_mock[df_mock["v"].eq(val)]
ax.scatter(dt["x"], dt["y"])
ax.plot(dt["x"], dt["y"])
positions = [1, 2, 3]
labels = ["r", "q"]
_ = plt.xticks(positions, x_axis_labels)
</code></pre>
<p>Which looks as:</p>
<p><a href="https://i.stack.imgur.com/QLCU0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QLCU0.png" alt="enter image description here"></a></p>
<p>I feel as though there should be an alternative to the line</p>
<pre class="lang-py prettyprint-override"><code>_ = plt.xticks(positions, x_axis_labels)
</code></pre>
<p>Something which actually uses the <code>ax</code> object rather than <code>plt</code>.</p>
<p>I've looked in <code>dir(ax)</code>, and <code>dir(ax.xaxis)</code>, and it's not obvious what I
should use to achieve this.</p>
|
<p>The equivalent for <code>ax</code> is:</p>
<pre><code>ax.set_xticks(positions)
ax.set_xticklabels(x_axis_labels)
</code></pre>
<p>and you get pretty much the same plot.</p>
<p>However, for this case, you can simply use <code>map</code> and pandas' plot function:</p>
<pre><code>maps = {p:v for p,v in zip(positions, x_axis_labels)}
fig, ax = plt.subplots(figsize=(8,5))
(df_mock.set_index(['x','v'])['y']
.unstack()
.rename(index=maps)
.plot(marker='o', ax=ax)
)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/u9JO3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u9JO3.png" alt="enter image description here"></a></p>
|
python|matplotlib|plot|data-visualization
| 1 |
1,903,920 | 60,683,144 |
Pandas groupby data frame for duplicate rows
|
<p>I have the following data frame:</p>
<pre><code>data = dict(t=[0, 1, 0, 1], s=[0, 31, 4, 26])
df = pd.DataFrame(data=data)
</code></pre>
<p>How can I use <code>df.groupby(['t'])</code> in order to end up with a data frame that looks like this:</p>
<pre><code>t s_0 s_1
0 0 31
1 4 26
</code></pre>
<p>Thanks for any help.</p>
|
<p>The idea is to create a new <code>Series</code> for each group with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><code>GroupBy.apply</code></a>, then reshape with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>Series.unstack</code></a> on the first level, and finally do some data cleaning:</p>
<pre><code>df1 = (df.groupby('t')['s']
.apply(lambda x: pd.Series(x.to_numpy()))
.unstack(0)
.add_prefix('s_')
.rename_axis(index='t', columns=None)
.reset_index()
)
print (df1)
t s_0 s_1
0 0 0 31
1 1 4 26
</code></pre>
|
python|pandas|dataframe
| 1 |
1,903,921 | 63,705,962 |
How to use a client certificate from the Windows certificate store in python?
|
<p>I want to invoke a web request using a client certificate (public+private key) stored in the Windows certificate store.</p>
<p>With PowerShell my call would look like this (this works):<br />
<code>Invoke-WebRequest -CertificateThumbprint $thumbprint -Uri $uri</code></p>
<p>Now I am searching for an equivalent in python. I do not want to extract the certificate and pass the file but directly use the store or at least only keep the certificate in memory.</p>
<p>I have tried <a href="https://pypi.org/project/wincertstore/" rel="nofollow noreferrer">wincertstore</a> but the certificate lies in the UserStore(cert:\CurrentUser\My) so I cannot access it. Same problem with <a href="https://docs.python.org/3/library/ssl.html#ssl.enum_certificates" rel="nofollow noreferrer">sslContext</a>.</p>
<p>Installing <a href="https://pypi.org/project/python-certifi-win32/" rel="nofollow noreferrer">python-certifi-win32</a> as mentioned in this <a href="https://stackoverflow.com/a/57053415/14208537">answer</a> seems to only load the CA-certificates in order to verify the server, but what I need is a client certificate to verify myself against the server.</p>
<p>Are there any ways other than calling powershell with subprocess to achieve this?<br />
Many thanks in advance.</p>
|
<p>For anyone with the same problem. I solved it using <a href="https://pypi.org/project/pythonnet/" rel="nofollow noreferrer">clr</a> to export the certificate into memory and <a href="https://pypi.org/project/requests-toolbelt/" rel="nofollow noreferrer">requests_toolbelt</a> to use it with requests.</p>
<p>Code example to make it work:</p>
<pre><code>import clr
import requests
import requests_toolbelt
from cryptography.hazmat.primitives.serialization.pkcs12 import load_key_and_certificates
from cryptography.hazmat.primitives.serialization import Encoding, PrivateFormat, NoEncryption
from cryptography.hazmat.backends import default_backend
from requests_toolbelt.adapters.x509 import X509Adapter
clr.AddReference('System')
clr.AddReference('System.Linq')
clr.AddReference('System.Security.Cryptography.X509Certificates')
clr.AddReference('System.Security.Cryptography')
from System.Security.Cryptography.X509Certificates import X509Store, StoreName, StoreLocation,OpenFlags,X509Certificate2Collection,X509FindType,X509Certificate2, X509ContentType
from System.Security.Cryptography import AsymmetricAlgorithm
store = X509Store(StoreName.My, StoreLocation.CurrentUser)
store.Open(OpenFlags.ReadOnly)
user = os.environ['USERNAME']
certCollection = store.Certificates.Find(
X509FindType.FindBySubjectName,
user,
False)
cert = certCollection.get_Item(0)
pkcs12 = cert.Export(X509ContentType.Pkcs12, <passphrase>)
backend = default_backend()
pkcs12_password_bytes = "<password>".encode('utf8')
pycaP12 = load_key_and_certificates(pkcs12, pkcs12_password_bytes, backend)
cert_bytes = pycaP12[1].public_bytes(Encoding.DER)
pk_bytes = pycaP12[0].private_bytes(Encoding.DER, PrivateFormat.PKCS8, NoEncryption())
adapter = X509Adapter(max_retries=3, cert_bytes=cert_bytes, pk_bytes=pk_bytes, encoding=Encoding.DER)
session = requests.Session()
session.mount('https://', adapter)
session.get('url', verify=True)
</code></pre>
|
python|windows|certificate
| 2 |
1,903,922 | 61,118,819 |
python effective binary exponentation with modulo
|
<p>What is the most effective way of implementing binary exponentiation in Python?
This is my approach:</p>
<pre><code>def quad_pow(base, exponent, modul):
    alpha = (bin(exponent).replace('0b', ''))[::-1]
    a = 1
    b = base
    for i in range(0, len(alpha)):
        if int(alpha[i]) == 1:
            a = (a * b) % modul
        b = (b*b) % modul
    return a
</code></pre>
<p>Is this the best way of doing it?</p>
|
<p>A method that is roughly 2X faster than the OP's and comparable to the builtin function:</p>
<p><strong>Code</strong></p>
<pre><code>def power_mod(b, e, m):
x = 1
while e > 0:
if e % 2:
b, e, x = (b * b) % m, e // 2, (b * x) % m
else:
b, e, x = (b * b) % m, e // 2, x
return x
</code></pre>
<p><strong>Timing Summary</strong></p>
<blockquote>
<p>Normal Integers</p>
<ol>
<li>2X faster than quad_pow</li>
<li>only ~20% slower than the native function</li>
</ol>
<p>Big Integers</p>
<ol>
<li>power_mod and quad_pow comparable in speed</li>
<li>pow (native) is ~2X faster</li>
</ol>
</blockquote>
<p><strong>Timing Details</strong></p>
<p><em>Normal Integers (i.e. int64)</em></p>
<pre><code>a = 1234
b = 15
c = 1000000007
Timing: quad_pow
%timeit quad_pow(a, b, c)
4.69 µs ± 167 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Timing: power_mod
%timeit power_mod(a, b, c)
2.05 µs ± 39.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Timing: pow (Python builtin function)
power(a, b, c)
1.73 µs ± 37 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
</code></pre>
<p><em>Big Integers (i.e. requires arbitrary precision)</em></p>
<pre><code>a = 2988348162058574136915891421498819466320163312926952423791023078876139
b = 2351399303373464486466122544523690094744975233415544072992656881240319
m = 10 ** 40
Timing: quad_pow
%timeit quad_pow(a, b, c)
263 µs ± 5.86 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Timing: power_mod
%timeit power_mod(a, b, c)
263 µs ± 8.05 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Timing: pow (Python builtin function)
power(a, b, c)
144 µs ± 2.05 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
|
python|math
| 2 |
1,903,923 | 58,703,546 |
python error lib dont execute the library
|
<p>Before enter data, I import a lib, but this lib give an error like this /</p>
<blockquote>
<p>Warning (from warnings module): File
"C:\Users\Samuel\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pydub\utils.py",
line 165
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning) RuntimeWarning: Couldn't find ffmpeg or
avconv - defaulting to ffmpeg, but may not work</p>
<p>Warning (from warnings module): File
"C:\Users\Samuel\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pydub\utils.py",
line 179
warn("Couldn't find ffplay or avplay - defaulting to ffplay, but may not work", RuntimeWarning) RuntimeWarning: Couldn't find ffplay or
avplay - defaulting to ffplay, but may not work</p>
</blockquote>
|
<p>TL;DR: As the source code indicates, you should install <code>ffmpeg</code> and add it to your %PATH%. Since <code>ffplay</code> comes with <code>ffmpeg</code>, this should solve your problem.</p>
<p>You can install <code>ffmpeg</code> here: <a href="http://ffmpeg.org/" rel="nofollow noreferrer">http://ffmpeg.org/</a></p>
<p>After installation, you can open your control panel, and then search <em>environment</em>. There you can adjust your %PATH% variable. Add the <code>ffmpeg</code> installation's binary path to the %PATH%. </p>
<p>And here's why from source code:</p>
<pre class="lang-py prettyprint-override"><code>def get_encoder_name():
"""
Return enconder default application for system, either avconv or ffmpeg
"""
if which("avconv"):
return "avconv"
elif which("ffmpeg"):
return "ffmpeg"
else:
# should raise exception
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
return "ffmpeg"
def get_player_name():
"""
Return enconder default application for system, either avconv or ffmpeg
"""
if which("avplay"):
return "avplay"
elif which("ffplay"):
return "ffplay"
else:
# should raise exception
warn("Couldn't find ffplay or avplay - defaulting to ffplay, but may not work", RuntimeWarning)
return "ffplay"
def which(program):
"""
Mimics behavior of UNIX which command.
"""
# Add .exe program extension for windows support
if os.name == "nt" and not program.endswith(".exe"):
program += ".exe"
envdir_list = [os.curdir] + os.environ["PATH"].split(os.pathsep)
for envdir in envdir_list:
program_path = os.path.join(envdir, program)
if os.path.isfile(program_path) and os.access(program_path, os.X_OK):
return program_path
</code></pre>
<p>From this we can know that it looks up those programs from your environment variable %PATH%. And that's why installing those softwares and adding them to your %PATH% should solve the problem.</p>
|
python
| 0 |
1,903,924 | 65,630,676 |
Python f-string surprising results on floats
|
<p>I am trying to format float numbers in a fixed point notation: x.xxx, three digits following the decimal point regardless of the value of the number. I am getting surprising results. The first in particular would suggest that it is giving me <em>three significant places</em> rather than <em>three digits after the decimal point</em>. How do I tell it what I really want?</p>
<pre><code>>>> print(f"{.0987:5.03}")
0.0987
*expected: 0.099*
>>> print(f"{0.0:05.03}")
000.0
*expected: 0.000*
>>> print(f"{0.0:5.3}")
0.0
</code></pre>
|
<pre><code># add the "f" presentation type to get fixed-point notation with 3 decimal places
print(f"{.0987:5.3f}")   # 0.099
print(f"{0.0:05.3f}")    # 0.000
print(f"{0.0:5.3f}")     # 0.000
</code></pre>
|
python|if-statement|formatting|f-string
| 1 |
1,903,925 | 61,563,835 |
Calculate AWS Comprehend Sentiment cost
|
<p>I'd like to programmatically estimate the cost to call the AWS Comprehend Sentiment API. I searched SO and the <a href="https://calculator.aws/#/addService" rel="nofollow noreferrer">AWS calculators</a> but couldn't find a way. Also, I'm sure the costs for the amount of text I'll be sending will be small but I really want to know. </p>
<p>Based on the pricing info <a href="https://aws.amazon.com/comprehend/pricing/" rel="nofollow noreferrer">here</a> I wrote the code below. Is it correct?</p>
<pre><code>import math

text = ["What a horrible rainy day today",
"What a great day today",
"This is a neutral statement"]
numChars = sum(len(i) for i in text)
#Sentiment is measured in units of 100 characters, with a 3 unit (300 character) minimum charge per request.
numUnits = int(math.ceil(numChars / 100))
# Up to 10M units
if numUnits < 10000000:
pricePerunit = 0.0001
sentimentCost = numUnits * pricePerunit
# From 10M-50M units
elif numUnits >= 10000000 and numUnits <= 50000000:
pricePerunit = 0.0001
sentimentCost = 9999999 * pricePerunit
pricePerunit = 0.00005
sentimentCost = sentimentCost + ((numUnits - 10000000) * pricePerunit)
# Over 50M units.
elif numUnits > 50000000:
pricePerunit = 0.0001
sentimentCost = 9999999 * pricePerunit
pricePerunit = 0.00005
sentimentCost = sentimentCost + (40000000 * pricePerunit)
pricePerunit = 0.000025
sentimentCost = sentimentCost + ((numUnits - 49999999) * pricePerunit)
print("\nEstimated $ charges to call AWS Comprehend Sentiment are: %0.5f\n" % sentimentCost)
</code></pre>
|
<p>No, this calculation is not correct. Specifically:</p>
<ul>
<li>you need to round up for units so use <code>math.ceil(numChars / 100)</code></li>
<li>the cost/unit is different for the first 10M, the next 40M, and anything beyond 50M, and you have mistakenly assumed that <em>all</em> units are charged at the marginal rate. Your code will calculate the cost of 10M+1 units as (10M+1) * 0.00005 when it should be 10M*0.0001 + 1*0.00005</li>
<li>also, your code will crash with exactly 10000000 or 50000000 units</li>
</ul>
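<p>For reference, a corrected sketch of the tiered math — marginal pricing per tier plus the 3-unit-per-request minimum the question's comments mention (prices taken from the question; check the current AWS pricing page before relying on them):</p>

```python
import math

def sentiment_cost(num_chars, num_requests=1):
    # units of 100 characters, rounded up, 3-unit minimum per request
    num_units = max(math.ceil(num_chars / 100), 3 * num_requests)
    # (units in tier, price per unit): first 10M, next 40M, everything after
    tiers = [(10_000_000, 0.0001), (40_000_000, 0.00005), (float('inf'), 0.000025)]
    cost, remaining = 0.0, num_units
    for tier_size, price in tiers:
        in_tier = min(remaining, tier_size)
        cost += in_tier * price
        remaining -= in_tier
        if remaining == 0:
            break
    return cost

print(round(sentiment_cost(250), 6))            # 3-unit minimum applies
print(round(sentiment_cost(1_500_000_000), 2))  # spans the first two tiers
```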
|
python-3.x|amazon-web-services|amazon-comprehend
| 1 |
1,903,926 | 28,523,556 |
If I don't use def rewind()..than what can I do to print 3 lines separately
|
<pre><code>from sys import argv
script, input_file = argv
def print_all(f):
print f.read()
def rewind(f):
f.seek(0)
def print_a_line(line_count, f):
print line_count, f.readline()
current_file = open(input_file)
print "First let's print the whole file:\n"
print_all(current_file)
print "Now let's rewind, kind of like a tape."
rewind(current_file)
print "Let's print three lines:"
current_line = 1
print_a_line(current_line, current_file)
current_line = current_line + 1
print_a_line(current_line, current_file)
current_line = current_line + 1
print_a_line(current_line, current_file)
</code></pre>
|
<p>Using <code>seek(0)</code> is the simplest way to re-read the contents of a file. Alternatively, you can simply <code>close()</code> the file and <code>open()</code> it again.</p>
<p>But you can't always <code>seek()</code> (or re-open) a file-like object, eg if it's a terminal or a pipe. So if you need to access its contents multiple times you should read it once, saving its contents into a string, or even better, into a list of lines. You can use <code>.readlines()</code> to read the file directly into a list of lines, or you can <code>.read()</code> it into a string and then use the <code>str.splitlines()</code> method to create a list from that string.</p>
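<p>A small sketch of that read-once approach, written against a file-like object so it also works on pipes and <code>StringIO</code>:</p>

```python
def print_file_then_lines(f):
    """Print a file's whole contents, then its first three lines again,
    without seeking: read once into a list and reuse it."""
    lines = f.readlines()
    print(''.join(lines), end='')
    for i, line in enumerate(lines[:3], start=1):
        print(i, line, end='')
```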
|
python
| 0 |
1,903,927 | 41,351,156 |
How to insert javascript code into Jupyter
|
<p>I'm trying to insert this script on custom.js. I changes to color red all the negative currency.</p>
<p>I want it to be applied to all pandas dataframes printed on Jupyter. After adding it to all custom.js available on jupyter/anaconda folders, it still didn't change anything. Can someone help me?</p>
<pre><code>var allTableCells = document.getElementsByTagName("td");
for(var i = 0, max = allTableCells.length; i < max; i++) {
var node = allTableCells[i];
//get the text from the first child node - which should be a text node
var currentText = node.childNodes[0].nodeValue;
//check for 'one' and assign this table cell's background color accordingly
if (currentText.includes("$ -"))
node.style.color = "red";
}
</code></pre>
|
<pre><code>%%javascript
var allTableCells = document.getElementsByTagName("td");
for(var i = 0, max = allTableCells.length; i < max; i++) {
var node = allTableCells[i];
//get the text from the first child node - which should be a text node
var currentText = node.childNodes[0].nodeValue;
//check for 'one' and assign this table cell's background color accordingly
if (currentText.includes("$ -"))
node.style.color = "red";
}
</code></pre>
|
javascript|python|pandas|anaconda|jupyter
| 3 |
1,903,928 | 49,659,968 |
django class list object and filtering lines
|
<p>In Django I have a list of phone results. Here is a screenshot below:</p>
<p><a href="https://i.stack.imgur.com/e1XLH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e1XLH.png" alt="List details"></a></p>
<p>I am trying to query the list and get the result lines (like in querysets, where I can use <code>filter(condition)</code>).</p>
<p>In my example I want to get phonetype=='Cep' lines. I can do it by a for loop and if condition; however if there is a more decent way I want to learn it.
Thanks.</p>
|
<p>Instead of basic <code>for</code> loop with <code>if</code> condition you can use <a href="https://www.python-course.eu/list_comprehension.php" rel="nofollow noreferrer">List Comprehension</a>.</p>
<p>For your use-case it should look like something like this:</p>
<pre><code>results = [phone for phone in phones if phone.get('phonetype')=='Cep']
</code></pre>
|
python|django
| 1 |
1,903,929 | 41,192,236 |
Executing a file a large number of times
|
<p>How do I get a file to run a large number of times, say even a million? For instance, randomly choose a number from a list a million times and find it's average. Example:</p>
<pre><code>fib = [2,3,5,8,13,21,34,55,89]
i = random.choice(fib)
print i
</code></pre>
<p>I want the average of a million trials. It seems like the method around here is to help and not so much feed me the answer. That is greatly appreciated as well. </p>
|
<p>How about looping a million times, summing up the chosen values and dividing by a million:</p>
<pre><code>from __future__ import print_function
import random
n = 1e6
fib = [2,3,5,8,13,21,34,55,89]
print(sum(random.choice(fib) for _ in range(int(n))) / n)
</code></pre>
<p>Output:</p>
<pre><code>25.565039
</code></pre>
<p>The above code contains a <a href="https://docs.python.org/3/glossary.html#term-generator-expression" rel="nofollow noreferrer">generator expression</a>. It is equivalent to this loop version:</p>
<pre><code>sum_ = 0
for x in range(int(n)):
sum_ += random.choice(fib)
print(sum_/n)
</code></pre>
<p>Output:</p>
<pre><code>25.576006
</code></pre>
|
python-2.7
| 1 |
1,903,930 | 40,042,596 |
why windll.user32.GetWindowThreadProcessID can't find the function?
|
<p>I'm reading <em>Black Hat Python</em> and in chapter 8 I find "user32.GetWindowThreadProcessID(hwnd,byref(pid))" doesn't work, just like the picture shows.</p>
<p>It seems that python can't find <em>GetWindowThreadProcessID</em>, but it can find <em>GetForegroundWindow</em> which also is exported from user32.dll.</p>
<p>I also try "windll.LoadLibrary("user32.dll")", but it still doesn't work.</p>
<p>Thank you!</p>
|
<p>It should work if your OS version is at least Windows 2000 Professional. Note the capitalization: the symbol exported from <code>user32.dll</code> is <code>GetWindowThreadProcessId</code> (lowercase <code>d</code>), and <code>ctypes</code> attribute lookup is case-sensitive, so <code>GetWindowThreadProcessID</code> is not found:</p>
<pre><code>import ctypes
import ctypes.wintypes
pid = ctypes.wintypes.DWORD()
hwnd = ctypes.windll.user32.GetForegroundWindow()
print( ctypes.windll.user32.GetWindowThreadProcessId(hwnd,ctypes.byref(pid)) )
</code></pre>
|
python
| 2 |
1,903,931 | 43,880,018 |
My image will print to PNG but nothing is showing, it does show the image when plot.show is run
|
<p>My code is as follows. I am using MatPlotLib to create an image, but the image is not rendering in the png. Someone, please fix my code so it will render. </p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
#Sets the size of the chart
from pylab import rcParams
rcParams['figure.figsize'] = 7, 2
#AverageAnnualMedicalCostPerEE = "{}".format('jpg')
#print AverageAnnualMedicalCostPerEE
#Creates the dataframe
raw_data = {'plan_type': ['Total Annual Cost, Single', 'Total Annual Cost, Family'],
'Your Plan': [6000, 3000],
'Benchmark': [4800, 1600],
'Region': [1800, 2800],
'Industry': [4900, 1300],
'Size': [5700, 1600],
}
data = ['Your Plan','Benchmark','Region','Industry','Size']
df = pd.DataFrame(raw_data,
columns = ['plan_type','Your Plan', 'Benchmark', 'Region',
'Industry', 'Size'])
#Plots the bars, adding desired colors
ax = df.plot.bar(rot=0, color=['#ffc000',"#305496", '#8ea9db', '#b4c6e7',
'#D9E1F2'],
width = 0.8 )
#Adds data labels to top of bars
for p in ax.patches[0:]:
h = p.get_height()
x = p.get_x()+p.get_width()/2.
if h != 0:
ax.annotate( '$' + "%g" % p.get_height(), xy=(x,h), xytext=(0,4),
rotation=0,
textcoords="offset points", ha="center", va="bottom",
fontsize='small', color='grey')
# Remove Bordering Frame
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_color('#B4C7E7')
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
# Remove Y-axis Labels
ax.axes.get_yaxis().set_visible(False)
#Sets x-axis limits and margins
ax.set_xlim(-0.5, +1.5)
ax.margins(y=0)
# Set Y-Axis Ticks to the Right
ax.yaxis.tick_right()
# Set the Y Axis Limits
ax.set_ylim(0,(df[['Your Plan','Benchmark','Region','Industry','Size']].max(axis=1).max(axis=0)*1.5))
ax.margins(y=0)
#Adds legend to the top of the page
ax.legend(ncol=len(df.columns), loc="lower left", bbox_to_anchor=(0,1.02,1,0.08),
          borderaxespad=0, mode="expand", frameon=False, fontsize='small')
#Add labels to the x-axis
ax.set_xticklabels(df["plan_type"], fontsize='small')
ax.xaxis.set_ticks_position('none')
#shows the plot and prints it to
plt.show()
plt.savefig('AverageAnnualMedicalCostPerEE.png')
</code></pre>
<p>So, again I am looking to get a png I can then later import into a table and add to my story. The latter is easy, except the image rendering issue. Please let me know if you can solve this, probably a quick fix.</p>
|
<p>I think you should change the order of saving the image and showing it, since the figure will be reset after the <code>plt.show()</code>. So you should be able to fix this by either removing the <code>plt.show()</code> command from your code or switch <code>plt.savefig(...)</code> and <code>plt.show()</code>.</p>
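<p>A minimal sketch of the corrected order (the <code>Agg</code> backend line is an assumption added only so the sketch runs without a display; it is not part of the fix):</p>

```python
import matplotlib
matplotlib.use("Agg")   # headless backend, assumed for environments without a display
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [4, 5, 6])
plt.savefig("AverageAnnualMedicalCostPerEE.png")   # save while the figure still exists
plt.show()   # show last; after show() the current figure may be reset/blank
```

<p>The key point is simply that <code>savefig</code> comes before <code>show</code>.</p>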
|
python|image|matplotlib|reportlab
| 1 |
1,903,932 | 51,528,480 |
Write date and variable to file
|
<p>I am trying to write a variable and the date and time on the same line to a file, which will simulate a log file.</p>
<p>Example: <code>July 25 2018 6:00 pm - Variable contents here</code></p>
<p>So far I am able to write the variable to the file but I am unsure how to use the datetime library or other similar libraries. Some guidance would be appreciated.</p>
<p>Below is the current script.</p>
<pre><code>import subprocess
import datetime
var = "test"
with open('auditlog.txt', 'a') as logfile:
logfile.write(var + "\n")
</code></pre>
|
<p>The fastest way I found is doing something like this:</p>
<pre><code>import time
var = time.asctime()
print(var)
</code></pre>
<p>Result: Thu Jul 26 00:46:04 2018</p>
<p>If you want to change the placement of y/m/d etc. you can alternatively use <code>time.strftime</code>. Note that <code>%I</code> is the 12-hour clock and <code>%p</code> the real AM/PM marker; pairing the 24-hour <code>%H</code> with a hardcoded "pm" gives misleading output:</p>
<pre><code>import time
var = time.strftime("%B %d %Y %I:%M %p", time.localtime())
print(var)
</code></pre>
<p>Result: July 26 2018 12:50 AM</p>
<p>Have a look <a href="https://docs.python.org/2/library/time.html#time.strftime" rel="nofollow noreferrer">here</a>.</p>
<p>By the way, is the <code>subprocess</code> import intended in your code? You don't need it to open/write to files. Also, since you open the file with a <code>with</code> statement, it is closed automatically when the block ends, so no explicit <code>logfile.close()</code> is needed.</p>
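<p>Putting the two pieces together with the question's script (the strftime format string here is one arbitrary choice; f-strings require Python 3.6+):</p>

```python
import datetime

var = "test"
# %I is the 12-hour clock, %p the AM/PM marker
timestamp = datetime.datetime.now().strftime("%B %d %Y %I:%M %p")
with open("auditlog.txt", "a") as logfile:   # closed automatically at block end
    logfile.write(f"{timestamp} - {var}\n")
```

<p>Each call appends one line of the form <code>July 25 2018 06:00 PM - test</code> to the log file.</p>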
|
python-3.x|date|variables|logging
| 0 |
1,903,933 | 70,472,278 |
argparse input range function
|
<p>I need to pass a function in command line and parsing with argparse.
How can I do that?</p>
<p>Eg.
<code>python program.py --params a=range(1, 10, 1), b=range(2, 5, 1)...</code></p>
<p>I've tested:
<code>parser.add_argument('-p', "--params", type=json.loads)</code></p>
<p>But when I try to launch cmd:
<code>python program.py --p {"a": range(1, 10, 1}</code></p>
<p>returns this error:
<code>error: argument -p/--params: invalid loads value: '{"a": range(1,10,1)}'</code></p>
|
<p>You should create your own custom function for that and pass it via <code>type</code>.</p>
<p>e.g.</p>
<pre><code>>>> def hyphenated(string):
... return '-'.join([word[:4] for word in string.casefold().split()])
...
>>> parser = argparse.ArgumentParser()
>>> _ = parser.add_argument('short_title', type=hyphenated)
>>> parser.parse_args(['"The Tale of Two Cities"'])
Namespace(short_title='"the-tale-of-two-citi')
</code></pre>
<p>more info: <a href="https://docs.python.org/3/library/argparse.html#type" rel="nofollow noreferrer">here</a></p>
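<p>As a concrete sketch for the question's use case: <code>range(...)</code> objects can't be expressed in JSON, so one option is to invent a small format such as <code>a=1:10:1,b=2:5:1</code> and convert it in the <code>type</code> function. Both the colon format and the <code>parse_ranges</code> name are assumptions here, not part of argparse:</p>

```python
import argparse

def parse_ranges(string):
    # hypothetical mini-format: "a=1:10:1,b=2:5:1" -> {'a': range(1, 10, 1), ...}
    params = {}
    for pair in string.split(','):
        name, spec = pair.split('=')
        start, stop, step = (int(x) for x in spec.split(':'))
        params[name] = range(start, stop, step)
    return params

parser = argparse.ArgumentParser()
parser.add_argument('-p', '--params', type=parse_ranges)
args = parser.parse_args(['--params', 'a=1:10:1,b=2:5:1'])
print(args.params)   # {'a': range(1, 10), 'b': range(2, 5)}
```

<p>On the command line this would be invoked as <code>python program.py --params a=1:10:1,b=2:5:1</code> (quote the value if it contains spaces).</p>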
|
python|function|argparse
| 1 |
1,903,934 | 69,908,066 |
Scrapy Scrape crawlspider next page with input tag
|
<p>I'm using scrapy and crawlspinder. I want to get all posts on this <a href="http://pstrial-2019-12-16.toscrape.com/browse/insunsh" rel="nofollow noreferrer">website</a>. Here is the code.</p>
<pre><code>import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
class ToscrapeSpider(CrawlSpider):
name = 'toscrape'
allowed_domains = ['pstrial-2019-12-16.toscrape.com']
start_urls = ['http://pstrial-2019-12-16.toscrape.com/browse/insunsh']
rule_articles_page = Rule(LinkExtractor(restrict_xpaths="//div[@id='body']/div[2]/a"), callback='parse_item', follow=False)
rule_next_page = Rule(LinkExtractor(restrict_xpaths="//form[@class='nav next']/input[1]/@value", tags=('input'), attrs=('value',), process_value='process_value'),
follow=True,)
rules = (
rule_articles_page,
rule_next_page,
)
def parse_item(self, response):
yield {
'Image': response.xpath("//div[@id='body']/img/@src").extract(),
'Title': response.xpath("//div[@id='content']/h1/text()").extract(),
'artist': response.xpath("//div[@id='content']/h2/text()").extract(),
'Description': response.xpath("//div[@class='description']/p/text()").extract(),
'URL': response.url,
'Dimention' : response.xpath("//tbody/tr/td[text()='Dimensions']/text()").extract(),
}
</code></pre>
<p>Now the problem is it does not go to the next page. Because the next page button is a form, not an anchor tag.</p>
<p>Also, Help me to get image dimensions (if available in cm) on the article page.</p>
|
<p>This is a basic loop I created; you may find a better approach, but it works for your problem.</p>
<pre><code>import scrapy
class PagedataSpider(scrapy.Spider):
name = 'pagedata'
page=1
allowed_domains = ['pstrial-2019-12-16.toscrape.com']
start_urls = ['http://pstrial-2019-12-16.toscrape.com/browse/insunsh?page=1']
def parse(self, response):
yield {
'Title': response.css("div h1::text").getall()
}
# next_page=response.css('input[name="page"]::attr(value)').get()
if PagedataSpider.page <=114:
PagedataSpider.page+=1
            nextPage=f'http://pstrial-2019-12-16.toscrape.com/browse/insunsh?page={PagedataSpider.page}'
yield scrapy.Request(nextPage,callback=self.parse)
</code></pre>
|
python|web-scraping|scrapy
| 1 |
1,903,935 | 72,946,967 |
How to show a value with query_set in a serializer using Django rest framework?
|
<p>I am making an API and want to list all the choices of a model for whoever uses it.</p>
<pre class="lang-py prettyprint-override"><code># -------------------------------------------------------------
# Image category serializer
# -------------------------------------------------------------
class ImageCategorySerializer(serializers.ModelSerializer):
#category = CategorySerializer()
category = serializers.PrimaryKeyRelatedField(
source="category.category",
many=True,
queryset=Category.objects.all(),
)
image = serializers.IntegerField(source="image.id")
class Meta:
fields = '__all__'
model = ImageCategory
# -------------------------------------------------------------
# Category
# -------------------------------------------------------------
class Category(models.Model):
"""
This model define a category
Args:
category (datetime): creation date
created_at (str): description of the image
"""
class CategoryChoice(models.TextChoices):
"""
This inner class define our choices for several categories
Attributes:
VIDEOGAMES tuple(str): Choice for videogames.
ANIME tuple(str): Choice for anime.
MUSIC tuple(str): Choice for music.
CARTOONS tuple(str): Choice for cartoons.
"""
VIDEOGAMES = ('VIDEOGAMES', 'Videogames')
ANIME = ('ANIME', 'Anime')
MUSIC = ('MUSIC', 'Music')
CARTOONS = ('CARTOONS', 'Cartoons')
category = models.CharField(max_length=15, choices=CategoryChoice.choices, null=False, blank=False)
created_at = models.DateTimeField(auto_now_add=True)
</code></pre>
<br>
<p>This is shown<br>
<a href="https://i.stack.imgur.com/yK2k8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yK2k8.png" alt="enter image description here" /></a></p>
<br>
<p>I want to replace the "Category object('number')" options by the choices of Category model(VIDEOGAMES, ANIME, MUSIC and CARTOONS).</p>
|
<p>Have you tried implementing <code>__str__</code> in your model? Django uses it to label the options:</p>
<pre><code>def __str__(self):
    return self.category  # e.g. show the category field instead of "Category object (n)"
</code></pre>
|
python-3.x|django|django-rest-framework
| 1 |
1,903,936 | 55,985,245 |
Formatting numbers with same width using f-strings python
|
<p>I want to format array of numbers with same width using f-strings. Numbers can be both positive or negative.</p>
<p>Minimum working example </p>
<pre><code>import numpy as np
arr = np.random.rand(10) - 0.5
for num in arr:
print(f"{num:0.4f}")
</code></pre>
<p>The result is </p>
<pre><code>0.0647
-0.2608
-0.2724
0.2642
0.0429
0.1461
-0.3285
-0.3914
</code></pre>
<p>Due to the negative sign, the numbers are not printed off with the same width, which is annoying. How can I get the same width using f-strings?</p>
<p>One way that I can think of is converting number to strings and print string. But is there a better way than that? </p>
<pre><code>for num in a:
str_ = f"{num:0.4f}"
print(f"{str_:>10}")
</code></pre>
|
<p>Use a space <em>before</em> the format specification:</p>
<pre><code># v-- here
>>> f"{5: 0.4f}"
' 5.0000'
>>> f"{-5: 0.4f}"
'-5.0000'
</code></pre>
<p>Or a plus (<code>+</code>) sign to force <em>all</em> signs to be displayed:</p>
<pre><code>>>> f"{5:+0.4f}"
'+5.0000'
</code></pre>
|
python|python-3.x|f-string
| 13 |
1,903,937 | 55,929,370 |
using Estimator interface for inference with pre-trained tensorflow object detection model
|
<p>I'm trying to load a pre-trained tensorflow object detection model from the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">Tensorflow Object Detection</a> repo as a <code>tf.estimator.Estimator</code> and use it to make predictions. </p>
<p>I'm able to load the model and run inference using <code>Estimator.predict()</code>, however the output is garbage. Other methods of loading the model, e.g. as a <code>Predictor</code>, and running inference work fine. </p>
<p>Any help properly loading a model as an <code>Estimator</code> calling <code>predict()</code> would be much appreciated. My current code:</p>
<h2>Load and prepare image</h2>
<pre class="lang-py prettyprint-override"><code>def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(list(image.getdata())).reshape((im_height, im_width, 3)).astype(np.uint8)
image_url = 'https://i.imgur.com/rRHusZq.jpg'
# Load image
response = requests.get(image_url)
image = Image.open(BytesIO(response.content))
# Format original image size
im_size_orig = np.array(list(image.size) + [1])
im_size_orig = np.expand_dims(im_size_orig, axis=0)
im_size_orig = np.int32(im_size_orig)
# Resize image
image = image.resize((np.array(image.size) / 4).astype(int))
# Format image
image_np = load_image_into_numpy_array(image)
image_np_expanded = np.expand_dims(image_np, axis=0)
image_np_expanded = np.float32(image_np_expanded)
# Stick into feature dict
x = {'image': image_np_expanded, 'true_image_shape': im_size_orig}
# Stick into input function
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
x=x,
y=None,
shuffle=False,
batch_size=128,
queue_capacity=1000,
num_epochs=1,
num_threads=1,
)
</code></pre>
<p>Side note:</p>
<p><code>train_and_eval_dict</code> also seems to contain an <code>input_fn</code> for prediction</p>
<pre class="lang-py prettyprint-override"><code>train_and_eval_dict['predict_input_fn']
</code></pre>
<p>However this actually returns a <code>tf.estimator.export.ServingInputReceiver</code>, which I'm not sure what to do with. This could potentially be the source of my problems as there's a fair bit of pre-processing involved before the model actually sees the image.</p>
<h2>Load model as <code>Estimator</code></h2>
<p>Model downloaded from TF Model Zoo <a href="http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz" rel="nofollow noreferrer">here</a>, code to load model adapted from <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/model_main.py" rel="nofollow noreferrer">here</a>.</p>
<pre class="lang-py prettyprint-override"><code>model_dir = './pretrained_models/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/'
pipeline_config_path = os.path.join(model_dir, 'pipeline.config')
config = tf.estimator.RunConfig(model_dir=model_dir)
train_and_eval_dict = model_lib.create_estimator_and_inputs(
run_config=config,
hparams=model_hparams.create_hparams(None),
pipeline_config_path=pipeline_config_path,
train_steps=None,
sample_1_of_n_eval_examples=1,
sample_1_of_n_eval_on_train_examples=(5))
estimator = train_and_eval_dict['estimator']
</code></pre>
<h2>Run inference</h2>
<pre class="lang-py prettyprint-override"><code>output_dict1 = estimator.predict(predict_input_fn)
</code></pre>
<p>This prints out some log messages, one of which is:</p>
<pre><code>INFO:tensorflow:Restoring parameters from ./pretrained_models/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt
</code></pre>
<p>So it seems like pre-trained weights are getting loaded. However results look like:</p>
<p><a href="https://i.stack.imgur.com/SGoW9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SGoW9.png" alt="Image with bad detections"></a></p>
<h2>Load same model as a <code>Predictor</code></h2>
<pre class="lang-py prettyprint-override"><code>from tensorflow.contrib import predictor
model_dir = './pretrained_models/tensorflow/ssd_mobilenet_v1_coco_2018_01_28'
saved_model_dir = os.path.join(model_dir, 'saved_model')
predict_fn = predictor.from_saved_model(saved_model_dir)
</code></pre>
<h2>Run inference</h2>
<pre class="lang-py prettyprint-override"><code>output_dict2 = predict_fn({'inputs': image_np_expanded})
</code></pre>
<p>Results look good:</p>
<p><a href="https://i.stack.imgur.com/leR4O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/leR4O.png" alt="enter image description here"></a></p>
|
<p>When you load the model as an estimator and from a checkpoint file, here is the restore function associated with <code>ssd</code> models. From <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/meta_architectures/ssd_meta_arch.py" rel="nofollow noreferrer"><code>ssd_meta_arch.py</code></a></p>
<pre><code>def restore_map(self,
fine_tune_checkpoint_type='detection',
load_all_detection_checkpoint_vars=False):
"""Returns a map of variables to load from a foreign checkpoint.
See parent class for details.
Args:
fine_tune_checkpoint_type: whether to restore from a full detection
checkpoint (with compatible variable names) or to restore from a
classification checkpoint for initialization prior to training.
Valid values: `detection`, `classification`. Default 'detection'.
load_all_detection_checkpoint_vars: whether to load all variables (when
`fine_tune_checkpoint_type='detection'`). If False, only variables
within the appropriate scopes are included. Default False.
Returns:
A dict mapping variable names (to load from a checkpoint) to variables in
the model graph.
Raises:
ValueError: if fine_tune_checkpoint_type is neither `classification`
nor `detection`.
"""
if fine_tune_checkpoint_type not in ['detection', 'classification']:
raise ValueError('Not supported fine_tune_checkpoint_type: {}'.format(
fine_tune_checkpoint_type))
if fine_tune_checkpoint_type == 'classification':
return self._feature_extractor.restore_from_classification_checkpoint_fn(
self._extract_features_scope)
if fine_tune_checkpoint_type == 'detection':
variables_to_restore = {}
for variable in tf.global_variables():
var_name = variable.op.name
if load_all_detection_checkpoint_vars:
variables_to_restore[var_name] = variable
else:
if var_name.startswith(self._extract_features_scope):
variables_to_restore[var_name] = variable
return variables_to_restore
</code></pre>
<p>As you can see, even if the config file sets <code>from_detection_checkpoint: True</code>, only the variables in the feature extractor scope will be restored. To restore all the variables, you will have to set</p>
<pre><code>load_all_detection_checkpoint_vars: True
</code></pre>
<p>in the config file.</p>
<p>So, the above situation is quite clear. When the model is loaded as an <code>Estimator</code>, only the variables from the feature extractor scope are restored; the predictor-scope weights are not, so the estimator obviously gives random predictions.</p>
<p>When the model is loaded as a predictor, all weights are loaded, so the predictions are reasonable.</p>
|
tensorflow|object-detection|object-detection-api
| 2 |
1,903,938 | 50,077,922 |
Sort dataframe multiindex level and by column
|
<p>#Updated: pandas version 0.23.0 solves this problem with</p>
<p><a href="https://pandas.pydata.org/docs/whatsnew/v0.23.0.html#sorting-by-a-combination-of-columns-and-index-levels" rel="nofollow noreferrer">Sorting by a combination of columns and index levels</a></p>
<hr />
<p>I have struggled with this and I suspect there is a better way. How do I sort the following dataframe by index level name 'idx_0', level=0 and by column, 'value_1' descending such that the column 'MyName' reads vertical 'SCOTTBOSTON'.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'idx_0':[2]*6+[1]*5,
'idx_1':[6,4,2,10,18,5,11,1,7,9,3],
'value_1':np.arange(11,0,-1),
'MyName':list('BOSTONSCOTT')})
df = df.set_index(['idx_0','idx_1'])
df
</code></pre>
<p>Output:</p>
<pre><code> MyName value_1
idx_0 idx_1
2 6 B 11
4 O 10
2 S 9
10 T 8
18 O 7
5 N 6
1 11 S 5
1 C 4
7 O 3
9 T 2
3 T 1
</code></pre>
<p>#Excepted output using:</p>
<pre><code>df.sort_values(['value_1'], ascending=False)\
.reindex(sorted(df.index.get_level_values(0).unique()), level=0)
</code></pre>
<p>I suspect there is an easier way without resetting indexes</p>
<pre><code> MyName value_1
idx_0 idx_1
1 11 S 5
1 C 4
7 O 3
9 T 2
3 T 1
2 6 B 11
4 O 10
2 S 9
10 T 8
18 O 7
5 N 6
</code></pre>
<h3>Failure #1:</h3>
<pre><code>df.sort_values('value_1', ascending=False).sort_index(level=0)
</code></pre>
<p>Sort by values first then sort index level=0, but level=1 get sorted also.</p>
<pre><code> MyName value_1
idx_0 idx_1
1 1 C 4
3 T 1
7 O 3
9 T 2
11 S 5
2 2 S 9
4 O 10
5 N 6
6 B 11
10 T 8
18 O 7
</code></pre>
<h3>Failure #2</h3>
<pre><code>df.sort_index(level=0).sort_values('value_1', ascending=False)
</code></pre>
<p>Sort by index level=0 then sort by values, but index=0 gets jumbled again.</p>
<pre><code> MyName value_1
idx_0 idx_1
2 6 B 11
4 O 10
2 S 9
10 T 8
18 O 7
5 N 6
1 11 S 5
1 C 4
7 O 3
9 T 2
3 T 1
</code></pre>
|
<p>Here are some potential solutions for your needs:</p>
<p><strong>Method-1:</strong></p>
<pre><code> (df.sort_values('value_1', ascending=False)
.sort_index(level=[0], ascending=[True]))
</code></pre>
<p><strong>Method-2:</strong></p>
<pre><code> (df.set_index('value_1', append=True)
.sort_index(level=[0,2], ascending=[True,False])
.reset_index('value_1'))
</code></pre>
<p>Tested on pandas 0.22.0, Python 3.6.4</p>
|
python|pandas|dataframe|multi-index
| 5 |
1,903,939 | 66,479,853 |
Why is the greeting function not defined and how to change the code
|
<pre class="lang-py prettyprint-override"><code>class Person:
def __init__(self, name):
self.name = name
def greeting(self):
"""Outputs a message with the name of the person"""
print("Hello! My name is {name}.".format(name=self.name))
help(greeting)
</code></pre>
<p>The error message says</p>
<pre><code>Error on line 8:
help(greeting)
NameError: name 'greeting' is not defined
</code></pre>
|
<p>You need to reference the class you placed the method under:</p>
<pre><code>help(Person.greeting)
</code></pre>
<pre><code>Help on function greeting in module __main__:
greeting(self)
Outputs a message with the name of the person
</code></pre>
<p>Or you could output help on the class itself:</p>
<pre><code>help(Person)
</code></pre>
<pre><code>Help on class Person in module __main__:
class Person(builtins.object)
| Person(name)
|
| Methods defined here:
|
| __init__(self, name)
| Initialize self. See help(type(self)) for accurate signature.
|
| greeting(self)
| Outputs a message with the name of the person
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
</code></pre>
|
python
| 2 |
1,903,940 | 66,635,668 |
Increasing accuracy in integration using quad
|
<pre><code>import math
from scipy.integrate import quad
def integrand(x):
return 1/math.log(x)
for i in range(1,13):
n = 10**i
I = quad(integrand, 1.45137, n)
print('Li(',n,') = ' ,I[0], ' Error bound = ', I[1], sep = "")
</code></pre>
<p>In evaluating the <code>logarithmic integral function</code> the code above returns values with sufficient accuracy for <code>n</code> up to 1,000,000, then accuracy deteriorates. For my requirement I would wish to keep the error bound well below <code>1</code> even for much larger arguments, say up to <code>10**12</code>. I experimented with the <code>epsabs</code> and <code>limit</code> parameters, without any visible effect for any <code>n</code>, and since this is not a situation where the function values fall below the floating point lower limit I did not think it worthwhile trying my luck with multi-precision shenanigans. Any tips anyone?</p>
|
<p>You can set the error tolerance:</p>
<pre><code>I = quad(integrand, 1.45137, n,epsrel = 1e-012)
</code></pre>
<p>output:</p>
<pre><code>Li(10) = 6.165597450825269 Error bound = 1.3760057428455556e-12
Li(100) = 30.126139530117598 Error bound = 3.845093652017017e-10
Li(1000) = 177.60965593619017 Error bound = 1.0048009489777205e-08
Li(10000) = 1246.1372138454267 Error bound = 2.5251966557222983e-11
Li(100000) = 9629.808998996832 Error bound = 4.4348515334357425e-10
Li(1000000) = 78627.54915740821 Error bound = 8.356797391525394e-09
Li(100000000) = 5762209.375445976 Error bound = 1.7291372054824457e-06
Li(1000000000) = 50849234.956999734 Error bound = 1.7689864637237228e-05
Li(10000000000) = 455055614.58662117 Error bound = 0.00014576268969193965
Li(100000000000) = 4118066400.621609 Error bound = 0.0009848731833003443
Li(1000000000000) = 37607950280.804855 Error bound = 0.01345062255859375
</code></pre>
|
python|numpy|scipy
| 2 |
1,903,941 | 65,033,210 |
Assertion error when inheriting multiprocessing.Process
|
<p>I needed a separate process that would open some files on initialization and close them gently at the end. For this, I inherited a class from <code>Process</code>. Here is a minimal demo:</p>
<pre class="lang-python prettyprint-override"><code>from multiprocessing import Process
from multiprocessing.process import BaseProcess
class Proxy(Process):
def __init__(self):
super().__init__(self)
def run(self):
pass
if __name__ == "__main__":
proxy = Proxy()
proxy.start()
proxy.join()
</code></pre>
<p>With this code I get an assertion exception:</p>
<pre><code>Traceback (most recent call last):
File "mp_proxy.py", line 11, in <module>
proxy = Proxy()
File "mp_proxy.py", line 6, in __init__
super().__init__(self)
File "/home/user/opt/anaconda3/lib/python3.7/multiprocessing/process.py", line 74, in __init__
assert group is None, 'group argument must be None for now'
AssertionError: group argument must be None for now
</code></pre>
<p>The same happens if to replace <code>Process</code> with <code>BaseProcess</code>. Next I added a debug print into the <code>process.py</code>, to the <code>BaseProcess.__init__</code> function, just to look at the <code>group</code> variable, and then I got something different:</p>
<pre><code>multiprocessing.process : Traceback (most recent call last):
File "mp_proxy.py", line 11, in <module>
proxy = Proxy()
File "mp_proxy.py", line 6, in __init__
super().__init__(self)
File "/home/user/opt/anaconda3/lib/python3.7/multiprocessing/process.py", line 74, in __init__
print(__name__, ":", group)
File "/home/user/opt/anaconda3/lib/python3.7/multiprocessing/process.py", line 254, in __repr__
elif self._closed:
AttributeError: 'Proxy' object has no attribute '_closed'
</code></pre>
<p>The question is: How to inherit Process in a proper way? Maybe the concept I took is wrong?</p>
<p>Earlier, in another post '<a href="https://stackoverflow.com/questions/52948447/error-group-argument-must-be-none-for-now-in-multiprocessing-pool">Error group argument must be None for now in multiprocessing.pool</a>' a similar error was described, however I did not see a solution to the problem. As far as I understood, the behavior is highly dependent on the Python sub-version. It's not cool at all.</p>
<p>P.S.: Ubuntu 20.04, Anaconda 3 with Python 3.7.6.</p>
|
<p>It should be <code>super().__init__()</code> instead of <code>super().__init__(self)</code>.</p>
<p><a href="https://docs.python.org/3/library/functions.html#super" rel="nofollow noreferrer"><code>super()</code></a> in this case translates to <code>super(Proxy, self)</code>, already binding the super-object to your <code>Proxy</code> instance. You <em>call</em> methods on the super-object like you always do with methods, <em>without</em> explicitly passing <code>self</code>.</p>
<p><code>group</code> is the second parameter in <code>BaseProcess.__init__(self, group=None, target=None...)</code> and with calling <code>super().__init__(self)</code> in your code, you're setting it to <code>self</code>, hence the <code>AssertionError</code>.</p>
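<p>A minimal corrected version of the demo from the question, with the extra <code>self</code> removed:</p>

```python
from multiprocessing import Process

class Proxy(Process):
    def __init__(self):
        super().__init__()   # no explicit self: super() has already bound it

    def run(self):
        pass

if __name__ == "__main__":
    proxy = Proxy()
    proxy.start()
    proxy.join()
    print(proxy.exitcode)   # 0 on a clean exit
```

<p>With this change the <code>group</code> parameter stays <code>None</code> and the assertion no longer fires.</p>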
|
python|multiprocessing|python-multiprocessing
| 2 |
1,903,942 | 64,646,193 |
Python Asyncio confusion between asyncio.sleep() and time.sleep()
|
<p>From the documentation, if we want to implement a non-blocking delay we should implement <code>await asyncio.sleep(delay)</code> because <code>time.sleep(delay)</code> is blocking in nature. But from what I could understand, it is the <code>await</code> keyword that cedes control of the thread to do something else while the function <code>f()</code> following the keyword i.e. <code>await f()</code> finishes its calculations.</p>
<p>So if we need the <code>await</code> keyword in order for <code>asyncio.sleep(delay)</code> to be non-blocking, what is the difference with its counterpart <code>time.sleep(delay)</code> if both are not awaited for?</p>
<p>Also, can't we reach the same result by preceding both sleep functions with a <code>await</code> keyword?</p>
|
<p>From an answer to a somewhat similar <a href="https://stackoverflow.com/questions/62493718">topic</a>:</p>
<blockquote>
<p>The function asyncio.sleep simply registers a future to be called in x seconds while time.sleep suspends the execution for x seconds.</p>
</blockquote>
<p>So the execution of the coroutine that called <code>await asyncio.sleep()</code> is suspended until asyncio's event loop resumes it after the timer-expired event.</p>
<p>However, <code>time.sleep()</code> literally blocks execution of the <em>current thread</em> until the designated time has passed, preventing the event loop and other tasks from running in the meantime. Letting other tasks run while one waits is what makes concurrency possible despite being <em>single threaded</em>.</p>
<p>As I understand it, it's the difference between the following:</p>
<ol>
<li><em>counting X seconds with a stopwatch <strong>yourself</strong></em></li>
<li><em>letting the <strong>clock</strong> tick and periodically checking whether X seconds have passed</em></li>
</ol>
<p>You, a <code>thread</code>, can't do anything else while watching the stopwatch yourself, whereas in the latter case you're free to do other jobs between the periodic checks.</p>
<hr />
<p>Also, you can't use synchronous functions with <code>await</code>.</p>
<p>From <a href="https://www.python.org/dev/peps/pep-0492/#await-expression" rel="nofollow noreferrer">PEP 492</a> that implements <code>await</code> and <code>async</code>:</p>
<blockquote>
<p><code>await</code>, similarly to <code>yield from</code>, suspends execution of read_data <strong>coroutine</strong> until db.fetch awaitable completes and returns the result data.</p>
</blockquote>
<p>You can't suspend a normal subroutine; Python is an imperative language.</p>
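<p>A small sketch showing the practical consequence: coroutines waiting with <code>asyncio.sleep</code> overlap their waits, so three 0.2-second sleeps finish in roughly 0.2 seconds rather than 0.6:</p>

```python
import asyncio
import time

async def worker(n):
    await asyncio.sleep(0.2)   # suspends only this coroutine; the loop runs others
    return n

async def main():
    start = time.monotonic()
    results = await asyncio.gather(worker(1), worker(2), worker(3))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results)             # [1, 2, 3]
print(elapsed < 0.5)       # True: the waits ran concurrently
```

<p>Replacing the <code>await asyncio.sleep(0.2)</code> with <code>time.sleep(0.2)</code> makes the total roughly 0.6 seconds, because each call blocks the whole thread.</p>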
|
python|python-asyncio
| 1 |
1,903,943 | 63,964,031 |
driver.findelement values are changing every time
|
<p>I'm trying to download the job log from the Maestro tool. Every time, the <code>driver.find_element</code> values change. Can someone please help me with a solution?</p>
<p><strong>Example 1)</strong> driver.find_element_by_css_selector('#AjaxTable<strong>11851_t1</strong> > tbody > tr.tvg_table_row_stripe0 > td:nth-child(1) > input[type=checkbox]').click()</p>
<p><strong>Example 2)</strong> driver.find_element_by_css_selector('#AjaxTable<strong>11859_t1</strong> > tbody > tr.tvg_table_row_stripe0 > td:nth-child(1) > input[type=checkbox]').click()</p>
|
<p>One way to create a single CSS selector would be the below</p>
<pre><code>[id^=AjaxTable] > tbody > tr.tvg_table_row_stripe0 > td:nth-child(1) > input[type=checkbox]
</code></pre>
<p>... but, I can't tell for sure that it will find only the element you want without testing it on the page.</p>
|
python-3.x|selenium-webdriver
| 0 |
1,903,944 | 63,911,111 |
Error adding layers in neural network in tensorflow
|
<pre><code>import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical
network=models.Sequential() # this initializes a sequential model that we will call network
network.add(layers.Dense(10, activation = 'relu') # this adds a dense hidden layer
network.add(layers.Dense(8, activation = 'softmax')) # this is the output layer
</code></pre>
<p>I am trying to create a 2 layer neural network model in tensorflow and am getting this error:</p>
<pre><code>File "<ipython-input-6-0dde2ff676f8>", line 7
network.add(layers.Dense(8, activation = 'softmax')) # this is the output layer
^
SyntaxError: invalid syntax
</code></pre>
<p>May I know why I'm getting this error for output layer but not for hidden layer? Thanks.</p>
|
<p>You have <code>missed a closing bracket</code>.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical
network=models.Sequential() # this initializes a sequential model that we will call network
network.add(layers.Dense(10, activation = 'relu')) # this adds a dense hidden layer
network.add(layers.Dense(8, activation = 'softmax')) # this is the output layer
</code></pre>
|
tensorflow|neural-network|tensorflow2.0
| 1 |
1,903,945 | 65,323,716 |
Want to filter more then 10 million numbers on Telegram
|
<p>I need a little help. I have more than 10 million numbers and I want to check which of those numbers exist on Telegram and which do not.
I have searched a lot but have not found any method that fulfils my needs.</p>
<p>I have checked IsphoneRegistered or CheckPhone; it returns true for every request.</p>
|
<p>Telethon API > <a href="https://tl.telethon.dev/methods/contacts/import_contacts.html" rel="nofollow noreferrer">ImportContactsRequest</a></p>
<p>With the Telethon API's <a href="https://tl.telethon.dev/methods/contacts/import_contacts.html" rel="nofollow noreferrer">ImportContactsRequest</a> you can check whether a given phone number exists on Telegram. If found, it will return an array with user details; otherwise it will return an empty array.</p>
<p>Note: As of today, you can only get the telegram_id of the user; first_name and last_name are returned null even if they exist.</p>
|
telegram|python-telegram-bot|php-telegram-bot|telegram-webhook|node-telegram-bot-api
| 1 |
1,903,946 | 68,673,064 |
How to capture a screenshot of a certain size of window with selenium?
|
<p>I am trying to capture a small fragment of the whole screen.
I want to take a screenshot of the window with the dimensions (w, x, y, z),
where the photo takes the values:</p>
<pre><code>w = 240, 200
x = 540, 200
y = 240, 600
z = 600, 600
</code></pre>
<p>But honestly, I don't know how to implement it in my code, which is this:</p>
<pre><code>from selenium import webdriver
driver.save_screenshot(r'C:\Users\youna\Pictures\1\{}.png'.format(codigos_1[i]))
</code></pre>
<p>How could I do it?</p>
|
<p>You can take a screenshot of the entire page and then crop the part you need from it according to the desired dimensions.<br />
This is done with the PIL imaging library,<br />
which can be installed with<br />
<code>pip install Pillow</code><br />
The code can be as follows:</p>
<pre><code>from selenium import webdriver
from PIL import Image
from io import BytesIO
#save screenshot of entire page
png = driver.get_screenshot_as_png()
#open image in memory with PIL library
im = Image.open(BytesIO(png))
#define crop points
im = im.crop((left, top, right, bottom))
#save new cropped image
im.save('screenshot.png')
</code></pre>
<p>where, for a web element obtained with e.g. <code>element = driver.find_element_by_css_selector(...)</code>, <code>location = element.location</code> and <code>size = element.size</code>:<br />
left = location['x']<br />
top = location['y']<br />
right = location['x'] + size['width']<br />
bottom = location['y'] + size['height']</p>
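<p>The crop step itself can be checked without a browser; here is a minimal example with illustrative coordinates (roughly matching the values in the question):</p>

```python
from PIL import Image

# dummy 800x800 image standing in for the full-page screenshot
im = Image.new("RGB", (800, 800), "white")

# crop box is (left, top, right, bottom) in pixels
left, top, right, bottom = 240, 200, 600, 600
cropped = im.crop((left, top, right, bottom))
print(cropped.size)  # (360, 400)
```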
|
python|selenium|selenium-webdriver
| 1 |
1,903,947 | 68,665,273 |
Mapping gps coordinates with census tract Python
|
<p>I have not yet found an answer to resolve my confusion about a small project I am working on.</p>
<p>My goal is to match a census block ID / block_fips to lat/lon pairs in my dataframe.</p>
<p>I have not worked with an API for complementing data in Python previously.</p>
<p>Here is a snippet of a dataset of lat and lon coordinates:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'lat': [40.760659, 40.768254, 40.761573], 'lon': [-73.980420, -73.988639, -73.972628]})
print(df)
</code></pre>
<p>I came across the Census conversion API <a href="https://www.fcc.gov/census-block-conversions-api" rel="nofollow noreferrer">https://www.fcc.gov/census-block-conversions-api</a>. If using the Area API, how can I (1) obtain and then (2) match the "block_fips" to the first lat/lon pair, in this case "360610131001003" using Python in a Jupyter notebook in the Anaconda environment.</p>
<p>The output I wish is then:</p>
<pre><code>dfcensus = pd.DataFrame({'lat': [40.760659, 40.768254, 40.761573], 'lon': [-73.980420, -73.988639, -73.972628], 'block': [360610131001003, 360610139003000, 360610112021004]})
print(dfcensus)
</code></pre>
<p>Many thanks for any input!</p>
|
<ul>
<li>a row by row call to the API is simplest approach</li>
<li>API is simple to use, use <strong>requests</strong> building URL parameters documented in API</li>
<li>just assign this back to new column in dataframe</li>
<li>this has been run in a jupyter lab environment</li>
</ul>
<pre><code>import requests
import pandas as pd

url = "https://geo.fcc.gov/api/census/block/find"
df = pd.DataFrame({"lat": [40.760659, 40.768254, 40.761573],
"lon": [-73.980420, -73.988639, -73.972628],})
df.assign(
block=df.apply(
lambda r: requests.get(
url, params={"latitude": r["lat"], "longitude": r["lon"], "format": "json"}
).json()["Block"]["FIPS"],
axis=1,
)
)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">lat</th>
<th style="text-align: right;">lon</th>
<th style="text-align: right;">block</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">40.7607</td>
<td style="text-align: right;">-73.9804</td>
<td style="text-align: right;">360610131001003</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">40.7683</td>
<td style="text-align: right;">-73.9886</td>
<td style="text-align: right;">360610139003000</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">40.7616</td>
<td style="text-align: right;">-73.9726</td>
<td style="text-align: right;">360610112021004</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas|api|geolocation|census
| 1 |
1,903,948 | 10,541,760 |
Can I set the umask for tempfile.NamedTemporaryFile in python?
|
<p>In Python (tried this in 2.7 and below) it looks like a file created using <code>tempfile.NamedTemporaryFile</code> doesn't seem to obey the umask directive:</p>
<pre><code>import os, tempfile
os.umask(022)
f1 = open ("goodfile", "w")
f2 = tempfile.NamedTemporaryFile(dir='.')
f2.name
Out[33]: '/Users/foo/tmp4zK9Fe'
ls -l
-rw------- 1 foo foo 0 May 10 13:29 /Users/foo/tmp4zK9Fe
-rw-r--r-- 1 foo foo 0 May 10 13:28 /Users/foo/goodfile
</code></pre>
<p>Any idea why <code>NamedTemporaryFile</code> won't pick up the umask? Is there any way to do this during file creation? </p>
<p>I can always workaround this with os.chmod(), but I was hoping for something that did the right thing during file creation.</p>
|
<p>This is a security feature. The <code>NamedTemporaryFile</code> is always created with mode <code>0600</code>, hardcoded at <a href="http://hg.python.org/cpython/file/63bde882e311/Lib/tempfile.py#l235"><code>tempfile.py</code>, line 235</a>, because it is private to your process until you open it up with <code>chmod</code>. There is no constructor argument to change this behavior.</p>
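<p>A short demonstration of the behaviour, plus the <code>os.chmod</code> workaround mentioned in the question (Python 3 octal syntax; assumes a POSIX system):</p>

```python
import os
import stat
import tempfile

os.umask(0o022)
f = tempfile.NamedTemporaryFile(delete=False)
# created with mode 0600 regardless of the umask
print(oct(stat.S_IMODE(os.stat(f.name).st_mode)))  # 0o600

# workaround: open up the permissions explicitly after creation
os.chmod(f.name, 0o644)
print(oct(stat.S_IMODE(os.stat(f.name).st_mode)))  # 0o644

f.close()
os.remove(f.name)
```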
|
python|file|permissions
| 39 |
1,903,949 | 61,922,353 |
Adding highlighting to a run in python-pptx
|
<p>I would like to automate adding a highlight to a run of text (really a background colour) using python-pptx.</p>
<p>I've done a lot of work with python-pptx and have done a tiny amount of fiddling with _element in the past.</p>
<p>Could someone post a quick sample of highlighting a run of text using python-pptx? That I can work up into something fitting my need. (I don't care what the colour of the highlight is;I think there's some kind of enumeration of valid colours for this.)</p>
<p>Thanks!</p>
|
<p>So, with a little reading of the code and guesswork I've got a complete working example:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
from pptx import Presentation
from pptx.oxml.xmlchemy import OxmlElement
prs = Presentation()
title_slide_layout = prs.slide_layouts[0]
slide = prs.slides.add_slide( title_slide_layout )
title = slide.shapes.title
title.text = 'Presentation with Internal Hyperlinks'
tf = title.text_frame
p=tf.paragraphs[0]
run = p.add_run()
run.text="Hello"
rPr = run._r.get_or_add_rPr()
hl = OxmlElement("a:highlight")
srgbClr = OxmlElement("a:srgbClr")
setattr(srgbClr,'val','FFFF00')
hl.append(srgbClr)
rPr.append(hl)
prs.save( 'test.pptx' )
</code></pre>
<p>I can now package this up as a function which fiddles with a run - and add it to my main code.</p>
|
python|python-pptx
| 1 |
1,903,950 | 60,479,354 |
List + string inconsistent behavior in Python 3.7.6
|
<pre><code>>>> r = [[]]
>>> r[0] = r[0] + 'abc'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "str") to list
>>> r[0] += 'abc'
>>> r
[['a', 'b', 'c']]
</code></pre>
<p>Could somebody explain why second assignment works but not the first one ?</p>
|
<p>Why <code>+=</code> works and <code>+</code> doesn't is "that's how it's coded", and I haven't figured out any good reason for it. Let's focus simply on list addition.</p>
<pre><code>operator magic method list equiv
-------- ------------ ----------
+= (inplace add) __iadd__ list_inplace_concat
+ (add) __add__ list_concat
</code></pre>
<p>Inplace Add / list_inplace_concat works on any sequence. Under the covers, Python simply calls <code>list.extend</code>, which turns the right-hand side into an iterator and so works with all sequences</p>
<pre><code>>>> test = []
>>> test += 'abc'
>>> test
['a', 'b', 'c']
</code></pre>
<p>Add / list_concat is hardcoded to work only with other lists. The underlying C code uses the internal data structure of the list to copy its elements.</p>
<pre><code>>>> test + 'abc'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "str") to list
</code></pre>
<p>Change the right hand side to a list and it works</p>
<pre><code>>>> test + list('abc')
['a', 'b', 'c', 'a', 'b', 'c']
>>>
</code></pre>
<p><code>list_concat</code> is optimized to use the size of the two lists to know exactly how large the new list needs to be. Then it does member copy at the C structure level. What puzzles me is why there isn't a fallback when the "not a list" condition is detected. The list could be copied and extended.</p>
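<p>The in-place behaviour is easy to verify: <code>+=</code> mutates the existing list object (like <code>extend</code>), rather than rebinding the name to a new list:</p>

```python
test = []
alias = test
test += 'abc'            # in-place: effectively test.extend('abc')
print(test is alias)     # True -- the same list object was mutated
print(test)              # ['a', 'b', 'c']

result = test.__iadd__('d')  # the method += invokes under the hood
print(result is test)        # True
```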
|
python|python-3.x|list|append
| 1 |
1,903,951 | 70,610,534 |
Python Selenium Chromedriver Can't disable images to load
|
<p>I would like to disable images to load in Chrome with Selenium,</p>
<p>when I use this code (and other codes I found online):</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chrome_options = webdriver.ChromeOptions()
prefs = {"profile.managed_default_content_settings.images": 2}
chrome_options.add_experimental_option("prefs", prefs)
driver = webdriver.Chrome(chrome_options=chrome_options)
</code></pre>
<p>I get these error messages:</p>
<pre><code><ipython-input-36-fb16a130c9b1>:7: DeprecationWarning: use options instead of chrome_options driver = webdriver.Chrome(chrome_options=chrome_options)
WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
</code></pre>
<p>EDIT 1</p>
<p>last I Tried as suggested was:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager
driver =webdriver.Chrome(ChromeDriverManager().install(),options=options)
options = Options()
chrome_options = webdriver.ChromeOptions()
prefs = {"profile.managed_default_content_settings.images": 2}
chrome_options.add_experimental_option("prefs", prefs)
driver=webdriver.Chrome(options=options)
</code></pre>
<p>but this line :</p>
<pre><code>driver = webdriver.Chrome(ChromeDriverManager().install(),options=options)
</code></pre>
<p>gives this error, even though my chromedriver-py version is 97.0.4692.71:</p>
<pre><code>====== WebDriver manager ======
Current google-chrome version is 97.0.4692
Get LATEST chromedriver version for 97.0.4692 google-chrome
Driver [C:\Users\48791\.wdm\drivers\chromedriver\win32\97.0.4692.71\chromedriver.exe] found in cache
</code></pre>
<p>and this line:</p>
<pre><code>driver=webdriver.Chrome(options=options)
WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
</code></pre>
|
<p>I managed to do it:</p>
<pre><code>option = webdriver.ChromeOptions()
chrome_prefs = {}
option.experimental_options["prefs"] = chrome_prefs
chrome_prefs["profile.default_content_settings"] = {"images": 2}
chrome_prefs["profile.managed_default_content_settings"] = {"images": 2}
PATH = 'C:\Program Files (x86)\chromedriver.exe'
browser = webdriver.Chrome(executable_path = PATH, options=option)
browser.get('https://www.yahoo.com/')
browser.find_element_by_xpath('//*[@class="btn primary"]').click()
</code></pre>
|
python|selenium|selenium-webdriver|selenium-chromedriver
| 0 |
1,903,952 | 63,404,012 |
Print CSV from a python list
|
<p>So I have this small python script that I'm using to validate whether or not a folder contains csv files. I have it working so far but I'm trying to print the CSV file from the result list of csv files to the console.</p>
<pre><code>import os
import logging
import csv
logger = logging.getLogger()
logger.setLevel(logging.INFO)
files = []
source = '/Users/username/source_folder'
files = checkFile(source)
if len(files) != 0:
for i in files:
with open(i, newline='') as file:
for row in csv.reader(file):
print(row)
def checkFile(directory):
result = []
if len(os.listdir(directory)) == 0:
return result
else:
for file in os.listdir(directory):
if file.endswith('.csv'):
result.append(file)
else:
continue
return result
</code></pre>
<p>I keep getting this error:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory:
</code></pre>
<p>So the result is something like this: <code>['test_01.csv','test_02.csv']</code>
But I would like to read each file from the list to the console.</p>
<p><code>test_01.csv</code></p>
<pre><code>id,first_name,last_name,phone,
1,joe,black,555555555
2,jack,flash,1111111111
</code></pre>
<p><code>test_02.csv</code></p>
<pre><code>id,first_name,last_name,phone,
1,joe,black,555555555
2,jack,flash,1111111111
</code></pre>
|
<p><code>os.listdir(...)</code> returns the file/sub-folder names only (not the full paths). Unless the csv files are in your current working directory, you will need to combine their names with the directory, e.g. with <code>os.path.join(...)</code>:</p>
<pre><code>def checkFile(directory):
result = []
if len(os.listdir(directory)) == 0:
return result
else:
for file in os.listdir(directory):
if file.endswith('.csv'):
result.append(os.path.join(directory, file))
else:
continue
return result
</code></pre>
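<p>As an aside, the same listing can be done more compactly with <code>glob</code>, which returns paths already joined with the directory (the function name here is my own):</p>

```python
import glob
import os
import tempfile

def check_file(directory):
    # full paths of all .csv files directly inside `directory`
    return sorted(glob.glob(os.path.join(directory, "*.csv")))

# quick demonstration in a throwaway directory
with tempfile.TemporaryDirectory() as d:
    for name in ("test_01.csv", "test_02.csv", "notes.txt"):
        open(os.path.join(d, name), "w").close()
    print([os.path.basename(p) for p in check_file(d)])  # ['test_01.csv', 'test_02.csv']
```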
|
python|csv
| 1 |
1,903,953 | 63,596,430 |
How do I run a Docker container as a persistent server?
|
<p>I have two things:</p>
<ol>
<li>A Java server API that receives a file</li>
<li>A Python script I need to be able to feed the received file into</li>
</ol>
<p>I want to be able to run my Python script in multiple Docker containers that listen for a file input, so I can run the script on multiple files at the same time. How can I containerise my script so it runs as a small persistent server? At the moment I have a container that just runs the script and then immediately exits.</p>
<p>So in a nutshell I need this structure:
File -> Java API -> Containerised Python script running on a port</p>
<p>I'm new to the concept and didn't understand the Docker documentation and hoped some kind soul could simplify it for me. Thank you</p>
|
<p>Create one Dockerfile.</p>
<pre><code>FROM python:3.8.2-buster
USER root
RUN mkdir -p '/script'
WORKDIR script
COPY requirements.txt /script
RUN pip3 install --upgrade pip && \
pip3 install --no-cache-dir -r requirements.txt
ENTRYPOINT [ "python", "your_script.py" ]
</code></pre>
<p>Then build it and run it as daemon.</p>
<pre><code>docker build -t myscript .
docker run -d --name script -v $(pwd):/script -p <local_port:port_inside_container> myscript
</code></pre>
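<p>If the container should stay up and accept files over a port (rather than run the script once and exit), one option is to wrap the script in a small HTTP server. This is only a sketch using the standard library; <code>run_my_script</code> is a placeholder for your real processing logic:</p>

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_my_script(data: bytes) -> bytes:
    # placeholder: feed `data` into your existing Python script here
    return b"received %d bytes" % len(data)

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(run_my_script(body))

def serve():
    # bind to 0.0.0.0 so the port is reachable from outside the container
    HTTPServer(("0.0.0.0", 5000), Handler).serve_forever()
```

<p>With the <code>ENTRYPOINT</code> starting this server, the Java API can POST each file to port 5000 of any number of running containers.</p>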
|
java|python|docker|server
| -1 |
1,903,954 | 69,754,431 |
what is the correct syntax here? getting type TypeError: unhashable type: 'dict
|
<pre><code>query={"colourCode" : "orange" },{"createdOn":{ "$gt" : my_datetime}},{"assignmentRef":{'$ne':None}}
cursor = collection.find({query},{'createdOn':1,'assignmentRef.name':1,'_id':0,'colourCode':1})
list_cur = list(cursor)
df = DataFrame(list_cur)
print(df)
Result
TypeError: unhashable type: 'dict'
</code></pre>
<p>what is the problem here? please rewrite the code with correct syntax, so that I clearly can understand it.</p>
|
<p>You have two issues; the query needs to be constructed as a dictionary (yours creates a tuple), and the first parameter of the find needs to just be <code>query</code> not <code>{query}</code>.</p>
<p>This should be closer to what you need:</p>
<pre><code>import datetime
from pandas import DataFrame
from pymongo import MongoClient
db = MongoClient()['mydatabase']
collection = db.mycollection
my_datetime = datetime.datetime.now()
query = {"colourCode": "orange", "createdOn": {"$gt": my_datetime}, "assignmentRef": {'$ne': None}}
cursor = collection.find(query, {'createdOn': 1, 'assignmentRef.name': 1, '_id': 0, 'colourCode': 1})
list_cur = list(cursor)
df = DataFrame(list_cur)
print(df)
</code></pre>
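<p>The root cause is worth seeing in isolation: comma-separated brace groups build a <em>tuple</em> of dicts, and wrapping that in another pair of braces builds a <em>set</em>, whose members must be hashable:</p>

```python
# comma-separated braces: a tuple of dicts, not one dict
query = {"colourCode": "orange"}, {"assignmentRef": {"$ne": None}}
print(type(query))      # <class 'tuple'>

try:
    {query}             # set literal: its elements must be hashable
except TypeError as e:
    print(e)            # unhashable type: 'dict'
```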
|
python|mongodb|dataframe|nosql|pymongo
| 1 |
1,903,955 | 69,888,817 |
How to Ignore errors in python dictionary creation?
|
<p>I have image data in a number of formats (too many for a bunch of if/else statements to be reasonable or look clean).
I have created a number of (Python) classes to read in the data depending on the format, e.g. <code>framesource.pngs</code> and <code>framesource.mat</code>, that use <code>def __init__(self, path): ...</code></p>
Within my UI (using pyqtgraph) the user provides the path for the data and the data type. I would like to have a dictionary to map the user choice to the correct reading function, for Example:</p>
<pre><code>### these would be set via the gui
filepath= 'some//path//to//data'
source_type = 'pngs'
### in the code for processing
import variousReaderFunctions # like framesource.pngs and framesource.mat
readerFuncDict={
...
'pngs':framesource.pngs(filepath)
'mat' :framesource.mat(filepath)
...
}
resulting_frames = readerFuncDict[source_type]
</code></pre>
<p>Each data set may have data in one or more data types that would be found at the <code>filepath</code>. However, if a type isn't there (for example if there are .pngs but no .mat), the dictionary fails with a <code>[Errno 2] No such file or directory: 'some//path//to//data//.mat'</code>, and the later code isn't run even if it doesn't refer back to the key that failed.</p>
<p>Is there a way to set the dictionary creation to simply not initialize keys that run into errors?
Something like</p>
<pre><code>readerFuncDict={}
listOfOptions=[...,'pngs','mat',...]
listOfFunctions=[...,'framesource.pngs(filepath)','framesource.mat(filepath)',...]
for idx,opt in enumerate(listOfOptions):
try:
readerFuncDict[opt]=listOfOptions[idx]
except:
continue
resulting_frames = readerFuncDict[source_type]
with resulting_frames as _frames:...
</code></pre>
<p>I've tried leaving the classes uninitialized in the dictionary i.e.:</p>
<pre><code>readerFuncDict={
...
'pngs':framesource.pngs
'mat' :framesource.mat
...
}
resulting_frames = readerFuncDict[source_type].__init__(self,path=filepath)
with resulting_frames as _frames:...
</code></pre>
<p>but it gives a <code><class 'AttributeError'> : __enter__</code> with a traceback of :</p>
<pre><code>File "/home/cnygren/miniconda3/envs/snipres/lib/python3.9/site-packages/pyqtgraph/flowchart/Node.py", line 311, in update
out = self.process(**strDict(vals))
File "/home/cnygren/snipy/Flowchart_v1.py", line 133, in process
with resulting_frames as _frames:
</code></pre>
|
<p>You can define a wrapper which will return <code>None</code> if the function call raised an exception:</p>
<pre class="lang-py prettyprint-override"><code>def try_or_none(func, *args, **kwargs):
try:
return func(*args, **kwargs)
except Exception:
pass
</code></pre>
<p>Then you can initialize your dictionary using a <a href="https://docs.python.org/3/library/stdtypes.html#dict" rel="nofollow noreferrer"><code>dict()</code></a> call, passing it a generator that filters out pairs with a <code>None</code> value:</p>
<pre class="lang-py prettyprint-override"><code>from math import sqrt
...
some_dict = dict((key, value) for key, value in (
("a", try_or_none(sqrt, 4)),
("b", try_or_none(sqrt, 9)),
("c", try_or_none(sqrt, -1)), # will throw an exception
("d", try_or_none(sqrt, 16))
) if value is not None)
</code></pre>
<p>It looks a bit cumbersome, but it's the simplest solution.</p>
<p>Another way is to implement some kind of "lazy" dictionary. Take a look on next question <em>(if you're interested)</em>: <a href="https://stackoverflow.com/q/16669367/10824407">Setup dictionary lazily</a>.</p>
|
python|dictionary|try-catch
| 1 |
1,903,956 | 18,123,377 |
Batch create or get path in py2neo
|
<p>I'm trying to create a date/time tree in neo4j as Nigel Small described <a href="http://blog.nigelsmall.com/2012/09/modelling-dates-in-neo4j.html" rel="nofollow">here</a>. I want to pre-populate all dates for a certain period of time, and as such, want to run multiple get_or_create_path()s in a go. However, I can't seem to find a batch version of this function, or a batch equivalent of 'run cypher query' - if I have to run them all individually, it's going to hit my runtime massively.</p>
<p>Is there any way to batch this process? Hopefully I'm being stupid and have just missed an obvious function! I don't mind if it's a batch version of running cypher queries, or of get_or_create_path().</p>
<p>Many thanks in advance,</p>
<p>Louis</p>
|
<p>There isn't a batch <code>get_or_create_path</code> in 1.5 but I am introducing one for 1.6. I am planning to release this on 1st October but you are welcome to try it sooner if you wish (release/1.6.0 branch on GitHub). Please bear in mind though that this release is still in development so it may change between now and release and therefore, depending on your needs, may be a bit unstable.</p>
|
python|neo4j|cypher|py2neo
| 2 |
1,903,957 | 66,297,828 |
Incorrect Solution When Squaring Gekko Integer Variables
|
<p>If I initialize a boolean variable as 0 I get an incorrect solution (0). If I initialize it as 1 I get the correct solution (1).</p>
<pre><code># Squaring doesn't work
#######################################################
m = GEKKO(remote=False)
b = m.Var(lb=0,ub=1,integer=True, value=0)
m.Maximize(b**2)
m.options.SOLVER = 1
m.solve(debug=0, disp=True)
</code></pre>
<p>Returns:</p>
<pre><code>Successful solution
Objective: 0.
</code></pre>
<p>with <code>b: [0]</code></p>
<p>This is a follow up to a previous question (<a href="https://stackoverflow.com/questions/66271411/gekko-returning-incorrect-successful-solution">Gekko returning incorrect successful solution</a>) that concerns a model involving matrix multiplication of two gekko arrays with gekko integer variables. I believe I've traced that issue to this problem.</p>
|
<p>Try this:</p>
<pre class="lang-python prettyprint-override"><code>from gekko import GEKKO
m = GEKKO()
b = m.Var(value=0, integer=True)
m.Equation(b>=0)
m.Equation(b<=1)
m.Maximize(b**2)
m.options.SOLVER = 1
m.solve(disp=False)
print(b.value)
</code></pre>
<p>Output:</p>
<pre><code>[1.0]
</code></pre>
<p>See a demo in <a href="https://colab.research.google.com/drive/1q1Nh7ZVp5gelYXbumZ4zKPMJ2M0ccf4-?usp=sharing" rel="nofollow noreferrer">this colab</a>.</p>
<p>In the <a href="https://gekko.readthedocs.io/en/latest/examples.html" rel="nofollow noreferrer">gekko examples</a> I saw that <code>Obj()</code> (which minimizes) is used along with <code>Equation()</code>, so I thought, well maybe the lower and upper bounds of the variable could be expressed as equations instead. Apparently, it works that way.</p>
|
python|gekko
| 3 |
1,903,958 | 66,314,895 |
Problem - Getting all href from beautifulsoup content
|
<p>I want to get <strong>all the href links</strong> from the code below, but I am getting just the first href. I couldn't figure out where I am wrong. Can you please help me with this?</p>
<pre><code>for i in range(1,3):
url = "https://www.gittigidiyor.com/samsung-cep-telefonu?sf=" + str(i)
r = requests.get(url)
source = BeautifulSoup(r.content,"lxml")
liste = source.find_all('div', attrs={"class":"gg-w-24 gg-d-24 gg-t-24 gg-m-24 root-column padding-none"})
for url in liste:
url_phone = "https:" + url.a.get("href")
print(url_phone)
</code></pre>
|
<p>You need to <code>find_all('a')</code> and iterate through those, as opposed to using just <code>find('a')</code> or <code>.a</code> as it'll only grab the first <code><a></code> tag it finds.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
for i in range(1,3):
url = "https://www.gittigidiyor.com/samsung-cep-telefonu?sf=" + str(i)
r = requests.get(url)
source = BeautifulSoup(r.content,"lxml")
liste = source.find_all('div', attrs={"class":"gg-w-24 gg-d-24 gg-t-24 gg-m-24 root-column padding-none"})
for url in liste:
all_hrefs = url.find_all('a', href=True)
for href in all_hrefs:
url_phone = "https:" + href['href']
print(url_phone)
</code></pre>
|
python|web-scraping|beautifulsoup
| 0 |
1,903,959 | 66,313,364 |
Python Solution for project euler's ex 94
|
<p><strong>Note: This is a revised version of a post I made some days ago that was closed due to incompleteness. I have now done my best to optimise it, and it should now be a minimal reproducible example.</strong></p>
<p>The question:</p>
<p>"It is easily proved that no equilateral triangle exists with integral length sides and integral area. However, the almost equilateral triangle 5-5-6 has an area of 12 square units.</p>
<p>We shall define an almost equilateral triangle to be a triangle for which two sides are equal and the third differs by no more than one unit.</p>
<p>Find the sum of the perimeters of all almost equilateral triangles with integral side lengths and area and whose perimeters do not exceed one billion (1,000,000,000)."</p>
<p>The answer to the problem is <code>518408346</code>.</p>
<p>My result is much larger than this number. How come? After looking through the comments on the previous post prior to its suspension, I believe that it is due to a floating-point error.</p>
<p>I assume that my code generates numbers that are borderline integers which Python falsely takes for integers. That would explain why my result is much larger than the correct one. I have observed that Python does this when the number of leading zeros after the decimal point exceeds 15 (e.g., 3.0000000000000005 is kept as 3.0000000000000005, whereas 3 followed by more than fifteen zeros is taken as 3.0). If there were a way to change this setting, my method could work. Do you agree? I have thought that the <strong>decimal</strong> module could prove useful here, but I am not sure how to utilize it for this purpose.</p>
<p>This is my code:</p>
<pre><code>sum_of_p=0
for i in range(2,333333334):
if i%(5*10**6)==0:
print(i)
h=(i**2-((i+1)*0.5)**2)**0.5
if int(h)==h:
a=0.5*(i+1)*h
if int(a)==a:
sum_of_p+=3*i+1
h=(i**2-((i-1)*0.5)**2)**0.5
if int(h)==h:
a=0.5*(i-1)*h
if int(a)==a:
sum_of_p+=3*i-1
print(sum_of_p)
</code></pre>
|
<p>I assume that using floats is not a good idea for an integer-valued problem. Here is the solution that I have found. If your version of Python is below 3.8, you will have to use the slower <code>is_square_</code> function instead.</p>
<pre><code>import math
def is_square_(apositiveint):
# Taken from:
# https://stackoverflow.com/questions/2489435/check-if-a-number-is-a-perfect-square
x = apositiveint // 2
seen = set([x])
while x * x != apositiveint:
x = (x + (apositiveint // x)) // 2
if x in seen: return False
seen.add(x)
return True
def is_square(i: int) -> bool:
return i == math.isqrt(i) ** 2
def check(a, b, c):
    """ return perimeter if area of triangle with sides of lengths a,b,c is integer """
perimeter = a + b + c
if perimeter % 2 == 1:
        # perimeter should be even
return 0
p = perimeter // 2
# Use Heron's formula
H = p*(p-a)*(p-b)*(p-c)
if is_square(H):
return perimeter
return 0
sum_of_p = 0
max_i = 1000000000 // 3
for i in range(2, max_i + 1):
if i % (10**5) == 0:
print(i*100 / max_i )
sum_of_p += check(i, i, i+1)
sum_of_p += check(i, i, i-1)
print(sum_of_p)
</code></pre>
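The float pitfall is easy to demonstrate with a value chosen to sit just above the 53-bit float limit (a hypothetical number, not one from the problem): the question's float-based <code>int(h) == h</code> test can report a non-square as square, while <code>math.isqrt</code> never does.

```python
import math

n = 10**16 + 1                  # NOT a perfect square
h = n ** 0.5                    # float sqrt silently rounds n to 1e16, giving exactly 1e8

print(int(h) == h)              # True  -- the float test wrongly says "square"
print(math.isqrt(n) ** 2 == n)  # False -- exact integer arithmetic gets it right
```

This is why the accumulated sum comes out too large: some heights and areas that are not really integers pass the float test and get counted.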
|
python|precision|trigonometry
| 1 |
1,903,960 | 66,337,717 |
TensorFlow / Keras Error : dlerror: cudart64_101.dll not found
|
<p>I wrote a program using Keras. When I run the program it crashes with an error as seen below:</p>
<pre><code>2021-02-23 18:50:50. : W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2021-02-23 18:50:50. : I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "C:/Users/USER/PycharmProjects/Sofia/main.py", line 26, in <module>
X = dataset[:,0:8]
File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\frame.py", line 3024, in __getitem__
indexer = self.columns.get_loc(key)
File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\indexes\base.py", line 3080, in get_loc
return self._engine.get_loc(casted_key)
File "pandas\_libs\index.pyx", line 70, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 75, in pandas._libs.index.IndexEngine.get_loc
TypeError: '(slice(None, None, None), slice(0, 8, None))' is an invalid key
</code></pre>
<p>Here is my code:</p>
<pre><code># Develop Neural Network with Keras
# Load Libraries
# first neural network with keras tutorial
from keras.models import Sequential
from keras.layers import Dense
import numpy
from pandas import read_csv
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# Load Data
dataset = read_csv('pima-indians-diabetes.data.csv', delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
# Define Keras Model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile Keras Model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit Keras Model
model.fit(X, y, epochs=150, batch_size=10)
# Evaluate Keras
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
</code></pre>
<p>How would I go about fixing this error?</p>
|
<p>You probably haven't installed the NVIDIA GPU Computing Toolkit, your paths are not configured correctly, or your TensorFlow version requires a different CUDA version. You can also use the CPU build of TensorFlow if you don't want to (or can't) set up your GPU, e.g. because you don't have one. Note, though, that the cudart dlerror line is only a warning and TensorFlow falls back to the CPU; the traceback that actually stops your script is the pandas indexing error raised by <code>dataset[:,0:8]</code>.</p>
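Separately from the CUDA warning, the <code>TypeError</code> in the traceback comes from using NumPy-style slicing <code>dataset[:,0:8]</code> on a pandas DataFrame; positional slicing needs <code>.iloc</code> (and <code>.values</code> gives Keras plain NumPy arrays). A minimal sketch, with a toy DataFrame standing in for the CSV (the 8-features-plus-label column layout is taken from the question's code):

```python
import pandas as pd

# Toy frame standing in for the diabetes CSV (9 columns: 8 features + label).
dataset = pd.DataFrame([[1, 2, 3, 4, 5, 6, 7, 8, 0],
                        [9, 8, 7, 6, 5, 4, 3, 2, 1]])

# dataset[:, 0:8] raises TypeError on a DataFrame; use .iloc for
# positional slicing, then .values for plain NumPy arrays:
X = dataset.iloc[:, 0:8].values
y = dataset.iloc[:, 8].values

print(X.shape)      # (2, 8)
print(y.tolist())   # [0, 1]
```

In the original script, the same two <code>.iloc</code> lines applied to the frame returned by <code>read_csv</code> should remove the crash.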
|
python|tensorflow|keras|deep-learning|neural-network
| 0 |
1,903,961 | 66,097,482 |
SyntaxError: invalid syntax: python -c "import numpy"
|
<p>I have a <code>sample.py</code> python file, which contains this line:</p>
<pre><code>python -c "import numpy"
</code></pre>
<p>Run Command:</p>
<pre class="lang-sh prettyprint-override"><code>$ python sample.py
</code></pre>
<p>When executed, the script got the error:</p>
<blockquote>
<p>SyntaxError: invalid syntax</p>
</blockquote>
<p>Can anyone help me what is the issue with this line?</p>
|
<p>The content is not Python; it's a shell command.</p>
<p>The content of sample.py should be just</p>
<pre><code>import numpy
</code></pre>
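If the intent was actually to run that shell command from inside a Python script, the <code>subprocess</code> module can do it. A sketch (probing a standard-library module instead of numpy so it runs anywhere):

```python
import subprocess
import sys

# Run `python -c "import json"` as a child process and check it succeeded.
result = subprocess.run([sys.executable, '-c', 'import json'])
print(result.returncode)  # 0 means the import worked
```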
|
python|python-3.x|linux
| 0 |
1,903,962 | 69,122,943 |
Local development with serverless framework
|
<p>I have a microservices project using <code>Serverless Framework</code> that has the following structure:</p>
<pre><code>project
└───service1
│ │ handlers.py
│ │ serverless.yml
│ │ requirements.txt
| | package.json
└───service2
└───service3
└───service4
</code></pre>
<p>Each folder is a microservice and each microservice has its own serverless.yml configuration file.</p>
<p>I would like to know what is the best way to run the project in a totally local way.</p>
<p>I've already tried using the <code>serverless-offline</code> plugin, but it only runs one microservice at a time offline.</p>
<p>I've read a bit about creating an AWS virtual environment with localstack, but I don't know how it would actually help me.</p>
<p>I would like a tip, an article or any information that can help me run these microservices locally.</p>
<p>PS.: I'm using <code>python</code></p>
|
<p>I would keep a single serverless.yml inside ./project,
and define 4 functions in it, with each handler pointing to the corresponding handlers.py; this way you'll still have 4 lambdas.</p>
<p>You can then run serverless-offline with no problem and still have 4 microservices.</p>
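A sketch of what that single file could look like (the service name, runtime, and handler entry points are assumptions, not taken from your project):

```yaml
service: project

provider:
  name: aws
  runtime: python3.9

functions:
  service1:
    handler: service1/handlers.main
  service2:
    handler: service2/handlers.main
  service3:
    handler: service3/handlers.main
  service4:
    handler: service4/handlers.main
```

With a layout like this, a single <code>serverless offline</code> invocation should expose all four functions locally at once.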
|
python|serverless-framework|serverless|aws-serverless|serverless-architecture
| 0 |
1,903,963 | 69,137,586 |
Convert a Column of a Dataframe using Shapely
|
<p>I'm trying to convert a whole column of a DataFrame to a geometry column using shapely.</p>
<p>After reading the file into pandas, I convert one value of the column using the next code:</p>
<pre><code>from shapely.wkb import loads
geometry=loads('0101000020E61000006A6AD95A5FCC58C0A6272CF1807A3340', hex=True)
geometry.wkt
</code></pre>
<p>and the output is <code>'POINT (-99.19332 19.47853)'</code></p>
<p>but I need to convert the whole column and I don't know how.</p>
|
<p>Use <code>apply</code>, which forwards extra keyword arguments to the function:</p>
<p>Input data:</p>
<pre><code>>>> df
data
0 0101000000715AF0A2AF064140D6E59480988F5DC0
1 0101000000B610E4A0845D44401FF64201DB7B52C0
2 0101000000BE67244223E04740FFE7305F5E2F5EC0
</code></pre>
<pre><code>df['geometry'] = df['data'].apply(loads, hex=True)
</code></pre>
<p>Output result:</p>
<pre><code>>>> df
data geometry
0 0101000000715AF0A2AF064140D6E59480988F5DC0 POINT (34.052235 -118.243683)
1 0101000000B610E4A0845D44401FF64201DB7B52C0 POINT (40.73061 -73.935242)
2 0101000000BE67244223E04740FFE7305F5E2F5EC0 POINT (47.751076 -120.740135)
</code></pre>
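The keyword forwarding can be demonstrated without shapely installed; here <code>int</code> with <code>base=16</code> is a toy stand-in for <code>loads</code> with <code>hex=True</code>, but the mechanism is the same:

```python
import pandas as pd

# Series.apply passes extra keyword arguments straight through to the function.
s = pd.Series(['ff', '0a', '10'])
parsed = s.apply(int, base=16)   # keyword forwarded to int()
print(parsed.tolist())           # [255, 10, 16]
```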
|
python|pandas|shapely
| 0 |
1,903,964 | 68,327,251 |
How can I remove @everyone in user roles? (Discord.py)
|
<p>I'm currently updating my userinfo command, but I still have a problem that I don't know how to fix. My question: how can I remove @everyone from the roles? (You can see what I mean in the picture.)</p>
<p><a href="https://i.stack.imgur.com/BoMv1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BoMv1.png" alt="enter image description here" /></a></p>
<p>Code:</p>
<pre><code>roles = [role for role in member.roles]
embed.add_field(name=f'Roles ({len(roles)}):', value="".join(
[role.mention + "|" for role in roles]), inline=False)
</code></pre>
<p>Code Image:
<a href="https://i.stack.imgur.com/1KY9H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1KY9H.png" alt="enter image description here" /></a></p>
<p>I would be very grateful if someone could help me.</p>
|
<p>You can skip the first role, which is always <code>@everyone</code>, like this:</p>
<pre class="lang-py prettyprint-override"><code>roles = [role for role in member.roles[1:]]
embed.add_field(name=f'Roles ({len(roles)}):',
value="".join([role.mention + "|" for role in roles]),
inline=False)
</code></pre>
|
python|list|discord|discord.py|roles
| 0 |
1,903,965 | 68,084,166 |
python prepared requests - removing unwanted header
|
<p>I'm having trouble with this code chunk:</p>
<pre><code>with requests.Session() as s:
_hs = s.headers
req = requests.Request('POST', url, data=json.dumps(data), headers=headers)
prepared_req = req.prepare()
if 'Content-Length' in prepared_req.headers:
prepared_req.headers.pop('Content-Length')
rsp = s.send(prepared_req, timeout=self._TIMEOUT)
try:
rsp.raise_for_status()
except requests.HTTPError:
self._logger.exception("error in retrieving response from %s -- response content: %s",
url, rsp.content)
raise
return rsp.json()
</code></pre>
<p><code>Content-Length</code> is correctly removed from the PreparedRequest headers; however, during <code>send</code> something goes wrong:</p>
<pre><code>Traceback (most recent call last):
File "/opt/projects/MyProj/my-http-client/my_http_client/http_client.py", line 297, in _http_post
rsp = s.send(prepared_req, timeout=self._TIMEOUT)
File "/home/user/venvs/my-http-client-venv/lib/python3.8/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/home/user/venvs/my-http-client-venv/lib/python3.8/site-packages/requests/adapters.py", line 472, in send
low_conn.send(i)
File "/usr/lib/python3.8/http/client.py", line 975, in send
self.sock.sendall(d)
TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>same thing happens if I remove the header with <code>del</code>:</p>
<pre><code>del prepared_req.headers['Content-Length']
</code></pre>
<p>anyone does know what's wrong? without the <code>headers.pop</code>, everything runs fine.</p>
|
<p>As per this answer to a related <a href="https://stackoverflow.com/questions/42612002/python-sockets-error-typeerror-a-bytes-like-object-is-required-not-str-with/42612820">question</a>, in Python 3 all strings are Unicode by default and need to be converted to bytes before being sent over a network.</p>
<p>I suspect the issue is with <code>data=json.dumps(data)</code>, as dumping the data converts it to a string, not bytes:</p>
<pre><code>In [1]: data=json.dumps(dict(a='somestring'))
In [2]: data
Out[3]: '{"a": "somestring"}'
In [4]: type(data)
Out[5]: str
</code></pre>
<p>The solution is to encode the string to bytes:</p>
<pre><code>In [6]: data=json.dumps(dict(a='somestring')).encode('utf-8')
In [7]: data
Out[8]: b'{"a": "somestring"}'
In [9]: type(data)
Out[10]: bytes
</code></pre>
|
python|python-requests|http-headers|urllib3
| 0 |
1,903,966 | 72,861,043 |
Python Trouble with matrix pathfinding (DFS)
|
<p>I am having issues with dfs, probably from a RecursionError when facing a wall.</p>
<p>Instead of continuously running an attempt which can only lead to a wall, it is supposed to return to its previous position and try another path.</p>
<p>Also, it leans heavily on the order in which attempts are made (N,S,E,W or other combinations)</p>
<p>Thanks in advance</p>
<pre class="lang-py prettyprint-override"><code>def portal(M,l,c): # locates current portal and changes position (if entering portal 1, go to portal 2)
for elem in D.keys():
x = 1
if elem == M[l][c]:
if [l,c] == D[elem][0] and x: [l,c] = D[elem][1]; x -=1
if [l,c] == D[elem][1] and x: [l,c] = D[elem][0]; x-=1
return [l,c]
def dfs(Q,p):
l, c = Q[-1] # previous position
if [l,c] == COORD: return True
for dir in [(l-1, c,1), (l, c - 1,2), (l , c+1,3), (l+1 ,c,4)]: # dir = (l,c,p) = next row, next column and a number that represents the direction taken (1 = north)
j=1
if p:
if dir[2] == p: # if passing through a portal, p should not update, or else it would attempt to move in a direction it's not intended to
nextl,nextc = dir[:2]
print(nextl,nextc,l,c)
else: j = 0 # p is not 0 and no move happens (j = 0)
else: # if no portal has been used: try all possible directions
nextl, nextc, p = dir
if j:
if len(M[0]) > c >= 0 and 0 <= l < len(M): # if inbounds
if M[nextl][nextc] in D: # if the next move lands in a portal, change coordinates using the function portal()
Q.append((nextl,nextc))
nextl,nextc = portal(M,nextl,nextc)
if M[nextl][nextc] and ((nextl, nextc) not in Q) and (M[nextl][nextc] not in D):
# if there is no wall on the next position and it has not been visited yet, call dfs again for the next position
Q.append((nextl, nextc))
if dfs(Q,0):
return True
else:
Q.pop()
elif M[nextl][nextc] and (M[nextl][nextc] in D) and ((nextl, nextc) not in Q):
# if a portal has been used, the next move should keep previous p (direction). Therefore, the function call is different, to prevent it from attempting to move in all 4 directions.
Q.append((nextl,nextc))
if dfs(Q,p):
return True
else:
Q.pop()
elif M[l][c] not in D: p = 0
# resets p if no move is possible ( allows it to gain a new direction from the for loop above)
else: p = 0
M = [];L = int(input())
for _ in range(L): M.append(input().split()) # Matrix input
for i in M:
for h in i:
if h == "#": M[M.index(i)][i.index(h)] = 0
elif h == ".": M[M.index(i)][i.index(h)] = 1 # Replaces path with 1 (easier to write)
l0, c0 = (int(i) for i in input().split())
queue = [(l0, c0)] # initial position
COORD = 0
D={};lineP=-1
for LINE in M: # locates portals and assigns to the corresponding letter both coordinates
colP = -1; lineP+=1
for COL in LINE:
colP+=1
if COL not in ["*",0,1]:
if COL in D: D[COL].append([lineP,colP])
else: D[COL] = [[lineP,colP]]
if COL == "*": COORD=[lineP,colP] # locates the destination (marked with "*")
if dfs(queue,0):
print("Success")
else:
print("Failure")
</code></pre>
|
<p>I created a script that finds one path that reaches <code>goal</code>. If no such path exists, it prints that the task is impossible.</p>
<p>Note: the algorithm does not search all possible paths. Once it finds one, it returns that path. Consequently, it does not return the shortest path to the goal; it returns a path if one exists, and an empty list otherwise.</p>
<p>The return value of <code>dfs_pathfinder</code> is a boolean telling whether a path was found. To retrieve the path itself, pass a list as the <code>path</code> argument and let the function fill it by reference.</p>
<p>I tried to explain every single line of the script with comments. If you don't understand something, or hit an unexpected bug, feel free to post it in the comments of this answer.</p>
<pre><code># Swap rows by columns of a grid.
#
# Useful for printing on terminal only. Or, if you
# need to transpose the input grid.
def transpose(grid):
tgrid = []
ncols = range(len(grid[0]))
nrows = range(len(grid))
for j in ncols:
tgrid.append([])
for i in nrows:
tgrid[-1].append(grid[i][j])
return tgrid
# Handles the file input, loading:
# 1. The Character's starting position
# 2. The Grid itself
# 3. The Goal you want to reach
# 4. The List of Portals
def handle_input(filename):
portals = {}
goal = (-1, -1)
# Try to open the file and read all its lines
try:
file = open(filename, 'rt')
except:
print('Error: Couldn\'t open ' + filename)
else:
lines = file.readlines()
file.close()
# Get the number of rows
nrows = int(lines[0][:-1])
# Mount the grid based on the file data
# Append all portals
# Find the goal's position
grid = []
for nr in range(nrows):
row = lines[nr+1][:-1].split(' ')
for co in range(len(row)):
if (row[co] != '.' and row[co] != '#' and row[co] != '*'):
if (row[co] in portals):
if (len(portals[row[co]]) == 2):
print("Warning: There are 3 portals of same kind: ", row[co])
portals[row[co]].append((nr, co))
else:
portals[row[co]] = [(nr, co)]
elif (row[co] == '*'):
goal = (nr, co)
grid.append(row)
# Retrieve the starting position
pos = lines[nrows+1][:-1].split(' ')
pos = (int(pos[0]), int(pos[1]))
# Check if the goal exists
if (goal[0] == -1 or goal[1] == -1):
print('Warning: No goal * is set')
return grid, pos, goal, portals
# Eases the way of retrieving the current tile character from the grid, given
# a position pos: (x, y)
#
# If a position outside of the grid is requested, return an empty character.
def get(grid, pos):
if (pos[0] >= 0 and pos[1] >= 0 and pos[0] < len(grid) and pos[1] < len(grid[pos[0]])):
return grid[pos[0]][pos[1]]
else:
return ''
# Each portal is a pair of positions: [(a, b), (c, d)]
#
# This means that portal1 is at (a, b)
# and portal2 is at (c, d)
#
# Calling enterPortal(portal, (a, b)) -> (c, d)
# Calling enterPortal(portal, (c, d)) -> (a, b)
def enterPortal(portal, pos):
if (portal[0][0] == pos[0] and portal[0][1] == pos[1]):
return portal[1]
else:
return portal[0]
# This function does not measures the best path. Nor, it searches all
# possible paths available to goal.
#
# It returns the first path to goal that is found. Otherwise, an empty
# path is returned, signalizing that the goal is unreachable.
def dfs_pathfinder(grid, pos, goal, portals, path, reached):
char = get(grid, pos)
reached.append(pos)
# Still walking
if (char == '.'):
# Store all movement choices
movement = [(-1, 0), (1, 0), (0, 1), (0, -1)]
# Try all movement choices
for mv in movement:
# Get a hint of next position
nextPos = (pos[0] + mv[0], pos[1] + mv[1])
# If the next position is not reached yet, continue
if (nextPos not in reached):
# Append current position to path
path.append(pos)
# And check if you are in the correct path
if (dfs_pathfinder(grid, nextPos, goal, portals, path, reached)):
# If you reached goal, signalize to other levels of recursion
# that you found it.
return True
else:
# If you're not, pop current position from path and try again
path.pop()
# Didn't find the goal. Giving up.
return False
# Stepped on a portal
elif (char in portals):
# Portals are unreachable. They teleport the character.
#
# However, we do append their position into path list.
reached.pop()
# Get the other end of the current portal
other = enterPortal(portals[char], pos)
# Get previous position, which will be used to calculate
# which direction the character should face when stepping
# outside of the portal exit
prev = path[-1]
# Check which position you came from
diff = (prev[0] - pos[0], prev[1] - pos[1])
# Get position you will face when exiting the portal
# @ -> P ... O -> .
# . <- O ... P <- @
move = (-diff[0], -diff[1])
# Calculate a hint of next position on other side of the portal
nextPos = (other[0] + move[0], other[1] + move[1])
# If the hint position is not reached yet:
if (nextPos not in reached):
path.append(pos) # Stepped on portal
path.append(other) # Stepped on other side of portal
# Check if you're on the correct path to goal
if (dfs_pathfinder(grid, nextPos, goal, portals, path, reached)):
# You are. Signalize to other levels of recursion that you
# found the goal.
return True
else:
path.pop() # pop other side of portal
path.pop() # pop current portal
# You're wrong. Try again.
# Didn't find the goal. Giving up.
return False
# Reached goal
elif (char == '*'):
path.append(pos) # Append goal to path.
# Signalize to other levels of recursion that you found goal.
return True
# Cannot reach goal anymore. You are on a non-walkable tile.
else:
# Giving Up.
return False
if __name__ == '__main__':
# Step 1: Extract data
grid, pos, goal, portals = handle_input('file.txt')
# Step 2: Call Pathfinder
path = []
if (dfs_pathfinder(grid, pos, goal, portals, path, [])):
print("A Path was found: ", path)
else:
print("No Path was found. Task is impossible.")
# Step 3: Show the path inside the grid using counters
k = 0
for pos in path:
grid[pos[0]][pos[1]] = str(k)
k += 1
# Step 4: Print the grid and the path the character must
# go through to reach the goal
print()
grid = transpose(grid)
gout = ''
for j in range(len(grid[0])):
for i in range(len(grid)):
gout += ((' ' + get(grid, (i, j)) + ' ') if (len(get(grid, (i, j))) == 1) else (' ' + get(grid, (i, j)) + ' '))
gout += '\n'
print('The Grid: ')
print(gout)
print()
</code></pre>
<p>The script gets input from a file. In this case, <code>file.txt</code>, which contents are:</p>
<pre><code>7
# # * # # # # # #
. . . . T . . . .
# # # # # # # # .
. . . T . . . . .
Q . . . . # # # #
. . . . . Q . . .
. . . # # # . . .
5 1
</code></pre>
<p>After testing the script, I got this output:</p>
<pre><code>A Path was found: [(5, 1), (4, 1), (3, 1), (3, 2), (4, 2), (5, 2), (5, 3), (4, 3), (4, 4), (3, 4), (3, 3), (1, 4), (1, 3), (1, 2), (0, 2)]
The Grid:
# # 14 # # # # # #
. . 13 12 11 . . . .
# # # # # # # # .
. 2 3 10 9 . . . .
Q 1 4 7 8 # # # #
. 0 5 6 . Q . . .
. . . # # # . . .
</code></pre>
<p>The starting position is at <code>0</code> position, and <code>14</code> is the <code>goal</code>. The tiles <code>10</code> and <code>11</code> are both ends of the portal <code>T</code>, which tells that the algorithm used the portal <code>T</code> to reach the <code>goal</code>.</p>
|
python|python-3.x|recursion|depth-first-search|breadth-first-search
| 0 |
1,903,967 | 63,045,429 |
Boxplot visualization
|
<p>So I have to make this boxplot, and I want to limit which values from a column in the dataset are shown, but I don't know how to do that. <a href="https://i.stack.imgur.com/iGGAj.png" rel="nofollow noreferrer">This is what I have for now</a>. I want to pick the top ten nationalities that are in the column, but I cannot figure out how to do it.</p>
|
<p>If I understand your question correctly, this should work for a DataFrame called <code>df</code> with a nationality column called <code>Nationality</code>:</p>
<pre><code>import collections
counts = collections.Counter(df.Nationality)
top10countries = [elem for elem, _ in counts.most_common(10)]
df_top10 = df[df['Nationality'].isin(top10countries)]
</code></pre>
<p>and then use <code>df_top10</code> to make boxplots.</p>
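If you prefer staying inside pandas, <code>value_counts</code> gives the same top-N filter in one step. A sketch on toy data (the column name <code>Nationality</code> is assumed; use <code>head(10)</code> for your top ten):

```python
import pandas as pd

df = pd.DataFrame({'Nationality': ['BR', 'BR', 'AR', 'FR', 'BR', 'AR']})

# value_counts sorts by frequency, so head(N).index is the top-N categories.
top = df['Nationality'].value_counts().head(2).index   # top-2 here for brevity
df_top = df[df['Nationality'].isin(top)]

print(sorted(df_top['Nationality'].unique()))  # ['AR', 'BR']
```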
|
python|data-visualization|boxplot
| 0 |
1,903,968 | 62,303,604 |
100% training and valuation accuracy, tried gradient clipping too
|
<p>I always get 100% training and validation accuracy. Here's how it looks:</p>
<pre><code>Epoch 17/20
27738/27738 [==============================] - 228s 8ms/step - loss: 4.1600e-05 - accuracy: 1.0000 - val_loss: 4.6773e-05 - val_accuracy: 1.0000
Epoch 18/20
27738/27738 [==============================] - 229s 8ms/step - loss: 3.6246e-05 - accuracy: 1.0000 - val_loss: 4.0900e-05 - val_accuracy: 1.0000
Epoch 19/20
27738/27738 [==============================] - 221s 8ms/step - loss: 3.1839e-05 - accuracy: 1.0000 - val_loss: 3.6044e-05 - val_accuracy: 1.0000
Epoch 20/20
27738/27738 [==============================] - 7616s 275ms/step - loss: 2.8176e-05 - accuracy: 1.0000 - val_loss: 3.1987e-05 - val_accuracy: 1.0000
</code></pre>
<p>Here's the whole code for the process:</p>
<pre><code>encoder_input_sequences = pad_sequences(input_integer_seq, maxlen=max_input_len)
decoder_input_sequences = pad_sequences(output_input_integer_seq, maxlen=max_out_len, padding='post')
import numpy as np
read_dictionary = np.load('/Users/Downloads/wordvectors-master/hinvec.npy',allow_pickle='TRUE').item()
num_words = min(MAX_NUM_WORDS, len(word2idx_inputs) + 1)
embedding_matrix = np.zeros((num_words, EMBEDDING_SIZE))
for word, index in word2idx_inputs.items():
embedding_vector = read_dictionary.get(word)
if embedding_vector is not None:
embedding_matrix[index] = embedding_vector
embedding_layer = Embedding(num_words, EMBEDDING_SIZE, weights=[embedding_matrix], input_length=max_input_len)
decoder_targets_one_hot = np.zeros((
len(input_sentences),
max_out_len,
num_words_output
),
dtype='float32'
)
decoder_targets_one_hot.shape
encoder_inputs_placeholder = Input(shape=(max_input_len,))
x = embedding_layer(encoder_inputs_placeholder)
encoder = LSTM(LSTM_NODES, return_state=True)
encoder_outputs, h, c = encoder(x)
encoder_states = [h, c]
decoder_inputs_placeholder = Input(shape=(max_out_len,))
decoder_embedding = Embedding(num_words_output, LSTM_NODES)
decoder_inputs_x = decoder_embedding(decoder_inputs_placeholder)
decoder_lstm = LSTM(LSTM_NODES, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs_x, initial_state=encoder_states)
###########################from here I add activation function and apply some parameters:
decoder_dense = Dense(num_words_output, activation='sigmoid')
decoder_outputs = decoder_dense(decoder_outputs)
opt = keras.optimizers.Adam(learning_rate=0.0001, clipvalue=1.0)
model = Model([encoder_inputs_placeholder,
decoder_inputs_placeholder], decoder_outputs)
model.compile(
optimizer=opt,
loss='binary_crossentropy',
metrics=['accuracy']
)
history = model.fit(
[encoder_input_sequences, decoder_input_sequences],
decoder_targets_one_hot,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_split=0.1,
)
plt.plot(history.history['accuracy'])
plt.show()
</code></pre>
<p>EDIT:
I changed the following piece of code:</p>
<pre><code>decoder_targets_one_hot.shape
############################ Added this
decoder_output_sequences = pad_sequences(output_integer_seq, maxlen=max_out_len, padding='post')
for i, d in enumerate(decoder_output_sequences):
for t, word in enumerate(d):
decoder_targets_one_hot[i, t, word] = 1
#############################
encoder_inputs_placeholder = Input(shape=(max_input_len,))
</code></pre>
<p>I think this is the right approach, but I'm still getting 100% accuracy. Is this the correct way to implement it? By the way, here's the link to the tutorial I'm following, which shows the expected output; the only difference is that my dataset is eng-hin instead of eng-fra: <a href="https://stackabuse.com/python-for-nlp-neural-machine-translation-with-seq2seq-in-keras/" rel="nofollow noreferrer">https://stackabuse.com/python-for-nlp-neural-machine-translation-with-seq2seq-in-keras/</a></p>
|
<p>You initialize <code>decoder_targets_one_hot</code> as vectors of zeros but never set the index of the true class to <code>1</code>, so the target vectors are not actually one-hot: the model learns the same target for every input, namely the all-zeros vector. Note also that with a <code>sigmoid</code> output and <code>binary_crossentropy</code>, Keras reports element-wise binary accuracy, which will sit near 100% whenever the model predicts mostly zeros over a large vocabulary; for one-hot targets like these, the usual choice is a <code>softmax</code> output with <code>categorical_crossentropy</code>.</p>
|
python|machine-learning|keras|deep-learning|neural-network
| 1 |
1,903,969 | 62,124,265 |
Text comparison based on numbers/digits
|
<p>I would need to compare texts by extracting only numbers from the following two texts: </p>
<pre><code>text_1="source="The previous low was 27,523, recorded in May 1900. The 1.35 trillion ($22.5 million ) program could start in October. The number of people who left the country plunged 99.8 percent from a year earlier to 2,750, according to the data from the agency."
text_2="The subsidies, totalling 1.35tn, are expected to form part of a second budget. New plans to allocate $22.5 billion to a new reimbursement programme."
</code></pre>
<p>However, it seems also to be relevant the next words (for example trillion /tn, billion).
Do you know how I could get this information?</p>
<p>I have tried with</p>
<pre><code>t_1=[int(s) for s in text_1.split() if s.isdigit()]
t_2=[int(s) for s in text_2.split() if s.isdigit()]
</code></pre>
<p>then to compare them, but it gives me not all numbers in texts. </p>
<p>Expected output: </p>
<pre><code>differences
text_1: {27,523, 1900, 99.8, 2,750}
text_2: {}
common
{1.35,22.5}
</code></pre>
|
<p>It is not impossible to it do the way you propose, but that is best achieved with regular expressions:</p>
<pre><code>import re
text_1="The previous low was 27,523, recorded in May 1900. The 1.35 trillion ($22.5 million ) program could start in October. The number of people who left the country plunged 99.8 percent from a year earlier to 2,750, according to the data from the agency."
print(re.findall("\d+[,.\d]\d+", text_1))
</code></pre>
<p>In case you are not familiar with it, check <a href="https://www.rexegg.com/regex-quickstart.html" rel="nofollow noreferrer">cheatsheet</a> and try it with <a href="https://regex101.com/" rel="nofollow noreferrer">tester</a>. Once you got that, it is straight forward to get your expected output:</p>
<pre><code>nums_1 = re.findall("\d+[,.\d]\d+", text_1)
nums_2 = re.findall("\d+[,.\d]\d+", text_2)
common_nums = []
for num in nums_1:
if num in nums_2: common_nums.append(num)
print(common_nums)
</code></pre>
|
python|text-mining
| 0 |
1,903,970 | 73,513,397 |
Easiest way to clean email address in python
|
<p>I am having issues with email addresses; with a small correction, they can be converted into valid email addresses.</p>
<p>For Ex:</p>
<pre><code>%20adi@gmail.com, --- Not valid
'sam@tell.net, --- Not valid
(hi@telligen.com), --- Not valid
(gii@weerte.com), --- Not valid
:qwert34@embright.com, --- Not valid
//24adifrmaes@microsot.com --- Not valid
tellei@apple.com --- valid
...
</code></pre>
<p>I could write if/else checks, but every time a new address arrives with a new issue I would need another branch.</p>
<p>What is the best way to clean all these small issues: some Python package, or a regex? Please suggest.</p>
|
<p>You can do this (I basically check whether each character in the email is alphanumeric, a dot, or an @, and remove it if not):</p>
<pre><code>emails = [
'sam@tell.net',
'(hi@telligen.com)',
'(gii@weerte.com)',
':qwert34@embright.com',
'//24adifrmaes@microsot.com',
'tellei@apple.com'
]
def correct_email_format(email):
return ''.join(e for e in email if (e.isalnum() or e in ['.', '@']))
for email in emails:
corrected_email = correct_email_format(email)
print(corrected_email)
</code></pre>
<p>output:</p>
<pre><code>sam@tell.net
hi@telligen.com
gii@weerte.com
qwert34@embright.com
24adifrmaes@microsot.com
tellei@apple.com
</code></pre>
|
python|email|data-cleaning
| 2 |
1,903,971 | 31,471,822 |
How django cache RawQuerySet
|
<p>I hit a memory problem when executing a huge RawQuerySet in Django, and gc.collect() does not release the memory after the query. I checked the Django code and found this code snippet <a href="https://github.com/django/django/blob/stable/1.6.x/django/db/models/query.py#L1391-L1396" rel="nofollow">https://github.com/django/django/blob/stable/1.6.x/django/db/models/query.py#L1391-L1396</a>:</p>
<pre><code> # Cache some things for performance reasons outside the loop.
db = self.db
compiler = connections[db].ops.compiler('SQLCompiler')(
self.query, connections[db], db
)
need_resolv_columns = hasattr(compiler, 'resolve_columns')
</code></pre>
<p>But I cannot understand how Django caches anything here; it seems to just set up the columns. My question is: how does Django cache things in this code snippet? Thank you very much.</p>
<p>Update:</p>
<p>Thanks to @bruno-desthuilliers for the help, but I found the true reason: MySQLdb's cursor. <a href="https://github.com/PyMySQL/mysqlclient-python/blob/master/MySQLdb/cursors.py#L533-L534" rel="nofollow">https://github.com/PyMySQL/mysqlclient-python/blob/master/MySQLdb/cursors.py#L533-L534</a> Django can only use a store-result cursor, which fetches the entire result set and stores it in memory. And as @bruno-desthuilliers says, the comment is misleading here: there are no real cache operations.</p>
|
<p>The "caching" term is a bit confusing here. There's no real "caching" involved, only the creation of local variables to store loop invariants, so as to avoid any attribute lookup, call, or other work for these names in the following code.</p>
<p>FWIW, there's not much in terms of "caching" in <code>RawQuerySet</code>... Neither <a href="https://github.com/django/django/blob/stable/1.6.x/django/db/models/query.py#L1382" rel="nofollow"><code>RawQuerySet.__iter__()</code></a> - which only iterates over the <code>RawQuery</code>, processes the raw results and yields them - nor <a href="https://github.com/django/django/blob/stable/1.6.x/django/db/models/sql/query.py#L70" rel="nofollow"><code>RawQuery.__iter__()</code></a> - caches anything. At this point the only potential cause of high memory consumption is the <a href="https://github.com/django/django/blob/stable/1.6.x/django/db/models/sql/query.py#L77" rel="nofollow">conditional call to <code>list(self.cursor)</code></a>, which is not a cache but can indeed eat quite some space on a huge dataset. </p>
<p>As a side note: <a href="http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm" rel="nofollow">Python processes tend to keep most of the already allocated memory</a>. </p>
|
python|django
| 0 |
1,903,972 | 31,327,315 |
A country that is not Canada
|
<p>For my code below in Python: how do I make it so that if the country is not Canada, it just prints the total_price with no tax? Right now, if I enter USA it gives me the right price, but it also gives me the prices for the province branches that shouldn't apply.</p>
<pre><code>country = raw_input('What country are you from? ').lower()
if country == 'canada':
total_price = int(raw_input('What was your total price? '))
province = raw_input('What province are you from? ').lower()
elif country != 'canada':
total_price = int(raw_input('What was your total price? '))
if province == 'alberta':
total_alberta = (total_price * .00005) + total_price
print 'Your total price is ' + str(total_alberta)
if province == 'ontario' or province == 'new brunswick'\
or province == 'nova scotia':
total_onn = (total_price * .0013) + total_price
print 'Your total price is ' + str(total_onn)
if country == 'canada' and province != 'ontario' and province != 'new brunswick' and province != 'nova scotia' and province != 'alberta':
total_else = ((total_price * .0006) + (total_price * .0005)) \
+ total_price
print 'Your total price is ' + str(total_else)
else:
print 'Your total price is ' + str(total_price)
</code></pre>
|
<p>A clean and Pythonic version; the original logic and ifs were poorly nested:</p>
<pre><code>base_canada_tax = 0.13
provinces = {'alberta': 0.05, 'ontario': base_canada_tax, 'new brunswick': base_canada_tax, 'nova scotia': base_canada_tax}
country = raw_input('What country are you from? ').lower()
total_price = int(raw_input('What was your total price? '))
if country == 'canada':
province_in = raw_input('What province are you from? ').lower()
total_price *= 1 + provinces.get(province_in, base_canada_tax)
print 'Your total price is {0}'.format(total_price)
</code></pre>
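<p>The key piece is <code>dict.get</code> with a default, which is what replaces the whole if/elif chain (same rates as above, shown in isolation):</p>

```python
# Looking up a province's rate, falling back to the base rate for any
# province that is not listed explicitly in the dict.
base_canada_tax = 0.13
provinces = {'alberta': 0.05, 'ontario': base_canada_tax,
             'new brunswick': base_canada_tax, 'nova scotia': base_canada_tax}

print(provinces.get('alberta', base_canada_tax))   # 0.05
print(provinces.get('manitoba', base_canada_tax))  # 0.13 (falls back to the default)
```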
|
python|if-statement
| 2 |
1,903,973 | 15,548,506 |
Node labels using networkx
|
<p>I'm creating a graph out of a given sequence of Y values held by <code>curveSeq</code>. (The X values are enumerated automatically: 0, 1, 2, ...)</p>
<p>i.e for <code>curveSeq = [10,20,30]</code>, my graph will contain the points:</p>
<pre><code><0,10>, <1,20>, <2,30>.
</code></pre>
<p>I'm drawing a series of graphs on the same <code>nx.Graph</code> in order to present everything in one picture.</p>
<p>My problem is:</p>
<ul>
<li>Each node displays its location as its label, i.e. the node at location <code><0,10></code> shows that label, and I don't know how to remove it.</li>
<li>There are specific nodes that I want to add a label to, but I don't know how.</li>
</ul>
<p>for example, for the sequence:</p>
<pre><code>[0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1,1]
</code></pre>
<p>The received graph is:</p>
<p><img src="https://i.stack.imgur.com/hoxkv.jpg" alt="graph"></p>
<p>The code is:</p>
<pre><code>for point in curveSeq:
cur_point = point
#assert len(cur_point) == 2
if prev_point is not None:
# Calculate the distance between the nodes with the Pythagorean
# theorem
b = cur_point[1] - prev_point[1]
c = cur_point[0] - prev_point[0]
a = math.sqrt(b ** 2 + c ** 2)
G.add_edge(cur_point, prev_point, weight=a)
G.add_node(cur_point)
pos[cur_point] = cur_point
prev_point = cur_point
#key:
G.add_node((curve+1,-1))
pos[(curve+1,-1)] = (curve+1,-1)
nx.draw(G, pos=pos, node_color = colors[curve],node_size=80)
nx.draw_networkx_edges(G,pos=pos,alpha=0.5,width=8,edge_color=colors[curve])
plt.savefig(currIteration+'.png')
</code></pre>
|
<p>You can add the <code>with_labels=False</code> keyword to suppress drawing of the labels with <code>networkx.draw()</code>, e.g.</p>
<pre><code>networkx.draw(G, pos=pos, node_color=colors[curve],
node_size=80, with_labels=False)
</code></pre>
<p>Then draw specific labels with</p>
<pre><code>networkx.draw_networkx_labels(G,pos, labels)
</code></pre>
<p>where labels is a dictionary mapping node ids to labels.</p>
<p>Take a look at this example: <a href="https://networkx.org/documentation/stable/auto_examples/drawing/plot_labels_and_colors.html" rel="nofollow noreferrer">https://networkx.org/documentation/stable/auto_examples/drawing/plot_labels_and_colors.html</a></p>
|
python|matplotlib|networkx
| 25 |
1,903,974 | 25,137,502 |
decorators doesn't work in globals
|
<p>I set an attribute on <code>func</code> via a decorator function, but I cannot find it through <code>globals()</code>.</p>
<p>here is the code</p>
<pre><code>def tag(name):
def do_it(func):
func.tag = name
return func
return do_it
@tag('p')
def article():
return 'article here'
@tag('h1')
def title():
return 'title here'
# we got attribute here
assert hasattr(article, 'tag')
# I got nothing here
for obj in list(globals()):
if hasattr(obj, 'tag'):
print(obj)
</code></pre>
<p>Does anybody know why?</p>
|
<p>The problem lies with your use of <code>globals()</code>; it returns a dictionary, and calling <code>list()</code> on a dictionary gives you the <em>keys</em>, not the objects. You are testing against strings here.</p>
<p>Use:</p>
<pre><code>for name, obj in list(globals().items()):
if hasattr(obj, 'tag'):
        print(name)
</code></pre>
<p>The <code>list()</code> is still needed as the code adds 2 extra globals (<code>name</code> and <code>obj</code>), so the <code>globals()</code> dictionary changes size in the first iteration, something that'll throw an exception otherwise.</p>
<p>Your decorator is working fine otherwise.</p>
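<p>A self-contained demonstration of the fixed lookup (the decorator is repeated here so the snippet runs on its own):</p>

```python
def tag(name):
    def do_it(func):
        func.tag = name
        return func
    return do_it

@tag('p')
def article():
    return 'article here'

# Iterate over (name, object) pairs, so the hasattr test runs against
# the objects themselves, not against their key strings.
tagged = [n for n, obj in list(globals().items()) if hasattr(obj, 'tag')]
print(tagged)
```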
|
python|decorator|python-decorators
| 4 |
1,903,975 | 60,001,995 |
How to get multiple values selected from the drop down to views.py
|
<p>I am using the following code:</p>
<p><strong>index.html</strong> </p>
<pre><code><div class="col-sm-3" style="margin-left: 30px;">
<select id="month" name="month" multiple>
<option value="01">January</option>
<option value="02">February</option>
<option value="03">March</option>
<option value="04">April</option>
<option value="05">May</option>
<option value="06">June</option>
<option value="07">July</option>
<option value="08">August</option>
<option value="09">September</option>
<option value="10">October</option>
<option value="11">November</option>
<option value="12">December</option>
</select>
</div>
</code></pre>
<p><strong>Views.py</strong></p>
<pre><code>def internal(request):
try:
year = ''
month = []
year = request.GET.get('year')
month = request.GET.get('month')
print(month)
response_list = []
except Exception as e:
print(str(e))
return HttpResponse(json.dumps(response_list))
</code></pre>
<p>I am selecting more than one value in the front-end, but when I fetch it in views.py only one option is fetched. How do I fetch all the values selected from the dropdown? </p>
|
<p>You need to use <a href="https://docs.djangoproject.com/en/3.0/ref/request-response/#django.http.QueryDict.getlist" rel="nofollow noreferrer"><code>getlist(key)</code></a>.</p>
<p>Like this:</p>
<pre><code>month = request.GET.getlist('month')
</code></pre>
<p><a href="https://docs.djangoproject.com/en/3.0/ref/request-response/#django.http.QueryDict.get" rel="nofollow noreferrer"><code>get(key)</code></a> will get you only the last selected value:</p>
<blockquote>
<p>If the key has more than one value, it returns the last value. </p>
</blockquote>
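<p>The multi-value behaviour is easy to see with the standard library's <code>parse_qs</code>, which collects repeated keys into lists the same way <code>QueryDict.getlist()</code> exposes them (a plain-Python illustration, not Django code):</p>

```python
from urllib.parse import parse_qs

# Selecting January and March in a multi-select submits a repeated key;
# parse_qs keeps every occurrence, get() on a QueryDict would not.
params = parse_qs("year=2020&month=01&month=03")
print(params["month"])  # ['01', '03']
print(params["year"])   # ['2020']
```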
|
python|django
| 3 |
1,903,976 | 60,099,158 |
How to vectorize this simple NumPy function?
|
<p>Given the function: </p>
<pre><code>def f(x, c=0.7):
if x >= 0:
if x <= c:
return 0.0
if x <= 2*c:
return x-c
else:
return c
else:
return -f(-x, c=c)
</code></pre>
<p>I would like to apply it to NumPy arrays. I used to do that with <code>np.vectorize</code>, but I'm failing. What's the idea here?</p>
|
<p>I just wanted to point out the following from the documentation on <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.vectorize.html" rel="nofollow noreferrer"><code>np.vectorize</code></a>:</p>
<blockquote>
<p>The <code>vectorize</code> function is provided primarily for convenience, not for performance. The implementation is essentially a for loop.</p>
</blockquote>
<p>So, actually, you do NOT make use of NumPy's vectorization abilities here. Using NumPy's <a href="https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#boolean-array-indexing" rel="nofollow noreferrer">boolean array indexing</a> and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a>, you can rewrite your function, such that you have "real" vectorization.</p>
<p>Here's an idea from my side. The actual code looks quite ugly, I have to admit, but by pre-calculating the boolean arrays, we minimize processing time and memory usage.</p>
<pre class="lang-py prettyprint-override"><code>def f_vec(x, c=0.7):
# Initialize output array of same size and type as input array
out = np.zeros_like(x)
# Pre-calculate boolean arrays to prevent multiple calculation in following steps
x_gtq_0 = (x >= 0)
x_lt_0 = (x < 0)
x_gt_c = (x > c)
x_ltq_2c = (x <= 2 * c)
x_gt_2c = (x > 2 * c)
abs_x = np.abs(x)
abs_x_gt_c = abs_x > c
abs_x_ltq_2c = abs_x <= 2 * c
abs_x_gt_2c = (abs_x > 2 * c)
# Re-writing if-else blocks as operations on before calculated boolean arrays
out[np.where(x_gtq_0 & x_gt_c & x_ltq_2c)] = x[np.where(x_gtq_0 & x_gt_c & x_ltq_2c)] - c
out[np.where(x_gtq_0 & x_gt_2c)] = c
out[np.where(x_lt_0 & abs_x_gt_c & abs_x_ltq_2c)] = c - abs_x[np.where(x_lt_0 & abs_x_gt_c & abs_x_ltq_2c)]
out[np.where(x_lt_0 & abs_x_gt_2c)] = -c
return out
</code></pre>
<p>I added the following, small test function to run some comparisons:</p>
<pre class="lang-py prettyprint-override"><code>def test(x):
print(x.shape)
vfunc = np.vectorize(f)
tic = time.perf_counter()
res_func = vfunc(x, c=0.7)
print(time.perf_counter() - tic)
tic = time.perf_counter()
res_vec = f_vec(x, c=0.7)
print(time.perf_counter() - tic)
print('Differences: ', np.count_nonzero(np.abs(res_func - res_vec) > 10e-9), '\n')
test((np.random.rand(10) - 0.5) * 4)
test((np.random.rand(1000, 1000) - 0.5) * 4)
test((np.random.rand(1920, 1280, 3) - 0.5) * 4)
</code></pre>
<p>These are the results:</p>
<pre class="lang-none prettyprint-override"><code>(10,)
0.0001590869999999467
7.954300000001524e-05
Differences: 0
(1000, 1000)
1.53853834
0.0843256779999999
Differences: 0
(1920, 1280, 3)
10.974010127
0.7489308680000004
Differences: 0
</code></pre>
<p>So, performance-wise the difference between <code>np.vectorize</code> and an actual vectorized approach is huge for larger inputs. Nevertheless, if the <code>np.vectorize</code> solution is sufficient for your inputs, and you don't want to put too much effort into re-writing your code, stick to that! As I said, I just wanted to show that vectorization is more than that.</p>
<p>Hope that helps!</p>
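<p>One more side note from me (an alternative not part of the answer above): because this particular function is just a symmetric clipping, the branching can also be collapsed into a single vectorized expression, equivalent to <code>f</code> for every input:</p>

```python
import numpy as np

def f_clip(x, c=0.7):
    # 0 for |x| <= c, |x| - c for c < |x| <= 2c, and c beyond 2c,
    # with the sign of x restored at the end.
    return np.sign(x) * np.clip(np.abs(x) - c, 0.0, c)

print(f_clip(np.array([-2.0, -1.0, 0.5, 1.0, 2.0])))
```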
<pre class="lang-none prettyprint-override"><code>----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.1
NumPy: 1.18.1
----------------------------------------
</code></pre>
|
python|numpy|vectorization
| 3 |
1,903,977 | 3,203,119 |
Images are not being stored?? - Django
|
<p>These are my following <strong>settings</strong>:</p>
<pre><code>MEDIA_ROOT = '/home/webapps/test_project/media/'
MEDIA_URL = 'http://192.168.0.2:8090/site_media/'
ADMIN_MEDIA_PREFIX = '/media/'
</code></pre>
<p>These are my <strong>model fields</strong>:</p>
<pre><code>large = models.ImageField(blank=True, null=True, upload_to="images")
thumb = models.ImageField(blank=True, null=True, upload_to="images")
</code></pre>
<hr>
<p>What happens is... when I save a model with the images, the <strong>paths to the images are stored</strong> but <strong>not the actual images</strong>.</p>
<p><a href="http://192.168.0.2:8090/site_media/images/BluUnicorn.png" rel="nofollow noreferrer">http://192.168.0.2:8090/site_media/images/BluUnicorn.png</a>
<a href="http://192.168.0.2:8090/site_media/images/DarkNinja.png" rel="nofollow noreferrer">http://192.168.0.2:8090/site_media/images/DarkNinja.png</a>
...</p>
<p>Django behaves as if the images have been saved successfully under the images folder, but the images are not actually there!!</p>
<hr>
<p>Anyone have a clue of what I did wrong?</p>
<p>PS: I'm using Django 2.1+ SVN</p>
|
<p>Mistyped </p>
<pre><code>MEDIA_ROOT = '/home/webapps/test_project/media/'
</code></pre>
<p><strong>wrote home instead of root</strong></p>
|
python|django|django-models|django-uploads
| 1 |
1,903,978 | 2,339,993 |
How to force Excel VBA to use updated COM server
|
<p>I'm developing a COM server to be used from Excel VBA. When I update the server (edit code, unregister, re-register) Excel seems to carry on using the original version of the COM server, not the updated version. The only way I have found to get it to use the updated version is to close and re-open Excel, which gets a bit irritating. Is there a way to force Excel to use the newly registered version (maybe some kind of "clear cache" option)?</p>
<p>More details:</p>
<p>The server is being developed in Python using win32com. </p>
<p>In VBA I'm doing something like:</p>
<pre><code>set obj=CreateObject("Foo.Bar")
obj.baz()
</code></pre>
<p>Where Foo.Bar is the COM server I have registered in the registry.</p>
<p>If I unregister the server then run the VBA code, I get a "can't create object" error from VBA, so it must realise that something is going on. But once I reregister it picks up the old version.</p>
<p>Any hints appreciated!</p>
<p>Thanks,</p>
<p>Andy</p>
|
<p>I've found a solution to my problem - the general idea is to set things up so that the main COM server class dynamically loads the rest of the COM server code when it is called. So in Python I've created a COM server class that looks something like:</p>
<pre><code>import main_code
class COMInterface:
    _public_methods_ = [ 'methods' ]
_reg_progid_ = "My.Test"
_reg_clsid_ = "{D6AA2A12-A5CE-4B6C-8603-7952B711728B}"
def methods(self, input1,input2,input3):
# force python to reload the code that does the actual work
reload(main_code)
return main_code.Runner().go(input1,input2,input3)
</code></pre>
<p>The main_code module contains the code that does the actual work and is reloaded each time the COM method is called. This works as long as the inputs don't change. There will presumably be a runtime penalty for this, so might want to remove the reload for the final version, but it works for development purposes.</p>
|
excel|com|python|win32com|vba
| 2 |
1,903,979 | 2,798,451 |
Python: Why do some packages get installed as eggs and some as "egg folders"?
|
<p>I maintain a few Python packages. I have a very similar <code>setup.py</code> file for each of them. However, when doing <code>setup.py install</code>, one of my packages gets installed as an egg, while the others get installed as "egg folders", i.e. folders with an extension of "egg".</p>
<p>What is the difference between them that causes this different behavior?</p>
|
<p><a href="https://setuptools.readthedocs.io/en/latest/deprecated/python_eggs.html#zip-safe-and-not-zip-safe" rel="nofollow noreferrer">The Internal Structure of Python Eggs, Zip Support Metadata</a> :</p>
<blockquote>
<p>If <code>zip-safe</code> exists, it means that the project will work properly when installed as an <code>.egg</code> zipfile, and conversely the existence of <code>not-zip-safe</code> means the project should not be installed as an <code>.egg</code> file [ie. as an <code>.egg</code> directory]. The <code>zip_safe</code> option to setuptools' <code>setup()</code> determines which file will be written. If the option isn't provided, setuptools attempts to make its own assessment of whether the package can work, based on code and content analysis.</p>
</blockquote>
|
python|packaging|setuptools|egg
| 28 |
1,903,980 | 6,255,641 |
Counting the number of unique words in a document with Python
|
<p>I am Python newbie trying to understand the answer given <a href="https://stackoverflow.com/questions/914382/how-can-i-count-unique-terms-in-a-plaintext-file-case-insensitively/930185#930185">here</a> to the question of counting unique words in a document. The answer is:</p>
<pre><code>print len(set(w.lower() for w in open('filename.dat').read().split()))
</code></pre>
<blockquote>
<p>Reads the entire file into memory, splits it into words using
whitespace, converts each word to lower case, creates a (unique) set
from the lowercase words, counts them and prints the output</p>
</blockquote>
<p>To try to understand that, I am trying to implement it in Python step by step. I can import the text file using open and read, divide it into individual words using split, and make them all lower case using lower. I can also create a set of the unique words in the list. However, I cannot figure out how to do the last part - count the number of unique words.</p>
<p>I thought I could finish by iterating through the items in the set of unique words and counting them in the original lower-case list, but I find that the set construct is not indexable. </p>
<p>So I guess I am trying to do something that in natural language is like, for all the items in the set, tell me how many times they occur in the lower case list. But I cannot quite figure out how to do that, and I suspect some underlying misunderstanding of Python is holding me back.</p>
<ul>
<li>EDIT - </li>
</ul>
<p>Guys thanks for the answers. I have just realised I did not explain myself correctly - I wanted to find not only the total number of unique words (which I understand is the length of the set) but also the number of times each individual word was used, e.g. 'the' was used 14 times, 'and' was used 9 times, 'it' was used 20 times and so on. Apologies for the confusion.</p>
|
<p>I believe that <a href="http://docs.python.org/library/collections.html?highlight=counter#collections.Counter" rel="noreferrer">Counter</a> is all that you need in this case:</p>
<pre><code>from collections import Counter
print Counter(yourtext.split())
</code></pre>
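<p>For example, with a made-up sentence:</p>

```python
from collections import Counter

text = "the cat and the hat and the bat"
counts = Counter(text.lower().split())

print(counts["the"])          # 3
print(counts["and"])          # 2
print(counts.most_common(2))  # [('the', 3), ('and', 2)]
```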
|
python
| 20 |
1,903,981 | 42,616,214 |
SyntaxWarning: name 'color' is assigned to before global declaration global color Python
|
<p>In the code below it says:
"SyntaxWarning: name 'color' is assigned to before global declaration
global color"
However, I declared global color before I assign it? I am very confused. I ran it and it works but I just don't understand what the syntax warning is pointing to...</p>
<pre><code>from Tkinter import *
from sys import exit
from random import *
color = "black" #Sets default color to black
w,h=640,480 #Width, height of canvas
root=Tk()
pixelcount = 0 #Sets the inital pixelcount to 1
tool = 1 #Sets deafu
ptX, ptY, ptX2, ptY2 = 0, 0, 0, 0
cnvs=Canvas(root,width=w,height=h,bg='#D2B48C') # 210 180 140
cnvs.pack(side = RIGHT)
buttons = Frame(root, width = 80, height = h) #Creates region for buttons
buttons.pack(side = LEFT) #Put button region on left
def quit(evt):
exit(0)
def menu(arg): #Accepts arguments from button clicks and binds appropriate stimulus to appropriate tool function
print arg
global tool
cnvs.unbind('<Button-1>')
if arg == 1:
cnvs.bind('<Button-1>', line)
elif arg == 2:
cnvs.bind('<Button-1>', poly)
elif arg == 3:
cnvs.bind('<Button-1>', rect)
elif arg == 4:
cnvs.bind('<B1-Motion>', pencil)
elif arg == 5:
cnvs.bind('<B1-Motion>', spray)
elif arg == 6:
cnvs.bind('<B1-Motion>', blotter)
elif arg == 7:
global color
color = "red"
elif arg == 8:
global color
color = "black"
elif arg == 9:
global color
color = "blue"
elif arg == 10:
global color
color = "purple"
def line(evt): #Line function
global pixelcount
global color
pixelcount += 1
if pixelcount % 2 == 1:
ptX, ptY, = (evt.x, evt.y)
global ptX, ptY
print ptX, ptY
else:
ptX2, ptY2, = (evt.x, evt.y)
cnvs.create_line(ptX, ptY, ptX2, ptY2, fill = color)
def lineButtonClick(): #Activated when line button clicked
menu(1)
lineButton = Button(root, text = "line", command = lineButtonClick)
lineButton.pack()
lineButton.config(width = 10)
def poly(evt): #Poly function
global pixelcount
pixelcount += 1
global color
print str(pixelcount) + "pixel"
if pixelcount == 1:
global ptX, ptY
ptX, ptY, = (evt.x, evt.y)
print ptX, ptY
else:
global ptX2, ptY2
ptX2, ptY2, = (evt.x, evt.y)
print str(ptX2) + " " + " " +str(ptY2) + "pt2"
cnvs.create_line(ptX, ptY, ptX2, ptY2, fill = color)
ptX, ptY = ptX2, ptY2
def polyButtonClick(): #Activated when poly button clicked
menu(2)
polyButton = Button(root, text = "poly", command = polyButtonClick)
polyButton.pack()
polyButton.config(width = 10)
def rect(evt): #Rectangle function
global pixelcount
if pixelcount % 2 == 0:
global ptX, ptY
ptX, ptY, = (evt.x, evt.y)
print ptX, ptY
pixelcount += 1
else:
global ptX2, ptY2
ptX2, ptY2, = (evt.x, evt.y)
pixelcount += 1
cnvs.create_rectangle(ptX, ptY, ptX2, ptY2, fill = color, outline = color)
def rectButtonClick(): #Activated when rectangle button clicked
menu(3)
rectButton = Button(root, text = "rect", command = rectButtonClick)
rectButton.pack()
rectButton.config(width = 10)
def pencil(evt):#Pencil function
global pixelcount
if cnvs.bind('<ButtonRelease-1>'):
pixelcount = 0
pixelcount += 1
print str(pixelcount) + "pixel"
if pixelcount == 1:
global ptX, ptY
ptX, ptY, = (evt.x, evt.y)
print ptX, ptY
else:
global ptX2, ptY2
ptX2, ptY2, = (evt.x, evt.y)
print str(ptX2) + " " + " " +str(ptY2) + "pt2"
cnvs.create_line(ptX, ptY, ptX2, ptY2, fill = color)
ptX, ptY = ptX2, ptY2
def pencilButtonClick():
menu(4)
pencilButton = Button(root, text = "pencil", command = pencilButtonClick)
pencilButton.pack()
pencilButton.config(width = 10)
def spray(evt): #Spray function
global pixelcount
if cnvs.bind('<ButtonRelease-1>'):
pixelcount = 0
pixelcount += 1
print str(pixelcount) + "pixel"
ptX, ptY, = (evt.x, evt.y)
randomX = evt.x + randint(-10, 10)
randomY = evt.y + randint(-10, 10)
cnvs.create_oval(randomX -1, randomY-1, randomX + 1, randomY + 1, fill = color)
def sprayButtonClick():#Activated when spray button clicked
menu(5)
sprayButton = Button(root, text = "spray", command = sprayButtonClick)
sprayButton.pack()
sprayButton.config(width = 10)
def blotter(evt): #Blotter function
global pixelcount
if cnvs.bind('<ButtonRelease-1>'):
pixelcount = 0
pixelcount += 1
print str(pixelcount) + "pixel"
ptX, ptY, = (evt.x, evt.y)
cnvs.create_oval(ptX-5, ptY-5,ptX + 5, ptY + 5, fill = color)
def blotterButtonClick():#Activated when blotter button clicked
menu(6)
blotterButton = Button(root, text = "blotter", command = blotterButtonClick)
blotterButton.pack()
blotterButton.config(width = 10)
def red(): #Red color function
menu(7)
redButton = Button(root, text = "red", command = red)
redButton.pack()
redButton.config(width = 10)
def black(): #Black color function
menu(8)
blackButton = Button(root, text = "black", command = black)
blackButton.pack()
blackButton.config(width = 10)
def blue(): #Blue color function
menu(9)
blueButton = Button(root, text = "blue", command = blue)
blueButton.pack()
blueButton.config(width = 10)
def purple(): #Purple color function
menu(10)
purpleButton = Button(root, text = "purple", command = purple)
purpleButton.pack()
purpleButton.config(width = 10)
mainloop()
</code></pre>
<p>Thank you so much!!!</p>
|
<p>You don't put a <code>global</code> declaration immediately before every use of the variable; you use it <em>once</em>, at the beginning of the function in which the variable is declared global:</p>
<pre><code>def menu(arg):
global tool
global color
cnvs.unbind('<Button-1>')
if arg == 1:
cnvs.bind('<Button-1>', line)
elif arg == 2:
cnvs.bind('<Button-1>', poly)
elif arg == 3:
cnvs.bind('<Button-1>', rect)
elif arg == 4:
cnvs.bind('<B1-Motion>', pencil)
elif arg == 5:
cnvs.bind('<B1-Motion>', spray)
elif arg == 6:
cnvs.bind('<B1-Motion>', blotter)
elif arg == 7:
color = "red"
elif arg == 8:
color = "black"
elif arg == 9:
color = "blue"
elif arg == 10:
color = "purple"
</code></pre>
|
python|tkinter|global-variables|draw|sys
| 9 |
1,903,982 | 51,107,959 |
Why method accepts class name and name 'object' as an argument?
|
<p>Consider the following code; I expected it to generate an error, but it worked. <code>mydef1(self)</code> should only be invoked with an instance of MyClass1 as an argument, but it is accepting <code>MyClass1</code> as well as the rather vague <code>object</code> as the instance.<br>
Can someone explain why mydef1 is accepting the class name (<code>MyClass1</code>) and <code>object</code> as an argument?</p>
<pre><code>class MyClass1:
def mydef1(self):
return "Hello"
print(MyClass1.mydef1(MyClass1))
print(MyClass1.mydef1(object))
</code></pre>
<p>Output</p>
<pre><code>Hello
Hello
</code></pre>
|
<p>Python is dynamically typed, so it doesn't care what gets passed. It only cares that the single required parameter gets an argument as a value. Once inside the function, you never use <code>self</code>, so it doesn't matter what the argument was; you can't misuse what you don't use in the first place.</p>
<p>This question only arises because you are taking the uncommon action of running an instance method as an unbound method with an explicit argument, rather than invoking it on an instance of the class and letting the Python runtime system take care of passing that instance as the first argument to <code>mydef1</code>: <code>MyClass().mydef1() == MyClass.mydef1(MyClass())</code>.</p>
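<p>A short demonstration: since <code>mydef1</code> never uses <code>self</code>, any first argument works, including ones that make no semantic sense:</p>

```python
class MyClass1:
    def mydef1(self):
        return "Hello"

print(MyClass1().mydef1())        # normal bound call
print(MyClass1.mydef1(MyClass1))  # the class itself as "self"
print(MyClass1.mydef1(object))    # an unrelated type
print(MyClass1.mydef1(42))        # even an int; self is simply ignored
```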
|
python
| 1 |
1,903,983 | 51,069,082 |
Replace empty dicts in nested dicts
|
<p>I have a nested dictionary whose structure looks like this</p>
<pre><code>{"a":{},"b":{"c":{}}}
</code></pre>
<p>Every key is a string and every value is a dict.
I need to replace every empty dict with <code>""</code>. How would I go about this?</p>
|
<p>Use recursion:</p>
<pre><code>def foo(the_dict):
if the_dict == {}:
return ""
return {k : foo(v) for k,v in the_dict.items()}
</code></pre>
<p>Here you have a <a href="https://repl.it/repls/CornyPeachpuffHack" rel="noreferrer">live example</a></p>
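<p>Applied to the dictionary from the question (<code>foo</code> repeated so the snippet is self-contained):</p>

```python
def foo(the_dict):
    if the_dict == {}:
        return ""
    return {k: foo(v) for k, v in the_dict.items()}

print(foo({"a": {}, "b": {"c": {}}}))  # {'a': '', 'b': {'c': ''}}
```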
|
python
| 6 |
1,903,984 | 50,680,820 |
bokeh openStreetMap tile not visible in all browsers
|
<p>With the Python bokeh (version 0.12.13) module I'm creating an .html file with a line plotted on top of an OpenStreetMap tile (CARTODBPOSITRON):</p>
<pre><code>from bokeh.models import ColumnDataSource
from bokeh.plotting import figure
from bokeh.tile_providers import CARTODBPOSITRON
from bokeh.io import save,output_file
#the data
xList=[0.0, 111319, 222638, 333958, 445277, 556597, 667916, 779236, 890555]
yList=[6446275, 5012341, 3763310, 2632018, 4163881, 5465442, 6800125, 6621293, 6446275]
source=ColumnDataSource({'x':xList,'y':yList})
x_range=(min(xList),max(xList))
y_range=(min(yList),max(yList))
plot = figure(title='printed line on map',tools= "pan,wheel_zoom",x_range=x_range,y_range=y_range,width=1200, height=400)#create a figure
plot.add_tile(CARTODBPOSITRON)#add the CARTODBPOSITRON background tile
#plot a dot an a line
plot.line(source=source,x='x',y='y',line_color ='red')#line
#save to html file
output_file("file.html")
save(plot)
</code></pre>
<p>The .html looks fine in my Chrome browser, but the map does not appear in my IE browser. When i shared the file with two friends, one of them also could not see the map in her Chrome browser. I've checked and un-checked the Chrome parameter "2D hardware acceleration", cleared cookies and so on but haven't found the solution there.</p>
<p>Good (in my chrome browser, Version 66.0.3359.117 (Official Build) (32-bit), Windows 7, my friend her IE11):<a href="https://i.stack.imgur.com/JRydR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JRydR.png" alt="enter image description here"></a></p>
<p>Bad (my IE11 (Version 11.0.9600.19002CO) browser, my friend her Chrome (also version 66.0.3359.117) browser):<a href="https://i.stack.imgur.com/6TIHT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6TIHT.png" alt="enter image description here"></a></p>
<p>EDIT: For me it works in Chrome but not in IE11, for my friend it is opposite.</p>
<p>This is a screenshot of the Network log from my IE11 (where it doesn't work):
<a href="https://i.stack.imgur.com/xwhDD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xwhDD.png" alt="enter image description here"></a></p>
<p>This is a screenshot of the Network log from my friend's Chrome (where it doesn't work):
<a href="https://i.stack.imgur.com/8Vffb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Vffb.png" alt="enter image description here"></a></p>
|
<p>I think you are not using an up-to-date Bokeh version; there is an update <a href="https://github.com/bokeh/bokeh/pull/7264" rel="nofollow noreferrer">https://github.com/bokeh/bokeh/pull/7264</a> which requests CARTO tiles properly and securely over HTTPS, not HTTP as your screenshots suggest. There is an auto-forward to HTTPS, but it seems it does not work in all browsers. Just update your Bokeh; does that fix it?</p>
|
python-3.x|google-chrome|internet-explorer-11|openstreetmap|bokeh
| 1 |
1,903,985 | 50,458,837 |
Unpacking numpy using ndarray
|
<p>I'm new to python. Any help would be appreciated.</p>
<p>I want to <a href="https://i.stack.imgur.com/gteAJ.png" rel="nofollow noreferrer">show this graph
</a>, using the first block of code which I have tried, but when I try to run this code: </p>
<pre><code>date, value = np.loadtxt(revenue_ar, delimiter=',', unpack=True, converters={ 0: bytespdate2num('%Y-%m-%d')})
</code></pre>
<p>using <a href="https://i.stack.imgur.com/aHUuR.png" rel="nofollow noreferrer">revenue_ar</a> (numpy.ndarray) this error message pops up:</p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>First block of codes:</p>
<pre><code>import time
import requests
import intrinio
import pandas as pd
import numpy as np
api_username = 'hidden'
api_password = 'hidden'
def bytespdate2num(fmt, encoding='utf-8'):
strconverter = mdates.strpdate2num(fmt)
def bytesconverter(b):
s = b.decode(encoding)
return strconverter(s)
return bytesconverter
ticker = 'AAPL'
revenue_data = requests.get('https://api.intrinio.com/historical_data?identifier=' + ticker + '&item=totalrevenue', auth=(api_username, api_password))
revenue1 = revenue_data.json()['data']
revenue = pd.DataFrame(revenue1)
revenue_ar = revenue.values
date, value = np.loadtxt(revenue_ar, delimiter=',', unpack=True,
                         converters={0: bytespdate2num('%Y-%m-%d')})
fig = plt.figure()
ax1 = plt.subplot2grid((6,4), (0,0), rowspan=6, colspan=4)
ax1.plot(date,value)
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))
plt.show()
</code></pre>
<p>However, This seems to work using <a href="https://i.stack.imgur.com/TwpHu.png" rel="nofollow noreferrer">revenue.txt</a>:</p>
<pre><code>date, value = np.loadtxt('revenue.txt', delimiter='\t', unpack=True,
converters={0: bytespdate2num('%Y-%m-%d')})
</code></pre>
<p>Please let me know if I need to clarify my question further.
Thanks in advance.</p>
<hr>
<p>revenue1:</p>
<pre><code>[{'date': '2018-03-31', 'value': 247417000000.0},
{'date': '2017-12-30', 'value': 239176000000.0},
{'date': '2017-09-30', 'value': 229234000000.0},
{'date': '2017-07-01', 'value': 223507000000.0},
{'date': '2017-04-01', 'value': 220457000000.0},
{'date': '2016-12-31', 'value': 218118000000.0},
{'date': '2016-09-24', 'value': 215639000000.0},
{'date': '2016-06-25', 'value': 220288000000.0},
{'date': '2016-03-26', 'value': 227535000000.0},
{'date': '2015-12-26', 'value': 234988000000.0},
{'date': '2015-09-26', 'value': 233715000000.0},
{'date': '2015-06-27', 'value': 224337000000.0},
{'date': '2015-03-28', 'value': 212164000000.0},
</code></pre>
<p>revenue_ar:</p>
<pre><code>array([['2018-03-31', 247417000000.0],
['2017-12-30', 239176000000.0],
['2017-09-30', 229234000000.0],
['2017-07-01', 223507000000.0],
['2017-04-01', 220457000000.0],
['2016-12-31', 218118000000.0],
['2016-09-24', 215639000000.0],
['2016-06-25', 220288000000.0],
['2016-03-26', 227535000000.0],
['2015-12-26', 234988000000.0],
['2015-09-26', 233715000000.0],
</code></pre>
<p>revenue.txt:</p>
<pre><code>2007-09-29 2.457800e+10
2008-09-27 3.749100e+10
2009-09-26 4.290500e+10
2009-12-26 4.670800e+10
2010-03-27 5.112300e+10
2010-06-26 5.708900e+10
2010-09-25 6.522500e+10
2010-12-25 7.628300e+10
2011-03-26 8.745100e+10
2011-06-25 1.003220e+11
2011-09-24 1.082490e+11
</code></pre>
<hr>
<p>This would be the solution as you have suggested.
This is awesome as it runs smoothly.</p>
<pre><code>import time
import urllib.request
from urllib.request import urlopen
import requests
import intrinio
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
import matplotlib.dates as mdates
import datetime
api_username = 'hidden'
api_password = 'hidden'
def grab_intrinio(ticker):
try:
        revenue_data = requests.get('https://api.intrinio.com/historical_data?identifier=' + ticker + '&item=totalrevenue', auth=(api_username, api_password))
revenue1 = revenue_data.json()['data']
revenue = pd.DataFrame(revenue1)
revenue['date'] = pd.to_datetime(revenue['date'])
plt.plot(revenue['date'], revenue['value'])
except Exception as e:
print('failed in the main loop',str(e))
pass
grab_intrinio('AAPL')
</code></pre>
<p>This produce output as:</p>
<p><a href="https://i.stack.imgur.com/da8Gk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/da8Gk.png" alt="revenue graph"></a></p>
<p>I have 2 more things to work on.
First, I want to graph two more variables (net_income and roe).</p>
<p>Second, my roe data has a value of nm which can not be converted to float or integer. </p>
<p>How could I resolve this problem?</p>
<p>As a final output, I want to show a graph like this one(I can do my own work on plots and details of configuration):</p>
<p><a href="https://i.stack.imgur.com/g5uFs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g5uFs.png" alt="final graph"></a></p>
<p>I have tried this line, but this doesn't seem to work with an error showing <code>'list' object has no attribute 'plot'.</code></p>
<pre><code>fig = plt.figure()
ax1 = plt.plot(net_income['date'], net_income['value'])
ax1.plot(net_income['date'], net_income['value'])
ax2 = plt.plot(revenue['date'], revenue['value'])
ax2.plot(revenue['date'], revenue['value'])
</code></pre>
<p>This one produces net_income and revenue in same plot:</p>
<pre><code>plt.plot(net_income['date'], net_income['value'])
plt.plot(revenue['date'], revenue['value'])
</code></pre>
<p><a href="https://i.stack.imgur.com/cvcbU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cvcbU.png" alt="enter image description here"></a></p>
<blockquote>
<p>Blockquote</p>
</blockquote>
<p>Here are the codes for net_income and roe(same format as revenue)</p>
<pre><code>net_income_data = requests.get('https://api.intrinio.com/historical_data?identifier=' + ticker + '&item=totalrevenue', auth=(api_username, api_password))
net_income1 = net_income_data.json()['data']
net_income = pd.DataFrame(net_income1)
net_income['date'] = pd.to_datetime(net_income['date'])
roe_data = requests.get('https://api.intrinio.com/historical_data?identifier=' + ticker + '&item=roe', auth=(api_username, api_password))
roe1 = roe_data.json()['data']
roe = pd.DataFrame(roe1)
roe['date'] = pd.to_datetime(revenue['date'])
</code></pre>
<p>This is a roe_date with <code>nm value</code></p>
<pre><code> date value
30 2010-09-25 0.352835
31 2010-06-26 0.354701
32 2010-03-27 0.274779
33 2009-12-26 0.261631
34 2009-09-26 0.305356
35 2008-09-27 0.274432
36 2007-09-29 nm
</code></pre>
<p>Here is the results for the <code>roe.dtypes</code></p>
<pre><code>In: roe.dtypes
Out: date datetime64[ns]
value object
dtype: object
</code></pre>
<p>Whereas, both <code>net_income.dtypes</code> and <code>revenue.dtypes</code> produce output as follows:</p>
<pre><code>In: net_income.dtypes(revenue.dtypes)
Out: date datetime64[ns]
value float64
dtype: object
</code></pre>
<hr>
<p>Your amendment on roe to convert from object to float worked to plot the graph. When I aggregate the function as a final step, I'm getting an <code>invalid syntax</code> error as the following: </p>
<pre><code>File "<ipython-input-141-537d7c6c91a3>", line 28
fig axs = plt.subplots(3)
</code></pre>
<p>For this function written with your assistance. </p>
<pre><code>def grab_intrinio(ticker):
try:
net_income_data = requests.get('https://api.intrinio.com/historical_data?identifier=' + ticker + '&item=netincome', auth=(api_username, api_password)) #
net_income1 = net_income_data.json()['data']
net_income = pd.DataFrame(net_income1)
net_income['date'] = pd.to_datetime(net_income['date'])
revenue_data = requests.get('https://api.intrinio.com/historical_data?identifier=' + ticker + '&item=totalrevenue', auth=(api_username, api_password))
revenue1 = revenue_data.json()['data']
revenue = pd.DataFrame(revenue1)
revenue['date'] = pd.to_datetime(revenue['date'])
revenue
roe_data = requests.get('https://api.intrinio.com/historical_data?identifier=' + ticker + '&item=roe', auth=(api_username, api_password))
roe1 = roe_data.json()['data']
roe = pd.DataFrame(roe1)
roe['date'] = pd.to_datetime(roe['date'])
roe.index = roe['date']
roe = roe.drop(columns=['date'])
nm_idx = roe['value'] =='nm'
roe.value[nm_idx] = np.nan
roe.value = roe.value.astype(float)
fig axs = plt.subplots(3)
for ax, dat in zip(axs, [net_income, Revenue, roc]):
ax.plot(dat['date'], dat['value'])
except exception as e:
print('failed in the main loop',str(e))
pass
grab_intrinio('AAPL')
</code></pre>
<p>Thank you for your help in advance.</p>
|
<p><code>np.loadtxt</code> expects a filename or a string variable from which it can parse the data. That's why it works when you give it a path, but not when you give it an array of values.</p>
<p>So you obviously get valid json data via <code>requests.get</code> and decode it via</p>
<pre><code>revenue1 = revenue_data.json()['data']
</code></pre>
<p>and put it in a dataframe with</p>
<pre><code>df = pd.DataFrame(revenue1)
</code></pre>
<p>This is what it looks like:</p>
<pre><code>In: df.head()
Out:
date value
0 2018-01-31 247417000000
1 2017-12-30 239176000000
2 2017-09-30 229234000000
3 2017-07-01 223507000000
</code></pre>
<p>and this is how to check the data types of the columns in your dataframe:</p>
<pre><code>In: df.dtypes
Out:
date object
value int64
dtype: object
</code></pre>
<p><code>value</code> is an integer, which is nice, but <code>date</code> was not parsed, it's just object data, so let's fix this:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
In: df
Out:
date value
0 2018-01-31 247417000000
1 2017-12-30 239176000000
2 2017-09-30 229234000000
3 2017-07-01 223507000000
In: df.dtypes
Out:
date datetime64[ns]
value int64
dtype: object
</code></pre>
<p>Now <code>date</code> has the right datatype and you could plot it like</p>
<pre><code>plt.plot(df['date'], df['value'])
</code></pre>
<p><a href="https://i.stack.imgur.com/OlRHm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OlRHm.png" alt="enter image description here"></a></p>
<p>However, you could make it even more convenient if you put the date as your index:</p>
<pre><code>df.index = pd.to_datetime(df['date'])
df = df.drop(columns=['date'])
</code></pre>
<p>Because then you could simply call</p>
<pre><code>df.plot()
</code></pre>
<p>as pandas has a matplotlib interface on board.</p>
<p>For your triple plot you'd need sth like:</p>
<pre><code>fig, axs = plt.subplots(3)
for ax, dat in zip(axs, [net_income, revenue, roe]):
ax.plot(dat['date'], dat['value'])
</code></pre>
<hr>
<p>Some of your data can't be cast to float because of <code>nm</code>-entries. Replace them by <code>np.nan</code> so that plotting commands can handle it and you can use your data:</p>
<pre><code>In: roe
Out:
date value
30 2010-09-25 0.352835
31 2010-06-26 0.354701
32 2010-03-27 0.274779
33 2009-12-26 0.261631
34 2009-09-26 0.305356
35 2008-09-27 0.274432
36 2007-09-29 nm
roe.index = roe['date']
roe = roe.drop(columns=['date'])
nm_idx = roe['value'] == 'nm'
roe.loc[nm_idx, 'value'] = np.nan
roe['value'] = roe['value'].astype(float)
In: roe
Out:
value
date
2010-09-25 0.352835
2010-06-26 0.354701
2010-03-27 0.274779
2009-12-26 0.261631
2009-09-26 0.305356
2008-09-27 0.274432
2007-09-29 NaN
In: roe.dtypes
Out:
value float64
dtype: object
</code></pre>
<p><code>roe.plot()</code></p>
<p><a href="https://i.stack.imgur.com/OlRHm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OlRHm.png" alt="enter image description here"></a></p>
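<p>If you prefer not to rely on pandas indexing for the <code>nm</code> cleanup, a plain helper (hypothetical name, not from the original answer) does the same conversion value by value; <code>pd.to_numeric(roe['value'], errors='coerce')</code> is the equivalent pandas one-liner:</p>
<pre><code>import math

def to_float_or_nan(value):
    """Convert a value to float; placeholders like 'nm' become NaN."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return float('nan')

print(to_float_or_nan('0.352835'))        # 0.352835
print(math.isnan(to_float_or_nan('nm')))  # True
</code></pre>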
|
python|matplotlib|numpy-ndarray
| 0 |
1,903,986 | 26,715,053 |
Python multiprocessing: knowing the thread/CPU number
|
<p>I'm trying to write parallel code in Python using the <code>multiprocessing</code> module and I would like a way to know, locally, which CPU is computing, but I only know of <code>multiprocessing.cpu_count()</code>, which gives the total number of CPU cores. </p>
<p>I'm looking for an equivalent of:</p>
<pre><code>omp_get_thread_num()
</code></pre>
<p>in C++ openMP.</p>
<p>Is there such a method in Python.multiprocessing?</p>
|
<p>It's not trivial to retrieve which CPU a process is running at (if possible at all), but if you:</p>
<ul>
<li><p>Start the same number of processes as there are CPUs available, as reported by <code>multiprocessing.cpu_count()</code>, as most applications do;</p></li>
<li><p>Assume that each process will run in a distinct CPU core, as expected when using <code>multiprocessing</code> module;</p></li>
</ul>
<p>Then you can "cheat" and give each process a unique <em>name</em> that will identify its CPU core! :)</p>
<pre><code>for i in xrange(multiprocessing.cpu_count()):
    multiprocessing.Process(target=foo, args=(bar,), name=str(i)).start()
</code></pre>
<p>And then retrieve it inside the worker function spawned in the subprocess:</p>
<pre><code>print "I'm running on CPU #%s" % multiprocessing.current_process().name
</code></pre>
<p>From the <a href="https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Process.name" rel="noreferrer">official documentation</a>:</p>
<p><strong><code>multiprocessing.current_process().name</code></strong></p>
<pre><code>The process’s name.
The name is a string used for identification purposes only. It has no semantics.
Multiple processes may be given the same name.
The initial name is set by the constructor.
</code></pre>
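<p>Putting the two snippets together, a runnable Python 3 sketch of the same idea (<code>range</code> replaces the Python 2 <code>xrange</code>; the function and message names are illustrative):</p>
<pre><code>import multiprocessing

def report(msg):
    # The name given at construction time identifies the worker.
    print("I'm running on CPU #%s: %s"
          % (multiprocessing.current_process().name, msg))

if __name__ == '__main__':
    workers = []
    for i in range(multiprocessing.cpu_count()):
        p = multiprocessing.Process(target=report, args=('hello',), name=str(i))
        p.start()
        workers.append(p)
    for p in workers:
        p.join()
</code></pre>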
|
python|multithreading|multiprocessing
| 7 |
1,903,987 | 26,818,574 |
My Python script hosted on OpenShift inside the .openshift/cron/minutely directory doesn't run. What's wrong?
|
<p>I wrote the following script, which sends an email to a specific email address, and saved it inside the .openshift/cron/minutely directory:</p>
<pre><code>import smtplib
g = smtplib.SMTP('smtp.gmail.com:587')
g.ehlo()
g.starttls()
g.ehlo()
g.login('myusername','mypassword')
g.sendmail('myemail','otheremail','message')
</code></pre>
<p>I then pushed the script to the server.</p>
<p>I expected the program to run once every minute, and receive an email every minute. However, there is no evidence indicating that my code is being run. Any idea what might be causing the problem? Did I forget a step while setting up my application?</p>
<p>Note: I've checked that the email address and password I provided were correct, and that cron is installed.</p>
<p>EDIT: It seems that the problem is originating from the server:
I deleted the original contents of the file, created 'testfile.txt', and wrote this code instead:</p>
<pre><code>a = open('testfile.txt','r+')
if not a.read():
a.write('Test writing')
a.close()
</code></pre>
<p>after waiting for the code to run and ssh-ing into the server, I changed to the directory named <code>app-root/logs</code> and displayed the contents of <code>cron.log</code>, which looked something like this:</p>
<pre><code>Sat Nov 8 11:01:11 EST 2014: START minutely cron run
__________________________________________________________________________
/var/lib/openshift/545a6ac550044652510001d3/app-root/runtime/repo//.openshift/cron/minutely/test_openshift.py:
/var/lib/openshift/545a6ac550044652510001d3/app-root/runtime/repo//.openshift/cron/minutely/test_openshift.py: line 1: syntax error near unexpected token `('
/var/lib/openshift/545a6ac550044652510001d3/app-root/runtime/repo//.openshift/cron/minutely/test_openshift.py: line 1: `a = open('testfile.txt','r+')'
__________________________________________________________________________
Sat Nov 8 11:01:11 EST 2014: END minutely cron run - status=0
__________________________________________________________________________
</code></pre>
<p>Could it be that the server is not interpreting the code in my file as python code? Any suggestions welcome.</p>
|
<p>connect to openshift console</p>
<pre><code>rhc ssh app_name
</code></pre>
<p>Change to a directory to have permission to create script:</p>
<pre><code>cd $OPENSHIFT_DATA_DIR
</code></pre>
<p>create test01.py script</p>
<pre><code>touch test01.py
</code></pre>
<p>Give executing permission to test01.py</p>
<pre><code>chmod +x test01.py
</code></pre>
<p>Edit script </p>
<pre><code>nano test01.py
</code></pre>
<p>Add a simple code like</p>
<pre><code>print("Hello")
</code></pre>
<p>run script:</p>
<pre><code>./test01.py
</code></pre>
<p>Error:</p>
<pre><code>./test01.py: line 1: syntax error near unexpected token `"Hello"'
./test01.py: line 1: `print("Hello")'
</code></pre>
<p>Now check python path</p>
<pre><code>which python
</code></pre>
<p>Output</p>
<pre><code>/var/lib/openshift/your-sesseion-id/python/virtenv/venv/bin/python
</code></pre>
<p>Now add a shebang to test01.py</p>
<pre><code>#!/var/lib/openshift/your-sesseion-id/python/virtenv/venv/bin/python
print("Hello")
</code></pre>
<p>Now Execute it</p>
<pre><code>./test01.py
</code></pre>
<p>Output:</p>
<pre><code>Hello
</code></pre>
<p><strong>Conclusion:</strong>
Your script should know how it is to be run and where the Python interpreter is, so add the shebang as the first line of your script.</p>
|
python|cron|openshift
| 3 |
1,903,988 | 61,299,165 |
How can I upload multiple images in python using pygame
|
<p>I'm trying to use multiple images using pygame for a character animation. I have 9 images that are all .png and are in the same folder as my code. I also need to upload a background image. My code looks like this, but I did only use 2 of the images for the example instead of all nine of them.</p>
<pre class="lang-py prettyprint-override"><code>walk_right = [pygame.image.load('r1.png'), pygame.image.load('r2.png')]
bg = pygame.image.load('bg.jpg')
</code></pre>
|
<p>Yes, you can do that</p>
<pre><code>walk_right = [pygame.image.load('r1.png'), pygame.image.load('r2.png')...]
</code></pre>
<p>to make it a bit tidier, you can use a loop</p>
<pre><code>walk_right = []
for i in range(9):
image = pygame.image.load("r" + str(i) + ".png")
walk_right.append(image)
</code></pre>
<p>or to do the loop in one line</p>
<pre><code>walk_right = [pygame.image.load("r" + str(i) + ".png") for i in range(9)]
</code></pre>
<hr>
<p>The directory can be any, the above example is if the images are in the same folder, if the images are in another folder inside game files you can do</p>
<pre><code>"Images/r" + str(i) + ".png"
</code></pre>
<p>Or get the whole directory to the images</p>
<pre><code>Dir = "C:/Users/user/Documents/GameFiles/Images/"
pygame.image.load(Dir + "r" + str(i) + ".png")
</code></pre>
<p>If it's still not right, make sure everything is spelt the same, and check whether the images start at 0 or 1; the above example starts at 0. </p>
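<p>The filename pattern on its own (no pygame needed to see which files get loaded); if your files are r1.png through r9.png, start the range at 1:</p>
<pre><code>frames = ["r" + str(i) + ".png" for i in range(1, 10)]
print(frames[0], frames[-1])  # r1.png r9.png
</code></pre>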
|
python|pygame
| 1 |
1,903,989 | 57,920,177 |
passing mixed keyword non keyword arguments programmatically to a bound method in Python
|
<p>I have an object.</p>
<pre><code>class myClass:
def f1(self,arg1,param1,dictionary_settings={}):
# stuff
def f2(self,arg1,dictionary_settings={}):
# stuff
myobj=myClass()
</code></pre>
<p>I am now trying to use this object from another class.
I need to call programmatically f1 or f2, and inject parameters accordingly.
I can do that with getattr:</p>
<pre><code>f_to_call=getattr(myobj,'f1')
</code></pre>
<p>How do I then pass my arguments which may be keyword and non-keyword arguments?</p>
|
<p>The line <code>f = getattr(myobj,'f1')</code> just makes <code>f</code> a bound method. Passing the parameters happens just like in any other method.</p>
<pre><code>f(1, 2, a=3, b=4)
</code></pre>
<p>or, </p>
<pre><code>positional = [1, 2]
keyword = {'a': 3, 'b': 4}
f(*positional, **keyword)
</code></pre>
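<p>A minimal end-to-end sketch against a class like the one in the question (return values added only to make the calls observable):</p>
<pre><code>class MyClass:
    def f1(self, arg1, param1, dictionary_settings={}):
        return ('f1', arg1, param1, dictionary_settings)

    def f2(self, arg1, dictionary_settings={}):
        return ('f2', arg1, dictionary_settings)

myobj = MyClass()
f_to_call = getattr(myobj, 'f1')   # bound method, not yet called
result = f_to_call('a', 'b', dictionary_settings={'k': 1})
print(result)  # ('f1', 'a', 'b', {'k': 1})
</code></pre>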
|
python|arguments|parameter-passing
| 2 |
1,903,990 | 18,647,744 |
Opening Multiple Files with their paths in a list
|
<p>I'm trying to write a script that gathers a certain type of file from a data folder, opens all of them in their default program (called iNMR) and processes them with different mathematical functions, gather some values, and record those in a text file.</p>
<p>Right now i'm just trying to focus on opening all of the files.</p>
<p>Here's what I have so far:</p>
<pre><code>import os
import glob
import subprocess
os.chdir("/Users/BabyJ/Desktop/MRSDATA")
reflist = glob.glob('*raw_ref.SDAT')
actlist = glob.glob('*raw_act.SDAT')
for i in reflist:
open('%r') %i
for i in actlist:
open('%r') %i
</code></pre>
<p>Yes I want to open all of the files at once, but I'm not too sure of the syntax of open(). I need to open the file as if I were double clicking the file, but i'm pretty sure it only opens it in the python background or whatever it is so that I can edit it. But I need to do physical clicks on it, so I need it open physically.</p>
|
<p>You are attempting to open the file in python, while what you want is to open the file in whatever operating system you are running.</p>
<p>This is done through the operating system, e.g. via <code>os.system</code> or the <code>subprocess</code> module.</p>
<p>Here are similar questions with examples: <a href="https://stackoverflow.com/questions/434597/open-document-with-default-application-in-python">click me</a> and <a href="https://stackoverflow.com/questions/1679798/how-to-open-a-file-with-the-standard-application">and me</a></p>
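<p>A sketch of that idea with the <code>subprocess</code> module. On macOS (the question's paths are under <code>/Users</code>) the <code>open</code> command launches a file in its default application, as if double-clicked; on Windows you would use <code>os.startfile</code> and on Linux <code>xdg-open</code>. The helper name is made up:</p>
<pre><code>import glob
import subprocess
import sys

def launch_with_default_app(path):
    """Open *path* as if it were double-clicked in the file manager."""
    if sys.platform == 'darwin':
        subprocess.call(['open', path])
    elif sys.platform.startswith('win'):
        import os
        os.startfile(path)
    else:
        subprocess.call(['xdg-open', path])

for path in glob.glob('*raw_ref.SDAT') + glob.glob('*raw_act.SDAT'):
    launch_with_default_app(path)
</code></pre>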
|
python|file|list
| 0 |
1,903,991 | 18,682,517 |
Dynamically setting a property in ndb object
|
<p>In my GAE based web app I load a NDB Entity and try to edit that. But the problem is the field I am going to edit is dynamically decided from a string so I can't hardcode it.</p>
<p>I tried these things but none worked</p>
<pre><code>obj[fieldName] = newValue
obj.populate(fieldName,newValue)
obj.populate(Modlue._properties[fieldName] = newValue) #keyword can't be an expression
setattr(obj, fieldName,newValue) #value not being set
</code></pre>
<p>There must be some correct syntax to do that. Can anybody help me regarding that</p>
|
<p>What you are looking for is the <a href="https://cloud.google.com/appengine/docs/python/ndb/creating-entity-models#creating_an_expando_model_class" rel="nofollow">Expando class</a></p>
<p>You can safely replace <code>ndb.Model</code> with <code>ndb.Expando</code> in your model classes, your persisted entities will still be perfectly usable. </p>
<p><strong>Note</strong>: However, it might not work the other way around; <code>ndb</code> will crash if you try to manipulate (fetch/put) an <code>ndb.Model</code> entity that has attributes that are not declared in its class.</p>
|
python|google-app-engine|app-engine-ndb
| 1 |
1,903,992 | 71,456,649 |
Can't get all html page with Beautiful soup
|
<p>I'm trying to get the content of this webpage : <a href="https://www.zillow.com/homes/for_rent/1-_beds/?searchQueryState=%7B%22pagination%22%3A%7B%7D%2C%22mapBounds%22%3A%7B%22west%22%3A-122.67022170019531%2C%22east%22%3A-122.19643629980469%2C%22south%22%3A37.615282466144976%2C%22north%22%3A37.93495488175342%7D%2C%22mapZoom%22%3A11%2C%22isMapVisible%22%3Atrue%2C%22filterState%22%3A%7B%22price%22%3A%7B%22max%22%3A872627%7D%2C%22beds%22%3A%7B%22min%22%3A1%7D%2C%22fore%22%3A%7B%22value%22%3Afalse%7D%2C%22mp%22%3A%7B%22max%22%3A3000%7D%2C%22nc%22%3A%7B%22value%22%3Afalse%7D%2C%22fr%22%3A%7B%22value%22%3Atrue%7D%2C%22cmsn%22%3A%7B%22value%22%3Afalse%7D%2C%22fsba%22%3A%7B%22value%22%3Afalse%7D%7D%2C%22isListVisible%22%3Atrue%7D" rel="nofollow noreferrer">https://www.zillow.com/homes/for_rent/1-_beds/?searchQueryState=%7B%22pagination%22%3A%7B%7D%2C%22mapBounds%22%3A%7B%22west%22%3A-122.67022170019531%2C%22east%22%3A-122.19643629980469%2C%22south%22%3A37.615282466144976%2C%22north%22%3A37.93495488175342%7D%2C%22mapZoom%22%3A11%2C%22isMapVisible%22%3Atrue%2C%22filterState%22%3A%7B%22price%22%3A%7B%22max%22%3A872627%7D%2C%22beds%22%3A%7B%22min%22%3A1%7D%2C%22fore%22%3A%7B%22value%22%3Afalse%7D%2C%22mp%22%3A%7B%22max%22%3A3000%7D%2C%22nc%22%3A%7B%22value%22%3Afalse%7D%2C%22fr%22%3A%7B%22value%22%3Atrue%7D%2C%22cmsn%22%3A%7B%22value%22%3Afalse%7D%2C%22fsba%22%3A%7B%22value%22%3Afalse%7D%7D%2C%22isListVisible%22%3Atrue%7D</a></p>
<p>I can't get all of it. Many elements are empty. I was told that it was the case because it was js code and bs4 can't read js and I had to use selenium instead, but I want to do it with bs4 and I know there is a way to do so. I also was told that it was the case because I wasn't in the correct iframe, but that doesn't seem to be true. For example, if you inspect one of the prices listed (e.g. $2,200/mo) you will see that it is contained in a ul list and each apartment listed is a li element of that list. But when I scrape the page with bs it seems that most of these li elements are empty.
Also, bear in mind I'm a newbie in web-scraping and in python, so be cool please.
Thanks!</p>
<p>Here is the code I'm using to get the page html:</p>
<pre><code>self.response = requests.get(url=URL, headers=headers)
self.html_doc = self.response.text
self.soup = BeautifulSoup(self.html_doc, 'html.parser')
</code></pre>
|
<p>Yes, this site uses React. Open the browser developer tools (Network tab) in Chrome or Firefox and look at which requests your browser makes; check the call stack and the request details to find the one that carries the data. In the Network tab I see this link: <a href="https://www.zillow.com/search/GetSearchPageState.htm?searchQueryState=%7B%22pagination%22%3A%7B%7D%2C%22mapBounds%22%3A%7B%22west%22%3A-122.83501662207031%2C%22east%22%3A-122.03164137792969%2C%22south%22%3A37.548623602126355%2C%22north%22%3A38.00126648128239%7D%2C%22mapZoom%22%3A11%2C%22isMapVisible%22%3Atrue%2C%22category%22%3A%22cat2%22%2C%22filterState%22%3A%7B%22price%22%3A%7B%22max%22%3A872627%7D%2C%22beds%22%3A%7B%22min%22%3A1%7D%2C%22isForSaleForeclosure%22%3A%7B%22value%22%3Afalse%7D%2C%22monthlyPayment%22%3A%7B%22max%22%3A3000%7D%2C%22isNewConstruction%22%3A%7B%22value%22%3Afalse%7D%2C%22isComingSoon%22%3A%7B%22value%22%3Afalse%7D%2C%22isForSaleByAgent%22%3A%7B%22value%22%3Afalse%7D%2C%22sortSelection%22%3A%7B%22value%22%3A%22globalrelevanceex%22%7D%7D%2C%22isListVisible%22%3Atrue%7D&wants=%7B%22cat2%22:%5B%22listResults%22,%22mapResults%22%5D,%22cat1%22:%5B%22total%22%5D%7D&requestId=6" rel="nofollow noreferrer">https://www.zillow.com/search/GetSearchPageState.htm?searchQueryState=%7B%22pagination%22%3A%7B%7D%2C%22mapBounds%22%3A%7B%22west%22%3A-122.83501662207031%2C%22east%22%3A-122.03164137792969%2C%22south%22%3A37.548623602126355%2C%22north%22%3A38.00126648128239%7D%2C%22mapZoom%22%3A11%2C%22isMapVisible%22%3Atrue%2C%22category%22%3A%22cat2%22%2C%22filterState%22%3A%7B%22price%22%3A%7B%22max%22%3A872627%7D%2C%22beds%22%3A%7B%22min%22%3A1%7D%2C%22isForSaleForeclosure%22%3A%7B%22value%22%3Afalse%7D%2C%22monthlyPayment%22%3A%7B%22max%22%3A3000%7D%2C%22isNewConstruction%22%3A%7B%22value%22%3Afalse%7D%2C%22isComingSoon%22%3A%7B%22value%22%3Afalse%7D%2C%22isForSaleByAgent%22%3A%7B%22value%22%3Afalse%7D%2C%22sortSelection%22%3A%7B%22value%22%3A%22globalrelevanceex%22%7D%7D%2C%22isListVisible%22%3Atrue%7D&wants={%22cat2%22:[%22listResults%22,%22mapResults%22],%22cat1%22:[%22total%22]}&requestId=6</a>. React builds the page from this data, so you can request that endpoint directly instead of scraping the rendered HTML. Sorry, my English is not good, but I hope I helped.</p>
|
python|html|web-scraping|beautifulsoup
| -1 |
1,903,993 | 69,416,000 |
Python parse user input the same way as CLI input?
|
<p>I'm building a CLI and I just discovered <a href="https://google.github.io/python-fire/guide/" rel="nofollow noreferrer">Fire</a> and it's a wonderful way to pass parameters to a function from the command line. It's very clean and intuitive.</p>
<p>However, one problem I have is I need to perform some actions while the program is still running and values are in memory. So for that I can't use Fire (or at least I don't think I can). But I would like to use something that works the same as Fire. I think that I need to use input() to have users input a string, but then I need to interpret that.</p>
<p>For those not aware of how Fire works, here's how. It turns CLI commands into function parameters and executes with those values.</p>
<p>example</p>
<pre><code>command line:
function_name parameter1 parameter2 parameter3 --parameter6_name parameter6
python script:
def function_name(parameter1, parameter2, parameter3=0... parameter6_name='No'):
</code></pre>
<p>I can think of a few ways I might go about this manually in a crude way, but it would be hard and I don't think I would be able to get it to work exactly right. Is there some existing way to parse like this? I've tried searching around for a few hours but I'm not sure I know the right search terms for this problem. I'd appreciate it if anyone can point me in the right direction.</p>
<p>edit. Say script is called script.py. I'm aware you can use argparsse to call:</p>
<pre><code>script.py param1 param2 --param4_name param4
</code></pre>
<p>(thought I think Fire is better for this purpose)</p>
<p>What I'm trying to do is not pass the parameters during the command line command to launch the app, but pass the parameters while another python script is running, using something like input(). ex.</p>
<pre><code>python3 script.py
Type a search phrase for the option you want: input()
Choose a character to select the option: input()
Type the parameters for a function to call to use with that option: input()
option_func1 param1 param2 --param4_name param4
or
option_func2 param1 param2 --param4_name param4
</code></pre>
<p>(then it runs that function with those parameters using values from the initial option)</p>
|
<p>I suggest using <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow noreferrer">argparse</a>, a module that makes it easy to write user-friendly command-line interfaces.</p>
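<p>A sketch of how that combines with <code>input()</code>: split the typed line with <code>shlex</code> (so quoted arguments survive) and hand the token list to <code>parse_args</code>, which otherwise defaults to <code>sys.argv</code>. The parameter names mirror the question's example:</p>
<pre><code>import argparse
import shlex

parser = argparse.ArgumentParser(prog='option_func1')
parser.add_argument('param1')
parser.add_argument('param2')
parser.add_argument('--param4_name', default='No')

line = 'alpha beta --param4_name gamma'   # in the real app: line = input()
args = parser.parse_args(shlex.split(line))
print(args.param1, args.param2, args.param4_name)  # alpha beta gamma
</code></pre>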
|
python|arguments|command-line-interface|parameter-passing|argparse
| 1 |
1,903,994 | 69,534,510 |
Enter single digit but the result have the tens digit when use python search value
|
<p>When I input <code>7,17</code> but the result is</p>
<pre class="lang-none prettyprint-override"><code>ifDescr.7
ifDescr.70
ifDescr.17
</code></pre>
<p>If I want the result is 7 and 17 when I input <code>7 17</code>, how do I code it?</p>
<pre class="lang-none prettyprint-override"><code>ifDescr.7
ifDescr.17
</code></pre>
<p>text file</p>
<pre class="lang-none prettyprint-override"><code>ifDescr.7
ifDescr.70
ifDescr.17
</code></pre>
<pre><code>def search_multiple(file_name, list_of_strings):
line_number = 0
list_of_results = []
with open(file_name, 'r') as read:
for line in read:
line_number += 1
for x in list_of_strings:
if x in line:
list_of_results.append((x,line_number,line.rstrip()))
return list_of_results
def main ():
folder = ('single.txt')
verify1,verify2 = input ("Input number").split()
matched_lines = search_multiple(folder,['ifDescr.' + verify1, 'ifDescr.' + verify2,])
for x in matched_lines:
print('Line = ', x[2])
if __name__ == '__main__':
main()
</code></pre>
|
<p>The reason for this behavior is that you are using <code>in</code> to check whether the string occurs in the line. As <code>ifDescr.70</code> contains <code>ifDescr.7</code>, the result contains it as well. Try out the below function:</p>
<pre><code>def search_multiple(file_name, list_of_strings):
line_number = 0
list_of_results = []
with open(file_name, 'r') as read:
for line in read:
line_number += 1
for x in list_of_strings:
if x == line.strip():
list_of_results.append((x,line_number,line.rstrip()))
return list_of_results
</code></pre>
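<p>The difference in one line: <code>in</code> performs a substring test, while <code>==</code> requires an exact match:</p>
<pre><code>print('ifDescr.7' in 'ifDescr.70')   # True, which is why the extra line matched
print('ifDescr.7' == 'ifDescr.70')   # False
</code></pre>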
|
python
| 2 |
1,903,995 | 57,651,357 |
How can I get texts with certain criteria in python with selenium? (texts with certain siblings)
|
<p>It's a really tricky one for me, so I'll describe the question in as much detail as possible.</p>
<p>First, let me show you some example of html.</p>
<pre><code>....
....
<div class="lawcon">
<p>
<span class="b1">
<label> No.1 </label>
</span>
</p>
<p>
"I Want to get 'No.1' label in span if the div[@class='lawcon'] has a certain <a> tags with "bb" title, and with a string of 'Law' in the text of it."
<a title="bb" class="link" onclick="javascript:blabla('12345')" href="javascript:;">Law Power</a>
</p>
</div>
<div class="lawcon">
<p>
<span class="b1">
<label> No.2 </label>
</p>
<p>
"But I don't want to get No.2 label because, although it has <a> tag with "bb" title, but it doesn't have a text of law in it"
<a title="bb" class="link" onclick="javascript:blabla('12345')" href="javascript:;">Just Power</a>
</p>
</div>
<div class="lawcon">
<p>
<span class="b1">
<label> No.3 </label>
</p>
<p>
"If there are multiple <a> tags with the right criteria in a single div, I want to get span(No.3) for each of those" <a>
<a title="bb" class="link" onclick="javascript:blabla('12345')" href="javascript:;">Lawyer</a>
<a title="bb" class="link" onclick="javascript:blabla('12345')" href="javascript:;">By the Law</a>
<a title="bb" class="link" onclick="javascript:blabla('12345')" href="javascript:;">But not this one</a>
...
...
...
</code></pre>
<p>So, here is the thing. I want to extract the text of (e.g. No.1) in div[@class='lawcon'] only if the div has a tag with "bb" title, with a string of 'Law' in it.</p>
<p>If inside of the div, if there isn't any tag with "bb" title, or string of "Law" in it, the span should not be collected.</p>
<p>What I tried was</p>
<pre><code>div_list = [div.text for div in driver.find_elements_by_xpath('//span[following-sibling::a[@title="bb"]]')]
</code></pre>
<p>But the problem is, when it has multiple tag with right criteria in a single div, it only return just one div.</p>
<p>What I want to have is a location(: span numbers) list(or tuple) of those text of tags</p>
<p>So it should be like</p>
<pre><code>[[No.1 - Law Power], [No.3 - Lawyer], [No.3 - By the Law]]
</code></pre>
<p>I'm not sure I have explained enough. Thank you for your interests and hopefully, enlighten me with your knowledge! I really appreciate it in advance.</p>
|
<p>As your requirement is to extract the texts <strong>No.1</strong> and so on, which are within a <code><label></code> tag, you have to induce <em>WebDriverWait</em> for the <code>visibility_of_all_elements_located()</code> and you will have only 2 matches (against your expectation of 3) and you can use the following <a href="https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890">Locator Strategy</a>:</p>
<ul>
<li><p>Using <code>XPATH</code>:</p>
<pre><code>print([my_elem.get_attribute("innerHTML") for my_elem in WebDriverWait(driver, 5).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[@class='lawcon']//a[@title='bb' and contains(.,'Law')]//preceding::label[1]")))])
</code></pre></li>
</ul>
|
python-3.x|selenium|selenium-webdriver|xpath|webdriverwait
| 0 |
1,903,996 | 57,724,013 |
How to show the framerate in PyQTgraph?
|
<p>I am doing some live data updating using some external hardware. I need to know the framerate of the animation, to know if the problem is my potato computer or the sampling rate of the hardware. </p>
<p>Is there a way to display the framerate in pyqtgraph?<br>
i am using it in combination with openGL and am displaying a heatmap that changes live based on touch to the hardware. However i have the feeling it is lagging a bit.</p>
<p>I am imagining something inside the update.self routine.</p>
<p>My code is not really relevant, as it is working. I just need some (probably very obvious) way to read the update rate.</p>
<p>I already looked into realtime imaging, but it is not worth designing a multiple thread approach...</p>
|
<p>This works somewhat; I'm not sure it does exactly what I want. In <code>__init__</code> I store a starting timestamp for the first loop:</p>
<pre><code>import datetime

def __init__(self):
    self.start_time = datetime.datetime.now()
</code></pre>
<p>Then I read the frame rate in the update routine by subtracting full datetime objects, so it also works across minute boundaries:</p>
<pre><code>def update(self):
    # some code
    now = datetime.datetime.now()
    elapsed = (now - self.start_time).total_seconds()
    if elapsed > 0:
        print(1 / elapsed)  # frames per second
    self.start_time = now
</code></pre>
python|animation|opengl|frame-rate|pyqtgraph
| 0 |
1,903,997 | 57,624,493 |
how to give a function a parameter without executing it in python?
|
<p>I want to execute a function with the module Threading and it has a parameter but every different call need every different parameter so how do I do.</p>
<p>I tried this code it's a simple one:</p>
<pre><code>import threading
def printer(a):
print(a)
x=threading.Thread(target=printer("hello"))
z=threading.Thread(target=printer("world"))
x.start()
z.start()
</code></pre>
<p>When I write <code>printer("hello")</code> with the parentheses, the function is called immediately instead of being run by the threading module.</p>
|
<p>The call signature of threading.Thread is the following</p>
<pre><code>threading.Thread(group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None)
</code></pre>
<p>We can see that it takes the keyword arguments <code>args</code> and <code>kwargs</code>; this is where your arguments go.
So in your code you write:</p>
<pre><code>x = threading.Thread(target=printer, args=('hello',))
</code></pre>
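For completeness, the corrected version of the original two-thread script might look like this (with `join()` calls added so the program waits for both threads to finish):

```python
import threading

def printer(a):
    print(a)

# Pass the function object itself as target; its arguments go in the args tuple.
x = threading.Thread(target=printer, args=("hello",))
z = threading.Thread(target=printer, args=("world",))
x.start()
z.start()
x.join()
z.join()
```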
|
python-3.x
| 0 |
1,903,998 | 42,153,066 |
python : list and dictionary, TypeError: list indices must be integers, not str
|
<p><strong>I want to count how many times specific words occur in a given sentence. The words are already saved in my dictionary and the sentence will be input from the user.
Here is my code.</strong> </p>
<pre><code>from collections import Counter
Find_word= raw_input('Write Sentence:')
wordTosearch=['is', 'am']
sentence= Find_word.split()
cnt = Counter(sentence)
for k in sentence:
if k in wordTosearch:
print k, wordTosearch[k]
if cnt[wordTosearch]>1:
print "aggresive"
else:
print "Not agressive"
</code></pre>
|
<p><code>wordTosearch</code> is a list of words.</p>
<p>The following is iterating through that list of words:</p>
<pre><code>if k in wordTosearch:
print k, wordTosearch[k] # <----
</code></pre>
<p>where <code>k</code> is a word, and <code>wordTosearch[k]</code> is an attempt to access a list value by a string key, which gives you <em>"TypeError: list indices must be integers, not str"</em>.<br>
You cannot access list values by string indices, because <em>Python</em> lists are "numbered" (integer-indexed) sequences.</p>
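A sketch of what the original code was probably aiming for: index the `Counter` by each word instead of indexing the list by a string (the sample sentence below is made up for illustration):

```python
from collections import Counter

sentence = "he is what he is and i am what i am".split()
words_to_search = ['is', 'am']
cnt = Counter(sentence)  # maps each word to its frequency

for word in words_to_search:
    print(word, cnt[word])  # Counter supports string keys; the list does not

# Flag the sentence if any searched word occurs more than once.
if any(cnt[word] > 1 for word in words_to_search):
    print("aggressive")
else:
    print("Not aggressive")
```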
|
python-2.7
| 0 |
1,903,999 | 53,959,458 |
Django: Suggestion for models design
|
<p>I need help with creating models for my simple Django app.</p>
<p>The purpose of the application is to let users (referees) register for matches, then admin will choose 2 users (referees) from the list of registered for given match. Right now my Matches model looks like below:</p>
<pre><code>class Match(models.Model):
match_number = models.CharField(
max_length=10
)
home_team = models.ForeignKey(
Team,
on_delete=models.SET_NULL,
null=True,
related_name='home_team'
)
away_team = models.ForeignKey(
Team,
on_delete=models.SET_NULL,
null=True,
related_name='away_team'
)
match_category = models.ForeignKey(
MatchCategory,
on_delete=models.SET_NULL,
null=True
)
date_time = models.DateTimeField(
default=timezone.now
)
notes = models.TextField(
max_length=1000,
blank=True
)
</code></pre>
<p>What I thought to do is to create new Model named MatchRegister where I will be saving match_id and user_id, something like below:</p>
<pre><code>class MatchRegister(models.Model):
match_id = models.ForeignKey(
Match
)
user_id = models.ForeignKey(
Users
)
</code></pre>
<p>And than admin will have list of registered user for given match from which he will choose two, so I thought to modify my Match model like this (add two new Fields):</p>
<pre><code>class Match(models.Model):
match_number = models.CharField(
max_length=10
)
home_team = models.ForeignKey(
Team,
on_delete=models.SET_NULL,
null=True,
related_name='home_team'
)
away_team = models.ForeignKey(
Team,
on_delete=models.SET_NULL,
null=True,
related_name='away_team'
)
match_category = models.ForeignKey(
MatchCategory,
on_delete=models.SET_NULL,
null=True
)
date_time = models.DateTimeField(
default=timezone.now
)
notes = models.TextField(
max_length=1000,
blank=True
)
ref_a = models.ForeignKey(
Users,
on_delete=models.SET_NULL,
null=True,
related_name='ref_a'
)
ref_b = models.ForeignKey(
Users,
on_delete=models.SET_NULL,
null=True,
related_name='ref_b'
)
</code></pre>
<p>This is my solution but I don't know if it is done in proper way so I want to ask you for help.</p>
|
<p>If you know for certain that matches will only ever have two refs, then what you propose is just fine. However, if there's an opportunity in the future for the number to change (only one, or perhaps three), an alternative would be to add a flag to the intermediate table:</p>
<pre><code>class MatchRegister(models.Model):
    match_id = models.ForeignKey(Match, on_delete=models.CASCADE)
    user_id = models.ForeignKey(Users, on_delete=models.CASCADE)
    chosen = models.BooleanField(default=False)
</code></pre>
<p>You would need business logic to constrain the number of "chosen" refs to the number you anticipate. This option makes it easy to increase or decrease the number of refs without adding or removing columns (just change the business logic).</p>
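In Django that business logic would typically live in the model's <code>clean()</code> or <code>save()</code>. As a plain-Python sketch of the rule itself (the `MAX_REFS` limit, function name, and dict layout are assumptions for illustration, not Django API):

```python
MAX_REFS = 2  # hypothetical limit on chosen referees per match

def choose_referee(registrations, match_id, user_id):
    """Mark one registration as chosen, enforcing the per-match limit.

    `registrations` is a list of dicts with keys match_id, user_id, chosen.
    """
    already_chosen = [r for r in registrations
                      if r["match_id"] == match_id and r["chosen"]]
    if len(already_chosen) >= MAX_REFS:
        raise ValueError("match already has the maximum number of chosen referees")
    for r in registrations:
        if r["match_id"] == match_id and r["user_id"] == user_id:
            r["chosen"] = True
            return r
    raise LookupError("no registration for that match and user")
```

Changing the number of referees later then means changing `MAX_REFS`, not adding or removing columns.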
|
python|django
| 1 |