Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, lengths 10 to 150) | question (string, lengths 21 to 64.2k) | answer (string, lengths 19 to 59.4k) | tags (string, lengths 5 to 112) | score (int64, -10 to 17.3k)
---|---|---|---|---|---|---
1,904,700 | 61,396,614 |
How to sort a column that has dates in a dataframe?
|
<p>I have a dataframe like this:</p>
<pre><code> SEMANAS HIDROLOGICAS METEOROLOGICAS
0 02042020 36.00583090379008 31.284418529316522
1 05032020 86.91690962099126 77.01136731748973
2 12032020 87.31778425655976 77.24180581323434
3 19032020 59.2201166180758 54.57343110404338
4 26032020 32.39795918367347 29.049238743116323
</code></pre>
<p>I used this code to change df.SEMANAS to datetime </p>
<pre><code>Semanas_Oper['SEMANAS']=pd.to_datetime(Semanas_Oper['SEMANAS'], format='%d%m%Y').dt.strftime('%d/%m/%Y')
SEMANAS HIDROLOGICAS METEOROLOGICAS
02/04/2020 36.01 31.28
05/03/2020 86.92 77.01
12/03/2020 87.32 77.24
19/03/2020 59.22 54.57
26/03/2020 32.4 29.05
</code></pre>
<p>But <code>pd.to_datetime</code> is not sorting the dates of the column df.SEMANAS.
Can you tell me how to sort this column? 02/04/2020 must be in the last row.</p>
|
<p><code>dt.strftime()</code> undoes the datetime conversion and brings you back to strings. If you sort on this, you'll be left with lexicographical sorting, which is not what you want given your format is <code>'%d/%m/%Y'</code> (it would be fine with <code>'%Y/%m/%d'</code>).</p>
<p>When <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html" rel="nofollow noreferrer">working with dates in pandas</a> you should <strong>keep the datetime64[ns] dtype</strong>. It's the easiest way to perform all datetime operations. Only use <code>.strftime</code> when you need to move to some other library or file output that requires a very specific string format. </p>
<pre><code>df['SEMANAS'] = pd.to_datetime(df['SEMANAS'], format='%d%m%Y')
df.dtypes
#SEMANAS datetime64[ns]
#HIDROLOGICAS object
#METEOROLOGICAS object
df = df.sort_values('SEMANAS')
# SEMANAS HIDROLOGICAS METEOROLOGICAS
#1 2020-03-05 86.91690962099126 77.01136731748973
#2 2020-03-12 87.31778425655976 77.24180581323434
#3 2020-03-19 59.2201166180758 54.57343110404338
#4 2020-03-26 32.39795918367347 29.049238743116323
#0 2020-04-02 36.00583090379008 31.284418529316522
</code></pre>
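<p>If you later need the <code>'%d/%m/%Y'</code> strings for display or file output, apply <code>strftime</code> only at that final step, after sorting:</p>
<pre><code>df['SEMANAS'].dt.strftime('%d/%m/%Y')
</code></pre>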
|
pandas|sorting|datetime
| 2 |
1,904,701 | 61,449,162 |
Store the results of a multiprocessing queue in python
|
<p>I'm trying to store the results of multiple API requests using multiprocessing queue as the API can't handle more than 5 connections at once. </p>
<p>I found part of a solution of <a href="https://stackoverflow.com/questions/54858979/how-to-use-multiprocessing-with-requests-module">How to use multiprocessing with requests module?</a></p>
<pre><code>def worker(input_queue, stop_event):
    while not stop_event.is_set():
        try:
            # Check if any request has arrived in the input queue. If not,
            # loop back and try again.
            request = input_queue.get(True, 1)
            input_queue.task_done()
        except queue.Empty:
            continue
        print('Started working on:', request)
        api_request_function(request)  # make request using a function I wrote
        print('Stopped working on:', request)

def master(api_requests):
    input_queue = multiprocessing.JoinableQueue()
    stop_event = multiprocessing.Event()
    workers = []

    # Create workers.
    for i in range(3):
        p = multiprocessing.Process(target=worker,
                                    args=(input_queue, stop_event))
        workers.append(p)
        p.start()

    # Distribute work.
    for requests in api_requests:
        input_queue.put(requests)

    # Wait for the queue to be consumed.
    input_queue.join()

    # Ask the workers to quit.
    stop_event.set()

    # Wait for workers to quit.
    for w in workers:
        w.join()

    print('Done')
</code></pre>
<p>I've looked at the documentation of threading and pooling but I'm missing a step. The above runs and all requests get a 200 status code, which is great. But how do I store the results of the requests so I can use them?</p>
<p>Thanks for your help
Shan</p>
|
<p>I believe you have to make a Queue. The code can be a little tricky; you need to read up on the multiprocessing module. In general, with multiprocessing, all the variables are copied into each worker, hence you can't do something like appending to a global variable: the worker's copy gets appended to while the original stays untouched. There are a few functions that already automatically incorporate workers, queues, and return values. Personally, I try to write my functions to work with <code>mp.map</code>, like below:</p>
<pre><code>import multiprocessing

def worker(*args, **kwargs):
    # do stuff
    return 'thing'

output = multiprocessing.Pool().map(worker, [1, 2, 3, 4, 5])
</code></pre>
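<p>If you would rather keep your explicit worker/queue setup, a minimal sketch (assuming your <code>api_request_function</code> returns the response) is to add a second queue that workers put results into and the master drains afterwards:</p>
<pre><code>output_queue = multiprocessing.Queue()

def worker(input_queue, output_queue, stop_event):
    while not stop_event.is_set():
        try:
            request = input_queue.get(True, 1)
        except queue.Empty:
            continue
        result = api_request_function(request)
        output_queue.put((request, result))  # store the result
        input_queue.task_done()

# in master(), after input_queue.join():
results = []
while not output_queue.empty():
    results.append(output_queue.get())
</code></pre>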
|
python|multiprocessing
| 0 |
1,904,702 | 55,302,355 |
Python: Creating a List Filter String with Duplicates
|
<p>I'm trying to create a list filter that'll delete a word if it contains a duplicate of a letter, e.g. deleting {aPPle, AAron, iRRational}.
I'm not sure where to proceed, or even where to start. Any tips would be greatly appreciated.</p>
<pre><code>q = 0
x = 0
j = 0

def filter(original, q, x, j):
    for x in range len(orignal):
        for l in original:
            for x in l:
                for j in l:
                    if j == x:

input_string = input("Enter names that are separated by space: ")
original = input_string.split()
print("original list:", original)
filter(original, q, x, j)
restricted = {"aa", "bb", "cc", "dd", "ee", "ff", "gg", "hh", "ii", "jj",
              "kk", "ll", "mm", "nn", "oo", "pp", "qq", "rr", "ss", "tt", "uu", "vv",
              "ww", "xx", "yy", "zz"}
if original == restricted:
    #remove???
</code></pre>
|
<p>You can use <code>any</code> to detect adjacent equal letters, then filter out in a list comprehension</p>
<pre><code>>>> words = ['apple', 'bob', 'aaron', 'foo', 'bar']
>>> [word for word in words if not any(i == j for i,j in zip(word[:-1], word[1:]))]
['bob', 'bar']
</code></pre>
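<p>If the input can be mixed case like <code>aPPle</code> or <code>AAron</code>, compare a lowercased copy instead:</p>
<pre><code>>>> words = ['aPPle', 'AAron', 'iRRational', 'bob']
>>> [word for word in words if not any(i == j for i, j in zip(word.lower(), word.lower()[1:]))]
['bob']
</code></pre>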
|
python
| 1 |
1,904,703 | 54,199,612 |
How to check if the input contains only letters, not numbers or special characters?
|
<pre><code>students = []

while True:
    Ask = input("Do you Want to Add Student:").lower()
    try:
        if Ask == "Yes".lower():
            student_name = input("Enter Student Name:")
            student_id = int(input("Enter Student ID:"))

            def get_students_titlecase():
                students_titlecase = []
                for student in students:
                    students_titlecase = student["name"].title()
                return students_titlecase

            def print_students_titlecase():
                students_titlecase = get_students_titlecase()
                print(students_titlecase)

            def add_student(name, student_id):
                student = {"name": name, "student_id": student_id}
                students.append(student)

            Ask = input("Do you Want to Add Student:").lower()
            if Ask == "Yes".lower():
                student_list = get_students_titlecase()
                add_student(student_name, student_id)
                print_students_titlecase()
                print(students)
            else:
                continue
        else:
            print("Type Yes if you To Enter The Student Cred".center(50))
    except Exception:
        print("Write Valid Values")
</code></pre>
<p>As with the age variable, we are forced to input numbers only because of the <code>int()</code>. Is there a similar way for the name, so it accepts only alphabetic characters, not numbers or special characters?</p>
<p>The conclusion should be:</p>
<p>If the input is entered a number or special char in the name it should give Error.</p>
|
<p>Take a look at the following two standard <code>str</code> functions, they might be what you are looking for:</p>
<ul>
<li><a href="https://docs.python.org/3.7/library/stdtypes.html#str.isalpha" rel="nofollow noreferrer"><code>str.isalpha()</code></a>: <code>true</code> if <code>str</code> only contains letters (at least one)</li>
<li><a href="https://docs.python.org/3.7/library/stdtypes.html#str.isdigit" rel="nofollow noreferrer"><code>str.isdigit()</code></a>: <code>true</code> if <code>str</code> only contains digits (at least one)</li>
</ul>
<p>You could use them like so:</p>
<pre><code>def input_name():
    while True:
        name = input("Enter your name: ")
        if name.isalpha():
            return name
        print("Please only use letters: [a-zA-Z]. Try again.")

def input_age():
    while True:
        age = input("Enter your age: ")
        if age.isdigit():
            return int(age)
        print("Please only use digits: [0-9]. Try again.")

name = input_name()
age = input_age()
</code></pre>
|
python-3.x
| 1 |
1,904,704 | 57,103,160 |
How to add Django statements to Javascript
|
<p>I'm trying to implement some JS code in Django so that anytime I click a button, it returns the current year. How do I do this?</p>
<p>I've tried using some JS events handler in my django based templates </p>
<pre><code>from datetime import datetime
from django.template import Template, Context
from django.http import HttpResponse as toHTML

cont = Context({'dat': datetime.now()})

def say_year(request):
    htm = Template("<button onclick='alert({dat})'>click me</button>")
    htm = htm.render(cont)
    return toHTML(htm)
</code></pre>
<p>I'm expecting an alert box showing the full <code>datetime.now()</code> value.</p>
|
<p>I would prefer that you do it this way, in order to have full control over the template.
1 - Make the views.py like this:</p>
<pre><code>from datetime import datetime
from django.template import Template, Context
from django.shortcuts import render

def say_year(request):
    context = {
        'dat': datetime.now()
    }
    return render(request, 'my_template.html', context)
</code></pre>
<p>2 - The urls.py should look like this:</p>
<pre><code>from django.contrib import admin
from django.urls import path
from your_app import views

urlpatterns = [
    path('admin/', admin.site.urls),
    path('my_template/', views.say_year)
]
</code></pre>
<p>3 - Create a templates folder in the root directory of your project where all your templates will live.
For your question I have created my_template.html and it should look like this:</p>
<pre><code><button onclick="alert('{{dat}}')">click me</button>
</code></pre>
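<p>Note that Django only looks in that folder if it is listed in your settings; assuming a standard project layout, add it to <code>DIRS</code> in settings.py:</p>
<pre><code># in settings.py, inside the TEMPLATES setting:
'DIRS': [os.path.join(BASE_DIR, 'templates')],
</code></pre>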
<p>If you have more questions please let me know.</p>
|
django|python-3.x
| 1 |
1,904,705 | 49,427,372 |
BeautifulSoup: Remove br in table and add full link
|
<p>I am finally getting closer to extracting a table from a specific website, but my problem is that I cannot seem to figure out how to:</p>
<ol>
<li>Display the full link to the download file</li>
<li>remove the br in certain rows</li>
</ol>
<p>the html code as follows</p>
<pre><code><table border="1" cellpadding="5" cellspacing="0">
<tr class="bg">
<td><strong>Reference</strong></td>
<td stytle="width:100px"><strong>Description</strong></td>
<td><strong>Download Documents</strong></td>
<td stytle="width:50px"><strong>Closing Date</strong></td>
<td stytle="width:50px"><strong>Contact Details</strong></td>
<td><strong>Briefing</strong></td>
<!--<td><strong>PUBLISHED</strong></td>-->
</tr>
<tr>
<td>123456</td>
<td>text 123</td>
<td><a href="/downloads/linktofile.zip" target="_blank">Documents click here </a></td>
<td>2 weeks</td>
<td>me<br />
you</td>
<td>next week</td>
</tr>
<tr>
<td>123456</td>
<td>text 123</td>
<td><a href="/downloads/linktofile.zip" target="_blank">Documents click here </a></td>
<td>2 weeks</td>
<td>me<br />
you</td>
<td>next week</td>
</tr>
<tr>
<td>123456</td>
<td>text 123</td>
<td><a href="/downloads/downloads/linktofile.zip" target="_blank">Documents click here </a></td>
<td>2 weeks</td>
<td>me<br />
you</td><td>next week</td>
</tr>
<tr>
<td>123456</td>
<td>text 123</td>
<td><a href="/downloads/linktofile.zip" target="_blank">Documents click here </a></td>
<td>2 weeks</td>
<td>me<br />
you</td><td>next week</td>
</tr>
<tr>
<td>123456</td>
<td>text 123</td>
<td><a href="/downloads/downloads/linktofile.zip" target="_blank">Documents click here </a></td>
<td>2 weeks</td>
<td>me</td>
<td>next week</td>
</tr>
<tr>
<td>123456</td>
<td>text 123</td>
<td><a href="/downloads/linktofile.zip" target="_blank">Documents click here </a></td>
<td>2 weeks</td>
<td>me</td>
<td>next week</td>
</tr>
</table>
</code></pre>
<p>I want the <strong>br</strong> in the contact details to be removed, and the full link to be displayed instead of "documents click here".</p>
<p>Please be advised that this is an example table - rebuilt from the original project.</p>
<p>My Python code works fine, except that it puts the content after <br> onto a new line, and the entire output.csv is mixed up.</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-

import csv
import requests
import os
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
from bs4 import Tag
testwebsite = 'https://example.com'
uClient = uReq(testwebsite)
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html, "html.parser")
testwebsitetendersaved=""
#Table is very ugly formatted in a span tag and tables within tables
testwebsite_container = page_soup.find("span", id="MainContent2_ctl00_lblContent").findAll("table")[1]
for record in testwebsite_container.findAll('tr'):
    testwebsitetender = ""
    for data in record.findAll('td'):
        testwebsitetender = testwebsitetender + "," + data.text
    testwebsitetendersaved = testwebsitetendersaved + "\n" + testwebsitetender[1:]
header="Tender Number, Description, Documents Link, Closing Date, Contact Details, Briefing"+"\n"
file = open(os.path.expanduser("output.csv"), "wb")
file.write(bytes(header, encoding="ascii",errors='ignore'))
file.write(bytes(testwebsitetendersaved, encoding="ascii",errors='ignore'))
print(testwebsitetendersaved)
</code></pre>
|
<p>I hope this is what you want.</p>
<pre><code>testwebsitetendersaved=""
#Table is very ugly formatted in a span tag and tables within tables
testwebsite_container = page_soup.find("span", id="MainContent2_ctl00_lblContent").findAll("table")[1]
header="Tender Number, Description, Documents Link, Closing Date, Contact Details, Briefing"+"\n"
file = open(os.path.expanduser("output.csv"), "wb")
file.write(bytes(header, encoding="ascii",errors='ignore'))
skiptrcnt = 1  # skip the first tr block (the header row)
for i, record in enumerate(testwebsite_container.findAll('tr')):
    if skiptrcnt > i:
        continue
    testwebsitetender = ""
    tnum = record('td')[0].text
    desc = record('td')[1].text
    doclink = record('td')[2].text
    alink = record('td')[2].find("a")
    doclinkurl = ""  # default, in case a row has no link
    if alink:
        doclinkurl = testwebsite + alink['href']
    closingdate = record('td')[3].text
    detail = record('td')[4].text
    detail = ' '.join(detail.split())  # collapse the <br/> newline and indentation into one space
    brief = record('td')[5].text
    brief = brief.replace('\n', '')
    print(tnum, desc, doclink, doclinkurl, closingdate, detail, brief)
    testwebsitetendersaved = "{},{},{},{},{},{},{}\n".format(tnum, desc, doclink, doclinkurl, closingdate, detail, brief)
    file.write(bytes(testwebsitetendersaved, encoding="ascii", errors='ignore'))
file.close()
</code></pre>
<p>my output is</p>
<pre><code>123456 text 123 Documents click here https://example.com/downloads/linktofile.zip 2 weeks me you next week
123456 text 123 Documents click here https://example.com/downloads/linktofile.zip 2 weeks me you next week
123456 text 123 Documents click here https://example.com/downloads/downloads/linktofile.zip 2 weeks me you next week
123456 text 123 Documents click here https://example.com/downloads/linktofile.zip 2 weeks me you next week
123456 text 123 Documents click here https://example.com/downloads/downloads/linktofile.zip 2 weeks me next week
123456 text 123 Documents click here https://example.com/downloads/linktofile.zip 2 weeks me next week
</code></pre>
|
python|beautifulsoup
| 0 |
1,904,706 | 49,680,440 |
TF Slim: Fine Tune mobilenet v2 on custom dataset
|
<p>I am trying to fine-tune the <strong>Mobilenet_v2_1.4_224</strong> model on my custom dataset for an image classification task.
I am following this tutorial <a href="https://github.com/tensorflow/models/tree/546fd48ecb70635e5b086143c252649b217df59d/slim/#Tuning" rel="nofollow noreferrer">TensorFlow-Slim image classification library</a>.
I have already created the .tfrecord train and validation files. When I try to fine tune from an existing checkpoint I get the following error:</p>
<blockquote>
<p>InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1,1,24,144] rhs shape= [1,1,32,192]
[[Node: save/Assign_149 = Assign[T=DT_FLOAT, _class=["loc:@MobilenetV2/expanded_conv_2/expand/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](MobilenetV2/expanded_conv_2/expand/weights, save/RestoreV2:149)]]</p>
</blockquote>
<p><strong>The fine-tuning script that I have used is:</strong></p>
<p>DATASET_DIR=G:\Dataset</p>
<p>TRAIN_DIR=G:\Dataset\emotion-models\mobilenet_v2</p>
<p>CHECKPOINT_PATH=C:\Users\lenovo\Desktop\mobilenet_v2\mobilenet_v2_1.4_224.ckpt</p>
<pre><code>python train_image_classifier.py \
--train_dir=${TRAIN_DIR} \
--dataset_dir=${DATASET_DIR} \
--dataset_name=emotion \
--dataset_split_name=train \
--model_name=mobilenet_v2 \
--train_image_size=224 \
--clone_on_cpu=True \
--checkpoint_path=${CHECKPOINT_PATH} \
--checkpoint_exclude_scopes=MobilenetV2/Logits \
--trainable_scopes=MobilenetV2/Logits
</code></pre>
<p>I suspect that the error is due to the last 2 arguments "checkpoint_exclude_scopes" or "trainable_scopes".</p>
<p>I know that these 2 arguments are being used for transfer learning by removing the last 2 layers and creating our own softmax layer for custom dataset classficiation. But I'm not sure if I'm passing the right values for them.</p>
|
<p>To retrain the model, you must fine-tune for your custom number of classes:</p>
<blockquote>
<p>MobilenetV2/Predictions and MobilenetV2/predics</p>
</blockquote>
<pre><code>--checkpoint_exclude_scopes=MobilenetV2/Logits,MobilenetV2/Predictions,MobilenetV2/predics \
--trainable_scopes=MobilenetV2/Logits,MobilenetV2/Predictions,MobilenetV2/predics \
</code></pre>
<p>In mobilenet_v2.py, <strong>depth_multiplier=1</strong> for both <code>mobilenet</code> and <code>mobilenet_base</code>; since you are using the 1.4 checkpoint, you should change that to 1.4:</p>
<pre><code>@slim.add_arg_scope
def mobilenet_base(input_tensor, depth_multiplier=1.4, **kwargs):
    """Creates base of the mobilenet (no pooling and no logits)."""
    return mobilenet(input_tensor,
                     depth_multiplier=depth_multiplier,
                     base_only=True, **kwargs)

@slim.add_arg_scope
def mobilenet(input_tensor,
              num_classes=1001,
              depth_multiplier=1.4,
              scope='MobilenetV2',
              conv_defs=None,
              finegrain_classification_mode=False,
              min_depth=None,
              divisible_by=None,
              **kwargs):
</code></pre>
|
python|tensorflow|tensorflow-slim
| 5 |
1,904,707 | 54,787,938 |
Key, values under one header in .INI file
|
<p>I have this code in Python which, given a dictionary, it writes the key:value fields in a config.ini file. The problem is that it keeps writing the header for each field.</p>
<pre><code>import configparser

myDict = {'hello': 'world', 'hi': 'space'}

def createConfig(myDict):
    config = configparser.ConfigParser()
    # the string below is used to define the .ini header/title
    config["myHeader"] = {}
    with open('myIniFile.ini', 'w') as configfile:
        for key, value in myDict.items():
            config["myHeader"] = {key: value}
            config.write(configfile)
</code></pre>
<p>This is the output of .ini file:</p>
<pre><code>[myDict]
hello = world
[myDict]
hi = space
</code></pre>
<p>How can I get rid of double title <code>[myDict]</code> and have the result like this </p>
<pre><code>[myDict]
hello = world
hi = space
</code></pre>
<p>?</p>
<p>The code to create the .ini in Python has been taken from <a href="https://stackoverflow.com/questions/29344196/creating-a-config-file">this question</a>.</p>
|
<p>You get the header twice because you write to the configfile twice, once per loop iteration. You should build the complete section first and write it in one single write:</p>
<pre><code>def createConfig(myDict):
    config = configparser.ConfigParser()
    # the string below is used to define the .ini header/title
    config["myHeader"] = {}
    for key, value in myDict.items():
        config["myHeader"][key] = value
    with open('myIniFile.ini', 'w') as configfile:
        config.write(configfile)
</code></pre>
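<p>Since a <code>ConfigParser</code> section can be assigned from a plain dict, you could also set the whole section in one step:</p>
<pre><code>config["myHeader"] = myDict
</code></pre>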
|
python|dictionary|configparser
| 3 |
1,904,708 | 31,201,971 |
Output list element in Python function "any"
|
<p>I'm new at Python and I cannot manage to output the day correctly from within the <code>any</code> function. The <code>any</code> function returns <code>True</code> or <code>False</code>, but I would like to output the matching element of the list. Example below:</p>
<pre><code>days = ["monday","tuesday","wednesday","thursday","friday"]
if any(day in content.lower() for day in days):
    print day
</code></pre>
<p>I would like to print which day, function has found in string "content". Content has only one day at a time. Is there a simple way of doing this?</p>
|
<p>I think what you want is:</p>
<pre><code>print [day for day in days if day in content.lower()]
</code></pre>
<p>This will give you a list of all matching days. </p>
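<p>If <code>content</code> really only ever contains one day and you want that single value rather than a list, <code>next</code> with a default also works:</p>
<pre><code>print next((day for day in days if day in content.lower()), None)
</code></pre>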
|
python|list|any
| 7 |
1,904,709 | 52,157,378 |
Tensorflow shape issues in a very simple all-scalar situation
|
<p>This SSCCE generates a shape error despite everything being a scalar:</p>
<pre><code>import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1], name='x')
y = tf.add(x, 1.0)
feed = dict()
feed[x] = 0.0
with tf.Session() as sess:
    print('Simple:', sess.run(y, feed_dict=feed))
# ValueError: Cannot feed value of shape () for Tensor 'x:0', which has shape '(1,)'
</code></pre>
<p>What is the flaw in this code? Wrapping the 1.0 in tf.convert_to_tensor has no effect.</p>
|
<p>I think you might want to make your placeholder to have the shape of (), i.e. <code>tf.placeholder(dtype=tf.float32, shape=(), name='x')</code>.</p>
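<p>Alternatively, keep the <code>(1,)</code> shape and feed a one-element value instead of a bare scalar, e.g. <code>feed[x] = [0.0]</code>.</p>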
|
python-3.x|tensorflow
| 0 |
1,904,710 | 19,208,614 |
How to make Django DateTimefield non required?
|
<p>I know there is a similar question, which (<a href="https://stackoverflow.com/questions/11351619/how-to-make-djangos-datetimefield-optional">I have read</a>), but I have a question about that issue.</p>
<p>How my forms.py goes:</p>
<pre><code>class DataForm(forms.ModelForm):
    start_date = forms.CharField(
        widget=widgets.DateTimePicker(attrs={'class': 'form-control', 'style': 'width:90%'}),
        required=False
    )
    ....
    class Meta:
        model = Data
</code></pre>
<p>My models.py goes that way:</p>
<pre><code>start_date = models.DateTimeField(null=True, blank=True)
</code></pre>
<p>And when I try to save that way:</p>
<pre><code>if form.is_valid():
    form.save()
</code></pre>
<p>I got this error:</p>
<pre><code>[u"'' value has an invalid format. It must be in YYYY-MM-DD HH:MM[:ss[.uuuuuu]][TZ] format."]
</code></pre>
<p>What is wrong with that? I want to make the start_date field optional.</p>
|
<p>That happens because you declare <code>start_date</code> as a <code>CharField</code> in your <code>DataForm</code>; you should instead use <code>forms.DateTimeField</code>.</p>
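<p>Applied to your form, keeping your widget, that means:</p>
<pre><code>class DataForm(forms.ModelForm):
    start_date = forms.DateTimeField(
        widget=widgets.DateTimePicker(attrs={'class': 'form-control', 'style': 'width:90%'}),
        required=False
    )
</code></pre>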
<p>Your field is not required, so the user is allowed to leave it empty.
This is what happens inside Django when you post data in a form:</p>
<ol>
<li>The form field tries to convert the raw data into a Python type.</li>
<li>To do that, first checks if the raw data can be found inside the <code>empty_values</code> array (list of the values considered "empty", hardcoded into the field's source code).</li>
<li>If it finds the value in <code>empty_values</code> (and that is the case in your situation), it returns the proper Python representation of the empty value. Unless the field is required, in that case raises a <code>ValidationError</code>.</li>
<li>If the value is not empty, then it proceeds with the casting into Python's types.</li>
<li>The return value is then plugged into your model, or whatever your form does.</li>
</ol>
<p>At point 3, <code>forms.CharField</code> and <code>forms.DateTimeField</code> behave differently. <code>CharField</code> finds the empty value and returns <code>''</code>, an empty string, while <code>DateTimeField</code> returns <code>None</code>. The correct blank value for the <code>model.DateTimeField</code> is <code>None</code>, not <code>''</code>.</p>
<p>That's why Django raises the exception: because you're trying to assign <code>''</code> to <code>models.DateTimeField</code>, and the latter is unable to cast it properly into a date, because it doesn't recognize <code>''</code> as a valid blank value for a <code>DateTimeField</code>.</p>
<p>You can see <a href="https://github.com/django/django/blob/master/django/forms/fields.py" rel="noreferrer">Django's source code</a> to understand better how it works, and also the <a href="https://docs.djangoproject.com/en/dev/howto/custom-model-fields/" rel="noreferrer">documentation</a> on writing custom fields and on <a href="https://docs.djangoproject.com/en/dev/ref/forms/validation/" rel="noreferrer">form and field validation</a>.</p>
|
python|django
| 6 |
1,904,711 | 43,610,554 |
How can I remove special lines from a pandas data frame in an easy way?
|
<p>I have a pandas dataframe in Python. I want to remove a line under three conditions. First, columns 1 to 6 and 10 to 15 are 'NA' in the line. Second, columns 1 to 3, 7 to 12 and 16 to 18 are 'NA'. Third, columns 4 to 9 and 13 to 18 are 'NA'. I wrote code to do it, but it didn't work.
The code is as follows:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>data = pd.read_csv('data(2).txt',sep = "\t",index_col = 'tracking_id')
num = len(data) + 1
for i in range(num):
    if (data.iloc[i,[0:5,9:14]] == 'NA') | (data.iloc[i,[0:11,15:17]] == 'NA)'\
        | (data.iloc[i,[3:8,12:17]] == 'NA'):
        data = data.drop(data.index[i], axis = 0)</code></pre>
<p>The data is in the link: <a href="http://pan.baidu.com/s/1mhETl48" rel="nofollow noreferrer">enter link description here</a></p>
|
<p>You can use:</p>
<pre><code>np.random.seed(100)
df = pd.DataFrame(np.random.randint(10, size=(5,18)))
df.iloc[0, np.r_[0:5,9:14]] = np.nan
df.iloc[2, np.r_[0:11,15:17]] = np.nan
df.iloc[3:5, np.r_[3:8,12:17]] = np.nan
print (df)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 \
0 NaN NaN NaN NaN NaN 0.0 4.0 2.0 5.0 NaN NaN NaN NaN NaN 8.0
1 6.0 2.0 4.0 1.0 5.0 3.0 4.0 4.0 3.0 7.0 1.0 1.0 7.0 7.0 0.0
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2.0 5.0 1.0 8.0
3 2.0 8.0 3.0 NaN NaN NaN NaN NaN 3.0 4.0 7.0 6.0 NaN NaN NaN
4 7.0 6.0 6.0 NaN NaN NaN NaN NaN 6.0 6.0 0.0 7.0 NaN NaN NaN
15 16 17
0 4.0 0.0 9
1 2.0 9.0 9
2 NaN NaN 4
3 NaN NaN 5
4 NaN NaN 4
</code></pre>
<p>First check if values are <code>NaN</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isnull.html" rel="nofollow noreferrer"><code>isnull</code></a>, then select by <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.r_.html" rel="nofollow noreferrer"><code>numpy.r_</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a> and compare with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>all</code></a> to check if all values are <code>True</code> per row. Then build the main mask with <code>|</code> (or).</p>
<p>Last filter by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with inverted condition by <code>~</code>:</p>
<pre><code>mask = df.isnull()
m1 = mask.iloc[:, np.r_[0:5,9:14]].all(1)
m2 = mask.iloc[:, np.r_[0:11,15:17]].all(1)
m3 = mask.iloc[:, np.r_[3:8,12:17]].all(1)
m = m1 | m2 | m3
print (m)
0 True
1 False
2 True
3 True
4 True
dtype: bool
df = df[~m]
print (df)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 \
1 6.0 2.0 4.0 1.0 5.0 3.0 4.0 4.0 3.0 7.0 1.0 1.0 7.0 7.0 0.0
15 16 17
1 2.0 9.0 9
</code></pre>
|
python|pandas
| 2 |
1,904,712 | 54,689,063 |
Queue in multiprocessing blocked by threading.Timer
|
<p>I need to transfer data from a subprocess to the main one.
The subprocess is doing a repetitive task using <code>threading.Timer</code>.
Whenever <code>threading.Timer</code> is called, the <code>queue</code> does not work anymore.</p>
<p>The subprocess is acquiring data, while I want to display them in real-time in the main process.</p>
<p>I wrote this snippet to showcase the problem:</p>
<pre><code>import threading
import multiprocessing

class MyClass():
    def __init__(self, q):
        self.q = q
        print("put value in q: ", "start")
        self.q.put("start")
        self.i = 0
        self.update()

    def update(self):
        if self.i < 3:
            print("put value in q: ", self.i)
            self.q.put(self.i)
            self.i += 1
            threading.Timer(0.5, self.update).start()
        else:
            self.stop()

    def stop(self):
        print("put value in q: ", "stop")
        self.q.put("stop")

if __name__ == "__main__":
    q = multiprocessing.Queue()
    process = multiprocessing.Process(target = MyClass, args=(q,))
    process.start()
    process.join()
    for i in range(5):
        print("get value in q: ",q.get(block = True, timeout = 2))
</code></pre>
<p>and I get this only:</p>
<pre><code>put value in q: start
put value in q: 0
put value in q: 1
put value in q: 2
put value in q: stop
get value in q: start
get value in q: 0
</code></pre>
<p>Is there a solution or a workaround?</p>
|
<p>You have <code>process</code>. Its main thread is the <code>MyClass()</code> call. <code>threading.Timer()</code> spawns another thread alongside the main thread, so you have to wait until all additional threads are terminated before you stop <code>process</code>. So to solve the problem, replace <code>threading.Timer(0.5, self.update).start()</code> with (wait for threads):</p>
<pre><code>t = threading.Timer(0.5, self.update)
t.start()
t.join()
</code></pre>
<p>Or replace <code>threading.Timer(0.5, self.update).start()</code> with (no additional threads):</p>
<pre><code>import time  # at the top of the file

time.sleep(.5)
self.update()
</code></pre>
<p>Both solutions should work.</p>
|
python|multithreading|timer|queue|multiprocessing
| 1 |
1,904,713 | 39,068,174 |
Why doesn't QtConsole echo next()?
|
<p>I found this question about iterator behavior in Python: </p>
<p><a href="https://stackoverflow.com/questions/16814984/python-list-iterator-behavior-and-nextiterator">Python list iterator behavior and next(iterator)</a></p>
<p>When I typed in the code: </p>
<pre><code>a = iter(list(range(10)))
for i in a:
    print a
    next(a)
</code></pre>
<p>into the <code>jupyter-qtconsole</code> it returned:</p>
<pre><code>0
2
4
6
8
</code></pre>
<p>exactly as Martijn Pieters said it should when the interpreter doesn't echo the call to <code>next(a)</code>.</p>
<p>However, when I ran the same code again in my Bash interpreter and IDLE, the code printed:</p>
<pre><code>0
1
2
3
4
5
6
7
8
9
</code></pre>
<p>to the console. </p>
<p>I ran the code:</p>
<pre><code>import platform
platform.python_implementation()
</code></pre>
<p>in all three environments and they all said I ran <code>'CPython'</code>.</p>
<p><strong>So why does the QtConsole suppress the <code>next(a)</code> call when IDLE and Bash don't?</strong></p>
<p>If it helps, I'm running Python 2.7.9 on Mac OSX and using the Anaconda distribution. </p>
|
<p>This is just a choice the developers of <code>IPython</code> (on which the <code>QtConsole</code> is based) made regarding what should be echoed back to the user. </p>
<p>Specifically, in the <a href="https://github.com/ipython/ipython/blob/master/IPython/core/interactiveshell.py#L198" rel="nofollow"><code>InteractiveShell</code></a> class that is used, function <a href="https://github.com/ipython/ipython/blob/master/IPython/core/interactiveshell.py#L2770" rel="nofollow"><code>run_ast_nodes</code></a> is, by default, defined with an <code>interactivity='last_expr'</code>. The documentation on this attribute states:</p>
<pre><code>interactivity : str
'all', 'last', 'last_expr' or 'none', specifying which nodes should be
run interactively (displaying output from expressions). 'last_expr'
will run the last node interactively only if it is an expression (i.e.
expressions in loops or other blocks are not displayed. Other values
for this parameter will raise a ValueError.
</code></pre>
<p>As you can see: <strong><em>expressions in loops or other blocks are not displayed</em></strong>.</p>
<p>You can change this in the config files for <code>IPython</code> and get it to work like your <code>repl</code> if you really need to. Point is, it was just a preference the designers made.</p>
|
python|interpreter|jupyter|python-internals
| 3 |
1,904,714 | 47,750,068 |
Find the amount of ID that has same price in entire table
|
<p>I have a table looked like below:</p>
<pre><code>id price
1 10
1 10
1 10
2 30
2 33
2 30
3 15
3 15
3 15
</code></pre>
<p>How can I determine the number of ids that have the same price (such that their price stays constant) throughout the table? In the table above, the answer would be 2, for id=1 and id=3.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.SeriesGroupBy.nunique.html" rel="nofollow noreferrer"><code>SeriesGroupBy.nunique</code></a> first and then count the ids whose value is <code>1</code>:</p>
<pre><code>a = df.groupby('id')['price'].nunique()
print (a)
id
1 1
2 2
3 1
Name: price, dtype: int64
print ((a == 1).sum())
2
#if need ids
print (a.index[a == 1].tolist())
[1, 3]
</code></pre>
|
python|pandas
| 2 |
1,904,715 | 26,270,355 |
Error setting environment variables
|
<p>I am working on a Django application. I want to put the SECRET_KEY in environment variables.
I added this to the .bashrc file:</p>
<pre><code>export SECRET_KEY=sdfsjhsuresfsdf
</code></pre>
<p>Then I did <code>source .bashrc</code></p>
<p>I am able to access the environment variable from python shell using:</p>
<pre><code>import os
os.environ['SECRET_KEY']
</code></pre>
<p>But, it shows a <code>keyError</code> when I try to access this from <code>settings.py</code> file. What am I doing wrong?</p>
|
<p>Try:</p>
<pre><code>export SECRET_KEY=sdfsjhsuresfsdf
</code></pre>
<p>Then re-source. And start a new Python instance. It should then be visible.</p>
<p>Bash doesn't always auto-export its variables.</p>
<p>In general, OS environment variables are a hard place to put configuration information reliably, because of such issues. It's hard to track down why variables are seen or not seen. Reading configuration information out of configuration files (whether .ini, .json, or some other format) is a bit more robust. But, environment variables are very commonly used, so, when in Rome...</p>
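<p>For example, in settings.py you could fail fast with a clear message if the variable is missing (a small sketch):</p>
<pre><code>import os

try:
    SECRET_KEY = os.environ['SECRET_KEY']
except KeyError:
    raise RuntimeError('SECRET_KEY environment variable is not set')
</code></pre>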
|
python|linux|django|environment-variables
| 0 |
1,904,716 | 28,068,275 |
POST HTTP in Python: Reserved XML Name. line: 2, char: 40
|
<p>I'm trying to integrate German payments provider SOFORT Überweisung into my own, Python-coded online shop. They have no Python libs available and their support also can't answer what is going wrong.</p>
<p>They require XML to be POSTed in the first step like so:</p>
<pre><code><?xml version="1.0" encoding="UTF-8" ?>
<multipay>
<project_id>WHITEOUT</project_id>
<amount>24.51</amount>
<currency_code>EUR</currency_code>
<reasons>
<reason>Testueberweisung</reason>
<reason>-TRANSACTION-</reason>
</reasons>
<success_url>http://WHITEOUT?stage=paymentAuthorizationSuccessful</success_url>
<success_link_redirect>1</success_link_redirect>
<abort_url>http://WHITEOUT?stage=paymentCancelled</abort_url>
<su />
</multipay>
</code></pre>
<p>My code to achieve this is the following:</p>
<pre><code>response = PostHTTP(url = self.endpoint, data = xml, authentication = '%s:%s' % (self.clientNumber, self.APIkey), contentType = 'application/xml; charset=UTF-8')
</code></pre>
<p>using the following function:</p>
<pre><code>def PostHTTP(url, values = [], data = None, authentication = None, contentType = None):
    u"""\
    POST HTTP responses from the net. Values are dictionary {argument: value}
    Authentication as "username:password"
    """
    import urllib, urllib2, base64
    if values:
        data = urllib.urlencode(values)
    headers = {}
    if contentType:
        headers["Content-Type"] = contentType
        headers["Accept"] = contentType
    if authentication:
        base64string = base64.encodestring(authentication)
        headers["Authorization"] = "Basic %s" % base64string
    request = urllib2.Request(url, data, headers)
    response = urllib2.urlopen(request)
    return response.read()
</code></pre>
<p>Their server keeps on responding with </p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<errors>
<error>
<code>7000</code>
<message>Reserved XML Name. line: 2, char: 40</message>
</error>
</errors>
</code></pre>
<p>and I don't get it.
Any ideas?</p>
<p><strong>UPDATE:</strong></p>
<p>Their support answered again.
It seems that the error was indeed on the server side, for a change, because when I leave out the first line <code><?xml ?></code> the response is as expected.</p>
|
<p>Seems like the xml part is not in the first line of the Request.</p>
<pre><code>[empty-line]
<?xml version="1.0" encoding="UTF-8" ?>
<multipay>
<project_id>WHITEOUT</project_id>
<amount>24.51</amount>
<currency_code>EUR</currency_code>
<reasons>
<reason>Testueberweisung</reason>
<reason>-TRANSACTION-</reason>
</reasons>
<success_url>http://WHITEOUT?stage=paymentAuthorizationSuccessful</success_url>
<success_link_redirect>1</success_link_redirect>
<abort_url>http://WHITEOUT?stage=paymentCancelled</abort_url>
<su />
</multipay>
</code></pre>
<p>Try to remove leading (and trailing) whitespace.
That should solve the problem.</p>
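<p>For example, strip the payload before handing it to your helper:</p>
<pre><code>xml = xml.strip()  # drop the blank line before the XML declaration
</code></pre>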
|
python|xml|http|post
| 1 |
1,904,717 | 32,615,796 |
Creating a S3 bucket in Amazon AWS with boto in python fails ...
|
<pre><code>import boto
import boto.s3
import boto.s3.connection

conn = boto.s3.connect_to_region(
    'us-west-2',
    aws_access_key_id='MY_KEY',
    aws_secret_access_key='MY_ACCESS_KEY'
)
conn.create_bucket('bucket_in_west')
</code></pre>
<p>And I get this error: </p>
<pre><code>Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/user/.virtualenvs/west-tests/lib/python2.7/site-packages/boto/s3/connection.py", line 621, in create_bucket
response.status, response.reason, body)
S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>IllegalLocationConstraintException</Code><Message>The unspecified location constraint is incompatible for the region specific endpoint this request was sent to.</Message><RequestId>0C0F09FBC87
</code></pre>
<p>Does anybody have an answer for how to create an S3 bucket in a specific region with boto?</p>
|
<p>If you are talking to the <code>us-west-2</code> endpoint and you want to create a bucket in that region, you have to specify that when you call <code>create_bucket</code>:</p>
<pre><code>conn.create_bucket('bucket-in-west', location='us-west-2')
</code></pre>
|
python|amazon-web-services|amazon-s3|amazon|boto
| 2 |
1,904,718 | 27,348,413 |
Flask query string second parameter returns null
|
<p>I'm trying to get 2 values from a query string in Flask, but for some reason that's incomprehensible to me, Flask's request object only manages to get hold of the first value.</p>
<p>Here's an example:</p>
<pre><code>@app.route('/whatishappening')
def what():
    please = request.args.get('please')
    work = request.args.get('work')
    return jsonify({'strange': (please, work)})
</code></pre>
<p>A curl command: </p>
<pre><code>curl -i http://localhost:5000/whatishappening?please=god&work=already
</code></pre>
<p><code>request.args.get('work')</code> returns null:</p>
<pre><code>{
"strange": [
"god",
null
]
}
</code></pre>
<p>Thank you very much for your time :)</p>
|
<p>An unquoted <code>&</code> tells the shell to run the preceding command in the background, so everything after it never reaches curl. If you wrap the URL in quotes, you should see the expected output.</p>
<pre><code>$ curl -i "http://0.0.0.0:5000/whatishappening?please=god&work=already"
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 48
Server: Werkzeug/0.9.6 Python/3.4.2
Date: Sun, 07 Dec 2014 23:04:18 GMT
{
"strange": [
"god",
"already"
]
}
</code></pre>
|
python|flask|query-string
| 0 |
1,904,719 | 42,080,250 |
How to specify Firefox command line options using Selenium WebDriver in Python?
|
<p>I need to pass <code>--no-remote</code> to the Firefox started through Selenium in Python. Is there a way to specify command line arguments?</p>
|
<p>You can provide arguments using <code>FirefoxBinary.add_command_line_options()</code>:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary

binary = FirefoxBinary('/path/to/firefox')
binary.add_command_line_options('--no-remote')
driver = webdriver.Firefox(firefox_binary=binary)
</code></pre>
|
python|selenium|firefox
| 4 |
1,904,720 | 47,299,652 |
How can I one-hot encode data with numpy?
|
<p>Suppose I have a dataset</p>
<pre><code>sex age hours
female 23 900
male 19 304
female 42 222
...
</code></pre>
<p>If I use np.loadtxt or np.genfromtxt I can use a converter as a way to assign values to each of the categorical data in the sex column. Is there a way to instead create a one-hot column during the loading process? If not, where should I look to accomplish this?</p>
|
<p>With pandas, you can pass the category dtype (which loads in the column cheaply):</p>
<pre><code>In [11]: df = pd.read_csv("my_file.csv", dtype={"sex": "category"})
In [12]: df
Out[12]:
sex age hours
0 female 23 900
1 male 19 304
2 female 42 222
In [13]: df.dtypes
Out[13]:
sex category
age int64
hours int64
dtype: object
</code></pre>
<hr>
<p>Once you have a category you can use <code>get_dummies</code>:</p>
<pre><code>In [21]: pd.get_dummies(df.sex)
Out[21]:
female male
0 1 0
1 0 1
2 1 0
In [22]: pd.get_dummies(df.sex.cat.codes)
Out[22]:
0 1
0 1 0
1 0 1
2 1 0
</code></pre>
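<p>If you would rather stay in plain numpy after loading, a minimal sketch (independent of the loader) is:</p>
<pre><code>import numpy as np

sex = np.array(['female', 'male', 'female'])
# unique categories, plus each row's index into them
categories, inverse = np.unique(sex, return_inverse=True)
one_hot = np.eye(len(categories), dtype=int)[inverse]
# categories -> ['female' 'male']
# one_hot   -> [[1 0], [0 1], [1 0]]
</code></pre>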
|
python|arrays|csv|numpy|one-hot-encoding
| 3 |
1,904,721 | 27,525,846 |
understand functions that operate on whole array in groupby aggregation
|
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({
    'clients': pd.Series(['A', 'A', 'A', 'B', 'B']),
    'odd1': pd.Series([1, 1, 2, 1, 2]),
    'odd2': pd.Series([6, 7, 8, 9, 10])})

grpd = df.groupby(['clients', 'odd1']).agg({
    'odd2': lambda x: x/float(x.sum())
})
print grpd
</code></pre>
<p>The desired result is:</p>
<pre><code>A 1 0.619047619
2 0.380952381
B 1 0.473684211
2 0.526316
</code></pre>
<p>I have browsed <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow">around</a> but I still don't understand how lambdas that operate on the whole array, e.g. <code>x.sum()</code>, work. Furthermore, I still miss the point of what <code>x</code> is in <code>x.sum()</code> with respect to the grouped columns.</p>
|
<p>Inside <code>agg</code>, <code>x</code> is the Series of <code>odd2</code> values belonging to one <code>(clients, odd1)</code> group; since <code>x/x.sum()</code> returns a Series rather than a single number, it isn't a true aggregation. You can do instead:</p>
<pre><code>>>> df.groupby(['clients', 'odd1'])['odd2'].sum() / df.groupby('clients')['odd2'].sum()
clients odd1
A 1 0.619
2 0.381
B 1 0.474
2 0.526
Name: odd2, dtype: float64
</code></pre>
<p>or alternatively, use <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#transformation" rel="nofollow"><code>.transform</code></a> to obtain values based on <code>clients</code> grouping and then sum for each <code>clients</code> and <code>odd1</code> grouping:</p>
<pre><code>>>> df['val'] = df['odd2'] / df.groupby('clients')['odd2'].transform('sum')
>>> df
clients odd1 odd2 val
0 A 1 6 0.286
1 A 1 7 0.333
2 A 2 8 0.381
3 B 1 9 0.474
4 B 2 10 0.526
>>> df.groupby(['clients', 'odd1'])['val'].sum()
clients odd1
A 1 0.619
2 0.381
B 1 0.474
2 0.526
Name: val, dtype: float64
</code></pre>
|
arrays|numpy|pandas|group-by|aggregate
| 3 |
1,904,722 | 27,596,357 |
Error in changing fastq header and written back with BioPython
|
<p>I am trying to change fastq headers with postfixes /1 and /2 and write them back as a new file. However, I got this error:</p>
<pre><code>No suitable quality scores found in letter_annotations of SeqRecord
</code></pre>
<p>Is there any way to solve this problem? Do I need to modify the quality score information to match the changed fastq header?</p>
<pre><code>import sys
from Bio.Seq import Seq
from Bio import SeqIO
from Bio.SeqRecord import SeqRecord

file = sys.argv[1]
final_records = []

for seq_record in SeqIO.parse(file, "fastq"):
    print seq_record.format("fastq")
    # read header
    header = seq_record.id
    # add /1 at the end
    header = "{0}/1".format(header)
    # print(repr(seq_record.seq))
    record = SeqRecord(seq_record.seq, id=header, description=seq_record.description)
    final_records.append(record)

SeqIO.write(final_records, "my_example.fastq", "fastq")
</code></pre>
|
<p>You're getting the error because your new sequences don't have quality scores. You could transfer the quality scores from the input sequences:</p>
<pre><code>record.letter_annotations["phred_quality"]=seq_record.letter_annotations["phred_quality"]
</code></pre>
<p>It's probably easier to just modify the ids of the original sequences and write them to the output file though:</p>
<pre><code>seq_record.id = header
final_records.append(seq_record)
</code></pre>
|
python|bioinformatics|biopython|fastq
| 0 |
1,904,723 | 51,391,545 |
django url without pk
|
<p>I have been trying for too long now to get my urls working with my simple models and view. There is something I am definitely not understanding.</p>
<p>I wish to create a url similar to:
<code>.../department/team</code>,
where I do not use the PK of the department object but the name instead.</p>
<p>My model looks like:</p>
<pre><code>class Department(models.Model):
    name = models.CharField(max_length=50, blank=True, null=True)

    def __str__(self):
        return 'Department: ' + self.name

class Hold(models.Model):
    name = models.CharField(max_length=50, blank=True, null=True)
    department = models.ForeignKey(Department, on_delete=models.CASCADE)
</code></pre>
<p>my view looks like (<strong>UPDATED</strong>):</p>
<pre><code>class IndexView(generic.ListView):
    template_name = 'index.html'
    context_object_name = 'departments_list'

    def get_queryset(self):
        return Department.objects.all()

class DepartmentView(generic.DetailView):
    model = Department
    template_name = "depdetail.html"
    slug_field = "name"
    slug_url_kwarg = "name"
</code></pre>
<p>my url looks the following: <strong>UPDATED</strong></p>
<pre><code>urlpatterns = [
    path('', views.IndexView.as_view(), name='index'),
    path('<name>', views.DepartmentView.as_view(), name='depdetail')
]
</code></pre>
<p>and finally my html:</p>
<pre><code><h1> Hello there {{ object.name }} student</h1> </br>
<b> Choose your team:<b> </br>
</code></pre>
<p>However, I keep getting <code>page not found</code> or <code>must be slug or pk</code>.
I hope someone can help me out so I can wrap my head around this.</p>
<p><strong>UPDATED</strong>
It works now :) Thank you for the replies.</p>
|
<p>By default Django will look for a <code>pk</code> or a <code>slug</code> field when you use a <code>DetailView</code>. You must override <a href="https://docs.djangoproject.com/en/2.0/ref/class-based-views/mixins-single-object/#django.views.generic.detail.SingleObjectMixin.get_object" rel="noreferrer"><code>get_object()</code></a> method to change this behaviour:</p>
<blockquote>
<p><code>get_object()</code> looks for a <code>pk_url_kwarg</code> argument in the arguments to the view; if this argument is found, this method performs a primary-key based lookup using that value. If this argument is not found, it looks for a <code>slug_url_kwarg</code> argument, and performs a slug lookup using the <code>slug_field</code>.</p>
</blockquote>
<p>That being said, your approach has other problems. It is always better to use a slug instead of a name for other reasons. For example, <code>name</code> is not guaranteed to be unique and also it may have characters which are not URL safe. See <a href="https://stackoverflow.com/questions/427102/what-is-a-slug-in-django">this question</a> for a detailed discussion on how to use slug fields.</p>
|
python|django|url
| 5 |
1,904,724 | 64,317,141 |
How to "print a UTF8 string"
|
<p>I have a string:</p>
<pre><code>"123456789012"
</code></pre>
<p>Is it possible to print it in something like this?</p>
<pre><code>print("\u1234\u5678\u9012")
</code></pre>
<p>using a function? ex. <code>print_utf8(string)</code></p>
|
<pre><code># Split the string into chunks of length 4
In [1]: codepoints = ["1234", "5678", "9012"]
# Convert them into the `\u` format
In [2]: r'\u' + r'\u'.join(codepoints)
Out[2]: '\\u1234\\u5678\\u9012'
# Decode
In [3]: _.encode().decode('unicode-escape')
Out[3]: 'ሴ噸递'
</code></pre>
<p>Note that in Python 3, strings are already in Unicode. That's why you need to <code>.encode()</code> the string with Unicode escapes and then <code>.decode()</code> it. See <a href="https://stackoverflow.com/questions/48908131/decodeunicode-escape-in-python-3-a-string">decode(unicode_escape) in python 3 a string</a></p>
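<p>Wrapped up as the <code>print_utf8</code> helper the question asks for (a sketch assuming the input length is a multiple of 4):</p>
<pre><code>def print_utf8(s):
    # split into 4-digit chunks: "123456789012" -> ["1234", "5678", "9012"]
    codepoints = [s[i:i+4] for i in range(0, len(s), 4)]
    escaped = ''.join(r'\u' + cp for cp in codepoints)
    print(escaped.encode().decode('unicode-escape'))

print_utf8("123456789012")  # ሴ噸递
</code></pre>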
|
python|unicode
| 0 |
1,904,725 | 69,999,737 |
Giving a data frame a title and styling columns with Pandas Python
|
<p>I want to give the <code>dict_val</code> dataframe a centered <code>title</code>. I am also trying to separate each group of 3 digits with a comma in the <code>Numbers</code> column. How would I be able to give a dataframe a title, style the column as such, and get the expected output below?</p>
<pre><code>import pandas as pd
import numpy as np
title = 'Number and Numbers 2 Comparison'
numbers = np.array([123242737.4923,679754672.3849])
numbers2 = np.array([123523,467895])
dict_val = pd.DataFrame({'Numbers':numbers.round(2),'Numbers 2': numbers2})
display(dict_val)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/1leW5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1leW5.png" alt="enter image description here" /></a></p>
<p>Expected output:</p>
<pre><code> Number and Numbers 2 Comparison
Numbers Numbers 2
0 123,242,737.49 123523
1 679,754,672.38 467895
</code></pre>
<p><a href="https://i.stack.imgur.com/VM4xK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VM4xK.png" alt="enter image description here" /></a></p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.io.formats.style.Styler.format.html" rel="nofollow noreferrer"><code>Styler.format</code></a>:</p>
<pre><code>dict_val.style.format(formatter={'Numbers': '{:,.2f}'})
</code></pre>
<p>Create <code>MultiIndex</code> and set styles for alignment to center:</p>
<pre><code>text = 'Number and Numbers 2 Comparison'
dict_val.columns = pd.MultiIndex.from_product([[text], dict_val.columns])
print (dict_val)
Number and Numbers 2 Comparison
Numbers Numbers 2
0 1.232427e+08 123523
1 6.797547e+08 467895
css = [ {'selector': 'th.col_heading.level0', 'props': 'text-align: center;'}]
dict_val.style.set_table_styles(css, overwrite=False).format(formatter={(text, 'Numbers'): '{:,.2f}'})
</code></pre>
|
python|pandas|database|numpy|format
| 2 |
1,904,726 | 69,973,819 |
analyse the result of accuracy in confusion matrix
|
<p>I am new to ML and I am doing some exercises to understand the algorithms and their results. I am having trouble interpreting the accuracy result in this confusion matrix. I know my question is very simple, but I need to know how exactly to interpret this chart of the confusion matrix.
ps: dataset heart failure - classification.
Thank you in advance for any explanation.
<img src="https://i.stack.imgur.com/8SszU.png" alt="enter image description here" /></p>
|
<p>3 were predicted as 'no Heart-Fail', but the true class is 'Heart fail', so it is an <strong>incorrect</strong> classification</p>
<p>1 was predicted as 'Heart fail', but the true class is 'no Heart-Fail', so it is an <strong>incorrect</strong> classification</p>
<p>11 were predicted as 'Heart fail', and the true class is 'Heart fail', so it is a <strong>correct</strong> classification</p>
<p>45 were predicted as 'no Heart-Fail', and the true class is 'no Heart-Fail', so it is a <strong>correct</strong> classification</p>
<p>So, the total number of incorrect classifications is: 3+1=4<br />
and the total number of correct classifications is: 11+45=56.</p>
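<p>Accuracy is the share of correct classifications: 56 / (56 + 4) ≈ 0.93.</p>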
|
python
| 0 |
1,904,727 | 73,043,183 |
Python multiprocessing library thinks multilettered args are multiple args
|
<p>I've been trying to program this piece of code that analyzes stocks, and whenever I try to pass a stock symbol such as "GOOG" through multiprocessing, it takes each letter of "GOOG" as a different argument.</p>
<pre><code>import multiprocessing as mp

def main():
    for value in stocks:
        p = mp.Process(target=nnfuncs.compare, args=value["symbol"])
        p.start()
        nnstruct.processes.append(p)
    for process in nnstruct.processes:
        process.join()

if __name__ == '__main__':
    main()
</code></pre>
<p>The "stocks" var contains "AWP" "ACP" "JEQ" "ASGI" "AOD".</p>
<p>If I were to run that, it would tell me "TypeError: compare() takes 1 positional argument but 4 were given", even though the symbol is a single argument; multiprocessing takes each letter of the symbol as a different argument.</p>
|
<p>The <code>args</code> parameter in <code>multiprocessing.Process</code> are interpreted as a tuple (see the <a href="https://docs.python.org/3/library/multiprocessing.html#the-process-class" rel="nofollow noreferrer">documentation</a>), so your text arguments are being interpreted as <code>tuple("AWP") == ("A", "W", "P")</code>. You probably want to pass your argument as <code>args=(value["symbol"],)</code>.</p>
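<p>So the fix is a one-element tuple (note the trailing comma):</p>
<pre><code>p = mp.Process(target=nnfuncs.compare, args=(value["symbol"],))
</code></pre>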
|
python|multiprocessing
| 2 |
1,904,728 | 72,876,700 |
Users' trip time over a particular period of time
|
<p>The <a href="https://www.microsoft.com/en-us/download/details.aspx?id=52367&from=https%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fdownloads%2Fb16d359d-d164-469e-9fd4-daa38f2b2e13%2F" rel="nofollow noreferrer">Geolife</a> dataset is a collection of GPS trajectories of users logged as they move. Thanks to <a href="https://scholar.google.com/citations?hl=en&user=aVgx_iwAAAAJ" rel="nofollow noreferrer">Sina Dabiri</a> for providing a repository of the preprocessed data. I work with his preprocessed data and created a dataframe of GPS logs for the 69 users available.</p>
<p>This post contains a very small extract of the data for 3 users to illustrate my question.</p>
<pre><code>import pandas as pd

data = {'user': [10,10,10,10,10,10,10,10,21,21,21,54,54,54,54,54,54,54,54,54],
        'lat': [39.921683,39.921583,39.92156,39.13622,39.136233,39.136241,39.136246,39.136251,42.171678,42.172055,
                42.172243,39.16008333,39.15823333,39.1569,39.156,39.15403333,39.15346667,39.15273333,39.14811667,39.14753333],
        'lon': [116.472342,116.472315,116.47229,117.218033,117.218046,117.218066,117.218166,117.218186,123.676778,123.677365,
                123.677657,117.1994167,117.2002333,117.2007667,117.2012167,117.202,117.20225,117.20255,117.2043167,117.2045833],
        'date': ['2009-03-21 13:30:35','2009-03-21 13:33:38','2009-03-21 13:34:40','2009-03-21 15:30:12','2009-03-21 15:32:35',
                 '2009-03-21 15:38:36','2009-03-21 15:44:42','2009-03-21 15:48:43','2007-04-30 16:00:20', '2007-04-30 16:05:22',
                 '2007-04-30 16:08:23','2007-04-30 11:47:38','2007-04-30 11:48:07','2007-04-30 11:48:27','2007-04-30 12:04:39',
                 '2007-04-30 12:04:07','2007-04-30 12:04:32','2007-04-30 12:19:41','2007-04-30 12:20:08','2007-04-30 12:20:21']
        }
</code></pre>
<p>And the dataframe:</p>
<pre><code>df = pd.DataFrame(data)
df
user lat lon date
0 10 39.921683 116.472342 2009-03-21 13:30:35
1 10 39.921583 116.472315 2009-03-21 13:33:38
2 10 39.921560 116.472290 2009-03-21 13:34:40
3 10 39.136220 117.218033 2009-03-21 15:30:12
4 10 39.136233 117.218046 2009-03-21 15:32:35
5 10 39.136241 117.218066 2009-03-21 15:38:36
6 10 39.136246 117.218166 2009-03-21 15:44:42
7 10 39.136251 117.218186 2009-03-21 15:48:43
8 21 42.171678 123.676778 2007-04-30 16:00:20
9 21 42.172055 123.677365 2007-04-30 16:05:22
10 21 42.172243 123.677657 2007-04-30 16:08:23
11 54 39.160083 117.199417 2007-04-30 11:47:38
12 54 39.158233 117.200233 2007-04-30 11:48:07
13 54 39.156900 117.200767 2007-04-30 11:48:27
14 54 39.156000 117.201217 2007-04-30 12:04:39
15 54 39.154033 117.202000 2007-04-30 12:04:07
16 54 39.153467 117.202250 2007-04-30 12:04:32
17 54 39.152733 117.202550 2007-04-30 12:19:41
18 54 39.148117 117.204317 2007-04-30 12:20:08
19 54 39.147533 117.204583 2007-04-30 12:20:21
</code></pre>
<p><strong>My Question:</strong></p>
<p>I want calculate for how long users travel in a particular period.</p>
<p>For example.</p>
<ul>
<li>Total time users travelled in <code>March-2009</code>: Only user 10 travelled in this month. On <code>2009-03-21</code> from <code>13:30:35</code>. But then after <code>13:34:40</code> there is a long jump to <code>15:30:12</code>. Since this jumped period is more than 30-minutes, we consider it another trip. So user 10 has 2 trips recorded that month. First for about 5-minutes, second for about 19 minutes. So the duration of users trip for this month is <code>5 + 19 = 24 minutes</code>.</li>
<li>In <code> April 2007</code>, users 21 and 54 recorded trips on the same day. User 21 started at <code>16:00:20</code> for about 8-minutes. User 54 started at <code>11:47:38</code> and after about 1-minute, we see a jump to <code>12:04:39</code>. The time interval is not up to 30-minutes, so we consider it a single trip. For that, 54 covered trip for about 33-minutes. Users trip time in that month is therefore <code>8 + 33 = 41minutes</code>.</li>
<li>Sometimes, I would also want to determine trip time from, say, <code>February 2008</code> to <code>March 2009</code>.</li>
</ul>
<p>How do I perform this sort of analysis?</p>
<p>Any pointers, using the little data provided above, would be appreciated.</p>
|
<p>This code isn't the most efficient; anyway, you can test whether it does what you need:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
duration = (df.groupby(['user', df['date'].dt.month]).
apply(lambda x: (x['date']-x['date'].shift()).dt.seconds).
rename('duration').
to_frame())
res = (duration.mask(duration>1800,0). # gaps longer than 1800 s (30 min) don't count toward a trip
groupby(level=[0,1]).
sum().
truediv(60). # converting seconds to minutes
rename_axis(index={'date':'month'}))
print(res)
'''
duration
user month
10 3 22.60
21 4 8.05
54 4 33.25
</code></pre>
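<p>For the "February 2008 to March 2009" part of the question, a minimal sketch that restricts the rows to a date window before running the same groupby (column names taken from the question; the boundary dates are assumptions):</p>
<pre><code># keep only rows from Feb 2008 through Mar 2009, then reuse the computation above
mask = (df['date'] >= '2008-02-01') & (df['date'] < '2009-04-01')
df_window = df.loc[mask]
# run the same duration / groupby code on df_window instead of df
</code></pre>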
|
python|python-3.x|pandas|dataframe|sklearn-pandas
| 0 |
1,904,729 | 55,599,147 |
AlreadyExistsError while training a network on colab
|
<p>I'm trying to train an LSTMs network on Google Colab. However, this error occurs: </p>
<pre class="lang-py prettyprint-override"><code> AlreadyExistsError: Resource __per_step_116/training_4/Adam/gradients/bidirectional_4/while/ReadVariableOp/Enter_grad/ArithmeticOptimizer/AddOpsRewrite_Add/tmp_var/N10tensorflow19TemporaryVariableOp6TmpVarE
[[{{node training_4/Adam/gradients/bidirectional_4/while/ReadVariableOp/Enter_grad/ArithmeticOptimizer/AddOpsRewrite_Add/tmp_var}}]]
</code></pre>
<p>I don't know where can be the issue. This is the model of the network:</p>
<pre class="lang-py prettyprint-override"><code>sl_model = keras.models.Sequential()
sl_model.add(keras.layers.Embedding(max_index+1, hidden_size, mask_zero=True))
sl_model.add(keras.layers.Bidirectional(keras.layers.LSTM(hidden_size,
activation='tanh', dropout=0.2, recurrent_dropout = 0.2, return_sequences=True)))
sl_model.add(keras.layers.Bidirectional(keras.layers.LSTM(hidden_size, activation='tanh', dropout=0.2, recurrent_dropout = 0.2, return_sequences=False))
)
sl_model.add(keras.layers.Dense(max_length, activation='softsign'))
optimizer = keras.optimizers.Adam()
sl_model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['acc'])
batch_size = 128
epochs = 3
cbk = keras.callbacks.TensorBoard("logging/keras_model")
print("\nStarting training...")
sl_model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size,
shuffle=True, validation_data=(x_dev, y_dev), callbacks=[cbk])
</code></pre>
<p>Thank you so much!</p>
|
<p>You need to restart your runtime -- this happens when you have defined multiple graphs built in a single jupyter (Colaboratory) runtime. </p>
<p>Calling <code>tf.reset_default_graph()</code> may also help, but depending on whether you are using eager exection and how you've defined your sessions this may or may not work. </p>
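<p>A minimal sketch of both resets (assuming the TF 1.x-style API that matches the error in the question; on TF 2.x the first call lives under <code>tf.compat.v1.reset_default_graph()</code>):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import backend as K

tf.reset_default_graph()  # drop the accumulated default graph (TF 1.x)
K.clear_session()         # also clear Keras' global state between runs
</code></pre>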
|
tensorflow|keras|jupyter|google-colaboratory
| 0 |
1,904,730 | 73,198,526 |
Include only blob 'name' property in list_blobs() response? - Azure Python SDK
|
<p>Currently, I am using the <code>list_blobs()</code> function in the Azure Python SDK to list all of the blobs within a container. However, in terms of the metadata/info of the blobs, I only require the names of the blobs.</p>
<p>In my container, there are over 1M blobs, and executing the following to access the name of each blob is not very efficient, since <code>list_blobs()</code> has to retrieve a lot of info on each blob (1M+ total) in its response, and this process takes over 15 minutes to complete:</p>
<pre><code>blobs = container_client.list_blobs()
for blob in blobs:
print(blob.name)
</code></pre>
<p>I am looking to decrease the time it takes to execute this block of code, and I was wondering if there is any way to retrieve all of the blobs in the container using <code>list_blobs()</code>, but only retrieving the 'name' property of the blobs, rather than retrieving info about every single property of each blob in the response.</p>
|
<blockquote>
<p>I am looking to decrease the time it takes to execute this block of
code, and I was wondering if there is any way to retrieve all of the
blobs in the container using list_blobs(), but only retrieving the
'name' property of the blobs, rather than retrieving info about every
single property of each blob in the response.</p>
</blockquote>
<p>It is not possible to retrieve only some of the properties of the blob (like name). <code>list_blobs</code> method is the implementation of <a href="https://docs.microsoft.com/en-us/rest/api/storageservices/list-blobs" rel="nofollow noreferrer"><code>List Blobs</code></a> REST API operation which does not support server-side projection.</p>
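<p>As a mitigation (not a server-side projection; the service still returns full listings), you could parallelize the listing by name prefix so the paging happens concurrently. A sketch, assuming your blob names are roughly evenly distributed over hex characters:</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor

def names_for_prefix(prefix):
    # name_starts_with narrows each listing to one prefix
    return [b.name for b in container_client.list_blobs(name_starts_with=prefix)]

prefixes = "0123456789abcdef"  # assumption about how blob names are distributed
with ThreadPoolExecutor(max_workers=8) as pool:
    all_names = [n for batch in pool.map(names_for_prefix, prefixes) for n in batch]
</code></pre>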
|
python|azure|azure-blob-storage|azure-python-sdk
| 0 |
1,904,731 | 50,125,218 |
Transferring front end data from HTML checkboxes to backend django in forms.py/views.py/urls.py
|
<p>I have a bunch of checkboxes on my HTML page and want to store whether a checkbox was ticked or not in backend django. My current HTML code is:</p>
<pre><code><input type="checkbox" name="activism" value="Yes">Activism & advocacy
</code></pre>
<p>I don't know how to modify my forms.py/urls.py/views.py to store whether a particular checkbox was ticked or not. Thank you very much.</p>
|
<p>Here you are; it's not the best approach, but since you are new to Django it will work. As you keep learning, take the <a href="https://docs.djangoproject.com/en/2.0/" rel="nofollow noreferrer">django documentation</a> as a guide.</p>
<pre><code><form action='url' method='post'> {% csrf_token %}
<input type="checkbox" name="activism" value="Yes">Activism & advocacy
<input type='submit' value="submit" />
</form>
</code></pre>
<p>python view</p>
<pre><code>def viewName(request):
if request.method == 'POST':
# You have access to data inside request.POST
activism = request.POST.get('activism')
if activism:
pass # Activism is checked
</code></pre>
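<p>If you'd rather go through <code>forms.py</code> as the question asks, here is a minimal sketch (the form name is hypothetical):</p>
<pre><code># forms.py
from django import forms

class InterestsForm(forms.Form):  # hypothetical name
    activism = forms.BooleanField(required=False, label="Activism & advocacy")

# views.py
def viewName(request):
    if request.method == 'POST':
        form = InterestsForm(request.POST)
        if form.is_valid():
            activism_checked = form.cleaned_data['activism']  # True if ticked, else False
</code></pre>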
|
python|html|django
| 0 |
1,904,732 | 49,803,366 |
Point opacity relative to depth matplotlib 3D point plot
|
<p>I am plotting a basic scatterplot in 3D using code from another SO post (<a href="https://stackoverflow.com/questions/17411940/matplotlib-scatter-plot-legend">Matplotlib scatter plot legend</a> (top answer)) but want to have the points opacity relative to the 'depth' of the point as with <code>ax.scatter</code> <code>depthshade=True</code>.</p>
<p><a href="https://i.stack.imgur.com/9Y4AC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Y4AC.png" alt="enter image description here"></a></p>
<p>I had to use <code>ax.plot</code> because <code>ax.scatter</code> doesn't appear to work well with legends on 3D plots. </p>
<p>I'm wondering if there is a way to get a similar aesthetic for <code>ax.plot</code>.</p>
<p>Thanks!</p>
|
<p>It looks like you're out of luck on this: <code>plot</code> does not have the <code>depthshade=True</code> feature. The problem is that <code>plot</code> does not let you set a different color (or alpha value) for each point in the way <code>scatter</code> does, which I guess is how depthshade is applied. </p>
<p>A solution is to loop over all points and set colour one by one, together with the <code>mpl_toolkits.mplot3d.art3d.zalpha</code> helper function to give depth.</p>
<pre><code>import mpl_toolkits
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
n = 100
xs = np.random.rand(n)
ys = np.random.rand(n)
zs = np.random.rand(n)
color = [1,0,0,1]
#Get normal to camera
alpha= ax.azim*np.pi/180.
beta= ax.elev*np.pi/180.
cam_normal = np.array([np.cos(alpha)*np.sin(beta),  # renamed so it no longer shadows the point count n
                       np.sin(alpha)*np.cos(beta),
                       np.sin(beta)])
ns = -np.dot(cam_normal, [xs, ys, zs])
cs = mpl_toolkits.mplot3d.art3d.zalpha(color, ns)
for i in range(xs.shape[0]):
ax.plot([xs[i]], [ys[i]], [zs[i]], 'o', color=cs[i])
plt.show()
</code></pre>
<p>One tricky point is that the camera position <code>ax.elev</code> and <code>ax.azim</code> need to be used to work out the normal vector. Also, when you rotate the position of the camera, this will no longer be coloured correctly. To fix this, you could register an update event as follows,</p>
<pre><code>def Update(event):
#Update normal to camera
alpha= ax.azim*np.pi/180.
beta= ax.elev*np.pi/180.
n = np.array([np.cos(alpha)*np.sin(beta),
np.sin(alpha)*np.cos(beta),
np.sin(beta)])
ns = -np.dot(n, [xs, ys, zs])
cs = mpl_toolkits.mplot3d.art3d.zalpha(color, ns)
for i, p in enumerate(points):
p[0].set_alpha(cs[i][3])
fig.canvas.mpl_connect('draw_event', Update)
points = []
for i in range(xs.shape[0]):
points.append(ax.plot([xs[i]], [ys[i]], [zs[i]], 'o', color=cs[i]))
</code></pre>
|
python|matplotlib|plot|3d|opacity
| 3 |
1,904,733 | 66,343,172 |
how to join two table by time
|
<p>A:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>CODE</th>
<th>TIMESTAMP</th>
<th>MODE</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>2020-09-01 23:12:43</td>
<td>Sleep</td>
</tr>
<tr>
<td>B</td>
<td>2020-09-02 22:09:12</td>
<td>Weak</td>
</tr>
</tbody>
</table>
</div>
<p>B:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>CODE</th>
<th>TIMESTAMP</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>2020-08-01 11:12:43</td>
<td>Go</td>
</tr>
<tr>
<td>A</td>
<td>2020-09-01 22:09:12</td>
<td>Stop</td>
</tr>
<tr>
<td>A</td>
<td>2020-09-02 06:12:43</td>
<td>Stop</td>
</tr>
<tr>
<td>A</td>
<td>2020-09-03 11:07:43</td>
<td>Stop</td>
</tr>
<tr>
<td>B</td>
<td>2020-09-03 22:09:12</td>
<td>Go</td>
</tr>
</tbody>
</table>
</div>
<p>final table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>CODE</th>
<th>A_TIMESTAMP</th>
<th>MODE</th>
<th>Action</th>
<th>B_TIMESTAMP</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>2020-09-01 23:12:43</td>
<td>Sleep</td>
<td>Stop</td>
<td>2020-09-02 06:12:43</td>
</tr>
<tr>
<td>B</td>
<td>2020-09-02 22:09:12</td>
<td>Weak</td>
<td>Go</td>
<td>2020-09-03 22:09:12</td>
</tr>
</tbody>
</table>
</div>
<p>What I want is to join table A and table B (key=Code), but each row of A should join only to the first row of B whose timestamp is greater than A's timestamp.</p>
<p>A table has more than 10 million rows</p>
<p>The number of rows in table B is also 1 million.</p>
<p>I can use dask, pyspark, pandas, sql all. How can I get it efficiently?</p>
|
<p>In 90 percent of scenarios, getting the data the way you want from the database engine is the fastest and most efficient way.</p>
<p>If you always have one record in table B with timestamp greater than in Table A then a simple join is the answer :</p>
<pre><code>select * from A
join B
on A.Code = B.Code
and A.TimeStamp < B.TimeStamp
</code></pre>
<p>If not :</p>
<pre><code>select
*
from
A
cross apply (
select TOP 1 *
from B
where A.Code = B.Code
and A.TimeStamp < B.TimeStamp
order by B.TimeStamp
)
</code></pre>
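<p>Note that <code>CROSS APPLY</code> is SQL Server syntax and is not available in MariaDB. Since the question is also tagged pandas, here is a sketch of the same "first later row" join using <code>pandas.merge_asof</code> (assuming dataframes <code>a</code> and <code>b</code> for tables A and B, with the column names shown above):</p>
<pre><code>import pandas as pd

b = b.rename(columns={'TIMESTAMP': 'B_TIMESTAMP'})
result = pd.merge_asof(
    a.sort_values('TIMESTAMP'),          # both inputs must be sorted on the time keys
    b.sort_values('B_TIMESTAMP'),
    left_on='TIMESTAMP', right_on='B_TIMESTAMP',
    by='CODE',                           # match rows within the same CODE
    direction='forward',                 # take the first B row after A's time
    allow_exact_matches=False,           # strictly greater, like A.TimeStamp < B.TimeStamp
)
</code></pre>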
|
sql|pandas|database|join|mariadb
| 1 |
1,904,734 | 64,924,174 |
Unable to extract data using Scrapy
|
<p>I'm trying to extract Pre open stock market data of NSE(India). I'm able to fetch the data from the scrapy shell but when I run it as a file or run it in pycharm as a whole code I'm not getting any output. My code is</p>
<pre><code>class PreopenMarketDataSpider(scrapy.Spider):
name = 'preopen_market_data'
allowed_domains = ['www1.nseindia.com']
start_urls = ['https://www1.nseindia.com/live_market/dynaContent/live_watch/pre_open_market/pre_open_market.htm']
def parse(self, response):
stocks = ['RELIANCE', 'TATASTEEL', 'LT']
for stock in stocks:
stock_url = 'https://www1.nseindia.com/live_market/dynaContent/live_analysis/pre_open/preOpenOrderBook.jsp?param='+str('stock')+'EQN&symbol='+str('stock')
yield Request(stock_url, callback=self.data)
def data(self,response):
p=response.xpath('//*[@class="orderBookFontCBig"]/text()').extract()
yield Request(p,callback=self,meta={'Stock':p})
</code></pre>
<p>Why is it not fetching the data. What am I doing wrong here? Can we do this via formRequest method?</p>
|
<p>In the last line just print p to get the output data.</p>
<pre><code>class PreopenMarketDataSpider(scrapy.Spider):
name = 'preopen_market_data'
allowed_domains = ['www1.nseindia.com']
start_urls = ['https://www1.nseindia.com/live_market/dynaContent/live_watch/pre_open_market/pre_open_market.htm']
def parse(self, response):
stocks = ['RELIANCE', 'TATASTEEL', 'LT']
for stock in stocks:
stock_url = 'https://www1.nseindia.com/live_market/dynaContent/live_analysis/pre_open/preOpenOrderBook.jsp?param='+stock+'EQN&symbol='+stock
yield Request(stock_url, callback=self.data)
def data(self,response):
p=response.xpath('//*[@class="orderBookFontCBig"]/text()').extract()
print(p)
# yield Request(p,callback=self,meta={'Stock':p})
</code></pre>
<p>Printed output:</p>
<pre><code>['1975.00']
['526.00']
</code></pre>
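<p>If you want the prices exported rather than just printed, a sketch that yields items instead (recovering the symbol from the URL is an assumption about its format):</p>
<pre><code>def data(self, response):
    price = response.xpath('//*[@class="orderBookFontCBig"]/text()').get()
    yield {
        'stock': response.url.split('symbol=')[-1],  # assumes the symbol is the last query parameter
        'price': price,
    }
</code></pre>
<p>Then run, for example, <code>scrapy crawl preopen_market_data -o prices.csv</code>.</p>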
|
python|scrapy
| 0 |
1,904,735 | 64,092,629 |
How do I iterate through a list of dictionaries to make sure the I don't add the same name?
|
<p>I have a json file that looks like this:</p>
<pre><code>[{"History": ["sally", "billy", "tommy"], "Calculus": [billy]}]
</code></pre>
<p>I want to traverse the name of the classes so when I add the next class, I make sure that class isn't in the list. And then I want to traverse that class to make sure there isn't a student with the same name added twice.</p>
<p>I made this code:</p>
<pre><code>with open('students.json') as j:
data = json.load(j)
for classRoom in data:
print "entered here4"
#classCheck = data[classRoom]
#this checks if the classroom is already in the dictionary of lists
if(className == classRoom):
print "entered here5"
for student in data[classRoom]:
if(studentName == student):
print "student already in this classroom"
sys.exit()
data[classRoom].append(studentName)
with open ('students.json', 'w') as outfile:
json.dump(data,outfile)
print "student added to class"
sys.exit()
</code></pre>
<p><code>if(className == classRoom):</code> This if statement is skipped when I run the code. And if I change <code>classCheck==data[classRoom]</code> I get an error "list indices must be integers, not dict. What can I change in the for loop so it checks if the classroom name exists or not?</p>
|
<p>Here's what you should be doing,
assuming <code>data = [{"History": ["sally", "billy", "tommy"], "Calculus": ["billy"]}]</code>:</p>
<pre><code>for classRoom, studentList in data[0].items():  # iterates over (key, value) pairs of the dict
    if className == classRoom:  # compare names with ==, not substring membership
        # the classroom already exists ...
        if studentName in studentList:  # membership test against the list of students
            # the student is already in this classroom ...
            pass
</code></pre>
|
python|json|list|dictionary
| 1 |
1,904,736 | 63,879,104 |
compare a string with 1st line in a file
|
<p>For some reason the comparison comes out as false. I used the comparison operator, and the file opens properly, so there seems to be an issue with the comparison itself.</p>
|
<pre><code>with open(filename, 'r') as fh:
for line in fh:
... do stuff with line ...
</code></pre>
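<p>If the goal is to compare a string against the first line only, the usual culprit is the trailing newline that <code>readline()</code> keeps. A minimal sketch (<code>target</code> is whatever string you're matching):</p>
<pre><code>with open(filename, 'r') as fh:
    first_line = fh.readline().rstrip('\n')  # strip the newline before comparing
    if first_line == target:
        print("match")
    else:
        print("no match")
</code></pre>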
|
python|file
| 0 |
1,904,737 | 53,234,959 |
how to deal with string in pandas
|
<p>I'm working on large csv file with almost only strings. I want to do some statisticals test such as define clusters but for that I need to convert my string as int. (I 'm totally new on python, pandas, scikitlearn as well).</p>
<p>so here my code:</p>
<pre><code>#replace str as int
df.WORK_TYPE[df.WORK_TYPE == 'aaa']=1
df.WORK_TYPE[df.WORK_TYPE == 'bbb']=2
df.WORK_TYPE[df.WORK_TYPE == 'ccc']=3
df.WORK_TYPE[df.WORK_TYPE == 'ddd']=4
print(df)
</code></pre>
<p>And here my error message:</p>
<pre><code>C:\Users\ishemf64\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
C:\Users\ishemf64\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
C:\Users\ishemf64\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel_launcher.py:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
This is separate from the ipykernel package so we can avoid doing imports until
C:\Users\ishemf64\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
after removing the cwd from sys.path.
</code></pre>
<p>I don't understand why I have this error and also could you tell me if there is another way and/or mandatory to convert text if I want to do my analysis.</p>
|
<p>That looks like a warning, not an error. Better folks than I have explained it here: <a href="https://www.dataquest.io/blog/settingwithcopywarning/" rel="nofollow noreferrer">https://www.dataquest.io/blog/settingwithcopywarning/</a></p>
<p>Since you seem to have only a few categories, would you consider using <code>get_dummies</code>? It takes your <code>pd.Series</code> with categorical data in it and helps you convert it into dummy variables (1 if present, 0 if not). Check it out here: <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html</a></p>
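<p>For completeness, a warning-free sketch of both routes (column name and values taken from the question; run one option or the other):</p>
<pre><code>import pandas as pd

# Option 1: one vectorized mapping instead of four chained assignments
mapping = {'aaa': 1, 'bbb': 2, 'ccc': 3, 'ddd': 4}
df['WORK_TYPE'] = df['WORK_TYPE'].map(mapping)

# Option 2 (instead of option 1): one-hot encode the original string column
df = pd.get_dummies(df, columns=['WORK_TYPE'])
</code></pre>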
|
python|string|pandas|scikit-learn
| 0 |
1,904,738 | 53,107,596 |
Converting Python array to C# and returning values
|
<p>I am converting a Python script to C# and I'm in need in some help. I just don't have any experience in Python really. These types of arrays are totally new to me.</p>
<p>I'm having trouble with the second to last line, <code>var posVec = dSorted[0][1];</code>, as well as the last line: <code>return posVec;</code>.</p>
<p>What is the actual variable type of <code>var posVec</code>? </p>
<p>Also I'm trying to return <code>posVec</code>, which should be a Vector3d but I'm getting this error:</p>
<blockquote>
<p>Cannot implicitly convert type 'double' to 'Rhino.Geometry.Vector3d'</p>
</blockquote>
<p>What am I doing wrong?
Thank you!</p>
<p>Python:</p>
<pre><code>posVec = dSorted[0][1]
return posVec
</code></pre>
<p>The full Python method:</p>
<pre><code>def getDirection(self):
#find a new vector that is at a 90 degree angle
#define dictionary
d = {}
#create an list of possible 90 degree vectors
arrPts = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0)]
vec = self.vec
vec = rs.VectorUnitize(vec)
#find the distance between the self vec
#position and one of the 4 90 degree vectors
#create a dictionary that matches the distance with the 90 degree vector
for i in range(4):
dist = rs.Distance(vec, arrPts[i])
d[dist] = arrPts[i]
#sort the dictionary. This function converts it to an array
#sort by the distances, "value"/item 0
dSorted = sorted(d.items(), key=lambda value: value[0])
#select the second item in the array which is one of the 90 degree vectors
posVec = dSorted[0][1]
return posVec
</code></pre>
<p>The full C# method as I've rewritten so far:</p>
<pre><code> // find a new vector that is at a 90 degree angle
public Vector3d GetDirection()
{
// define dictionary
Dictionary<double, Vector3d> d = new Dictionary<double, Vector3d>();
Vector3d[] arrPts = new Vector3d[] {
new Vector3d(1, 0, 0),
new Vector3d(-1, 0, 0),
new Vector3d(0, 1, 0),
new Vector3d(0, -1, 0),
new Vector3d(0, 0, 1),
new Vector3d(0, 0, -1) };
_vec = Vec;
_vec.Unitize();
// find the distance between the self vec position and one of the 6 90 degree vectors
// create a dictionary that matches the distance with the 90 degree vector
for (int i = 0; i < arrPts.Length; i++)
{
double dist = Math.Sqrt(
((_vec.X - arrPts[i].X) * (_vec.X - arrPts[i].X)) +
((_vec.Y - arrPts[i].Y) * (_vec.Y - arrPts[i].Y)) +
((_vec.Z - arrPts[i].Z) * (_vec.Z - arrPts[i].Z)));
d.Add(dist, arrPts[i]);
}
Vector3d[] dSorted = d.Values.ToArray();
var posVec = dSorted[0][1];
return posVec;
}
</code></pre>
|
<p>I am late to the party, but anyway if it serves someone in the future...</p>
<p>You are creating an array of <code>Vector3d</code> (<code>dSorted</code>) from the values of your dictionary (<code>d</code>), but then you try to assign <code>dSorted[0][1]</code> to <code>var posVec</code>, which should be a <code>Vector3d</code>. <code>dSorted[0][1]</code> is the notation for a jagged array, whereas you declared <code>dSorted</code> to be a plain array of <code>Vector3d</code>. </p>
<p>So, to access one of its items just use <code>dSorted[0]</code>.</p>
|
c#|python|arrays|grasshopper|rhino3d
| 0 |
1,904,739 | 72,120,044 |
How can i enable GPU in those lines?
|
<p>how can I speed up the calculations for this equation using GPU or cuda as the file contains 30.000 points</p>
<pre><code>points = pd.read_csv('file.dat', sep='\t', usecols=[0, 1])
d = pd.DataFrame(np.zeros((max_id, max_id)))
dis = sch.distance.pdist(points, 'euclidean')
n = 0
for i in range(max_id):
print(i)
for j in range(i + 1, max_id):
d.at[i, j] = dis[n]
d.at[j, i] = d.at[i, j]
n += 1
</code></pre>
<p>EDIT</p>
<p>i tried</p>
<pre><code> points = genfromtxt(path, delimiter='\t', usecols=[0, 1])
points =torch.tensor(points)
d = pd.DataFrame(np.zeros((max_id, max_id)))
dis = torch.cdist(points)
</code></pre>
<p>but got</p>
<pre><code>TypeError: cdist() missing 1 required positional argument: 'x2'
</code></pre>
<p>does that mean I need to read points or the two columns of points separately?</p>
|
<p>NumPy doesn't natively support GPUs, but you can use libraries that work well with <code>numpy</code> and support the GPU. One such option is <code>PyTorch</code>. <a href="https://pytorch.org/docs/stable/generated/torch.cdist.html" rel="nofollow noreferrer"><code>torch.cdist</code></a> would be one function to look at (then you don't need to organize the result with for loops). There is also <a href="https://pytorch.org/docs/stable/generated/torch.nn.functional.pdist.html" rel="nofollow noreferrer"><code>torch.nn.functional.pdist</code></a>, and again no for loop is needed. Once you get the output, you can just <code>reshape</code> it as per your need.</p>
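<p>A minimal sketch with <code>torch.cdist</code>. Note it takes two arguments (which explains the <code>TypeError</code> in your EDIT): for a full pairwise matrix, pass the same tensor twice. This assumes a CUDA device is available:</p>
<pre><code>import torch

pts = torch.tensor(points, dtype=torch.float32, device='cuda')
d = torch.cdist(pts, pts)   # full pairwise Euclidean distance matrix on the GPU
d_cpu = d.cpu().numpy()     # move back to NumPy if needed
</code></pre>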
|
python|numpy|gpu
| 2 |
1,904,740 | 5,256,845 |
Tab-delimited file into dictionary (python)
|
<p>I have a tab separated file </p>
<p>How can I input this file into a dictionary?</p>
|
<p>You can use the <code>csv</code> module with its <a href="http://docs.python.org/library/csv.html#csv.DictReader" rel="nofollow"><code>DictReader</code> class</a>.</p>
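<p>A minimal sketch (the file name is hypothetical):</p>
<pre><code>import csv

with open("data.tsv", newline="") as fh:
    reader = csv.DictReader(fh, delimiter="\t")  # tab-separated
    rows = list(reader)  # each row becomes a dict keyed by the header line
</code></pre>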
|
python|dictionary
| 3 |
1,904,741 | 62,574,053 |
pandas error: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()
|
<p>I have a dataframe.</p>
<p><a href="https://i.stack.imgur.com/F19k0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F19k0.png" alt="enter image description here" /></a></p>
<pre><code>distdf=pd.DataFrame(dist)
for i in range(len(distdf)):
if distdf.loc[i]==distdf.loc[i+1]:
print("true")
else:
print('no')
</code></pre>
<p>but I have a error.</p>
<pre class="lang-none prettyprint-override"><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>How can I fix it?</p>
|
<p>Your code failed because <code>distdf.loc[i] == distdf.loc[i+1]</code> attempts
to compare <strong>whole rows</strong>, i.e. two <em>Series</em> objects.</p>
<p>Some improvement would be if you wrote:
<code>if distdf.loc[i, 'DISTINGUISH']==distdf.loc[i+1, 'DISTINGUISH']:</code>,
i.e. compared <em>DISTINGUISH</em> elements in each row (current and the next).</p>
<p>But this code would also fail on the last turn of the loop, because
when you "stay" on the last row, there is no next row, so then
<em>KeyError</em> exception is raised.</p>
<p>There is a simpler, more <em>pandasonic</em> approach to get a list of your values,
without any loop.</p>
<p>Assume that your DataFrame (<em>distdf</em>) contains:</p>
<pre><code> DISTINGUISH
0 1
1 1
2 2
3 2
4 3
5 3
6 32
7 32
8 33
9 33
10 34
11 34
</code></pre>
<p>As you want to operate on its only column, to keep the code short,
let's save it under <em>col0</em>:</p>
<pre><code>col0 = distdf.iloc[:, 0]
</code></pre>
<p>Then, to get your list of values, without any loop, run:</p>
<pre><code>np.where(col0 == col0.shift(-1), 'true', 'no')
</code></pre>
<p>The result, for the above data, is:</p>
<pre><code>array(['true', 'no', 'true', 'no', 'true', 'no', 'true', 'no', 'true',
'no', 'true', 'no'], dtype='<U4')
</code></pre>
|
python|pandas|dataframe|pandas-loc
| 2 |
1,904,742 | 62,841,936 |
Error when using flask babel in AppEngine
|
<p>I'm working on my first Flask Babel project and hit a snag:</p>
<pre><code>NameError: name '_' is not defined
</code></pre>
<p>I need to translate the texts and menus into different languages, later I will tackle numbers and dates. The pybabel extract and init commands work well and give no errors.</p>
<p>Here are my files:</p>
<p>main.py</p>
<pre><code>import datetime
from flask import Flask, render_template
from flask import session, redirect, url_for, escape, request
from flask_babel import Babel, gettext
from google.cloud import datastore
datastore_client = datastore.Client()
app = Flask(__name__)
app.config.from_pyfile('config.py')
babel = Babel(app)
@babel.localeselector
def get_locale():
# return request.accept_languages.best_match(app.config['LANGUAGES'].keys()
# In the app we'll ask the user what he prefers.
return 'es' # Let's force Spanish for testing purposes
message = _("This site is for development purposes only. Please contact us for more
information.")
footer = _("Test Text #1")
username = "test-user"
@app.route('/')
def root():
    return render_template('main.html', username=username, message=message)
</code></pre>
<p>config.py</p>
<pre><code># add to your app.config or config.py file
LANGUAGES = {
'en': 'English',
'es': 'Español'
}
</code></pre>
<p>Output of messages.pot</p>
<pre><code># Translations template for PROJECT.
# Copyright (C) 2020 ORGANIZATION
# This file is distributed under the same license as the PROJECT project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2020.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2020-07-07 23:09+0200\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.6.0\n"
#: /home/xxx/AppEngine/fpp-dev-01/main.py:25
msgid ""
"This site is for development purposes only. Please contact us for more "
"information."
msgstr ""
#: /home/xxx/AppEngine/fpp-dev-01/main.py:26
msgid "Test Text #1"
msgstr ""
#: /home/xxx/AppEngine/fpp-dev-01/templates/fppbase.html:46
msgid "Settings"
msgstr ""
</code></pre>
<p>messages.po (location: /home/xxx/Appengine/fpp-dev-01/translations/es/LC_MESSAGES)</p>
<pre><code>#: /home/xxx/AppEngine/fpp-dev-01/main.py:25
msgid ""
"This site is for development purposes only. Please contact us for more "
"information."
msgstr "Este sitio es solamente para fines de desarrollo. Por favor contáctenos para
más información"
#: /home/xxx/AppEngine/fpp-dev-01/main.py:26
msgid "Test Text #1"
msgstr ""
#: /home/xxx/AppEngine/fpp-dev-01/templates/fppbase.html:46
msgid "Settings"
msgstr "Ajustes"
</code></pre>
<p>I run the app locally (Linux) with the following command:</p>
<pre><code>python main.py
</code></pre>
<p>Output of the app in the terminal:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 20, in <module>
message = _("This site is for development purposes only. Please contact us for
more information.")
NameError: name '_' is not defined
</code></pre>
<p>So anybody has a hint why my app is not recognizing the '_(' for the translations?</p>
<p>Thanks in advance!</p>
|
<p>You imported <code>gettext</code> but never defined <code>_</code>. Import it under that name; since <code>message</code> is built at import time (outside any request context), use the lazy variant:</p>
<pre><code>from flask_babel import lazy_gettext as _
</code></pre>
|
python|flask|jinja2|google-app-engine-python|flask-babel
| 1 |
1,904,743 | 67,256,977 |
Using Objects Memory Location As Hash Key
|
<p>So I realised today in Python we can use an objects memory address location as a key in dictionary.</p>
<p>How does python prevent collisions in the case when the object that was at that memory location is replaced with a different one?</p>
<p>Are there any other risks with using the memory location as hashing key?</p>
|
<p>I think you're talking about an object's <em>id</em>, not its memory address per se. They're the same thing in CPython, but that's an implementation detail.</p>
<p>If so, then the docs answer your question:</p>
<blockquote>
<p><a href="https://docs.python.org/3/library/functions.html#id" rel="nofollow noreferrer"><code>id(object)</code></a></p>
<p>Return the "identity" of an object. This is an integer which is guaranteed to be unique and constant for this object <strong>during its lifetime. Two objects with non-overlapping lifetimes may have the same <code>id()</code> value.</strong></p>
<p><strong>CPython implementation detail:</strong> This is the address of the object in memory.</p>
</blockquote>
<p><em>(added bold)</em></p>
<p>In other words, it <em>doesn't</em> prevent collisions.</p>
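<p>A quick demonstration of id reuse across non-overlapping lifetimes (whether the address is actually reused is a CPython implementation detail):</p>
<pre><code>a = object()
addr = id(a)
del a              # a's lifetime ends here
b = object()       # the allocator may hand out the same slot
print(id(b) == addr)  # can print True: ids are only unique among *live* objects
</code></pre>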
|
python|python-3.x|dictionary|object|hash
| 4 |
1,904,744 | 60,589,103 |
How to change the space between one checkbox to another second checkbox horizontally?
|
<p>How do I set the space between two checkboxes in PyQt? Currently, I have created two checkboxes and stored them in a QHBoxLayout, but the checkboxes are too far apart in my GUI (image shown below).
As you can see, the checkboxes are placed way too far apart. How do I shorten the space between them? </p>
<p><a href="https://i.stack.imgur.com/5Sw94.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Sw94.png" alt="enter image description here"></a></p>
<p><strong>CODE</strong></p>
<pre><code># Create Checkbox
showBuilding = QtWidgets.QCheckBox("Buildings")
showBuilding.setFont(QtGui.QFont("Arial", 15, QtGui.QFont.Bold))
showRoad = QtWidgets.QCheckBox("Roads")
showRoad.setFont(QtGui.QFont("Arial", 15, QtGui.QFont.Bold))
checkbox_container = QtWidgets.QWidget()
checklay = QtWidgets.QHBoxLayout(checkbox_container)
checklay.addWidget(showBuilding)
checklay.addWidget(showRoad)
</code></pre>
|
<p>Assuming that you want to continue using a QHBoxLayout, a possible solution is to add a stretch on the right side and then set the distance using setSpacing():</p>
<pre><code>checkbox_container = QtWidgets.QWidget()
checklay = QtWidgets.QHBoxLayout(checkbox_container)
checklay.addWidget(showBuilding)
checklay.addWidget(showRoad)
<b>checklay.addStretch()
checklay.setSpacing(20)</b></code></pre>
|
python|pyqt|pyqt5
| 0 |
1,904,745 | 71,345,196 |
Change color of specific bar in matplotlib barplot
|
<p>I want to change the color of a bar in matplotlib's grouped barplot if it meets a certain condition. I'm plotting two bars for each <code>species</code> - one for <code>today</code> and one for <code>avg</code>, where <code>avg</code> contains <code>yerr</code> errorbars that show the 10th and 90th percentile values.</p>
<p>Now I want the <code>avg</code> bar to be green if <code>today</code>'s <code>length</code> value > 10th percentile, and red if <code>today</code>'s <code>length</code> value < 10th percentile.</p>
<p>I tried the solutions in these posts</p>
<ul>
<li><a href="https://stackoverflow.com/questions/3832809/how-to-change-the-color-of-a-single-bar-if-condition-is-true-matplotlib">how to change the color of a single bar if condition is True matplotlib</a></li>
<li><a href="https://stackoverflow.com/questions/59347557/update-single-bar-in-matplotlib">Update Single Bar in Matplotlib</a></li>
</ul>
<p>but the bars are always green.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
z = pd.DataFrame(data={'length': [40,35,34,40,36,39,38,44,40,39,35,46],
'species': ['A','A','A','A','B','B','B','B','C','C','C','C'],
'type': ['today','avg','10pc','90pc','today','avg','10pc','90pc','today','avg','10pc','90pc']
},
)
z['Date'] = pd.to_datetime('2021-09-20')
z.set_index('Date',inplace=True)
z0 = z.loc[(z.type=='today') | (z.type=='avg')] # average length and today's length
z1 = z.loc[(z.type=='10pc') | (z.type=='90pc')] # 10th and 90th percentile
z2 = []
for n in z.species.unique().tolist():
dz = z.loc[(z.species==n) & (z.type=='today'),'length'].values[0] - z.loc[(z.species==n) & (z.type=='10pc'),'length'].values[0]
    if dz>0:
z2.append(1)
else:
z2.append(0)
errors = z1.pivot_table(columns=[z1.index,'species'],index='type',values=['length']).values
avgs = z0.length[z0.type=='avg'].values
bars = np.stack((np.absolute(errors-avgs), np.zeros([2,z1.species.unique().size])), axis=0)
col = ['pink']
for k in z2:
if k==1:
col.append('g') # length within 10% bounds = green
else:
col.append('r') # length outside 10% bounds = red
fig, ax = plt.subplots()
z0.pivot(index='species', columns='type', values='length').plot(kind='bar', yerr=bars, ax=ax, color=col, capsize=0)
ax.set_title(z0.index[0].strftime('%d %b %Y'), fontsize=16)
ax.set_xlabel('species', fontsize=14)
ax.set_ylabel('length (cm)', fontsize=14)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/nyRrp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nyRrp.png" alt="enter image description here" /></a></p>
|
<p>One way is to overwrite the colors after creating the plot. First you need to change the line that initializes <code>col</code> to</p>
<pre><code>col = ['pink']*z['species'].nunique()
</code></pre>
<p>to get the numbers of avg bars, then the same for loop to add g or r depending on your case. Finally, change this</p>
<pre><code>fig, ax = plt.subplots()
z0.pivot(index='species', columns='type', values='length')\
.plot(kind='bar', yerr=bars, ax=ax,
color=['pink','g'], capsize=0) # here use only pink and g
# here overwrite the colors
for p, c in zip(ax.patches, col):
p.set_color(c)
ax.set_title...
</code></pre>
<p>Note that the legend for today is green even if you have a red bar, could be confusing.</p>
<p>Here is the full working example, adding the red entry in the legend thanks to <a href="https://stackoverflow.com/a/56551701/9274732">this answer</a></p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches # import this extra
z = pd.DataFrame(data={'length': [40,35,34,40,36,39,38,44,40,39,35,46],
'species': ['A','A','A','A','B','B','B','B','C','C','C','C'],
'type': ['today','avg','10pc','90pc','today','avg','10pc','90pc','today','avg','10pc','90pc']
},
)
z['Date'] = pd.to_datetime('2021-09-20')
z.set_index('Date',inplace=True)
z0 = z.loc[(z.type=='today') | (z.type=='avg')] # average length and today's length
z1 = z.loc[(z.type=='10pc') | (z.type=='90pc')] # 10th and 90th percentile
z2 = []
for n in z.species.unique().tolist():
dz = z.loc[(z.species==n) & (z.type=='today'),'length'].values[0] - z.loc[(z.species==n) & (z.type=='10pc'),'length'].values[0]
if dz>0:
z2.append(1)
else:
z2.append(0)
errors = z1.pivot_table(columns=[z1.index,'species'],index='type',values=['length']).values
avgs = z0.length[z0.type=='avg'].values
bars = np.stack((np.absolute(errors-avgs), np.zeros([2,z1.species.unique().size])), axis=0)
col = ['pink']*z['species'].nunique()
for k in z2:
if k==1:
col.append('g') # length within 10% bounds = green
else:
col.append('r') # length outside 10% bounds = red
print(col)
# ['pink', 'pink', 'pink', 'g', 'r', 'g']
fig, ax = plt.subplots()
z0.pivot(index='species', columns='type', values='length').plot(kind='bar', yerr=bars, ax=ax, color=['pink','g'], capsize=0)
for p, c in zip(ax.patches, col):
p.set_color(c)
ax.set_title(z0.index[0].strftime('%d %b %Y'), fontsize=16)
ax.set_xlabel('species', fontsize=14)
ax.set_ylabel('length (cm)', fontsize=14)
handles = None
labels = None
if 0 in z2: ## add the entry on the legend only when there is red bar
# where some data has already been plotted to ax
handles, labels = ax.get_legend_handles_labels()
# manually define a new patch
patch = mpatches.Patch(color='r', label='today')
# handles is a list, so append manual patch
handles.append(patch)
# plot the legend
plt.legend(handles=handles)
else:
# plot the legend when there isn't red bar
plt.legend(handles=handles)
plt.show()
</code></pre>
<p>and I get the red bar
<a href="https://i.stack.imgur.com/rmT1Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rmT1Z.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib
| 1 |
1,904,746 | 71,238,939 |
How do i put the chess pieces in a python chessboard grid?
|
<p>I am a beginner working on a college project and I need to make a chess game that works in the Python console (I can't use the chess import). I just completed the board, but now I have no idea how to put the pieces in. Can someone help? (They can be represented by letters, like Pawn = P.)</p>
<p>This is what I did so far:</p>
<pre><code>
def create_grid():
""" Creates the grid return=an array that represents the grid """
tab = []
for i in range(GRID_SIZE - 1):
tab.append([])
for j in range(GRID_SIZE):
tab[i].append('')
return tab
def print_grid(tab):
""" prints the grid """
print( ' 1','2','3','4','5','6','7','8')
for i in range(GRID_SIZE - 1):
print("{} ".format(i+1), end="")
for j in range(GRID_SIZE):
print(" {}".format(tab[i][j]), end="")
if j < GRID_SIZE:
print("|", end="")
if i < GRID_SIZE:
print()
print_grid(create_grid)
</code></pre>
|
<p>Try this and <strong>improve as needed</strong>. I considered @G.C's comment to print unicodes.</p>
<pre><code>GRID_SIZE = 8
def create_grid():
"""Creates the grid return=an array that represents the grid"""
tab = []
for i in range(GRID_SIZE):
tab.append([])
for j in range(GRID_SIZE):
tab[i].append("")
return tab
def print_grid(tab):
"""prints the grid"""
print(" 1", "2", "3", "4", "5", "6", "7", "8")
for i in range(GRID_SIZE):
print("{} ".format(i + 1), end="")
for j in range(GRID_SIZE):
print("\u2654{}".format(tab[i][j]), end="")
if j < GRID_SIZE:
print("|", end="")
if i < GRID_SIZE:
print()
print_grid(create_grid())
</code></pre>
<p>It will print something like this</p>
<pre><code> 1 2 3 4 5 6 7 8
1 ♔|♔|♔|♔|♔|♔|♔|♔|
2 ♔|♔|♔|♔|♔|♔|♔|♔|
3 ♔|♔|♔|♔|♔|♔|♔|♔|
4 ♔|♔|♔|♔|♔|♔|♔|♔|
5 ♔|♔|♔|♔|♔|♔|♔|♔|
6 ♔|♔|♔|♔|♔|♔|♔|♔|
7 ♔|♔|♔|♔|♔|♔|♔|♔|
8 ♔|♔|♔|♔|♔|♔|♔|♔|
</code></pre>
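<p>To actually place the pieces, a sketch that fills the grid with the standard starting position, using letters (uppercase for white, lowercase for black is an assumed convention). For the cells to display, change the hard-coded <code>"\u2654{}"</code> format in <code>print_grid</code> back to <code>"{}"</code>:</p>
<pre><code>def setup_pieces(tab):
    back_rank = ["R", "N", "B", "Q", "K", "B", "N", "R"]  # rook, knight, bishop, queen, king
    tab[0] = [p.lower() for p in back_rank]  # black back rank
    tab[1] = ["p"] * GRID_SIZE               # black pawns
    tab[6] = ["P"] * GRID_SIZE               # white pawns
    tab[7] = back_rank[:]                    # white back rank
    return tab

print_grid(setup_pieces(create_grid()))
</code></pre>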
|
python|arrays|list|function|chess
| 0 |
1,904,747 | 10,901,247 |
Django Filter Error
|
<p>I want to save and filter users' objects in my django app. After adding the code below, the imagefield keeps giving me a validation error, saying:</p>
<pre><code> This field is required.
</code></pre>
<p>It’s pointing at the imagefield, saying I should fill it in. How can I get rid of that error and make the filtering work?</p>
<p>Models</p>
<pre><code> class Fin(models.Model):
user=models.ForeignKey(User)
title=models.CharField(max_length=250)
main_view=models.ImageField(upload_to="photos")
side_view=models.ImageField(upload_to="photos")
address=models.CharField(max_length=200)
city=models.CharField(max_length=200)
state=models.CharField(max_length=200)
guideline=models.TextField(max_length=1000)
def __unicode__(self):
return self.title
def get_absolute_url(self):
return self.title
class FinForm(ModelForm):
class Meta:
model=Fin
fields=('title','main_view','side_view', 'address','city','state','guideline')
exclude=('user')
</code></pre>
<p>Views</p>
<pre><code> def fincrib(request):
extra_data_context={}
#if there's nothing in the field do nothing.
if request. method=="POST":
form =FinForm(request.POST)
if form.is_valid():
data=form.cleaned_data
newfincribs=Fin(
user= request.user,
title=data['title'],
main_view=Fin.objects.latest['main_view'],
side_view=Fin.objects.latest['side_view'],
address=data['address'],
city=data['city'],
state=data['state'],
guideline=data['guideline'])
newfincribs.save()
extra_data_context.update({'FinForm':form})
else:
form = FinForm()
extra_data_context.update({'FinForm':form})
extra_data_context.update({'Fins':Fin.objects.filter(user=request.user)})
plan=Fin.objects.filter(user=request.user)
paginator=Paginator(plan, 5)
try:
page=request.GET.get('page', '1')
except ValueError:
page=1
try:
Fins=paginator.page(page)
except (EmptyPage, InvalidPage):
Fins=paginator.page(paginator.num_pages)
extra_data_context.update({'Fins': Fins})
return render_to_response('post.html',extra_data_context,context_instance=RequestContext(request))
</code></pre>
<p>Template</p>
<pre><code> {% block content %}
<form action="." method="POST">
{% csrf_token %}
<center> {{FinForm.as_p}} </center>
<input type="submit" value="Submit"/>
</form>
{% for Fin in Fins.object_list %}
<tr>
<a href="{% url profiles_edit_profile %}"> {{Fin.user}} </a> </p> </strong>
<p>{{Fin.title}}</p>
<p><img src="{{MEDIA_URL}}/{{Fin.main_view}}"/></p>
<p> <img src="{{MEDIA_URL}}/{{Fin.side_view}}"/></p>
<p> {{Fin.address}} </p>
<p> {{Fin.city}}</p>
<p> {{Fin.state}}</p>
<p> {{Fin.guideline}}</p>
{% endfor %}
<div class="pagination">
<span class="step-links">
{% if Fins.has_previous %}
<a href="?page={{ Fins.previous_page_number }}">previous</a>
{% endif %}
<span class="current">
Page {{ Fins.number }} of {{ Fins.paginator.num_pages }}
</span>
{% if Fins.has_next %}
<a href="?page={{ Fins.next_page_number }}">next</a>
{% endif %}
</span>
</div>
{% endblock %}
</code></pre>
|
<p>It's because by default all model fields are required, it means if you want to create and save new model instance in the database, you should fill all the mandatory fields. Maybe </p>
<pre><code>main_view=Fin.objects.latest['main_view'],
side_view=Fin.objects.latest['side_view'],
</code></pre>
<p>is giving you the error, because there is no data. </p>
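<p>Also, since the form contains <code>ImageField</code>s, uploads only reach validation if the template and view handle files. A sketch using the names from the question:</p>
<pre><code># template: the form tag needs the multipart encoding
# <form action="." method="POST" enctype="multipart/form-data">

# view: bind the uploaded files as well, then let the ModelForm build the instance
form = FinForm(request.POST, request.FILES)
if form.is_valid():
    newfincribs = form.save(commit=False)
    newfincribs.user = request.user
    newfincribs.save()
</code></pre>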
|
python|django
| 0 |
1,904,748 | 61,049,188 |
How do I get this information out of this website?
|
<p>I found this link:
<a href="https://search.roblox.com/catalog/json?Category=2&Subcategory=2&SortType=4&Direction=2" rel="nofollow noreferrer">https://search.roblox.com/catalog/json?Category=2&Subcategory=2&SortType=4&Direction=2</a></p>
<p>The original is:
<a href="https://www.roblox.com/catalog/?Category=2&Subcategory=2&SortType=4" rel="nofollow noreferrer">https://www.roblox.com/catalog/?Category=2&Subcategory=2&SortType=4</a></p>
<p>I am trying to scrape the prices of all the items in the whole catalog with Python, but I can't seem to locate the prices of the items. The URL does not change whenever I go to the next page. I have tried inspecting the website itself but I can't manage to find anything.</p>
<p>The first URL is somehow scrapeable and I found it randomly on a forum. How did the user get all this text data in there?</p>
<p>Note: I know the website is for children, but I earn money by selling limiteds on there. No harsh judgement please. :)</p>
|
<p>You can scrape all the item information without <code>BeautifulSoup</code> or <code>Selenium</code> - you just need <code>requests</code>. That being said, it's not super straight-forward, so I'll try to break it down:</p>
<p>When you visit a URL, your browser makes many requests to external resources. These resources are hosted on a server (or, nowadays, on several different servers), and they make up all the files/data that your browser needs to properly render the webpage. Just to list a few, these resources can be images, icons, scripts, HTML files, CSS files, fonts, audio, etc. Just for reference, loading <code>www.google.com</code> in my browser makes 36 requests in total to various resources.</p>
<p>The very first resource you make a request to will always be the actual webpage itself, so an HTML-like file. The browser then figures out which other resources it needs to make requests to by looking at that HTML.</p>
<p>For example's sake, let's say the webpage contains a table containing data we want to scrape. The very first thing we should ask ourselves is "How did that table get on that page?". What I mean is, there are different ways in which a webpage is populated with elements/html tags. Here's one such way:</p>
<ul>
<li>The server receives a request from our browser for the <code>page.html</code>
resource</li>
<li>That resource contains a table, and that table needs data, so the
server communicates with a database to retrieve the data for the
table</li>
<li>The server takes that table-data and bakes it into the HTML</li>
<li>Finally, the server serves that HTML file to you</li>
<li>What you receive is an HTML with baked in table-data. There is no way
that you can communicate with the previously mentioned database -
this is fine and desirable</li>
<li>Your browser renders the HTML</li>
</ul>
<p>When scraping a page like this, using <code>BeautifulSoup</code> is standard procedure. You know that the data you're looking for is baked into the HTML, so <code>BeautifulSoup</code> will be able to see it.</p>
<p>Here's another way in which webpages can be populated with elements:</p>
<ul>
<li>The server receives a request from our browser for the <code>page.html</code>
resource</li>
<li>That resource requires another resource - a script, whose job it is
to populate the table with data at a later point in time</li>
<li>The server serves that HTML file to you (it contains no actual
table-data)</li>
</ul>
<p>When I say "a later point in time", that time interval is negligible and practically unnoticeable for actual human beings using actual browsers to view pages. However, the server only served us a "bare-bones" HTML. It's just an empty template, and it's relying on a script to populate its table. That script makes a request to a web API, and the web API replies with the actual table-data. All of this takes a finite amount of time, and it can only start once the script resource is loaded to begin with.</p>
<p>When scraping a page like this, you cannot use <code>BeautifulSoup</code>, because it will only see the "bare-bones" template HTML. This is typically where you would use <code>Selenium</code> to simulate a real browsing session.</p>
<p>To get back to your roblox page, this page is the second type.</p>
<p>The approach I'm suggesting (which is my favorite, and in my opinion, should be the approach you always try first), simply involves figuring out what web API potential scripts are making requests to, and then imitating a request to get the data you want. The reason this is my favorite approach is because these web APIs often serve JSON, which is trivial to parse. It's super clean because you only need one third-party module (<code>requests</code>).</p>
<p>The first step is to log all the traffic/requests to resources that your browser makes. I'll be using Google Chrome, but other modern browsers probably have similar features:</p>
<ol>
<li>Open Google Chrome and navigate to the target page
(<code>https://www.roblox.com/catalog/?Category=2&Subcategory=2&SortType=4</code>)</li>
<li>Hit F12 to open the Chrome Developer Tools menu</li>
<li>Click on the "Network" tab</li>
<li>Click on the "Filter" button (The icon is funnel-shaped), and then
change the filter selection from "All" to "XHR" (<code>XMLHttpRequest</code> or
<code>XHR</code> resources are objects which interact with servers. We want to
only look at XHR resources because they potentially communicate with
web APIs)</li>
<li>Click on the round "Record" button (or press CTRL + E) to enable
logging - the icon should turn red once enabled</li>
<li>Press CTRL + R to refresh the page and begin logging traffic</li>
<li>After refreshing the page, you should see the resource-log start to
fill up. This is a list of all resources our browser requested -
we'll only see XHR objects though since we set up our filter (if
you're curious, you can switch the filter back to "All" to see a
list of all requests to resources made)</li>
</ol>
<p>Click on one of the items in the list. A panel should open on the right with several tabs. Click on the "Headers" tab to view the request URL, the request- and response headers as well as any cookies (view the "Cookies" tab for a prettier view). If the Request URL contains any query string parameters you can also view them in a prettier format in this tab. Here's what that looks like (sorry for the large image):</p>
<p><a href="https://i.stack.imgur.com/Pb7IL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Pb7IL.png" alt="" /></a></p>
<p>This tab tells us everything we want to know about imitating our request. It tells us where we should make the request, and how our request should be formulated in order to be accepted. An ill-formed request will be rejected by the web API - not all web APIs care about the same header fields. For example, some web APIs desperately care about the "User-Agent" header, but in our case, this field is not required. The only reason I know that is because I copy and pasted request headers until the web API wouldn't reject my request anymore - in my solution I'll use the bare minimum to make a valid request.</p>
<p>However, we need to actually figure out which of these XHR objects is responsible for talking to the correct web API - the one that returns the actual information we want to scrape. Select any XHR object from the list and then click on the "Preview" tab to view a parsed version of the data returned by the web API. The assumption is that the web API returned JSON to us - you may have to expand and collapse the tree-structure for a bit before you find what you're looking for, but once you do, you know this XHR object is the one whose request we need to imitate. I happen to know that the data we're interested in is in the XHR object named "details". Here's what part of the expanded JSON looks like in the "Preview" tab:</p>
<p><a href="https://i.stack.imgur.com/IwjJQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IwjJQ.png" alt="" /></a></p>
<p>As you can see, the response we got from this web API (<code>https://catalog.roblox.com/v1/catalog/items/details</code>) contains all the interesting data we want to scrape!</p>
<p>This is where things get sort of esoteric, and specific to this particular webpage (everything up until now you can use to scrape stuff from other pages via web APIs). Here's what happens when you visit <code>https://www.roblox.com/catalog/?Category=2&Subcategory=2&SortType=4</code>:</p>
<ul>
<li>Your browser gets some cookies that persist and a CSRF/XSRF Token is generated and baked into the HTML of the page</li>
<li>Eventually, one of the XHR objects (the one that starts with
"items?") makes an HTTP GET request (cookies required!) to the web API
<code>https://catalog.roblox.com/v1/search/items?category=Collectibles&limit=60&sortType=4&subcategory=Collectibles</code>
(notice the query string parameters).
The response is JSON: it contains a list of item-descriptor things, which looks like this:</li>
</ul>
<p><a href="https://i.stack.imgur.com/CDPoR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CDPoR.png" alt="" /></a></p>
<p>Then, some time later, another XHR object ("details") makes an HTTP POST request to the web API <code>https://catalog.roblox.com/v1/catalog/items/details</code> (refer to first and second screenshots). This request is only accepted by the web API if it contains the right cookies and the previously mentioned CSRF/XSRF token. In addition, this request also needs a payload containing the asset ids whose information we want to scrape - failure to provide this also results in a rejection.</p>
<p>So, it's a bit tricky. The request of one XHR object depends on the response of another.</p>
<p>So, here's the script. It first creates a <code>requests.Session</code> to keep track of cookies. We define a dictionary <code>params</code> (which is really just our query string) - you can change these values to suit your needs. The way it's written now, it pulls the first 60 items from the "Collectibles" category. Then, we get the CSRF/XSRF token from the HTML body with a regular expression. We get the ids of the first 60 items according to our <code>params</code>, and generate a dictionary/payload that the final web API request will accept. We make the final request, create a list of items (dictionaries), and print the keys and values of the first item of our query.</p>
<pre><code>def get_csrf_token(session):
import re
url = "https://www.roblox.com/catalog/"
response = session.get(url)
response.raise_for_status()
token_pattern = "setToken\\('(?P<csrf_token>[^\\)]+)'\\)"
match = re.search(token_pattern, response.text)
assert match
return match.group("csrf_token")
def get_assets(session, params):
url = "https://catalog.roblox.com/v1/search/items"
response = session.get(url, params=params, headers={})
response.raise_for_status()
return {"items": [{**d, "key": f"{d['itemType']}_{d['id']}"} for d in response.json()["data"]]}
def get_items(session, csrf_token, assets):
import json
url = "https://catalog.roblox.com/v1/catalog/items/details"
headers = {
"Content-Type": "application/json;charset=UTF-8",
"X-CSRF-TOKEN": csrf_token
}
response = session.post(url, data=json.dumps(assets), headers=headers)
response.raise_for_status()
items = response.json()["data"]
return items
def main():
import requests
session = requests.Session()
params = {
"category": "Collectibles",
"limit": "60",
"sortType": "4",
"subcategory": "Collectibles"
}
csrf_token = get_csrf_token(session)
assets = get_assets(session, params)
items = get_items(session, csrf_token, assets)
first_item = items[0]
for key, value in first_item.items():
print(f"{key}: {value}")
return 0
if __name__ == "__main__":
import sys
sys.exit(main())
</code></pre>
<p>Output:</p>
<pre><code>id: 76692143
itemType: Asset
assetType: 8
name: Chaos Canyon Sugar Egg
description: This highly collectible commemorative egg recalls that *other* classic ROBLOX level, the one that was never quite as popular as Crossroads.
productId: 11837951
genres: ['All']
itemStatus: []
itemRestrictions: ['Limited']
creatorType: User
creatorTargetId: 1
creatorName: ROBLOX
lowestPrice: 400
purchaseCount: 7714
favoriteCount: 2781
>>>
</code></pre>
|
python|html|web-scraping
| 8 |
1,904,749 | 68,990,612 |
Setting a range of values as variables for a barplot - python
|
<p>In my data, I have a column that shows either one of the following options: <code>'NOT_TESTED'</code>, <code>'NOT_COMPLETED'</code>, <code>'TOO_LOW'</code>, or a value between <code>150</code> and <code>190</code> with a step of 5 (so 150, 155, 160 etc).<br />
I am trying to plot a barplot which shows the amount of time each of these appear in the column, including each individual number.<br />
So the barplot should have as variables on the x-axis: <code>'NOT_TESTED'</code>, <code>'NOT_COMPLETED'</code>, <code>'TOO_LOW'</code>, <code>150</code>, <code>155</code>, <code>160</code> and so on.<br />
Each bar's height should be the number of times the value appears in the column.<br />
This is the code I have tried, and it has gotten me the closest to my goal; however, all the numbers (150–190) show a count of 1 in the barplot, so all those bars have the same height.<br />
This does not match the data and I do not know how to move forward.<br />
I am new to this; any guidance would be greatly appreciated!</p>
<pre><code>num_range = list(range(150,191, 5))
OUTCOMES = ['NOT_TESTED', 'NOT_COMPLETED', 'TOO_LOW']
OUTCOMES.extend(num_range)
df = df.append(pd.DataFrame(num_range,
columns=['PT1']),
ignore_index = True)
df["outcomes_col"] = df["PT1"].astype ("category")
df["outcomes_col"].cat.set_categories(OUTCOMES , inplace = True )
sns.countplot(x= "outcomes_col", data=df, palette='Magma')
plt.xticks(rotation = 90)
plt.ylabel('Amount')
plt.xlabel('Outcomes')
plt.title("Outcomes per Testing")
plt.show()
pd.DataFrame({'ID': {0: 'GF342', 1: 'IF874', 2: 'FH386', 3: 'KJ190', 4: 'TY748', 5: 'YT947', 6: 'DF063', 7: 'ET512', 8: 'GC714', 9: 'SD978', 10: 'EF472', 11: 'PL489', 12: 'AZ315', 13: 'OL821', 14: 'HN765', 15: 'ED589'}, 'Location': {0: 'Q1', 1: 'Q3', 2: 'Q1', 3: 'Q3', 4: 'Q3', 5: 'Q4', 6: 'Q3', 7: 'Q1', 8: 'Q2', 9: 'Q3', 10: 'Q1', 11: 'Q2', 12: 'Q1', 13: 'Q1', 14: 'Q3', 15: 'Q1'}, 'NEW': {0: 'YES', 1: 'NO', 2: 'NO', 3: 'YES', 4: 'YES', 5: 'NO', 6: 'NO', 7: 'YES', 8: 'NO', 9: 'NO', 10: 'NO', 11: 'YES', 12: 'NO', 13: 'YES', 14: 'YES', 15: 'YES'}, 'YEAR': {0: 2021, 1: 2018, 2: 2019, 3: 2021, 4: 2021, 5: 2019, 6: 2019, 7: 2021, 8: 2018, 9: 2019, 10: 2018, 11: 2021, 12: 2018, 13: 2021, 14: 2021, 15: 2021}, 'PT1': {0: '', 1: 'NOT_TESTED', 2: '', 3: 'NOT_FINISHED', 4: '165', 5: '', 6: '180', 7: '145', 8: '155', 9: '', 10: '', 11: '', 12: 'TOO_LOW', 13: '150', 14: '155', 15: ''}, 'PT2': {0: '', 1: '', 2: '', 3: '', 4: '', 5: 'TOO_LOW', 6: '', 7: '', 8: '160', 9: 'TOO_LOW', 10: '', 11: '', 12: '', 13: '', 14: '', 15: ''}, 'PT3': {0: '', 1: 'TOO_LOW', 2: '', 3: 'TOO_LOW', 4: '', 5: '', 6: '', 7: '', 8: '', 9: '', 10: '', 11: 'NOT_FINISHED', 12: '', 13: '185', 14: '', 15: '165'}, 'PT4': {0: '', 1: '', 2: '', 3: '', 4: '', 5: 165.0, 6: '', 7: '', 8: '', 9: '', 10: '', 11: '', 12: 180.0, 13: '', 14: '', 15: ''}})
</code></pre>
<p>This not the whole dataset, just part of it.</p>
|
<p>Starting from this dataframe:<br />
(I replaced <code>NOT_FINISHED</code> with <code>NOT_COMPLETED</code>, consistent with the code in your question; let me know if this replacement is correct)</p>
<pre><code> ID Location NEW YEAR PT1 PT2 PT3 PT4
0 GF342 Q1 YES 2021
1 IF874 Q3 NO 2018 NOT_TESTED TOO_LOW
2 FH386 Q1 NO 2019
3 KJ190 Q3 YES 2021 NOT_COMPLETED TOO_LOW
4 TY748 Q3 YES 2021 165
5 YT947 Q4 NO 2019 TOO_LOW 165
6 DF063 Q3 NO 2019 180
7 ET512 Q1 YES 2021 145
8 GC714 Q2 NO 2018 155 160
9 SD978 Q3 NO 2019 TOO_LOW
10 EF472 Q1 NO 2018
11 PL489 Q2 YES 2021 NOT_COMPLETED
12 AZ315 Q1 NO 2018 TOO_LOW 180
13 OL821 Q1 YES 2021 150 185
14 HN765 Q3 YES 2021 155
15 ED589 Q1 YES 2021 165
</code></pre>
<p>If you are interested in a count plot of the <code>'PT1'</code> column, first of all you have to define the categories to be plotted. Note that your original code appended one artificial row per number to the dataframe (which is why every numeric bar ended up with height 1), and it defined the numeric categories as integers, which never match the string values stored in <code>'PT1'</code>. You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.CategoricalDtype.html" rel="nofollow noreferrer"><strong><code>pandas.CategoricalDtype</code></strong></a> with string categories, so you can also sort them.<br />
So you define a new <code>'outcomes_col'</code> column:</p>
<pre><code>num_range = list(range(150,191, 5))
OUTCOMES = ['NOT_TESTED', 'NOT_COMPLETED', 'TOO_LOW']
OUTCOMES.extend([str(num) for num in num_range])
OUTCOMES = CategoricalDtype(OUTCOMES, ordered = True)
df["outcomes_col"] = df["PT1"].astype(OUTCOMES)
</code></pre>
<p>Then you can proceed to plot this column:</p>
<pre><code>sns.countplot(x= "outcomes_col", data=df, palette='Magma')
plt.xticks(rotation = 90)
plt.ylabel('Amount')
plt.xlabel('Outcomes')
plt.title("Outcomes per Testing")
plt.show()
</code></pre>
<h2>Complete Code</h2>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pandas.api.types import CategoricalDtype
df = pd.DataFrame({'ID': {0: 'GF342', 1: 'IF874', 2: 'FH386', 3: 'KJ190', 4: 'TY748', 5: 'YT947', 6: 'DF063', 7: 'ET512', 8: 'GC714', 9: 'SD978', 10: 'EF472', 11: 'PL489', 12: 'AZ315', 13: 'OL821', 14: 'HN765', 15: 'ED589'}, 'Location': {0: 'Q1', 1: 'Q3', 2: 'Q1', 3: 'Q3', 4: 'Q3', 5: 'Q4', 6: 'Q3', 7: 'Q1', 8: 'Q2', 9: 'Q3', 10: 'Q1', 11: 'Q2', 12: 'Q1', 13: 'Q1', 14: 'Q3', 15: 'Q1'}, 'NEW': {0: 'YES', 1: 'NO', 2: 'NO', 3: 'YES', 4: 'YES', 5: 'NO', 6: 'NO', 7: 'YES', 8: 'NO', 9: 'NO', 10: 'NO', 11: 'YES', 12: 'NO', 13: 'YES', 14: 'YES', 15: 'YES'}, 'YEAR': {0: 2021, 1: 2018, 2: 2019, 3: 2021, 4: 2021, 5: 2019, 6: 2019, 7: 2021, 8: 2018, 9: 2019, 10: 2018, 11: 2021, 12: 2018, 13: 2021, 14: 2021, 15: 2021}, 'PT1': {0: '', 1: 'NOT_TESTED', 2: '', 3: 'NOT_COMPLETED', 4: '165', 5: '', 6: '180', 7: '145', 8: '155', 9: '', 10: '', 11: '', 12: 'TOO_LOW', 13: '150', 14: '155', 15: ''}, 'PT2': {0: '', 1: '', 2: '', 3: '', 4: '', 5: 'TOO_LOW', 6: '', 7: '', 8: '160', 9: 'TOO_LOW', 10: '', 11: '', 12: '', 13: '', 14: '', 15: ''}, 'PT3': {0: '', 1: 'TOO_LOW', 2: '', 3: 'TOO_LOW', 4: '', 5: '', 6: '', 7: '', 8: '', 9: '', 10: '', 11: 'NOT_COMPLETED', 12: '', 13: '185', 14: '', 15: '165'}, 'PT4': {0: '', 1: '', 2: '', 3: '', 4: '', 5: 165.0, 6: '', 7: '', 8: '', 9: '', 10: '', 11: '', 12: 180.0, 13: '', 14: '', 15: ''}})
num_range = list(range(150,191, 5))
OUTCOMES = ['NOT_TESTED', 'NOT_COMPLETED', 'TOO_LOW']
OUTCOMES.extend([str(num) for num in num_range])
OUTCOMES = CategoricalDtype(OUTCOMES, ordered = True)
df["outcomes_col"] = df["PT1"].astype(OUTCOMES)
sns.countplot(x= "outcomes_col", data=df, palette='Magma')
plt.xticks(rotation = 90)
plt.ylabel('Amount')
plt.xlabel('Outcomes')
plt.title("Outcomes per Testing")
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/4YVVY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4YVVY.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|matplotlib|seaborn
| 1 |
1,904,750 | 72,744,365 |
How to efficiently join a small table to a large one on python
|
<p>I have two tables similar to these:</p>
<pre><code>df_1 = pd.DataFrame({'id': [1,1,1,1,1,2,2,2,2,2], 'x': [0,1,2,3,4,5,6,7,8,9]})
id x
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
5 2 5
6 2 6
7 2 7
8 2 8
9 2 9
df_2 = pd.DataFrame({'y': [10,100]}, index=[1,2])
y
1 10
2 100
</code></pre>
<p>My goal is to multiply <code>df['x']</code> by <code>df['y']</code> based on <code>id</code>. My current solution works, but it seems to me that there should be a more efficient/elegant way of doing this.</p>
<p>This is my code:</p>
<pre><code>df_comb = pd.merge(df_1, df_2, left_on='id', right_index=True)
x_new = df_comb['x'] * df_comb['y']
df_1['x_new'] = x_new.to_numpy()
id x x_new
0 1 0 0
1 1 1 10
2 1 2 20
3 1 3 30
4 1 4 40
5 2 5 500
6 2 6 600
7 2 7 700
8 2 8 800
9 2 9 900
</code></pre>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a> to recover the multiplication factor from <code>df_2</code>, then <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.mul.html" rel="nofollow noreferrer"><code>mul</code></a> to the <code>x</code> column:</p>
<pre><code>df_1['x_new'] = df_1['id'].map(df_2['y']).mul(df_1['x'])
</code></pre>
<p>output:</p>
<pre><code> id x x_new
0 1 0 0
1 1 1 10
2 1 2 20
3 1 3 30
4 1 4 40
5 2 5 500
6 2 6 600
7 2 7 700
8 2 8 800
9 2 9 900
</code></pre>
|
python|pandas|merge
| 3 |
1,904,751 | 72,818,706 |
Python: Assert mock function was called with a string containing another string
|
<p>Here is a simplified version of the problem I am facing: Let's say I have a function that accepts a path to a directory and then removes all of its content except (optionally) a designated "keep file",</p>
<pre class="lang-py prettyprint-override"><code>import os
KEEP_FILE_CONSTANT = '.gitkeep'
def clear_directory(directory: str, retain: bool = True) -> bool:
try:
filelist = list(os.listdir(directory))
for f in filelist:
filename = os.path.basename(f)
if retain and filename == KEEP_FILE_CONSTANT:
continue
os.remove(os.path.join(directory, f))
return True
except OSError as e:
print(e)
return False
</code></pre>
<p>I am trying to write a unit test for this function that verifies the <code>os.remove</code> was called. This is currently how I am testing it:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
from unittest.mock import ANY
@pytest.mark.parametrize('directory', [
('random_directory_1'),
('random_directory_2'),
# ...
])
@patch('module.os.remove')
def test_clear_directory(delete_function, directory):
clear_directory(directory)
delete_function.assert_called()
delete_function.assert_called_with(ANY)
</code></pre>
<p>Ideally, what I would like to assert in the test is the <code>delete_function</code> was called with an argument containing <code>directory</code>, i.e. something like,</p>
<pre class="lang-py prettyprint-override"><code>delete_function.assert_called_with(CONTAINS(directory))
</code></pre>
<p>or something of that nature. I have been looking at <a href="https://pypi.org/project/PyHamcrest/" rel="nofollow noreferrer">PyHamcrest</a>, specifically the <a href="https://pyhamcrest.readthedocs.io/en/v2.0.2/tutorial/#predefined-matchers" rel="nofollow noreferrer">contains_string</a> function, but I am not sure how to apply it here or if it's even possible.</p>
<p>Is there any way to implement a CONTAINS matcher for this use case?</p>
|
<p>This isn't a direct answer to your question, but if I were writing these tests, I would take a different approach:</p>
<ul>
<li>Make a temporary directory.</li>
<li>Actually delete the files.</li>
<li>Check that only the expected files remain.</li>
</ul>
<p>This way, you're testing the actual behavior you want, and not depending on internal implementation details (i.e. the fact that you're using <code>os.remove()</code> instead of some alternative like <code>Pathlib.unlink()</code>).</p>
<p>If you're not familiar, pytest provides a <a href="https://docs.pytest.org/en/7.1.x/how-to/tmp_path.html" rel="nofollow noreferrer"><code>tmp_path</code></a> fixture for exactly this kind of test. However, it's still a bit of a chore to fill in the temporary directory, especially if you want to test a variety of nested file hierarchies. I wrote a fixture called <a href="https://pytest-tmp-files.readthedocs.io/en/latest/?badge=latest" rel="nofollow noreferrer"><code>tmp_files</code></a> to make this easier, though, and I think it might be a good fit for your problem. Here's how the test would look:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
# I included tests for nested files, even though the sample function you
# provided doesn't support them, just to show how to do it.
@pytest.mark.parametrize(
'tmp_files, to_remove, to_keep', [
pytest.param(
{'file.txt': ''},
['file.txt'],
[],
id='remove-1-file',
),
pytest.param(
{'file-1.txt': '', 'file-2.txt': ''},
['file-1.txt', 'file-2.txt'],
[],
id='remove-2-files',
),
pytest.param(
{'file.txt': '', 'dir/file.txt': ''},
['file.txt', 'dir/file.txt'],
[],
id='remove-nested-files',
marks=pytest.mark.xfail,
),
pytest.param(
{'.gitkeep': ''},
[],
['.gitkeep'],
id='keep-1-file',
),
pytest.param(
{'.gitkeep': '', 'dir/.gitkeep': ''},
[],
['.gitkeep', 'dir/.gitkeep'],
id='keep-nested-files',
marks=pytest.mark.xfail,
),
],
indirect=['tmp_files'],
)
def test_clear_directory(tmp_files, to_remove, to_keep):
clear_directory(tmp_files)
for p in to_remove:
assert not os.path.exists(tmp_files / p)
for p in to_keep:
assert os.path.exists(tmp_files / p)
</code></pre>
<p>To briefly explain, the <code>tmp_files</code> parameters specify what files to create in each temporary directory, and are simply dictionaries mapping file names to file contents. Here all the files are simple text files, but it's also possible to create symlinks, FIFOs, etc. The <code>indirect=['tmp_files']</code> argument is easy to miss but very important. It tells pytest to parametrize the <code>tmp_files</code> fixture with the <code>tmp_files</code> parameters.</p>
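<p>That said, if you still want the <code>CONTAINS</code> matcher from the question: <code>assert_called_with</code> compares arguments with <code>==</code>, so any object whose <code>__eq__</code> performs the containment check will work. A minimal sketch (the <code>Contains</code> class name is mine, not from any library):</p>
<pre class="lang-py prettyprint-override"><code>class Contains:
    """Matches any string that contains the given substring."""
    def __init__(self, substring):
        self.substring = substring

    def __eq__(self, other):
        return isinstance(other, str) and self.substring in other

    def __repr__(self):  # nicer assertion error messages
        return f'Contains({self.substring!r})'

delete_function.assert_called_with(Contains(directory))
</code></pre>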
|
python|pytest|python-unittest|python-mock|pyhamcrest
| 1 |
1,904,752 | 68,347,041 |
Problem training a model for housing dataset, ANN
|
<p>I am making a model to train on a housing datasets. Below is the error I keep on getting:-</p>
<pre><code>ValueError: Failed to find data adapter that can handle input: <class 'sklearn.preprocessing._data.MinMaxScaler'>, <class 'numpy.ndarray'>
</code></pre>
<p>The model is given below:-</p>
<pre><code> model=Sequential()
model.add(Dense(7,activation='relu'))
model.add(Dense(7,activation='relu'))
model.add(Dense(7,activation='relu'))
model.add(Dense(7,activation='softmax'))
model.compile(optimizer='adam',loss='categorical_crossentropy')
model.fit(X_train,y_train,batch_size=3,epochs=200,verbose=3,callbacks=
[early],validation_data=(X_test,y_test))
</code></pre>
|
<p>Ensure that your input data, i.e., the training and testing data, are numpy arrays instead of lists. Don't forget to check the labels as well. Note that the error message mentions <code>MinMaxScaler</code>, which suggests the scaler object itself was passed to <code>fit()</code> rather than the transformed data.</p>
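<p>A minimal sketch of the fix (variable names follow your snippet; <code>scaler</code> is a hypothetical name, assuming a <code>MinMaxScaler</code> was used as the error message suggests):</p>
<pre><code>import numpy as np

# Pass the transformed data to fit(), not the scaler object itself:
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

# Make sure everything handed to fit() is an ndarray, not a list:
X_train, y_train = np.asarray(X_train), np.asarray(y_train)
X_test, y_test = np.asarray(X_test), np.asarray(y_test)
</code></pre>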
|
python|keras
| 0 |
1,904,753 | 68,408,522 |
Swapping elements in tuples - football calendar generator
|
<p>Here is my code:</p>
<pre><code> n = int(len(teams_names) / 2)
rounds = []
for i in range(len(teams_names) - 1):
t = teams_names[:1] + teams_names[-i:] + teams_names[1:-i] if i else teams_names
rounds.append(list(zip(t[:n], reversed(t[n:]))))
one_round_length = len(rounds)
list(rounds)
for i in range(len(rounds)):
one_round = rounds[one_round_length-i-1]
for j in range(len(one_round)):
one_round[j][0], one_round[j][1] = one_round[j][1], one_round[j][0]
rounds.append(one_round)
</code></pre>
<p>But I have TypeError: 'tuple' object does not support item assignment in the line:</p>
<pre><code> one_round[j][0], one_round[j][1] = one_round[j][1], one_round[j][0]
</code></pre>
<p>The code is to create the football matches calendar. In the first ,,round", creates random pairs of the teams. And in the second "round" I want to reverse host-guest teams pairs.</p>
<p>For example, without reverse I have:</p>
<pre><code>[('Atletico Madrid', 'Sevilla FC'), ('Villareal CF', 'Athletic Club')]
[('Atletico Madrid', 'Villareal CF'), ('Sevilla FC', 'Athletic Club')]
[('Atletico Madrid', 'Athletic Club'), ('Sevilla FC', 'Villareal CF')]
[('Atletico Madrid', 'Athletic Club'), ('Sevilla FC', 'Villareal CF')]
[('Atletico Madrid', 'Villareal CF'), ('Sevilla FC', 'Athletic Club')]
[('Atletico Madrid', 'Sevilla FC'), ('Villareal CF', 'Athletic Club')]
</code></pre>
<p>I want to have:</p>
<pre><code>[('Atletico Madrid', 'Sevilla FC'), ('Villareal CF', 'Athletic Club')]
[('Atletico Madrid', 'Villareal CF'), ('Sevilla FC', 'Athletic Club')]
[('Atletico Madrid', 'Athletic Club'), ('Sevilla FC', 'Villareal CF')]
[('Athletic Club', 'Atletico Madrid'), ('Villareal CF', 'Sevilla FC')]
[('Villareal CF', 'Atletico Madrid'), ('Athletic Club', 'Sevilla FC')]
[('Sevilla FC', 'Atletico Madrid'), ('Athletic Club', 'Villareal CF')]
</code></pre>
|
<p>The <code>TypeError</code> occurs because tuples are immutable: you cannot assign to their elements, so you have to build new reversed tuples instead of swapping in place. If you have the following list:</p>
<pre><code>rounds = [
('Atletico Madrid', 'Sevilla FC'), ('Villareal CF', 'Athletic Club'),
('Atletico Madrid', 'Villareal CF'), ('Sevilla FC', 'Athletic Club'),
('Atletico Madrid', 'Athletic Club'), ('Sevilla FC', 'Villareal CF'),
('Atletico Madrid', 'Athletic Club'), ('Sevilla FC', 'Villareal CF'),
('Atletico Madrid', 'Villareal CF'), ('Sevilla FC', 'Athletic Club'),
('Atletico Madrid', 'Sevilla FC'), ('Villareal CF', 'Athletic Club')]
</code></pre>
<p>apply the following code and print</p>
<pre><code>reversed_rounds = [(y,x) for x,y in rounds]
print(reversed_rounds)
</code></pre>
<p>Output:</p>
<pre><code>[('Sevilla FC', 'Atletico Madrid'), ('Athletic Club', 'Villareal CF'),
('Villareal CF', 'Atletico Madrid'), ('Athletic Club', 'Sevilla FC'),
('Athletic Club', 'Atletico Madrid'), ('Villareal CF', 'Sevilla FC'),
('Athletic Club', 'Atletico Madrid'), ('Villareal CF', 'Sevilla FC'),
('Villareal CF', 'Atletico Madrid'), ('Athletic Club', 'Sevilla FC'),
('Sevilla FC', 'Atletico Madrid'), ('Athletic Club', 'Villareal CF')]
</code></pre>
<h1 id="or-dbi0">OR</h1>
<p>try the following one to get exactly what you want</p>
<pre><code>rounds = [
[('Atletico Madrid', 'Sevilla FC'), ('Villareal CF', 'Athletic Club')],
[('Atletico Madrid', 'Villareal CF'), ('Sevilla FC', 'Athletic Club')],
[('Atletico Madrid', 'Athletic Club'), ('Sevilla FC', 'Villareal CF')],
[('Atletico Madrid', 'Athletic Club'), ('Sevilla FC', 'Villareal CF')],
[('Atletico Madrid', 'Villareal CF'), ('Sevilla FC', 'Athletic Club')],
[('Atletico Madrid', 'Sevilla FC'), ('Villareal CF', 'Athletic Club')]
]
final_list = []
for li in rounds:
inner_list=[]
for tup in li:
x,y = tup
inner_list.append((y,x))
final_list.append(inner_list)
for item in final_list:
print(item)
</code></pre>
<p>Output:</p>
<pre><code>[('Sevilla FC', 'Atletico Madrid'), ('Athletic Club', 'Villareal CF')]
[('Villareal CF', 'Atletico Madrid'), ('Athletic Club', 'Sevilla FC')]
[('Athletic Club', 'Atletico Madrid'), ('Villareal CF', 'Sevilla FC')]
[('Athletic Club', 'Atletico Madrid'), ('Villareal CF', 'Sevilla FC')]
[('Villareal CF', 'Atletico Madrid'), ('Athletic Club', 'Sevilla FC')]
[('Sevilla FC', 'Atletico Madrid'), ('Athletic Club', 'Villareal CF')]
</code></pre>
|
python|list|tuples
| 1 |
1,904,754 | 63,218,726 |
TypeError: unhashable type: 'list' in last code line
|
<pre><code>rm=[[],['COC1=C(C=C(C=C1)CC2=NC=CC3=CC(=C(C=C32)OC)OC)OC.Cl'], ['C[C@@H]1[C@H]([C@@H]([C@@H]([C@@H](O1)OC\\2CC3C(C(CC(O3)(CC(CC(CC(CC(CC(=O)CC(CC(=O)OC(C(/C=C/C=C/C=C\\C=C/C=C/C=C/C=C2)C)C(C)CCC(CC(=O)C4=CC=C(C=C4)NC)O)O)O)O)O)O)O)O)C(=O)OC)O)N)O']]
smiles_dict=['CCCC1=NN(C2=C1N=C(NC2=O)C3=C(C=CC(=C3)C(=O)CN4CCN(CC4)CC)OCC)C', 'CCCC1=NN(C2=C1N=C(NC2=O)C3=C(C=CC(=C3)S(=O)(=O)N4CCN(CC4)CC)OCC)C']
nsmilesd = {}
nsmilesd= list(set(smiles_dict)-set(rm))
</code></pre>
<p>Error is shown for <code>set(rm)</code>. Can you help me understanding why?</p>
|
<p>The error refers to the object hash, which must not change throughout the run of the Python program. Try decomposing that statement and you'll see that <code>set(rm)</code> fails: that list's elements are themselves lists, and mutable objects can't be hashed, because they could be changed later, invalidating the hash.</p>
<p>Sets use the object hash to look up values. If the hash of an object could change, like and unlike values could no longer be determined.</p>
<p>I'm not sure what <code>rm</code> is supposed to be, but it looks like it holds single-item lists, and it's those items that should go into the set operation.</p>
<pre><code>rm=[[],['COC1=C(C=C(C=C1)CC2=NC=CC3=CC(=C(C=C32)OC)OC)OC.Cl'], ['C[C@@H]1[C@H]([C@@H]([C@@H]([C@@H](O1)OC\\2CC3C(C(CC(O3)(CC(CC(CC(CC(CC(=O)CC(CC(=O)OC(C(/C=C/C=C/C=C\\C=C/C=C/C=C/C=C2)C)C(C)CCC(CC(=O)C4=CC=C(C=C4)NC)O)O)O)O)O)O)O)O)C(=O)OC)O)N)O']]
smiles_dict=['CCCC1=NN(C2=C1N=C(NC2=O)C3=C(C=CC(=C3)C(=O)CN4CCN(CC4)CC)OCC)C', 'CCCC1=NN(C2=C1N=C(NC2=O)C3=C(C=CC(=C3)S(=O)(=O)N4CCN(CC4)CC)OCC)C']
rm2 = []
for lst in rm:
rm2.extend(lst)
nsmilesd = {}
nsmilesd= list(set(smiles_dict)-set(rm2))
</code></pre>
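<p>For reference, the flattening loop can also be written with <code>itertools.chain</code> from the standard library:</p>
<pre><code>from itertools import chain

rm2 = list(chain.from_iterable(rm))
</code></pre>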
|
python
| 0 |
1,904,755 | 58,882,960 |
eventlet with django and python socket io
|
<p>Whenever I try connecting the client to the server, it connects successfully, and when the client fires a 'chat' event with some data, it gets the response back as:
"received your msg" + data sent
"welcome"</p>
<p>Whenever the client fires a 'chat' event, the server (sio) also fires a 'chat' event, but when the server itself wants to fire a 'chat' event to the client, it doesn't work.
I tried different things like</p>
<pre><code>import eventlet
eventlet.monkey_patch()
</code></pre>
<p>then I tried</p>
<pre><code>sio.start_background_task()
</code></pre>
<p>After that I also tried normal threading, but that didn't work either.</p>
<p>apart from that I also tried </p>
<pre><code>eventlet.spawn()
</code></pre>
<p>but still no improvement.
How should I proceed?</p>
<h1>socketio_app/views.py</h1>
<pre><code>import threading
import time
import eventlet
eventlet.monkey_patch()
from django.shortcuts import render
# Create your views here.
import socketio
sio = socketio.Server(async_mode='eventlet')
@sio.event
def connect(sid, environ):
print('connected to ', sid)
return True
@sio.event
def disconnect(sid):
print("disconnected", sid)
@sio.on('chat')
def on_message(sid, data):
print('I received a message!', data)
sio.emit("chat", "received your msg" + str(data))
sio.emit("chat", "welcome")
@sio.event
def bg_emit():
print("emiitting")
# sio.emit('chat', dict(foo='bar'))
sio.emit('chat', data='ment')
def bkthred():
print("strting bckgroud tsk")
while True:
eventlet.sleep(5)
bg_emit()
def emit_data(data):
print("emitting strts")
# sio.start_background_task(target=bkthred)
eventlet.spawn(bkthred)
</code></pre>
<h1>supervisord.conf</h1>
<pre><code>[program:gunicorn]
command=gunicorn --name ProLogger-gunicorn --workers 2 ProLogger.wsgi:application --bind 0.0.0.0:8000 --timeout 100 -k eventlet
#directory=/home/ubuntu/ProLoggerBackend/
directory=/home/nimish/PycharmProjects/ProLoggerBackend/ProLogger/
stdout_logfile=/home/ubuntu/logs/gunicorn_output.log
stderr_logfile=/home/ubuntu/logs/gunicorn_error.log
autostart=true
autorestart=true
startretries=10
environment=PYTHONUNBUFFERED=1
[program:gunicornsocketio]
command=gunicorn --name ProLoggerSocket-gunicorn --workers 1 ProLogger.wsgi:socket_application --bind 0.0.0.0:5000 -k eventlet
#directory=/home/ubuntu/ProLoggerBackend/
directory=/home/nimish/PycharmProjects/ProLoggerBackend/ProLogger/
stdout_logfile=/home/ubuntu/logs/gunicornsocket_output.log
stderr_logfile=/home/ubuntu/logs/gunicornsocket_error.log
autostart=true
autorestart=true
startretries=10
environment=PYTHONUNBUFFERED=1
</code></pre>
<p>port 8000 runs the main application
port 5000 runs the socket server</p>
<h1>wsgi.py</h1>
<pre><code>from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ProLogger.settings")
from utility.environment_utils import set_settings_module
import socketio
from socketio_app.views import sio
application = get_wsgi_application()
socket_application = socketio.WSGIApp(sio, application)
</code></pre>
<h1>My Simple testing Client Application</h1>
<pre><code>import socketio
sio = socketio.Client()
@sio.event
def connect():
print('connection established')
@sio.on("chat")
def my_message(data):
print('message received with ', data)
@sio.event
def disconnect():
print('disconnected farom server')
sio.connect('http://0.0.0.0:5000')
# sio.connect('http://3.17.182.118:5000')
# sio.wait()
# socketio.WSGIApp()
# socketio.Server()
sio.emit("chat", "hello")
</code></pre>
<p>How should I proceed so that the server itself can fire a 'chat' event? Not as an acknowledgement like it is doing now (i.e., whenever the client fires 'chat', the server also fires 'chat').</p>
<p>So, how can I fire the event from the Django views?</p>
|
<p><a href="https://github.com/miguelgrinberg/python-socketio/issues/155#issuecomment-353529308" rel="nofollow noreferrer">https://github.com/miguelgrinberg/python-socketio/issues/155#issuecomment-353529308</a>
This comment helped me check the process IDs of the running WSGI and socket servers.
The biggest blunder I made was trying to emit from the main application process on port 8000, while the client was actually connected to the socket server on port 5000.
Emitting from the process serving port 5000 solved the issue.</p>
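<p>For completeness: if you do need to emit from the Django views running in the other gunicorn process (port 8000), python-socketio supports that through an external message queue. A minimal sketch, assuming a Redis instance is available (the URL is a placeholder):</p>
<pre><code># In the socket server process, attach a client manager:
sio = socketio.Server(async_mode='eventlet',
                      client_manager=socketio.RedisManager('redis://localhost:6379'))

# In the Django view (the port 8000 process), a write-only manager can emit:
external_sio = socketio.RedisManager('redis://localhost:6379', write_only=True)
external_sio.emit('chat', data='sent from the Django view')
</code></pre>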
|
python|django|socket.io|eventlet
| 0 |
1,904,756 | 58,607,904 |
Django - Receiving error "too many values to unpack (expected 2)"
|
<p>In my website, I have created a custom user from the <strong>AbstractUser</strong> class.
When I check if a user with the same username or email exists, I get a value error returned to me.</p>
<p>Here's my code:
<strong>models.py</strong>:</p>
<pre><code>class Account(AbstractUser):
pass
def __str__(self):
return self.username
def __unicode__(self):
return self.username
</code></pre>
<p><strong>views.py</strong>:</p>
<pre><code>try:
user = Account.objects.get(username=request.POST['username'])
return render(request, 'users/signup.html', {'error': 'Username has already been taken'})
except Account.DoesNotExist:
try:
user = Account.objects.get(request.POST['email'])
return render(request, 'users/signup.html', {'error': 'Email is not available'})
except Account.DoesNotExist:
user = Account.objects.create_user(username=request.POST['username'], email=request.POST['email'], password=request.POST['password'])
auth.login(request, user)
return redirect('home')
</code></pre>
<p><strong>Error returned</strong>:</p>
<pre><code>ValueError at /accounts/signup/
too many values to unpack (expected 2)
</code></pre>
<p><strong>Traceback</strong>: <code>user = Account.objects.get(username=request.POST['username'])</code> </p>
<p><strong>Ful traceback</strong>:</p>
<pre><code>File "/Users/eesamunir/Documents/Projects/youtube-project/accounts/views.py" in signup
28. user = Account.objects.get(username=request.POST['username'])
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/db/models/manager.py" in manager_method
82. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/db/models/query.py" in get
406. raise self.model.DoesNotExist(
During handling of the above exception (Account matching query does not exist.), another exception occurred:
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/core/handlers/exception.py" in inner
34. response = get_response(request)
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/core/handlers/base.py" in _get_response
115. response = self.process_exception_by_middleware(e, request)
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/core/handlers/base.py" in _get_response
113. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/eesamunir/Documents/Projects/youtube-project/accounts/views.py" in signup
32. user = Account.objects.get(request.POST['email'])
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/db/models/manager.py" in manager_method
82. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/db/models/query.py" in get
399. clone = self.filter(*args, **kwargs)
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/db/models/query.py" in filter
892. return self._filter_or_exclude(False, *args, **kwargs)
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/db/models/query.py" in _filter_or_exclude
910. clone.query.add_q(Q(*args, **kwargs))
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/db/models/sql/query.py" in add_q
1290. clause, _ = self._add_q(q_object, self.used_aliases)
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/db/models/sql/query.py" in _add_q
1315. child_clause, needed_inner = self.build_filter(
File "/Users/eesamunir/.local/share/virtualenvs/youtube-project-snNRTgQy/lib/python3.8/site-packages/django/db/models/sql/query.py" in build_filter
1187. arg, value = filter_expr
Exception Type: ValueError at /accounts/signup/
Exception Value: too many values to unpack (expected 2)
</code></pre>
<p>Does anybody have a solution? Thank you.</p>
|
<p>Read this line of the traceback carefully:</p>
<pre><code>File "/Users/eesamunir/Documents/Projects/youtube-project/accounts/views.py" in signup
32.user = Account.objects.get(request.POST['email'])
</code></pre>
<p>The error occurs because the email is passed positionally; <code>get()</code> expects a keyword argument, i.e. <code>Account.objects.get(email=request.POST['email'])</code></p>
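<p>For reference, an explicit existence check is a bit more idiomatic than the nested try/except:</p>
<pre><code>if Account.objects.filter(username=request.POST['username']).exists():
    return render(request, 'users/signup.html', {'error': 'Username has already been taken'})
if Account.objects.filter(email=request.POST['email']).exists():
    return render(request, 'users/signup.html', {'error': 'Email is not available'})
</code></pre>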
|
python|django|django-models
| 3 |
1,904,757 | 16,021,308 |
Set read timeout in Python-Requests
|
<p>Running into the following exception while trying to query a RESTful api(note not my api, so going in and doing anything with the actual server is unfortunately not possible):
<code>javax.naming.NamingException: LDAP response read timed out</code>.</p>
<p>I am using Python-requests to handle all of my GETs, POSTs, etc, and the actual connection appears to always be fine, but when the server is under heavy load it appears to not actually return all of the data before the read timesout. Anybody know of anyway to change the timeout on the read? Looking through the Python-requests documentation, I was only able to find information on changing the connection timeout.</p>
<p>Note, I have read through other read timeout questions, but all were questions regarding Pythons other http/url libraries.</p>
|
<p>You can set the <code>timeout</code> parameter as a tuple:</p>
<pre><code>r = requests.get('https://github.com', timeout=(3.05, 27))
</code></pre>
<p>The first value of the tuple is the connection timeout (3.05 seconds); the second is the read timeout (27 seconds).</p>
<p>more details in requests doc: <a href="https://requests.readthedocs.io/en/master/user/advanced/#timeouts" rel="nofollow noreferrer">https://requests.readthedocs.io/en/master/user/advanced/#timeouts</a></p>
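<p>A read timeout then surfaces as its own exception type, which you can catch separately from connection problems:</p>
<pre><code>import requests

try:
    r = requests.get('https://github.com', timeout=(3.05, 27))
except requests.exceptions.ReadTimeout:
    # the server accepted the connection but took too long to send data
    pass
</code></pre>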
|
python|python-requests
| 2 |
1,904,758 | 59,607,299 |
Python to compare colored areas of images
|
<p>Assuming there are only 2 colors in an image. What's the simplest way in Python to tell an image has more (the colored areas) of these 2 colors than the other (group of similar images)?</p>
<p>Definition of "more": the area of total colored blocks of one picture, is bigger than the other. (please note the shape of colors might be irregular)</p>
<p>Thank you.</p>
<p><a href="https://i.stack.imgur.com/bkhvo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bkhvo.png" alt="enter image description here"></a></p>
|
<p>Okay, after some experimentation, I have a possible solution. You can use Pillow, a common image-loading/handling library, to convert the images to an ndarray, and then use the <code>count_nonzero()</code> method to get your desired results. As a fun side-effect, this works with an arbitrary amount of colors. Here's full working code that I just tried:</p>
<pre><code>from PIL import Image # because for some reason, that's how you import something from Pillow
import numpy as np
im = Image.open("/path/to/image.png")
arr = np.array(im.getdata())
unique_colors, counts = np.unique(arr.reshape(-1, arr.shape[1]), axis=0, return_counts=True)
</code></pre>
<p>Now the <code>unique_colors</code> variable holds the unique colors that appear in your image, and <code>counts</code> holds the corresponding counts for each color in the image; that is to say, <code>counts[i]</code> is the number of times <code>unique_colors[i]</code> appears in the image for any <code>i</code>. </p>
<p>How does the unique + reshaping line work? This is borrowed from <a href="https://stackoverflow.com/a/48904991/10497351">this particular answer</a>. Basically, you flatten out your image array such that it has shape <code>(num_pixels, num_channels)</code>, which could be 1, 3, or 4 depending on your image format (single-channel, RGB, RGBA, etc.). Now that I have a giant 2D "table" of pixels, I simply find which row values (hence <code>axis=0</code>) are unique, and then use the <code>return_counts</code> keyword to return, well, the counts. </p>
<p>At this point, you have extracted the unique colors and counts of those colors <strong>for a single image.</strong> To compare multiple images, you would repeat this process on multiple images, find the colors they have in common, and then you can simply compare integers to find out which image has more of a particular color.</p>
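<p>A minimal sketch of that comparison between two images, reusing the imports above (the file names and the color of interest are placeholders):</p>
<pre><code>def color_counts(path):
    im = Image.open(path)
    arr = np.array(im.getdata())
    colors, counts = np.unique(arr.reshape(-1, arr.shape[1]), axis=0, return_counts=True)
    return {tuple(c): n for c, n in zip(colors, counts)}

a = color_counts("image_a.png")
b = color_counts("image_b.png")

# e.g. compare how often a specific RGBA color appears in each image
color = (255, 0, 0, 255)  # placeholder color of interest
print(a.get(color, 0) > b.get(color, 0))
</code></pre>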
<p>For my particular image, the format of the channels happened to be RGBA; in any case, I would recommend printing out <code>arr.shape</code> prior to the reshape step to verify that you have the correct index. If you/anyone else knows of a more general method to find the channel index of an image obtained in this fashion — I'm all ears. Thus, you may have to change the index of <code>arr.shape</code> to something else depending on your image. For the record, I tried this on a <code>.png</code> image, like you specified. Hope this helps!</p>
|
python|image
| 1 |
1,904,759 | 70,940,103 |
How to set proxy while using googletrans lib python
|
<p>I am using googletrans lib in python to translate my text. Can someone help me out how do we set proxies in goolgetrans object?</p>
<p>Version of googletrans :3.0.0</p>
|
<p><strong>UPDATE:</strong>
Based on the error described in the comments, you should use the following code. Tested in my environment, pointing to a proxy in the cloud:</p>
<h4>Whitout password:</h4>
<pre class="lang-py prettyprint-override"><code>from googletrans import Translator
from httpcore import SyncHTTPProxy
proxy={"https": SyncHTTPProxy((b'http', b'<proxy server ip>', 3128, b''))}
# http = proxy protocol
# <proxy server ip> = proxy server ip / localhost if its local
# 3128 = proxy server port, change to yours
translator = Translator(service_urls=['translate.googleapis.com'],proxies=proxy)
translation = translator.translate("Der Himmel ist blau und ich mag Bananen", dest='en')
print(translation.text)
</code></pre>
<h4>With Password authentication:</h4>
<pre class="lang-py prettyprint-override"><code>from googletrans import Translator
from httpcore import SyncHTTPProxy
from base64 import b64encode
def build_proxy_headers(username, password):
userpass = (username.encode("utf-8"), password.encode("utf-8"))
token = b64encode(b":".join(userpass))
return [(b"Proxy-Authorization", b"Basic " + token)]
proxy_url=(b"http", b"<my_proxy_host_ip> ", 3128, b'') # <my_proxy_host_ip> as `bytes`, <my_proxy_port> as `int`
proxy_headers = build_proxy_headers("<username>","<password>" ) # <my_username>, <my_password> as `str`
proxy={"https": SyncHTTPProxy(proxy_url=proxy_url, proxy_headers=proxy_headers) }
translator = Translator(service_urls=['translate.googleapis.com'],proxies=proxy)
translation = translator.translate("Der Himmel ist blau und ich mag Bananen", dest='en')
print(translation.text)
</code></pre>
<hr />
<h4>Previous answer</h4>
<p><code>googletrans</code> uses the <code>httpx</code> library; you can follow its documentation to configure the proxy connection:</p>
<p>Send the proxy data in a dictionary variable:</p>
<pre class="lang-py prettyprint-override"><code>proxies_def = {
"http://": "http://localhost:8030",
"https://": "http://localhost:8031",
}
#Or with authentication:
proxies_def = {
"http://": "http://username:password@localhost:8030",
# ...
}
</code></pre>
<pre class="lang-py prettyprint-override"><code>from googletrans import Translator
translator = Translator(proxies=proxies_def)
</code></pre>
<p>Check the httpx documentation <a href="https://www.python-httpx.org/advanced/#http-proxying" rel="nofollow noreferrer">here</a>.</p>
|
python|proxy|google-translate|google-translation-api
| 0 |
1,904,760 | 70,917,880 |
how can I loop over Api request for multiple keys and add a value to every loop?
|
<p>How can I add a value on every loop iteration with different API keys? I am trying to write a simple ETL for multiple store locations. My first attempt was to pull each API request individually, but this means duplicating the same code. Therefore, I decided to create a loop to iterate over the keys for every location. I want to identify each transaction with a location identifier, added during the loop. I have included my attempt at solving this in Python. Please provide some feedback on whether this would be the correct approach.</p>
<pre><code>all=[]
for key,value in api_data.items():
header= {
'Accept': 'application/json',
'Authorization': value,}
res=requests.get(URL,headers=header)
res_j=res.json()
all.extend(res_j)
</code></pre>
|
<p>Use a dictionary keyed on the authorisation value. The associated value will be the returned JSON.</p>
<p>Always check the HTTP response.</p>
<p>In this case you only need <em>values()</em> from the <em>api_data</em> dictionary.</p>
<p>Something like this should help:</p>
<pre><code>mydata = dict()
headers = {'Accept': 'application/json'}
for value in api_data.values():
headers['Authorization'] = value
(res := requests.get(URL, headers=headers)).raise_for_status()
mydata[value] = res.json()
</code></pre>
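<p>If each parsed record should also carry the location identifier mentioned in the question, you can tag the records inside the loop. A sketch, assuming <code>api_data</code> maps a location id to its API key and each record is a dict (<code>location_id</code> is a field name I made up):</p>
<pre><code>mydata = dict()
headers = {'Accept': 'application/json'}
for loc_id, value in api_data.items():
    headers['Authorization'] = value
    (res := requests.get(URL, headers=headers)).raise_for_status()
    records = res.json()
    for rec in records:
        rec['location_id'] = loc_id
    mydata[loc_id] = records
</code></pre>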
|
python|json|loops|dictionary|etl
| 0 |
1,904,761 | 59,933,495 |
Need a function that returns an estimate equal to the mean of the closest k points to the number p?
|
<p>Define a function that takes a 1-d NumPy array, a parameter k, and a number p. The function returns an estimate equal to the mean of the closest k points to the number p?</p>
<p>def k_neighbor(input_data, k, p):
"""Returns the k-neighbor estimate for p using data input_data.</p>
<pre><code>Keyword arguments:
input_data -- NumPy array of all the data
k -- Number of k
p -- input values
</code></pre>
<p>Here is the function call</p>
<p>data = np.array([1,3,4,5,7,8,11,12,13,15,19,24,25,29,40])
print(k_neighbor(input_data=data, k=3, p=5))</p>
|
<p>Copying from <a href="https://stackoverflow.com/questions/2566412/find-nearest-value-in-numpy-array">Find nearest value in numpy array</a> and adjusting a bit gives what you want:</p>
<pre><code>import numpy as np
def find_nearest(array, value, n_neighbors=1):
    distances = np.abs(array - value)
nearest_neighbors=[]
for i in range(0,n_neighbors):
idx=distances.argmin()
nearest_neighbors.append(array[idx])
distances[idx]=np.amax(distances)
return nearest_neighbors
data=np.array([1,3,4,5,7,8,11,12,13,15,19,24,25,29,40])
value=24
print(find_nearest(data, value,n_neighbors=3))
</code></pre>
<p>returns </p>
<blockquote>
<p>[24, 25, 19]</p>
</blockquote>
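<p>To turn this into the k-neighbor <em>estimate</em> the assignment asks for, take the mean of the returned neighbors:</p>
<pre><code>print(np.mean(find_nearest(data, value, n_neighbors=3)))  # mean of [24, 25, 19] = 22.666...
</code></pre>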
|
python|numpy|knn
| 1 |
1,904,762 | 60,182,105 |
Autheticate Github private repo in python
|
<p>I want to verify if the given combination of repo URL, userid and password is valid or not.
I am using <code>requests</code> for this. Below is my Python code:</p>
<pre><code>requests.get('https://github.com/geetikatalreja/WebApp_DotNet.git', auth = ('valid_username', 'Valid_password'))
</code></pre>
<p>or </p>
<pre><code>requests.get('https://github.com/geetikatalreja/WebApp_DotNet.git', auth=HTTPBasicAuth('valid_username', 'Valid_password'))
</code></pre>
<p>Both the statements are returning Error code 401. Error code 401 occurs when authentication is required and has failed or has not yet been provided, but I am able to login using the same credentials and URL from Github UI. </p>
<p>Kindly assist.</p>
|
<p>This kind of authentication works if you use the GitHub API but you cannot use the Web UI with basic authentication.</p>
<p>Normally, a Web UI uses a login form that sends a POST Request when you log in. After that, the session cookie is used in order to stay logged in (for the session). If the login should persist after the session expired, the website could use cookies that persist longer. I think GitHub uses this concept.</p>
<p>I would recommend you to use the API for automated processes because you can parse the responses easier.</p>
<p>Also, I strongly recommend not using basic authentication with the real password. I would use PATs instead.</p>
<p>If you want to send authenticated requests to the API you can e.g. execute</p>
<pre><code>requests.get('https://api.github.com/repos/geetikatalreja/WebApp_DotNet', auth = ('valid_username', 'Valid_password'))
</code></pre>
<p>Instead of the password, you can also just use a PAT of your account (which is more secure). You can create a PAT <a href="https://github.com/settings/tokens/new" rel="nofollow noreferrer">over there</a>.</p>
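<p>Since the goal is to validate the credentials, a minimal check could look like this (note that GitHub returns 404, not 403, when the account lacks access to a private repository):</p>
<pre><code>res = requests.get('https://api.github.com/repos/geetikatalreja/WebApp_DotNet',
                   auth=('valid_username', 'personal_access_token'))
valid = res.status_code == 200  # 401 means the credentials are invalid
</code></pre>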
<p>The GitHub API documentation can be found <a href="https://developer.github.com/v3/" rel="nofollow noreferrer">here</a> and the documentation for accessing repositories <a href="https://developer.github.com/v3/repos/" rel="nofollow noreferrer">there</a>.</p>
|
python|github|python-requests|python-3.7
| 2 |
1,904,763 | 5,656,715 |
Can I append twice the same object to an InstrumentedList in SQLAlchemy?
|
<p>I have a pretty simple N:M relationship in SqlAlchemy 0.6.6. I have a class "attractLoop" that can contain a bunch of Media (Images or Videos). I need to have a list in which the same Media (let's say image) can be appended twice. The relationship is as follows:</p>
<p>The media is a base class with most of the attributes Images and Videos will share.</p>
<pre><code>class BaseMedia(BaseClass.BaseClass, declarativeBase):
__tablename__ = "base_media"
_polymorphicIdentity = Column("polymorphic_identity", String(20), key="polymorphicIdentity")
__mapper_args__ = {
'polymorphic_on': _polymorphicIdentity,
'polymorphic_identity': None
}
_name = Column("name", String(50))
_type = Column("type", String(50))
_size = Column("size", Integer)
_lastModified = Column("last_modified", DateTime, key="lastModified")
_url = Column("url", String(512))
_thumbnailFile = Column("thumbnail_file", String(512), key="thumbnailFile")
_md5Hash = Column("md5_hash", LargeBinary(32), key="md5Hash")
</code></pre>
<p>Then the class who is going to use these "media" things:</p>
<pre><code>class TestSqlAlchemyList(BaseClass.BaseClass, declarativeBase):
__tablename__ = "tests"
_mediaItems = relationship("BaseMedia",
secondary=intermediate_test_to_media,
primaryjoin="tests.c.id == intermediate_test_to_media.c.testId",
secondaryjoin="base_media.c.id == intermediate_test_to_media.c.baseMediaId",
collection_class=list,
uselist=True
)
def __init__(self):
super(TestSqlAlchemyList, self).__init__()
self.mediaItems = list()
def getMediaItems(self):
return self._mediaItems
def setMediaItems(self, mediaItems):
if mediaItems:
self._mediaItems = mediaItems
else:
self._mediaItems = list()
def addMediaItem(self, mediaItem):
self.mediaItems.append(mediaItem)
#log.debug("::addMediaItem > Added media item %s to %s. Now length is %d (contains: %s)" % (mediaItem.id, self.id, len(self.mediaItems), list(item.id for item in self.mediaItems)))
def addMediaItemById(self, mediaItemId):
mediaItem = backlib.media.BaseMediaManager.BaseMediaManager.getById(int(mediaItemId))
if mediaItem:
if mediaItem.validityCheck():
self.addMediaItem(mediaItem)
else:
raise TypeError("Media item with id %s didn't pass the validity check" % mediaItemId)
else:
raise KeyError("Media Item with id %s not found" % mediaItem)
mediaItems = synonym('_mediaItems', descriptor=property(getMediaItems, setMediaItems))
</code></pre>
<p>And the intermediate class to link both of the tables:</p>
<pre><code>intermediate_test_to_media = Table(
"intermediate_test_to_media",
Database.Base.metadata,
Column("id", Integer, primary_key=True),
Column("test_id", Integer, ForeignKey("tests.id"), key="testId"),
Column("base_media_id", Integer, ForeignKey("base_media.id"), key="baseMediaId")
)
</code></pre>
<p>When I append the same Media object (instance) twice to one instances of that TestSqlAlchemyList, it appends two correctly, but when I retrieve the TestSqlAlchemyList instance from the database, I only get one. It seems to be behaving more like a set.</p>
<p>The intermediate table has properly all the information, so the insertion seems to be working fine. Is when I try to load the list from the database when I don't get all the items I had inserted.</p>
<pre><code>mysql> SELECT * FROM intermediate_test_to_media;
+----+---------+---------------+
| id | test_id | base_media_id |
+----+---------+---------------+
| 1 | 1 | 1 |
| 2 | 1 | 1 |
| 3 | 1 | 2 |
| 4 | 1 | 2 |
| 5 | 1 | 1 |
| 6 | 1 | 1 |
| 7 | 2 | 1 |
| 8 | 2 | 1 |
| 9 | 2 | 1 |
| 10 | 2 | 2 |
| 11 | 2 | 1 |
| 12 | 2 | 1 |
</code></pre>
<p>As you can see, the "test" instance with id=1 should have the media [1, 1, 2, 2, 1, 1]. Well, it doesn't. When I load it from the DB, it only has the media [1, 2]</p>
<p>I have tried to set any parameter in the relationship that could possibly smell to list... uselist, collection_class = list... Nothing...</p>
<p>You will see that the classes inherit from a BaseClass. That's just a class that isn't actually mapped to any table but contains a numeric field ("id") that will be the primary key for every class and a bunch of other methods useful for the rest of the classes in my system (toJSON, toXML...). Just in case, I'm attaching an excerpt of it:</p>
<pre><code>class BaseClass(object):
_id = Column("id", Integer, primary_key=True, key="id")
def __hash__(self):
return int(self.id)
def setId(self, id):
try:
self._id = int(id)
except TypeError:
self._id = None
def getId(self):
return self._id
@declared_attr
def id(cls):
return synonym('_id', descriptor=property(cls.getId, cls.setId))
</code></pre>
<p>If anyone can give me a push, I'll appreciate it a lot. Thank you. And sorry for the huge post... I don't really know how to explain better.</p>
|
<p>I think what you want is described in the documentation as "<a href="http://docs.djangoproject.com/en/dev/topics/db/models/#extra-fields-on-many-to-many-relationships" rel="nofollow">Extra Fields on Many-to-Many Relationships</a>" (that link is to Django's docs; SQLAlchemy's own docs call the same idea the "Association Object" pattern). Rather than storing a unique row in the database for each "link" between attractLoop and Media, you would store a single association and specify (as part of the link object model) how many times it is referenced and/or in which location(s) in the final list the media should appear. This is a different paradigm from where you started, so it'll certainly require some re-coding, but I think it addresses your issue. You would likely need to use a property to redefine how to add or remove Media from the attractLoop.</p>
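<p>A minimal sketch of such an association object, adapted to the tables in the question (the <code>position</code> and <code>play_count</code> columns and the class name are made up for illustration, not taken from your code):</p>
<pre><code>class TestToMedia(declarativeBase):
    __tablename__ = "intermediate_test_to_media"
    id = Column(Integer, primary_key=True)
    testId = Column("test_id", Integer, ForeignKey("tests.id"), key="testId")
    baseMediaId = Column("base_media_id", Integer, ForeignKey("base_media.id"), key="baseMediaId")
    position = Column(Integer)               # where in the attract loop the media plays
    play_count = Column(Integer, default=1)  # how many consecutive times it repeats

    media = relationship("BaseMedia")

class TestSqlAlchemyList(BaseClass.BaseClass, declarativeBase):
    __tablename__ = "tests"
    mediaLinks = relationship("TestToMedia", order_by="TestToMedia.position")
</code></pre>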
|
python|list|sqlalchemy|multiple-instances|adjacency-list
| 2 |
1,904,764 | 67,874,725 |
How to obtain a subtree of a tree using python igraph?
|
<p>I hoped there is a simple function for this purpose (e.g. <code>tree.subtree(vertex)</code>) but I was not able to find one even after browsing the documentation quite a long time.</p>
<p>In any case, I found this workaround:</p>
<pre class="lang-py prettyprint-override"><code>subtree = tree.induced_subgraph(tree.subcomponent(vertex, mode='out'))
</code></pre>
<p>But this seems inefficient to me as <code>subcomponent()</code> returns a list of vertices reachable from <code>vertex</code> then <code>induced_subgraph()</code> (re)creates the subtree from this vertex-list.</p>
<p>Any other idea? :)</p>
|
<p>I think you are looking for breadth first search, starting at your root:</p>
<pre><code>[vertices, layers, parents] = g.bfs(root)
</code></pre>
<p>You can combine vertices and parents to get the edges of your new graph.</p>
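<p>A sketch of that combination, assuming (per the python-igraph docs) that <code>parents</code> is indexed by vertex id and that the tree's edges point away from the root:</p>
<pre><code>vertices, layers, parents = g.bfs(root, mode='out')
edges = [(parents[v], v) for v in vertices if v != root]
subtree = g.subgraph_edges(g.get_eids(edges), delete_vertices=True)
</code></pre>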
<p>However, I don't think that will actually be a lot more efficient:
<code>subcomponent</code> will be based on BFS, so there will be no difference in running time there (O(|V| + |E|)). <code>induced_subgraph</code> will have a running time O(|E|), so that is also the same as the running time for combining vertices and parents. The constant factors omitted by the big-O notation will be different, sure, but how big is your graph that you think those differences might matter?</p>
|
python|tree|igraph
| 1 |
1,904,765 | 67,662,719 |
How to remove sublist from a list based on indices in another list?
|
<p>I have these lists:</p>
<pre><code>ls = [[0, "a"],
[2, "c"]]
ls2 = [[0, "a"],
[1, "b"],
[2, "c"],
[3, "d"],
[4, "e"]]
</code></pre>
<p>I want to remove all sublists from <code>ls2</code> that are not in <code>ls</code>. So at the end <code>ls2</code> should be the same as <code>ls</code>. I tried this:</p>
<pre><code>ls = [[0, "a"],
[2, "c"]]
ls2 = [[0, "a"],
[1, "b"],
[2, "c"],
[3, "d"],
[4, "e"]]
indices_left = [l[0] for l in ls]
for subll in ls2:
if subll[0] not in indices_left:
ls2.pop(subll[0])
for subll2 in ls2:
print(subll2)
</code></pre>
<p>But this gives:</p>
<pre><code>[0, 'a']
[2, 'c']
[3, 'd']
</code></pre>
<p>I think this is because the first for loop removes list <code>[1, "b"]</code> but after that list <code>[2, "c"]</code> moves to <code>ls2[1]</code> and list <code>[3, "d"]</code> moves to <code>ls2[2]</code> and since <code>ls[1][0] == 2</code>, sublist <code>[3, "d"]</code> is not removed because it is now at index <code>ls2[2]</code>. How can I avoid this and remove all sublists from <code>ls2</code> that are not in <code>ls</code>?</p>
|
<p>Ignoring the fact that there's no use for your code because if you want <code>ls2</code> to be identical to <code>ls</code> you could just copy it, the following list comprehension will work:</p>
<pre><code>ls = [[0, "a"],
[2, "c"]]
ls2 = [[0, "a"],
[1, "b"],
[2, "c"],
[3, "d"],
[4, "e"]]
ls2 = [x for x in ls2 if x in ls]
</code></pre>
<h2>Edit</h2>
<p>If the goal is to select only elements from <code>ls2</code> that have the same first element as an item from <code>ls</code>, then this will work:</p>
<pre><code>indices = [x[0] for x in ls]
ls2 = [x for x in ls2 if x[0] in indices]
</code></pre>
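<p>For large inputs, building a set instead of a list makes each membership test O(1) rather than O(n):</p>
<pre><code>indices = {x[0] for x in ls}
ls2 = [x for x in ls2 if x[0] in indices]
</code></pre>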
|
python|list
| 2 |
1,904,766 | 43,009,950 |
Difficulty getting ndb.tasklets to work in Google App Engine
|
<p>I'm having a bit of difficulty understanding what's going on with some tasklets that I've written to use with Google App Engine's NDB datastore's asynchronous API's. The overview is: I need to pull together a purchase history for a user, and the "UserPurchase" object references the "Product" or "Custom" item that was purchased, and the "Establishment" in which it was purchased, both using KeyProperty properties in the UserPurchase ndb.Model. I want to parallelize the various NDB lookups as much as possible, so I've stuck quite close to the NDB documentation (at <a href="https://cloud.google.com/appengine/docs/standard/python/ndb/async" rel="nofollow noreferrer">https://cloud.google.com/appengine/docs/standard/python/ndb/async</a>) and built 3 tasklets:</p>
<pre><code>@ndb.tasklet
def get_establishment(purch):
resto = yield purch.p_establishment.get_async()
ret = {
"rkey": resto.key.urlsafe(),
"rname": resto.resto_name
}
raise ndb.Return(ret)
@ndb.tasklet
def get_item(purch):
item = yield purch.p_item.get_async()
if (purch.p_item.kind() == "Product"):
ret = {
"ikey": item.key.urlsafe(),
"iname": item.name
}
else:
ret = {
"ikey": item.key.urlsafe(),
"iname": item.cust_name
}
raise ndb.Return(ret)
@ndb.tasklet
def load_history(purch):
at, item = yield get_establishment(purch), get_item(purch)
ret = {
"date": purch.p_datetime,
"at_key": at.rkey,
"at": at.rname,
"item_key": item.ikey,
"item": item.iname,
"qty": purch.p_quantity,
"type": purch.p_type
}
raise ndb.Return(ret)
</code></pre>
<p>Then, I make a call like so:</p>
<pre><code>pq = UserPurchase.query().order(-UserPurchase.p_datetime)
phistory = pq.map(load_history, limit = 20)
</code></pre>
<p>to retrieve the most recent 20 purchases for display. When I run it, however, I get a strange exception that I'm not sure what to make of ... and I'm not familiar enough with tasklets and how they work to confidently say I know what's going on. I'd really appreciate it if someone could give me some pointers on what to look for...! Here's the exception I get when I run the above code:</p>
<pre><code> ...
File "/Programming/VirtualCellar/server/virtsom.py", line 2348, in get
phistory = pq.map(load_history, limit = 20)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/utils.py", line 160, in positional_wrapper
return wrapped(*args, **kwds)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/query.py", line 1190, in map
**q_options).get_result()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/tasklets.py", line 383, in get_result
self.check_success()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/tasklets.py", line 624, in _finish
result = [r.get_result() for r in self._results]
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/tasklets.py", line 383, in get_result
self.check_success()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/ndb/tasklets.py", line 430, in _help_tasklet_along
value = gen.send(val)
File "/Programming/VirtualCellar/server/virtsom.py", line 2274, in load_history
"at_key": at.rkey,
AttributeError: 'dict' object has no attribute 'rkey'
</code></pre>
<p>Based on the error, it almost seems like tasklets don't like returning dictionaries...</p>
|
<p>You're misinterpreting the error message: a dict is returned, but you're trying to access an attribute of that dict which doesn't exist. </p>
<p>It appears you want to access <em>a value inside</em> that dict, not an <em>attribute</em> of the dict. If so, then change inside <code>load_history</code>:</p>
<pre><code> "at_key": at.rkey,
</code></pre>
<p>to:</p>
<pre><code> "at_key": at.get('rkey'),
</code></pre>
<p>or:</p>
<pre><code> "at_key": at['rkey'],
</code></pre>
<p>You'll also want to revisit the other, similar attribute accesses on the <code>at</code> and <code>item</code> dicts.</p>
|
python|google-app-engine|app-engine-ndb|tasklet
| 1 |
1,904,767 | 66,654,447 |
How to compare two arrays and eliminate duplicates python
|
<p>I'm trying to get clean values, remove duplicate values, and associate correctly this two arrays:</p>
<p>What i have:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">soid</th>
<th style="text-align: center;">cid</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;">a</td>
</tr>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;">b</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td style="text-align: center;">c</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td style="text-align: center;">d</td>
</tr>
<tr>
<td style="text-align: center;">3</td>
<td style="text-align: center;">e</td>
</tr>
<tr>
<td style="text-align: center;">4</td>
<td style="text-align: center;">f</td>
</tr>
</tbody>
</table>
</div>
<pre><code>soid = [1, 1, 2, 2, 3, 4]
cid = ["a", "b", "c", "d", "e", "f"]
</code></pre>
<p>The result has to be like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">soid</th>
<th style="text-align: center;">cid</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;">a // b</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td style="text-align: center;">c // d</td>
</tr>
<tr>
<td style="text-align: center;">3</td>
<td style="text-align: center;">e</td>
</tr>
<tr>
<td style="text-align: center;">4</td>
<td style="text-align: center;">f</td>
</tr>
</tbody>
</table>
</div>
<pre><code>soid = [1, 2, 3, 4]
cid = [ "a // b", "c // d", "e", "f"]
</code></pre>
<p>The input is not sorted.
The order of the relation between the columns is very important.</p>
|
<p>Here you go, this should satisfy your requirements.</p>
<p><strong>For (Python < 3.7)</strong> you need to use OrderedDict:</p>
<pre><code>from collections import OrderedDict
combiner = OrderedDict()
for x, y in zip(soid, cid):
combiner.setdefault(x, []).append(y)
soid = combiner.keys()
cid = [ r" // ".join(x) for x in combiner.values()]
</code></pre>
<p><strong>For (Python >= 3.7)</strong> you can simply do this:</p>
<pre><code>combiner = dict()
for x, y in zip(soid, cid):
combiner.setdefault(x, []).append(y)
soid = combiner.keys()
cid = [ r" // ".join(x) for x in combiner.values()]
</code></pre>
<p>This may be faster than the above methods, though I haven't tested it:</p>
<pre><code>from collections import defaultdict
combiner = defaultdict(lambda: [])
for x, y in zip(soid, cid):
combiner[x].append(y)
soid = combiner.keys()
cid = [ r" // ".join(x) for x in combiner.values()]
</code></pre>
<p><strong>Note:</strong>
From Python 3.7 onwards, the normal dict maintains the order of insertion of keys like the OrderedDict.</p>
<p>The keys will maintain insertion order (as it's an OrderedDict), and so will the values (as it's a list).</p>
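<p>Running any of the variants on the sample input gives the expected result:</p>
<pre><code>soid = [1, 1, 2, 2, 3, 4]
cid = ["a", "b", "c", "d", "e", "f"]
# ... run one of the loops above ...
print(list(soid))  # [1, 2, 3, 4]
print(cid)         # ['a // b', 'c // d', 'e', 'f']
</code></pre>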
|
python|arrays|concatenation
| 2 |
1,904,768 | 72,291,338 |
Map integer values to categories defined by range of integers in second dataframe Pandas
|
<p>I am trying to <strong>map the zip codes of a given data frame to regions provided by a second data frame</strong>.
The <strong>regions are defined by a range of integers</strong> (for example, range 1000-1299 is Noord-Holland, 1300-1379 is Flevoland and so on). The data looks like this:</p>
<pre><code>df1
zip_code state_name
0 7514 None
1 7891 None
2 2681 None
3 7606 None
4 5051 None
5 2611 None
6 4341 None
7 1851 None
8 1861 None
9 2715 None
df2
zpcd1 zpcd2 region
0 1000 1299 Noord-Holland
1 1300 1379 Flevoland
2 1380 1384 Noord-Holland
3 1390 1393 Utrecht
4 1394 1494 Noord-Holland
5 1396 1496 Utrecht
6 1398 1425 Noord-Holland
7 1426 1427 Utrecht
8 1428 1429 Zuid-Holland
9 1430 2158 Noord-Holland
</code></pre>
<p><em>The duplicates regions are ok, because one region can have different zip code ranges.</em></p>
<p>The question is: <strong>How do I map the zip code values in df1 to the ranges defined in df2 in order to assign the region name to that row?</strong></p>
<p>I tried</p>
<pre><code>def region_map(row):
global df2
if row['zip_code'] in range(nlreg.zpcd1, nlreg.zpcd2, 1):
return df2.region
df1['state_name'] = df1.apply(lambda row: region_map(row))
</code></pre>
<p>but it returns a KeyError: 'zip_code'.
Thank you in advance</p>
<p><strong>EDIT</strong></p>
<p>I got the result that I was searching for using</p>
<pre><code>df2['zip_c_range']=list(zip(df2.zpcd1, df2.zpcd2))
for i,v in tqdm(df1.zip_code.items()):
for x,z in df2.zip_c_range.items():
if v in range(*z):
df1['state_name'][i]=df2.region[x]
</code></pre>
<p>but I am sure that there is a better solution using lambda.</p>
|
<p>I think what you're trying to do is the following (nlreg being df2):</p>
<pre><code>def region_map(zc):
    match = nlreg.loc[(nlreg['zpcd1'] <= zc) & (zc <= nlreg['zpcd2']), 'region']
    return match.iloc[0] if not match.empty else None  # return a scalar, not a Series
df1['state_name'] = df1['zip_code'].apply(region_map)
</code></pre>
|
python|pandas|dataframe|mapping
| 0 |
1,904,769 | 72,150,369 |
Random access in a 7z file
|
<p>I have a 100 GB text file in a 7z archive. I can find a pattern <code>'hello'</code> in it by reading it by 1 MB block (7z outputs the data to stdout):</p>
<pre><code>Popen("7z e -so archive.7z big100gb_file.txt", stdout=PIPE)
while True:
block = proc.stdout.read(1024*1024) # 1 MB block
i += 1
...
if b'hello' in block: # omitting other details for search pattern split in consecutive blocks...
print('pattern found in block %i' % i)
...
</code></pre>
<p>Now that we have found, after 5 minutes of searching, that the pattern <code>'hello'</code> is, say, in the 23456th block, how can I access this block or line very quickly in the future inside the 7z file?</p>
<p>(without saving this data in another file, which I don't want - indexing is the goal, not duplication of data)</p>
<p><strong>With <code>7z</code>, how to seek in the middle of the file?</strong></p>
<p>Note: I already read <a href="https://stackoverflow.com/questions/4457997/indexing-random-access-to-7zip-7z-archives">Indexing / random access to 7zip .7z archives</a> and <a href="https://stackoverflow.com/questions/7882337/random-seek-in-7z-single-file-archive">random seek in 7z single file archive</a> but these questions don't discuss concrete implementation.</p>
|
<p>It is possible, in principle, to build an index to compressed data. You would pick, say, a block size of uncompressed data, where the start of each block would be an entry point at which you would be able to start decompressing. The index would be separate file or large structure in memory that you would build, with the entire decompression state saved for each entry point. You would need to decompress all of the compressed data once to build the index. The choice of block size would be a balance of how quickly you want to access any given byte in the compressed data, against the size of the index.</p>
<p>There are several different compression methods that 7z can use (deflate, lzma2, bzip2, ppmd). What you would need to do to implement this sort of random access would be entirely different for each method.</p>
<p>Also for each method there are better places to pick entry points than some fixed uncompressed block size. Such choices would greatly reduce the size of the index, taking advantage of the internal structure of the compressed data used by that method.</p>
<p>For example, bzip2 has natural entry points with no history at each bzip2 block, by default each with 900 KiB of uncompressed data. This allows the index to be quite small with just the compressed and uncompressed offsets needing to be saved.</p>
<p>For deflate, the entry points can be deflate blocks, where the index is the compressed and uncompressed offset of selected deflate blocks, along with the 32K dictionary for each entry point. <a href="https://github.com/madler/zlib/blob/master/examples/zran.c" rel="nofollow noreferrer">zran.c</a> implements such an index for deflate compressed data.</p>
<p>The decompression state at any point in an lzma2 or ppmd compressed stream is extremely large. I do not believe that such a random access approach could be practical for those compression methods. The compressed data formats would need to be modified to break it up into blocks at the time of compression, at some cost to the compression ratio.</p>
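<p>To make the idea concrete, here is a minimal sketch for the deflate case only; it assumes you re-compress the data once yourself, since stock .7z streams have no such entry points:</p>
<pre><code>import zlib

BLOCK = 1 << 20  # one entry point per 1 MiB of uncompressed data

def compress_with_index(data):
    comp = zlib.compressobj(9, zlib.DEFLATED, -15)  # raw deflate, no header
    out, index = bytearray(), []
    for off in range(0, len(data), BLOCK):
        index.append((off, len(out)))               # (uncompressed, compressed) offsets
        out += comp.compress(data[off:off + BLOCK])
        out += comp.flush(zlib.Z_FULL_FLUSH)        # resets history -> entry point
    out += comp.flush(zlib.Z_FINISH)
    return bytes(out), index

def read_block(compressed, index, block_no):
    unc_off, comp_off = index[block_no]
    d = zlib.decompressobj(-15)  # fresh state works: no history crosses an entry point
    return d.decompress(compressed[comp_off:], BLOCK)
</code></pre>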
|
python|compression|large-files|7zip|random-access
| 3 |
1,904,770 | 65,808,537 |
dblpy event isn't being called
|
<p>So my bot is on Top.gg, and I'm using dblpy to interact with the api. But for some reason the <code>on_dbl_vote</code> event is not being called, I can make commands to update the server count using <code>dblpy.post_guild_count()</code>, and get past votes using <code>get_bot_votes()</code>, so I really don't know what's wrong.</p>
<p>I've attached my code for the cog below, any help would be great :)</p>
<pre class="lang-py prettyprint-override"><code>import dbl as DBL
from discord.ext import commands
class dblcog(commands.Cog):
def __init__(self, bot):
self.bot = bot
self.dblpy = DBL.DBLClient(self.bot, TOKEN, webhook_path=WEBHOOK,
webhook_auth=WEBHOOK_AUTH, webhook_port=PORT)
@commands.Cog.listener()
async def on_dbl_vote(self, data):
print("VOTE RECIEVED\n{data}")
def setup(bot):
bot.add_cog(dblcog(bot))
</code></pre>
|
<p>If other methods such as <code>dblpy.post_guild_count</code> work, it is most likely a configuration issue.</p>
<p>Make sure you are introducing the correct <em>URL</em> (<code>http://serverip:webhook_port/webhook_path</code>) to your webserver and its authorization key on <em>top.gg</em></p>
<p>It could also be a firewall problem, make sure your device does not have a firewall that could be blocking the requests on the webhook port. You might need to perform port forwarding.</p>
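<p>A quick way to verify the listener is reachable is to post a fake vote to it yourself. This is only a sketch: <code>PORT</code>, <code>WEBHOOK</code> and <code>WEBHOOK_AUTH</code> are the same placeholders as in your cog, and the payload mimics top.gg's vote format:</p>
<pre><code>import requests

r = requests.post(
    f"http://localhost:{PORT}{WEBHOOK}",          # WEBHOOK must start with '/'
    headers={"Authorization": WEBHOOK_AUTH},
    json={"bot": "YOUR_BOT_ID", "user": "SOME_USER_ID", "type": "test"},
)
print(r.status_code)  # 200 means the webserver fired and on_dbl_vote should run
</code></pre>
<p>If this works locally but real votes never arrive, the problem is almost certainly the URL or port forwarding configured on top.gg.</p>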
<p>You should have a look at their <a href="https://dblpy.readthedocs.io/en/latest/api.html" rel="nofollow noreferrer"><strong>documentation</strong></a> and contact them for any further inquiries, as this does not look to be related to programming.</p>
|
python|python-3.x|discord|discord.py
| 1 |
1,904,771 | 65,565,348 |
How to find the lows of a stock for each trading day?
|
<p>So I have a csv with minute stock data for Microsoft. I am trying to find the low of each trading day. The code looks like:</p>
<pre><code>ticker='MSFT'
df = pd.read_csv('/Volumes/Seagate Portable/S&P 500 List/{}.txt'.format(ticker))
df.columns = ['Extra', 'Dates', 'Open', 'High', 'Low', 'Close', 'Volume']
df.Dates = pd.to_datetime(df.Dates)
df.set_index(df.Dates, inplace=True)
df.drop(['Extra', 'High', 'Volume', 'Dates', 'Open'], axis=1, inplace=True)
df = df.between_time('9:30', '16:00')
df['Low'] = df.Low.groupby(by=[df.index.day]).min()
df
</code></pre>
<p>The output is:</p>
<pre><code> Low Close
Dates
2020-01-02 09:30:00 NaN 158.610
2020-01-02 09:31:00 NaN 158.380
2020-01-02 09:32:00 NaN 158.620
2020-01-02 09:33:00 NaN 158.692
2020-01-02 09:34:00 NaN 158.910
... ... ...
2020-12-18 15:56:00 NaN 218.700
2020-12-18 15:57:00 NaN 218.540
2020-12-18 15:58:00 NaN 218.710
2020-12-18 15:59:00 NaN 218.150
2020-12-18 16:00:00 NaN 218.500
</code></pre>
<p>So the issue is that the lows are filled with NaN values, which I am assuming is because I am misusing groupby. I have also tried:</p>
<pre><code>ticker='MSFT'
df = pd.read_csv('/Volumes/Seagate Portable/S&P 500 List/{}.txt'.format(ticker))
df.columns = ['Extra', 'Dates', 'Open', 'High', 'Low', 'Close', 'Volume']
df.Dates = pd.to_datetime(df.Dates)
df.set_index(df.Dates, inplace=True)
df.drop(['Extra', 'High', 'Volume', 'Dates', 'Open'], axis=1, inplace=True)
df = df.between_time('9:30', '16:00')
df = df.groupby(by=[df.index.day]).min()
df
</code></pre>
<p>The output to this is:</p>
<pre><code> Low Close
Dates
1 150.8200 150.9800
2 150.3600 150.8400
3 152.1900 152.2800
4 165.6200 165.7000
5 165.6900 165.8200
6 156.0000 156.0700
7 157.3200 157.3500
8 157.9491 158.0000
9 150.0000 150.2700
10 152.5800 152.7950
11 151.1500 151.1930
12 138.5800 138.7600
13 140.7300 140.8700
14 161.7200 161.7500
15 162.5700 162.6300
16 135.0000 135.3300
17 135.0000 135.3400
18 135.0200 135.2600
19 139.0000 139.1300
20 135.8600 136.5900
21 166.1102 166.2100
22 165.6800 165.6900
23 132.5200 132.7100
24 141.2700 141.6481
25 144.4400 144.8102
26 148.3700 149.7000
27 149.2000 149.2700
28 152.0000 153.8152
29 165.6900 165.7952
30 150.0100 152.7200
31 156.5600 157.0450
</code></pre>
<p>The issue with this is that it is finding the low of both the Low and Close columns. Also there are only 31 total rows, though there should be more given this is a dataset covering all of 2020. I assume I am grouping wrong, because I looked at the close prices of each day for the first 31 days, and there is no way that these are the lows of each of those days. So the question is: how can I find the lows of each day, without affecting the Close column, and avoiding the issues mentioned above?</p>
|
<p>Try this:</p>
<pre><code>unique_dates = sorted(set(str(date).split()[0] for date in df.index))
# partial-string indexing: the bare date 'YYYY-MM-DD' selects every minute bar of that day,
# whereas df.index == date would compare against midnight exactly and match nothing
min_values_daily = [df.loc[date, 'Close'].min() for date in unique_dates]
</code></pre>
<p>And finally, create a new dataframe:</p>
<pre><code>low_data = pd.DataFrame({
'date': unique_dates,
'low': min_values_daily
})
</code></pre>
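<p>Alternatively, a one-line sketch that groups by the full calendar date (grouping by <code>df.index.day</code> is what collapsed 2020 into 31 rows in the question):</p>
<pre><code>daily_lows = df.groupby(df.index.date)['Low'].min()
</code></pre>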
|
python|pandas|dataframe|datetime|indexing
| 1 |
1,904,772 | 50,755,540 |
Using CreateChatRequest occurs Not enough users (to create a chat, for example)
|
<p><code>telethon</code> version: 0.19.1.4</p>
<p>Python version: 3.6.</p>
<p>When using <code>CreateChatRequest</code> to create a group, it occurs error like this:</p>
<pre><code>CreateChatRequest occurs Not enough users (to create a chat, for example).
</code></pre>
<p>And my code like this: </p>
<pre><code>user = InputUser(user_id=12345, access_hash=12345678901234)
client(CreateChatRequest([user], title=title))
</code></pre>
<p><code>user_id</code> and <code>access_hash</code> are correct, but I'm confused by the error message.</p>
|
<p>I ran into this problem, as well, even when I tried to supply 2 or 3 users in the initial user list. In the end I figured out the problem: the users weren't letting me add them to the group since I wasn't in their contacts, and their privacy settings did not allow such things. I was able to create the group using just one user whose permissions allowed me to add them.</p>
|
python|telegram|telethon
| 0 |
1,904,773 | 35,194,007 |
Scoring regression model using PMML with Augustus in Python
|
<p>I have a PMML file (below) generated from an R linear model from my colleague that is to be used to predict the cost of an item based on 5 features. I am trying to consume this model using Augustus in Python and make these predictions. I have been successful in getting the PMML file loaded by Augustus but I am failing to get the predicted values.</p>
<p>I've gone through many examples from Augustus's <a href="http://augustusdocs.appspot.com/docs/v06/model_abstraction/augustus_and_pmml.html" rel="nofollow noreferrer">Model abstraction</a> and through searching Stack and Google but I have yet to find any examples of linear regression being successfully used. There was one <a href="https://stackoverflow.com/questions/19997433/how-to-score-a-linear-model-using-pmml-file-and-augustus-on-python">similar question asked previously</a> but it was never properly answered. I have also tried other <a href="http://dmg.org/pmml/pmml_examples/index.html" rel="nofollow noreferrer">example regression PMML files</a> with similar results.</p>
<p><strong>How can I run the regression using Augustus (or other library) in Python and obtain the predictions?</strong></p>
<p><strong>PMML Code:</strong> linear_model.xml</p>
<pre><code><?xml version="1.0"?>
<PMML version="4.1" xmlns="http://www.dmg.org/PMML-4_1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.dmg.org/PMML-4_1 http://www.dmg.org/v4-1/pmml-4-1.xsd">
<Header copyright="Copyright (c) 2016 root" description="Linear Regression Model">
<Extension name="user" value="root" extender="Rattle/PMML"/>
<Application name="Rattle/PMML" version="1.4"/>
<Timestamp>2016-02-02 19:20:59</Timestamp>
</Header>
<DataDictionary numberOfFields="6">
<DataField name="cost" optype="continuous" dataType="double"/>
<DataField name="quantity" optype="continuous" dataType="double"/>
<DataField name="total_component_weight" optype="continuous" dataType="double"/>
<DataField name="quantity_cost_mean" optype="continuous" dataType="double"/>
<DataField name="mat_quantity_cost_mean" optype="continuous" dataType="double"/>
<DataField name="solid_volume" optype="continuous" dataType="double"/>
</DataDictionary>
<RegressionModel modelName="Linear_Regression_Model" functionName="regression" algorithmName="least squares" targetFieldName="cost">
<MiningSchema>
<MiningField name="cost" usageType="predicted"/>
<MiningField name="quantity" usageType="active"/>
<MiningField name="total_component_weight" usageType="active"/>
<MiningField name="quantity_cost_mean" usageType="active"/>
<MiningField name="mat_quantity_cost_mean" usageType="active"/>
<MiningField name="solid_volume" usageType="active"/>
</MiningSchema>
<Output>
<OutputField name="Predicted_cost" feature="predictedValue"/>
</Output>
<RegressionTable intercept="-5.18924891969128">
<NumericPredictor name="quantity" exponent="1" coefficient="0.0128484453941352"/>
<NumericPredictor name="total_component_weight" exponent="1" coefficient="12.0357979395919"/>
<NumericPredictor name="quantity_cost_mean" exponent="1" coefficient="0.500814050845585"/>
<NumericPredictor name="mat_quantity_cost_mean" exponent="1" coefficient="0.556822746464491"/>
<NumericPredictor name="solid_volume" exponent="1" coefficient="0.000197314943339284"/>
</RegressionTable>
</RegressionModel>
</PMML>
</code></pre>
<p><strong>Python Code:</strong></p>
<pre><code>import pandas as pd
from augustus.strict import *
train_full_df = pd.read_csv('train_data.csv', low_memory=False)
model = modelLoader.loadXml('linear_model.xml')
dataTable = model.calc({'quantity': train_full_df.quantity[:10],
'total_component_weight': train_full_df.total_component_weight[:10],
'quantity_cost_mean': train_full_df.quantity_cost_mean[:10],
'mat_quantity_cost_mean': train_full_df.mat_quantity_cost_mean[:10],
'solid_volume': train_full_df.solid_volume[:10],
})
dataTable.look()
</code></pre>
<p>(output)</p>
<pre><code># | quantity | total_comp | quantity_c | mat_quanti | solid_volu
---+------------+------------+------------+------------+-----------
0 | 1.0 | 0.018 | 32.2903337 | 20.4437141 | 1723.48653
1 | 2.0 | 0.018 | 17.2369194 | 12.0418426 | 1723.48653
2 | 5.0 | 0.018 | 10.8846412 | 7.22744702 | 1723.48653
3 | 10.0 | 0.018 | 6.82802948 | 4.3580642 | 1723.48653
4 | 25.0 | 0.018 | 4.84356482 | 3.09218161 | 1723.48653
5 | 50.0 | 0.018 | 4.43703495 | 2.74377648 | 1723.48653
6 | 100.0 | 0.018 | 4.22259101 | 2.5990824 | 1723.48653
7 | 250.0 | 0.018 | 4.1087198 | 2.53432422 | 1723.48653
8 | 1.0 | 0.018 | 32.2903337 | 20.4437141 | 1723.48653
9 | 2.0 | 0.018 | 17.2369194 | 12.0418426 | 1723.48653
</code></pre>
<p>As you can see from the table, only the input values are being displayed and no "cost" values. How do I get the cost to be predicted?</p>
<p>I am using Python 2.7, Augustus 0.6 (also tried 0.5), OS X 10.11</p>
|
<p>You could use PyPMML to score PMML models in Python. Taking your model as an example:</p>
<pre><code>import pandas as pd
from pypmml import Model
model = Model.fromString('''<?xml version="1.0"?>
<PMML version="4.1" xmlns="http://www.dmg.org/PMML-4_1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.dmg.org/PMML-4_1 http://www.dmg.org/v4-1/pmml-4-1.xsd">
<Header copyright="Copyright (c) 2016 root" description="Linear Regression Model">
<Extension name="user" value="root" extender="Rattle/PMML"/>
<Application name="Rattle/PMML" version="1.4"/>
<Timestamp>2016-02-02 19:20:59</Timestamp>
</Header>
<DataDictionary numberOfFields="6">
<DataField name="cost" optype="continuous" dataType="double"/>
<DataField name="quantity" optype="continuous" dataType="double"/>
<DataField name="total_component_weight" optype="continuous" dataType="double"/>
<DataField name="quantity_cost_mean" optype="continuous" dataType="double"/>
<DataField name="mat_quantity_cost_mean" optype="continuous" dataType="double"/>
<DataField name="solid_volume" optype="continuous" dataType="double"/>
</DataDictionary>
<RegressionModel modelName="Linear_Regression_Model" functionName="regression" algorithmName="least squares" targetFieldName="cost">
<MiningSchema>
<MiningField name="cost" usageType="predicted"/>
<MiningField name="quantity" usageType="active"/>
<MiningField name="total_component_weight" usageType="active"/>
<MiningField name="quantity_cost_mean" usageType="active"/>
<MiningField name="mat_quantity_cost_mean" usageType="active"/>
<MiningField name="solid_volume" usageType="active"/>
</MiningSchema>
<Output>
<OutputField name="Predicted_cost" feature="predictedValue"/>
</Output>
<RegressionTable intercept="-5.18924891969128">
<NumericPredictor name="quantity" exponent="1" coefficient="0.0128484453941352"/>
<NumericPredictor name="total_component_weight" exponent="1" coefficient="12.0357979395919"/>
<NumericPredictor name="quantity_cost_mean" exponent="1" coefficient="0.500814050845585"/>
<NumericPredictor name="mat_quantity_cost_mean" exponent="1" coefficient="0.556822746464491"/>
<NumericPredictor name="solid_volume" exponent="1" coefficient="0.000197314943339284"/>
</RegressionTable>
</RegressionModel>
</PMML>''')
data = pd.DataFrame({
'quantity': [1.0,2.0,5.0,10.0,25.0,50.0,100.0,250.0,1.0,2.0],
'total_component_weight': [0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018, 0.018],
'quantity_cost_mean': [32.2903337,17.2369194,10.8846412,6.82802948,4.84356482,4.43703495,4.22259101,4.1087198,32.2903337,17.2369194],
'mat_quantity_cost_mean': [20.4437141,12.0418426,7.22744702,4.3580642 ,3.09218161,2.74377648,2.5990824 ,2.53432422,20.4437141,12.0418426],
'solid_volume': [1723.48653,1723.48653,1723.48653,1723.48653,1723.48653,1723.48653,1723.48653,1723.48653,1723.48653,1723.48653]
})
result = model.predict(data)
</code></pre>
<p>The result is:</p>
<pre><code> Predicted_cost
0 22.935291
1 10.730825
2 4.907295
3 1.342192
4 -0.163801
5 -0.240186
6 0.214271
7 2.048450
8 22.935291
9 10.730825
</code></pre>
|
python|xml|xsd|linear-regression|pmml
| 1 |
1,904,774 | 26,923,010 |
How do I hide the entire tab bar in a tkinter ttk.Notebook widget?
|
<p>How do I hide the tab bar in a ttk Notebook widget? I don't want to hide the frame that belongs to the tab. I just want to remove the tab bar from sight, even where it's not at the top of the screen (for more than one purpose).</p>
<p>Anyway, it would be nice for fullscreen mode.</p>
|
<p>From the help on <code>tkinter.ttk.Style</code>:</p>
<blockquote>
<p><code>layout(self, style, layoutspec=None)</code></p>
<p>Define the widget layout for given style. If <code>layoutspec</code> is omitted, return the layout specification for given style.</p>
<p><code>layoutspec</code> is expected to be a list or an object different than <code>None</code> that evaluates to <code>False</code> if you want to "turn off" that style.</p>
</blockquote>
<p>try this:</p>
<pre><code>import tkinter as tk
from tkinter import ttk
root = tk.Tk()
style = ttk.Style()
style.layout('TNotebook.Tab', []) # turn off tabs
note = ttk.Notebook(root)
f1 = ttk.Frame(note)
txt = tk.Text(f1, width=40, height=10)
txt.insert('end', 'Page 0 : a text widget')
txt.pack(expand=1, fill='both')
note.add(f1)
f2 = ttk.Frame(note)
lbl = tk.Label(f2, text='Page 1 : a label')
lbl.pack(expand=1, fill='both')
note.add(f2)
note.pack(expand=1, fill='both', padx=5, pady=5)
def do_something():
note.select(1)
root.after(3000, do_something)
root.mainloop()
</code></pre>
|
python|tkinter|fullscreen|ttk
| 7 |
1,904,775 | 45,100,049 |
Python 3: accessing windows share under Linux by UNC path
|
<p>First, let's ensure that the Windows share is accessible:</p>
<pre><code>$ sudo mkdir /mnt/test
</code></pre>
<p>Let's try to mount it, but it fails:</p>
<pre><code>$ sudo mount -t cifs //192.168.0.10/work /mnt/test
mount: wrong fs type, bad option, bad superblock on //192.168.0.10/work,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
</code></pre>
<p>But if we provide a dummy user/pass (i.e. literally 'USERNAME' and 'PASSWD'), the mount succeeds:</p>
<pre><code>$ sudo mount -t cifs -o username=USERNAME,password=PASSWD //192.168.0.10/work /mnt/test
$ ls /mnt/test/*.txt
/mnt/test/1.txt
$ umount test
</code></pre>
<p>Now let's try Python:</p>
<pre><code>$ python -V
Python 3.5.2+
$ python
>>> import os
>>> os.listdir(r'//192.168.0.10/work')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: '//192.168.0.10/work'
</code></pre>
<p>I tried four slashes, backslashes, combinations of them, with or without <code>r</code>, and unicode escaping (<code>bytes(path, "utf-8").decode("unicode_escape")</code>); all of these fail with <code>No such file or directory</code>. Maybe the reason for this failure is the user/pass, but I have no idea how to add it to the UNC path.</p>
<p>P.S. I also tried the <code>pysmb</code> library; it works fine without user/pass. But I don't want to use an additional lib if I can avoid it.</p>
|
<p>You must mount a UNC path on Linux. The OS has no way of understanding two backslashes except when mounting the path. So to make this automatic, just write some Python code to call the necessary Linux commands to accomplish the mount task and then refer to a file path as you normally would.</p>
<p>Example running Linux "ls" command from Python.</p>
<pre><code>import os
os.system('ls')
</code></pre>
<p>Now follow one of these two methods.</p>
<p><a href="https://unix.stackexchange.com/questions/18925/how-to-mount-a-device-in-linux">https://unix.stackexchange.com/questions/18925/how-to-mount-a-device-in-linux</a></p>
<p><a href="https://www.putorius.net/mount-windows-share-linux.html" rel="nofollow noreferrer">https://www.putorius.net/mount-windows-share-linux.html</a></p>
|
python|unc
| 0 |
1,904,776 | 44,872,859 |
How to print a maze saved as a dictionary in python?
|
<p>I have a function that reads a maze from a .txt file and converts it into a dictionary. The dictionary's keys are the cells of the maze, and the values are the cardinal points. A value of 'True' means that a wall is present to the north, and so on.
This is what my function does:</p>
<pre><code>def importa(maze):
lab=open(maze, 'r')
l=list(lab.readlines())
if len(l)==0:
return None
righe=len(l)
colonne=(len(l[0])-1)
maze=dict()
for r in range(1,righe-1,2):
for c in range(1,colonne-1,2):
nord=l[r-1][c]=='*'
sud=(l[r+1][c]=='*')
est=(l[r][c+1]=='*')
ovest=(l[r][c-1]=='*')
maze[(r//2,c//2)]=[{'N':nord,'S':sud,'E':est,'O':ovest},'']
#la stringa vuota è lo stato della cella
return maze
</code></pre>
<p>For example a maze is: </p>
<pre><code>{(0, 0): [{'N': True, 'S': False, 'E': True, 'O': True}, ''], (0, 1): [{'N': True, 'S': False, 'E': True, 'O': True}, ''], (0, 2): [{'N': True, 'S': True, 'E': False, 'O': True}, ''], (0, 3): [{'N': True, 'S': False, 'E': False, 'O': False}, ''], (0, 4): [{'N': True, 'S': True, 'E': True, 'O': False}, ''], (1, 0): [{'N': False, 'S': True, 'E': False, 'O': True}, ''], (1, 1): [{'N': False, 'S': False, 'E': False, 'O': False}, ''], (1, 2): [{'N': True, 'S': False, 'E': False, 'O': False}, ''], (1, 3): [{'N': False, 'S': True, 'E': True, 'O': False}, ''], (1, 4): [{'N': True, 'S': False, 'E': False, 'O': True}, ''], (2, 0): [{'N': True, 'S': True, 'E': False, 'O': True}, ''], (2, 1): [{'N': False, 'S': False, 'E': True, 'O': False}, ''], (2, 2): [{'N': False, 'S': False, 'E': False, 'O': True}, ''], (2, 3): [{'N': True, 'S': True, 'E': False, 'O': False}, ''], (2, 4): [{'N': False, 'S': False, 'E': True, 'O': False}, ''], (3, 0): [{'N': True, 'S': False, 'E': True, 'O': True}, ''], (3, 1): [{'N': False, 'S': True, 'E': True, 'O': True}, ''], (3, 2): [{'N': False, 'S': True, 'E': False, 'O': True}, ''], (3, 3): [{'N': True, 'S': False, 'E': True, 'O': False}, ''], (3, 4): [{'N': False, 'S': False, 'E': True, 'O': True}, ''], (4, 0): [{'N': False, 'S': True, 'E': False, 'O': True}, ''], (4, 1): [{'N': True, 'S': True, 'E': False, 'O': False}, ''], (4, 2): [{'N': True, 'S': True, 'E': False, 'O': False}, ''], (4, 3): [{'N': False, 'S': True, 'E': True, 'O': False}, ''], (4, 4): [{'N': False, 'S': True, 'E': True, 'O': True}, '']}
</code></pre>
<p>I need to write a function that prints my maze like this:
<a href="https://i.stack.imgur.com/9uRcU.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>Can someone help me?</p>
<p>I tried to build a function but it seems very long...</p>
<pre><code>def stampa(L):
labirinto=[]
for r in range(righe_colonne(L)[0]):
riga=[]
for cella in celle(L):
if cella[0]==r:
if L[cella][0]['O']==True:
riga.append('| ')
if L[cella][0]['O']==False:
riga.append(' ')
if cella[1]==righe_colonne(L)[1]-1:
if L[cella][0]['E']==True:
riga.append('|')
if L[cella][0]['E']==False:
riga.append(' ')
labirinto.append(''.join(str(x) for x in riga))
#cosi ho le righe pari, quelle che contengono le celle.
#ora voglio le righe fra le celle
labirinto2=[]
for r in range(righe_colonne(L)[0]):
riga=['+']
for cella in celle(L):
if cella[0]==r:
if L[cella][0]['N']==True:
riga.append('---+')
if L[cella][0]['N']==False:
riga.append(' +')
labirinto2.append(''.join(str(x) for x in riga))
m=''
for e in range(0, len(labirinto)):
m+=labirinto2[e]+'\n'+labirinto[e]+'\n'
#aggiungo l'ultima riga
riga=[]
for cella in celle(L):
if cella[0]==righe_colonne(L)[0]-1:
if L[cella][0]['S']==True:
riga.append('+---')
if L[cella][0]['S']==False:
riga.append(' ')
riga.append('+')
m+=(''.join(str(x) for x in riga))
m+='\n'
print(m)
</code></pre>
|
<p>You can use <code>matplotlib</code> for that purpose:</p>
<pre><code>import matplotlib.pyplot as plt
for i, j in maze:
walls, _ = maze[i, j]
if walls['N']:
plt.plot([i, i+1], [j+1, j+1], 'b-')
if walls['E']:
plt.plot([i+1, i+1], [j, j+1], 'b-')
if walls['S']:
plt.plot([i, i+1], [j, j], 'b-')
if walls['O']:
plt.plot([i, i], [j, j+1], 'b-')
plt.show()
</code></pre>
<p>For your example this gives:</p>
<p><a href="https://i.stack.imgur.com/nutxQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nutxQ.png" alt="enter image description here"></a></p>
|
python|dictionary|printing|maze
| 0 |
1,904,777 | 64,742,206 |
python urllib3 lambda Error: LocationParseError Failed to parse
|
<p>I'm using the urllib3 library in Lambda with Python 3 code that fetches the MS Teams webhook URL from AWS Secrets Manager and sends an HTTP POST request to publish a notification.</p>
<p>My webhook url starts with https and looks like this "https://outlook.office.com/webhook/.......". On executing the lambda function, I get an error as shown below <code>LocationParseError Failed to parse:</code></p>
<p>Code</p>
<pre><code>import os
import base64
import json

import boto3
import urllib3

http = urllib3.PoolManager()
MSTEAMS_WEBHOOK_SECRET_NAME = os.getenv('MSTEAMS_WEBHOOK_SECRET_NAME')
HOOK_URL = get_secret(MSTEAMS_WEBHOOK_SECRET_NAME,"eu-west-1")
def get_secret(secret_name, region_name):
# Create a Secrets Manager client
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager',
region_name=region_name
)
get_secret_value_response = client.get_secret_value(
SecretId=secret_name,
VersionStage="AWSCURRENT"
)
if 'SecretString' in get_secret_value_response:
secret = get_secret_value_response['SecretString']
return secret
else:
decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
return decoded_binary_secret
def lambda_handler(event, context):
message = {
"@context": "https://schema.org/extensions",
"@type": "MessageCard",
"themeColor": data["colour"],
"title": title,
"text": "accountId:\n" + account_id + " <br/>\n"
}
webhook_encoded_body = json.dumps(message).encode('utf-8')
response = http.request('POST',HOOK_URL, body=webhook_encoded_body)
</code></pre>
<p>errorMessage</p>
<pre><code>{
"errorMessage": "Failed to parse: {\"msteams-secret\":\"https://outlook.office.com/webhook/dxxxxxx@d779xxxxx-xxxxxx/IncomingWebhook/axxxxxx5/ca746326-bxxx-4xxx-8x-xxxxx\"}",
"errorType": "LocationParseError",
"stackTrace": [
[
"/var/task/lambda_function.py",
145,
"lambda_handler",
"resp = http.request('POST',HOOK_URL, body=webhook_encoded_body)"
],
[
"/var/runtime/urllib3/request.py",
80,
"request",
"method, url, fields=fields, headers=headers, **urlopen_kw"
],
[
"/var/runtime/urllib3/request.py",
171,
"request_encode_body",
"return self.urlopen(method, url, **extra_kw)"
],
[
"/var/runtime/urllib3/poolmanager.py",
324,
"urlopen",
"u = parse_url(url)"
],
[
"/var/runtime/urllib3/util/url.py",
392,
"parse_url",
"return six.raise_from(LocationParseError(source_url), None)"
],
[
"<string>",
3,
"raise_from",
""
]
]
}
</code></pre>
|
<p>Here is how I solved it</p>
<ul>
<li>Deployed the lambda zip file again, with the needed dependencies like requests and urllib3 in the same folder</li>
<li>Apparently, I was storing the secret as a key/value pair in AWS Secrets Manager, so urllib3 was handed a JSON dictionary rather than a plain URL. I changed the secret type to plaintext (an alternative sketch follows below)</li>
</ul>
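<p>If you'd rather keep the key/value secret, a sketch of parsing the JSON yourself (the key name <code>msteams-secret</code> comes from the error message above):</p>
<pre><code>import json

secret = get_secret(MSTEAMS_WEBHOOK_SECRET_NAME, "eu-west-1")
HOOK_URL = json.loads(secret)["msteams-secret"]  # now a plain URL string
</code></pre>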
|
python|lambda|microsoft-teams|urllib3|aws-secrets-manager
| 0 |
1,904,778 | 60,536,771 |
Complicated Regex Pattern
|
<p>I have the following literal strings that I'm looping through:</p>
<pre><code>Some prior text <COMPANY-IDENTIFIER>oranges.txt : 3254323
Some prior text <COMPANY-IDENTIFIER>raisins.txt : 6434422
Some prior text <COMPANY-IDENTIFIER>apples.txt : 932323
</code></pre>
<p>I'm trying to split the strings on: <code><COMPANY-IDENTIFIER></code>, the file name, and the <code>:</code></p>
<p>I believe <code><</code> and <code>></code> are special regex characters, and the file name changes for each string.</p>
<p>I have used variations of the following pattern to split on:</p>
<pre><code>pattern = '<COMPANY-IDENTIFIER>(.*): ' #supposed to detect <COMPANY-IDENTIFIER>apples.txt : , etc
the_number = string.split(pattern)[1]
</code></pre>
<p>But my pattern isn't working.</p>
<p>Looking for guidance on what I'm doing wrong.</p>
<p>Thanks.</p>
|
<p>You should use regular expressions, <code>re.split()</code>, not <code>str.split()</code>. Also, remove the parentheses from the pattern:</p>
<pre><code>import re

pattern = '<COMPANY-IDENTIFIER>.*: '
re.split(pattern, txt)
#['Some prior text ', '3254323']
</code></pre>
|
python|regex
| 1 |
1,904,779 | 57,954,930 |
How to get the camera matrix used for calibrating the camera
|
<p>I'm using an acA4112-30uc Basler ace camera. I'm trying to get rid of distortion, but in order to do that I have to get the "mtx" parameter, which I have no idea how to get.</p>
|
<p>The theory of camera calibration with OpenCV is here: <a href="https://docs.opencv.org/3.4/d4/d94/tutorial_camera_calibration.html" rel="nofollow noreferrer">OPENCV_DOC</a></p>
<p>First you need to fix the camera in place, then print a chessboard: <a href="https://gofile.io/?c=zMwu7m" rel="nofollow noreferrer">chessboardtoprint</a></p>
<p><strong>PART A: take images and perform calculations</strong></p>
<p>After printing, put it on a book, then take some images with the camera using this code:</p>
<pre><code>import numpy as np
import cv2
objp = np.zeros((6 * 7, 3), np.float32)
objp[ : , : 2] = np.mgrid[0 : 7, 0 : 6].T.reshape(-1, 2)
objpoints = []
imgpoints = []
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
cam = cv2.VideoCapture(0)
(w, h) = (int(cam.get(3)), int(cam.get(4)))  # 3 = frame width, 4 = frame height
while(True):
_ , frame = cam.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
ret, corners = cv2.findChessboardCorners(gray, (7, 6), None)
if ret == True:
objpoints.append(objp)
corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
imgpoints.append(corners)
cv2.drawChessboardCorners(frame, (7, 6), corners, ret)
cv2.imshow('Find Chessboard', frame)
cv2.waitKey(0)
cv2.imshow('Find Chessboard', frame)
print "Number of chess boards find:", len(imgpoints)
if cv2.waitKey(1) == 27:
break
ret, oldMtx, coef, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints,
gray.shape[: : -1], None, None)
newMtx, roi = cv2.getOptimalNewCameraMatrix(oldMtx, coef, (w, h), 1, (w, h))
print "Original Camera Matrix:\n", oldMtx
print "Optimal Camera Matrix:\n", newMtx
np.save("Original camera matrix", oldMtx)
np.save("Distortion coefficients", coef)
np.save("Optimal camera matrix", newMtx)
cam.release()
cv2.destroyAllWindows()
</code></pre>
<p>In this code you open the camera and hold the printed chessboard in many positions so the code can detect it for the matrix calculations. Each time the code detects the chessboard it shows the image and prints the number of detected boards. After enough images you can press Esc to finish the calculation.</p>
<p><strong>PART B: Use matrices to correct camera output</strong></p>
<p>You can use this code to test it:</p>
<pre><code>import numpy as np
import cv2
oldMtx = np.load("Original camera matrix.npy")
coef = np.load("Distortion coefficients.npy")
newMtx = np.load("Optimal camera matrix.npy")
cam = cv2.VideoCapture(0)
(w, h) = (int(cam.get(3)), int(cam.get(4)))  # 3 = frame width, 4 = frame height
while(True):
_ , frame = cam.read()
undis = cv2.undistort(frame, oldMtx, coef, newMtx)
cv2.imshow("Original vs Undistortion", np.hstack([frame, undis]))
key = cv2.waitKey(1)
if key == 27:
break
cam.release()
cv2.destroyAllWindows()
</code></pre>
<p>good luck!</p>
|
python|opencv
| 1 |
1,904,780 | 55,568,850 |
Check if key is pressed on background using win32api
|
<p>I'm trying to make a simple Python script to capture an image from my webcam on workstation unlock.
And I am making a "kill switch" that checks if a key is pressed, and if it is, the program will not run.
My problem is that I need to check if a key is pressed and I can't find a way to do that.
I have tried this:</p>
<pre><code> keyState = win32api.GetAsyncKeyState(17)
</code></pre>
<p>But it does not work.</p>
<p>From the documentation:</p>
<blockquote>
<p>The return value is zero if a window in another thread or process
currently has the keyboard focus.</p>
</blockquote>
<p>So it doesn't really help me.
I'm on Windows btw.</p>
|
<p>First, the <code>GetAsyncKeyState()</code> result also needs to be ANDed (&) with 0x8000 to ensure the key is down.</p>
<blockquote>
<p>Return Value</p>
<p>Type: SHORT</p>
<p>If the function succeeds, the return value specifies whether the key
was pressed since the last call to GetAsyncKeyState, and whether the
key is currently up or down. If the most significant bit is set, the
key is down, and if the least significant bit is set, the key was
pressed after the previous call to GetAsyncKeyState.</p>
</blockquote>
<p>Note that the return value is bit-encoded (not a boolean). You should mask off the least significant bit like:</p>
<pre><code>keyState = win32api.GetAsyncKeyState(17) & 0x8000
</code></pre>
<p>And there is a simple solution in Python that does not depend on window focus. You could get it through <a href="https://pypi.org/project/pynput/" rel="nofollow noreferrer">pynput</a>.</p>
<p>Command line:</p>
<pre><code>> pip install pynput
</code></pre>
<p>Python code:</p>
<pre><code>from pynput import keyboard

def on_press(key):
    try:
        k = key.char   # single-char keys
    except AttributeError:
        k = key.name   # other keys (ctrl, shift, ...)
    if k == 'ctrl_l':  # put whichever key you want to watch here
        pass           # To Do: your kill-switch handling

lis = keyboard.Listener(on_press=on_press)
lis.start()  # start to listen on a separate thread
lis.join()   # omit this if the main thread keeps running on its own
</code></pre>
|
python|winapi|keypress
| 0 |
1,904,781 | 57,556,092 |
How to add a column of results gotten from a method to an existing dataframe?
|
<p>I get a dataframe of exchange tokens as the following:</p>
<pre><code>Exchange=df[df["marketSegment"]=="Exchange"]
Exchange
</code></pre>
<p><img src="https://i.stack.imgur.com/E3e6Q.png" alt="Exchange result"></p>
<p>I want to add a column to the dataframe above to show each token's price.</p>
<p>From the following method, I can get each token's price:</p>
<pre><code>san.get(
"prices/huobi-token",
from_date="2018-06-01",
to_date="2018-06-05",
interval="1d"
)
</code></pre>
<p><img src="https://i.stack.imgur.com/gsDdb.png" alt="san result"></p>
<p>Can anyone tell me how to define a function or method to quickly work out every token's price and add them together as the last column of the dataframe?</p>
|
<p>Assuming that you want to use some kind of average price, you can define a function which will look up the price for a particular value of <code>slug</code>:</p>
<pre><code>from datetime import datetime

def averageprice(slug):
pricedf = san.get(
"prices/{}".format(slug),
from_date="2018-06-01",
to_date=datetime.now(),
interval="1d"
)
return pricedf['priceUsd'].mean()
</code></pre>
<p>Then you can create a new column by applying that function to the original dataframe:</p>
<pre><code>Exchange['price'] = Exchange['slug'].apply(averageprice)
</code></pre>
|
python|pandas|dataframe
| 1 |
1,904,782 | 57,708,023 |
Plotting the ROC curve of K-fold Cross Validation
|
<p>I am working with an imbalanced dataset. I have applied SMOTE Algorithm to balance the dataset after splitting the dataset into test and training set before applying ML models. I want to apply cross-validation and plot the ROC curves of each folds showing the AUC of each fold and also display the mean of the AUCs in the plot. I named the resampled training set variables as X_train_res and y_train_res and following is the code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from numpy import interp   # drop-in here for the deprecated scipy interp
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc

cv = StratifiedKFold(n_splits=10)
classifier = SVC(kernel='sigmoid',probability=True,random_state=0)
tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)
plt.figure(figsize=(10,10))
i = 0
for train, test in cv.split(X_train_res, y_train_res):
probas_ = classifier.fit(X_train_res[train], y_train_res[train]).predict_proba(X_train_res[test])
# Compute ROC curve and area the curve
fpr, tpr, thresholds = roc_curve(y_train_res[test], probas_[:, 1])
tprs.append(interp(mean_fpr, fpr, tpr))
tprs[-1][0] = 0.0
roc_auc = auc(fpr, tpr)
aucs.append(roc_auc)
plt.plot(fpr, tpr, lw=1, alpha=0.3,
label='ROC fold %d (AUC = %0.2f)' % (i, roc_auc))
i += 1
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(aucs)
plt.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
plt.xlim([-0.01, 1.01])
plt.ylim([-0.01, 1.01])
plt.xlabel('False Positive Rate',fontsize=18)
plt.ylabel('True Positive Rate',fontsize=18)
plt.title('Cross-Validation ROC of SVM',fontsize=18)
plt.legend(loc="lower right", prop={'size': 15})
plt.show()
</code></pre>
<p>following is the output:</p>
<p><a href="https://i.stack.imgur.com/fcqqV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fcqqV.png" alt="enter image description here" /></a></p>
<p>Please tell me whether the code is correct for plotting ROC curve for the cross-validation or not.</p>
|
<blockquote>
<p>The problem is that I do not clearly understand cross-validation. In the for loop range, I have passed the training sets of X and y variables. Does cross-validation work like this?</p>
</blockquote>
<p>Leaving SMOTE and the imbalance issue aside, which are not included in your code, your procedure looks correct.</p>
<p>In more detail, for each one of your <code>n_splits=10</code>:</p>
<ul>
<li><p>you create <code>train</code> and <code>test</code> folds</p>
</li>
<li><p>you fit the model using the <code>train</code> fold:</p>
<pre><code> classifier.fit(X_train_res[train], y_train_res[train])
</code></pre>
</li>
<li><p>and then you predict probabilities using the <code>test</code> fold:</p>
<pre><code> predict_proba(X_train_res[test])
</code></pre>
</li>
</ul>
<p>This is exactly the idea behind cross-validation.</p>
<p>So, since you have <code>n_splits=10</code>, you get 10 ROC curves and respective AUC values (and their average), exactly as expected.</p>
<p><strong>However</strong>:</p>
<p>The need for (SMOTE) upsampling due to the class imbalance changes the correct procedure, and turns your overall process <strong>incorrect</strong>: you should <strong>not</strong> upsample your initial dataset; instead, you need to incorporate the upsampling procedure into the CV process.</p>
<p>So, the correct procedure here for each one of your <code>n_splits</code> becomes (notice that starting with a stratified CV split, as you have done, becomes essential in class imbalance cases):</p>
<ul>
<li>create <code>train</code> and <code>test</code> folds</li>
<li>upsample your <code>train</code> fold with SMOTE</li>
<li>fit the model using the upsampled <code>train</code> fold</li>
<li>predict probabilities using the <code>test</code> fold (not upsampled); see the sketch just below</li>
</ul>
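<p>As a concrete sketch of that procedure, assuming the <code>imbalanced-learn</code> package (its <code>Pipeline</code> re-applies SMOTE inside each training fold only):</p>
<pre><code>from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.svm import SVC

model = Pipeline([
    ('smote', SMOTE(random_state=0)),  # fit_resample runs on the train fold only
    ('clf', SVC(kernel='sigmoid', probability=True, random_state=0)),
])
# use `model` in place of `classifier` in your existing CV loop;
# model.fit(...).predict_proba(...) works unchanged
</code></pre>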
<p>For details regarding the rationale, please see own answer in the Data Science SE thread <a href="https://datascience.stackexchange.com/questions/82073/why-you-shouldnt-upsample-before-cross-validation">Why you shouldn't upsample before cross validation</a>.</p>
|
python|machine-learning|scikit-learn|cross-validation|roc
| 3 |
1,904,783 | 54,052,147 |
scraping udemy page with python but can't get access
|
<p>I want to scrape Udemy course reviews but I can't get access to the web page.
When I try to read the page using Python I get this error:</p>
<pre><code>urllib.error.HTTPError: HTTP Error 403: Unauthorized
</code></pre>
<p><a href="https://i.stack.imgur.com/vsVma.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vsVma.png" alt="enter image description here" /></a></p>
|
<p>403 means that you were forbidden from accessing the Udemy website because of your scraper.
Check these links to get more info:
403 status code <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403" rel="nofollow noreferrer">https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403</a></p>
<p>robots.txt <a href="https://moz.com/learn/seo/robotstxt" rel="nofollow noreferrer">https://moz.com/learn/seo/robotstxt</a></p>
<p>This is Udemy's robots.txt link:
<a href="https://www.udemy.com/robots.txt" rel="nofollow noreferrer">https://www.udemy.com/robots.txt</a></p>
|
python-3.x
| 1 |
1,904,784 | 58,406,864 |
Python libraries for chatbot and installation procedures
|
<p>I'm doing my first project in Python and I'm not familiar with the language. I'm using Visual Studio Code version 1.38.1 and Python version 3.7.3, with the Flask framework. I need to create a chatbot. So which libraries do I have to use, and how do I install them?</p>
|
<p>You need the nltk library (Natural Language Toolkit), scikit-learn, random and string.
As a beginner I recommend this <a href="https://medium.com/analytics-vidhya/building-a-simple-chatbot-in-python-using-nltk-7c8c8215ac6e" rel="nofollow noreferrer">guide</a>, which can help you create your first chatbot from scratch.
To install the libraries you can just use pip3:</p>
<pre><code>pip3 install nltk sklearn
</code></pre>
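<p>After installing, most NLTK chatbot tutorials also need the tokenizer and lemmatizer data; a minimal first-run setup sketch:</p>
<pre><code>import nltk
nltk.download('punkt')    # tokenizer models
nltk.download('wordnet')  # lemmatizer data
</code></pre>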
|
python|flask|installation|chatbot
| 1 |
1,904,785 | 65,381,073 |
Tensorflow-GPU not using GPU with CUDA,CUDNN
|
<p>I want to use Tensorflow on the GPU, so I installed all the needed tools as below:</p>
<ol>
<li>CUDA-11.2</li>
<li>CUDNN-11.1</li>
<li>Anaconda-2020.11</li>
<li>Tensorflow-GPU-2.3.0
<a href="https://i.stack.imgur.com/uQNOH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uQNOH.png" alt="enter image description here" /></a></li>
</ol>
<p>I tested that my CUDA/cuDNN install is working using the deviceQuery example.
But Tensorflow did not use the GPU. Then I found that a version compatibility issue was possible, so I installed CudaToolkit/cuDNN in a conda environment, checking against the version compatibility table on the Tensorflow website, as given below.</p>
<ol>
<li>CUDA-10.2.89</li>
<li>CUDNN-7.6.5</li>
<li>Tensorflow-GPU-2.3.0</li>
</ol>
<p><a href="https://i.stack.imgur.com/GF37I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GF37I.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/C7pfz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C7pfz.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/lgCHn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lgCHn.png" alt="enter image description here" /></a></p>
<p>But after this attempt Tensorflow-GPU still does not use the GPU. So what should I do now? Any steps or suggestions are welcome.</p>
|
<p>The installation engine has a problem for <code>tensorflow-gpu 2.3</code> in Anaconda on Windows 10.</p>
<p>Workaround is to explicitly specify the correct tensorflow build:</p>
<pre><code>conda install tensorflow-gpu=2.3 tensorflow=2.3=mkl_py38h1fcfbd6_0
</code></pre>
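<p>After reinstalling, a quick check that the GPU is actually visible to TensorFlow:</p>
<pre><code>import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # should list at least one GPU device
</code></pre>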
|
python|tensorflow|gpu
| 6 |
1,904,786 | 22,585,176 |
Sorting list of names from external text file into common names (M, F and ALL) - Python 3
|
<p>This is my first ever post because I can't seem to find a solution to my problem. I have a text file that contains a simple line by line list of different names distinguished from male and female by an M or F next to it. A simple example of this is:</p>
<pre><code>John M
John M
Jim M
Jim M
Jim M
Jim M
Sally F
Sally F
</code></pre>
<p>You'll notice that names repeat, because I want the Python code to count which names occur the most and provide lists of the most common names overall, male names and female names. I am very new to Python and my understanding of many elements is limited at best.</p>
|
<p>Are you simply trying to group names into M and F categories?</p>
<p>If you only have two categories you could simply manually group them:</p>
<pre><code>>>> people = [('Mark', 'M'), ('Susan', 'F'), ('Mary', 'F'), ('Jake', 'M')]
>>> M_names = [ name for name, gender in people if gender == 'M' ]
>>> F_names = [ name for name, gender in people if gender == 'F' ]
>>> M_names, F_names
(['Mark', 'Jake'], ['Susan', 'Mary'])
</code></pre>
<p>However as you get more categories (undeclared gender, people who aren't exactly male/female, or grouping on something other than gender), <code>itertools</code> can give you a nicer solution:</p>
<pre><code>>>> people = [('Mark', 'M'), ('Susan', 'F'), ('Mary', 'F'), ('Jake', 'M'), ('Morgan', 'Undeclared')]
>>> dict((k, list(name for name, _ in g)) for k, g in itertools.groupby(sorted(people, key=lambda p: p[1]), key=lambda p: p[1]))
{'Undeclared': ['Morgan'], 'M': ['Mark', 'Jake'], 'F': ['Susan', 'Mary']}
</code></pre>
<p>That's a pretty complicated one-liner, but conceptually it's simple. First we have to sort the data by key, this is because <code>groupby</code> will sort in the order the data appears, and will create separate groups if there are non-continuous groups in the data. We then pass that sorted data to <code>groupby</code>, which returns an iterator that yields tuples of a type and another iterator of data elements that have that type. We pass that to <code>dict</code> to create a dictionary of type -> list of names with that type (stripping off the second type element of each tuple to avoid redundancy).</p>
<p>You could also write that line as:</p>
<pre><code>>>> genders_to_names = {}
>>> sorted_by_gender = sorted(people, key=lambda p: p[1]) # [('Susan', 'F'), ('Mary', 'F'), ('Mark', 'M'), ('Jake', 'M'), ('Morgan', 'Undeclared')]
>>> for gender, names in itertools.groupby(sorted_by_gender, key=lambda p: p[1]):
... genders_to_names[gender] = list(name for name, _ in names)
>>> print(genders_to_names)
{'Undeclared': ['Morgan'], 'M': ['Mark', 'Jake'], 'F': ['Susan', 'Mary']} # same as before
</code></pre>
<p>But who wants to do that ;)</p>
<p>Counting is very easy! Just import <code>collections</code> and use <code>Counter</code>:</p>
<pre><code>>>> collections.Counter(['Mark', 'Mark', 'Joe', 'John'])
Counter({'Mark': 2, 'John': 1, 'Joe': 1})
>>> collections.Counter(['Mark', 'Mark', 'Joe', 'John'])['Mark']
2
</code></pre>
|
python
| 0 |
1,904,787 | 45,606,906 |
Can't INSERT into Google Fusion Tables by two-legged OAuth in Python
|
<p>I'm trying to work with Google Fusion Tables using OAuth 2.0 for Server to Server (two-legged) with the following code: </p>
<pre><code>from apiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials
scopes = ['https://www.googleapis.com/auth/fusiontables']
credentials = ServiceAccountCredentials\
.from_json_keyfile_name('***file-name***.json', scopes)
fusiontables = build('fusiontables', 'v2', credentials=credentials)
obj = fusiontables.query().sql(sql='select * from ***table-id***').execute()
</code></pre>
<p>Everything is OK when I read from the table, but when I try to insert some data:</p>
<pre><code>obj = fusiontables.query().sql(
sql="INSERT INTO ***table-id*** (Location, Number) VALUES('Paris', 1234)")\
.execute()
</code></pre>
<p>I got the error:</p>
<pre><code>googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/fusiontables/v2/query?alt=json&sql=INSERT+INTO+***table-id***+%28Location%2C+Number%29+VALUES%28%27Paris%27%2C+1234%29 returned "Forbidden">
</code></pre>
<p>Question: does someone know what I am doing wrong to implement insertion into Google Fusion Table?</p>
<p><strong>UPD:</strong></p>
<p>I found that oauth2client is deprecated, and tried to make a query with <a href="https://google-auth.readthedocs.io" rel="nofollow noreferrer">google.auth</a> and Requests as described in the <a href="https://google-auth.readthedocs.io/en/latest/user-guide.html" rel="nofollow noreferrer">docs</a>:</p>
<pre><code>from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession
credentials = service_account.Credentials.from_service_account_file('***filename***.json')
if credentials.requires_scopes:
credentials = credentials.with_scopes(scopes=['https://www.googleapis.com/auth/fusiontables'])
authed_session = AuthorizedSession(credentials)
r = authed_session.post('https://www.googleapis.com/fusiontables/v2/query',
data={
"sql": "***My SQL***",
"hdrs": True,
"typed": True,
})
</code></pre>
<p>As before, I got the same result: success on <strong>SELECT</strong> and 403 on <strong>INSERT</strong>.</p>
<p>Thanks.</p>
|
<p>Both ways are right and workable. The reason was that I needed to add permission to access the table on the <a href="https://fusiontables.google.com/DataSource" rel="nofollow noreferrer">site</a>. </p>
<p>Add this permission for the email address that you can get from the credentials file.</p>
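<p>A small sketch for finding that email address (<code>client_email</code> is the standard field name in service-account key files):</p>
<pre><code>import json

with open('***filename***.json') as f:
    print(json.load(f)['client_email'])  # share the Fusion Table with this address
</code></pre>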
|
python|google-fusion-tables|service-accounts|google-apis-explorer
| 0 |
1,904,788 | 28,622,556 |
How to add new functionality to a command line argparse menu?
|
<p>I have a school project which is about a command line menu script. It expects a directory with a large text file; for example, this is how I execute the program:</p>
<pre><code> project/script.py /large_text_directory
</code></pre>
<p>So, let's assume I add a new sub directory and a <code>.txt</code> file to <code>/data</code>, let's say <code>new_directory/list.txt</code>:</p>
<p>Now I create this new functionality:</p>
<p>What would be an easy approach to add the previous functionality to this command line menu?</p>
<p>For example I would like to run the script like this:</p>
<pre><code>python project/txt
</code></pre>
<p>I know this could be difficult, but this is my first dive into argparse. How can I approach this? Thanks in advance.</p>
|
<p>This will store the filename given on the command line in <code>opts.sw</code>.</p>
<pre><code>p.add_argument("-sw", action="store", help="Enter a specific file to process")
</code></pre>
<p>When using this command-line switch, after <code>opts = p.parse_args()</code> the contents of <code>opts.sw</code> will be <code>path/of/the/file_1.txt</code>.</p>
<p>After this you'll pass that filename to the function that does your text processing.</p>
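<p>A fuller sketch of how the pieces fit together (<code>process</code> stands in for whatever your script already does with a file):</p>
<pre><code>import argparse

p = argparse.ArgumentParser()
p.add_argument("directory", help="directory with the large text file")
p.add_argument("-sw", action="store", help="Enter a specific file to process")
opts = p.parse_args()

if opts.sw:             # e.g. script.py data -sw path/of/the/file_1.txt
    process(opts.sw)    # process() is assumed: your existing text handling
</code></pre>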
|
python
| 2 |
1,904,789 | 41,284,501 |
Google search results limits
|
<p>I tried searching for the word "sunday" in the Google search engine.</p>
<p>The total number of results is <code>1.390.000.000</code>. However, I can see only the first 420 results, up to page 42 of the results.</p>
<p>Is there any way to get all the results of a Google search?</p>
|
<p>This 'feature' of google - and other search engines - applies to the web interface too. The claimed number of matches on the first page is not reflected in the actual number of results returned. </p>
<p>For example, if you <a href="https://www.google.com/search?num=100&q=systematic%20%22literature%20review%22&oq=systematic%20%22literature%20review%22" rel="nofollow noreferrer">search Google for "Systematic Literature Review"</a> it will claim a few million results on the first page, but if you go to page 3 (at 100 results/page) it will 'revise' the estimate to 200-300 results.</p>
<p>This seems too high in the initial estimate but may be a problem with filtering large datasets and is possibly forgivable. However it is definitely too small a number of actual results returned for this topic. Bing and DuckDuckGo are similar. Google Scholar claims 11k or so results but returns a server error if you try going beyond 1000.</p>
<p>Speculating: this might be in order to encourage people to change their search terms, rather than return a huge number of results that are then filtered programmatically locally (which is what I would like to do!).</p>
<p>This isn't an answer to getting more results I'm afraid, but it is an explanation that the problem is not the library, it's the search engine(s).</p>
|
python|search|google-search
| 2 |
1,904,790 | 6,993,564 |
Backspace with `raw_input` in Python
|
<p>I am using <code>raw_input()</code> like this:</p>
<pre><code>while True:
print "MC ID (CTRL-D = done, 0 = sets, ? = lookup):",
try:
mcid=raw_input()
except:
print
break
# evaluate user input
# ...
</code></pre>
<p>Now if you type something, e.g. <code>abc</code> and hit backspace to correct something, as soon as you remove the <code>a</code>, the output from <code>print</code> is erased as well (and the cursor jumps to the beginning of the line), so that you no longer see the input prompt. Is there a way to avoid this?</p>
|
<p>Try passing the prompt to <code>raw_input()</code> itself; that way the line-editing code knows where the prompt ends, and backspace will stop there instead of erasing it:</p>
<pre><code>mcid = raw_input("MC ID (CTRL-D = done, 0 = sets, ? = lookup): ")
</code></pre>
|
python|raw-input|backspace
| 4 |
1,904,791 | 57,095,895 |
How to use Multiprocessing Queue with Lock
|
<p>The posted code starts two async <strong>Processes</strong>. The first <code>publisher</code> <strong>Process</strong> publishes data to the <code>Queue</code>; the second <code>subscriber</code> <strong>Process</strong> reads the data from the <code>Queue</code> and logs it to the console. </p>
<p>To make sure the Queue is not accessed at the same time, before getting the data from the <code>Queue</code>, the <code>subscribe</code> function first executes <code>lock.acquire()</code>, then gets the data with <code>data = q.get()</code> and finally releases the lock with the <code>lock.release()</code> statement.</p>
<p>The same locking-releasing sequence is used in the <code>publish</code> function. But I had to comment out two lines, since acquiring the <code>lock</code> in the <code>publish</code>
function brings the script to a halt. Why?</p>
<pre><code>import multiprocessing, time, uuid, logging
log = multiprocessing.log_to_stderr()
log.setLevel(logging.INFO)
queue = multiprocessing.Queue()
lock = multiprocessing.Lock()
def publish(q):
for i in range(20):
data = str(uuid.uuid4())
# lock.acquire()
q.put(data)
# lock.release()
log.info('published: %s to queue: %s' % (data, q))
time.sleep(0.2)
def subscribe(q):
while True:
lock.acquire()
data = q.get()
lock.release()
log.info('.......got: %s to queue: %s' % (data, q))
time.sleep(0.1)
publisher = multiprocessing.Process(target=publish, args=(queue,))
publisher.start()
subscriber = multiprocessing.Process(target=subscribe, args=(queue,))
subscriber.start()
</code></pre>
<p><a href="https://i.stack.imgur.com/RsSex.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RsSex.jpg" alt="enter image description here"></a></p>
|
<blockquote>
<p><code>multiprocessing</code> <em>Queues</em> are thread and process safe.</p>
</blockquote>
<p>And they support an internal <code>block</code>ing mechanism (see the signatures of the <code>get</code>/<code>put</code> methods).<br>
You don't need a <code>lock</code> in your case. (The hang you saw happens because the subscriber holds the lock while <code>q.get()</code> blocks on an empty queue, so the publisher can never acquire it.)</p>
<pre><code>import multiprocessing, time, uuid, logging
log = multiprocessing.log_to_stderr()
log.setLevel(logging.INFO)
queue = multiprocessing.Queue()
def publish(q):
for i in range(20):
data = str(uuid.uuid4())
q.put(data)
log.info('published: %s to queue: %s' % (data, q))
time.sleep(0.2)
q.put(None)
def subscribe(q):
while True:
data = q.get()
if data is None:
log.info('....... end of queue consumption')
break
log.info('.......got: %s to queue: %s' % (data, q))
time.sleep(0.1)
publisher = multiprocessing.Process(target=publish, args=(queue,))
publisher.start()
subscriber = multiprocessing.Process(target=subscribe, args=(queue,))
subscriber.start()
publisher.join()
subscriber.join()
</code></pre>
<p>Sample output:</p>
<pre><code>[INFO/Process-1] child process calling self.run()
[INFO/Process-1] published: eff77f27-e13e-4d55-9f34-4ea5fc464fc8 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] child process calling self.run()
[INFO/Process-2] .......got: eff77f27-e13e-4d55-9f34-4ea5fc464fc8 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: 264fcf94-9195-4145-b0a1-5ddd787bee1f to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: 264fcf94-9195-4145-b0a1-5ddd787bee1f to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: 2e040d60-5fd4-45c9-98e6-f0032e13dae8 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: 2e040d60-5fd4-45c9-98e6-f0032e13dae8 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: afe406ea-20cc-41b3-9cf5-c1dbea11580d to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: afe406ea-20cc-41b3-9cf5-c1dbea11580d to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: e14a6c04-e2fe-4394-a189-5c57c5a98bc8 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: e14a6c04-e2fe-4394-a189-5c57c5a98bc8 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: fb90ba87-8090-4ec6-9ac1-85bcaa2bb3f6 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: fb90ba87-8090-4ec6-9ac1-85bcaa2bb3f6 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: 85ab36ee-36f3-4c67-8260-7c41ea82a5d5 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: 85ab36ee-36f3-4c67-8260-7c41ea82a5d5 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: d4dce917-9b5c-470a-9063-bfb0221da55f to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: d4dce917-9b5c-470a-9063-bfb0221da55f to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: 1e1e2f02-932d-418d-b603-8c90f4699423 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: 1e1e2f02-932d-418d-b603-8c90f4699423 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: 0b80f1df-c803-4c00-be4d-fad39213829b to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: 0b80f1df-c803-4c00-be4d-fad39213829b to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: f6afef2a-42f8-4330-b995-26ee41f833a5 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: f6afef2a-42f8-4330-b995-26ee41f833a5 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: abd85275-dc9f-478c-8528-23217db79631 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: abd85275-dc9f-478c-8528-23217db79631 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: c4fad226-8c83-4e52-beae-1cb9a825d370 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: c4fad226-8c83-4e52-beae-1cb9a825d370 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: ca16fd7d-ff51-4019-970c-f55c2b3c0db2 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: ca16fd7d-ff51-4019-970c-f55c2b3c0db2 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: eca614df-89da-47d0-a8a5-90b56fadb922 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: eca614df-89da-47d0-a8a5-90b56fadb922 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: 046903d7-0fd8-4af0-ac49-a22efdc9c029 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: 046903d7-0fd8-4af0-ac49-a22efdc9c029 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: 7904d15a-7b04-4968-a52c-cfd8d822b921 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: 7904d15a-7b04-4968-a52c-cfd8d822b921 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: 8543b520-9a7e-4e22-afb3-a4880d910482 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: 8543b520-9a7e-4e22-afb3-a4880d910482 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: b4e98f5e-ce63-4f11-a6f7-b7d36020deb0 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: b4e98f5e-ce63-4f11-a6f7-b7d36020deb0 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] published: 4a5eb231-4ccf-41e1-a0d6-ca41a50a6fd6 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-2] .......got: 4a5eb231-4ccf-41e1-a0d6-ca41a50a6fd6 to queue: <multiprocessing.queues.Queue object at 0x103528780>
[INFO/Process-1] process shutting down
[INFO/Process-2] ....... end of queue consumption
[INFO/Process-1] process exiting with exitcode 0
[INFO/Process-2] process shutting down
[INFO/Process-2] process exiting with exitcode 0
[INFO/MainProcess] process shutting down
Process finished with exit code 0
</code></pre>
|
python|multiprocessing|queue
| 4 |
1,904,792 | 44,640,575 |
How to run Python script on a webserver
|
<p>I use a webapp that can generate a PDF report of some data stored in the app. Getting to that report, however, requires several clicks and monkeying around with the app. </p>
<p>I support a group of users of this app (we use the app, we don't create the app) and I'd like them to be able to generate and view this report with as few clicks as possible. Thankfully, this web app provides a lot of data via a RESTful API. So I did some scripting.</p>
<p>I have a Python script that makes an HTTP GET request, processes the JSON results, and uses that resultant data to dynamically build a URL. Here's a simplified version of my python code:</p>
<pre><code>#!/usr/bin/env python
import requests
app_id="12345"
secret="67890"
api_url='https://api.webapp.example/some_endpoint'
resp = requests.get(api_url, auth=(app_id,secret))
json_data = resp.json()
# Simplification of the data processing I'm doing
my_data = json_data['attr1']['attr2'] + my_data_processing
# Result of the script is a link to a dynamically generated PDF
pdf_url = 'https://pdf.webapp.example/items/' + my_data
</code></pre>
<p>The above is a simplification of the code I actually have, but it shows the relevant points. In my actual script, I continue on by doing another GET with the dynamically built URL. The webapp generates a PDF based on the <code>my_data</code> portion of the URL, and I write that PDF to file. This works very well today.</p>
<p>Currently, this is a python script that runs on my local machine on-demand. However, I'd like to host this somewhere on the web so that when a user hits a URL in their browser it runs and generates the <code>pdf_url</code>, instead of having to install this script on each user's local machine, and so that the PDF can be generated and viewed on a mobile device. </p>
<p>The thought is that the user can open <code>http://example.com/report-shortcut</code>, the python script would run server-side, dynamically build the URL, and redirect the user to that URL, which would then show the PDF in the browser (assuming the user is using a browser that shows PDFs like Chrome, Safari, etc). Alternately, if a redirect is problematic, going to <code>http://example.com/report-shortcut</code> could just show an HTML page with a link to the URL generated by the Python script.</p>
<p>I'm looking for a solution on how to host this Python script and have it run when a user accesses a webpage. I've looked into AWS Lambda and Django, but both seem like overkill for such a simple script (~20 lines of code, plus comments and whitespace). I've also looked at Python CGI scripting, which looks promising, but I have no experience setting up something like that. </p>
<p>Looking for suggestions on how best to host and run this code when a user goes to the example URL. </p>
<p>PS: I thought about just re-implementing in Javascript, but I'd rather the API key not be publicly accessible.</p>
|
<p>I suggest building the script in AWS Lambda and using the API Gateway to invoke it.</p>
<p>You could create the PDF, store it in S3, and generate a pre-signed URL, then return a 302 response to redirect the user to that URL, which will display the PDF in their browser. This is very quick to set up, and with Boto3 getting the PDF into S3 and generating the URL is simple.</p>
<p>It will be much simpler than some of your other suggestions.</p>
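<p>A minimal sketch of what the Lambda handler could look like (the bucket name, key, and upload step are placeholders; adapt them to your existing requests logic):</p>
<pre><code>import boto3

s3 = boto3.client('s3')
BUCKET = 'my-report-bucket'  # hypothetical bucket name

def lambda_handler(event, context):
    key = 'reports/latest.pdf'  # hypothetical object key
    # ... run the existing requests logic here and upload the PDF ...
    # s3.put_object(Bucket=BUCKET, Key=key, Body=pdf_bytes)

    # pre-signed GET URL, valid for 5 minutes
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': BUCKET, 'Key': key},
        ExpiresIn=300,
    )
    # with Lambda proxy integration, API Gateway turns this into a redirect
    return {
        'statusCode': 302,
        'headers': {'Location': url},
    }
</code></pre>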
<p>See <a href="https://aws.amazon.com/api-gateway/" rel="nofollow noreferrer">API Gateway</a>
& <a href="https://boto3.readthedocs.io/en/latest/reference/services/s3.html" rel="nofollow noreferrer">Boto3</a></p>
|
python|web-applications|scripting|cgi|aws-lambda
| 0 |
1,904,793 | 44,717,492 |
How to parse years with BCE/CE suffix in Python?
|
<p>I am trying to present a graph of CE/BCE dates in Python. I tried to use <code>datetime</code>, <code>dateutil</code> and <code>astropy</code> for the graph but it didn't work.
When I used <code>datetime</code> and <code>astropy</code> I saw that it did not have support for CE/BCE years.
With <code>dateutil</code> I tried: </p>
<pre><code>from dateutil.parser import *
bc = parse(u'2000BCE')
</code></pre>
<p>but it had an error:</p>
<pre><code>ValueError: Unknown string format
</code></pre>
<p>How can I present CE/BCE years in python? Is there any library that enables support for BCE/CE years?</p>
<p>The data I'm using is a list of strings and looks like:</p>
<pre><code>0 CE
1000 CE
1007 CE
104 BCE
10450 BCE
1050 BCE
1050 BCE
1050 BCE
</code></pre>
|
<p>I did not find a package that supports BCE/CE dates in Python, so I used the following workaround:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt

#looks for values containing BCE
BCE = [s for s in yearsList if "BCE" in s]
#removes the "BCE" suffix (strip works here because the year part is all digits)
BCE = [x.strip(' BCE') for x in BCE]
#converts them to integers
BCE = list(map(int, BCE))
#adds a minus sign to BCE years
BCE = [-x for x in BCE]

CE = [s for s in yearsList if " CE" in s]
CE = [x.strip(' CE') for x in CE]
CE = list(map(int, CE))

#merges the BCE and the CE integers into one list
mergedlist = BCE + CE

#plots the list
sns.distplot(mergedlist)
plt.xlabel("Year")
</code></pre>
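<p>An equivalent one-pass version (just a sketch, relying on the same assumptions about the input strings):</p>
<pre><code>def to_year(s):
    # "1050 BCE" -> -1050, "1007 CE" -> 1007
    value, era = s.rsplit(' ', 1)
    return -int(value) if era == 'BCE' else int(value)

mergedlist = [to_year(s) for s in yearsList]
</code></pre>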
|
python|date
| 0 |
1,904,794 | 23,942,305 |
__init__() takes exactly 3 arguments (1 given)?
|
<p>It is asking for 3 arguments and I have given it one. How do I give it 2 more, and could you explain how to do that as well? Thanks</p>
<pre><code>import pygame, random, collisionObjects
pygame.init()
screen = pygame.display.set_mode((640,480))
class Pirate(pygame.sprite.Sprite):
EAST = 0
def __init__(self, screen, dx):
self.screen = screen
pygame.sprite.Sprite.__init__(self)
self.image = pygame.image.load("king_pirate/running e0000.bmp")
self.image = self.image.convert()
tranColor = self.image.get_at((1, 1))
self.image.set_colorkey(tranColor)
self.rect = self.image.get_rect()
self.rect.inflate_ip(-50, -30)
self.rect.center = (0, random.randrange(30,450))
self.img = []
self.loadPics()
self.frame = 0
self.delay = 4
self.pause = self.delay
self.dx = dx
def update(self):
#set delay
self.pause -= 1
if self.pause <= 0:
self.pause = self.delay
self.frame += 1
if self.frame > 7:
self.frame = 0
self.image = self.img[self.frame]
self.rect.centerx += self.dx
if self.rect.centerx > self.screen.get_width():
self.rect.centerx = 0
self.rect.centery = random.randrange(30,450)
#load pictures
def loadPics(self):
for i in range(8):
imgName = "king_pirate/running e000%d.bmp" % i
tmpImg = pygame.image.load(imgName)
tmpImg.convert()
tranColor = tmpImg.get_at((0, 0))
tmpImg.set_colorkey(tranColor)
self.img.append(tmpImg)
class Pirate2(pygame.sprite.Sprite):
WEST = 0
def __init__(self, screen, dx):
self.screen = screen
pygame.sprite.Sprite.__init__(self)
self.image = pygame.image.load("pirate/running w0000.bmp")
self.image = self.image.convert()
tranColor = self.image.get_at((1, 1))
self.image.set_colorkey(tranColor)
self.rect = self.image.get_rect()
self.rect.inflate_ip(-50, -30)
self.rect.center = (640, random.randrange(20,460))
self.img = []
self.loadPics()
self.frame = 0
self.delay = 4
self.pause = self.delay
self.dx = dx
def update(self):
#set delay
self.pause -= 1
if self.pause <= 0:
self.pause = self.delay
self.frame += 1
if self.frame > 7:
self.frame = 0
self.image = self.img[self.frame]
self.rect.centerx -= self.dx
if self.rect.centerx < 0:
self.rect.centerx = self.screen.get_width()
self.rect.centery = random.randrange(20,460)
#load pictures
def loadPics(self):
for i in range(8):
imgName = "pirate/running w000%d.bmp" % i
tmpImg = pygame.image.load(imgName)
tmpImg.convert()
tranColor = tmpImg.get_at((0, 0))
tmpImg.set_colorkey(tranColor)
self.img.append(tmpImg)
#set up class for gold object,
class Gold(pygame.sprite.Sprite):
def __init__(self, screen, imageFile):
pygame.sprite.Sprite.__init__(self)
self.image = pygame.image.load(imageFile)
self.image = self.image.convert()
self.rect = self.image.get_rect()
self.rect.centerx = random.randrange(0, screen.get_width())
self.rect.centery = random.randrange(0, screen.get_height())
#main character class
class Thief(pygame.sprite.Sprite):
def __init__(self):
pygame.sprite.Sprite.__init__(self)
self.image = pygame.image.load("thief2.gif")
self.image = self.image.convert()
tranColor = self.image.get_at((1, 1))
self.image.set_colorkey(tranColor)
self.rect = self.image.get_rect()
self.rect.inflate_ip(-15, -10)
self.rect.center = (30, (screen.get_height()-40))
self.dx = 30
self.dy = 30
if not pygame.mixer:
print("problem with sound")
else:
pygame.mixer.init()
self.collectcoin = pygame.mixer.Sound("collectcoin.wav")
self.hit = pygame.mixer.Sound("hit.ogg")
def update(self):
if self.rect.bottom > screen.get_height():
self.rect.centery = (screen.get_height()-40)
elif self.rect.top < 0:
self.rect.centery = 40
elif self.rect.right > screen.get_width():
self.rect.centerx = (screen.get_width()-30)
elif self.rect.left < 0:
self.rect.centerx = 30
#define movements
def moveUp(self):
self.rect.centery -= self.dy
def moveDown(self):
self.rect.centery += self.dy
def moveLeft(self):
self.rect.centerx -= self.dx
def moveRight(self):
self.rect.centerx += self.dx
def reset(self):
self.rect.center = (30, (screen.get_height()-40))
#set up a scoreboard
class Scoreboard(pygame.sprite.Sprite):
def __init__(self):
pygame.sprite.Sprite.__init__(self)
self.lives = 0
self.score = 0
self.font = pygame.font.SysFont("None", 40)
self.number = 0
#with a self updating label
def update(self):
self.text = "Damage: %d %% Gold Taken: %d" % (self.lives, self.score)
self.image = self.font.render(self.text, 1, (199,237,241))
self.rect = self.image.get_rect()
#define the game function
def game():
#set up background
background = pygame.Surface(screen.get_size())
background = pygame.image.load("sand.jpg")
background = pygame.transform.scale(background, screen.get_size())
screen.blit(background, (0, 0))
#initialize pirates & scoreboard sprites
pirate = Pirate()
scoreboard = Scoreboard()
#create two arrays for multiple gold object occurances
#two arrays are used for better distribution on screen
gold1 = []
numberofGold = 16
for i in range(numberofgolds):
oneGold = golds(screen,"gold1.png")
golds1.append(onegold)
for gold in golds1:
gold.rect.centerx = random.randrange(20,620)
gold.rect.centery = random.randrange(20,240)
gold.rect.inflate_ip(-5, -5)
gold2 = []
for i in range(numberofgolds):
onegold = golds(screen,"gold1.png")
golds2.append(onegold)
for gold in golds2:
gold.rect.centerx = random.randrange(20,620)
gold.rect.centery = random.randrange(250,460)
gold.rect.inflate_ip(-5, -5)
totalgolds = ((len(golds1)-1)+(len(golds2)-1))
#initialize gold sprites
goldSprites = pygame.sprite.Group(golds1,
#initialize pirate sprites & instances
pirate1 = pirate1(screen,13)
pirate2 = pirate2(screen,13)
pirate3 = pirate1(screen,11)
pirate4 = pirate2(screen,11)
pirate5 = pirate1(screen,13)
pirateSprites = pygame.sprite.Group(pirate1, pirate2, pirate3, pirate4, pirate5)
#use ordered updates to keep clean appearance
allSprites = pygame.sprite.OrderedUpdates(goldSprites, thief, pirateSprites, scoreboard)
#set up clock & loop
clock = pygame.time.Clock()
keepGoing = True
while keepGoing:
clock.tick(30)
for event in pygame.event.get():
if event.type == pygame.QUIT:
keepGoing = False
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_UP:
thief.moveUp()
elif event.key == pygame.K_DOWN:
thief.moveDown()
elif event.key == pygame.K_LEFT:
thief.moveLeft()
elif event.key == pygame.K_RIGHT:
thief.moveRight()
elif event.key == pygame.K_ESCAPE:
keepGoing = False
#check collisions here
hitpirates = pygame.sprite.spritecollide(thief, pirateSprites, False)
hitgold = pygame.sprite.spritecollide(thief, goldSprites, True)
if hitpirates:
thief.hit.play()
scoreboard.lives += 1
if scoreboard.lives >= 100:
keepGoing = False
number = 0
if hitgolds:
thief.collectcoin.play()
scoreboard.score += 1
totalgolds -= 1
if totalgolds <= 0:
keepGoing = False
number = 1
#draw sprites
allSprites.clear(screen, background)
allSprites.update()
allSprites.draw(screen)
pygame.display.flip()
return scoreboard.score
return scoreboard.number
def instructions(score, number):
pygame.display.set_caption("Hunt for Gold!")
background = pygame.Surface(screen.get_size())
background = pygame.image.load("sand.jpg")
background = pygame.transform.scale(background, screen.get_size())
screen.blit(background, (0, 0))
if number == 0:
message = "Sorry try again..."
elif number == 1:
message = "The theif escapes!"
else:
message = "Onto the hunt for gold!"
insFont = pygame.font.SysFont("Calibri", 25)
insLabels = []
instructions = (
"Last score: %d" % score ,
"%s" % message,
"",
"GOLD!",
"Get all the gold before you are "
"obliterated!",
"Use arrow keys to move the thief.",
"Space to start, Esc to quit."
)
for line in instructions:
tempLabel = insFont.render(line, 1 , (0,0,0))
insLabels.append(tempLabel)
#set up homescreen loop
keepGoing = True
clock = pygame.time.Clock()
while keepGoing:
clock.tick(30)
for event in pygame.event.get():
if event.type == pygame.QUIT:
keepGoing = False
donePlaying = True
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_SPACE:
keepGoing = False
donePlaying = False
if event.key == pygame.K_ESCAPE:
keepGoing = False
donePlaying = True
for i in range(len(insLabels)):
screen.blit(insLabels[i], (50, 30*i))
pygame.display.flip()
return donePlaying
#define main function
def main():
donePlaying = False
score = 0
message = ""
while not donePlaying:
donePlaying = instructions(score, message)
if not donePlaying:
score = game()
else:
pygame.quit()
if __name__ == "__main__":
main()
</code></pre>
|
<p>Glancing at your code, this line:</p>
<pre><code>pirate = Pirate()
</code></pre>
<p>Your <code>Pirate</code> class expects <code>self</code>, <code>screen</code>, <code>dx</code>. You only implicitly provide <code>self</code>.</p>
<p>I can only guess at what you want, especially since I don't know off the bat what <code>dx</code> is supposed to mean with respect to your game, but this will probably at least avoid the error:</p>
<pre><code>pirate = Pirate(pygame.display.get_surface(), 60)
</code></pre>
|
python|arguments|init
| 1 |
1,904,795 | 23,748,623 |
On significant figures of numbers in python
|
<p>Hi I have a somewhat strange question. I am converting a list of numbers (which represent physical measurements), where each are reported to some specified accuracy in arbitrary units depending on the study it comes from. Here is an example of what I mean:</p>
<pre><code>[...,105.0,20.0,3.5,4.25,9.78,7.2,7.2]
</code></pre>
<p>These are of course of type: </p>
<pre><code><type 'numpy.float64'>.
</code></pre>
<p>If I print out the last number, for instance:</p>
<pre><code>print list[-1]
>>> 7.2
</code></pre>
<p>However, if accessed in this way, I get:</p>
<pre><code>7.2000000000000002
</code></pre>
<p>I understand that floats are by default 64 bits, so doing a calculation with them (i.e. - converting this list of measurements) returns a converted value which is 64 bits. Here is an example of the converted values:</p>
<pre><code>[105.27878958,20.0281600192,3.47317185649,4.27596751688,9.82706595042,7.27448290596,7.26291009446]
</code></pre>
<p>Accessing them either way (using print or not), returns a 64 bit number. But clearly there is something else going on, otherwise the print function wouldn't really know how to display the true measurement. I would like to report the converted values to the same level of accuracy as the original measurements. Is there a way I can do this in python?</p>
|
<p>You have three problems:</p>
<blockquote>
<p>1) How do I determine the number of decimal places in the input?</p>
</blockquote>
<p>If your Python code literally says <code>[...,105.0,20.0,3.5,4.25,9.78,7.2,7.2]</code>, you're out of luck. You've already lost the information you need.</p>
<p>If, instead, you have the list of numbers as some kind of input string, then you have the info you need. You probably already have code like:</p>
<pre><code>for line in input_file:
data = [float(x) for x in line.split(',')]
</code></pre>
<p>You need to record the number of places to the right of the decimal <em>before</em> converting to float, for example:</p>
<pre><code>for line in input_file:
places = [len(x.partition('.')[2]) for x in line.split(',')]
data = [float(x) for x in line.split(',')]
</code></pre>
<blockquote>
<p>2) How do I store that information?</p>
</blockquote>
<p>That's up to you. Sorry, I can't help you with that one without seeing your whole program.</p>
<blockquote>
<p>3) How do I format the output to round to that number of decimal places?</p>
</blockquote>
<p>Use the <code>%</code> operator or <code>{}</code>-style formatting, like so:</p>
<pre><code>print '%.*f' % (places[i], data[i])
</code></pre>
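<p>For completeness, the equivalent with <code>{}</code>-style formatting (nested replacement fields let the precision itself be a parameter):</p>
<pre><code>print '{:.{prec}f}'.format(data[i], prec=places[i])
</code></pre>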
|
python|numpy|significant-digits
| 6 |
1,904,796 | 24,094,904 |
Retrieving page number by contained paragraph
|
<p>I'm having difficulty fetching page numbers according to the paragraphs on them. My code is the following: para_list stores the paragraphs in which the phrase I'm searching for is found. I then attempt to select the range and retrieve the page number; however, all I receive is the same page number repeatedly. Can someone suggest another method or reveal what I'm doing wrong? Thanks</p>
<pre><code>for para in doc.Paragraphs:
count=count+1
if phrase in para.Range.Text:
para.Range.Select
para_list.append(count)
p_list.append(doc.ActiveWindow.Selection.Information(constants.wdActiveEndAdjustedPageNumber))
</code></pre>
|
<p>Seems to me like using <code>Range.Information(constants.wdActiveEndAdjustedPageNumber)</code> is the right way (see for example the second answer in <a href="http://www.techques.com/question/1-7097402/How-do-I-find-the-page-number-for-a-Word-Paragraph" rel="nofollow">How do I find the page number for a Word Paragraph</a>). However, I'm not sure why you are operating on the selection rather than on the paragraph range itself. Note, too, that <code>para.Range.Select</code> in your code is missing its call parentheses, so the selection never actually moves, which is why you keep reading the same page number. I would guess (can't try here) that the following should work: </p>
<pre><code>for count, para in enumerate(doc.Paragraphs):
if phrase in para.Range.Text:
pageNum = para.Range.Information(constants.wdActiveEndAdjustedPageNumber)
print 'page for para #%s is %s' % (count, pageNum)
</code></pre>
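<p>For context, a minimal setup sketch (assuming Word is running with the document already open; adapt to however you obtain <code>doc</code>):</p>
<pre><code>import win32com.client
from win32com.client import constants

# EnsureDispatch generates the constant definitions (wdActiveEndAdjustedPageNumber, etc.)
word = win32com.client.gencache.EnsureDispatch('Word.Application')
doc = word.ActiveDocument
</code></pre>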
<p>Stylistic note: <code>para_list</code> and <code>p_list</code>? The names should more clearly identify the purpose of each container. </p>
|
.net|python-2.7|com|ms-word|win32com
| 2 |
1,904,797 | 72,091,919 |
Add multiple checkout buttons for multiple events on same page Eventbrite
|
<p>How to add multiple checkout buttons for multiple events on the same page?</p>
<pre><code><script src="https://www.eventbrite.com/static/widgets/eb_widgets.js"></script>
<script type="text/javascript">
var exampleCallback = function () {
console.log('Order complete!');
};
var getEventID = function(){
var value = document.getElementById('eventID').value;
return value;
};
window.EBWidgets.createWidget({
widgetType: 'checkout',
eventId: getEventID,
modal: true,
modalTriggerElementId: 'checkout_btn',
onOrderComplete: exampleCallback,
});
</script>
</code></pre>
<p>HTML Here</p>
<pre><code>{% for event in data.events %}
<form id="form_id">
{% csrf_token%}
<div class="center">
<div class="w3-card-4" style="width:100%;">
<header class="w3-container w3-blue" >
<h1>{{event.name.text}}</h1>
</header>
<div class="w3-container" style="background-color: #ddd;">
<p>{{event.description.text}}</p>
Event ID: <input type="hidden" id="eventID" name="eventID" value="{{event.id}}"><br>
Capcity: {{event.capacity}}
<button id="checkout_btn" class="button" type="button">Buy Tickets!</button>
</div>
</div>
</form>
{% endfor %}
</code></pre>
<p>I am showing multiple events in <strong>Django</strong> and trying to fetch the event id in script code. It works for one event when I provide a hardcoded value.</p>
<p>Any help will be appreciated!</p>
|
<p>Found a solution. I don't know whether it is a good one, but it works for me:</p>
<pre><code>{% for event in data.events %}
<form id="form_id">
<!-- checkout widget START-->
<script src="https://www.eventbrite.com/static/widgets/eb_widgets.js"></script>
<script type="text/javascript">
var exampleCallback = function () {
console.log('Order complete!');
};
window.EBWidgets.createWidget({
widgetType: 'checkout',
eventId: '{{event.id}}',
modal: true,
modalTriggerElementId: 'checkout_btn-{{event.id}}',
onOrderComplete: exampleCallback,
});
</script>
<!-- checkout widget END -->
{% csrf_token%}
<div class="center">
<header class="w3-container w3-blue">
<h1>{{event.name.text}}</h1>
</header>
<p>{{event.description.text}}</p>
Event ID: <input type="hidden" id="eventID" name="eventID" value="{{event.id}}">{{event.id}}
<br>
Capcity: {{event.capacity}}
<br>
Starting: {{event.start.local}}
<br>
Ending: {{event.end.local}}
</div>
<button id="checkout_btn-{{event.id}}" class="button" type="button">Buy Tickets!</button>
</form>
{% endfor %}
</code></pre>
<p>I moved the script inside the loop and gave each button a unique id of <code>checkout_btn-{{event.id}}</code>, which renders as <strong>checkout_btn-<em>xxxxxxxxxxx121</em></strong> (the event ID retrieved from <code>{{event.id}}</code>). I used the same id in the script as <code>modalTriggerElementId: 'checkout_btn-{{event.id}}'</code>, and provided <code>eventId: '{{event.id}}'</code> in place of <code>eventId: getEventID</code>. Now the widget can tell the events apart. One further cleanup worth trying: include the <code>eb_widgets.js</code> script tag once, outside the loop, since re-loading it for every event is redundant.</p>
|
python|django|api|eventbrite
| 0 |
1,904,798 | 60,996,949 |
How to have my email script run only when a host is offline, not when the result comes back as 'host online'
|
<p>I have tried a couple of different ways to execute the file. Every time I run the code I get an email whether the ping is successful or not. I only want an email when the ping fails, so my team and I know when a computer goes offline.</p>
<pre><code>from subprocess import PIPE, Popen
import Email
num = 1
host = "192.168.1.116"
wait = 1000
ping = Popen("ping -n {} -w {} {}".format(num, wait, host),
stdout=PIPE, stderr=PIPE)
exit_code = ping.wait()
if exit_code != 0:
print("Host offline.")
exec(Email)
else:
print("Host online.")
</code></pre>
|
<p>The <code>exec(Email)</code> line is intended to run the email code that lives in a different section of code, sending an email with text that says the ping failed and the server is down.</p>
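<p>Note that <code>exec()</code> expects a string or code object, not a module, so <code>exec(Email)</code> will raise a <code>TypeError</code>. A sketch of a more robust pattern (assuming a hypothetical <code>send_alert()</code> helper defined in your <code>Email</code> module):</p>
<pre><code>import subprocess

import Email  # assumed local module exposing a send_alert() helper

host = "192.168.1.116"
# -n 1: send one echo request; -w 1000: wait up to 1000 ms (Windows ping flags)
result = subprocess.run(["ping", "-n", "1", "-w", "1000", host],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)

if result.returncode != 0:
    print("Host offline.")
    Email.send_alert("Ping to %s failed; the host appears to be down." % host)
else:
    print("Host online.")
</code></pre>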
|
python-3.x
| 0 |
1,904,799 | 60,785,563 |
In Python, is there any way to turn a non-terminating float (fraction) into a string without changing its state?
|
<p>Is there any way to turn a non-terminating float (a non-terminating fraction) into a string while preserving its fraction form (not a decimal like 1.666667)? For instance, I want 7/6 to become '7/6'. Assume the non-terminating float is <strong>unknown</strong>; the code should work with any number. You are allowed to import, but you can't create a new file. I've attempted this multiple times, but I could only work it out using a file.
We will start like this →
<code>a = 7/6</code></p>
|
<h1>Answer</h1>
<p>As stated by <a href="https://stackoverflow.com/a/60785632/6630397">Mureinik</a>, you can use the <code>Fraction</code> class from the <a href="https://docs.python.org/3.8/library/fractions.html" rel="nofollow noreferrer">fractions</a> module.</p>
<p>Especially look at its <code>limit_denominator()</code> method.</p>
<blockquote>
<p>Finds and returns the closest Fraction to self that has denominator at
most max_denominator. This method is useful for finding rational
approximations to a given floating-point number</p>
</blockquote>
<p><br>
For example in your case:</p>
<pre><code>from fractions import Fraction
a = 7/6.
print(Fraction(a).limit_denominator(max_denominator=100000))
7/6
</code></pre>
<p>Source: <a href="https://docs.python.org/3.8/library/fractions.html#fractions.Fraction.limit_denominator" rel="nofollow noreferrer">https://docs.python.org/3.8/library/fractions.html#fractions.Fraction.limit_denominator</a><br>
and: <a href="https://www.geeksforgeeks.org/fraction-module-python/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/fraction-module-python/</a> </p>
<h2>EDIT</h2>
<p>If your number has too many places after the decimal, you may then think about rounding it, e.g.:</p>
<pre><code>from math import log10
from fractions import Fraction
a = 7/6.
rounding_factor = 100000
a = round(a, int(log10(rounding_factor)))
print(Fraction(a).limit_denominator(max_denominator=int(rounding_factor/10)))
7/6
</code></pre>
<p>Turned into a function, it may look like this (try to play around with it):</p>
<pre><code>def transform_decimal_to_fraction(a, rounding_factor=100000):
"""Return a string representing the input number a as a fraction.
Parameters:
----------
a : float
The variable on which to perform the transformation to a fraction string.
rounding_factor : int
The rounding factor to use. This factor is transforme to its log10 to
cut the input variable a to a limited number of digits after the decimal
point. It is also used, dived by 10 to adapt to the latter, as the
max_denominator used by Fraction.
Default value = 100000.
Returns:
-------
retval : str
A string representing the input variable as a fraction.
"""
from math import log10
from fractions import Fraction
a = round(a, int(log10(rounding_factor)))
b = (Fraction(a).limit_denominator(max_denominator=int(rounding_factor/10)))
retval = str(b)
return(retval)
</code></pre>
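<p>Example usage:</p>
<pre><code>>>> transform_decimal_to_fraction(7/6.)
'7/6'
</code></pre>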
<h3>Notice</h3>
<p>Of course, the range of numbers to which this function applies depends on both of your inputs.</p>
|
python|math|floating-point|integer|fractions
| 2 |