Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, 10 to 150 chars) | question (string, 21 to 64.2k chars) | answer (string, 19 to 59.4k chars) | tags (string, 5 to 112 chars) | score (int64, -10 to 17.3k) |
---|---|---|---|---|---|---|
1,908,400 | 64,057,792 |
How to read an image file from ftp and convert it into an opencv image without saving in python
|
<p>The question is self-explanatory: basically, I want to read an image file from FTP using ftplib and convert it into an OpenCV image, without saving it to disk, in Python.</p>
<p>Thanks</p>
|
<p>I was able to achieve this myself using the following code.</p>
<pre><code>import ftplib
from io import BytesIO
import cv2 as cv
import numpy as np

connection = ftplib.FTP('server.address.com', 'USERNAME', 'PASSWORD')
r = BytesIO()
connection.retrbinary('RETR ' + image_path, r.write)  # download straight into memory
image = np.asarray(bytearray(r.getvalue()), dtype="uint8")
image = cv.imdecode(image, cv.IMREAD_COLOR)
</code></pre>
|
python|opencv|ftplib
| 2 |
1,908,401 | 64,047,196 |
How can I start writing a program in Python that reads an Excel file with a few records and generates more records for testing purposes
|
<p>I have got a file with 5 rows and multiple columns; when read by the program, it should generate 100 records, for example, which can then be loaded into a database.
The format can be Excel or CSV.</p>
|
<p>Let's say you have a file <code>file.csv</code>. Read that into a dataframe and sample from it as many times as you need. Write the result to a new dataframe or CSV.</p>
<pre><code>import pandas as pd
df = pd.read_csv('file.csv')
new_df = df.sample(n=100, replace=True) # n could be as big as you want
# new df can now be exported
new_df.to_csv('new_df.csv')
</code></pre>
|
python|python-3.x|pandas|data-generation|msdatasetgenerator
| 0 |
1,908,402 | 64,098,745 |
Pandas column with amount range
|
<p>We have a column like below</p>
<pre><code>name salary-range
A $53K-$99K
B $41K-$78K
c $97K-$129K
D $29K-$38K
</code></pre>
<p>We need to find the name with the highest salary.</p>
<p>The dtype of salary-range is object; is there any easy way to convert the column to int64 and find the name with the highest salary?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extractall.html" rel="nofollow noreferrer"><code>Series.str.extractall</code></a> to get the numbers, then convert them to integers:</p>
<pre><code>s = (df.set_index('name')['salary-range']
.str.extractall(r'(\d+)')[0]
.astype(int)
.reset_index(level=1, drop=True))
print (s)
name
A 53
A 99
B 41
B 78
c 97
c 129
D 29
D 38
Name: 0, dtype: int32
</code></pre>
<p>Finally, get the name with the maximal value using <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.idxmax.html" rel="nofollow noreferrer"><code>Series.idxmax</code></a>:</p>
<pre><code>a = s.idxmax()
print (a)
c
</code></pre>
|
pandas
| 1 |
1,908,403 | 72,062,443 |
Creating new columns that contain the value of a specific index
|
<p>I have tried multiple methods that get me to a point close to, but not exactly, where I want to be with the final output. I am trying to first create a few columns that contain a specific value from the raw dataframe based on its position; afterwards, I am trying to make a particular row the header row and skip all the rows above it.</p>
<p>Raw input:</p>
<pre><code> | NA | NA_1 | NA_2 | NA_3 |
0 | 12-Month Percent Change | NaN | NaN | NaN |
1 | Series Id: CUUR0000SAF1 | NaN | NaN | NaN |
2 | Item: Food | NaN | NaN | NaN |
3 | Year | Jan | Feb | Mar |
4 | 2010 | -0.4 | -0.2 | 0.2 |
5 | 2011 | 1.8 | 2.3 | 2.9 |
</code></pre>
<p>Code used:</p>
<pre><code>df1['View Description'] = df1.iat[0,0]
df1['Series ID'] = df1.iat[1,1]
df1['Series Name'] = df1.iat[2,1]
df1
</code></pre>
<p>Resulted to:</p>
<pre><code> NA NA.1 NA.2 NA.3 NA.4 NA.5 NA.6 NA.7 View Description Series ID Series Name
0 12-Month Percent Change NaN NaN NaN NaN NaN NaN NaN 12-Month Percent Change CUUR0000SAF1 Food
1 Series Id: CUUR0000SAF1 NaN NaN NaN NaN NaN NaN 12-Month Percent Change CUUR0000SAF1 Food
2 Item: Food NaN NaN NaN NaN NaN NaN 12-Month Percent Change CUUR0000SAF1 Food
3 Year Jan Feb Mar Apr May Jun Jul 12-Month Percent Change CUUR0000SAF1 Food
4 2010 -0.4 -0.2 0.2 0.5 0.7 0.7 0.9 12-Month Percent Change CUUR0000SAF1 Food
5 2011 1.8 2.3 2.9 3.2 3.5 3.7 4.2 12-Month Percent Change CUUR0000SAF1 Food
6 2012 4.4 3.9 3.3 3.1 2.8 2.7 2.3 12-Month Percent Change CUUR0000SAF1 Food
7 2013 1.6 1.6 1.5 1.5 1.4 1.4 1.4 12-Month Percent Change CUUR0000SAF1 Food
</code></pre>
<p>The last thing is that I want to make row 3 the header and remove all the rows above it, BUT still keep the three columns at the end: View Description, Series ID, Series Name.</p>
<p>Any suggestions for an efficient way to do this? Next I want to scale it up with a for loop or something similar that would repeat this process for ~10 files.</p>
<p>Thanks in advance!</p>
|
<p>Here's a way to do what I believe your question is asking:</p>
<pre class="lang-py prettyprint-override"><code># Parse and store the first 3 values in column 0 so that we can use them
# as values for 3 new columns later.
new_columns = [x.split(':')[-1].strip() for x in df1.iloc[0:3,0].to_list()]
# Transpose so that we can use set_index() to replace the index
# (the columns from the original df1) to ['Item: Food', NaN, NaN, NaN],
# then transpose back so that the new index becomes the columns.
df1 = df1.T.set_index(3).T
# Use reset_index() to replace the index with a fresh range
# index (0, 1, 2, ...) so we can use iloc() to discard the
# first 3 unwanted rows, then call reset_index() again.
df1 = df1.reset_index(drop=True).iloc[3:].reset_index(drop=True)
# Get rid of vestigial name for columns.
df1.columns.names = [None]
# Add the three new columns set to the values saved earlier.
df1[['View Description', 'Series ID', 'Series Name']] = new_columns
</code></pre>
<p>Here is full test case (with the above annotated code compressed into fewer lines):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
s = [
' | NA | NA_1 | NA_2 | NA_3 |',
'0 | 12-Month Percent Change | NaN | NaN | NaN |',
'1 | Series Id: CUUR0000SAF1 | NaN | NaN | NaN |',
'2 | Item: Food | NaN | NaN | NaN |',
'3 | Year | Jan | Feb | Mar |',
'4 | 2010 | -0.4 | -0.2 | 0.2 |',
'5 | 2011 | 1.8 | 2.3 | 2.9 |']
df1 = pd.DataFrame(
[[x.strip() for x in y.split('|')[1:-1]] for y in s[1:]],
columns = [x.strip() for x in s[0].split('|')[1:-1]],
)
print(df1)
new_columns = [x.split(':')[-1].strip() for x in df1.iloc[0:3,0].to_list()]
df1 = df1.T.set_index(3).T.reset_index(drop=True).iloc[3:].reset_index(drop=True)
df1.columns.names = [None]
df1[['View Description', 'Series ID', 'Series Name']] = new_columns
print(df1)
</code></pre>
<p>Output:</p>
<pre><code> NA NA_1 NA_2 NA_3
0 12-Month Percent Change NaN NaN NaN
1 Series Id: CUUR0000SAF1 NaN NaN NaN
2 Item: Food NaN NaN NaN
3 Year Jan Feb Mar
4 2010 -0.4 -0.2 0.2
5 2011 1.8 2.3 2.9
Year Jan Feb Mar View Description Series ID Series Name
0 2010 -0.4 -0.2 0.2 12-Month Percent Change CUUR0000SAF1 Food
1 2011 1.8 2.3 2.9 12-Month Percent Change CUUR0000SAF1 Food
</code></pre>
<p><strong>UPDATE</strong>: This is code that allows us to configure (1) the cell coordinates of each of 3 cells to be used for new column values (<code>new_col_coords</code>) and (2) the <code>header_row</code> above which rows are discarded:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
s = [
' | NA | NA_1 | NA_2 | NA_3 |',
'0 | 12-Month Percent Change | NaN | NaN | NaN |',
'91 | To be discarded | NaN | NaN | NaN |',
'1 | Series Id: CUUR0000SAF1 | Abc | NaN | NaN |',
'92 | To be discarded | NaN | NaN | NaN |',
'93 | To be discarded | NaN | NaN | NaN |',
'94 | To be discarded | NaN | NaN | NaN |',
'2 | Item: Food | Xyz | NaN | NaN |',
'95 | To be discarded | NaN | NaN | NaN |',
'96 | To be discarded | NaN | NaN | NaN |',
'97 | To be discarded | NaN | NaN | NaN |',
'98 | To be discarded | NaN | NaN | NaN |',
'3 | Year | Jan | Feb | Mar |',
'4 | 2010 | -0.4 | -0.2 | 0.2 |',
'5 | 2011 | 1.8 | 2.3 | 2.9 |']
df1 = pd.DataFrame(
[[x.strip() for x in y.split('|')[1:-1]] for y in s[1:]],
columns = [x.strip() for x in s[0].split('|')[1:-1]],
)
print(df1)
# parse and store the 3 values at specified coordinates so that we can use them as values for 3 new columns later
new_col_coords = [[0,0], [2,1], [6,1]]
new_columns = [x.split(':')[-1].strip() for x in [df1.iloc[i, j] for i, j in new_col_coords]]
header_row = 11
# Here's how to do everything that follows in one line of code:
#df1 = df1.T.set_index(header_row).T.reset_index(drop=True).iloc[header_row:].reset_index(drop=True)
# Transpose so that we can use set_index() to change the index to ['Item: Food', NaN, NaN, NaN], then transpose back so that index becomes the columns
df1 = df1.T.set_index(header_row).T
# Use reset_index() to replace the index with a fresh range index (0, 1, 2, ...) so we can use iloc() to discard the unwanted rows above header_row, then call reset_index() again
df1 = df1.reset_index(drop=True).iloc[header_row:].reset_index(drop=True)
# Get rid of vestigial name for columns
df1.columns.names = [None]
# Add the three new columns set to the values saved earlier
df1[['View Description', 'Series ID', 'Series Name']] = new_columns
print(df1)
</code></pre>
<p>Output:</p>
<pre><code> NA NA_1 NA_2 NA_3
0 12-Month Percent Change NaN NaN NaN
1 To be discarded NaN NaN NaN
2 Series Id: CUUR0000SAF1 Abc NaN NaN
3 To be discarded NaN NaN NaN
4 To be discarded NaN NaN NaN
5 To be discarded NaN NaN NaN
6 Item: Food Xyz NaN NaN
7 To be discarded NaN NaN NaN
8 To be discarded NaN NaN NaN
9 To be discarded NaN NaN NaN
10 To be discarded NaN NaN NaN
11 Year Jan Feb Mar
12 2010 -0.4 -0.2 0.2
13 2011 1.8 2.3 2.9
Year Jan Feb Mar View Description Series ID Series Name
0 2010 -0.4 -0.2 0.2 12-Month Percent Change Abc Xyz
1 2011 1.8 2.3 2.9 12-Month Percent Change Abc Xyz
</code></pre>
|
python|pandas|dataframe|numpy
| 1 |
1,908,404 | 72,022,107 |
Read Plain Text Document in pandas, only one column
|
<p><a href="https://i.stack.imgur.com/tsd5w.png" rel="nofollow noreferrer">here is the photo</a></p>
<p>how can I FIX this, thank you</p>
|
<p>You have to specify that your <code>csv</code> file separator is <code>whitespace</code>.
To do so you have to add <code>sep='\s+'</code>, which says that the separator in the file is one or more spaces (a regex expression).</p>
<h2>The better way</h2>
<p>You can specify <code>delim_whitespace=True</code> as a parameter instead, as it is faster than the <code>regex</code> shown above.</p>
<p>So your code should be like:</p>
<pre><code>pd.read_csv("beer_drug_1687_1739.txt", header=None, delim_whitespace=True)
</code></pre>
<p>I also see that your first row has the <strong>names of your columns</strong>, so you have to change <code>header=None</code> to <code>header=[0]</code> to get the names of your columns.</p>
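<p>Putting both suggestions together, the call could look like this (a sketch; <code>header=0</code> is the plain integer form for "the first row holds the column names"):</p>
<pre><code>import pandas as pd

df = pd.read_csv("beer_drug_1687_1739.txt", header=0, delim_whitespace=True)
</code></pre>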
<p>If you have any questions feel free to ask.</p>
|
python|pandas|jupyter-notebook
| 0 |
1,908,405 | 71,715,782 |
Tensorflow module is not found when running a code on AWS Deep Learning AMI (p2.xlarge)
|
<p>when running the following code from a jupyter notebook in the ec2 instance:</p>
<p><code>from keras.datasets import imdb</code></p>
<p>the following error message pops out:</p>
<pre><code>ModuleNotFoundError: No module named 'tensorflow'
</code></pre>
<p>I tried installing tensorflow using pip / conda, e.g. <code>pip install tensorflow</code>, but the error still persists.
Aren't these packages already pre-installed on the deep learning instance, and why does it not let me install it on my own?</p>
|
<p>The issue is resolved. It was caused by running the jupyter notebook server in the wrong environment of the instance (in base instead of tensorflow_p37).</p>
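<p>For anyone hitting the same thing: on the Deep Learning AMI the fix is to activate the environment before starting the server (a sketch; the exact environment name comes from <code>conda env list</code> on the instance):</p>
<pre><code>source activate tensorflow_p37
jupyter notebook
</code></pre>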
|
python|amazon-web-services|tensorflow|amazon-ec2|deep-learning
| 0 |
1,908,406 | 71,743,582 |
attempt to get argmax of an empty sequence
|
<p>When I try to execute this code, it shows an 'attempt to get argmax of an empty sequence' error.</p>
<p>Code:</p>
<pre><code>import re
import numpy as np
output_directory = './fine_tuned_model'
lst = os.listdir(model_dir)
lst = [l for l in lst if 'model.ckpt-' in l and '.meta' in l]
steps = np.array([int(re.findall(r'\d+', l)[0]) for l in lst])
last_model = lst[steps.argmax()].replace('.meta', '')
last_model_path = os.path.join(model_dir, last_model)
print(last_model_path)
!python /content/models/research/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path={pipeline_fname} \
--output_directory={output_directory} \
--trained_checkpoint_prefix={last_model_path}
</code></pre>
<p><a href="https://i.stack.imgur.com/5pdpm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5pdpm.png" alt="" /></a></p>
|
<p>I think the error is saying exactly what is happening: you are creating <code>steps</code> with no data, so <code>argmax()</code> won't run. Perhaps you need to adjust how it's loaded so that data actually ends up in <code>steps</code>; it's hard to say based on the info provided.</p>
<h2>Simplified Example to Demonstrate Issue:</h2>
<pre><code>steps = np.array([1,2,3])
#Works fine with data
print(steps.argmax())
#argmax() throws an error since the array is empty
emptySteps = np.delete(steps,[0,1,2])
print(emptySteps.argmax())
</code></pre>
<h2>Possible Workaround</h2>
<p>It appears you are searching a directory. If there are no matching files, you may not want an error. You could achieve this with a simple check before running to see if there are files to process:</p>
<pre><code>if steps.size > 0:
print("Do something with argmax()")
else:
print ("No data in steps array")
</code></pre>
|
python
| 0 |
1,908,407 | 5,214,044 |
Cluto like library for Python
|
<p>I like Cluto a lot as data clustering software, but its library binding is available only in C.</p>
<p>Is there any Python library similar to Cluto?</p>
|
<p>I've not tried any of them, but there are a few things which say they do clustering in Python:</p>
<ul>
<li><a href="http://docs.scipy.org/doc/scipy/reference/cluster.html" rel="nofollow">scipy.cluster</a> (numpy and scipy are the two mainstays of serious numerical computation in Python)</li>
<li><a href="http://docs.scipy.org/doc/scipy/reference/cluster.html" rel="nofollow">Pycluster</a></li>
<li><a href="http://pypi.python.org/pypi/hcluster/0.2.0" rel="nofollow">hcluster</a> (looks like it's not been updated in a couple of years)</li>
</ul>
<p>If none of those do the trick, you could use something like <a href="http://docs.python.org/library/ctypes.html" rel="nofollow">ctypes</a> to call functions from Cluto, although it won't be as elegant.</p>
|
python|cluster-analysis|cluto
| 1 |
1,908,408 | 5,354,048 |
Python multiple comparisons style?
|
<p>I am wondering if there is a way to do the following in a more compact style:</p>
<pre><code>if (text == "Text1" or text=="Text2" or text=="Text3" or text=="Text4"):
do_something()
</code></pre>
<p>The problem is I have more than just 4 comparisons in the if statement, and it's starting to look rather long, ambiguous, and ugly. Any ideas?</p>
|
<p>How about this:</p>
<pre><code>if text in ( 'Text1', 'Text2', 'Text3', 'Text4' ):
do_something()
</code></pre>
<p>I've always found that simple and elegant.</p>
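<p>And if the list of options grows large, a <code>set</code> gives constant-time membership tests:</p>
<pre><code>VALID_TEXTS = {'Text1', 'Text2', 'Text3', 'Text4'}

if text in VALID_TEXTS:
    do_something()
</code></pre>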
|
python|coding-style|comparison|readability
| 16 |
1,908,409 | 62,492,306 |
How do I manually authenticate and log in a user in Django?
|
<pre><code> if request.method=='POST':
try:
email=request.POST['email']
password = request.POST['password']
if StudentUser.objects.filter(email=email).exists():
user = StudentUser.objects.get(email=email)
if user.check_password(password):
user = auth.authenticate(email=email)
if user is not None:
auth.login(request,user)
messages.success(request,'Successfully Loggedin')
return redirect('/')
else:
messages.warning(request,'Password does not match')
return redirect('login')
else:
messages.error(request,'No Account registered with this mail')
return redirect('login')
except Exception as problem:
messages.error(request,problem)
return redirect('login')
return render(request,'login.html')
</code></pre>
<p>In the above code I am trying to authenticate the user manually, but it is not working. I want to authenticate a user by manually checking the password.
How can I do it?</p>
<p>** When I pass the password to the <code>auth.authenticate</code> function, it shows the "Password does not match" error.</p>
|
<p>You should pass the password as an argument to the <code>authenticate</code> method.</p>
<p>From <a href="https://docs.djangoproject.com/en/3.0/topics/auth/default/#authenticating-users" rel="nofollow noreferrer">docs</a></p>
<pre><code>user = authenticate(username='john', password='secret')
</code></pre>
<blockquote>
<p>Use authenticate() to verify a set of credentials. It takes
credentials as keyword arguments, username and password for the
default case, checks them against each authentication backend, and
returns a User object if the credentials are valid for a backend.</p>
</blockquote>
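<p>Applied to the code in the question, that would be something like this (a sketch, assuming the configured backend accepts <code>email</code> as the credential keyword, as the question's code implies):</p>
<pre><code>user = auth.authenticate(email=email, password=password)
if user is not None:
    auth.login(request, user)
</code></pre>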
|
python|django|django-models|django-views|django-authentication
| 1 |
1,908,410 | 62,848,647 |
Google Cloud Build can't find Python venv... For Java Spring Boot project
|
<p>I have a usual Java 11 Spring Boot application that is deployed to Heroku at the moment.</p>
<p>I can deploy the app manually to AppEngine via a local call to <code>gcloud app deploy</code></p>
<p>However, I've been struggling for about 2 hours to make Google Cloud Build build and deploy the application automatically. It crashes, not being able to find Python, but I have absolutely no idea why it tries to look for Python at all.</p>
<p>Here is the <code>cloudbuild.yaml</code>:</p>
<pre><code>steps:
  - name: maven:3.6.3-adoptopenjdk-11
    entrypoint: mvn
    args: ['--version']
  - name: maven:3.6.3-adoptopenjdk-11
    entrypoint: mvn
    args: ['package']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy', 'app.yaml']
</code></pre>
<p>Here is the app yaml in the same root folder:</p>
<pre><code>runtime: java11
env: standard
instance_class: F2
automatic_scaling:
max_instances: 1
</code></pre>
<p>Here is the error part of the output:</p>
<pre><code>Step #2: descriptor: [/workspace/app.yaml]
Step #2: source: [/workspace]
Step #2: target project: [atomic-parity-282520]
Step #2: target service: [default]
Step #2: target version: [20200711t113051]
Step #2: target url: [https://atomic-parity-282520.ew.r.appspot.com]
Step #2:
Step #2:
Step #2: Do you want to continue (Y/n)?
Step #2: Beginning deployment of service [default]...
Step #2: Created .gcloudignore file. See `gcloud topic gcloudignore` for details.
Step #2: ERROR: gcloud crashed (OSError): [Errno 2] No such file or directory: '/workspace/venv/bin/python3'
Step #2:
Step #2: If you would like to report this issue, please run the following command:
Step #2: gcloud feedback
Step #2:
Step #2: To check gcloud for common problems, please run the following command:
Step #2: gcloud info --run-diagnostics
Finished Step #2
ERROR
ERROR: build step 2 "gcr.io/cloud-builders/gcloud" failed: step exited with non-zero status: 1
</code></pre>
|
<p>It turned out that some time ago we had added a few Python scripts for DB migration. In addition to the scripts, a <code>venv</code> folder was created.</p>
<p>It looks like App Engine takes it as a clue that the project is Python-based, even if you provide a specific <code>app.yaml</code> telling it to use Java.</p>
<p>Deleting the <code>venv</code> folder resolved the issue.</p>
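<p>More generally, a <code>venv</code> folder shouldn't be committed to the repository at all; keeping it ignored means it never reaches the Cloud Build workspace in the first place (a sketch, assuming git is used):</p>
<pre><code># .gitignore
venv/
</code></pre>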
|
java|python|google-cloud-platform|gcloud|google-cloud-build
| 0 |
1,908,411 | 61,645,572 |
Custom User model in django. UNIQUE constraint failed: users_user.username
|
<p>I am trying to make a custom registration form. The main idea is that users should use email as the login method and be able to set a username in their profile (so I don't want to override the username field). The problem with the built-in Django User model is that the username field is required, so I made my own model based on AbstractUser. Now when I try to register a new user I get "UNIQUE constraint failed: users_user.username".</p>
<p>models.py</p>
<pre><code>from django.db import models
from django.contrib.auth.models import AbstractUser
class User(AbstractUser):
pass
</code></pre>
<p>forms.py</p>
<pre><code>from django import forms
from .models import User
class RegisterForm(forms.ModelForm):
username = forms.CharField(max_length=100, required=False)
email = forms.EmailField(label='', max_length=100, widget=forms.TextInput(attrs={'placeholder': 'email@example.com'}))
password = forms.CharField(label='', max_length=100, widget=forms.PasswordInput(attrs={'placeholder': 'Password'}))
class Meta:
model = User
fields = ['username', 'email', 'password']
</code></pre>
<p>views.py</p>
<pre><code>from django.shortcuts import render, redirect
from django.contrib import messages
from .forms import RegisterForm
def register(request):
if request.method == 'POST':
form = RegisterForm(request.POST)
if form.is_valid():
form.save()
return redirect('trade')
else:
form = RegisterForm()
return render(request, 'users/register.html', {'form': form})
</code></pre>
<p>AUTH_USER_MODEL = 'users.User' is set</p>
<p>I tried to set email unique=True in models.py</p>
<pre><code>from django.db import models
from django.contrib.auth.models import AbstractUser
class User(AbstractUser):
email = models.EmailField(unique=True)
</code></pre>
<p>Now I get "The view users.views.register didn't return an HttpResponse object. It returned None instead" instead.</p>
<p>Keep in mind that this is my first Django project on my own :)
Help is appreciated!</p>
<p>EDIT with solution:</p>
<p>Well, I solved this. All the answers are in the Django documentation (who would've thought). The problem is, if you are new to Python and Django as I am, reading the documentation about the custom user model can be very overwhelming. The main goal was to set email as the login method. To do that you have to set USERNAME_FIELD = 'email' in your model, and to be able to do that you have to base your model not on AbstractUser but on AbstractBaseUser (<a href="https://docs.djangoproject.com/en/3.0/topics/auth/customizing/#specifying-a-custom-user-model" rel="nofollow noreferrer">https://docs.djangoproject.com/en/3.0/topics/auth/customizing/#specifying-a-custom-user-model</a>) and rewrite how you create users. This seemed pretty hard to me, but Django has a very nice example of how to do it right at the end of the documentation (<a href="https://docs.djangoproject.com/en/3.0/topics/auth/customizing/#a-full-example" rel="nofollow noreferrer">https://docs.djangoproject.com/en/3.0/topics/auth/customizing/#a-full-example</a>). I just copied the code, replaced 'date_of_birth' with 'username', and got exactly what I wanted, plus a little bit of understanding of how things work on top of that.</p>
|
<p>With regards to the error <code>"The view users.views.register didn't return an HttpResponse object. It returned None instead"</code>: this is happening because <code>register</code> doesn't return something in all of the flows. In the case where the request is a <code>POST</code>, the first if statement is true, so that's the branch we're in. If the form is valid, all is good, but if it's not, nothing is returned. We don't enter the else part, because that only happens when the request isn't a <code>POST</code>. You could fix it by doing the following:</p>
<pre><code>def register(request):
if request.method == 'POST':
form = RegisterForm(request.POST)
if form.is_valid():
form.save()
return redirect('trade')
else:
form = RegisterForm()
return render(request, 'users/register.html', {'form': form})
</code></pre>
<p>This way, in all situations you return something.</p>
<p>With regards to the original error: since you are inheriting from <code>AbstractUser</code>, you are inheriting the <code>username</code> field and all the behaviours associated with it. In particular, it is still required to be unique. This is how it is defined in <code>AbstractUser</code>:</p>
<pre><code> username = models.CharField(
_('username'),
max_length=150,
unique=True,
help_text=_('Required. 150 characters or fewer. Letters, digits and @/./+/-/_ only.'),
validators=[username_validator],
error_messages={
'unique': _("A user with that username already exists."),
},
)
</code></pre>
<p>This is going to cause you problems if you are submitting blank values for your username. Your form will allow it, but it won't be allowed at the database level. Therefore I would override <code>username</code>; just add something like this:</p>
<pre><code> username = models.CharField(
max_length=150,
blank=True,
)
</code></pre>
|
python|django
| 0 |
1,908,412 | 61,843,210 |
Convert multiple dataframes to lower case
|
<p>I would like to put all the dataframes' rows in lower case. I am considering multiple dataframes, so I am doing a for loop through them.
I have tried as follows</p>
<pre><code>for i, file in enumerate(files):
df[str(i)]= pd.read_csv(file)
df[str(i)].apply(lambda x: x.astype(str).str.lower())
</code></pre>
<p>Unfortunately, it is not returning rows in lower case.
I have followed the answer given in a previous post: <a href="https://stackoverflow.com/questions/39512002/convert-whole-dataframe-from-lower-case-to-upper-case-with-pandas">Convert whole dataframe from lower case to upper case with Pandas</a></p>
<p>Could you please tell me what is wrong in the code above? Thanks</p>
|
<p>It looks like you're putting your DataFrames into a dictionary; that definitely helps.<br>
But you have to assign the result of the <code>.apply()</code> operation to something.<br>
As it is, it's not being saved anywhere.<br>
Try instead (with <code>df</code> renamed to be more clear):</p>
<pre><code>df_dict = {}
for i, f in enumerate(files):
df_dict[str(i)] = pd.read_csv(f)
df_dict[str(i)] = df_dict[str(i)].apply(lambda x: x.astype(str).str.lower())
</code></pre>
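<p>Or, a bit more compactly, as a dict comprehension:</p>
<pre><code>df_dict = {str(i): pd.read_csv(f).apply(lambda x: x.astype(str).str.lower())
           for i, f in enumerate(files)}
</code></pre>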
|
python|pandas
| 1 |
1,908,413 | 60,571,340 |
Django: renew field from database, calculated field in database
|
<p>Newbie in Django here; I have two questions and can't find the needed info.</p>
<p>1) I have a database (SQLite) with a table scale_calibration and a field weight. Another application rewrites the value in the weight field 1-2 times per second. Is there a possibility in Django to refresh this field without reloading the browser (F5)?</p>
<p>models.py:</p>
<pre><code>from django.db import models
class Calibration(models.Model):
mean_weight = models.FloatField(editable=True)
hours_to_export = models.PositiveIntegerField(default=4, editable=True)
weight = models.FloatField(editable=True)
</code></pre>
<p>admin.py:</p>
<pre><code>from django.contrib import admin
from .models import Calibration
# Register your models here.
admin.site.register(Calibration)
</code></pre>
<p>2) I tried following <a href="https://stackoverflow.com/questions/44805303/django-model-method-or-calculation-as-field-in-database">that link</a> to make a simple calculated field (that will be written to the database on save), but I get no results and no error, and I don't understand where I made a mistake.</p>
<p>models.py:</p>
<pre><code>from django.db import models
class Calibration(models.Model):
mean_weight = models.FloatField(editable=True)
hours_to_export = models.PositiveIntegerField(default=4, editable=True)
weight = models.FloatField(editable=True)
calibration_factor = models.FloatField(editable=True)
@property
def get_calibration(self):
return self.weight/self.mean_weight
def save(self, *args, **kwarg):
self.calibration_factor = self.get_calibration()
super(Calibration, self).save(*args, **kwarg)
</code></pre>
<p>Please help with advise.</p>
|
<p>As @david-alford mentions, AJAX is a good solution to your problem. This simply means writing JavaScript in your templates that makes a request every <code>n</code> seconds and updates your webpage. You will also need a new endpoint in your Django app that provides the updated values from your model to these repeated requests.</p>
<p>If this sounds weird or complicated, take a look at some <a href="https://simpleisbetterthancomplex.com/tutorial/2016/08/29/how-to-work-with-ajax-request-with-django.html" rel="nofollow noreferrer">AJAX examples with Django</a>, and feel free to ask more specific questions for clarifications.</p>
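<p>For reference, the server side of such a polling endpoint can be very small (a sketch; the view name and URL wiring are illustrative, not from the question):</p>
<pre><code>from django.http import JsonResponse

from .models import Calibration

def latest_weight(request):
    # Return the current weight so the page's JavaScript can poll it.
    calibration = Calibration.objects.latest('pk')
    return JsonResponse({'weight': calibration.weight})
</code></pre>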
|
python|django|field
| 0 |
1,908,414 | 60,619,403 |
datalab folder is missing from root in google colab
|
<p>I'm trying to run some machine learning in Google Colab.</p>
<p>However, there is one line that says</p>
<pre><code>%cd ~/datalab
</code></pre>
<p>But it can't find the 'datalab' folder.</p>
<pre><code>[Errno 2] No such file or directory: '/root/datalab'
</code></pre>
<p>I checked if it's there in content and root using !ls and os.listdir but there's nothing.</p>
<p>I also tried mounting my content folder but it doesn't fix the datalab problem.</p>
<p>Why is 'datalab' missing?</p>
<p>How can I fix it?</p>
<p>Any help would be appreciated, thanks!</p>
|
<p>The error names the problem. You'll want to create the directory before attempting to switch to it using a command like:</p>
<pre><code>!mkdir -p /root/datalab
</code></pre>
|
python|machine-learning|directory|google-colaboratory
| 0 |
1,908,415 | 64,527,109 |
How to concatenate pandas dataframes with automatic keys?
|
<p>Following on from <a href="https://stackoverflow.com/questions/64524854/how-to-concatenate-dataframes-as-new-group/64525024#64525024">an earlier question</a>:</p>
<p>I have</p>
<pre><code>df1 = pd.DataFrame(
[
{'a': 1},
{'a': 2},
{'a': 3},
]
)
df2 = pd.DataFrame(
[
{'a': 4},
{'a': 5},
]
)
</code></pre>
<p>And I want</p>
<pre><code> df_id a
1 1
2
3
2 4
5
</code></pre>
<p>I accepted an answer too soon, that told me to do</p>
<pre><code>pd.concat([df1, df2], keys=[1,2])
</code></pre>
<p>which gives the correct result, but [1,2] is hardcoded.</p>
<p>I also want this to be incremental, meaning given</p>
<p>df3</p>
<pre><code> df_id a
1 1
2
3
2 4
5
</code></pre>
<p>and</p>
<pre><code>df4 = pd.DataFrame(
[
{'a': 6},
{'a': 7},
]
)
</code></pre>
<p>I want the concatenation to give</p>
<pre><code> df_id a
1 1
2
3
2 4
5
3 6
7
</code></pre>
<p>Using the same function.</p>
<p>How can I achieve this correctly?</p>
<hr />
<p><strong>EDIT</strong>: A discount: I can manage with only the incrementing function. It doesn't have to work with the single-level dfs, but it would be nice if it did.</p>
|
<p>IIUC,</p>
<pre><code>def split_list_by_multiindex(l):
    # Separate dataframes that already have a MultiIndex from plain ones.
    l_multi, l_not_multi = [], []
    for df in l:
        if isinstance(df.index, pd.MultiIndex):
            l_multi.append(df)
        else:
            l_not_multi.append(df)
    return l_multi, l_not_multi

def get_start_key(df):
    # Last outer key already used by a MultiIndexed dataframe.
    return df.index.get_level_values(0)[-1]

def concat_starting_by_key(l, key):
    # Concatenate plain dataframes under outer keys key, key+1, ...
    return pd.concat(l, keys=range(key, key+len(l))) \
           if len(l) > 1 else set_multiindex_in_df(l[0], key)

def set_multiindex_in_df(df, key):
    return df.set_axis(pd.MultiIndex.from_product(([key], df.index)))

def myconcat(l):
    l_multi, l_not_multi = split_list_by_multiindex(l)
    return pd.concat([*l_multi,
                      concat_starting_by_key(l_not_multi,
                                             get_start_key(l_multi[-1]) + 1)
                     ]) if l_multi else concat_starting_by_key(l_not_multi, 1)
</code></pre>
<p><strong>Examples</strong></p>
<pre><code>l1 = [df1, df2]
print(myconcat(l1))
a
1 0 1
1 2
2 3
2 0 4
1 5
</code></pre>
<hr />
<pre><code>l2 = [myconcat(l1), df4]
print(myconcat(l2))
a
1 0 1
1 2
2 3
2 0 4
1 5
3 0 6
1 7
</code></pre>
<hr />
<pre><code>myconcat([df4, myconcat([df1, df2]), df1, df2])
a
1 0 1
1 2
2 3
2 0 4
1 5
3 0 6
1 7
4 0 1
1 2
2 3
5 0 4
1 5
</code></pre>
<p><strong>Note</strong></p>
<p>This assumes that if we make a concatenation of the dataframes belonging to the <code>l_multi</code> <code>list</code>, the resulting dataframe would already be ordered</p>
|
python|pandas
| 1 |
1,908,416 | 53,122,159 |
How to run several functions which display information in a GUI one after another?
|
<p>I'm pretty new to Python and have started building a GUI that displays news information. I've created five functions which, when called, display the relevant information in the window. Below is a snippet of the functions themselves:</p>
<pre><code># first function which creates new labels and fills them with the relevant site pic,
# first article title, and description.
def fn1():
label_maker(infoFrame, 0, 0, 630, 389, image=newImage1,
background='red')
label_maker(infoFrame, 630, 0, 655, 389, text=entry1.title,
background='blue', font=("", 20), wraplength=600)
label_maker(infoFrame, 0, 389, 1286, 389, text=entry1.description,
wraplength=1250, font=("", 16),
background='green')
# second function to create labels and fill them with relevant info
def fn2():
label_maker(infoFrame, 0, 0, 630, 389, image=newImage2,
background='red')
label_maker(infoFrame, 630, 0, 655, 389, text=entry2.title,
background='blue', font=("", 20), wraplength=600)
label_maker(infoFrame, 0, 389, 1286, 389, text=entry2.description,
wraplength=1250, font=("", 16),
background='green')
# third
def fn3():
label_maker(infoFrame, 0, 0, 630, 389, image=newImage3,
background='red')
label_maker(infoFrame, 630, 0, 655, 389, text=entry3.title,
background='blue', font=("", 20), wraplength=600)
label_maker(infoFrame, 0, 389, 1286, 389, text=entry3.description,
wraplength=1250, font=("", 16),
background='green')
# fourth
def fn4():
label_maker(infoFrame, 0, 0, 630, 389, image=newImage4,
background='red')
label_maker(infoFrame, 630, 0, 655, 389, text=entry4.title,
background='blue', font=("", 20), wraplength=600)
label_maker(infoFrame, 0, 389, 1286, 389, text=entry4.description,
wraplength=1250, font=("", 16),
background='green')
# fifth
def fn5():
label_maker(infoFrame, 0, 0, 630, 389, image=newImage5,
background='red')
label_maker(infoFrame, 630, 0, 655, 389, text=entry5.title,
background='blue', font=("", 20), wraplength=600)
label_maker(infoFrame, 0, 389, 1286, 389, text=entry5.description,
wraplength=1250, font=("", 16),
background='green')
</code></pre>
<p>Also, here is the label_maker function for clarification:</p>
<pre><code># define a new label maker function to construct labels within frames that
will
# be placed within infoFrame
def label_maker(master, x, y, w, h, *args, **kwargs):
frame = Frame(master, width=w, height=h)
frame.pack_propagate(0)
frame.place(x=x, y=y)
label = Label(frame, *args, **kwargs).pack(fill=BOTH, expand=1)
return label
</code></pre>
<p>I want to run each of these functions in a rotation of sorts, where one function runs for ~15 sec, then the next one runs, and so on until the window is closed. I've tried using the <code>after()</code> method, but in the way I used it, the functions ran without displaying anything until the last function was called. How can I loop these one after another and actually have them display the relevant information?</p>
|
<p>I am not 100% sure where your <code>entry1</code> and other entry values come from, so I just built some dictionaries to use for now.</p>
<p>I think one of the problems you are facing is stacking widgets on top of the last set, when you could easily build them once and then update them.</p>
<p>I have reworked your code into something functional on my end and made some changes to your label_maker. Please note that your <code>place()</code> statement is causing a big visual problem here.</p>
<p><code>place()</code> will not affect the size of your frames, so when you only use <code>place()</code> for the widgets in a frame, the frame will always have zero size. It's just not going to work. You need to control the size of your frames somehow.</p>
<p>With that said I have simplified the issue by just updating the labels.</p>
<p>Let me know if you have any questions.</p>
<p>I used 3 different colors of squares on my end so I can at least get your code working. Just switch out the different image paths.</p>
<p>The below code will change the labels ever 15 seconds before closing at the end.</p>
<pre><code>import tkinter as tk
def manage_time():
global tracker
if tracker == 1:
lbl1.config(image=newImage2)
lbl2.config(text=entry2['title'])
lbl3.config(text=entry2['description'])
tracker = 2
root.after(15000, manage_time)
elif tracker == 2:
lbl1.config(image=newImage3)
lbl2.config(text=entry3['title'])
lbl3.config(text=entry3['description'])
tracker = 3
root.after(15000, manage_time)
else:
root.destroy()
def label_maker(master, x, y, w, h, *args, **kwargs):
label = tk.Label(master, *args, **kwargs)
label.pack(fill="both", expand=1)
return label
root = tk.Tk()
tracker = 1
infoFrame = tk.Frame(root, width=500, height=500)
infoFrame.pack()
""" All the code for your images and entry fields"""
entry1 = {"title":"entry1", "description":"description for entry 1"}
entry2 = {"title":"entry2", "description":"description for entry 2"}
entry3 = {"title":"entry3", "description":"description for entry 3"}
newImage1 = tk.PhotoImage(file="./RGB/blue.gif")
newImage2 = tk.PhotoImage(file="./RGB/red.gif")
newImage3 = tk.PhotoImage(file="./RGB/green.gif")
lbl1 = label_maker(infoFrame, 0, 0, 630, 389, image=newImage1, background='red')
lbl2 = label_maker(infoFrame, 630, 0, 655, 389, text=entry1['title'], background='blue',)
lbl3 = label_maker(infoFrame, 0, 389, 1286, 389, text=entry1['description'], background='green')
root.after(15000, manage_time)
root.mainloop()
</code></pre>
|
python|tkinter
| 0 |
1,908,417 | 70,042,455 |
Getting an error when I try to run my code with the pyautogui click function
|
<p>I'm a python beginner, and I was making a small script/macro that executes a specific command when I press q. In this case it should just press 2 double clicks and press 1, but for some reason, when I added the <code>pyautogui.click(clicks=2, intervals=0.25)</code>function it broke my code</p>
<pre><code>from pynput.keyboard import Key, Listener
import pyautogui
from pynput import keyboard
def action():
pyautogui.press("2")
pyautogui.click(clicks=2, intervals=0.25)
pyautogui.press("1")
def on_press(key):
try:
if key.char == "q":
action()
except AttributeError:
pass
def Stop_listner(key):
if key == Key.esc:
return False
# Collect keyboard inputs
with Listener(on_press=on_press, on_release=Stop_listner) as listener:
listener.join()
</code></pre>
<p>Error:</p>
<pre><code>C:\Users\sanch\AppData\Local\Programs\Python\Python39\python.exe C:/Users/sanch/PycharmProjects/pythonProject/ThrowAwayProjects/ideas.py
Unhandled exception in listener callback
Traceback (most recent call last):
File "C:\Users\sanch\AppData\Local\Programs\Python\Python39\lib\site-packages\pynput\_util\__init__.py", line 211, in inner
return f(self, *args, **kwargs)
File "C:\Users\sanch\AppData\Local\Programs\Python\Python39\lib\site-packages\pynput\keyboard\_win32.py", line 284, in _process
self.on_press(key)
File "C:\Users\sanch\AppData\Local\Programs\Python\Python39\lib\site-packages\pynput\_util\__init__.py", line 127, in inner
if f(*args) is False:
File "C:\Users\sanch\PycharmProjects\pythonProject\ThrowAwayProjects\ideas.py", line 13, in on_press
action()
File "C:\Users\sanch\PycharmProjects\pythonProject\ThrowAwayProjects\ideas.py", line 7, in action
pyautogui.click(clicks=2, intervals=0.25)
File "C:\Users\sanch\AppData\Local\Programs\Python\Python39\lib\site-packages\pyautogui\__init__.py", line 586, in wrapper
returnVal = wrappedFunction(*args, **kwargs)
TypeError: click() got an unexpected keyword argument 'intervals'
Traceback (most recent call last):
File "C:\Users\sanch\PycharmProjects\pythonProject\ThrowAwayProjects\ideas.py", line 23, in <module>
listener.join()
File "C:\Users\sanch\AppData\Local\Programs\Python\Python39\lib\site-packages\pynput\_util\__init__.py", line 259, in join
six.reraise(exc_type, exc_value, exc_traceback)
File "C:\Users\sanch\AppData\Local\Programs\Python\Python39\lib\site-packages\six.py", line 718, in reraise
raise value.with_traceback(tb)
File "C:\Users\sanch\AppData\Local\Programs\Python\Python39\lib\site-packages\pynput\_util\__init__.py", line 211, in inner
return f(self, *args, **kwargs)
File "C:\Users\sanch\AppData\Local\Programs\Python\Python39\lib\site-packages\pynput\keyboard\_win32.py", line 284, in _process
self.on_press(key)
File "C:\Users\sanch\AppData\Local\Programs\Python\Python39\lib\site-packages\pynput\_util\__init__.py", line 127, in inner
if f(*args) is False:
File "C:\Users\sanch\PycharmProjects\pythonProject\ThrowAwayProjects\ideas.py", line 13, in on_press
action()
File "C:\Users\sanch\PycharmProjects\pythonProject\ThrowAwayProjects\ideas.py", line 7, in action
pyautogui.click(clicks=2, intervals=0.25)
File "C:\Users\sanch\AppData\Local\Programs\Python\Python39\lib\site-packages\pyautogui\__init__.py", line 586, in wrapper
returnVal = wrappedFunction(*args, **kwargs)
TypeError: click() got an unexpected keyword argument 'intervals'
Process finished with exit code 1
</code></pre>
|
<p>Looks like <code>pyautogui.click()</code> doesn't like <code>intervals=0.25</code>. Try:</p>
<pre><code>pyautogui.click(clicks=2, interval=0.25)
</code></pre>
<p>Change the "intervals" option to "interval"</p>
|
python|pyautogui|pynput
| 0 |
1,908,418 | 70,071,805 |
Python Problems with Guessing Numbers
|
<p>I am new to coding in general, and I have looked up how to fix this, but when I input a letter, then a letter again, and then a number, this happens:</p>
<pre><code>ges a number 1 tho 100 t
your ges is t
plz input a positive number below 100 that is not a word. exmp(42)
ges a number 1 tho 100 w
your ges is w
plz input a positive number below 100 that is not a word. exmp(42)
</code></pre>
<p>This is where I get confused:</p>
<pre><code> ges a number 1 tho 100 2
your ges is 2
2
your ges is t
plz input a positive number below 100 that is not a word. exmp(42)
ges a number 1 tho 100 2
your ges is 2
2
None
Traceback (most recent call last):
File "c:\Users\BertD\OneDrive\Desktop\random_number_gessing_game.py", line 20, in<module>
ges = int(ges)
ValueError: invalid literal for int() with base 10: 't'
</code></pre>
<p>I am confused about how Python is coming up with this.
It's a tough puzzle for me to solve.</p>
<pre><code>import random
ges = input("ges a number 1 tho 100 ")
def valid_ges(num):
while True:
print("your ges is "+str(num))
if num.isdigit():
print(num)
break
elif num != num.isdigit():
print("plz input a positive number below 100 that is not a word. exmp(42)")
num = input("ges a number 1 tho 100 ")
elif num > 100:
print("try a number below 100")
num = input("pic a number 1 tho 100 ")
valid_ges(ges)
print(valid_ges(ges))
r = random.randint(1, 100)
ges = int(ges)
while ges != r:
if (ges <= 100):
print("your "+str(r - ges)+" away")
ges = abs(int(input("pic a number 1 tho 100 ")))
elif(ges == r):
break
print("yay you did it")
</code></pre>
<p>Thank you for your time and have a nice day!</p>
|
<p>This is a Python ValueError which occurs when you try to convert a string value that is not formatted as an integer.
Here, the line which causes the error is <code>ges = int(ges)</code>: when you first execute, you give <code>ges</code> the string <code>t</code>, and its value is never changed by <code>valid_ges</code> while guessing again. Go through your code.</p>
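<p>A minimal sketch of that idea: make the validation function <em>return</em> the cleaned-up number and assign it back, instead of discarding it:</p>
<pre><code>def valid_ges(num):
    # Keep asking until the input is a number from 1 to 100.
    while not num.isdigit() or int(num) > 100:
        print("plz input a positive number below 100 that is not a word. exmp(42)")
        num = input("ges a number 1 tho 100 ")
    return int(num)

ges = valid_ges(input("ges a number 1 tho 100 "))
</code></pre>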
|
python
| 0 |
1,908,419 | 70,569,620 |
Convert decimal string to integer and include the zeros
|
<p>I have an output that is a string, such as: price: "0.00007784"</p>
<p>I'd like to convert that string to a number so that I can perform arithmetic on it (add 10% to it, for example). However, when I try
<code>convertedPrice = int(float(number))</code>, I get 0 as the result.
I need the whole string converted to a number, as it is, and <strong>not</strong> in scientific form.</p>
<p>What am I doing wrong?</p>
|
<p>The reason the string is converted to a whole number is that you use the <code>int</code> function, which truncates the float to an integer.</p>
<p>When you print the float value you may get scientific notation, but that does not mean it is stored in scientific notation; this is only the way it is represented when converted back to a string. <a href="https://stackoverflow.com/questions/658763/how-to-suppress-scientific-notation-when-printing-float-values">This</a> question's answer explains how this can be avoided.</p>
<p>An alternative would be to use <a href="https://docs.python.org/3/library/decimal.html" rel="nofollow noreferrer">Decimal</a> if you want the value to be stored very specifically in decimal notation, at the cost of increased computation.</p>
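<p>A quick sketch of both options:</p>
<pre><code>from decimal import Decimal

number = "0.00007784"

# Plain float: fine for arithmetic; format it to avoid scientific notation.
price = float(number)
print(f"{price * 1.10:.10f}")            # add 10%: 0.0000856240

# Decimal: keeps an exact decimal representation.
print(Decimal(number) * Decimal("1.10"))  # 0.0000856240
</code></pre>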
|
python
| 1 |
1,908,420 | 63,629,212 |
How can I import/use two different versions of a library (pytorch) in one program in Python?
|
<p>I need to use two different versions of pytorch in different parts of the same Python webserver. Unfortunately, I can't install them both in the same conda environment that I'm using. I've tried importing one of them from the path itself:</p>
<pre><code>MODULE_PATH = "/home/abc/anaconda3/envs/env/lib/python3.7/site-packages/torch/__init__.py"
MODULE_NAME = "torch"
import importlib
import sys
spec = importlib.util.spec_from_file_location(MODULE_NAME, MODULE_PATH)
module = importlib.util.module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
</code></pre>
<p>Which works fine for importing a different version than the one in the active environment, but then I run into an error when trying to import the second one (I've tried simply 'import torch' and also the same as above):</p>
<pre><code>File "/home/abc/anaconda3/envs/env2/lib/python3.7/site-packages/torch/__init__.py", line 82, in <module>
__all__ += [name for name in dir(_C)
NameError: name '_C' is not defined
</code></pre>
<p>Any ideas on how I can use both versions? Thanks!</p>
|
<p>In principle, importing two libraries with the same name is not possible. Sure, it might be the case that you could do some import-sorcery and manage to do it. But keep in mind that <code>pytorch</code> is not a straightforward Python package.</p>
<p>Now, even if you manage to solve this, it seems extremely strange to me that you need two different versions for your own service. That situation will just be a headache for you in the long run. My advice would be to reconsider how you're doing it.</p>
<p>Without knowing your situation, I'd recommend splitting the web service into two. This will allow you to have two environments and the two versions of <code>pytorch</code> you need.</p>
|
python|import|pytorch|conda|torch
| 2 |
1,908,421 | 55,995,092 |
How do I transform a very large dataframe to get the count of values in all columns (without using df.stack or df.apply)
|
<p>I am working with a very large dataframe (~3 million rows) and I need the count of values from multiple columns, grouped by time-related data.</p>
<p>I have tried to stack the columns, but the resulting dataframe was very long and wouldn't fit in memory. Similarly, df.apply gave memory issues.</p>
<p>For example if my sample dataframe is like,</p>
<pre><code>id,date,field1,field2,field3
1,1/1/2014,abc,,abc
2,1/1/2014,abc,,abc
3,1/2/2014,,abc,abc
4,1/4/2014,xyz,abc,
1,1/1/2014,,abc,abc
1,1/1/2014,xyz,qwe,xyz
4,1/7/2014,,qwe,abc
2,1/4/2014,qwe,,qwe
2,1/4/2014,qwe,abc,qwe
2,1/5/2014,abc,,abc
3,1/5/2014,xyz,xyz,
</code></pre>
<p>I have written the following script that does what's needed for a small sample but fails on a large dataframe.</p>
<pre class="lang-py prettyprint-override"><code>df.set_index(["id", "date"], inplace=True)
df = df.stack(level=[0])
df = df.groupby(level=[0,1]).value_counts()
df = df.unstack(level=[1,2])
</code></pre>
<p>I also have a solution via <code>apply</code> but it has the same complications. </p>
<p>The expected result is, </p>
<pre><code>date 1/1/2014 1/4/2014 ... 1/5/2014 1/4/2014 1/7/2014
abc xyz qwe qwe ... xyz xyz abc qwe
id ...
1 4.0 2.0 1.0 NaN ... NaN NaN NaN NaN
2 2.0 NaN NaN 4.0 ... NaN NaN NaN NaN
3 NaN NaN NaN NaN ... 2.0 NaN NaN NaN
4 NaN NaN NaN NaN ... NaN 1.0 1.0 1.0
</code></pre>
<p>I am looking for a more optimized version of what I have written.</p>
<p>Thanks for the help !!</p>
|
<p>You don't want to use <code>stack</code>. Therefore, another solution is using <code>crosstab</code> on <code>id</code> with each <code>date</code> and field column. Then <code>concat</code> them together, <code>groupby()</code> the index, and <code>sum</code>. Use a list comprehension on <code>df.columns[2:]</code> to create each <code>crosstab</code> (note: I assume the first 2 columns are <code>id</code> and <code>date</code>, as in your sample):</p>
<pre><code>pd.concat([pd.crosstab([df.id], [df.date, df[col]]) for col in df.columns[2:]]).groupby(level=0).sum()
Out[497]:
1/1/2014 1/2/2014 1/4/2014 1/5/2014 1/7/2014
abc qwe xyz abc abc qwe xyz abc xyz abc qwe
id
1 4 1.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2 2 0.0 0.0 0.0 1.0 4.0 0.0 2.0 0.0 0.0 0.0
3 0 0.0 0.0 2.0 0.0 0.0 0.0 0.0 2.0 0.0 0.0
4 0 0.0 0.0 0.0 1.0 0.0 1.0 0.0 0.0 1.0 1.0
</code></pre>
<p>I think showing <code>0</code> is better than <code>NaN</code>. However, if you want <code>NaN</code> instead of <code>0</code>, you just need to chain an additional <code>replace</code> as follows:</p>
<pre><code>pd.concat([pd.crosstab([df.id], [df.date, df[col]]) for col in df.columns[2:]]).groupby(level=0).sum().replace({0: np.nan})
Out[501]:
1/1/2014 1/2/2014 1/4/2014 1/5/2014 1/7/2014
abc qwe xyz abc abc qwe xyz abc xyz abc qwe
id
1 4.0 1.0 2.0 NaN NaN NaN NaN NaN NaN NaN NaN
2 2.0 NaN NaN NaN 1.0 4.0 NaN 2.0 NaN NaN NaN
3 NaN NaN NaN 2.0 NaN NaN NaN NaN 2.0 NaN NaN
4 NaN NaN NaN NaN 1.0 NaN 1.0 NaN NaN 1.0 1.0
</code></pre>
|
python|python-3.x|pandas
| 0 |
1,908,422 | 56,519,690 |
argparse: Require either one flag or 2+ positional arguments
|
<p>I'm writing a program that can either take one flag-argument <code>--list</code> <strong>OR</strong> two or more positional arguments <code>SOURCE [SOURCE ...] DESTINATION</code>. Ideally, when <code>SRC/DST</code> is used it should also accept <code>--recursive</code>, but that can be a global option simply ignored with <code>--list</code>.</p>
<p>For now I have this:</p>
<pre><code>group = parser.add_argument_group('Source / Dest Selection')
group.add_argument('--list', action="store_true")
group.add_argument('--recursive', action="store_true")
group.add_argument('SOURCE', nargs='+')
group.add_argument('DESTINATION')
</code></pre>
<p>However, it always requires SOURCE and DESTINATION. I don't want to make <em>each</em> optional; instead, I would like to either require <em>both</em> <code>SRC</code> and <code>DST</code> or <em>none</em> of them and then require <code>--list</code>.</p>
<p>I would also settle for <em>both or none</em> of SRC/DST and simply ignore them if <code>--list</code> was used.</p>
<p>Any idea how to express that with <code>argparse</code>? Thanks!</p>
|
<p>Very hackish, but you could use multiple parsers. Maybe something like:</p>
<pre><code>import argparse
parser1 = argparse.ArgumentParser()
parser1.add_argument('--list', action="store_true")
parser1.add_argument('DUMMY_POSITIONAL', nargs='*')
args1 = parser1.parse_args()
if not args1.list:
parser2 = argparse.ArgumentParser()
parser2.add_argument('SOURCE', nargs='+')
parser2.add_argument('DESTINATION')
args2 = parser2.parse_args()
if len(args2.SOURCE) == 0:
print("Must specify SOURCE")
else:
print(args2.SOURCE, args2.DESTINATION)
</code></pre>
|
python|argparse
| 0 |
1,908,423 | 56,514,279 |
How to call Python code implemented as a class
|
<p>I'm new to Python; I just installed VS Code on my Ubuntu 18.04 and ran some simple Python code such as:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 20, 100) # Create a list of evenly-spaced numbers over the range
plt.plot(x, np.sin(x)) # Plot the sine of each x point
plt.show() # Display the plot
</code></pre>
<p>but how could I call a python snippet defining a class?</p>
<p>Here I have a solution for the "longest palindromic substring" problem, implemented as a class, but without any entry point similar to C++'s <code>main()</code> function. How shall I call this "longest palindromic substring" code?</p>
<pre><code>class LPS:
"""
@param s: input string
@return: the longest palindromic substring
"""
def longestPalindrome(self, s):
if not s:
return ""
n = len(s)
is_palindrome = [[False] * n for _ in range(n)]
for i in range(n):
is_palindrome[i][i] = True
for i in range(1, n):
is_palindrome[i][i - 1] = True
longest, start, end = 1, 0, 0
for length in range(1, n):
for i in range(n - length):
j = i + length
is_palindrome[i][j] = s[i] == s[j] and is_palindrome[i + 1][j - 1]
if is_palindrome[i][j] and length + 1 > longest:
longest = length + 1
start, end = i, j
return s[start:end + 1]
</code></pre>
|
<p>Outside of the class (and after it!) call</p>
<pre><code>LPS().longestPalindrome("someString")
</code></pre>
<p>Note the parentheses after <code>LPS</code> and before <code>.longestPalindrome</code>. This way you create an object of class <code>LPS</code>, allowing you to call its "nonstatic" methods (see that <code>longestPalindrome</code> has <code>self</code> as a parameter).</p>
<p>Another way would be to call it as</p>
<pre><code>lps = LPS()
lps.longestPalindrome("someString")
</code></pre>
<p>Alternatively, omit the <code>self</code> parameter, which is completely redundant in your case, and call it as</p>
<pre><code>LPS.longestPalindrome("someString")
</code></pre>
<p><em>Note:</em> <code>self</code> is like <code>this</code> in Java.</p>
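<p>And since you mentioned C++'s <code>main()</code>: the closest Python idiom is the module-level guard at the bottom of the file, e.g.:</p>
<pre><code>if __name__ == "__main__":
    lps = LPS()
    print(lps.longestPalindrome("babad"))  # prints "bab"
</code></pre>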
<p><em>Edit:</em> I see some answers omitting <code>()</code> after <code>LPS</code>, like <code>LPS.longestPalindrome("someString")</code>. This is highly unhygienic Python, just like using <code>""</code> for character literals and <code>' '</code> for strings, although both are correct.</p>
|
python|visual-studio-code|ubuntu-18.04
| 4 |
1,908,424 | 60,944,008 |
scrapy-splash gives me this error: "HTTP status code is not handled or not allowed"
|
<pre><code>from scrapy.spiders import Spider
from scrapy_splash import SplashRequest
from ..items import Tutorial2Item
class MySpider(Spider):
name = 'splashspider'
start_urls = ['https://www.livescore.bet3000.com'] #FIRST LEVEL
def start_requests(self):
for url in self.start_urls:
yield SplashRequest(url=url, callback = self.parse, meta ={'splash':{'endpoint':'render.js',
'args':{'wait':0.5,}}} )
# 1. SCRAPING
def parse(self, response):
item = Tutorial2Item()
for game in response.xpath("//div[@id='srlive_matchlist']"):
item["home_team"] = game.xpath("//div[@id='srlive_matchlist']//td[contains(@class,'hometeam team home')][contains(text(),'San Marcos Arica')]").extract_first()
item["away_team"] = game.xpath("//div[@id='srlive_matchlist']//td[contains(@class,'awayteam team away')][contains(text(),'Boston River')]").extract_first()
yield item
</code></pre>
<p>and setting.py is:</p>
<pre><code># -*- coding: utf-8 -*-
# Scrapy settings for tutorial2 project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'tutorial2'
SPIDER_MODULES = ['tutorial2.spiders']
NEWSPIDER_MODULE = 'tutorial2.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'tutorial2 (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
#handle_httpstatus_list = [404]
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'tutorial2.middlewares.Tutorial2SpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'tutorial2.middlewares.Tutorial2DownloaderMiddleware': 543,
#}
#DOWNLOADER_MIDDLEWARES = {
# 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
# 'scrapy_user_agents.middlewares.RandomUserAgentMiddleware': 400,
#}
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
# 'tutorial2.pipelines.Tutorial2Pipeline': 300,
#}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPLASH_URL = 'http://localhost:8050'
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
#HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
#USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20100101 Firefox/7.7'
#DOWNLOAD_DELAY = 0.25
</code></pre>
<p>I have been trying for many days but can't find a solution; either my code has an error or there is some other issue I can't figure out. The crawl gives me this output:</p>
<pre><code>(scrapy-projects) danish-khan@danishkhan-VirtualBox:~/PycharmProjects/scrapy-projects/tutorial2$ scrapy crawl splashspider
2020-03-30 16:35:19 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: tutorial2)
2020-03-30 16:35:20 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.7.6 (default, Jan 8 2020, 19:59:22) - [GCC 7.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d 10 Sep 2019), cryptography 2.8, Platform Linux-4.15.0-91-generic-x86_64-with-debian-stretch-sid
2020-03-30 16:35:20 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'tutorial2', 'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter', 'NEWSPIDER_MODULE': 'tutorial2.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['tutorial2.spiders']}
2020-03-30 16:35:20 [scrapy.extensions.telnet] INFO: Telnet Password: b43580967da382d6
2020-03-30 16:35:21 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2020-03-30 16:35:21 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy_splash.SplashCookiesMiddleware',
'scrapy_splash.SplashMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-03-30 16:35:21 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy_splash.SplashDeduplicateArgsMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-03-30 16:35:21 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-03-30 16:35:21 [scrapy.core.engine] INFO: Spider opened
2020-03-30 16:35:21 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-03-30 16:35:21 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-03-30 16:35:23 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://www.livescore.bet3000.com/robots.txt> (referer: None)
2020-03-30 16:35:23 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://localhost:8050/robots.txt> (referer: None)
2020-03-30 16:35:23 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://www.livescore.bet3000.com via http://localhost:8050/render.js> (referer: None)
2020-03-30 16:35:24 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://www.livescore.bet3000.com>: HTTP status code is not handled or not allowed
2020-03-30 16:35:24 [scrapy.core.engine] INFO: Closing spider (finished)
2020-03-30 16:35:24 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 970,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 2,
'downloader/request_method_count/POST': 1,
'downloader/response_bytes': 1116,
'downloader/response_count': 3,
'downloader/response_status_count/404': 3,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 3, 30, 11, 35, 24, 28203),
'httperror/response_ignored_count': 1,
'httperror/response_ignored_status_count/404': 1,
'log_count/DEBUG': 3,
'log_count/INFO': 10,
'memusage/max': 54149120,
'memusage/startup': 54149120,
'response_received_count': 3,
'robotstxt/request_count': 2,
'robotstxt/response_count': 2,
'robotstxt/response_status_count/404': 2,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'splash/render.js/request_count': 1,
'splash/render.js/response_count/404': 1,
'start_time': datetime.datetime(2020, 3, 30, 11, 35, 21, 853911)}
2020-03-30 16:35:24 [scrapy.core.engine] INFO: Spider closed (finished)
</code></pre>
|
<p>I faced the same problem; putting some extra parameters in the SplashRequest helped me:</p>
<pre><code>yield SplashRequest(url=url, callback=self.parse,
args={'wait': 0.5, 'viewport': '1024x2480', 'timeout': 90, 'images': 0, 'resource_timeout': 10}
)
</code></pre>
<p>On a different note: <code>render.js</code> is not one of Splash's standard endpoints (<code>render.html</code>, <code>render.png</code>, <code>render.json</code> and <code>execute</code> are), so the 404 in your log for <code>http://localhost:8050/render.js</code> most likely comes from Splash itself; the request above works against the default <code>render.html</code> endpoint. Also, the website you are trying to crawl responds very slowly, which may be a problem too. Try another site to confirm.</p>
|
python|web-scraping|scrapy-splash
| 0 |
1,908,425 | 61,139,247 |
Adding one colorbar to multiple plots in one graph
|
<p>I'm trying to attach the colorbar to my MatplotLib plot which plots several plots in one graph (<strong>I'm not looking for a single colorbar to multiple subplots</strong>).</p>
<p>In my script I load files and plot runs of variables, however I'd like to colorize them regarding to the third variable.</p>
<p>I found a way to do it, however it plots colorbar to each plot, and it looks like: <a href="https://i.stack.imgur.com/5VpuP.png" rel="nofollow noreferrer">1</a></p>
<p>I'd like it to look like: <a href="https://i.stack.imgur.com/Bh3bo.png" rel="nofollow noreferrer">2</a>, except every path should be colorized.</p>
<p>Here is my block of code generating the plots:</p>
<pre><code>import os
import glob
import mesa_reader as mesa
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.collections import LineCollection
from matplotlib.patches import Rectangle

fig, ax = plt.subplots(1, 1, sharex=True, sharey=True, figsize=(10, 5), dpi=100)

counter = 0
for fname in glob.glob('LOGS_P_*'):
    a = mesa.MesaData(fname + '/LOGS1/history.data')
    counter = counter + 1
    if counter == 1:
        plt.plot(a.log_Teff, a.log_L, color='black', linestyle='solid', linewidth=0.8)
        points = np.array([a.log_Teff, a.log_L]).T.reshape(-1, 1, 2)
        segments = np.concatenate([points[:-1], points[1:]], axis=1)
        # Create a continuous norm to map from data points to colors
        norm = plt.Normalize(-20, a.lg_mtransfer_rate.max())
        lc = LineCollection(segments, cmap='viridis', norm=norm)
        # Set the values used for colormapping
        lc.set_array(a.lg_mtransfer_rate)
        lc.set_linewidth(2)
        fig.colorbar(ax.add_collection(lc), ax=ax)
    else:
        plt.plot(a.log_Teff, a.log_L, color='black', linestyle='solid', linewidth=0.8)
        points = np.array([a.log_Teff, a.log_L]).T.reshape(-1, 1, 2)
        segments = np.concatenate([points[:-1], points[1:]], axis=1)
        # Create a continuous norm to map from data points to colors
        norm = plt.Normalize(-20, a.lg_mtransfer_rate.max())
        lc = LineCollection(segments, cmap='viridis', norm=norm)
        # Set the values used for colormapping
        lc.set_array(a.lg_mtransfer_rate)
        lc.set_linewidth(2)
        fig.colorbar(ax.add_collection(lc), ax=ax)
</code></pre>
|
<p>This figure</p>
<p><a href="https://i.stack.imgur.com/3HeJV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3HeJV.png" alt="a sine and a cosine"></a></p>
<p>was produced running the following script</p>
<pre><code>from numpy import array, concatenate, linspace, cos, pi, sin
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
from matplotlib.colors import Normalize
from matplotlib.cm import ScalarMappable


def segments_from(x, y):
    tmp = array((x, y)).T.reshape(-1, 1, 2)
    return concatenate([tmp[:-1], tmp[1:]], axis=1)


t = linspace(0, 3, 301)
w1, w2 = 2*pi, 3*pi
s1, s2 = sin(w1*t), sin(w2*t)
c1, c2 = cos(w1*t), cos(w2*t)

norm = Normalize(-2, +2)
cmap = plt.get_cmap('inferno')

fig, ax = plt.subplots()
ax.set_xlim(0, 3)
ax.set_ylim(-2, 2)

# One LineCollection per curve, all sharing the same norm and colormap
for y, v in ((1.6*c1, c2), (0.9*s1, s2)):
    lc = LineCollection(segments_from(t, y),
                        linewidths=4,
                        norm=norm, cmap=cmap)
    lc.set_array(v)
    ax.add_collection(lc)

# A single colorbar for the shared norm/cmap, added once after the loop
fig.colorbar(ScalarMappable(norm=norm, cmap=cmap), ax=ax)

plt.show()
</code></pre>
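<p>Applied to the loop in the question, the same pattern means building every <code>LineCollection</code> with one shared norm and calling <code>fig.colorbar</code> exactly once, after the loop. A minimal sketch, assuming a fixed norm range shared by all runs (the upper bound of -5 here is a hypothetical placeholder; a single colorbar needs a single scale, so the per-file <code>a.lg_mtransfer_rate.max()</code> can't be used):</p>
<pre><code>norm = plt.Normalize(-20, -5)  # hypothetical bounds; pick values that cover all runs
mappable = None
for fname in glob.glob('LOGS_P_*'):
    a = mesa.MesaData(fname + '/LOGS1/history.data')
    points = np.array([a.log_Teff, a.log_L]).T.reshape(-1, 1, 2)
    segments = np.concatenate([points[:-1], points[1:]], axis=1)
    lc = LineCollection(segments, cmap='viridis', norm=norm)
    lc.set_array(a.lg_mtransfer_rate)
    lc.set_linewidth(2)
    mappable = ax.add_collection(lc)

fig.colorbar(mappable, ax=ax)  # one colorbar for all the paths
</code></pre>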
|
python|python-3.x|matplotlib|colorbar
| 1 |
1,908,426 | 68,938,173 |
Value not appending to global array
|
<p>I am trying to run a multithreaded email checker to see if the emails are office 365 valid.</p>
<p>Looking over and over my code, I cannot seem to find the reason it's not working correctly.</p>
<p>It should be appending the email to a GOOD or BAD list.</p>
<p>Instead, it's not appending anything!</p>
<p>This is my code:</p>
<pre><code>...

currentDirectory = os.getcwd()  # set the current directory - /new/

# Locations
location_emails_goods = currentDirectory + '/contacts/goods/'
location_emails_bads = currentDirectory + '/contacts/bads/'
location_emails = currentDirectory + '/contacts/contacts.txt'

now = datetime.now()
todayString = now.strftime('%d-%m-%Y-%H-%M-%S')

FILE_NAME_DATE_GOODS = None
FILE_NAME_DATE_BADS = None

ALL_EMAILS = get_contacts(location_emails)
url = 'https://login.microsoftonline.com/common/GetCredentialType'


# Get all emails
def get_contacts(filename):
    emails = []
    with open(filename, mode='r', encoding='utf-8') as contacts_file:
        for a_contact in contacts_file:
            emails.append(a_contact.strip())
    return emails


def saveLogs():
    global GOOD_EMAILS_ARRAY, BAD_EMAILS_ARRAY, file_bads, file_goods, FILE_NAME_DATE_GOODS, FILE_NAME_DATE_BADS
    #print(GOOD_EMAILS_ARRAY)
    for good in GOOD_EMAILS_ARRAY:
        file_goods.write(good + '\n')
    file_goods.close()

    for bad in BAD_EMAILS_ARRAY:
        file_bads.write(bad + '\n')
    file_bads.close()


def newChecker(email):
    global url, GOOD_EMAILS_ARRAY, BAD_EMAILS_ARRAY
    s = req.session()
    body = '{"Username":"%s"}' % email
    request = req.post(url, data=body)
    response = request.text
    valid = re.search('"IfExistsResult":0,', response)
    invalid = re.search('"IfExistsResult":1,', response)

    if invalid:
        BAD_EMAILS_ARRAY.append(email)
        if valid:
            GOOD_EMAILS_ARRAY.append(email)
    else:
        if valid:
            GOOD_EMAILS_ARRAY.append(email)
        else:
            BAD_EMAILS_ARRAY.append(email)

    # The following prints an empty array even though GOOD_EMAILS_ARRAY is defined globally, so it should be updating
    print(GOOD_EMAILS_ARRAY)


def mp_handler(p):
    global ALL_EMAILS
    p.map(newChecker, ALL_EMAILS)


if __name__ == '__main__':
    # Foreach email, parse it into our checker
    # Define a filename to save to
    FILE_NAME_DATE_GOODS = '{}{}{}'.format(location_emails_goods, todayString, '.txt')
    FILE_NAME_DATE_BADS = '{}{}{}'.format(location_emails_bads, todayString, '.txt')

    file_bads = open(FILE_NAME_DATE_BADS, 'a')
    file_goods = open(FILE_NAME_DATE_GOODS, 'a')

    p = multiprocessing.Pool(500)
    mp_handler(p)
    saveLogs()
    p.close()
</code></pre>
<p>As you can see, I am trying to append an email to either GOOD_EMAILS_ARRAY or BAD_EMAILS_ARRAY. Both are global variables, but for some reason nothing ever gets appended to them.</p>
<p>I am running this through multiprocessing if you need to know.</p>
<p>Any ideas or errors looking in my code?</p>
|
<p>Okay, so it turns out that I just needed to use the Manager from multiprocessing. Each worker in the Pool runs in a separate process with its own copy of the module globals, so appends to a plain list never make it back to the parent process; a Manager list is a proxy to a shared list that every process can update:</p>
<pre><code>from multiprocessing import Manager, Pool
</code></pre>
<p>then I could use a normal array through the manager such as:</p>
<pre><code># Set empty arrays using manager so we can carry it over
manager = Manager()
bad_list = manager.list()
good_list = manager.list()
</code></pre>
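<p>A minimal standalone sketch of why this works (hypothetical email addresses; the proxy is passed to the workers explicitly):</p>
<pre><code>from multiprocessing import Manager, Pool

def worker(args):
    email, shared = args
    shared.append(email)  # the append goes through the Manager proxy to the parent

if __name__ == '__main__':
    manager = Manager()
    shared = manager.list()
    emails = ['a@example.com', 'b@example.com', 'c@example.com']
    with Pool(3) as p:
        p.map(worker, [(e, shared) for e in emails])
    print(list(shared))  # all three emails are visible in the parent process
</code></pre>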
<p>This allowed me to then use my script like it was, just using these new arrays by Manager which works just how I wanted :)</p>
<pre><code>...

FILE_NAME_DATE_GOODS = None
FILE_NAME_DATE_BADS = None

# Set empty arrays using manager so we can carry it over
manager = Manager()
bad_list = manager.list()
good_list = manager.list()


# Get all emails
def get_contacts(filename):
    emails = []
    with open(filename, mode='r', encoding='utf-8') as contacts_file:
        for a_contact in contacts_file:
            emails.append(a_contact.strip())
    return emails


ALL_EMAILS = get_contacts(location_emails)
url = 'https://login.microsoftonline.com/common/GetCredentialType'


def saveLogs():
    global file_bads, file_goods, FILE_NAME_DATE_GOODS, FILE_NAME_DATE_BADS, good_list, bad_list
    for good in good_list:
        file_goods.write(good + '\n')
    file_goods.close()

    for bad in bad_list:
        file_bads.write(bad + '\n')
    file_bads.close()

    print('{} => Fully completed email scanning'.format(Fore.CYAN))
    print('{} => Good emails [{}] || Bad emails [{}]'.format(Fore.GREEN, FILE_NAME_DATE_GOODS, FILE_NAME_DATE_BADS))


def newChecker(email):
    global url, good_list, bad_list
    s = req.session()
    body = '{"Username":"%s"}' % email
    request = req.post(url, data=body)
    response = request.text
    valid = re.search('"IfExistsResult":0,', response)
    invalid = re.search('"IfExistsResult":1,', response)

    if invalid:
        bad_list.append(email)
        if valid:
            good_list.append(email)
    else:
        if valid:
            good_list.append(email)
        else:
            bad_list.append(email)


def mp_handler(p):
    global ALL_EMAILS
    p.map(newChecker, ALL_EMAILS)


if __name__ == '__main__':
    # Foreach email, parse it into our checker
    # Define a filename to save to
    FILE_NAME_DATE_GOODS = '{}{}{}'.format(location_emails_goods, todayString, '.txt')
    FILE_NAME_DATE_BADS = '{}{}{}'.format(location_emails_bads, todayString, '.txt')

    file_bads = open(FILE_NAME_DATE_BADS, 'a')
    file_goods = open(FILE_NAME_DATE_GOODS, 'a')

    p = multiprocessing.Pool(500)
    mp_handler(p)
    saveLogs()
    p.close()
</code></pre>
|
python|arrays
| 0 |
1,908,427 | 72,770,834 |
Python: What is the time.perf_counter method's time reference point?
|
<p>I'm trying to play with the time module in Python 3.9.7 on macOS. This is my script:</p>
<pre><code>import time
print(time.perf_counter(), '\n', time.localtime(time.perf_counter()))
print('-----')
print(time.asctime(time.localtime(time.process_time())))
</code></pre>
<p>This is the output:</p>
<pre><code>820.324263708
time.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=1, tm_min=13, tm_sec=40, tm_wday=3, tm_yday=1, tm_isdst=0)
-----
Thu Jan 1 01:00:02 1970
</code></pre>
<p>I understand the Unix time concept, but what is this counting from Jan 1st 1970 to? Definitely not "now". Any hints?</p>
|
<p>Per the <a href="https://docs.python.org/3/library/time.html#time.perf_counter" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Return the value (in fractional seconds) of a performance counter ...</p>
<p>The reference point of the returned value is undefined, so that <em>only the difference between the results of two calls is valid.</em></p>
</blockquote>
<p>Therefore, a call unto its own, is meaningless, as you've discovered. However, a call such as the following has more meaning with regard to code execution time.</p>
<pre><code>import time

def perf():
    s = time.perf_counter()
    for _ in range(1000000):
        pass
    time.sleep(1)
    e = time.perf_counter()
    return e - s

>>> perf()
1.049831447
</code></pre>
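<p>If what you actually want is a calendar timestamp, use <code>time.time()</code>, which does count from the Unix epoch and is therefore meaningful to pass into <code>time.localtime()</code>:</p>
<pre><code>import time

# time.time() returns seconds since the Unix epoch (1970-01-01 UTC)
print(time.asctime(time.localtime(time.time())))
</code></pre>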
|
python|time
| 0 |
1,908,428 | 68,403,961 |
What does left and bottom in figure.add_axes() do in Matplotlib?
|
<p>Hello, I am currently learning matplotlib. I have just learnt about figures and the <code>add_axes()</code> method. For the <code>add_axes()</code> method, you pass in a list containing <code>[left, bottom, width, height]</code>.</p>
<p>Obviously, <code>width</code> and <code>height</code> controls the width and height of the graph, but what does <code>left</code> and <code>bottom</code> do?</p>
<p>Here is my <code>add_axes()</code> -> <code>axes = figure.add_axes([0.1, 0.1, 0.6, 0.6])</code></p>
<p>I keep changing <code>left</code> and <code>bottom</code>, but nothing on the plot is changing. I read the matplotlib documentation on <code>figure.add_axes()</code>, but it did not explain what <code>left</code> and <code>bottom</code> did.</p>
<p>Thank you!</p>
|
<h3>Description and example of mpl.figure.add_axes</h3>
<pre><code>mpl.figure.add_axes(rect, projection=None, polar=False, **kwargs)
</code></pre>
<p>Adds an axis to the current figure or a specified axis.<br>
From the <a href="https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure.add_axes" rel="nofollow noreferrer">matplotlib mpl.figure.ad_axes method documentation:</a><br>
rect: sequence of floats. The dimensions [left, bottom, width, height] of the new Axes. All quantities are in fractions of figure width and height.</p>
<ul>
<li>left = how far from the left of the figure (like a x value)</li>
<li>bottom = how far from the bottom of the figure (like a y value)</li>
</ul>
<p><a href="https://stackoverflow.com/questions/43326680/what-are-the-differences-between-add-axes-and-add-subplot">Here is a quick google search</a><br>
Here is an example:</p>
<pre><code>import matplotlib.pyplot as plt

fig, ax1 = plt.subplots(figsize=(12, 8))
ax2 = fig.add_axes([0.575, 0.55, 0.3, 0.3])
ax1.grid(axis='y', dashes=(8, 3), color='gray', alpha=0.3)
for ax in [ax1, ax2]:
    [ax.spines[s].set_visible(False) for s in ['top', 'right']]
</code></pre>
<p><a href="https://i.stack.imgur.com/7Cit4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Cit4.png" alt="enter image description here" /></a></p>
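<p>To see <code>left</code> and <code>bottom</code> in action, here is a quick sketch that places two same-sized axes at different positions; both values are fractions of the figure, measured from its lower-left corner:</p>
<pre><code>import matplotlib.pyplot as plt

fig = plt.figure()
fig.add_axes([0.1, 0.1, 0.3, 0.3])  # near the lower-left corner
fig.add_axes([0.6, 0.5, 0.3, 0.3])  # shifted right (left=0.6) and up (bottom=0.5)
plt.show()
</code></pre>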
|
python|matplotlib|figure
| 2 |
1,908,429 | 59,199,265 |
'tuple' object is not callable
|
<p>I defined the following class for a custom transformation and implemented the necessary methods for functionality with Scikit-Learn :</p>
<pre><code>from sklearn.base import BaseEstimator, TransformerMixin

rooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6

class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
    def __init__(self, add_bedrooms_per_room=True):  # no *args or **kargs
        self.add_bedrooms_per_room = add_bedrooms_per_room

    def fit(self, X, y=None):
        return self  # nothing else to do

    def transform(self, X, y=None):
        rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
        population_per_household = X[:, population_ix] / X[:, household_ix]
        if self.add_bedrooms_per_room:
            bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
            return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
        else:
            return np.c_[X, rooms_per_household, population_per_household]

attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(housing.values)
</code></pre>
<p>Then I call the class and others in a pipeline like this:</p>
<pre><code>from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

num_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median"))
    ('attribs_adder', CombinedAttributesAdder()),
    ('std_scaler', StandardScaler()),
])

housing_num_tr = num_pipeline.fit_transform(housing_num)
</code></pre>
<p>This produces the above error message:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-52-bcea5f2689c0> in <module>
4 num_pipeline = Pipeline([
5 ('imputer', SimpleImputer(strategy="median"))
----> 6 ('attribs_adder', CombinedAttributesAdder()),
7 ('std_scaler', StandardScaler()),
8 ])
TypeError: 'tuple' object is not callable
</code></pre>
|
<p>You're missing a comma after this line:</p>
<pre><code>('imputer', SimpleImputer(strategy="median"))
</code></pre>
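<p>Without that comma, Python parses the two adjacent parenthesized groups as a call: the first tuple is "called" with the second tuple as its argument, which is exactly what the error says:</p>
<pre><code>>>> ('a', 1)('b', 2)
TypeError: 'tuple' object is not callable
</code></pre>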
|
python|scikit-learn
| 5 |
1,908,430 | 62,113,137 |
How to efficiently find correspondences between two point sets without nested for loop in Pytorch?
|
<p>I now have two point sets (tensors) A and B that are shaped like</p>
<p>A.size() >>(50, 3) , example: [ [0, 0, 0], [0, 1, 2], ..., [1, 1, 1]]</p>
<p>B.size() >>(10, 3) </p>
<p>where the first dimension stands for number of points and the second dim stands for coordinates (x,y,z)</p>
<p>To some extent, the question could also be simplified to "finding common elements between two tensors". Is there a quick way to do this without a nested loop?</p>
|
<p>You can quickly compute all the 50x10 distances using:</p>
<pre class="lang-py prettyprint-override"><code>d2 = ((A[:, None, :] - B[None, ...])**2).sum(dim=2)
</code></pre>
<p>Once you have all the pair-wise distances, you can select "similar" ones if the distance does not exceed a threshold <code>thr</code>:</p>
<pre class="lang-py prettyprint-override"><code>(d2 < thr).nonzero()
</code></pre>
<p>returns pairs of <code>a-idx, b-idx</code> of "similar" points.</p>
<p>If you want to match the points <em>exactly</em>, you can do instead:</p>
<pre class="lang-py prettyprint-override"><code>((A[:, None, :] == B[None, ...]).all(dim=2)).nonzero()
</code></pre>
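<p>A quick sketch of the exact-match version on toy data (hypothetical points):</p>
<pre class="lang-py prettyprint-override"><code>import torch

A = torch.tensor([[0, 0, 0], [0, 1, 2], [1, 1, 1]])
B = torch.tensor([[1, 1, 1], [0, 1, 2]])

# each row of the result is one (A-index, B-index) pair of identical points
pairs = ((A[:, None, :] == B[None, ...]).all(dim=2)).nonzero()
print(pairs)  # tensor([[1, 1], [2, 0]])
</code></pre>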
|
image-processing|computer-vision|pytorch|tensor
| 0 |
1,908,431 | 35,484,548 |
Run py.test in a docker container as a service
|
<p>I am working on setting up a dockerised selenium grid. I can send my python tests [run with pytest] from a pytest container [see below] by attaching to it. But I have set up another LAMP container that is going to control pytest, so I want to make the pytest container standalone, running idle and waiting for commands from the LAMP container.</p>
<p>I have this Dockerfile:</p>
<pre><code># Starting from base image
FROM ubuntu
#-----------------------------------------------------
# Set the Github personal token
ENV GH_TOKEN blablabla
# Install Python & pip
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y python python-pip python-dev && pip install --upgrade pip
# Install nano for #debugging
RUN apt-get install -y nano
# Install xvfb
RUN apt-get install -y xvfb
# Install GIT
RUN apt-get update -y && apt-get install git -y
# [in the / folder]
RUN git clone https://$GH_TOKEN:x-oauth-basic@github.com/user/project.git /project
# Install dependencies via pip
WORKDIR /project
RUN pip install -r dependencies.txt
#-----------------------------------------------------
#
CMD ["/bin/bash"]
</code></pre>
<p>I start the pytest container manually [for development] with this:</p>
<pre><code>docker run -dit -v /project --name pytest repo/user:py
</code></pre>
<p>The thing is that I have finished development and I want to have the pytest container launched from <code>docker-compose</code> and connect it to other containers [with link and volume]. I just cannot make it stay up.</p>
<p>I used this :</p>
<pre><code>pytest:
  image: repo/user:py
  volumes:
    - "/project"
  command: "/bin/bash tail -f /dev/null"
</code></pre>
<p>but it didn't work.</p>
<p>So, inside the Dockerfile, should I use a specific CMD or ENTRYPOINT ?</p>
<p>Should I use some <code>command</code> from the <code>docker-compose</code> file?</p>
|
<p>I just enabled this on one of my projects recently. I use a multistage build. At present I put tests in the same folder as the source, as <code>test_*.py</code> files. From my experience with this, it doesn't feel natural; I prefer tests to be in their own folder that is excluded by default.</p>
<pre><code>FROM python:3.7.6 AS build
WORKDIR /app
COPY requirements.txt .
RUN pip3 install --compile -r requirements.txt && rm -rf /root/.cache
COPY src /app
# TODO precompile
# Build stage test - run tests
FROM build AS test
RUN pip3 install pytest pytest-cov && rm -rf /root/.cache
RUN pytest --doctest-modules \
--junitxml=xunit-reports/xunit-result-all.xml \
--cov \
--cov-report=xml:coverage-reports/coverage.xml \
--cov-report=html:coverage-reports/
# Build stage 3 - Complete the build setting the executable
FROM build AS final
CMD [ "python", "./service.py" ]
</code></pre>
<p>In order to exclude the test files from coverage. <code>.coveragerc</code> must be present.</p>
<pre><code>[run]
omit = test_*
</code></pre>
<p>The <code>test</code> target runs the required tests and generates coverage and execution reports. These are <em>NOT</em> directly suitable for Azure DevOps and SonarQube. To make them suitable, rewrite the paths:</p>
<pre><code>sed -i~ 's#/app#$(Build.SourcesDirectory)/app#' $(Pipeline.Workspace)/b/coverage-reports/coverage.xml
</code></pre>
<p>To run tests</p>
<pre class="lang-sh prettyprint-override"><code>#!/usr/bin/env bash
set -e
DOCKER_BUILDKIT=1 docker build . --target test --progress plain
</code></pre>
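<p>With this layout, <code>docker build --target test</code> runs the whole suite inside the image: if any test fails, the <code>RUN pytest ...</code> step fails and the build aborts, while the <code>final</code> stage stays free of test-only dependencies.</p>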
|
python|docker|pytest|docker-compose|dockerfile
| 6 |
1,908,432 | 58,767,317 |
not sure what is wrong here
|
<p>I need to ask the user if they want to start, but I end up getting "input expected at most 1 arguments, got 3":</p>
<pre><code>for i in range(1, 5):
    Player1Points += roll()
    print('After this round ', player_1, 'you now have: ', Player1Points, ' Points')
    while True:
        answer = input("Would you like to see", player_2, "'s score? yes/no")
        if answer == "no":
            print("how about now?")
        else:
            print("Okay")
            break
    Player2Points += roll()
    print('After this round ', player_2, 'you now have: ', Player2Points, ' Points')
</code></pre>
<p>Input expected at most 1 arguments but got 3.</p>
|
<p>You need to concatenate the string, not pass multiple comma-separated arguments as you would to the print function. Use</p>
<pre><code>answer = input("Would you like to see" + player_2 + "'s score? yes/no")
</code></pre>
<p>instead. Or, if you're using Python 3.6+ (where f-strings are available):</p>
<pre><code>answer = input(f"Would you like to see {player_2}'s score? yes/no")
</code></pre>
|
python|dice
| 4 |
1,908,433 | 58,960,067 |
How do I loop through pages request?
|
<p>I am very (very) new to Python and am struggling to get my loop to go through the pages in the request. It seems only to be returning the first page of results, so I can only think that I have missed a vital part of the code. Here is what I have so far:</p>
<pre><code>import requests

articles = []
for i in range(1, 6):
    response = requests.get(url=everything_news_url, headers=headers, params=everything_payload)
    headers = {'Authorization': 'xxxxxxxxxxxxxxxxxxxx'}
    everything_news_url = 'https://newsapi.org/v2/everything'
    everything_payload = {
        'q': 'cryptocurrency',
        'language': 'en',
        'sortBy': 'relevancy',
        'from_param': '2019-10-20',
        'to': '2019-11-11',
        'page': 'i'
    }
    headlines_payload = {'category': 'business', 'country': 'us'}
    sources_payload = {'category': 'general', 'country': 'us'}
    articles.append(response)
</code></pre>
|
<p>You had forgotten to indent the code inside the <code>for</code> loop, you were using <code>i</code> as a string, and some of the setup didn't need to be inside the loop at all.</p>
<pre><code>import requests

headers = {'Authorization': 'xxxxxxxxxxxxxxxxxxxx'}
everything_news_url = 'https://newsapi.org/v2/everything'
headlines_payload = {'category': 'business', 'country': 'us'}
sources_payload = {'category': 'general', 'country': 'us'}

articles = []
for i in range(1, 6):
    everything_payload = {'q': 'cryptocurrency', 'language': 'en', 'sortBy': 'relevancy',
                          'from_param': '2019-10-20', 'to': '2019-11-11', 'page': i}
    response = requests.get(url=everything_news_url,
                            headers=headers,
                            params=everything_payload)
    articles.append(response)
</code></pre>
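<p>One more note: this stores the raw <code>Response</code> objects in <code>articles</code>. If you want the article data itself, you would typically parse each page's JSON instead, e.g. (a sketch, assuming the usual News API response shape with an <code>articles</code> key):</p>
<pre><code>data = response.json()
articles.extend(data.get('articles', []))
</code></pre>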
|
python
| 2 |
1,908,434 | 15,953,560 |
Sum tuples if identical values
|
<p>Here's my list of tuples:</p>
<pre><code>regions = [(23.4, 12, 12341234),
(342.23, 19, 12341234),
(4312.3, 12, 12551234),
(234.2, 12, 12341234)]
</code></pre>
<p>I'm trying to sum the first index value in a list of tuples where the values at indices 1 and 2 are identical. Note that regions[0] and regions[3] have the same values at indices 1 and 2.</p>
<p>My desired list is:</p>
<pre><code>result = [(257.6, 12, 12341234),
(342.23, 19, 12341234),
(4312.3, 12, 12551234)]
</code></pre>
<p>I realize that I probably need to save it as a dictionary first, probably with the second value as the first key and the third value as the second key and summing if it already exists. Just wondering if there's a better or easier way: I'm thinking maybe using some intersect function. </p>
|
<pre><code>from collections import defaultdict

sums = defaultdict(float)
for c, a, b in regions:
    sums[a, b] += c

result = [(csum, a, b) for (a, b), csum in sums.iteritems()]  # on Python 3, use sums.items()
</code></pre>
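<p>For the sample <code>regions</code> above this yields the following (the order of the tuples may differ, since it follows dict iteration order, and the first sum may show the usual floating-point artifact, something like <code>257.59999999999997</code>):</p>
<pre><code>[(257.6, 12, 12341234), (342.23, 19, 12341234), (4312.3, 12, 12551234)]
</code></pre>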
<p>There isn't a built-in function to do this; it's far too specialized a task.</p>
|
python|list|tuples
| 6 |
1,908,435 | 15,801,267 |
Python, text document to list?
|
<p>In Python I'm trying to take a text document that has a bunch of words all separated by new lines, and select them one by one and edit them. How would I turn these items into a list in python?</p>
<p>Ex.</p>
<p>hi</p>
<p>I</p>
<p>need</p>
<p>help</p>
<p>Those need to be put into a list in Python so I can edit them; I'm trying to add the numbers 1-99 after every one.</p>
|
<p>If you simply want to process the entire file as one large String (which may be very inefficient), you could use the splitlines() method like:</p>
<pre><code>listOfWords = textFile.splitlines()
</code></pre>
<p>where <code>textFile</code> is a String which represents all of the text in the text document.</p>
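<p>A fuller sketch that reads the file and appends a number to each word (assuming the words live in a file called <code>words.txt</code>):</p>
<pre><code>with open('words.txt') as f:
    words = f.read().splitlines()

# e.g. append 1, 2, 3, ... to each word
numbered = ['{}{}'.format(word, i) for i, word in enumerate(words, start=1)]
print(numbered)  # ['hi1', 'I2', 'need3', 'help4']
</code></pre>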
|
python|list|text
| 1 |
1,908,436 | 59,661,835 |
Is it necessary to split data into three; train, val and test?
|
<p><a href="https://stackoverflow.com/questions/2976452/whats-is-the-difference-between-train-validation-and-test-set-in-neural-netwo">Here</a> the difference between test, train and validation set is described. In most documentation on training neural networks, I find that these three sets are used, however they are often predefined. </p>
<p>I have a relatively small data set (906 3D images in total, the distribution is balanced). I'm using <code>sklearn.model_selection.train_test_split</code> function to split the data in train and test set and using X_test and y_test as validation data in my model. </p>
<pre><code>X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=1)
...
history = AD_model.fit(
X_train,
y_train,
batch_size=batch_size,
epochs=100,
verbose=1,
validation_data=(X_test, y_test))
</code></pre>
<p>After training, I evaluate the model on the test set:</p>
<pre><code>test_loss, test_acc = AD_model.evaluate(X_test, y_test, verbose=2)
</code></pre>
<p>I've seen other people also approach it this way, but since the model has already seen this data, I'm not sure what the consequences are of this approach. Can someone tell me what the consequences are of using the same set for validation and testing? And since I already have a small data set (with overfitting as a result), is it necessary to split the data in 3 sets? </p>
|
<p>You can use <code>train, validate, test = np.split(df.sample(frac=1), [int(.6*len(df)), int(.8*len(df))])</code>; it produces a 60%/20%/20% split for the training, validation and test sets.</p>
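<p>A quick sketch of that split on a toy frame (assuming the usual <code>numpy</code>/<code>pandas</code> imports):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'x': range(10)})
train, validate, test = np.split(df.sample(frac=1, random_state=0),
                                 [int(.6 * len(df)), int(.8 * len(df))])
print(len(train), len(validate), len(test))  # 6 2 2
</code></pre>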
<p>Hope it's helpful. Thank you for reading!</p>
|
python|tensorflow|keras|scikit-learn|conv-neural-network
| 3 |
1,908,437 | 59,741,641 |
Python - issue with extracting metadata from the image
|
<p>I am trying to extract metadata from the image with code I put together below:</p>
<pre><code>from PIL import Image
from PIL.ExifTags import TAGS
import pandas as pd
import glob
import urllib
import itertools


def get_exif(fn):
    ret = {}
    i = Image.open(fn)
    info = i._getexif()
    for tag, value in info.items():
        decoded = TAGS.get(tag, tag)
        ret[decoded] = value
    return ret


LoadingDir = "C:/IMAGES/TEST/"

final_df = pd.DataFrame()
for file in glob.glob(LoadingDir + '*.jpg'):
    data = get_exif(file)
    temp_df = pd.DataFrame([data])
    temp_df = temp_df.loc[:, ['ExifImageWidth', 'ExifImageHeight', 'XResolution', 'YResolution']]
    final_df = final_df.append(temp_df)

final_df
</code></pre>
<p>When I run the code on a single image without the for loop, it works; however, when I run it as is here, I get this error:</p>
<pre><code> ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-48-a739cd753170> in <module>
18 final_df = pd.DataFrame()
19 for file in glob.glob(LoadingDir+'*.jpg'):
---> 20 data = get_exif(file)
21 temp_df = pd.DataFrame([data])
22 temp_df = temp_df.loc[:,['ExifImageWidth','ExifImageHeight', 'XResolution', 'YResolution']]
<ipython-input-48-a739cd753170> in get_exif(fn)
10 i = Image.open(fn)
11 info = i._getexif()
---> 12 for tag, value in info.items():
13 decoded = TAGS.get(tag, tag)
14 ret[decoded] = value
AttributeError: 'NoneType' object has no attribute 'items'
</code></pre>
<p>What am I missing here?</p>
|
<p>+1 to what Jason said. You have just found an image without EXIF metadata, so <code>_getexif()</code> returns <code>None</code>. In your function, you can assign <code>None</code> to each of the attributes you're looking for; that seems like a good option to keep your data output referenceable.</p>
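<p>A minimal sketch of that guard inside <code>get_exif</code>:</p>
<pre><code>def get_exif(fn):
    ret = {}
    i = Image.open(fn)
    info = i._getexif()
    if info is None:  # image has no EXIF data at all
        return {key: None for key in
                ['ExifImageWidth', 'ExifImageHeight', 'XResolution', 'YResolution']}
    for tag, value in info.items():
        decoded = TAGS.get(tag, tag)
        ret[decoded] = value
    return ret
</code></pre>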
|
python|python-3.x
| 0 |
1,908,438 | 48,952,526 |
Tensorflow-gpu with Keras Error
|
<p>Using Ubuntu 16.04, PyCharm</p>
<p>I used the following link to install tensorflow-gpu with python 3.5 : <a href="http://www.python36.com/install-tensorflow141-gpu/" rel="nofollow noreferrer">http://www.python36.com/install-tensorflow141-gpu/</a></p>
<p>Tensorflow installation is fine. It ran with test code. Then I installed Keras to run codes from here: <a href="https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.1-introduction-to-convnets.ipynb" rel="nofollow noreferrer">https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.1-introduction-to-convnets.ipynb</a></p>
<p>I got the following error:</p>
<blockquote>
<p>2018-02-23 11:19:13.457201: I
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful
NUMA node read from SysFS had negative value (-1), but there must be
at least one NUMA node, so returning NUMA node zero</p>
<p>2018-02-23 11:19:13.457535: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0
with properties: name: GeForce GTX 1060 6GB major: 6 minor: 1
memoryClockRate(GHz): 1.7845 pciBusID: 0000:01:00.0 totalMemory:
5.93GiB freeMemory: 5.65GiB</p>
<p>2018-02-23 11:19:13.457551: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating
TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX
1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)</p>
<p>2018-02-23 11:21:22.130004: E
tensorflow/stream_executor/cuda/cuda_dnn.cc:378] Loaded runtime CuDNN
library: 7005 (compatibility version 7000) but source was compiled
with 6021 (compatibility version 6000). If using a binary install,
upgrade your CuDNN library to match. If building from sources, make
sure the library loaded at runtime matches a compatible version
specified during compile configuration.</p>
<p>2018-02-23 11:21:22.130663: F tensorflow/core/kernels/conv_ops.cc:667]
Check failed: stream->parent()->GetConvolveAlgorithms(
conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms)</p>
</blockquote>
<p>Tensorflow-cpu works fine with Keras.
My question is:</p>
<p>1) which instructions to follow to correctly install tensorflow-gpu with Keras for my setup?</p>
<p>2) What can I do eliminate these errors?</p>
<p>3) Are there any universal instructions that could be followed to correctly install tensorflow-gpu with Keras on all platforms?</p>
<p><em>Edit</em>
Here is a file I created which shows how to run Keras with tensorflow-gpu in Pycharm-community edition.
<a href="https://github.com/mdkhan48/AgBot2018_VidStream/blob/master/how%20to%20run%20tensorflow-gpu.odt" rel="nofollow noreferrer">https://github.com/mdkhan48/AgBot2018_VidStream/blob/master/how%20to%20run%20tensorflow-gpu.odt</a></p>
|
<p>There are 4 components you are trying to install: </p>
<ol>
<li>Cuda</li>
<li>Cudnn</li>
<li>tensorflow</li>
<li>keras</li>
</ol>
<p>They are not always synchronized: as of today, tensorflow supports cuda 9.1 if you compile it yourself (like the guide you posted), while the prebuilt binaries were compiled with cuda 9.0 and cudnn 7. I prefer using the prebuilt binaries, so that is the cuda version I'm using.</p>
<p>this is how you can install it:</p>
<p>first, remove existing cuda installations:</p>
<pre><code>sudo apt-get remove --purge nvidia-*
</code></pre>
<p>install ubuntu headers and cuda 9.0:</p>
<pre><code>sudo apt-get install linux-headers-$(uname -r)
CUDA_REPO_PKG=cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
wget -O /tmp/${CUDA_REPO_PKG} http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
sudo dpkg -i /tmp/${CUDA_REPO_PKG}
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
rm -f /tmp/${CUDA_REPO_PKG}
sudo apt-get update
sudo apt-get install -y cuda-9-0
</code></pre>
<p>download the cuDNN v7.0.5 Library for Linux from the <a href="https://developer.nvidia.com/cudnn" rel="nofollow noreferrer">nvidia site</a>, navigate to the download folder and install it:</p>
<pre><code>tar -xzvf cudnn-9.0-linux-x64-v7.tgz
sudo cp cuda/include/* /usr/local/cuda/include/
sudo cp cuda/lib64/* /usr/local/cuda/lib64/
</code></pre>
<p>reboot your machine and check nvidia driver works using:</p>
<pre><code>nvidia-smi
</code></pre>
<p>the output should look like this:</p>
<pre><code>+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.30 Driver Version: 390.30 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 105... Off | 00000000:04:00.0 On | N/A |
| 0% 33C P8 N/A / 120W | 310MiB / 4038MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1307 G /usr/lib/xorg/Xorg 141MiB |
| 0 2684 G compiz 120MiB |
| 0 4078 G ...-token= 45MiB |
+-----------------------------------------------------------------------------+
</code></pre>
<p>create a virtual environment and install tensorflow and keras in it:</p>
<pre><code>sudo apt-get install virtualenv
virtualenv tfenv
source tfenv/bin/activate
pip install tensorflow-gpu
pip install keras
</code></pre>
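<p>To verify that TensorFlow actually sees the GPU from inside the virtualenv, a quick check:</p>
<pre><code>from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())  # should list a /device:GPU:0 entry
</code></pre>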
|
python|tensorflow|keras
| 2 |
1,908,439 | 49,297,339 |
name 'raw' is not defined in scapy
|
<p>Python 2.7
Scapy (2.3.3)</p>
<p>I tried to run a scapy demo</p>
<pre><code>server@root:/usr/bin$ sudo ./scapy
INFO: Can't import matplotlib. Won't be able to plot.
WARNING: No route found for IPv6 destination :: (no default route?)
Welcome to Scapy (2.3.3)
</code></pre>
<p>I ran the following command:
<code>>>> raw(IP())</code></p>
<p>get error message like:</p>
<pre><code>Traceback (most recent call last):
File "<console>", line 1, in <module>
NameError: name 'raw' is not defined
</code></pre>
<p>I am a beginner in Python.</p>
<p>How to fix this problem?</p>
|
<p><code>raw()</code> is new in Scapy 2.4.0 (including release candidates, aka 2.4.0-rc*).</p>
<p>You can either install Scapy 2.4.0 or newer, or use <code>str()</code> for now.</p>
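<p>For example, on Scapy 2.3.3 the equivalent call and its output look something like:</p>
<pre><code>>>> str(IP())
'E\x00\x00\x14\x00\x01\x00\x00@\x00|\xe7\x7f\x00\x00\x01\x7f\x00\x00\x01'
</code></pre>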
|
python|scapy
| 1 |
1,908,440 | 49,234,416 |
Getting [Errno 110] 'Connection timed out' when trying to do get requests in python
|
<p>I am able to access the URL mentioned in the code below manually in a Firefox browser and see the JSON response as:</p>
<pre><code>{
"version" : "1.0",
"email_address" : "xxxx@emailin.com"
}
</code></pre>
<p>But when I am trying to do get request with below code I am getting connection timeout error.</p>
<pre><code>import requests

URL = 'https://deviceconfig.pod1.pie.avatar.ext.hp.com/virtualprinter/v1/printers/emailaddress/AQAAAAFiGLv5GQAAAAGrHfgO'
proxies = {'http': '', 'https': ''}
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0'}

try:
    r = requests.get(url=URL, proxies=proxies, verify=False, headers=headers)
    print r.status_code
    print r.content
except requests.exceptions.RequestException as e:
    print e.message

Error:
HTTPSConnectionPool(host='deviceconfig.pod1.pie.avatar.ext.hp.com', port=443): Max retries exceeded with url: /virtualprinter/v1/printers/emailaddress/AQAAAAFiGLv5GQAAAAGrHfgO (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fd9f45080d0>: Failed to establish a new connection: [Errno 110] Connection timed out',))
</code></pre>
|
<p>Now the GET request is working, after setting the proxies with a host and port number.</p>
<p>Added proxy setting in above code:</p>
<pre><code>proxies = {'http': 'web-proxy.xxxxxx.xx.com:8080',
           'https': 'web-proxy.xxxxxx.xx.com:8080'}

Output:
200
{
  "version" : "1.0",
  "email_address" : "xxxx@emailin.com"
}
</code></pre>
|
python|python-2.7|python-requests
| 4 |
1,908,441 | 48,935,958 |
Pandas Series.ne operator returning unexpected result against two slices of same Series
|
<p>So I have this series of integers shown below</p>
<pre><code>from pandas import Series
s = Series([1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
</code></pre>
<p>And I want to see how many times the number changes over the series, so I can do the following and get the expected result.</p>
<pre><code>[i != s[:-1][idx] for idx, i in enumerate(s[1:])]
Out[5]:
[True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
True,
False,
False,
False,
False,
False,
False,
False,
False,
False]
</code></pre>
<p>From there I could just count the number of Trues present easily. But this is obviously not the best way to operate on a pandas Series, and I'm adding this in a situation where performance matters, so I did the below expecting identical results; however, I was very surprised and confused.</p>
<pre><code>s[1:].ne(s[:-1])
Out[4]:
0 True
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 False
11 False
12 False
13 False
14 False
15 False
16 False
17 False
18 False
19 False
20 False
21 False
22 False
23 False
24 False
25 False
26 False
27 False
28 False
29 False
30 False
31 False
32 False
33 False
34 False
35 False
36 False
37 False
38 False
39 True
dtype: bool
</code></pre>
<p>Not only does the output of the <code>Series.ne</code> method make no logical sense to me, but it is also longer than either of the inputs, which is especially confusing.</p>
<p>I think this might be related to this <a href="https://github.com/pandas-dev/pandas/issues/1134" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/1134</a></p>
<p>Regardless I'm curious as to what I'm doing wrong and what the best way to accomplish this would be. </p>
<p><strong>tl;dr:</strong></p>
<p>Where <code>s</code> is a pandas.Series of int's</p>
<p><code>[i != s[:-1][idx] for idx, i in enumerate(s[1:])] != s[:-1].ne(s[1:]).tolist()</code></p>
<p><strong>Edit:</strong> Thanks all; reading some of the answers below, a possible solution is <code>sum(s.diff().astype(bool)) - 1</code>, but I'm still curious why the above solution doesn't work.</p>
|
<p>IIUC, Using <code>shift</code> </p>
<pre><code>s!=s.shift()
</code></pre>
|
python|pandas|series
| 1 |
1,908,442 | 49,112,356 |
I want to create a generic format function; what is the default value of top, bottom, right, left?
|
<pre><code>def columnandcellformate(sheet_name, bold=0, font_color='#000000', bg_color='#ffffff',
                         align='', bottom=0, top=3, right=0, left=0, font_size=10,
                         starcolumn=0, endrow=0):
    global sheet_format
    sheet_format = sheet_name.add_format({
        'bottom': bottom,
        'top': top,
        'bg_color': bg_color,
        'font_color': font_color,
        'align': align,
        'font_size': font_size,
        'bold': bold,
        'font_name': 'Batang'
    })
</code></pre>
<p>What are the default values of top, bottom, right and left? My function is leaving the cell's top, bottom, right and left borders blank.</p>
|
<p>The default values for format properties are almost all 0/False. See the <a href="https://github.com/jmcnamara/XlsxWriter/blob/master/xlsxwriter/format.py#L46" rel="nofollow noreferrer">initialization code</a> for a format object. </p>
|
python|xlsx|xlsxwriter
| 1 |
1,908,443 | 25,260,487 |
how can I update scipy in winpython on windows?
|
<p>I have winpython installed and I would like to update scipy to version 0.14.
How can I do that? Should I reinstall winpython completely?</p>
<p>EDIT: </p>
<p>If I run <code>pip install --upgrade scipy</code> from the <code>WinPython Command Prompt</code> I receive this error: </p>
<pre><code>----------------------------------------
Rolling back uninstall of scipy
Cleaning up...
Command C:\Users\donbeo\WinPython-64bit-3.3.5.0\python-3.3.5.amd64\python.exe -c
"import setuptools, tokenize;__file__='c:\\users\\donbeo\\appdata\\local\\temp\
\pip_build_donbeo\\scipy\\setup.py';exec(compile(getattr(tokenize, 'open', open)
(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:
\users\donbeo\appdata\local\temp\pip-puzp_i-record\install-record.txt --single-v
ersion-externally-managed --compile failed with error code 1 in c:\users\donbeo\
appdata\local\temp\pip_build_donbeo\scipy
Storing debug log for failure in C:\Users\donbeo\WinPython-64bit-3.3.5.0\setting
s\pip\pip.log
</code></pre>
<p>C:\Users\donbeo\WinPython-64bit-3.3.5.0\python-3.3.5.amd64>pip install --upgrade
scipy</p>
|
<p>Christoph Gohlke <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow">now provides wheels</a>, so starting Winpython of January 2015, you may also be able to do that:</p>
<ul>
<li>Download the correct <code>.whl</code> file for your Python version (e.g. <code>cp33</code> for Python 3.3 and <code>win32</code> for 32-Bit) to some location, e.g. <code>D:\scipy\</code></li>
<li>Launch the WinPython Command Prompt</li>
<li><p>Type</p>
<pre><code>pip install --no-index --upgrade D:\scipy\scipy‑0.15.0‑cp33‑none‑win32.whl
</code></pre></li>
</ul>
<p>It should give you an output like this, for example for Python 3.3 32-Bit:</p>
<pre><code>D:\result_tests\WinPython-32bit-3.3.5.5_build5\python-3.3.5>pip install --no-index --upgrade D:\here_is_scip\scipy-0.15.0-cp33-none-win32.whl
Ignoring indexes: https://pypi.python.org/simple
Processing d:\here_is_scip\scipy-0.15.0-cp33-none-win32.whl
Installing collected packages: scipy
Found existing installation: scipy 0.14.1
Uninstalling scipy-0.14.1:
Successfully uninstalled scipy-0.14.1
Successfully installed scipy-0.15.0
D:\result_tests\WinPython-32bit-3.3.5.5_build5\python-3.3.5>
</code></pre>
|
python|windows|scipy
| 5 |
1,908,444 | 59,919,104 |
GeoJson layer not visible on python Folium map
|
<p>I am trying to add a GeoJSON layer to a Folium map but the layer is not visible in the map though it is visible in the layer selector of folium. I am able to view the data in Qgis so the data is correct. I also do not get an error in Spyder.</p>
<p>I also inspected the HTML in the browser and there seems to be a script added with all the coordinates etc. The browser does not display an error when inspecting the file.</p>
<p>Anyone an idea what I am missing?</p>
<pre><code>import folium
m = folium.Map(
location=[-59.1759, -11.6016],
tiles='OpenStreetMap',
zoom_start=2 # Limited levels of zoom for free Mapbox tiles.
)
folium.GeoJson(
data=(open('./projects/test/data/breda_bus_route.geojson', "r").read()),
name='layerName',
).add_to(m)
folium.LayerControl().add_to(m)
m.save('index.html')
</code></pre>
|
<p>It might be the case that the GeoJSON layer is not visible because it doesn't fit within the given map view; try dynamically fitting the map to the GeoJSON layer:</p>
<pre><code>layer = folium.GeoJson(
    data=(open(path, "r").read()),
    name='geojson',
).add_to(m)                       # 1. keep a reference to the GeoJSON layer
m.fit_bounds(layer.get_bounds())  # 2. fit the map to the GeoJSON layer
</code></pre>
<p><strong>Update</strong></p>
<p>It appears it was related to the GeoJSON file's projection: the file was in EPSG:3857, while Leaflet expects EPSG:4326.</p>
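<p>A sketch of one way to do the reprojection, assuming <code>geopandas</code> is available:</p>
<pre><code>import geopandas as gpd

gdf = gpd.read_file('./projects/test/data/breda_bus_route.geojson')
gdf = gdf.to_crs(epsg=4326)  # reproject from EPSG:3857 to EPSG:4326
gdf.to_file('breda_bus_route_4326.geojson', driver='GeoJSON')
</code></pre>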
<p>Once the GeoJSON is reprojected, the layer is rendered like this:</p>
<p><a href="https://i.stack.imgur.com/GIsjV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GIsjV.png" alt="enter image description here"></a></p>
|
python|leaflet|folium
| 2 |
1,908,445 | 5,835,302 |
Python extension for Upskirt: garbage at end of string
|
<p>I've been trying to make a Python extension for <a href="https://github.com/tanoku/upskirt" rel="nofollow">Upskirt</a>. I thought it would not be <em>too</em> hard for a first C project since there are examples (the example program in the Upskirt code and the Ruby extension).</p>
<p>The extension works, it converts the Markdown I throw at it, but sometimes the output has some garbage at the end of the string. And I don't know what causes it.</p>
<p>Here's some output:</p>
<pre><code>python test.py
<module 'pantyshot' from '/home/frank/Code/pantyshot/virtenv/lib/python2.7/site-packages/pantyshot.so'>
<built-in function render>
'<p>This <strong>is</strong> <em>a</em> <code>test</code>. <a href="http://example.com">Test</a>.</p>\n\x7f'
<p>This <strong>is</strong> <em>a</em> <code>test</code>. <a href="http://example.com">Test</a>.</p>
--------------------------------------------------------------------------------
'<p>This <strong>is</strong> <em>a</em> <code>test</code>. <a href="http://example.com">Test</a>.</p>\n\x7f'
<p>This <strong>is</strong> <em>a</em> <code>test</code>. <a href="http://example.com">Test</a>.</p>
--------------------------------------------------------------------------------
</code></pre>
<p>My code can be found in <a href="https://github.com/FSX/pantyshot" rel="nofollow">my Github repo</a>. I called it pantyshot, because I thought of that when I heard upskirt. Strange name, I know.</p>
<p>I hope someone can help me.</p>
|
<p>You are doing a <a href="https://github.com/FSX/pantyshot/blob/master/pantyshot.c#L36" rel="nofollow"><code>strdup</code> in <code>pantyshot_render</code></a>:</p>
<pre><code>output_text = strdup(ob->data); /* ob is a "struct buf *" */
</code></pre>
<p>But I don't think <code>ob->data</code> is a nul-terminated C string. You'll find this inside <a href="https://github.com/FSX/pantyshot/blob/master/upskirt/buffer.c#L166" rel="nofollow"><code>upskirt/buffer.c</code></a>:</p>
<pre><code>/* bufnullterm • NUL-termination of the string array (making a C-string) */
void
bufnullterm(struct buf *buf) {
if (!buf || !buf->unit) return;
if (buf->size < buf->asize && buf->data[buf->size] == 0) return;
if (bufgrow(buf, buf->size + 1))
buf->data[buf->size] = 0; }
</code></pre>
<p>So, you're probably running off the end of the buffer and getting lucky by hitting a <code>'\0'</code> before doing any damage. I think you're supposed to call <code>bufnullterm(ob)</code> before copying <code>ob->data</code> as a C string; or you could look at <a href="https://github.com/FSX/pantyshot/blob/master/upskirt/buffer.h#L33" rel="nofollow"><code>ob->size</code></a>, use <code>malloc</code> and <code>strncpy</code> to copy it, and take care of the nul-terminator by hand (but make sure you allocation <code>ob->size + 1</code> bytes for your copied string).</p>
<p>And if you want to get rid of the newline (i.e. the trailing <code>\n</code>), then you'll probably have to do some whitespace stripping by hand somewhere.</p>
|
python|c|python-extensions
| 3 |
1,908,446 | 68,008,481 |
numpy : float indices to interpolate
|
<p>What I would like to do is as follows :</p>
<p>I have a vector A which is initially zeros</p>
<pre><code>[0,0,0]
</code></pre>
<p>I am given a float index</p>
<pre><code>0.5
</code></pre>
<p>What I mean by interpolating with a float index is a function that produces output like this:</p>
<pre><code>[0.5,0.5,0]
</code></pre>
<p>A few more examples</p>
<pre><code>1 -> [0,1,0]
2 -> [0,0,1]
1.5 -> [0,0.5,0.5]
1.9 -> [0,0.1,0.9]
</code></pre>
<p>What is this called, and what function in numpy implements the behavior described above?</p>
|
<p>The function you describe could be thought of as interpolating between the rows (or columns) of a suitably-sized identity matrix: an input of <code>0</code> gives the
first basis vector <code>[1, 0, 0]</code>, an input of <code>1</code> gives <code>[0, 1, 0]</code>, and so on, and non-integer inputs interpolate between the two nearest vectors.</p>
<p>NumPy's <a href="https://numpy.org/doc/stable/reference/generated/numpy.interp.html" rel="nofollow noreferrer"><code>interp</code></a> function doesn't support interpolation between vectors, but SciPy's <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html" rel="nofollow noreferrer"><code>interp1d</code></a> does, and gives you exactly what you need. Here's a demonstration:</p>
<pre><code>>>> from scipy.interpolate import interp1d
>>> import numpy as np
>>> interpolator = interp1d(np.arange(3), np.identity(3))
>>> interpolator(0.5)
array([0.5, 0.5, 0. ])
>>> interpolator(1)
array([0., 1., 0.])
>>> interpolator(2)
array([0., 0., 1.])
>>> interpolator(1.5)
array([0. , 0.5, 0.5])
>>> interpolator(1.9)
array([0. , 0.1, 0.9])
</code></pre>
<p>You don't say what behaviour you'd want for <em>extrapolation</em>. That is, for inputs smaller than <code>0.0</code> or greater than <code>2.0</code>. But SciPy offers you various options here, too. By default, it will raise an exception:</p>
<pre><code>>>> interpolator(-0.2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/scipy/interpolate/polyint.py", line 78, in __call__
y = self._evaluate(x)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/scipy/interpolate/interpolate.py", line 677, in _evaluate
below_bounds, above_bounds = self._check_bounds(x_new)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/scipy/interpolate/interpolate.py", line 706, in _check_bounds
raise ValueError("A value in x_new is below the interpolation "
ValueError: A value in x_new is below the interpolation range.
</code></pre>
<p>But you can also extrapolate, or provide a fill value. Here's an extrapolation example:</p>
<pre><code>>>> interpolator = interp1d(np.arange(3), np.identity(3), fill_value="extrapolate")
>>> interpolator(-0.2)
array([ 1.2, -0.2, 0. ])
</code></pre>
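<p>If you would rather avoid the SciPy dependency, a minimal pure-NumPy sketch of the same behavior (without extrapolation) is:</p>
<pre><code>import numpy as np

def one_hot_interp(x, n):
    v = np.zeros(n)
    lo = int(np.floor(x))
    frac = x - lo
    v[lo] = 1 - frac
    if frac:          # avoid indexing past the end when x is exactly n - 1
        v[lo + 1] = frac
    return v

print(one_hot_interp(0.5, 3))  # [0.5 0.5 0. ]
print(one_hot_interp(1.9, 3))  # [0.  0.1 0.9]
</code></pre>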
|
python|numpy
| 2 |
1,908,447 | 64,107,279 |
Data analysis optimazation
|
<p>I have a dataset with cars being spotted on two different cameras. I need to calculate the average time it takes to travel from camera 1 to camera 2. The database looks like this:</p>
<pre><code>"ID","PLATE", "CSPOTID", "INOUT_FLAG","EVENT_TIME"
"33173","xys8","1","0","2020-08-27 08:24:53"
"33174","asd4","1","0","2020-08-27 08:24:58"
"33175","------","2","1","2020-08-27 08:25:03"
"33176","asd4","1","0","2020-08-27 08:25:04"
"33177","ghj1","1","0","2020-08-27 08:25:08"
...
</code></pre>
<p>Currently my code works as intended and calculates the average time between different rows. But working with big data and having a flow of incoming data, it takes too much time.</p>
<pre><code>import numpy as np, matplotlib.pyplot as plt, pandas as pd, collections, sys, operator, datetime

df = pd.read_csv('tmetrics_base2.csv', quotechar='"', skipinitialspace=True, delimiter=',',
                 dtype={"ID": int, "PLATE": "string", "CSPOTID": int, "INOUT_FLAG": int, "EVENT_TIME": "string"})
data = df.as_matrix()

# Sort values by PLATE
dfSortedByPlate = df.sort_values(['PLATE', 'EVENT_TIME'])

# List for already tested PLATEs
TestedPlate = []
resultList = []

# Iterate through all rows in db
for i, j in dfSortedByPlate.iterrows():
    # If PLATE is "------", skip it
    if j[1] == "-------":
        continue
    if j[1] in TestedPlate:
        continue
    TestedPlate.append(j[1])

    for ii, jj in dfSortedByPlate.iterrows():
        if j[1] != jj[1]:
            continue
        if j[1] == jj[1]:
            dt1 = datetime.datetime.strptime(jj[4], '%Y-%m-%d %H:%M:%S')
            dt2 = datetime.datetime.strptime(j[4], '%Y-%m-%d %H:%M:%S')
            Travel_time = []
            Travel_time.append((dt1 - dt2).total_seconds())
            # Discard if greater than 1 hour or less than 3 min
            if (dt1 - dt2).total_seconds() < 3000 and (dt1 - dt2).total_seconds() > 180:
                resultList.append((dt1 - dt2).total_seconds())
                # print((dt1 - dt2).total_seconds())
            print(sum(resultList) / len(resultList))
            placeholdertime = jj[4]
</code></pre>
<p>I have sorted the database by plate number so that the comparison should be fairly quick. Any advice or pointers to where I could increase run speed would be greatly appreciated.</p>
<p>Also I am unsure of how long time I should expect calculations like these to take? I don't have experience with data in this scale.</p>
|
<p>Just a few suggestions:</p>
<p>Read only what you need:</p>
<pre><code>df = pd.read_csv('data_raw.csv',
quotechar='"',
skipinitialspace=True,
delimiter=',',
usecols=['PLATE', 'EVENT_TIME'],
index_col=['PLATE'])
</code></pre>
<p>Convert the <code>EVENT_TIME</code> column to <code>datetime</code> (you don't have to do that row by row):</p>
<pre><code>df['EVENT_TIME'] = pd.to_datetime(df['EVENT_TIME'])
</code></pre>
<p>Sort (you already did that):</p>
<pre><code>df.sort_index(inplace=True)  # PLATE is the index, so sorting the index is enough
</code></pre>
<p>Fetch the plates, excluding the one that isn't needed):</p>
<pre><code>plates = set(df.index).difference({"------"})
</code></pre>
<p>Process the plate-chunks:</p>
<pre><code>for plate in plates:
print(df.loc[plate])
</code></pre>
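<p>For the travel-time statistic itself, the pairing can be done fully vectorized with <code>groupby().diff()</code> instead of nested loops. A sketch that mirrors the question's own logic (consecutive sightings of the same plate; the thresholds and the <code>------</code> sentinel come from the question's code):</p>
<pre><code>df = df[df.index != '------']
# seconds between consecutive sightings of the same plate
deltas = df.groupby(level='PLATE')['EVENT_TIME'].diff().dt.total_seconds()
# keep only plausible trips (more than 180 s, less than 3000 s, as in the question)
valid = deltas[(deltas > 180) & (deltas < 3000)]
print(valid.mean())
</code></pre>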
|
python|statistics|data-analysis
| 2 |
1,908,448 | 65,516,351 |
How To Sum all the values of a column for a date instance in pandas
|
<p>I am working on time-series data, where I have two columns date and quantity. The date is day wise. I want to add all the quantity for a month and convert it into a single date.</p>
<p><strong>date is my index column</strong></p>
<p>Example</p>
<pre><code> quantity
date
2018-01-03 30
2018-01-05 45
2018-01-19 30
2018-02-09 10
2018-02-19 20
</code></pre>
<p>Output :</p>
<pre><code> quantity
date
2018-01-01 105
2018-02-01 30
</code></pre>
<p>Thanks in advance!!</p>
|
<p>You can downsample to combine the data for each month and sum it by chaining the sum method.</p>
<p><code>df.resample("M").sum()</code></p>
<p>Check out the pandas user guide on resampling <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#resampling" rel="nofollow noreferrer">here</a>.</p>
<p>You'll need to make sure your index is in datetime format for this to work. So first do: <code>df.index = pd.to_datetime(df.index)</code>. Hat tip to sammywemmy for the same advice in the comments.</p>
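<p>Putting it together on the example data: note that <code>"M"</code> labels each bin with the month <em>end</em> (2018-01-31); if you want the first day of the month, as in your expected output, use <code>"MS"</code> instead:</p>
<pre><code>df.index = pd.to_datetime(df.index)
print(df.resample("MS").sum())
#             quantity
# date
# 2018-01-01       105
# 2018-02-01        30
</code></pre>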
|
python|pandas|dataframe|time-series|data-analysis
| 0 |
1,908,449 | 50,819,486 |
Plot specific values on y axis instead of increasing scale from dataframe
|
<p>When plotting 2 columns from a dataframe into a line plot, is it possible to, instead of a consistently increasing scale, have fixed values on your y axis (and keep the distances between the numbers on the axis constant)? For example, instead of 0, 100, 200, 300, ... to have 0, 21, 53, 124, 287, depending on the values from your dataset? So basically to have on the axis all your possible values fixed instead of an increasing scale?</p>
|
<p>Yes, you can use: <code>ax.set_yticks()</code></p>
<p>Example:</p>
<pre><code>df = pd.DataFrame([[13, 1], [14, 1.5], [15, 1.8], [16, 2], [17, 2], [18, 3 ], [19, 3.6]], columns = ['A','B'])
fig, ax = plt.subplots()
x = df['A']
y = df['B']
ax.plot(x, y, 'g-')
ax.set_yticks(y)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/O8uvs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O8uvs.jpg" alt="enter image description here"></a>
Or if the values are very distant from each other, you can use <code>ax.set_yscale('log')</code>.
Example:</p>
<pre><code>df = pd.DataFrame([[13, 1], [14, 1.5], [15, 1.8], [16, 2], [17, 2], [18, 3 ], [19, 3.6], [20, 300]], columns = ['A','B'])
fig, ax = plt.subplots()
x = df['A']
y = df['B']
ax.plot(x, y, 'g-')
ax.set_yscale('log', basey=2)  # basey, not basex, for the y-axis (base=2 on Matplotlib >= 3.3)
ax.yaxis.set_ticks(y)
ax.yaxis.set_ticklabels(y)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/aJeZK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aJeZK.jpg" alt="enter image description here"></a></p>
|
python|pandas|dataframe|plot
| 4 |
1,908,450 | 3,242,448 |
Quick and dirty reports based on a SQL query
|
<p>I never thought I'd ever say this but I'd like to have something like the report generator in Microsoft Access. Very simple, just list data from a SQL query.</p>
<p>I don't really care what language is used as long as I can get it done fast.
C#,C++,Python,Javascript...</p>
<p>I want to know the quickest (development sense) way to display data from a database.</p>
<p>edit :</p>
<p>I'm using MySQL with a web interface for data input. It would be much better if the user had some kind of GUI.</p>
|
<p>Depends on the database -- with <a href="https://www.sqlite.org/cli.html" rel="nofollow noreferrer">sqlite</a>, for example, ...:</p>
<pre><code>$ sqlite3 databasefile 'select foo, bar from baz'
</code></pre>
<p>is all it takes (see the URL I pointed to for more options you can use, e.g. to change the output format, etc). Mysql has a similar command-line client (see e.g. <a href="http://dev.mysql.com/doc/refman/5.5/en/mysql.html" rel="nofollow noreferrer">here</a>), so does PostgreSQL (see <a href="http://www.rootr.net/man/man/psql/1" rel="nofollow noreferrer">here</a>), etc, etc.</p>
<p>So, what specific DB engine are you concerned with? Or, if more than one, which set?</p>
|
c#|javascript|python|sql|database
| 0 |
1,908,451 | 3,480,371 |
How to close stdout/stderr window in python?
|
<p>In my python app, I print some stuff during a cycle.</p>
<p>After the cycle, I want to close the <code>stdout/stderr</code> window that the prints produced using python code.</p>
|
<pre><code>import sys
sys.stdout.close()
sys.stderr.close()
</code></pre>
<p>Might be what you want. This will certainly close stdout/stderr at any rate.</p>
|
python|printing|window
| 7 |
1,908,452 | 50,559,117 |
Prime numbers that are factors of a received integer
|
<p>I've been trying to complete this assignment but I couldn't get what is asked, which is: in <strong>Python 3</strong>, ask a user to enter an integer in (1, 1000). Out of the list of the first prime numbers 2, 3, 5, 7, print those prime numbers that are factors of the received integer.<br>
I hope you can help me to get this.</p>
|
<pre><code>def get_primes(n):
out = list()
sieve = [True] * (n+1)
for p in range(2, n+1):
if (sieve[p]):
out.append(p)
for i in range(p, n+1, p):
sieve[i] = False
return out
def get_factors(n):
output = list()
for i in range(1, n + 1):
if n % i == 0:
output.append(i)
return output
# input_number = input('Enter a number')
# input_number = int(input_number)
input_number = 30
primes = get_primes(input_number+1) # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
factors = get_factors(input_number) # [1, 2, 3, 5, 6, 10, 15, 30]
prime_factors = list()
for i in factors:
if i in primes:
prime_factors.append(i)
print(prime_factors)
</code></pre>
<p>Output:</p>
<pre><code>[2, 3, 5]
</code></pre>
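<p>Since the assignment only involves the fixed primes 2, 3, 5 and 7, a direct check works as well (a minimal sketch):</p>
<pre><code>n = 30
print([p for p in (2, 3, 5, 7) if n % p == 0])
# [2, 3, 5]
</code></pre>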
<p>Code for getting prime numbers:
<a href="https://stackoverflow.com/questions/11619942/print-series-of-prime-numbers-in-python">Print series of prime numbers in python</a></p>
|
python
| 0 |
1,908,453 | 34,993,363 |
How can I call boto.ec2.connect_to_region() with IAM:PassRole credentials?
|
<p>I would like to run boto (2.38.0) on an EC2 instance. I would like to connect without specifying credentials -- the permissions available to boto are defined via an IAM:PassRole definition.</p>
<p>I already have aws CLI commands working on the instance (e.g. <code>aws s3 cp</code>). My problem is that calling <code>boto.ec2.connect_to_region('eu-west-1')</code> fails with this:</p>
<p><code>Traceback (most recent call last):
File "/usr/local/bin/get-attached-volume", line 5, in <module>
c = boto.ec2.connect_to_region('eu-west-1')
File "/usr/local/lib/python2.7/dist-packages/boto-2.38.0-py2.7.egg/boto/ec2/__init__.py", line 66, in connect_to_region
return region.connect(**kw_params)
File "/usr/local/lib/python2.7/dist-packages/boto-2.38.0-py2.7.egg/boto/regioninfo.py", line 187, in connect
return self.connection_cls(region=self, **kw_params)
File "/usr/local/lib/python2.7/dist-packages/boto-2.38.0-py2.7.egg/boto/ec2/connection.py", line 103, in __init__
profile_name=profile_name)
File "/usr/local/lib/python2.7/dist-packages/boto-2.38.0-py2.7.egg/boto/connection.py", line 1100, in __init__
provider=provider)
File "/usr/local/lib/python2.7/dist-packages/boto-2.38.0-py2.7.egg/boto/connection.py", line 569, in __init__
host, config, self.provider, self._required_auth_capability())
File "/usr/local/lib/python2.7/dist-packages/boto-2.38.0-py2.7.egg/boto/auth.py", line 987, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials
</code></p>
<p><a href="https://stackoverflow.com/questions/11129976/boto-issue-with-iam-role">This question</a> seems to have the answer, but running <a href="https://stackoverflow.com/a/11130701/3904">this answer</a> from the command line gives me the same failure.</p>
<p>The answer there mentions not having "no other credentials are found in environment variables or in a boto config file". I haven't been able to find any such configuration.</p>
|
<p>Can you post the policy associated with the user? I am guessing the user is allowed to pass the S3 role but not the EC2 role, which would explain why the CLI works but boto/ec2 does not. Or try this CLI command and see if it works:</p>
<pre><code>aws ec2 describe-regions
</code></pre>
|
python|amazon-web-services|boto|aws-sdk|amazon-iam
| 0 |
1,908,454 | 56,676,953 |
Creating heap algorithm to output a list of all permutations, in base python without other modules
|
<p>I am trying to build an algorithm which will output a list of all permutations of an inputted string and I am getting very lost, especially when it comes to Heap's algorithm. I tried to copy the code listed on the Wikipedia page, to no avail. I want a solution in base Python.</p>
<pre><code># Desired output
heaps_func('art')
['rta', 'tra', 'tar', 'rat', 'art', 'atr']
# Current code
def heaps_func(a):
lst=[a]
l=len(a)
if len(a)==1:
return lst
else:
for x in range(len(a)-1):
if x<(l-1):
if l%2==0:
k=list(a)
p=k[i]
k[i]=k[l-1]
k[l-1]=p
k=''.join(k)
lst.append(k)
else:
k=list(a)
p=k[0]
k[0]=k[l-1]
k[l-1]=p
k=''.join(k)
lst.append(k)
return lst
</code></pre>
|
<p>You can do it by using recursion. Here is the Python code:</p>
<pre><code>def heaps_func(a,size):
if size ==1:
a = ''.join(a)
print(a)
return
for i in range(size):
heaps_func(a,size-1)
if size%2==1:
a[0],a[size-1]=a[size-1],a[0]
else:
a[i], a[size - 1] = a[size - 1], a[i]
heaps_func(list('art'),3)
</code></pre>
<p>If the given string contains duplicate characters, this program will also print duplicate permutations. For example, in the string "arr", 'r' appears twice, and the output of this program will be:</p>
<blockquote>
<p>arr rar rar arr rra rra</p>
</blockquote>
<p>To get rid of this, we can use a list: before printing, we check whether the permutation already exists in the list. If it does not, we print it and store it in the list.</p>
<p>Programs:</p>
<pre><code>def heaps_func(a,size,listofwords):
if size ==1:
a = ''.join(a)
#print(a)
if listofwords.count(a)==0:
print(a)
listofwords.append(a)
return
for i in range(size):
heaps_func(a,size-1,listofwords)
if size%2==1:
a[0],a[size-1]=a[size-1],a[0]
else:
a[i], a[size - 1] = a[size - 1], a[i]
listofwords=[]
heaps_func(list('arr'),len('arr'),listofwords)
</code></pre>
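<p>Since a <code>set</code> has O(1) membership tests (vs. <code>list.count()</code>, which scans the whole list), a variant using a set scales better for longer strings; a sketch with the same output:</p>
<pre><code>def heaps_func(a, size, seen):
    if size == 1:
        word = ''.join(a)
        if word not in seen:  # O(1) lookup instead of list.count()
            print(word)
            seen.add(word)
        return
    for i in range(size):
        heaps_func(a, size - 1, seen)
        if size % 2 == 1:
            a[0], a[size - 1] = a[size - 1], a[0]
        else:
            a[i], a[size - 1] = a[size - 1], a[i]

heaps_func(list('arr'), len('arr'), set())
</code></pre>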
<p>for details please read the following link . But there it is described in C/C++.</p>
<p><a href="https://www.geeksforgeeks.org/heaps-algorithm-for-generating-permutations/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/heaps-algorithm-for-generating-permutations/</a></p>
|
python|python-3.x|list|for-loop|heapsort
| 0 |
1,908,455 | 45,064,765 |
Weights given by MLPClassifier in sklearn.neural_network (Python)
|
<p>I am currently working on the MLPClassifier of the neural_network package in sklearn. </p>
<p>I have fit the model; I want to access the weights given by the classifier to the input features. How do I access them?</p>
<p>Thanks in advance!</p>
|
<p>Check out the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html" rel="noreferrer">documentation</a>.
See the field coefs_.</p>
<p>Try:</p>
<pre><code>print(model.coefs_)
</code></pre>
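<p>For example, to inspect one weight matrix per layer (a sketch; the shapes depend on your <code>hidden_layer_sizes</code>):</p>
<pre><code>for i, w in enumerate(model.coefs_):
    print("layer", i, "weight matrix shape:", w.shape)
</code></pre>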
<p>Generally, I recommend:</p>
<ul>
<li>checking the documentation</li>
<li><p>if that fails, then</p>
<pre><code>print(dir(model))
</code></pre>
<p>or </p>
<pre><code>help(model)
</code></pre>
<p>will tell you what's available for in most cases.</p></li>
</ul>
|
python-3.x|scikit-learn
| 8 |
1,908,456 | 45,045,874 |
python apply lambda function
|
<p>I have a use-case where I need to apply a function argument to another argument, which is typically a list. For example, I might need to apply min on a list, max on a list or sum on a list.</p>
<pre><code>def calc_df_query(select_col, agg_func, where_col, mn, mx):
tmp = globals().get('data')[select_col][globals().get('data')[where_col].between(mn, mx, inclusive=True)]
agg_method = lambda col,agg: agg(col)
return (agg_method(tmp, agg_func))
</code></pre>
<p>As a result of the last return statement I am getting an error "str object is not callable". Any help to do this trick is appreciated.</p>
|
<p>The solution is to use a dictionary to map strings to pandas functions (<code>pd.Series</code> methods here, since selecting a single column produces a Series):</p>
<pre><code>dispatcher={'min':pd.Series.min, 'max':pd.Series.max, 'sum': pd.Series.sum, 'mean': pd.Series.mean}
</code></pre>
<p>and then the usage:</p>
<pre><code>def calc_df_query(select_col, agg_func, where_col, mn, mx):
tmp = globals().get('data')[select_col][globals().get('data')[where_col].between(mn, mx, inclusive=True)]
agg_method = lambda col, agg: agg(col)
return (agg_method(tmp, dispatcher.get(agg_func)))
</code></pre>
<p>thanks for the tips</p>
|
python|lambda|aggregate
| 0 |
1,908,457 | 45,017,774 |
python set vs tuple lookup. is Lookup in Tuple O(1)?
|
<p>Recently I saw a question here on Python: <a href="https://stackoverflow.com/questions/45016954/python-if-statement-and-logical-operator-issue?noredirect=1#comment77008672_45016954">Python If statement and logical operator issue</a>. Someone in the comments gave an answer that it can be done like this:</p>
<p><code>1 in (1, 2, 3)</code> to check if 1 is present in a collection of items. But in my view <code>1 in {1, 2, 3}</code> should be much faster. As you can see in the discussion there, someone of high reputation says that <code>( )</code> is faster for fixed-size input and has faster lookup than <code>{ }</code>. I am asking here because I want to know, for my own understanding, which one is correct, and I also do not get the idea of <code>( )</code> being <code>fixed-size</code> or <code>variable-size</code>. I asked for a reference in that original question so I could correct myself if I am wrong, but the user insists on correcting my basics of computer science without giving a single reference for the argument that <code>lookup in Tuple is O(1)</code>. So I am asking it here.</p>
|
<p>When you say something like <code>O(n)</code>, you have to say what <code>n</code> is. Here, <code>n</code> is the length of the tuple... but the tuple is not an input. You're not taking the tuple as an argument or anything. <code>n</code> is always <code>2</code> in the conversation you linked, or <code>3</code> for your example tuple, so for this particular <code>n</code>, <code>O(n)</code> is the same thing as <code>O(2)</code>, or <code>O(1)</code>.</p>
<p>As you may have noticed by now, it doesn't make much sense to talk about <code>O(n)</code> when <code>n</code> is a constant. If you had a function like</p>
<pre><code>def in_(element, tup):
return element in tup
</code></pre>
<p>you could say that the runtime is <code>O(n)</code> element comparisons, where <code>n</code> is <code>len(tup)</code>, but for something like</p>
<pre><code>usr in ('Y', 'y')
</code></pre>
<p>talking about <code>n</code> isn't very useful.</p>
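<p>If the container length really were an input, the difference is easy to see empirically (a rough sketch with <code>timeit</code>; exact numbers will vary):</p>
<pre><code>import timeit

big_tuple = tuple(range(100000))
big_set = set(big_tuple)

# worst case for the tuple: the element is at the end
print(timeit.timeit('99999 in big_tuple', globals=globals(), number=1000))
print(timeit.timeit('99999 in big_set', globals=globals(), number=1000))
# tuple: linear scan; set: hash lookup, orders of magnitude faster here
</code></pre>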
|
python
| 6 |
1,908,458 | 65,025,999 |
Python Button Algorithm
|
<p>I'm currently trying to solve a puzzle programmatically with Python, I want to be able to solve it myself but I'm finding it hard to describe the problem so I can seek assistance with it through online resources. Below I'll describe the nature of the problem and any help given is really appreciated.</p>
<p>So there is a set of 4 coloured buttons and each of them are assigned a function which changes the colour of one or more of the buttons in a looping manner. A code representation of these buttons might be as follows:</p>
<pre><code># 1 = green, 2 = red, 3 = blue, 4 = purple
# goes 1->4 then loops
cstate = [1,3,4,1] # current state of the numbers
</code></pre>
<p>The four different functions a button can perform are:</p>
<ol>
<li>Increments itself by 1</li>
<li>Increments itself and one other by 1</li>
<li>Increments itself and 2 others by 1</li>
<li>Increments all by 1</li>
</ol>
<p>Each function is unique to each button, hence no two buttons can be assigned the same function.</p>
<p>My attempt at representing these functions was to create an array describing the index of the buttons that are affected by clicking each button, for example:</p>
<pre><code>incArray =[[0,3],[0,1,3],[0,1,2,3],[3]]
</code></pre>
<p>Following this I created a function that applies the buttons functions to the <code>cstate</code> array described above:</p>
<pre><code>def increment(currentState, whichClicked, whichEffects):
retArr = currentState
for click in whichClicked:
for change in whichEffects[click]:
if currentState[change] == 4:
retArr[change] = 1
else:
retArr[change] += 1
print(retArr)
return retArr
</code></pre>
<p>Now in this particular example I fed the <code>increment</code> function with <code>whichClicked = [2,2,2,1,0,3,3]</code>, as I know this to be the correct combination (or final state) to be <code>fstate = [2,3,3,4]</code>.</p>
<p>What I'm trying to achieve is to write code to generate the <code>whichClicked</code> array described above, given the <code>cstate</code> and the <code>fstate</code>. Thanks in advance for any help provided!</p>
|
<p>I tend to develop these kinds of algorithms by starting with a 'dumb' brute-force version and then optimizing it further.</p>
<h1>Brute force</h1>
<p>You could implement this in a "brute-force" way by a kind of <a href="https://en.wikipedia.org/wiki/Breadth-first_search" rel="nofollow noreferrer">Breadth-first search algorithm</a>, where you are going to just:</p>
<ul>
<li>click all buttons on the initial state (4 options)</li>
<li>for all of the resulting states, you will click all buttons again (16 options)</li>
<li>etc. where you constantly check whether you reached the goal state.</li>
</ul>
<p>Something like this:</p>
<pre class="lang-py prettyprint-override"><code>from collections import deque
from dataclasses import dataclass
start_state = [1,3,4,1] # current state of the numbers
incArray =[[0,3],[0,1,3],[0,1,2,3],[3]]
@dataclass
class Node:
path: list
state: list
def apply_button(state, inc_array, button):
new_state = state.copy()
for affected_button in inc_array[button]:
new_state[affected_button] = new_state[affected_button] % 4 + 1
return new_state
def brute_force(start_state, inc_array, goal_state):
iterations=0
leafNodes = deque([Node([], start_state)])
while True:
node = leafNodes.popleft()
for button in range(4):
iterations+=1
new_state = apply_button(node.state, inc_array, button)
new_path = node.path + [button]
if new_state==goal_state:
print(f"iterations: {iterations}")
return new_path
leafNodes.append(Node(new_path, new_state))
print(brute_force(start_state,incArray,[2, 3, 3, 4]))
# OUTPUT:
# iterations: 7172
# [0, 1, 2, 2, 2, 3, 3]
</code></pre>
<h2>First optimization</h2>
<p>You will see that the resulting output is the same as the "whichClicked" array you provided in your example, but that all items are sorted. This is because the <strong>order of clicking the buttons does not affect the end result</strong>.
You can use that knowledge to optimize your algorithm as it is evaluating tons of redundant options. (e.g. path [0,1] gives the same result as path [1,0])</p>
<p>So a new strategy could be to exclude these redundant options in your solution. If you draw the whole search graph on paper (or uncomment the <code># print(new_path)</code> line), you see that following code only iterates over the "sorted" path:</p>
<pre class="lang-py prettyprint-override"><code>
def brute_force_opt(start_state, inc_array, goal_state):
iterations=0
leafNodes = deque([Node([], start_state)])
while True:
node = leafNodes.popleft()
min_button = node.path[-1] if len(node.path) else 0
for button in range(min_button, 4):
iterations+=1
new_state = apply_button(node.state, inc_array, button)
new_path = node.path + [button]
# print(new_path)
if new_state==goal_state:
print(f"iterations: {iterations}")
return new_path
leafNodes.append(Node(new_path, new_state))
print(brute_force_opt(start_state,incArray,[2, 3, 3, 4]))
# OUTPUT:
# iterations: 283
# [0, 1, 2, 2, 2, 3, 3]
</code></pre>
<p>As you see from the output, the number of iterations has been reduced from 7172 to 283.</p>
<p>The first paths to be evaluated are now:</p>
<pre><code>[0]
[1]
[2]
[3]
[0, 0]
[0, 1]
[0, 2]
[0, 3]
[1, 1]
[1, 2]
[1, 3]
[2, 2]
[2, 3]
[3, 3]
[0, 0, 0]
[0, 0, 1]
[0, 0, 2]
[0, 0, 3]
</code></pre>
<hr />
<p><strong>edited</strong></p>
<h2>Second Optimization</h2>
<p>A second optimization could be to take into account that there are 'cyclic' paths: e.g. after pressing the fourth button four times (path [3,3,3,3]), you will end up in the same state. A straightforward way to take this into account is to keep a set of states that you already encountered. If you end up in such a state again, you can just ignore it, as it will not give a better solution (the path will always be longer getting to the solution via this cyclic path):</p>
<pre class="lang-py prettyprint-override"><code>def brute_force_opt2(start_state, inc_array, goal_state):
iterations=0
    encountered_states = set()
leafNodes = deque([Node([], start_state)])
while True:
node = leafNodes.popleft()
min_button = node.path[-1] if len(node.path) else 0
for button in range(min_button, 4):
new_state = apply_button(node.state, inc_array, button)
            if tuple(new_state) not in encountered_states:
iterations+=1
new_path = node.path + [button]
# print(new_path)
if new_state==goal_state:
print(f"iterations: {iterations}")
return new_path
leafNodes.append(Node(new_path, new_state))
                encountered_states.add(tuple(new_state))
print(brute_force_opt2(start_state,incArray,[2, 3, 3, 4]))
# OUTPUT:
# iterations: 213
# [0, 1, 2, 2, 2, 3, 3]
</code></pre>
<p>As you see, the number of iterations is now only 213. This number is, as could be expected, lower than the maximum number of unique states (4^4 = 256).</p>
<h1>Analytical approaches</h1>
<p>Suppose the complexity of this problem would have been much bigger (e.g. much more buttons and colors), a brute force approach might not be feasible and you could consider a more analytical approach where you e.g. calculate how many times every button must be incremented (modulo 4) to go from start to end state and find a combination of button clicks that fulfills this for all buttons.</p>
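<p>As a sketch of that idea for this puzzle: pressing any button four times is a no-op, so each button is effectively pressed 0 to 3 times and only 4^4 = 256 count combinations need to be checked:</p>
<pre><code>from itertools import product

def analytical(start_state, inc_array, goal_state):
    # how much each position must be incremented, modulo 4
    needed = [(g - s) % 4 for s, g in zip(start_state, goal_state)]
    for counts in product(range(4), repeat=4):  # presses per button
        delta = [0, 0, 0, 0]
        for button, presses in enumerate(counts):
            for pos in inc_array[button]:
                delta[pos] = (delta[pos] + presses) % 4
        if delta == needed:
            # expand counts, e.g. (1, 1, 3, 2), into a click sequence
            return [b for b, c in enumerate(counts) for _ in range(c)]

print(analytical([1, 3, 4, 1], [[0, 3], [0, 1, 3], [0, 1, 2, 3], [3]],
                 [2, 3, 3, 4]))
# [0, 1, 2, 2, 2, 3, 3]
</code></pre>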
|
python|algorithm
| 2 |
1,908,459 | 50,742,538 |
ImportError: No module named sysconfig--can't get pip working
|
<p>I'm really struggling with pip on a RedHat 6.9 system. Every time I tried to use pip, I got </p>
<pre><code>ImportError: No module named sysconfig
</code></pre>
<p>I tried Googling for solutions. I don't have apt-get and can't seem to get it with yum, so purging setuptools was out of the question. I did my best to delete setuptools by hand so I could reinstall them, but yum is convinced there are still setuptools on the machine.</p>
<p>Pretty much any of the advice involving downloading something with yum doesn't work for me. Yum always says it can't find what I'm looking for. So if there's a way I can download something without yum or apt-get (for example, not through the terminal), that would probably be best.</p>
<p>I have both Python 3 and Python 2 on my machine, so I don't know if that will change the advice that you guys can give me.</p>
<p>1000 thanks to anyone who can help! Right now I can only get things done through anaconda interfaces (such as Jupyter notebooks and Spyder) which is really limiting.</p>
<p><strong>EDIT:</strong> Here is my error trace:</p>
<pre><code>Traceback (most recent call last):
File "/usr/bin/pip2", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 947, in <module>
class Environment(object):
File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 951, in Environment
self, search_path=None, platform=get_supported_platform(),
File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 180, in get_supported_platform
plat = get_build_platform()
File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 380, in get_build_platform
from sysconfig import get_platform
ImportError: No module named sysconfig
</code></pre>
<p><strong>EDIT 2:</strong> @hoefling requested that I post the output of the following commands; first:</p>
<pre><code>$ yum list installed | grep setuptools
*Note* Red Hat Network repositories are not listed below. You must run this command as root to access RHN repositories.
python-setuptools.noarch 0.6.10-4.el6_9 @ncep-base-x86_64-workstation-6
</code></pre>
<p>and:</p>
<pre><code>$ grep ^Version: /usr/lib/python2.6/site-packages/setuptools-*.egg-info/PKG-INFO
grep: /usr/lib/python2.6/site-packages/setuptools-*.egg-info/PKG-INFO: No such file or directory
</code></pre>
|
<p>I've got the same error with python2.6 on redHat server 6.9 :</p>
<pre><code>pip version
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 947, in <module>
class Environment(object):
File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 951, in Environment
self, search_path=None, platform=get_supported_platform(),
File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 180, in get_supported_platform
plat = get_build_platform()
File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 380, in get_build_platform
from sysconfig import get_platform
ImportError: No module named sysconfig
</code></pre>
<p>I removed :</p>
<pre><code>rm /usr/lib/python2.6/site-packages/pkg_resources*
</code></pre>
<p>and i reinstalled python-setuptools</p>
<pre><code>yum reinstall python-setuptools
</code></pre>
<p>After this fix :</p>
<pre><code>pip --version
pip 7.1.0 from /usr/lib/python2.6/site-packages (python 2.6)
</code></pre>
|
python|pip|setuptools|yum
| 10 |
1,908,460 | 60,494,088 |
I'm trying to load a file into Python using pd.read_csv(), but I cannot understand the file's format
|
<p>This is my very first question on stackoverflow, so I must beg your patience.</p>
<p>I believe there is something wrong with the format of a csv file I need to load into Python. I'm using a Jupyter Notebook. The link to the file is <a href="https://drive.google.com/open?id=1X7tpITYVN5Np7jadfY5qE6IVvfLHORu7" rel="nofollow noreferrer">here</a>.
It is from the World Inequality Database data portal. </p>
<p>I'm pretty sure the delimiter is a semi-colon ( <code>sep=";"</code> ) because the bottom half of the data renders neatly when I specify this argument. However the first half of the text in the file seems to make no sense. I have no idea how to tell the <code>pd.read_csv()</code> function how to read it. I suspect the first half of the data simply has terrible formatting. I've also tried <code>header=None</code> and <code>sep="|"</code> to no avail.</p>
<p>Any ideas or suggestions would be very helpful. Thank you very much!</p>
|
<p>This is common with spreadsheets. You may have some commentary, and tables may be inserted all over the place. It looks great to the content creator, but the CSV is a mess. You need to preprocess the CSV to create clean content for your analysis. In this case, it's easy. The content starts at a canned header and you can split the file there. If that header changes, you'll get an error, and now it's just one more sleepless night figuring out what they've done.</p>
<pre><code>import itertools
import os

canned_header_line = "Variable Code;country;year;perc;agdpro999i;"\
    "npopul999i;mgdpro999i;inyixx999i;xlceux999i;xlcusx999i;xlcyux999i"

def scrub_WID_file(in_csv_filename, out_csv_filename):
    with open(in_csv_filename) as in_file,\
            open(out_csv_filename, 'w') as out_file:
        # copy everything from the known header line onwards
        out_file.writelines(itertools.dropwhile(
            lambda line: line.strip() != canned_header_line,
            in_file))
    if not os.stat(out_csv_filename).st_size:
        raise ValueError("No recognized header in " + in_csv_filename)
</code></pre>
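<p>The cleaned file can then be fed straight to pandas (file names here are placeholders; the semicolon is the delimiter you already identified):</p>
<pre><code>import pandas as pd

scrub_WID_file('wid_raw.csv', 'wid_clean.csv')
df = pd.read_csv('wid_clean.csv', sep=';')
</code></pre>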
|
python|file|upload
| 2 |
1,908,461 | 57,980,794 |
do I have to respect order of commands in pyspark sql?
|
<p>I am learning pyspark sql and I am unsure whether the functions have to be in the following order: groupBy(), agg(), join(), select()?</p>
<pre class="lang-py prettyprint-override"><code>test = test.groupBy('year')\
.agg(f.max('value').alias('value'))\
.join(sch,['year','value'])\
.select(['year','station','value'])
</code></pre>
<p>I am used to pure SQL where the ordering is select, from, where, etc. Here, I tried putting the groupBy() at the end but it fails, so I assume there must be an ordering to be respected. Where is this order specified?</p>
<p>I checked <a href="https://spark.apache.org/docs/1.6.0/api/python/pyspark.sql.html" rel="nofollow noreferrer">https://spark.apache.org/docs/1.6.0/api/python/pyspark.sql.html</a> but it does not say anything about respecting the order of the commands.</p>
|
<p>The code you wrote uses the pyspark DataFrame API. You can write the same logic using pyspark SQL, where the familiar SQL clause order applies. A rough equivalent of your chain (aggregate first, then join on the keys from your question; assumes <code>test</code> and <code>sch</code> are registered as temp views) would be:</p>
<pre><code>spark.sql("""
    SELECT t.year, s.station, t.value
    FROM (SELECT year, MAX(value) AS value FROM test GROUP BY year) t
    JOIN sch s ON t.year = s.year AND t.value = s.value
""")
</code></pre>
<p>pyspark SQL is similar to MySQL in this respect.</p>
<p>The logical order of execution is:
<strong>join</strong> the datasets,
check the <strong>where</strong> condition,
<strong>group by</strong>,
then <strong>select</strong> and <strong>order</strong> the data.
DataFrame methods, by contrast, run in exactly the order you chain them, which is why moving <code>groupBy()</code> to the end fails.</p>
<p>you can learn pyspark from following link: <a href="https://www.youtube.com/playlist?list=PLf0swTFhTI8rT3ApjBqt338MCO0ZvReFt" rel="nofollow noreferrer">https://www.youtube.com/playlist?list=PLf0swTFhTI8rT3ApjBqt338MCO0ZvReFt</a></p>
|
python|pyspark-sql
| 0 |
1,908,462 | 58,083,502 |
Binary Classification Neural Network: both Nan loss and NaN predictions
|
<p>This model tries to predict two states based on an array with 400 numbers. During the first training round the model produces a normal loss on the first +- 200 samples, and then goes into NaN loss. The accuracy stays around 50% and when I print the predictions for the test set, it only predicts NaN. My <code>X_train</code> has a shape of <code>(1934, 400, 1)</code> and my <code>y_train</code> a shape of <code>(1934,)</code>. I already checked for NaNs in the dataset, but there were none.</p>
<p>My model looks like this:</p>
<pre><code>model = Sequential()
model.add(LSTM(128, input_shape=(400,1), activation='relu', return_sequences=True))
model.add(Dropout(0,2))
model.add(LSTM(128, activation='relu'))
model.add(Dropout(0,2))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0,2))
model.add(Dense(1, activation='sigmoid'))
opt = tf.keras.optimizers.Adam(lr=0.01)
# mean_squared_error = mse
model.compile(loss='binary_crossentropy',
optimizer=opt,
metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=3, validation_split = 0.1, shuffle=True, batch_size = 64)
</code></pre>
<p>edit: Solved by changing the activation function to tanh. Sigmoid stays sigmoid!</p>
|
<p>The problem was solved by changing the activation functions to "tanh". The dropout arguments should also be 0.2 instead of 0,2, but that wasn't the cause of the problem.</p>
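<p>For reference, a minimal corrected version of the model from the question (tanh in the LSTMs, <code>Dropout(0.2)</code> instead of <code>Dropout(0,2)</code>; whether the hidden Dense layer's relu also needs changing was not tested):</p>
<pre><code>model = Sequential()
model.add(LSTM(128, input_shape=(400, 1), activation='tanh', return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(128, activation='tanh'))
model.add(Dropout(0.2))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
</code></pre>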
|
python|keras|deep-learning
| 1 |
1,908,463 | 56,343,109 |
Pandas still converts ints to float when I make a dictionary
|
<p>Pandas still makes ints into floats when I try to make a mixed-type series into a dictionary. </p>
<p>The suggested workarounds don't work for me. I've tried casting to object and using <code>iteritems</code>, and both together. I'm using pandas version 0.24.2</p>
<pre><code>test_series = pd.Series([1,1.3])
result = [{index:value} for index,value in test_series.astype(object).iteritems()]
print (results)
</code></pre>
<p>Expected :</p>
<pre><code>[{0: 1}, {1: 1.3}]
</code></pre>
<p>Actual :</p>
<pre><code>[{0: 1.0}, {1: 1.3}]
</code></pre>
|
<p>You can use the function <strong>is_integer()</strong> to check if your float is an integer</p>
<pre><code>test_series = pd.Series([int(1),1.3])
results = [{index:int(value)} if value.is_integer() else {index:value} for index,value in test_series.iteritems()]
print (results)
</code></pre>
<hr>
<p>Out : </p>
<pre><code>[{0: 1}, {1: 1.3}]
</code></pre>
|
python|pandas
| 3 |
1,908,464 | 56,282,228 |
Tensorflow Hub causes tensorflow logging to duplicate!
|
<p>My Tensorflow logging messages shows twice. After some investigation I
figured out the cause is Tensorflow Hub. </p>
<blockquote>
<p><strong>Example:</strong></p>
</blockquote>
<p><strong>Code:</strong></p>
<pre><code>import tensorflow as tf
import tensorflow_hub
tf.logging.set_verbosity(tf.logging.INFO)
tf.logging.info("Hello test!")
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>INFO:tensorflow:Hello test!
I0523 16:35:51.024926 140735788589952 log.py:13] Hello test!
</code></pre>
<p><strong>Desired output:</strong></p>
<pre><code> INFO:tensorflow:Hello test!
</code></pre>
<blockquote>
<p><strong>What I tried:</strong></p>
</blockquote>
<p>I tried to inverse the order of the imports and I ended up with only the second line of output. This is better but I want to know how to get only the first line of the output! Thanks for your help.</p>
|
<p>I think the issue here is that Tensorflow Hub uses <a href="https://abseil.io/docs/python/guides/logging" rel="nofollow noreferrer">https://abseil.io/docs/python/guides/logging</a> whereas tensorflow uses regular Python logging.</p>
<p>Switching the type of logging used by Tensorflow Hub is something to consider. Meanwhile, the issue can be worked around by re-ordering the import statements:</p>
<pre><code>import tensorflow_hub as hub
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
tf.logging.info('This is a log')
</code></pre>
|
machine-learning|deep-learning|tensorflow
| 0 |
1,908,465 | 56,095,177 |
How do I pass parameters to a function from a guizero object?
|
<p>I am unable to pass any parameter to a function called by a guizero widget either with the attribute "command" at initialization or by invoking an event.</p>
<p>This works as expected (no parameters passed):</p>
<pre><code>from guizero import App, PushButton
def go():
print (10)
app = App(title="Test")
button = PushButton(app, text = "Click Here", command = go)
app.display()
</code></pre>
<p>but the following prints the number 10 <strong>once, before</strong> the button is clicked and then when I click, nothing happens</p>
<pre><code>from guizero import App, PushButton
def go(n):
print (n)
app = App(title="Test")
button = PushButton(app, text = "Click Here", command = go(10))
app.display()
</code></pre>
<p>The same result I get with this:</p>
<pre><code>from guizero import App, PushButton
def go(n):
print (n)
app = App(title="Test")
button = PushButton(app, text = "Click Here")
button.when_clicked = go(10)
app.display()
</code></pre>
<p>What am I missing?</p>
<p>Thanks in advance!</p>
|
<pre><code>from guizero import App, PushButton
def go(n):
print (n)
app = App(title="Test")
button = PushButton(app, text = "Click Here", command = lambda: go(10))
app.display()
</code></pre>
<p>Whenever you write <code>go(10)</code> anywhere, you are invoking the function <code>go</code>. You might think you are passing <code>go</code> with arguments, but you aren't because the parentheses next to <code>go()</code> invoke the function right then and there. If you want to pass the function <code>go</code> to another function, and <code>go</code> should also be passed with some arguments, then you need to wrap the function <code>go</code> and pass the wrapped function as the argument "command". Using a lambda function to wrap <code>go(10)</code> is one such way of doing this.</p>
<p>The reason this works is that the lambda function is NOT invoked right then and there. You are saying that <code>command()</code> should invoke the <strong>declared</strong> anonymous lambda function eventually, and when that lambda function is called it will itself call <code>go(10)</code>. You are <strong>declaring</strong> an anonymous lambda function, NOT invoking it. The lambda function will be invoked later on as <code>command()</code>.</p>
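<p><code>functools.partial</code> is an equivalent way to bind the argument without a lambda:</p>
<pre><code>from functools import partial

button = PushButton(app, text="Click Here", command=partial(go, 10))
</code></pre>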
|
python|function|user-interface|parameters|event-handling
| 3 |
1,908,466 | 18,282,448 |
Drawing Polygons of Varying Transparency Over the Top of Each Other in Python Pillow / PIL
|
<p>I have the following code:</p>
<pre><code>im = Image.new("RGBA", (800,600))
draw = ImageDraw.Draw(im,"RGBA")
draw.polygon([(10,10),(200,10),(200,200),(10,200)],(20,30,50,125))
draw.polygon([(60,60),(250,60),(250,250),(60,250)],(255,30,50,0))
del draw
im.show()
</code></pre>
<p>but the polygons do not exhibit any variance in alpha/transparency between. Is it possible to do this using these polygons or does the alpha level only apply to composited images (I'm aware of this solution but only see comments based on PIL and thought that I had seen this fixed in Pillow).</p>
<p>If such a thing is not available is there an nice, easy, efficient way of putting something like this in the library?</p>
|
<p>According to the doc <a href="http://effbot.org/imagingbook/image.htm" rel="noreferrer">http://effbot.org/imagingbook/image.htm</a>:</p>
<blockquote>
<p><strong><code>im.show()</code></strong></p>
<blockquote>
<p>Displays an image. This method is mainly intended for <em>debugging
purposes</em>.</p>
<p>On Unix platforms, this method <em>saves the image to a temporary PPM
file</em>, and calls the xv utility.</p>
</blockquote>
</blockquote>
<p>And as far as I know, the <a href="http://en.wikipedia.org/wiki/Netpbm_format#PPM_example" rel="noreferrer">PPM file format</a> does <em>not</em> support transparency ("alpha channel").</p>
<hr>
<p>So... the transparency does not appears when you call <code>im.show()</code> -- but it <em>will be applied</em> if you save your file using a format that <em>does</em> support transparency:</p>
<pre><code>from PIL import Image,ImageDraw
im = Image.new("RGBA", (800,600))
draw = ImageDraw.Draw(im,"RGBA")
draw.polygon([(10,10),(200,10),(200,200),(10,200)],(20,30,50,125))
draw.polygon([(60,60),(250,60),(250,250),(60,250)],(255,30,50,120))
del draw
im.save("out.png") # Save to file
</code></pre>
|
python|python-imaging-library|pillow
| 5 |
1,908,467 | 71,448,372 |
Reading from a txt file in Python
|
<p>I have this data (remark: don't consider this data a JSON file, consider it a normal txt file):</p>
<pre><code>{"tstp":1383173780727,"ststates":[{"nb":901,"state":"open","freebk":6,"freebs":14},{"nb":903,"state":"open","freebk":2,"freebs":18}]}{"tstp":1383173852184,"ststates":[{"nb":901,"state":"open","freebk":6,"freebs":14}]}
</code></pre>
<p>I want to take all the values inside the first tstp only and stop when reaching the other tstp.</p>
<p>What I am trying to do is to create a file for each tstp and inside this file, it will have nb, state, freebk, freebs as columns in this file.</p>
<p>expected output:</p>
<p>first tstp file:</p>
<pre><code>nb state freebk freebs
901 open 6 14
903 open 2 18
</code></pre>
<p>second tstp file:</p>
<pre><code>nb state freebk freebs
901 open 6 14
</code></pre>
<p>This output is for the first tstp. I want to create a different file for each tstp in my data, so for the provided data 2 files will be created (because we have only 2 tstp entries in the data).</p>
<p>Remark: don't consider this data a json file consider it a normal txt file.</p>
|
<p>The approach below will help you with all types of data available for "tstp", which may have spaces in between.</p>
<p><strong>I used a regex to properly capture the start of each JSON object and prepare valid data. (It also works if the data is unorganized in your file.)</strong></p>
<pre><code>import re
import ast
# Reading Content from Text File
with open("text.txt", "r") as file:
data = file.read()
# Transforming Data into Json for better value collection
regex = r'{[\s]*"tstp"'
replaced_content = ',{"tstp"'
# replacing starting of every {json} dictionary with ,{json}
data = re.sub(regex, replaced_content, data)
data = "[" + data.strip()[1:] + "]" # removing First unnecessary comma (,)
data = ast.literal_eval(data) # converting string to list of Json
# Preparing data for File
headings_data = "nb state freebk freebs"
for count, json in enumerate(data, start=1):
# Remove this part with row = "" if you dont want tstp value in file.
row = "File - {0}\n\n".format(json["tstp"])
row += headings_data
for item in json["ststates"]:
row += "\n{0} {1} {2} {3}".format(
item["nb"], item["state"], item["freebk"], item["freebs"])
# Preparing different file for each tstp
filename = "file-{0}.txt".format(count)
with open(filename, "w") as file:
file.write(row)
</code></pre>
<p>Output:</p>
<p><strong>File 1</strong></p>
<pre><code>File - 1383173780727
nb state freebk freebs
901 open 6 14
903 open 2 18
</code></pre>
<p><strong>File 2</strong></p>
<pre><code>File - 1383173852184
nb state freebk freebs
901 open 6 14
</code></pre>
<ul>
<li>And so on, for the total number of <strong>"tstp" entries</strong>.</li>
</ul>
<p><strong>Note:</strong> We cannot simply <strong>replace "}{"</strong> in every situation: in your data the brackets may be placed on different lines.</p>
|
python
| 1 |
1,908,468 | 69,512,046 |
OnMessage routine for a specific discord channel
|
<p>I am trying to develop a game using a discord bot. I'm having trouble dealing with the on_message routine: I need it to "listen" to only one specific channel, not the whole server. So far I did the following:</p>
<pre><code>@client.event
async def on_message(message):
global rojo
global IDS
canal = IDS['C_Juego']
if message.author == client.user or str(message.channel) != IDS['C_Juego']:
return
else:
if(rojo == 1):
autor = message.author
await message.add_reaction("")
await message.channel.send("Player: " + str(autor) + " removed!")
role = get(message.guild.roles, name="Jugador")
await message.author.remove_roles(role)
elif(str(message.channel) == IDS['C_Juego']):
await message.add_reaction("")
print("verde")
</code></pre>
<p>What's going on? When I enable this function, the rest of my commands stop having effect (in any channel of the server), in addition to the fact that this function is called for each message sent.</p>
<p><strong>I explain the context</strong>: It is a game in which while listening to a song the players must place different words under a theme, when the music stops, if someone writes they are eliminated.</p>
<p><strong>Command definitions</strong>:
I have plenty of command definitions, which work fine until I add this problematic function. I add two of them as examples:</p>
<pre><code>@client.command()
@commands.has_role("Owner")
async def clear(ctx):
await ctx.channel.purge()
@client.command()
@commands.has_role("Owner")
async def swipe(ctx, role: discord.Role):
print(role.members)
for member in role.members:
await member.remove_roles(role)
await ctx.send(f"Successfully removed all members from {role.mention}.")
</code></pre>
|
<blockquote>
<p>Overriding the default provided on_message forbids any extra commands from running. To fix this, add a bot.process_commands(message) line at the end of your on_message.</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>@client.event
async def on_message(message):
# do what you want to do here
await client.process_commands(message)
</code></pre>
<p><a href="https://discordpy.readthedocs.io/en/latest/faq.html#why-does-on-message-make-my-commands-stop-working" rel="nofollow noreferrer">https://discordpy.readthedocs.io/en/latest/faq.html#why-does-on-message-make-my-commands-stop-working</a></p>
|
python|discord|discord.py
| 2 |
1,908,469 | 55,550,618 |
Bottle -- Redirect output of SimpleTemplate to static file
|
<p>I want to implement a caching system for a bottle site I've been working on. The idea is that a couple of routes take a bit longer to render. If the sqlite table hasn't been updated since the html file was generated, I'll return that file; if it has, I'll retrieve the rows from the database, save the render to a file, and return that.</p>
<p>Probably someone has already done this, so any tips for redirecting the output of a '.tpl' template to a '.html' file would be appreciated. </p>
<p>I've looked at some general caching libs but they seem to work by refreshing the cache at particular time intervals whereas I want to refresh when the database changes.</p>
<p>Thanks.</p>
<p>Edit: I'm using Apache as a reverse proxy, cheroot as the app server.</p>
|
<p>First you need a cache object. A dict is a great option. </p>
<pre><code>import time

cachetpl = {}
lastupdated = time.time()
</code></pre>
<p>Then create a class to handle stuff in your dict:</p>
<pre><code>class Cache(object):
def __init__(self):
global lastupdated
self.lastupdated = lastupdated
global cachetpl
self.cachetpl = cachetpl
def keys(self):
return self.cachetpl.keys()
def lastupdate(self):
return self.lastupdate
def meta(self, clienthash, zipcode=None):
return self.cachetpl[clienthash]
</code></pre>
<p>Now the instance can be created (after the class is defined):</p>
<pre><code>cache = Cache()
</code></pre>
<p>Then you need to set up a pulse to check for changes, for example by checking a lastupdated column in your SQL storage. In the snippet below, <code>db</code> and <code>nz</code> are the author's own database wrapper and clock helpers, and <code>update()</code> is meant to live on the <code>Cache</code> class as a staticmethod:</p>
<pre><code>@staticmethod
def update():
global cachetpl
global lastupdated
if not cachetpl:
return None, None
PSQL = ''' SELECT key, meta FROM schema.table WHERE lastupdated >= %s '''
changes = db.fetchall(PSQL, lastupdated)
if changes:
numchange = len(changes)
for x in changes:
cachetpl[x[0]] = x[1]
lastupdated = time.time()
return len(cachetpl), numchange
else:
lastupdated = nz.now()
return None, None
</code></pre>
<p>How you set up this pulse is up to you. I use a Python library called <code>scheduler</code> and gevent for async in my app. Works awesome.</p>
<p>Now you just call your class <code>Cache()</code> and feed it whatever data you want. My suggestion is that instead of returning the template, you save it first in your database, and then return it. Let the call to the route check for a cached version before rendering it. Then you could modify the update to render pages in the background if data changes, so the next call to the cache is the new data. </p>
|
python|bottle
| 0 |
1,908,470 | 55,268,494 |
How to merge two querysets django
|
<p>I'm trying to get a list of latest 100 posts and also the aggregated count of approved, pending, and rejected posts for the user of that post.</p>
<p><strong>models.py</strong></p>
<pre><code>class BlogPost(models.Model):
POST_STATUSES = (
('A', 'Approved'),
('P', 'Pending'),
('R', 'Rejected')
)
author = models.ForeignKey(User)
title = models.CharField(max_length=50)
description = models.TextField()
status = models.ChoiceField(max_length=1, choices=POST_STATUSES)
</code></pre>
<hr>
<p><strong>views.py</strong></p>
<p>Now I'm getting the the aggregated count like so, but I'm confused on how to merge the count with the title of the posts</p>
<pre><code>top_post_users = list(BlogPost.objects.values_list('user_id', flat=True))[:100]
users = User.objects.filter(pk__in=top_post_users).annotate(approved_count=Count(Case(When(user_posts__status="A", then=1),output_field=IntegerField()))).annotate(pending_count=Count(Case(When(user_posts__status="P", then=1),output_field=IntegerField()))).annotate(reject_count=Count(Case(When(user_posts__status="R", then=1),output_field=IntegerField())))
users.values('approved_count', 'pending_count', 'reject_count')
</code></pre>
<hr>
<p>This is the result I want:</p>
<ul>
<li>Title Of Post, Author1, 10, 5, 1</li>
<li>Title Of Post2, Author2, 7, 3, 1</li>
<li>Title Of Post3, Author1, 10, 5, 1</li>
</ul>
<hr>
<p><strong>How can I merge the returned counts with the titles?</strong> </p>
<p>I know I could use a for loop and append each one, but efficiency wise I don't think this is the right way to do it. Is there a more efficient way using django database ORM?</p>
<p>I've tried this</p>
<p>users.values('title', 'approved_count', 'pending_count', 'reject_count')</p>
<p>...and that works, but it returns more than the latest 100 posts, so I think it's getting all the posts for those users and the aggregated count.</p>
|
<p>Ultimately, you want a list of BlogPosts:</p>
<pre><code>main_qs = BlogPost.objects
# add filters, ordering etc. of the posts
</code></pre>
<p>and you want to display not only the authors next to the title of the post but also enrich the author information with the annotated counts.</p>
<pre><code>from django.db.models import OuterRef, Subquery, Count
# you need subqueries to annotate the blog posts
base_sub_qs = BlogPost.objects.filter(author__pk=OuterRef('author__pk'))
# add subqueries for each count
main_qs = main_qs.annotate(
user_approved_count=Subquery(base_sub_qs.filter(status="A").annotate(
c=Count('*')).values('c'), output_field=IntegerField()),
user_pending_count=Subquery(base_sub_qs.filter(status="P").annotate(
c=Count('*')).values('c'), output_field=IntegerField()),
user_rejected_count=Subquery(base_sub_qs.filter(status="R").annotate(
c=Count('*')).values('c'), output_field=IntegerField()),
)
</code></pre>
<p>You can then access these in your template:</p>
<pre><code>{% for post in posts %}
{{ post.title }}
{{ post.author.get_full_name }}
approved: {{ post.user_approved_count }}
pending: {{ post.user_pending_count }}
rejected: {{ post.user_rejected_count }}
{% endfor %}
</code></pre>
<p>Documentation: <a href="https://docs.djangoproject.com/en/2.1/ref/models/expressions/#subquery-expressions" rel="nofollow noreferrer">https://docs.djangoproject.com/en/2.1/ref/models/expressions/#subquery-expressions</a></p>
|
python|django
| 2 |
1,908,471 | 42,427,239 |
Python 3: How to get a random 4 digit long number with no repeated digits inside that number?
|
<p>Ok so I need my program to be able to get a random number that has no repeated digits inside that number. So like 0012 has two 0s and therefore I don't need that, however, 1234 would work. The numbers also need to be JUST 4 digits long. </p>
<pre><code>import random
</code></pre>
|
<p>You could use sample:</p>
<pre><code>import random
numbers = random.sample(range(10), 4)
print(''.join(map(str, numbers)))
</code></pre>
<p>@Copperfield's variation in the comments is elegant, as it forgoes the need to cast (since you sample from a string).</p>
<pre><code>import random
number = ''.join(random.sample("0123456789", 4))
print(number)
</code></pre>
|
python|python-3.x|random|numbers
| 5 |
1,908,472 | 59,112,801 |
Python Multi Threading not working properly
|
<p>I can't seem to get this Multi Threading code to work with my already structured Python script of a simple IP Pining script with a few other features.</p>
<p>After testing the Multi Threading code i though i was ready to implement onto my code, however i can't seem to be able to call a new thread correctly. I know this because if Multi Threading was working properly my GUI interface would not stop responding when the scanall() function gets executed upon pressing the Scan all IPs button on the GUI interface.</p>
<p>I'm also not getting anymore errors after finishing the implementation, so it's hard to know now what to proceed with. This extremely frustrating thank you for the help guys, i would love to tackle this one down!</p>
<p>This is the Multi Threading code:</p>
<pre><code> class ThreadManager:
"""Multi Threading manager"""
def __init__(self):
pass
def start(self, threads):
thread_refs = []
for i in range(threads):
t = MyThread(i) # Thread(args=(1,)) # target=test(),
t.daemon = True
print('starting thread %i' % i)
t.start()
for t in thread_refs:
t.join()
class MyThread(Thread):
"""Multi Threading"""
def __init__(self, i):
Thread.__init__(self)
self.i = i
def run(self):
while True:
print('thread # {}'.format(self.i))
time.sleep(.25)
break
</code></pre>
<p>And This is the code that executes the multi threading:</p>
<pre><code>print("[Debug] Main Thread has been started")
self.manager = ThreadManager()
self.manager.start(1)
</code></pre>
<p>This is the Github for the entire script code and the Multi Threading implementation.</p>
<p><a href="https://github.com/Hontiris1/IPPing" rel="nofollow noreferrer">https://github.com/Hontiris1/IPPing</a></p>
|
<p>You are not adding <code>t</code> to the <code>thread_refs</code> list, so it's empty and the code never waits for the threads to join.</p>
<p>Change you <code>start</code> function like this:</p>
<pre class="lang-py prettyprint-override"><code>def start(self, threads):
thread_refs = []
for i in range(threads):
t = MyThread(i) # Thread(args=(1,)) # target=test(),
t.daemon = True
print('starting thread %i' % i)
t.start()
thread_refs.append(t)
for t in thread_refs:
t.join()
</code></pre>
<p>secondly you might want to remove the <code>break</code> statement from your <code>while</code> loop in the <code>run</code> function. Otherwise it will exit after printing <code>thread 0</code> once.</p>
|
python|multithreading|class|object
| 0 |
1,908,473 | 53,964,732 |
multi-armed bandit agent in Keras
|
<p>I am trying to work through some old tutorials and found it stimulating to keep everything in Keras. Though, I am having trouble with an extremely simple thing when written in Tensorflow. Here is the tf agent code from the tutorial.</p>
<pre><code>tf.reset_default_graph()
weights = tf.Variable(tf.ones([num_bandits]))
chosen_action = tf.argmax(weights,0)
reward_holder = tf.placeholder(shape=[1],dtype=tf.float32)
action_holder = tf.placeholder(shape=[1],dtype=tf.int32)
responsible_weight = tf.slice(weights,action_holder,[1])
loss = -(tf.log(responsible_weight)*reward_holder)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
update = optimizer.minimize(loss)
</code></pre>
<p>It's a simple multi-armed bandit.
My work thus far in trying to convert the agent to Keras is;</p>
<pre><code>size = 4
weights = K.variable(K.ones(shape=(size), dtype='float32'))
best_action = Lambda(lambda x: K.cast(K.argmax(x), dtype=K.floatx()))(weights)
reward = Input(shape=(1,), dtype='float32')
action = Input(shape=(1,), dtype='int32')
responsible_weight = K.slice(weights, action[-1], [1])
custom_loss = -(K.log(responsible_weight) * reward)
opti = SGD(lr=0.001)
model = Model(inputs=[reward, action], outputs=best_action)
model.compile(optimizer=opti, loss=custom_loss)
</code></pre>
<p>The challenge seems to be that Input tensors have to come from the Input layer (at least judging from other exercises).</p>
<p>Can anyone see an obvious error here? When I get to the model=Model() line, an attributeError tells me </p>
<pre><code>'NoneType' object has no attribute '_inbound_nodes'
</code></pre>
<p>My "output" is already wrapped in a Lambda function, which in part, takes care of the Keras Tensor portion as suggested by the potential duplication. Just for fun, I added another layer and multiplied by one as the other thread suggested, but this did not change the error. </p>
|
<p>Keras works a little differently from tensorflow in the sense that it's mandatory to have inputs (usually x_train) and outputs (usually y_train) passed as known data.</p>
<p>The loss in Keras must necessarily be a function that takes ground truth values and predicted (output) values: <code>function(y_true, y_pred)</code>. </p>
<p>By looking at your code, it seems that the loss is a <a href="https://en.wikipedia.org/wiki/Cross_entropy" rel="nofollow noreferrer">crossentropy</a> where <strong>p</strong> (y_true) is <code>reward</code> and <strong>q</strong> (y_pred) is <code>responsible_weight</code>. </p>
<p>Thus, we can remake it as if <code>reward</code> were an output (y_train or y_true) and <code>action_holder</code> an input (x_train).</p>
<pre><code>def loss(y_true,y_pred):
return - K.log(y_pred)*y_true
</code></pre>
<p>Also, <code>action_holder</code> does nothing but select a single row of the weights, which fits perfectly the idea of an <code>Embedding</code> layer whose output size is 1 and whose vocabulary is <code>num_bandits</code>.</p>
<p>That said, we can start modeling:</p>
<pre class="lang-py prettyprint-override"><code>#because of the log, let's also add a non negative constraint to the weights
from keras.constraints import NonNeg
input_action = Input((1,))
responsible_weight = Embedding(num_bandits,
1,
name='weights',
embeddings_initializer='ones',
embeddings_constraint=NonNeg())(input_action)
model = Model(input_action, responsible_weight)
model.compile(optimizer=anyOptimizer, loss=loss)
</code></pre>
<p>For training, use:</p>
<pre><code>model.fit(data_for_action_holder, data_for_reward, ...)
</code></pre>
<p>Input and output data must be shaped both as <code>(examples, 1)</code></p>
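<p>For example, a hypothetical training step might look like this (<code>pull_bandit</code> is an assumption standing in for your environment; <code>num_bandits</code> comes from the original code):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# sample a batch of actions and query the (hypothetical) environment for rewards
actions = np.random.randint(0, num_bandits, size=(32, 1))
rewards = np.array([pull_bandit(a) for a in actions[:, 0]])

# targets must match the embedding output shape (batch, 1, 1)
model.train_on_batch(actions, rewards.reshape(-1, 1, 1))
</code></pre>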
<hr>
<p>About the best action or chosen action, it's not participating in training at all. To get it, you will need to take the embedding weights and get its max:</p>
<pre><code>weights = model.get_layer('weights').get_weights()[0].max()
</code></pre>
<p>About the risk of log of zero, you could slightly change the loss function to avoid zero predictions:</p>
<pre><code>def loss(y_true, y_pred):
return - K.log(y_pred + K.epsilon())*(y_true + K.epsilon())
</code></pre>
|
python-3.x|tensorflow|keras
| 1 |
1,908,474 | 58,579,158 |
Upload Image to Google Drive using PyDrive
|
<p>I have a silly question about PyDrive.
I am trying to make a REST API using FastAPI that will upload an image to Google Drive using PyDrive. Here is my code:</p>
<pre><code>from fastapi import FastAPI, File
from starlette.requests import Request
from starlette.responses import JSONResponse
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
app = FastAPI()
@app.post('/upload')
def upload_drive(img_file: bytes=File(...)):
g_login = GoogleAuth()
g_login.LoadCredentialsFile("google-drive-credentials.txt")
if g_login.credentials is None:
g_login.LocalWebserverAuth()
elif g_login.access_token_expired:
g_login.Refresh()
else:
g_login.Authorize()
g_login.SaveCredentialsFile("google-drive-credentials.txt")
drive = GoogleDrive(g_login)
file_drive = drive.CreateFile({'title':'test.jpg'})
file_drive.SetContentString(img_file)
file_drive.Upload()
</code></pre>
<p>After trying to access my endpoint, I get this error:</p>
<pre><code>file_drive.SetContentString(img_file)
File "c:\users\aldho\anaconda3\envs\fastai\lib\site-packages\pydrive\files.py", line 155, in SetContentString
self.content = io.BytesIO(content.encode(encoding))
AttributeError: 'bytes' object has no attribute 'encode'
</code></pre>
<p>What should I do to complete this very simple task?</p>
<p>Thanks for your help!</p>
<h2>UPDATED</h2>
<p>Thanks to the answer and comment from Stanislas Morbieu; here is my updated and working code:</p>
<pre><code>from fastapi import FastAPI, File
from starlette.requests import Request
from starlette.responses import JSONResponse
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from PIL import Image
import os
import io
app = FastAPI()
@app.post('/upload')
def upload_drive(filename, img_file: bytes=File(...)):
try:
g_login = GoogleAuth()
g_login.LocalWebserverAuth()
drive = GoogleDrive(g_login)
file_drive = drive.CreateFile({'title':filename, 'mimeType':'image/jpeg'})
if not os.path.exists('temp/' + filename):
image = Image.open(io.BytesIO(img_file))
image.save('temp/' + filename)
image.close()
file_drive.SetContentFile('temp/' + filename)
file_drive.Upload()
return {"success": True}
except Exception as e:
print('ERROR:', str(e))
return {"success": False}
</code></pre>
<p>Thanks guys</p>
|
<p><code>SetContentString</code> requires a parameter of type <code>str</code>, not <code>bytes</code>. Here is the documentation:</p>
<blockquote>
<p>Set content of this file to be a string.</p>
<p>Creates io.BytesIO instance of <strong>utf-8 encoded string</strong>. Sets mimeType to be ‘text/plain’ if not specified.</p>
</blockquote>
<p>You should therefore decode <code>img_file</code> (of type <code>bytes</code>) in utf-8:</p>
<pre><code>file_drive.SetContentString(img_file.decode('utf-8'))
</code></pre>
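<p>Note that decoding arbitrary image bytes as UTF-8 will usually raise a <code>UnicodeDecodeError</code>. For binary content, writing to a temporary file and using <code>SetContentFile</code> (as in the question's update) is safer; a minimal sketch:</p>
<pre><code>import tempfile

with tempfile.NamedTemporaryFile(suffix='.jpg', delete=False) as tmp:
    tmp.write(img_file)
    tmp_path = tmp.name

file_drive.SetContentFile(tmp_path)
file_drive.Upload()
</code></pre>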
|
python|pydrive|fastapi
| 4 |
1,908,475 | 58,363,123 |
Query on array on a nested document with pymongodb
|
<p>I am building a chat program, and in the backend I have:</p>
<pre><code>
chatcol.insert({ "chatid": "133235", "messages": [ {"from": "user3", "content": "Hello", "time": "20101213T172215"}, {"from": "user2", "content": "Hi", "time": "20101214T172215"} ] })
chatcol.insert({ "chatid": "134735", "messages": [ {"from": "user2", "content": "Hello", "time": "20101217T172215"}, {"from": "user12", "content": "Hi", "time": "20101213T172215"} ] })
</code></pre>
<p>Since there can be a lot of messages, I want the server to return only the new messages the client hasn't seen.
The client will give me a lastuptime, the time they last logged on.
I want to find chats with only the new messages.</p>
<p>How do I write such query?</p>
|
<p>I have considered <code>time</code> as <code>ISODate()</code>.<br>
Also, <code>lastuptime</code> given by the client will be <code>ISODate()</code>.<br></p>
<p><strong>collection</strong>:</p>
<pre><code>{
"_id" : ObjectId("5da31a63c35040d7fbf1db3c"),
"chatid" : "133235",
"messages" : [
{
"from" : "user3",
"content" : "Hello",
"time" : ISODate("2016-05-01T00:00:00Z")
},
{
"from" : "user2",
"content" : "Hi",
"time" : ISODate("2018-05-01T00:00:00Z")
}
]
}
{
"_id" : ObjectId("5da31ab2c35040d7fbf1db3d"),
"chatid" : "133235",
"messages" : [
{
"from" : "user2",
"content" : "Hello",
"time" : ISODate("2010-05-01T00:00:00Z")
},
{
"from" : "user12",
"content" : "Hi",
"time" : ISODate("2019-05-01T00:00:00Z")
}
]
}
</code></pre>
<p>There are two scenarios. Depending on your use-case, select the appropriate:</p>
<p><strong>1</strong>. Fetch <code>chats</code> where any one of the messages has <code>time</code> <code>>=</code> <code>lastuptime</code></p>
<pre class="lang-js prettyprint-override"><code>db.chatcol.find({'messages.time': {"$gte": ISODate("2017-10-01T00:00:00.000Z")}})
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>{
"_id" : ObjectId("5da31a63c35040d7fbf1db3c"),
"chatid" : "133235",
"messages" : [
{
"from" : "user3",
"content" : "Hello",
"time" : ISODate("2016-05-01T00:00:00Z")
},
{
"from" : "user2",
"content" : "Hi",
"time" : ISODate("2018-05-01T00:00:00Z")
}
]
}
{
"_id" : ObjectId("5da31ab2c35040d7fbf1db3d"),
"chatid" : "133235",
"messages" : [
{
"from" : "user2",
"content" : "Hello",
"time" : ISODate("2010-05-01T00:00:00Z")
},
{
"from" : "user12",
"content" : "Hi",
"time" : ISODate("2019-05-01T00:00:00Z")
}
]
}
</code></pre>
<hr>
<p><strong>2</strong>. Fetch <code>messages</code> where <code>time</code> <code>>=</code> <code>lastuptime</code>.
This one flattens the <code>messages</code> <code>array</code>:</p>
<pre class="lang-js prettyprint-override"><code>db.chatcol.aggregate([
{ $unwind :'$messages'},
{ $match : {"messages.time": { "$gte": ISODate("2017-10-01T00:00:00.000Z") }}}
])
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>{
"_id" : ObjectId("5da31a63c35040d7fbf1db3c"),
"chatid" : "133235",
"messages" : {
"from" : "user2",
"content" : "Hi",
"time" : ISODate("2018-05-01T00:00:00Z")
}
}
{
"_id" : ObjectId("5da31ab2c35040d7fbf1db3d"),
"chatid" : "133235",
"messages" : {
"from" : "user12",
"content" : "Hi",
"time" : ISODate("2019-05-01T00:00:00Z")
}
}
</code></pre>
<p><strong>Note</strong>: Map the <code>$gte</code> query according to your <code>lastuptime</code></p>
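<p>Since the question is tagged pymongo, the same queries translate directly; a minimal sketch (database and collection names are assumptions):</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
from pymongo import MongoClient

client = MongoClient()
chatcol = client.chatdb.chatcol  # assumed database/collection names

lastuptime = datetime(2017, 10, 1)

# chats containing at least one new message
new_chats = chatcol.find({'messages.time': {'$gte': lastuptime}})

# only the new messages themselves
new_messages = chatcol.aggregate([
    {'$unwind': '$messages'},
    {'$match': {'messages.time': {'$gte': lastuptime}}},
])
</code></pre>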
|
python-3.x|pymongo
| 0 |
1,908,476 | 22,927,288 |
TypeError: takes exactly 0 arguments (1 given)
|
<p>I am using Python to generate an RSS file, but when I call this method, it raises the error "<code>TypeError: writeRssFile() takes exactly 0 arguments (1 given)</code>"</p>
<pre><code>#!/usr/bin/env python2
# encoding: utf-8
import os,PyRSS2Gen
def writeRssFile(*newslist):
item =[]
for i in range(0,len(newslist)):
item.append(PyRSS2Gen.RSSItem(
title = newslist[i].get('title'),
description = newslist[i].get('content'),
pubDate = datetime.datetime.now()))
rss = PyRSS2Gen.RSS2(
title = "Andrew's PyRSS2Gen feed",
link = "http://www.dalkescientific.com/Python/PyRSS2Gen.html",
description = "The latest news about PyRSS2Gen, a "
"Python library for generating RSS2 feeds",
lastBuildDate = datetime.datetime.now(),
items = item[:],
)
rss.write_xml(open("pyrss2gen.xml", "w"))
</code></pre>
<p>i want call this method like this way:</p>
<pre><code>newslist=[{'title':'title1','content':'content1'},{'title':'title2','content':'content2'}]
writeRssFile(newslist)
</code></pre>
<p>I have tried googling this but I'm not really sure what the exact reason is, so hopefully I can get help here.
Thanks!</p>
|
<p>You accept a variable number of arguments, so you need to unpack the list when calling, like this:</p>
<pre><code>writeRssFile(*newslist)
</code></pre>
<p>Also, you need to import <code>datetime</code> module.</p>
<p>Apart from that, when the <code>range</code> starts from <code>0</code>, you can omit the <code>0</code>. So,</p>
<pre><code>range(0, len(newslist))
</code></pre>
<p>is the same as</p>
<pre><code>range(len(newslist))
</code></pre>
|
python|rss
| 1 |
1,908,477 | 22,876,702 |
Creating a dictionary
|
<p>my goal is to create a dictionary in Python. I have a .csv file which contains two columns, first one being 'word', other being 'meaning'. I am trying to read the csv file in the dictionary format and get the 'meaning' when 'word' is given.</p>
<p>Can you please help me by telling me how to get the value of 'word'? this is what I tried:</p>
<p>My codes are,</p>
<pre><code>>>> with open('wordlist.csv', mode = 'r') as infile:
... reader = csv.reader(infile)
... with open('wordlist.csv', mode = 'w') as outfile:
... writer = csv.writer(outfile)
... mydict = {rows[0]:rows[1] for rows in reader}
... print(mydict)
...
</code></pre>
<p>The result turns out to be,</p>
<pre><code>{}
</code></pre>
<p>the next one I tried was,</p>
<pre><code>>>> reader = csv.reader(open('wordlist.csv', 'r'))
>>> d = {}
>>> for row in reader:
... k, v = row
... d[k] = v
...
</code></pre>
<p>But when I wanted to use this, the result was:</p>
<pre><code>>>> d['Try']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyError: 'Try'
</code></pre>
<p>The next code I tried was,</p>
<pre><code>>>> reader = csv.DictReader(open('wordlist.csv'))
>>> result = {}
>>> for row in reader:
... key = row.pop('word')
... if key in result:
... pass
... result[key] = row
... print result
...
</code></pre>
<p>It didn't give me any answer at all.</p>
<pre><code>>>> for row in reader:
... for column, value in row.iteritems():
... result.setdefault(column, []).append(value)
... print result
...
</code></pre>
<p>Neither did this give me a result.</p>
|
<p>If "final_word.csv" looks like this:</p>
<pre><code>word1, synonym1, meaning1, POS_tag1
word2, synonym2, meaning2, POS_tag2
</code></pre>
<p>This will read it in as a dictionary:</p>
<pre><code>with open("final_word.csv",'r') as f:
rows = f.readlines()
dictionary = {}
for row in rows:
row = row.strip()
word, synonym, meaning, POS_tag = row.split(", ")
dictionary[word] = [synonym, meaning, POS_tag]
print(dictionary['word1'])
#out>> ['synonym1', 'meaning1', 'POS_tag1']
print(dictionary['word2'][0])
#out>> synonym2
</code></pre>
<p><em>The strip() is used to get rid of the newline "\n" at the end of each csv row.</em></p>
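<p>If the file is a real CSV (possibly with quoted fields), the <code>csv</code> module handles the parsing for you; a sketch for the two-column word/meaning layout from the question:</p>
<pre><code>import csv

with open('wordlist.csv') as f:
    dictionary = {row[0]: row[1] for row in csv.reader(f)}

print(dictionary['Try'])
</code></pre>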
|
python|csv|dictionary
| 1 |
1,908,478 | 45,293,524 |
Adding a new column (and data to it) into multiple CSV files in Python
|
<p>I'm a beginner in Python, and I've been struggling to create a code to add columns to a set of similar csv files.</p>
<p>Here's what I have so far:</p>
<pre><code>import csv, os
for csvFilename in os.listdir('.'):
if not csvFilename.endswith('.csv'):
continue
    print('Editing file ' + csvFilename + '...')
file = open(csvFilename)
reader = csv.reader(file)
writer = csv.writer(open('new_' + csvFilename, 'w'))
headers = reader.next()
headers.append('ColName')
writer.write(headers)
for row in reader:
row.append(str(row[12]) + ' ' + str(row[13]) + " some text")
writer.write(row)
</code></pre>
<p>Basically, I'd like to add a column in which I had "<em>Row 13's text</em> + <em>row 14's text</em> + <em>more text, the same every time</em>".</p>
<p>I get this error message on the <em>writer.write(headers)</em> line, though:
<em>AttributeError: '_csv.writer' object has no attribute 'write'</em></p>
<p>What should I do?</p>
|
<p>You need to read the API documentation:</p>
<p><a href="https://docs.python.org/3/library/csv.html" rel="nofollow noreferrer">https://docs.python.org/3/library/csv.html</a></p>
<p>A <code>csv.writer</code> object has a <code>writerow</code> method, not <code>write</code>; please check.</p>
<p>Here is a sample of writing a CSV file:</p>
<pre><code>import csv
with open('eggs.csv', 'w', newline='') as csvfile:
spamwriter = csv.writer(csvfile, delimiter=' ',
quotechar='|', quoting=csv.QUOTE_MINIMAL)
spamwriter.writerow(['Spam'] * 5 + ['Baked Beans'])
spamwriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam'])
</code></pre>
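<p>Applied to the code in the question, a sketch of the fixed loop (column indices 12 and 13 come from the question):</p>
<pre><code>import csv, os

for csvFilename in os.listdir('.'):
    if not csvFilename.endswith('.csv'):
        continue
    print('Editing file ' + csvFilename + '...')
    with open(csvFilename) as infile, open('new_' + csvFilename, 'w') as outfile:
        reader = csv.reader(infile)
        writer = csv.writer(outfile)
        headers = next(reader)
        headers.append('ColName')
        writer.writerow(headers)   # writerow, not write
        for row in reader:
            row.append(str(row[12]) + ' ' + str(row[13]) + ' some text')
            writer.writerow(row)
</code></pre>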
|
python|csv
| 0 |
1,908,479 | 28,516,319 |
Get XML values from string using Python
|
<p>I would like to identify all TIMEX3 values inside a string using Python. For example, if my string is:</p>
<pre><code> Ecole Polytechnique, maar hij bleef daar slechts tot <TIMEX3 tid="t5" type="DATE" value="1888">1888</TIMEX3>.
Daarna had hij een korte carriere bij het leger als officier d'artillerie in <TIMEX3 tid="t6" type="DATE" value="1889">1889</TIMEX3>
</code></pre>
<p>I would like to get back the list</p>
<pre><code> ["1888", "1889"]
</code></pre>
<p>So far I tried converting to a tree using the xml.eTree.ElementTree, but this crashes on my data with a parse error - not well formed, invalid token message. I am thinking that maybe I could avoid this using a regular expression? Any help much appreciated, thank you!</p>
|
<p>You could use <a href="http://www.crummy.com/software/BeautifulSoup/" rel="nofollow">BeautifulSoup</a>.</p>
<pre><code>>>> from bs4 import BeautifulSoup
>>> s = '''Ecole Polytechnique, maar hij bleef daar slechts tot <TIMEX3 tid="t5" type="DATE" value="1888">1888</TIMEX3>.
Daarna had hij een korte carriere bij het leger als officier d'artillerie in <TIMEX3 tid="t6" type="DATE" value="1889">1889</TIMEX3>'''
>>> soup = BeautifulSoup(s)
>>> [i.text for i in soup.find_all('timex3')]
['1888', '1889']
>>> [i['value'] for i in soup.find_all('timex3')]
['1888', '1889']
>>> [i['value'] for i in soup.find_all('timex3') if i.has_attr("value")]
['1888', '1889']
</code></pre>
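<p>If you'd rather avoid a parser, a quick regex sketch also works for simple, well-formed input like this (a real parser is more robust in general):</p>
<pre><code>>>> import re
>>> re.findall(r'<TIMEX3[^>]*>([^<]*)</TIMEX3>', s)
['1888', '1889']
</code></pre>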
|
python
| 2 |
1,908,480 | 14,471,212 |
Why is python's urllib2.urlopen giving me a 403 error?
|
<blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://stackoverflow.com/questions/2572266/pythons-urllib2-doesnt-work-on-some-sites">Python’s urllib2 doesn’t work on some sites</a> </p>
</blockquote>
<p>Ok, I just want to access this URL using python: <a href="http://www.gocomics.com/wizardofid/2013/01/22" rel="nofollow noreferrer">http://www.gocomics.com/wizardofid/2013/01/22</a></p>
<p>But, whenever I call urllib2.urlopen('<a href="http://www.gocomics.com/wizardofid/2013/01/22" rel="nofollow noreferrer">http://www.gocomics.com/wizardofid/2013/01/22</a>').read(), it gives me a 403 error. With urllib, all I can do is read the error page, but urllib2 raises the error. When I look at the page in Chrome, it doesn't give me any problems. Why is this, and how can I fix it? Thanks!</p>
|
<p>This particular website requires a "browser-like" <code>User-Agent</code> header, otherwise it will deny access.</p>
<p>Try adding a header, like (for instance) this:</p>
<pre><code>import urllib2
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib2.install_opener(opener)
print urllib2.urlopen('http://gocomics.com/wizardofid/2013/01/22').read()
</code></pre>
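<p>Equivalently, you can set the header on a single <code>Request</code> object instead of installing a global opener:</p>
<pre><code>import urllib2

req = urllib2.Request('http://gocomics.com/wizardofid/2013/01/22',
                      headers={'User-agent': 'Mozilla/5.0'})
print urllib2.urlopen(req).read()
</code></pre>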
|
python|urllib2|urllib|http-status-code-403|python-2.5
| 3 |
1,908,481 | 6,947,210 |
How to add set elements to a string in python
|
<p>how would I add set elements to a string in python? I tried:</p>
<pre><code>sett = set(['1', '0'])
elements = ''
for i in sett:
elements.join(i)
</code></pre>
<p>but no dice. when I print elements the string is empty. help</p>
|
<p>I believe you want this:</p>
<pre><code>s = set(['1', '2'])
asString = ''.join(s)
</code></pre>
<p>Also note that <code>str.join</code> returns a new string; your original loop discards that return value, which is why <code>elements</code> stays empty. And be aware that sets are not ordered like lists: the iteration order can differ from the order you added the elements.</p>
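<p>If you need a deterministic order, sort before joining:</p>
<pre><code>asString = ''.join(sorted(s))  # '12'
</code></pre>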
|
python|string|set
| 4 |
1,908,482 | 6,665,566 |
Python Concatenation
|
<p>This issue is probably very easy but I can't figure it out - </p>
<p>I have the following values: <code>['2000']['09']['22']</code></p>
<p>I want the following: <code>20000922</code> or <code>'20000922'</code> </p>
<p><em><strong>code</strong></em></p>
<pre><code>def transdatename(d):
year = re.findall('\d\d\d\d', str(d))
premonth = re.findall('[A-Z][a-z]{2,9}', str(d))
month = replace_all(str(premonth), reps)
daypre = re.findall('\d{1,2},', str(d))
day = re.sub(',','', str(daypre))
fulldate = str(year)+str(month)+str(day)
return fulldate
</code></pre>
<p><strong>Example data</strong></p>
<p><strong>Input:</strong>
['\nSeptember 23, 2000'] </p>
<p><strong>Expected Output:</strong> '20000922'</p>
|
<p>Are you trying to do this?</p>
<pre><code>>>> input= '\nSeptember 23, 2000'
>>> format= '\n%B %d, %Y'
>>> datetime.datetime.strptime(input,format)
datetime.datetime(2000, 9, 23, 0, 0)
>>> (_+datetime.timedelta(-1)).strftime("%Y%m%d")
'20000922'
</code></pre>
<p>If so, you're making it too hard.</p>
|
python|concatenation
| 4 |
1,908,483 | 44,679,890 |
How to do port translation when using an address translation map with cassandra?
|
<p>I'm trying to connect to a scylladb cluster on compose using the Address Translation Map.</p>
<p>I can only get the code working if I hard code the port when instantiating the Cluster instance:</p>
<pre><code>from cassandra.cluster import Cluster
from cassandra.policies import AddressTranslator
from cassandra.auth import PlainTextAuthProvider
################################################################################
# our variables
address_map = {
"10.0.24.69:9042": "sl-eu-lon-2-portal.3.dblayer.com:15227",
"10.0.24.71:9042": "sl-eu-lon-2-portal.2.dblayer.com:15229",
"10.0.24.70:9042": "sl-eu-lon-2-portal.1.dblayer.com:15228"
}
username = 'scylla'
password = 'changeme'
port = 15227
################################################################################
</code></pre>
<p>Next a class for translating the addresses:</p>
<pre><code>class ComposeAddressTranslator(AddressTranslator):
def set_map(self, address_map):
# strip ports from both source and destination as the cassandra python
# client doesn't appear to support ports translation
self.address_map = {key.split(':')[0]: value.split(':')[0] for (key, value) in address_map.items()}
def contact_points(self):
return [value.split(':')[0] for (key, value) in address_map.items()]
def translate(self, addr):
# print some debug output
print('in translate(self, addr) method', type(addr), addr)
trans_addr = self.address_map[addr]
return trans_addr
</code></pre>
<p>Now let's connect:</p>
<pre><code>compose_translator = ComposeAddressTranslator()
compose_translator.set_map(address_map)
auth_provider = PlainTextAuthProvider(
username=username,
password=password
)
# if the port parameter value is removed from below, we are unable
# to establish a connection
cluster = Cluster(
contact_points = compose_translator.contact_points(),
address_translator = compose_translator,
auth_provider = auth_provider,
cql_version = '3.2.1',
protocol_version = 2,
port = port
)
session = cluster.connect()
session.execute("USE my_keyspace;")
session.shutdown()
</code></pre>
<p>It appears that the Cassandra Python driver does not support port translation in the <a href="https://datastax.github.io/python-driver/api/cassandra/policies.html#cassandra.policies.AddressTranslator.translate" rel="nofollow noreferrer">translate method</a>. You can see below in my debug output that the addr passed into the translate method is a string ip address value without the port:</p>
<pre><code>in translate(self, addr) method <class 'str'> 10.0.24.69
in translate(self, addr) method <class 'str'> 10.0.24.71
</code></pre>
<p>My Environment:</p>
<pre><code>$ pip freeze | grep cassandra
cassandra-driver==3.10
</code></pre>
|
<p>Other Cassandra drivers such as the node driver support port translation. The nodejs <a href="http://datastax.github.io/nodejs-driver/features/address-resolution/#the-address-translator-interface" rel="nofollow noreferrer">translator documentation</a>:</p>
<pre><code>MyAddressTranslator.prototype.translate = function (address, port, callback) {
// Your custom translation logic.
};
</code></pre>
<p>Above you can see that the translator receives both the ip address and the port.</p>
<p>However, I don't believe the current Cassandra python driver supports port address <a href="http://datastax.github.io/python-driver/api/cassandra/policies.html#cassandra.policies.AddressTranslator.translate" rel="nofollow noreferrer">translation</a>:</p>
<pre><code>translate(addr)
</code></pre>
<blockquote>
<p>Accepts the node ip address, and returns a translated address to be used connecting to this node.</p>
</blockquote>
<p>Here you can see that the translator only receives the ip address.</p>
|
python|ibm-cloud|compose-db|scylla|cassandra-python-driver
| 1 |
1,908,484 | 44,398,350 |
Airflow dag bash task lag on remote executions
|
<p>I am experimenting with Airflow to replace our existing cron orchestration and everything looks promising. I have successfully installed it and gotten a DAG to be scheduled and executed, but I noticed that there is a significant delay between each of the tasks I have specified (at least 15 minutes to 60 minutes).</p>
<p>My dag is defined as follows</p>
<p>Am I missing something to make them run one right after the other?</p>
<p>I am not using Celery; both the scheduler and webserver are running on the same host.
Yes, I need to call for a remote execution (working on some form of local until then),
and no, I cannot install Airflow on the remote server.
The DAG should run once a day at 1 am UTC and follow the set path of tasks I have given it.</p>
<blockquote>
<pre><code>import airflow
from builtins import range
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.models import DAG
from datetime import datetime, timedelta
args = {
'owner': 'user1',
'depends_on_past': False,
'start_date': airflow.utils.dates.days_ago(2),
'email': ['data-etl-errors@user1.com'],
'email_on_failure': True,
'email_on_retry': False,
'wait_for_downstream': True,
'schedule_interval': None,
'depends_on_past': True,
'retries': 1,
'retry_delay': timedelta(minutes=5)
}
dag = DAG(
dag_id='airflow_pt1'
, default_args=args
, schedule_interval='0 1 * * *'
, dagrun_timeout=timedelta(hours=8))
task1 = BashOperator(
task_id='task1'
, bash_command='ssh user1@remoteserver /path/to/remote/execution/script_task1.sh'
, dag=dag,env=None, output_encoding='utf-8')
task2 = BashOperator(
task_id='task2'
, bash_command='ssh user1@remoteserver /path/to/remote/execution/script_task2.sh'
, dag=dag,env=None, output_encoding='utf-8')
task3 = BashOperator(
task_id='task3'
, bash_command='ssh user1@remoteserver /path/to/remote/execution/script_task3.sh'
, dag=dag,env=None, output_encoding='utf-8')
task4 = BashOperator(
task_id='task4'
, bash_command='ssh user1@remoteserver /path/to/remote/execution/script_task4.sh'
, dag=dag,env=None, output_encoding='utf-8')
task2.set_upstream(task1)
task3.set_upstream(task1)
task4.set_upstream(task2)
</code></pre>
</blockquote>
<p>Note I have not executed airflow backfill (is that important?)</p>
|
<p>Found the issue:
I had not changed the executor from SequentialExecutor to LocalExecutor in the airflow.cfg file.</p>
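<p>For reference, the relevant change in <code>airflow.cfg</code> is roughly the following (the connection string is just an example; LocalExecutor needs a real database backend, since SQLite only supports the SequentialExecutor):</p>
<pre><code>[core]
executor = LocalExecutor
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@localhost/airflow
</code></pre>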
<p>I found my answer through <a href="https://stlong0521.github.io/20161023%20-%20Airflow.html" rel="nofollow noreferrer">https://stlong0521.github.io/20161023%20-%20Airflow.html</a></p>
<p>and watching the detailed video in <a href="https://www.youtube.com/watch?v=Pr0FrvIIfTU" rel="nofollow noreferrer">https://www.youtube.com/watch?v=Pr0FrvIIfTU</a></p>
|
python|scheduling|directed-acyclic-graphs|airflow
| 1 |
1,908,485 | 44,770,065 |
Sort IPs via Jinja2 template
|
<p>I am having an issue sorting IPs via Jinja2 and Ansible. Here are my variables and jinja2 code for ansible templates.</p>
<p><strong>roles/DNS/vars/main.yml:</strong></p>
<pre><code>---
DC1:
srv1:
ip: 10.2.110.3
srv2:
ip: 10.2.110.11
srv3:
ip: 10.2.110.19
srv4:
ip: 10.2.110.24
DC2:
srv5:
ip: 172.26.158.3
srv6:
ip: 172.26.158.11
srv7:
ip: 172.26.158.19
srv8:
ip: 172.26.158.24
</code></pre>
<p><strong>roles/DNS/templates/db.example.com.j2:</strong></p>
<pre><code>$TTL 86400
@ IN SOA example.com. root.example.com. (
2014051001 ; serial
3600 ; refresh
1800 ; retry
604800 ; expire
86400 ; minimum
)
; Name server
IN NS dns01.example.com.
; Name server A record
dns01.example.com. IN A 10.2.110.92
; 10.2.110.0/24 A records in this Domain
{% for hostname, dnsattr in DC1.iteritems() %}
{{hostname}}.example.com. IN A {{dnsattr.ip}}
; 172.26.158.0/24 A records in this Domain
{% for hostname, dnsattr in DC2.iteritems() %}
{{hostname}}.example.com. IN A {{dnsattr.ip}}
</code></pre>
<p><strong>roles/DNS/tasks/main.yml:</strong></p>
<pre><code>- name: Update DNS zone file db.example.com
template:
src: db.example.com.j2
dest: "/tmp/db.example.com"
with_items: "{{DC1,DC2}}"
- name: Restart DNS Server
service:
name: named
state: restarted
</code></pre>
<p>The DNS zone files get created correctly, but the IPs are not numerically sorted. I have tried using the following with no luck:</p>
<p><strong>Sorts on hostname alphabetically</strong></p>
<pre><code>{% for hostname, dnsattr in center.iteritems() | sort %}
</code></pre>
<p><strong>Does not find the attribute dnsattr</strong></p>
<pre><code>{% for hostname, dnsattr in center.iteritems() | sort(attribute='dnsattr.ip') %}
</code></pre>
<p><strong>Does not find the attribute ip</strong></p>
<pre><code>{% for hostname, dnsattr in center.iteritems() | sort(attribute='ip') %}
</code></pre>
|
<p>To have the IPs numerically sorted you could implement and use your own filter plugin (btw I'd be interested in any other solution):</p>
<p>In <code>ansible.cfg</code> add <code>filter_plugins = path/to/filter_plugins</code>.</p>
<p>In <code>path/to/filter_plugins/ip_filters.py</code>:</p>
<pre><code>#!/usr/bin/python

def ip_key(ip):
    # sort numerically, octet by octet
    return [int(part) for part in ip.split('.')]


class FilterModule(object):

    def filters(self):
        return {
            'sort_ip_filter': self.sort_ip_filter,
        }

    def sort_ip_filter(self, ip_list):
        # a key-based sort works on both Python 2 and 3 (cmp= was removed in 3)
        return sorted(ip_list, key=ip_key)
</code></pre>
<p>Then, in Ansible:</p>
<pre><code>- name: "Sort ips"
debug:
msg: vars='{{ my_ips | sort_ip_filter }}'
</code></pre>
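<p>To use it inside the zone-file template from the question, one option is to extract the IPs first (a sketch; Jinja's <code>map(attribute='ip')</code> falls back to item lookup on the host dicts):</p>
<pre><code>{% for ip in DC1.values() | map(attribute='ip') | list | sort_ip_filter %}
{{ ip }}
{% endfor %}
</code></pre>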
<hr>
<p>I would also use the <code>ipaddr</code> filter to ensure the format is right:</p>
<pre><code>- name: "Sort ips"
debug:
msg: vars='{{ my_ips | ipaddr | sort_ip_filter }}'
</code></pre>
|
python|ansible|jinja2
| 1 |
1,908,486 | 23,528,557 |
How to import relative Python package (pycrypto)
|
<p>I am new to Python (as of today) and having trouble following this example for AES: <a href="https://pypi.python.org/pypi/pycrypto/2.6.1" rel="nofollow">https://pypi.python.org/pypi/pycrypto/2.6.1</a> using Python 3.3</p>
<p><code>from Crypto.Cipher import AES</code></p>
<p>I downloaded the package from here <a href="https://www.dlitz.net/software/pycrypto/" rel="nofollow">https://www.dlitz.net/software/pycrypto/</a> (pycrypto-2.6.1.tar.gz) as <strong>I want it as a local dependency since this is a portable plugin for Sublime Text 3</strong>.</p>
<p>So I have <code>/MyPLugin/Crypto/</code> and Crypto looks good having the expected <code>__init__.py</code> files in the right places.</p>
<p>In <code>/MyPlugin/myplugin.py</code> I am trying to import AES like in the example (<code>from Crypto.Cipher import AES</code>). I have tried many combinations with dots and stuff but nothing seems to work.</p>
<p>How can I import AES from this relative Crypto folder?</p>
<p>Couple of the tries:</p>
<p><code>from MyPlugin.Crypto.Cipher import AES</code> = ImportError: cannot import name AES</p>
<p><code>import Crypto</code> = ImportError: No module named 'Crypto'</p>
<p><code>import .Crypto</code> = SyntaxError: invalid syntax</p>
<p>PS I made a mistake - it is using <strong>Python 3.3</strong></p>
|
<p>Make sure that the library you are talking about is in your Python path. Information about modifying your Python path is <a href="https://docs.python.org/2/install/index.html#modifying-python-s-search-path" rel="nofollow">here</a>. I'd try doing that; this kind of ImportError usually happens when a newly added library isn't on the search path.</p>
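<p>For a bundled dependency in a portable plugin, a common approach is to keep the package folder next to your plugin file and prepend that folder to <code>sys.path</code> before importing; a minimal sketch (note the package must already be importable as pure Python or pre-built, since PyCrypto normally compiles C extensions):</p>
<pre><code>import os, sys

plugin_dir = os.path.dirname(os.path.abspath(__file__))
if plugin_dir not in sys.path:
    sys.path.insert(0, plugin_dir)

from Crypto.Cipher import AES
</code></pre>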
|
python|import|sublimetext3|pycrypto
| 2 |
1,908,487 | 24,133,820 |
function names from config file in python
|
<p>I have a JSON config file which tells me what kind of distribution to sample from. For example:</p>
<pre><code>{ "parameter1" : { "distribution" : "exponential", "mean" = 5},
"parameter2" : { "distribution" : "poisson", "mean" = 3} }
</code></pre>
<p>The list above can be exhaustive. I need have a function which will read this json file, and return the appropriate distribution to the calling code.</p>
<p>I tried using string concatenation and <code>eval()</code>, but that gives me the sample values directly.
I should be able to return the object/function to the calling function.</p>
<p>Can some one help me do it?</p>
<p>My attempt:</p>
<pre><code>import numpy.random as random
def getDistribution(distribution, params):
string= 'random.'+distribution
return eval(string)(params["mean"])
</code></pre>
<p>This returns a value to me. Is there a way to return a handle to the actual distribution function like <code>random.exponential()</code> or <code>random.poisson()</code> which I can use in the function calling <code>getDistribution()</code>?</p>
|
<p>You can use getattr to return the method (which is an attribute of <code>random</code>):</p>
<pre><code>def get_method(name):
return getattr(random, name, None)
def get_distribution(method, params):
return method(params['mean'])
method_name = 'exponential'
method = get_method(method_name)
if method:
results = get_distribution(method, params)
else:
raise AttributeError('No such method in random: {}'.format(method_name))
</code></pre>
<p><code>getattr</code> takes an optional third argument which is the value to return when the attribute cannot be found. I am using this to explicitly return <code>None</code>. You can change that to a default method that you want to use if the chosen method name is not available.</p>
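<p>Tying it back to the JSON config from the question, a sketch of the full flow:</p>
<pre><code>import json

with open('config.json') as f:
    config = json.load(f)

for name, spec in config.items():
    method = get_method(spec['distribution'])
    if method is None:
        raise AttributeError('No such method in random: {}'.format(spec['distribution']))
    print(name, get_distribution(method, spec))
</code></pre>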
|
python|json|numpy
| 3 |
1,908,488 | 61,046,742 |
Cannot import torchvision in Python on Ubuntu
|
<p>I'm trying to go through a <a href="https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py" rel="nofollow noreferrer">simple neural networking tutorial</a> on my Ubuntu 18.04 LTS machine and run into this error when trying to import the <code>torchvision</code> module:</p>
<pre><code> Traceback (most recent call last):
File "class.py", line 2, in <module>
import torchvision
File "/home/drubbels/anaconda3/lib/python3.7/site-packages/torchvision/__init__.py", line 4, in <module>
from torchvision import datasets
File "/home/drubbels/anaconda3/lib/python3.7/site-packages/torchvision/datasets/__init__.py", line 9, in <module>
from .fakedata import FakeData
File "/home/drubbels/anaconda3/lib/python3.7/site-packages/torchvision/datasets/fakedata.py", line 3, in <module>
from .. import transforms
File "/home/drubbels/anaconda3/lib/python3.7/site-packages/torchvision/transforms/__init__.py", line 1, in <module>
from .transforms import *
File "/home/drubbels/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 17, in <module>
from . import functional as F
File "/home/drubbels/anaconda3/lib/python3.7/site-packages/torchvision/transforms/functional.py", line 5, in <module>
from PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION
ImportError: cannot import name 'PILLOW_VERSION' from 'PIL' (/home/drubbels/anaconda3/lib/python3.7/site-packages/PIL/__init__.py)
</code></pre>
<p>I've previously run into a similar <code>PIL</code>-related error when trying to display images in Open CV with Python, and was unable to resolve it then.</p>
<p>I know that both cases (this tutorial and the Open CV program) should have worked fine in principle, because I've previously done both with no problems on a Windows 8.1 machine (which I now no longer have access to). I have also run into the exact same problem with importing <code>torchvision</code> on <em>another</em> Ubuntu machine, so I don't think it's some weird quirk of my computer in particular. Therefore, I believe the problem is Linux-related.</p>
<p>I have already reinstalled <code>Pillow</code>, which didn't help.</p>
<p>EDIT: everything is installed with conda. I don't imagine there could be much wrong with the environment - I did a fresh install of Anaconda this morning.</p>
|
<p>Check this:</p>
<pre><code>https://github.com/python-pillow/Pillow/blob/master/CHANGES.rst
</code></pre>
<p>The PILLOW_VERSION constant was removed in Pillow 7.0.0.</p>
<p>So installing an older version of Pillow will help.</p>
<pre><code>pip install 'pillow<7.0.0'
</code></pre>
<p>or</p>
<pre><code>pip3 install 'pillow<7.0.0'
</code></pre>
<p>If you are using conda.</p>
<pre><code>conda uninstall pillow
conda install pillow=6.1
</code></pre>
|
python|ubuntu|python-imaging-library|torchvision
| 2 |
1,908,489 | 60,932,252 |
python network analysis: export DICOM object from pcap file
|
<p>In Wireshark I can use the feature "export object => DICOM" to extract from network packets the DICOM file sent. </p>
<p>I would like to do the same thing with Python or with Wireshark API, is it possible?</p>
|
<p>If we're using python and tshark, this is mostly a call to subprocess as tshark already has this capability:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess as sp
import os
# Source file
pcap_file = "C:\\...\\DICOM.pcap"
dest_dir = "exported"
os.mkdir(dest_dir)
# Read the file and use --export-objects. Next arg must be `protocol,dir`.
sp.run(["tshark", "-Q", "-r", pcap_file, "--export-objects", "DICOM," + dest_dir])
</code></pre>
<p>Then if you <code>ls exported</code>, you'll see the exported file(s). I have tested and verified that <a href="https://bugs.wireshark.org/bugzilla/attachment.cgi?id=2044&name=packet-dcm-f3-pdv.pcap" rel="nofollow noreferrer">this wireshark bug file</a> has a dicom file that you can export with these commands.</p>
<p>If you want to better understand the extraction process, Wireshark is open source and you can look at its <a href="https://github.com/wireshark/wireshark/blob/master/epan/dissectors/packet-dcm.c" rel="nofollow noreferrer">DICOM code</a>.</p>
|
python|wireshark|packet
| 0 |
1,908,490 | 49,409,556 |
smoothing curves with no local extremums using numpy
|
<p>I am trying to get a smooth curve for my data points. Say (lin_space,rms) are my ordered pairs that I need to plot. For the following code:</p>
<pre><code>spl=UnivariateSpline(lin_space,rms)
x=np.arange(0,1001,0.5)
plt.plot(lin_space,rms,'k.')
plt.plot(lin_space,spl(lin_space),'b-')
plt.plot(x,np.sqrt(x),'r-')
</code></pre>
<p>After smoothing with UnivariateSpline I am getting the blue line, whereas I need my plot to look like the red one shown (with no local extrema):</p>
<p><a href="https://i.stack.imgur.com/iqrV6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iqrV6.png" alt="enter image description here"></a></p>
|
<p>You'll want a more limited class of models.</p>
<p>One option, for the data that you have shown, is to do least squares with a square-root function. That should produce good results.</p>
<p>A running average will be smooth(er), depending on how you weight the terms.</p>
<p>A Gaussian Process regression with an RBF + WhiteNoise kernel might be worth checking into, with appropriate a priori bounds on the length scale of the RBF kernel. OTOH, your residuals aren't normally distributed, so this model may not work as well for values toward the edges.</p>
<p>Note: If you specifically want a function with no local extrema, you need to select a class of models that has that property. e.g. fitting a square root function.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import sklearn.linear_model
mpl.rcParams['figure.figsize'] = (18,16)
WINDOW=30
def ma(signal, window=30):
return sum([signal[i:-window+i] for i in range(window)])/window
X=np.linspace(0,1000,1000)
Y=np.sqrt(X) + np.log(np.log(X+np.e))*np.random.normal(0,1,X.shape)
sqrt_model_X = np.sqrt(X)
model = sklearn.linear_model.LinearRegression()
model.fit(sqrt_model_X.reshape((-1,1)),Y.reshape((-1,1)))
plt.scatter(X,Y,c='b',marker='.',s=5)
plt.plot(X,np.sqrt(X),'r-')
plt.plot(X[WINDOW:],ma(Y,window=WINDOW),'g-.')
plt.plot(X,model.predict(sqrt_model_X.reshape((-1,1))),'k--')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/JYhPY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JYhPY.png" alt="enter image description here"></a></p>
|
python|numpy|scipy|curve-fitting|smoothing
| 1 |
1,908,491 | 49,755,480 |
Case-insensitive sections in ConfigParser
|
<p>I am looking at <a href="https://docs.python.org/3.6/library/configparser.html" rel="nofollow noreferrer">Python 3.6 documentation</a> where it says</p>
<blockquote>
<p>By default, section names are case sensitive but keys are not [1].</p>
</blockquote>
<p>For the footnote it says</p>
<blockquote>
<p>[1] (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) Config parsers allow for heavy customization. If you are interested in changing the behaviour outlined by the footnote reference, consult the Customizing Parser Behaviour section.</p>
</blockquote>
<p>So I look at "14.2.7. Customizing Parser Behaviour" but I cannot find the description of how to make sections case-insensitive.</p>
<p>I want a section like this:</p>
<pre><code>[SETTINGS]
...
</code></pre>
<p>To be accessible like this <code>config['section']</code>, but currently I get an error. This is the only change to the config parser I want to apply.</p>
|
<p>You can do this fairly easily in Python 3.x by passing something as the optional <code>dict_type=</code> keyword argument described in the <a href="https://docs.python.org/3.6/library/configparser.html#configparser-objects" rel="nofollow noreferrer"><code>ConfigParser</code> documentation</a>—which in this case we'd like the type to be a case-insensitive ordered <code>dictionary</code>.</p>
<p>Unfortunately there isn't one in the standard library, nor a canonical implementation of one that I know of...so I cobbled one together to use as an example. It hasn't been rigorously tested, but works well enough to illustrate the general idea.</p>
<p><strong>Note:</strong> For testing I used the following <code>simple.ini</code> file (which I swiped from <a href="https://pymotw.com/3/configparser/index.html" rel="nofollow noreferrer">pymotw</a>):</p>
<pre class="lang-none prettyprint-override"><code># This is a simple example with comments.
[bug_tracker]
url = http://localhost:8080/bugs/
username = dhellmann
; You should not store passwords in plain text
; configuration files.
password = SECRET
</code></pre>
<p>Here's a demonstration showing using one to do what's needed:</p>
<pre><code>import collections
from configparser import ConfigParser
class CaseInsensitiveDict(collections.MutableMapping):
""" Ordered case insensitive mutable mapping class. """
def __init__(self, *args, **kwargs):
self._d = collections.OrderedDict(*args, **kwargs)
self._convert_keys()
def _convert_keys(self):
for k in list(self._d.keys()):
v = self._d.pop(k)
            self._d[k.lower()] = v  # re-insert under the lower-cased key
def __len__(self):
return len(self._d)
def __iter__(self):
return iter(self._d)
def __setitem__(self, k, v):
self._d[k.lower()] = v
def __getitem__(self, k):
return self._d[k.lower()]
def __delitem__(self, k):
del self._d[k.lower()]
parser = ConfigParser(dict_type=CaseInsensitiveDict)
parser.read('simple.ini')
print(parser.get('bug_tracker', 'url')) # -> http://localhost:8080/bugs/
print(parser.get('Bug_tracker', 'url')) # -> http://localhost:8080/bugs/
</code></pre>
|
python|configparser
| 2 |
1,908,492 | 49,787,852 |
Group consecutive integers together
|
<p>Have the following code:</p>
<pre><code>import sys
ints = [1,2,3,4,5,6,8,9,10,11,14,34,14,35,16,18,39,10,29,30,14,26,64,27,48,65]
ints.sort()
ints = list(set(ints))
c = {}
for i,v in enumerate(ints):
if i+1 >= len(ints):
continue
if ints[i+1] == v + 1 or ints[i-1] == v - 1:
if len(c) == 0:
c[v] = [v]
c[v].append(ints[i+1])
else:
added=False
for x,e in c.items():
last = e[-1]
if v in e:
added=True
break
if v - last == 1:
c[x].append(v)
added=True
if added==False:
c[v] = [v]
else:
if v not in c:
c[v] = [v]
print('input ', ints)
print('output ', c)
</code></pre>
<p>The objective:</p>
<p>Given a list of integers, create a dictionary that contains consecutive integers grouped together to reduce the overall length of the list.</p>
<p>Here is output from my current solution:</p>
<pre><code>input [1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 14, 16, 18, 26, 27, 29, 30, 34, 35, 39, 48, 64, 65]
output {1: [1, 2, 3, 4, 5, 6], 8: [8, 9, 10, 11], 14: [14], 16: [16], 18: [18], 26: [26, 27], 29: [29, 30], 34: [34, 35], 39: [39], 48: [48], 64: [64]}
</code></pre>
<p>Conditions/constraints:</p>
<ul>
<li>If the current integer is either a) in an existing list or b) is the last item in an existing list, we don't want to create another list for this item.
i.e. in the range 1-5 inclusive, when we get to <code>3</code>, don't create a list <code>3,4</code>, instead append <code>3</code> to the existing list <code>[1,2]</code></li>
</ul>
<p>My current iteration works fine, but it gets exponentially slower the bigger the list is because of the <code>for x,e in c.items()</code> existing list check. </p>
<p>How can I make this faster while still achieving the same result?</p>
<p>New solution (from 13 seconds to 0.03 seconds using an input list of 19,000 integers):</p>
<pre><code>c = {}
i = 0
last_list = None
while i < len(ints):
cur = ints[i]
if last_list is None:
c[cur] = [cur]
last_list = c[cur]
else:
if last_list[-1] == cur-1:
last_list.append(cur)
else:
c[cur] = [cur]
last_list = c[cur]
i += 1
</code></pre>
|
<p>As you have lists of consecutive numbers, I suggest you use <code>range</code> objects instead of <code>list</code>s:</p>
<pre><code>d, head = {}, None
for x in l:
if head is None or x != d[head].stop:
head = x
d[head] = range(head, x+1)
</code></pre>
|
python|algorithm|list
| 3 |
1,908,493 | 21,194,519 |
Can't use mkvirtualenv, "site-packages/_markerlib" permission denied
|
<p>Edit, this seems to be a general permissions issue:</p>
<pre><code>The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was:
/Library/Python/2.7/site-packages/
Perhaps your account does not have write access to this directory?
</code></pre>
<hr>
<p>I found this questions, which is exactly my problem, but I don't have a <code>~/.pydistutils.cfg</code> file: </p>
<p><a href="https://stackoverflow.com/questions/18979749/virtualenv-could-not-create-lib-python2-7-permission-denied/18979750">virtualenv: could not create '/lib/python2.7': Permission denied</a></p>
<p>I also definitely have setuptools and command line tools installed as mentioned here: <a href="https://stackoverflow.com/questions/12026843/error-when-ex-mkvirtualenv-in-mountain-lion">Error when ex mkvirtualenv in Mountain Lion</a></p>
<p>I installed everything with sudo, but I can't use sudo here: </p>
<pre><code>sudo: mkvirtualenv: command not found
</code></pre>
<p>Traceback:</p>
<pre><code>mkvirtualenv myenv
......stuff here......
build/lib/setuptools/_backport/hashlib
running install_lib
creating /Library/Python/2.7/site-packages/_markerlib
error: could not create '/Library/Python/2.7/site-packages/_markerlib': Permission denied
----------------------------------------
...Installing Setuptools...done.
Traceback (most recent call last):
File "/usr/local/bin/virtualenv", line 8, in <module>
load_entry_point('virtualenv==1.10.1', 'console_scripts', 'virtualenv')()
File "/Library/Python/2.7/site-packages/virtualenv.py", line 821, in main
symlink=options.symlink)
File "/Library/Python/2.7/site-packages/virtualenv.py", line 961, in create_environment
install_sdist('Setuptools', 'setuptools-*.tar.gz', py_executable, search_dirs)
File "/Library/Python/2.7/site-packages/virtualenv.py", line 932, in install_sdist
filter_stdout=filter_install_output)
File "/Library/Python/2.7/site-packages/virtualenv.py", line 899, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /Users/cbron/.virtualenvs/sam/bin/python setup.py install --single-version-externally-managed --record record failed with error code 1
</code></pre>
|
<pre><code>sudo chown -R your-username:wheel /Library/Python/2.7/site-packages
</code></pre>
<p>You may need to chmod the permissions of the dir and/or uninstall and re-install virtualenv:</p>
<pre><code>sudo pip uninstall virtualenv
sudo pip uninstall virtualenvwrapper
sudo pip install virtualenv
sudo pip install virtualenvwrapper
echo "source `which virtualenvwrapper.sh`" >> ~/.bash_profile
. ~/.bash_profile
</code></pre>
|
python|virtualenv
| 3 |
1,908,494 | 21,036,701 |
Django http POST error
|
<p><img src="https://i.stack.imgur.com/DI0p3.png" alt="enter image description here"></p>
<p>I am using the chrome postman extension to test out Django's request and response functionality as I'm going to need to POST data to a django app. My apps view is:</p>
<pre><code>def index(request):
# loop through keys
for key in request.POST:
value = request.POST[key]
# loop through keys and values
output =""
for key, value in request.POST.iteritems():
output= output + str(key) + " " + str(value) + "<br>"
return HttpResponse(output)
</code></pre>
<p>When I send the request I get:</p>
<pre><code>Forbidden (403)
CSRF verification failed. Request aborted.
Help
Reason given for failure:
CSRF cookie not set.
</code></pre>
<p>How can I fix this?</p>
<p>Edit: Here is the output after your recommended changes:</p>
<p><img src="https://i.stack.imgur.com/waHsA.png" alt="enter image description here"></p>
|
<p>Decorate your view with <a href="https://docs.djangoproject.com/en/dev/ref/contrib/csrf/#django.views.decorators.csrf.csrf_exempt">csrf_exempt</a>. For this to work, you need to add 'django.middleware.csrf.CsrfViewMiddleware' to your MIDDLEWARE_CLASSES variable. csrf_exempt basically marks your view for being exempted from any CSRF checks. More details <a href="https://docs.djangoproject.com/en/dev/ref/contrib/csrf/#django.views.decorators.csrf">here.</a></p>
<pre><code>from django.views.decorators.csrf import csrf_exempt
@csrf_exempt
def index(request):
# loop through keys
for key in request.POST:
value = request.POST[key]
# loop through keys and values
output =""
for key, value in request.POST.iteritems():
output= output + str(key) + " " + str(value) + "<br>"
return HttpResponse(output)
</code></pre>
|
python|django
| 7 |
1,908,495 | 21,139,364 |
How to import python class file from same directory?
|
<p>I have a directory in my Python 3.3 project called /models.</p>
<p>from my <code>main.py</code> I simply do a</p>
<pre><code>from models import *
</code></pre>
<p>in my <code>__init__.py</code>:</p>
<pre><code>__all__ = ["Engine","EngineModule","Finding","Mapping","Rule","RuleSet"]
from models.engine import Engine,EngineModule
from models.finding import Finding
from models.mapping import Mapping
from models.rule import Rule
from models.ruleset import RuleSet
</code></pre>
<p>This works great from my application.</p>
<p>I have a model that depends on another model, such that in my <code>engine.py</code> I need to import <code>finding.py</code>. When I do: <code>from finding import Finding</code></p>
<p>I get the error <code>No Such Module exists</code>.</p>
<p>How can I import class B from file A in the same module/directory?</p>
|
<p>You are using Python 3, which disallows these implicit relative imports (they can lead to confusion between modules of the same name in different packages).</p>
<p>Use either:</p>
<pre><code>from models import finding
</code></pre>
<p>or</p>
<pre><code>import models.finding
</code></pre>
<p>or, probably best:</p>
<pre><code>from . import finding # The . means "from the same directory as this module"
</code></pre>
|
python|python-3.x|python-import
| 61 |
1,908,496 | 70,168,279 |
TypeError: color must be int or single-element tuple
|
<p>Here is the error encoutered:</p>
<blockquote>
<p>TypeError: color must be int or single-element tuple</p>
</blockquote>
<p>Here is the code I am running:</p>
<pre><code>from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont, ImageOps
label1 = "VIKRAM"
font = ImageFont.truetype('D:/vikram/pythonProject/fonts/BalloonCraft-9YBK7.ttf', 800)
line_height = sum(font.getmetrics())
fontimage1 = Image.new('L', (font.getsize(label1)[0], line_height))
ImageDraw.Draw(fontimage1).text((0,0),str(label1),stroke_width=10, stroke_fill=(0, 0, 0), fill=255, font=font)
fontimage1 = fontimage1.rotate(90, resample=Image.NEAREST, expand=True)
orig = Image.open('F:/desktop/Kitty/PSI Hello Kitty Theme Personalized Door Poster-1.png')
orig.paste((135, 255, 135), box=(2200, 1500), mask=fontimage1)
orig.show()
</code></pre>
<p>If you have any idea how to fix this, I would be very grateful. Thanks in advance.</p>
|
<p>I believe that because you passed <code>'L'</code> to <code>Image.new</code>, your mask image is grayscale, so you can't specify an RGB tuple for <code>stroke_fill</code>: you only have gray shades. As the error message says, you have to use an int or a single-element tuple.</p>
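<p>A minimal fix is to pass single grayscale values (255 for white, 0 for black) when drawing on the <code>'L'</code> mask:</p>
<pre><code>ImageDraw.Draw(fontimage1).text((0, 0), str(label1), stroke_width=10,
                                stroke_fill=0, fill=255, font=font)
</code></pre>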
|
python
| 0 |
1,908,497 | 70,297,737 |
Python Subsequence
|
<p>Example: I have the array [9,0,1,2,3,6,4,5,0,9,7,8,9] and it should return the max consecutive subsequence, so the answer should be 9,0,1,2,3 (if the number is 9 and the next one is 0, that's fine), but my code is returning 0,1,2,3</p>
|
<p>Starting from each element, do a loop that compares adjacent elements, until you get a pair that isn't consecutive.</p>
<p>Instead of saving all the consecutive sublists in another list, just save the sublist in a variable. When you get another sublist, check if it's longer and replace it.</p>
<pre><code>def constructPrintLIS(arr: list, n: int):
    longest_seq = []
    for i in range(n):
        # extend j while arr[j] -> arr[j+1] is consecutive (9 -> 0 counts too)
        j = i
        while j < n - 1 and (arr[j] == arr[j+1] - 1 or (arr[j] == 9 and arr[j+1] == 0)):
            j += 1
        # the run arr[i..j] has length j - i + 1
        if j - i + 1 > len(longest_seq):
            longest_seq = arr[i:j+1]

        if n - i <= len(longest_seq):
            # no later start can produce a longer run, so stop
            break
    printLIS(longest_seq)
</code></pre>
|
python-2.7
| 0 |
1,908,498 | 53,603,783 |
pysnmp OID resolution
|
<p>Using pysnmp, how you perform resolution on queries that return an OID instead of value?</p>
<p>I wrote a lookup tool using pysnmp, here are the inputs and results :</p>
<pre><code>./run_snmp_discovery.py --host 1.1.1.1 --community XXXXXX --command get --mib_oid_index '{ "mib" : "SNMPv2-MIB", "oid" : "sysObjectID", "index" : "0" }' --verbose
Debug: 'varBind': SNMPv2-MIB::sysObjectID.0 = SNMPv2-SMI::enterprises.9.1.222
{"0": {"sysObjectID": "SNMPv2-SMI::enterprises.9.1.222"}}
</code></pre>
<p>How can the result be converted to the text value <code>cisco7206VXR</code> (reference <a href="http://www.circitor.fr/Mibs/Html/C/CISCO-PRODUCTS-MIB.php#cisco7206VXR" rel="nofollow noreferrer">http://www.circitor.fr/Mibs/Html/C/CISCO-PRODUCTS-MIB.php#cisco7206VXR</a>)</p>
|
<p>If you are using code <a href="http://snmplabs.com/pysnmp/quick-start.html#fetch-snmp-variable" rel="nofollow noreferrer">like this</a>:</p>
<pre><code>from pysnmp.hlapi import *
errorIndication, errorStatus, errorIndex, varBinds = next(
getCmd(SnmpEngine(),
CommunityData('public'),
UdpTransportTarget(('demo.snmplabs.com', 161)),
ContextData(),
ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)))
)
for varBind in varBinds:
print(' = '.join([x.prettyPrint() for x in varBind]))
</code></pre>
<p>And if you want the MIB object to be represented as an OID, note that the <code>varBind</code> in the code above is actually an <a href="http://snmplabs.com/pysnmp/docs/api-reference.html#pysnmp.smi.rfc1902.ObjectType" rel="nofollow noreferrer">ObjectType</a> class instance, which behaves like a tuple of two elements. The first element is an <a href="http://snmplabs.com/pysnmp/docs/api-reference.html#pysnmp.smi.rfc1902.ObjectIdentity" rel="nofollow noreferrer">ObjectIdentity</a>, which has the <a href="http://snmplabs.com/pysnmp/docs/api-reference.html#pysnmp.smi.rfc1902.ObjectIdentity.getOid" rel="nofollow noreferrer">.getOid</a> method:</p>
<pre><code>>>> objectIdentity = ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)
>>> objectIdentity.resolveWithMib(mibViewController)
>>> objectIdentity.getOid()
ObjectName('1.3.6.1.2.1.1.1.0')
</code></pre>
<p>If you want the MIB object and its value to be fully represented in MIB terms (i.e. value resolved into an enumeration), then you just need to load the MIB that defines that MIB object (perhaps CISCO-PRODUCTS-MIB) using the <a href="http://snmplabs.com/pysnmp/docs/api-reference.html#pysnmp.smi.rfc1902.ObjectIdentity.loadMibs" rel="nofollow noreferrer">.loadMibs()</a> method. You might also need to set up a <a href="http://snmplabs.com/pysnmp/docs/api-reference.html#pysnmp.smi.rfc1902.ObjectIdentity.addAsn1MibSource" rel="nofollow noreferrer">search path</a> to let pysnmp find the MIB you refer to.</p>
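<p>For example, the <code>ObjectIdentity</code> calls chain, so a sketch of resolving the original value might look like this (the MIB source URL is the public snmplabs mirror; adjust it to wherever your compiled MIBs live):</p>
<pre><code>objectType = ObjectType(
    ObjectIdentity('SNMPv2-MIB', 'sysObjectID', 0)
    .addAsn1MibSource('http://mibs.snmplabs.com/asn1/@mib@')
    .loadMibs('CISCO-PRODUCTS-MIB'))
</code></pre>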
|
python|snmp|pysnmp
| 1 |
1,908,499 | 53,384,553 |
How do I upload a secondary python file to google cloud platform?
|
<p>I am using the Google Cloud Platform App Engine to have a main Python file serve up a website using Flask, with a route in that file that calls a Python function kept in a secondary Python file. Whenever I try to call the function via the website I get a module error. Is there any way for me to upload this file alongside main.py so that I can call functions in it without just copying the file into main.py? Also, I checked each bucket and could not see any of my files in any of them; is there a bucket I should upload it to?</p>
|
<p>As I understand it, you are calling from the main.py file in an App Engine application to a function developed by you in a secondary file. If you want that, it is as simple as adding the file to the same folder as your main.py and then deploying your application. All the files in the folder will be uploaded with your App Engine application.</p>
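<p>For example, a hypothetical layout (file and function names are assumptions):</p>
<pre><code>myapp/
├── app.yaml
├── main.py       # from helpers import my_function
└── helpers.py    # defines my_function
</code></pre>
<p>Deploying from the <code>myapp/</code> folder uploads both files, so <code>main.py</code> can simply do <code>from helpers import my_function</code>.</p>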
|
python|python-3.x|google-cloud-platform
| 0 |