Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, length 10 to 150) | question (string, length 21 to 64.2k) | answer (string, length 19 to 59.4k) | tags (string, length 5 to 112) | score (int64, -10 to 17.3k)
---|---|---|---|---|---|---|
1,905,900 | 15,304,779 |
is it possible to change ms access table name with python
|
<p>I have several ms access databases that each have a table named <code>PlotStatus-name-3/13/12</code>.</p>
<p>I need to import each of these tables into a <code>.csv</code> table. If I manually change the name of the tables to <code>PlotStatus_name_3_13_12</code>, this code works. <strong>Does anyone know how to change the table names using python?</strong></p>
<pre class="lang-python prettyprint-override"><code>#connect to access database
for filename in os.listdir(prog_rep_local):
if filename[-6:] == ".accdb":
DBtable = os.path.join(prog_rep_local, filename)
conn = pyodbc.connect(r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=' + DBtable)
cursor = conn.cursor()
ct = cursor.tables
for row in ct():
rtn = row.table_name
if rtn[:10] == "PlotStatus":
#this does not work:
#Oldpath = os.path.join(prog_rep_local, filename, rtn)
#print Oldpath
#fpr = Oldpath.replace('-', '_')#.replace("/","_")
#print fpr
#newname = os.rename(Oldpath, fpr) this does not work
#print newname
#spqaccdb = "SELECT * FROM " + newname
#this works if I manually change the table names in advance
sqlaccdb = "SELECT * FROM " + rtn
print sqlaccdb
cursor.execute(sqlaccdb)
rows = cursor.fetchall()
</code></pre>
|
<p>An easier solution would be to just add brackets around the table name so that the /s don't throw off the SQL command interpreter.</p>
<pre><code>sqlaccdb = "SELECT * FROM [" + rtn + "]"
</code></pre>
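<p>Applied to the loop from the question, it is a one-line change. A minimal sketch, assuming the same <code>cursor</code> from the question:</p>
<pre class="lang-python prettyprint-override"><code># Sketch: same loop as in the question, with the table name wrapped in brackets
# so the "-" and "/" characters don't confuse the SQL parser.
for row in cursor.tables():
    rtn = row.table_name
    if rtn[:10] == "PlotStatus":
        sqlaccdb = "SELECT * FROM [" + rtn + "]"
        cursor.execute(sqlaccdb)
        rows = cursor.fetchall()
</code></pre>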
|
python|ms-access
| 1 |
1,905,901 | 49,584,917 |
How to merge two pandas time series objects with different date time indices?
|
<p>I have two disjoint time series objects, for example</p>
<p>-ts1</p>
<pre><code> Date Price
2010-01-01 1800.0
2010-01-04 1500.0
2010-01-08 1600.0
2010-01-09 1400.0
Name: Price, dtype: float64
</code></pre>
<p>-ts2</p>
<pre><code> Date Price
2010-01-02 2000.0
2010-01-03 2200.0
2010-01-05 2010.0
2010-01-07 2100.0
2010-01-10 2110.0
</code></pre>
<p>How I could merge the two into a single time series that should be sorted on date? like </p>
<p>-ts3</p>
<pre><code> Date Price
2010-01-01 1800.0
2010-01-02 2000.0
2010-01-03 2200.0
2010-01-04 1500.0
2010-01-05 2010.0
2010-01-07 2100.0
2010-01-08 1600.0
2010-01-09 1400.0
2010-01-10 2110.0
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="noreferrer"><code>pandas.concat</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html" rel="noreferrer"><code>DataFrame.append</code></a> to join them together, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="noreferrer"><code>DataFrame.sort_values</code></a> by column <code>Date</code>, and finally <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="noreferrer"><code>DataFrame.reset_index</code></a> with parameter <code>drop=True</code> for a default index:</p>
<pre><code>df3 = pd.concat([df1, df2]).sort_values('Date').reset_index(drop=True)
</code></pre>
<p>Alternative:</p>
<pre><code>df3 = df1.append(df2).sort_values('Date').reset_index(drop=True)
</code></pre>
<hr>
<pre><code>print (df3)
Date Price
0 2010-01-01 1800.0
1 2010-01-02 2000.0
2 2010-01-03 2200.0
3 2010-01-04 1500.0
4 2010-01-05 2010.0
5 2010-01-07 2100.0
6 2010-01-08 1600.0
7 2010-01-09 1400.0
8 2010-01-10 2110.0
</code></pre>
<p>EDIT:</p>
<p>If these are Series (time series) objects, the solution simplifies to:</p>
<pre><code>s3 = pd.concat([s1, s2]).sort_index()
</code></pre>
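<p>A self-contained sketch of the Series case, using the values from the example above:</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd

# Rebuild the two example series with DatetimeIndex indices
s1 = pd.Series([1800.0, 1500.0, 1600.0, 1400.0],
               index=pd.to_datetime(['2010-01-01', '2010-01-04',
                                     '2010-01-08', '2010-01-09']), name='Price')
s2 = pd.Series([2000.0, 2200.0, 2010.0, 2100.0, 2110.0],
               index=pd.to_datetime(['2010-01-02', '2010-01-03', '2010-01-05',
                                     '2010-01-07', '2010-01-10']), name='Price')

s3 = pd.concat([s1, s2]).sort_index()
print(s3.head(3))
# 2010-01-01    1800.0
# 2010-01-02    2000.0
# 2010-01-03    2200.0
</code></pre>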
|
python|pandas
| 10 |
1,905,902 | 49,602,669 |
How can I break line the values called from a function
|
<p>I am using Python 2 to make an app that counts the number of words and letters in a string.</p>
<pre><code>def string_length(aStr):
number_of_words = len(aStr.split())
number_of_letters = len(aStr)
return number_of_words, number_of_letters
print string_length("my name is ahmed") #Returns (4, 16)
</code></pre>
<p>How can I turn (4, 16) to:</p>
<p>4</p>
<p>16</p>
|
<p>Since the <code>string_length</code> function is returning a <a href="https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences" rel="nofollow noreferrer">tuple</a>, you have to <em>unpack</em> them before printing.</p>
<pre><code>def string_length(aStr):
number_of_words = len(aStr.split())
number_of_letters = len(aStr)
return number_of_words, number_of_letters
# Unpacking the tuple
words, letters = string_length('my name is ahmed')
print words
print letters
</code></pre>
|
python
| 1 |
1,905,903 | 20,911,725 |
How to receive data from a socket, process, and send back data in python?
|
<p>I have the following code:</p>
<pre><code>import socket # Import socket module
import sys
s = socket.socket() # Create a socket object
host = '' # Get local machine name
port = 1234 # Reserve a port for your service.
s.connect((host, port))
while 1:
data = s.recv(1024)
print ' data ' , data
d = data.split('?') # parsing values from server
if len(d) < 2:
# if does not contain ?, do nothing
continue
else:
a = d[0]
b = d[1].replace('\n', '')
# check how a compares to b, and send response accordingly
if (a > b):
s.send('1')
elif (a == b):
s.send('2')
else:
s.send('3')
s.close() # Close the socket when done
</code></pre>
<p>Without the processing code I have, it works fine if I just send a random value. But with the code above, I can only parse the first line, and then it stops. (I assume it closes the socket or something?)</p>
<p>The data coming from the socket looks like '1 ? 23' or '23 ? 1', etc. It expects a response that determines how the two numbers relate.</p>
<p>In comparison, if I have this code:</p>
<pre><code>import socket # Import socket module
import sys
s = socket.socket() # Create a socket object
host = '' # Get local machine name
port = 1234 # Reserve a port for your service.
s.connect((host, port))
backlog = ''
while 1:
data = s.recv(1024)
sp = data.split('\n')
if len(sp) < 2:
backlog += data
continue
line = backlog + sp[0]
backlog = sp[1]
data = line
print ' data ' , data
if not data:
break
s.send ('2')
s.close() # Close the socket when done
</code></pre>
<p>This code will yield a server response of either 'Correct!' or 'Incorrect...try again!' depending on whether it's right or wrong.</p>
|
<p>You seem to assume that you always get a full line with each <code>recv()</code> call. That is wrong.</p>
<p>You should split your input into lines, and only if you have a full line, you proceed.</p>
<pre><code>backlog = ''
while 1:
data = s.recv(1024)
# do we have a line break?
sp = data.split('\n')
if len(sp) < 2:
# no, we haven't...
backlog += data
continue
# yes, we have.
line = backlog + sp[0] # first part is the now complete line.
backlog = sp[1] # 2nd part is the start of the new line.
print ' line ' , line
d = line.split('?') # parsing values from server
if len(d) < 2:
# if does not contain ?, do nothing
continue
else:
a = int(d[0]) # we want to compare numbers, not strings.
b = int(d[1])
# check how a compares to b, and send response accordingly
if (a > b):
s.send('1')
elif (a == b):
s.send('2')
else:
s.send('3')
</code></pre>
<p>Try out what happens now.</p>
<p>Another question which occurs to me is what exactly does the server expect? Really only one byte? Or rather <code>'1\n'</code>, <code>'2\n'</code>, <code>'3\n'</code>?</p>
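<p>For completeness, if a single <code>recv()</code> can deliver more than one complete line, a variant that buffers everything and processes every full line may be safer. A sketch in the same Python 2 style as the question:</p>
<pre><code>backlog = ''
while 1:
    data = s.recv(1024)
    if not data:              # server closed the connection
        break
    backlog += data
    # handle every complete line received so far, keep the partial remainder
    while '\n' in backlog:
        line, backlog = backlog.split('\n', 1)
        d = line.split('?')
        if len(d) < 2:
            continue
        a = int(d[0])
        b = int(d[1])
        if a > b:
            s.send('1')
        elif a == b:
            s.send('2')
        else:
            s.send('3')
</code></pre>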
|
python|sockets
| 0 |
1,905,904 | 62,780,985 |
ModuleNotFoundError: No module named 'cv2.cv2' after changing `sys.path`
|
<p>I have two python3 environments on my Ubuntu 18.04 machine.</p>
<ol>
<li>The built-in Python environment at <code>/usr/local/lib/python3.6/dist-packages/</code> and</li>
<li>Anaconda python3. <strong>Note: Opencv works perfectly in both of them.</strong></li>
</ol>
<p>The system I am building needs to use <code>sudo python3</code> (<code>/usr/local/lib/python3.6/dist-packages/</code>) and <code>python3</code> (<code>anaconda</code>) at different times, but I want to minimize the dependency size for users. So the dependencies are installed in the built-in python3 only, and when the program is called by the normal <code>python3</code> the script sets <code>sys.path</code> to the <code>sys.path</code> of <code>sudo python</code>. (I have stored that path in a file at installation time.)</p>
<p>But when I do that, <code>import cv2</code> raises: <code>ModuleNotFoundError: No module named 'cv2.cv2'</code></p>
<p><strong>Note: Other libraries work fine. Only OpenCV has this problem.</strong></p>
|
<p>The correct way to do it was by installing OpenCV using <code>sudo apt install python3-opencv</code>. OpenCV installed with <code>apt install</code> can be used anywhere in your system whereas pip will only install OpenCV to a particular python environment.</p>
|
python|python-3.x|opencv|ubuntu|opencv-python
| 0 |
1,905,905 | 70,192,439 |
(sqlite, Flask + React), flask session session.get() returns None
|
<p>I've spent too much time on this problem, and I think I'm just doing something wrong, and there's a better way to do this.</p>
<p>I am creating a website with a react frontend and flask backend, with sqlite database. My issue is that when I am using session.get() it returns None in one function but not another.</p>
<p>In this function, the session.get() returns the correct user id and therefore the function is able to add the data to the database correctly. Even when I sign in as a different user, and the user id changes.</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/addlist', methods=['GET','POST'])
def addlist():
if request.method == 'POST':
user_id = session.get("user_id")
list_name = request.form['ListName']
color = request.form['color']
new_list = ListOfLists(user_id = user_id,name = list_name,color=color)
db.session.add(new_list)
db.session.commit()
return redirect('http://localhost:3000/userPage')
</code></pre>
<p>But when I try to get the lists in the frontend, session.get('user_id') returns None, and so I can not get the user's lists.</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/getlists', methods=['GET', 'POST'])
def getlists():
user_id = session.get('user_id')
print("USER ID: {}".format(user_id))
lists = ListOfLists.query.filter_by(user_id=1).all()
return jsonify(lists)
</code></pre>
<p>print output: <code>USER ID: None</code></p>
<p>the function itself definitely works because when I changed <code>.filter_by(user_id=user_id)</code> to <code>.filter_by(user_id=1)</code> where the user id in the database is 1, it actually sent the correct lists I have in the database to the frontend.</p>
<hr />
<p>here is the app configuration, register and login endpoints, and the javascript function that gets the lists if that's of any interest:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, request, redirect, session, jsonify
from flask_session import Session
from flask_sqlalchemy import SQLAlchemy
from flask_cors import CORS
#flask app initialization
app = Flask(__name__)
app.config["SECRET_KEY"] = "changeme"
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///database.db'
db = SQLAlchemy(app)
app.config["SESSION_TYPE"] = "sqlalchemy"
app.config['SESSION_SQLALCHEMY'] = db
Session(app)
CORS(app)
@app.route('/register', methods = ['GET', 'POST'])
def register():
if request.method == 'POST':
username = request.form['email']
password = request.form['password']
new_user = User(username=username, password=password) #user table constructor
db.session.add(new_user)
db.session.commit()
return redirect('http://localhost:3000/mainpage')
@app.route('/login', methods=['GET', 'POST'])
def Login():
if request.method == 'POST':
username = request.form['email']
password = request.form['password']
user = User.query.filter_by(username = username).first()
if user and user.password == password:
#save user to session
session['user_id'] = user.id
return redirect('http://localhost:3000/userPage')
</code></pre>
<pre class="lang-js prettyprint-override"><code>import React, { useState, useEffect } from 'react';
export default function GetLists() {
const [lists, setLists] = useState('');
useEffect(() => {
fetch('http://localhost:5000/getlists')
.then(response => response.json())
.then(data => console.log(data))
},[]);
//return currently doesn't matter as I am only console.log-ing the data
return (...);
}
</code></pre>
<h2>EDIT 1:</h2>
<p>I noticed the print output is <code>USER ID: 1</code> (the correct user id) when I manually go to <code>http://localhost:5000/getlists</code>, but the output is <code>None</code> when I use fetch() in javascript</p>
|
<p>Your session is stored in a cookie called session.
According to <a href="https://medium.com/@esimmler/fetch-doesn-t-send-cookies-by-default-f99ca4111774" rel="nofollow noreferrer">this article</a>, fetch does not send cookies by default.</p>
<p>To include the session cookie in your request, try this:</p>
<pre><code>fetch(url, {
credentials: "same-origin"
}).then(...).catch(...);
</code></pre>
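<p>Since the React dev server (port 3000) and Flask (port 5000) are different origins, the cookie may also have to be allowed across origins: the fetch call would then use <code>credentials: "include"</code> instead of <code>same-origin</code>, and Flask-CORS has to allow credentialed requests. A hedged sketch of the Flask-side configuration, mirroring the app setup in the question (the origin URL is an assumption based on that setup):</p>
<pre class="lang-python prettyprint-override"><code># Hedged sketch: enable credentialed cross-origin requests from the React dev server.
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
app.config["SECRET_KEY"] = "changeme"
CORS(app, supports_credentials=True, origins=["http://localhost:3000"])
</code></pre>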
|
javascript|python|reactjs|flask|flask-sqlalchemy
| 0 |
1,905,906 | 53,360,471 |
Semantic Segmentation to Bounding Boxes
|
<p>Suppose you are performing semantic segmentation. For simplicity, let's assume this is 1D segmentation rather than 2D (i.e. we only care about finding objects with width).</p>
<p>So the desired output of our model might be something like:</p>
<pre><code>[
[0, 0, 0, 0, 1, 1, 1], # label channel 1
[1, 1, 1, 0, 0, 1, 1], # label channel 2
[0, 0, 0, 1, 1, 1, 0], # label channel 3
#...
]
</code></pre>
<p>However, our trained imperfect model might be more like</p>
<pre><code>[
[0.1, 0.1, 0.1, 0.4, 0.91, 0.81, 0.84], # label channel 1
[0.81, 0.79, 0.85, 0.1, 0.2, 0.61, 0.91], # label channel 2
[0.3, 0.1, 0.24, 0.87, 0.62, 1, 0 ], # label channel 3
#...
]
</code></pre>
<p>What would be a performant way, using python, for getting the boundaries of the labels (or bounding box)</p>
<p>e.g. (zero-indexed)</p>
<pre><code>[
[[4, 6]], # "objects" of label 1
[[0, 2], [5, 6]] # "objects" of label 2
[[3, 5]], # "objects" of label 3
]
</code></pre>
<p>if it helps, perhaps transforming it to a binary mask would be of more use?</p>
<pre><code>def binarize(arr, cutoff=0.5):
return (arr > cutoff).astype(int)
</code></pre>
<p>with a binary mask we just need to find the consecutive integers of the indices of nonzero values:</p>
<pre><code>def consecutive(data, stepsize=1):
    return np.split(data, np.where(np.diff(data) != stepsize)[0]+1)
</code></pre>
<p>find "runs" of labels:</p>
<pre><code>def binary_boundaries(labels, cutoff=0.5):
return [consecutive(channel.nonzero()[0]) for channel in binarize(labels, cutoff)]
</code></pre>
<p>name objects according to channel name:</p>
<pre><code>def binary_objects(labels, cutoff=0.5, channel_names=None):
if channel_names == None:
channel_names = ['channel {}'.format(i) for i in range(labels.shape[0])]
return dict(zip(channel_names, binary_boundaries(labels, cutoff)))
</code></pre>
|
<p>Your trained model returned a <code>float image</code> rather than the <code>int image</code> you were looking for (it's not 'imperfect' just because the values are decimals), and yes, you do need to <code>threshold</code> it to get a <code>binary image</code>.</p>
<p>Once you do have the binary image, let's do some work with <code>skimage</code>.</p>
<pre><code>label_mask = measure.label(mask)
props = measure.regionprops(label_mask)
</code></pre>
<p><code>mask</code> is your binary image, and <code>props</code> now holds the properties of all the detected regions, which are your objects.</p>
<p>Among these properties there is a bounding box (<code>bbox</code>)!</p>
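<p>For the 1D example in the question, a minimal hedged sketch of the same threshold-then-label idea, using <code>scipy.ndimage</code> (which also handles 1D arrays) instead of <code>skimage</code>; treat it as an illustration rather than the only way:</p>
<pre class="lang-python prettyprint-override"><code>import numpy as np
from scipy import ndimage

preds = np.array([
    [0.1, 0.1, 0.1, 0.4, 0.91, 0.81, 0.84],
    [0.81, 0.79, 0.85, 0.1, 0.2, 0.61, 0.91],
    [0.3, 0.1, 0.24, 0.87, 0.62, 1.0, 0.0],
])

boundaries = []
for channel in preds > 0.5:                # binarize each label channel
    labeled, n = ndimage.label(channel)    # label consecutive runs of True
    # find_objects gives one slice per run; record [start, stop - 1] per object
    boundaries.append([[s.start, s.stop - 1] for (s,) in ndimage.find_objects(labeled)])

print(boundaries)  # [[[4, 6]], [[0, 2], [5, 6]], [[3, 5]]]
</code></pre>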
|
python|numpy|machine-learning|computer-vision
| 0 |
1,905,907 | 46,043,768 |
Same optimization code different results on different computers
|
<p>I am running nested optimization code.</p>
<pre><code>sp.optimize.minimize(fun=A, x0=D, method="SLSQP", bounds=(E), constraints=({'type':'eq','fun':constrains}), options={'disp': True, 'maxiter':100, 'ftol':1e-05})
sp.optimize.minimize(fun=B, x0=C, method="Nelder-Mead", options={'disp': True})
</code></pre>
<p>The first minimization is the part of the function B, so it is kind of running inside the second minimization.</p>
<p>And the whole optimization is based on the data, there's no random number involved.</p>
<p>I run the exactly same code on two different computers, and get the totally different results.</p>
<p>I have installed different versions of Anaconda, but scipy, numpy, and all the packages used have the same versions.</p>
<p>I don't really think OS would matter, but one is windows 10 (64bit), and the other one is windows 8.1 (64 bit)</p>
<p>I am trying to figure out what might be causing this.</p>
<p>Even though I did not state the whole options, if two computers are running the same code, shouldn't the results be the same? </p>
<p>or are there any options for sp.optimize that default values are set to be different from computer to computer?</p>
<p>PS. I was looking at the option "eps". Is it possible that default values of "eps" are different on these computers? </p>
|
<p>You should never expect numerical methods to perform identically on different devices; or even different runs of the same code on the same device. Due to the finite precision of the machine you can never calculate the "real" result, but only numerical approximations. During a long optimization task these differences can sum up.</p>
<p>Furthermore, some optimization methods use some kind of randomness on the inside to solve the problem of being stuck in local minima: they add a small, almost vanishing noise to the previously calculated solution to allow the algorithm to converge faster to the global minimum and not get stuck in a local minimum or a saddle point.</p>
<p>Can you try to plot the landscape of the function you want to minimize? This can help you to analyze the problem: If both of the results (on each machine) are local minima, then this behaviour can be explained by my previous description.</p>
<p>If this is not the case, you should check the version of <code>scipy</code> you have installed on both machines. Maybe you are implicitly using <code>float</code> values on one device and <code>double</code> values on the other one, too?</p>
<p>You see: there are a lot of possible explanations for this (at the first look) strange numerical behaviour; you have to give us more details to solve this.</p>
|
python|optimization|scipy|minimization
| 0 |
1,905,908 | 45,986,266 |
Is it possible to design GUI with HTML+CSS+JavaScript but it will actually run python script?
|
<p>I've built a very simple assistant app in python which can do very basic tasks like taking notes, reminding you, a stopwatch, a timer, web scraping for news feeds, etc. tkinter seems confusing and looks oldish to me. On the other hand, CSS/JS seems much easier for designing the GUI side and way more elegant looking. Is it possible to design a desktop GUI app (maybe with Electron?) using HTML+CSS+JavaScript that will still run my old python code?</p>
<p>I've been coding for only two months and I suck at it. Please excuse my newbiness.</p>
<p>TL;DR: Simply, I want to make the GUI side using HTML+CSS+JavaScript to take user input, but then it will run python scripts and show the output in the GUI app. Is it possible?</p>
|
<p>The popular form of JavaScript, ES6 (which you are talking about), is designed to run in the browser, so the limitation is that it can only make calls via the browser; it cannot directly interact with the OS the way python's <code>os</code> module can. This means you will need a web service on your computer that runs the specific python code and returns the responses. That requires a web framework, preferably a Python one such as Django or Flask, which will run the python script for you because it can make OS calls on the server machine. Other non-Python web services are capable of this too, but of course the natural preference here would be Python-based services. </p>
<p><b>Sidenote</b>:
If you were using Node.js (i.e. server-side JS) rather than ES6 in the browser, you would have an upper hand: you could invoke python scripts on your server directly, because Node.js, like the Python-based web servers, supports OS calls. </p>
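<p>For illustration, a minimal Flask sketch of such a backend that an HTML/CSS/JS front end could call (the <code>/run-task</code> endpoint and its logic are hypothetical placeholders for your existing python code):</p>
<pre class="lang-python prettyprint-override"><code># Minimal, illustrative Flask backend; replace the placeholder logic
# inside run_task() with your existing python code.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/run-task', methods=['POST'])
def run_task():
    payload = request.get_json()                      # input sent from the JS front end
    result = 'did something with {}'.format(payload)  # call your python code here
    return jsonify({'result': result})

if __name__ == '__main__':
    app.run(port=5000)
</code></pre>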
|
javascript|python|html|css|user-interface
| 0 |
1,905,909 | 33,454,555 |
Bind relative image path to *.py file
|
<p>I'm trying to create a PyQt application that contains a directory with images. The problem is that if I load an image using its relative path, the result depends on the current directory from which the application is started.</p>
<p>E.g I've project with the following structure:</p>
<pre><code>qttest/
├── gui.py
└── icon.png
</code></pre>
<p>gui.py source:</p>
<pre><code>import sys
from PyQt5.QtGui import QPixmap
from PyQt5.QtWidgets import QApplication, QMainWindow, QLabel
app = QApplication(sys.argv)
main = QMainWindow()
lbl = QLabel()
lbl.setPixmap(QPixmap("icon.png"))
main.setCentralWidget(lbl)
main.show()
sys.exit(app.exec_())
</code></pre>
<p>Now if I launch it from the project directory with <code>python3 gui.py</code> it will successfully load the image and display it, but if I launch it from the parent directory with <code>python3 qttest/gui.py</code> it won't work.</p>
<p>Is there are any way to bind image to source file where it's used without hardcoding full image path?</p>
|
<pre><code>import os
path = os.path.dirname(os.path.abspath(__file__))
lbl.setPixmap(QPixmap(os.path.join(path, 'icon.png')))
</code></pre>
|
python|qt|pyqt
| 4 |
1,905,910 | 73,663,984 |
Roles Required to write to Cloud Storage (GCP) from python (pandas)
|
<p>I have a question for the GCP connoisseurs among you.</p>
<p>I have an issue that I can upload to a bucket via UI and <code>gsutil</code> - but if I try to do this via python</p>
<pre><code>df.to_csv('gs://BUCKET_NAME/test.csv')
</code></pre>
<p>I get a 403 insufficient permission error.</p>
<p>My guess at the moment is that python does this via an API and requires an extra role. To make things more confusing, I am already <strong>project owner</strong> of the bucket's project, and compared to other team members I did not find any permissions lacking for this specific bucket.</p>
<p>I use python 3.9.1 via pyenv and pandas '1.4.2'</p>
<p>Anyone had the same issue/ knows what role I am missing?</p>
<ol>
<li>I checked that I have, in principle, the rights to upload both via the UI and gsutil</li>
<li>I used the same virtual python environment to read and write from BigQuery, to check that I can in principle use GCP data in python - this works</li>
<li>I have the following Roles on the Bucket
Storage Admin, Storage Object Admin, Storage Object Creator, Storage Object Viewer</li>
</ol>
|
<p><code>gsutil</code> and <code>gcloud</code> share credentials.</p>
<p>These credentials are <strong>not</strong> shared with other code running locally.</p>
<p>The quick-fix but sub-optimal solution is to:</p>
<pre class="lang-bash prettyprint-override"><code>gcloud auth application-default login
</code></pre>
<p>And run the code again.</p>
<p>It will then use your <code>gcloud</code> (<code>gsutil</code>) user credentials configured to run as if you were using a Service Account.</p>
<p>These credentials are stored (on Linux) in <code>${HOME}/.config/gcloud/application_default_credentials.json</code>.</p>
<p>A better solution is to create a Service Account specifically for your app and grant it the minimal set of IAM permissions that it will need (BigQuery, GCS, ...).</p>
<p><strong>For testing purposes</strong> (!) you can download the Service Account key locally.</p>
<p>You can then auth your code using Google's <a href="https://cloud.google.com/docs/authentication/provide-credentials-adc" rel="nofollow noreferrer">Application Default Credentials (ADC)</a> by (on Linux):</p>
<pre class="lang-bash prettyprint-override"><code>export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/key.json
python3 your_app.py
</code></pre>
<p>When you deploy code that leverages ADC to a Google Cloud compute service (Compute Engine, Cloud Run, ...), it can be deployed unchanged because the credentials for the compute resource will be automatically obtained from the Metadata service.</p>
<p>You can Google e.g. "Google IAM BigQuery" to find the documentation that lists the roles:</p>
<ul>
<li><a href="https://cloud.google.com/bigquery/docs/access-control-basic-roles" rel="nofollow noreferrer">IAM roles for BigQuery</a></li>
<li><a href="https://cloud.google.com/storage/docs/access-control/iam-roles" rel="nofollow noreferrer">IAM roles for Cloud Storage</a></li>
</ul>
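<p>For completeness, a hedged alternative on the pandas side: recent pandas versions forward <code>storage_options</code> to gcsfs, so a service-account key can be passed explicitly (the bucket name and key path below are placeholders):</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({"a": [1, 2]})  # toy frame just for illustration

# storage_options is forwarded to gcsfs; "token" accepts a service-account key file.
df.to_csv(
    "gs://BUCKET_NAME/test.csv",
    storage_options={"token": "/path/to/your/key.json"},
)
</code></pre>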
|
python|pandas|google-cloud-platform|google-cloud-storage
| 4 |
1,905,911 | 13,018,115 |
matplotlib savefig image size with bbox_inches='tight'
|
<p>I have to make a vector plot and I want to just see the vectors without the axes, titles etc so here is how I try to do it:</p>
<pre><code>pyplot.figure(None, figsize=(10, 16), dpi=100)
pyplot.quiver(data['x'], data['y'], data['u'], data['v'],
pivot='tail',
units='dots',
scale=0.2,
color='black')
pyplot.autoscale(tight=True)
pyplot.axis('off')
ax = pyplot.gca()
ax.xaxis.set_major_locator(pylab.NullLocator())
ax.yaxis.set_major_locator(pylab.NullLocator())
pyplot.savefig("test.png",
bbox_inches='tight',
transparent=True,
pad_inches=0)
</code></pre>
<p>and despite my efforts to have an image 1000 by 1600 I get one 775 by 1280. How do I make it the desired size?
Thank you.</p>
<p><strong>UPDATE</strong> The presented solution works, except in my case I also had to manually set the axes limits. Otherwise, matplotlib could not figure out the "tight" bounding box.</p>
|
<pre><code>import matplotlib.pyplot as plt
import numpy as np
sin, cos = np.sin, np.cos
fig = plt.figure(frameon = False)
fig.set_size_inches(5, 8)
ax = plt.Axes(fig, [0., 0., 1., 1.], )
ax.set_axis_off()
fig.add_axes(ax)
x = np.linspace(-4, 4, 20)
y = np.linspace(-4, 4, 20)
X, Y = np.meshgrid(x, y)
deg = np.arctan(Y**3-3*Y-X)
plt.quiver(X, Y, cos(deg), sin(deg), pivot='tail', units='dots', color='red')
plt.savefig('/tmp/test.png', dpi=200)
</code></pre>
<p>yields</p>
<p><img src="https://i.stack.imgur.com/vmLrM.png" alt="enter image description here" /></p>
<p>You can make the resultant image 1000x1600 pixels by setting the figure to be 5x8 inches</p>
<pre><code>fig.set_size_inches(5, 8)
</code></pre>
<p>and saving with <code>DPI=200</code>:</p>
<pre><code>plt.savefig('/tmp/test.png', dpi=200)
</code></pre>
<p>The code to remove the border was taken from <a href="https://stackoverflow.com/q/8218887/190597">here</a>.</p>
<p>(The image posted above is not to scale since 1000x1600 is rather large).</p>
|
python|matplotlib
| 22 |
1,905,912 | 21,556,097 |
How do I prevent my FOR loop from ending too early?
|
<p>CODE:</p>
<pre><code>tsol = [6,7,8,9,10]
lenth = len(tsol)
for t,tnext in zip(tsol[0:lenth],tsol[1:lenth]):
print t,tnext
</code></pre>
<p>RESULTS:</p>
<p>6,7 <br>
7,8 <br>
8,9 <br>
9,10 <br>
and t value "10" is missing</p>
|
<p>You want to use function <a href="http://docs.python.org/2/library/itertools.html#itertools.izip_longest" rel="nofollow"><code>itertools.izip_longest</code></a>:</p>
<pre><code>from itertools import izip_longest
for t,tnext in izip_longest(tsol[0:lenth],tsol[1:lenth]):
print t,tnext
</code></pre>
<p>Output:</p>
<pre><code>6 7
7 8
8 9
9 10
10 None
</code></pre>
<p>If you want to use a placeholder value different from <code>None</code> you can specify the <code>fillvalue</code> keyword argument:</p>
<pre><code>izip_longest(tsol[0:lenth],tsol[1:lenth], fillvalue="whatever")
</code></pre>
<p>Output:</p>
<pre><code>6 7
7 8
8 9
9 10
10 whatever
</code></pre>
|
python|for-loop
| 7 |
1,905,913 | 24,691,920 |
Raising exception during SQLite database connection in Python
|
<p>Here is my code snippet:-</p>
<pre><code>import sqlite3
database = "sample.db"
def dbConnection(database):
try:
connection = sqlite3.connect(database)
db_cursor = connection.cursor()
db_cursor.execute("show tables;")
rows = db_cursor.fetchall()
for row in rows:
print row
connection.close()
except sqlite3.Error, e:
print "Error in connection",e
dbConnection("enb.db")
</code></pre>
<p>It is raising this exception:-</p>
<pre><code>Error in connection near "show": syntax error
</code></pre>
<p>I can't see anything wrong with the syntax, as I just want to view the tables in the database. What could be the problem here? Thanks</p>
|
<p>"SHOW TABLES" is not supported by SQLite.
It is valid for other databases such as MySQL.</p>
<p><a href="http://sqlite.org/lang.html" rel="nofollow noreferrer">SQLite sql reference</a></p>
<p><a href="https://stackoverflow.com/questions/82875/how-do-i-list-the-tables-in-a-sqlite-database-file">How to 'show tables' in SQLite</a></p>
|
python|database|sqlite
| 1 |
1,905,914 | 24,664,718 |
How to read a dictionary from a file in Python 2.7?
|
<p>I have a problem with keeping user totals in Python. I've searched and tried many things with no success, which brings me here. I want to be able to store user totals in a file and retrieve them as needed. I have been using <code>json.dump()</code> to write the info, and I tried <code>json.load()</code>, but I am not able to retrieve one specific value, e.g. the balance of user2123 rather than everyone's. So basically, I need to know what to call the <code>json.load()</code> result so I can do <code>nameofdictionary[user2123]</code> and get their balance. I don't think my current code would help any, but if you need it, just let me know. Thanks a bunch!</p>
<pre><code>#gets the username
combine=[{str(signup):0}]
json.dump(combine,open('C:\Users\Joshua\Desktop\Balances.txt','a'))
#stuff that doesn't matter
print 'Withdrawing %s dollars... '%howmuchwd
json.load(open('C:\Users\Joshua\Desktop\Database.txt'))
print 'You now have %s dollars' %Idkwhattocallit
</code></pre>
<p>The file looks like this:
[{"12": 0}][{"123": 0}]</p>
|
<p>You are not assigning the return-value (a dictionary) of <code>json.load</code> to a variable. Actually you are not doing anything with the return value :)</p>
<p>You can do </p>
<pre><code>d = json.load(open('C:\Users\Joshua\Desktop\Database.txt'))
print d['user2123']
</code></pre>
<p>Or if you don't need the dictionary after checking 'user2123':</p>
<pre><code>print json.load(open('C:\Users\Joshua\Desktop\Database.txt'))['user2123']
</code></pre>
<p>Demo-file <code>Database.txt</code>:</p>
<pre><code>{"userXYZ":"3.50", "user2123":"42"}
</code></pre>
<p>Python-Demo:</p>
<pre><code>>>> import json
>>> with open('Database.txt') as f:
... print(json.load(f)['user2123'])
...
42
</code></pre>
<p>Edit:</p>
<p>Sorry, I overlooked this issue: The content of your file </p>
<pre><code>[{"12": 0}][{"123": 0}]
</code></pre>
<p>is <strong>not</strong> valid JSON. Valid JSON would look like this:</p>
<pre><code>{"12": 0,"123": 0}
</code></pre>
<p>Assuming that's the content of your file:</p>
<pre><code>>>> with open('Database.txt') as f:
... print(json.load(f)['123'])
...
0
</code></pre>
|
python|json|file|python-2.7|dictionary
| 2 |
1,905,915 | 38,058,844 |
Understanding Fibonacci number generator
|
<p>I'm trying to understand what is happening in this block of code</p>
<pre><code>def enum(seq):
n = 0
for i in seq:
yield n, i
n += 1
def fibonacci():
i = j = 1
while True:
r, i, j = i, j, i + j
yield r
</code></pre>
<p>I have a general understanding of how generators work; I'm just confused about the line:</p>
<pre><code>r, i, j = i, j, i + j
</code></pre>
<p>and what is happening on it. Thanks.</p>
|
<p>It is called 'tuple assignment'.</p>
<p>Often, you encounter multiple assignments like</p>
<pre><code>a = 1
b = 2
c = 3
</code></pre>
<p>This can be rewritten as:</p>
<pre><code>(a, b, c) = (1, 2, 3)
</code></pre>
<p>Or even like this:</p>
<pre><code>a, b, c = 1, 2, 3
</code></pre>
<p>If you want to swap values:</p>
<pre><code>a, b = b, a
</code></pre>
<p>Yes, that also works.</p>
<p>In your case, some swapping and some addition is happening simultaneously. <code>r</code> is the value that will be returned, <code>i</code> is the value to be returned on the next step, <code>j</code> - to be returned afterwards. And what would be returned after <code>j</code>? <code>i + j</code> and so on.</p>
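<p>A short run of the generator from the question shows the tuple assignment in action:</p>
<pre><code>def fibonacci():
    i = j = 1
    while True:
        r, i, j = i, j, i + j   # r gets the old i, i gets the old j, j gets their sum
        yield r

fib = fibonacci()
print([next(fib) for _ in range(7)])   # [1, 1, 2, 3, 5, 8, 13]
</code></pre>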
|
python
| 0 |
1,905,916 | 30,806,553 |
Running flask server, nosetests and coverage
|
<p>I am writing an API in Flask. I have a couple of views that return JSON responses, and I have written some unit tests that check whether those views work properly and return correct data. Then I turned on the coverage plugin for nosetests (in my case nose-cov).</p>
<p>And here's where my problem starts, coverage is not seeing my views as being executed by tests.</p>
<p>First some base code to give you full picture:</p>
<p>My view:</p>
<pre><code>def get_user(uid):
"""Retrieve user.
Args:
uid (url): valid uid value
Usage: ::
GET user/<uid>/
Returns:
obj:
::
{
'data': {
`response.User`,
},
'success': True,
'status': 'get'
}
"""
if not uid:
raise exception.ValueError("Uid is empty")
obj = db_layer.user.get_user(uid=uid)
return {
'data': {
obj.to_dict(), # to_dict is helper method that converts part of orm into dict
},
'success': True,
'status': 'get'
}
</code></pre>
<p>My test:</p>
<pre><code>class TestUserViews(base.TestViewsBase):
def test_get_user(self):
uid = 'some_uid_from_fixtures'
name = 'some_name_from_fixtures'
response = self.get(u'user/uid/{}/'.format(uid))
self.assertEqual(response.status_code, 200)
user_data = json.loads(response.text)['data']
self.assertEqual(name, user_data['username'])
self.assertEqual(uid, user_data['uid'])
def get(self, method, headers=None):
"""
Wrapper around requests.get, reassures that authentication is
sorted for us.
"""
kwargs = {
'headers': self._build_headers(headers),
}
return requests.get(self.get_url(method), **kwargs)
def get_url(self, method):
return '{}/{}/{}'.format(self.domain, self.version, method)
def _build_headers(self, headers=None):
if headers is None:
headers = {}
headers.update({
'X-Auth-Token': 'some-token',
'X-Auth-Token-Test-User-Id': 'some-uid',
})
return headers
</code></pre>
<p>To run test suite I have special shell script that performs few actions for me:</p>
<pre><code>#!/usr/bin/env bash
HOST="0.0.0.0"
PORT="5001"
ENVS="PYTHONPATH=$PYTHONPATH:$PWD"
# start server
START_SERVER="$ENVS python $PWD/server.py --port=$PORT --host=$HOST"
eval "$START_SERVER&"
PID=$!
eval "$ENVS nosetests -s --nologcapture --cov-report html --with-cov"
kill -9 $PID
</code></pre>
<p>After that view is reported as not being executed.</p>
|
<p>Ok guys, 12h later I found a solution. I checked flask, werkzeug, requests, subprocess and the thread lib, only to learn that the problem was somewhere else. The solution was easy, in fact: the bit of code that has to be modified is the execution of server.py. We need to cover it as well and then merge the results generated by server.py with those generated by nosetests. The modified test-runner.sh looks as follows:</p>
<pre><code>#!/usr/bin/env bash
HOST="0.0.0.0"
PORT="5001"
ENVS="COVERAGE_PROCESS_START=$PWD/.apirc PYTHONPATH=$PYTHONPATH:$PWD"
START_SERVER="$ENVS coverage run --rcfile=.apirc $PWD/server.py --port=$PORT --host=$HOST"
eval "$START_SERVER&"
eval "$ENVS nosetests -s --nologcapture --cov-config=.apirc --cov-report html --with-cov"
# this is important bit, we have to stop flask server gracefully otherwise
# coverage won't get a chance to collect and save all results
eval "curl -X POST http://$HOST:$PORT/0.0/shutdown/"
# this will merge results from both coverage runs
coverage combine --rcfile=.apirc
coverage html --rcfile=.apirc
</code></pre>
<p>Where .apirc in my case looks as follows:</p>
<pre><code>[run]
branch = True
parallel = True
source = files_to_cover/
[html]
directory = cover
</code></pre>
<p>The last thing we need to do is to build into our Flask app a view that allows us to gracefully shut down the server. Previously I was brute-killing it with kill -9, and it used to kill not only the server but coverage as well.</p>
<p>Follow this snippet: <a href="http://flask.pocoo.org/snippets/67/" rel="nofollow">http://flask.pocoo.org/snippets/67/</a></p>
<p>And my view is like that:</p>
<pre><code>def shutdown():
if config.SHUTDOWN_ALLOWED:
func = request.environ.get('werkzeug.server.shutdown')
if func is None:
raise RuntimeError('Not running with the Werkzeug Server')
func()
return 'Server shutting down...'
</code></pre>
<p>It is important to use nose-cov instead of the standard coverage plugin, as it uses an rcfile and allows more configuration. In our case <code>parallel</code> is the key. Please note the <code>data_files</code> variable does not work for nose-cov, so you cannot override it in .apirc and have to use the default value.</p>
<p>After all of that, your coverage report will be complete, with valid values.</p>
<p>I hope it going to be helpful for someone out there.</p>
|
python|flask|code-coverage|nosetests
| 3 |
1,905,917 | 30,943,166 |
How do I get compiled python(with cxFreeze) to get the current working directory as the directory where the executable is in?
|
<p>I am working on a mac and I am writing a game in python and pygame that needs some sound files to be present in the same directory where the source code is. The thing is it works when I run the source code through the python interpreter. It doesn't work when I try to run the executable that has been compiled by cxFreeze. I have searched the internet for solutions but found nothing.</p>
<p>When python is interpreted the cwd would be the directory where the source code is in. When it is compiled with cxFreeze and ran through the terminal, the cwd would change to my home directory. This messes things up because I want to make this game portable and I need the sound files to be in the same folder as the executable.</p>
<pre><code>import os
# Load the sound files.
CWDPATH = os.getcwd()
BEEP1 = pygame.mixer.Sound(os.path.join(CWDPATH, 'beep1.wav'))
BEEP2 = pygame.mixer.Sound(os.path.join(CWDPATH, 'beep2.wav'))
BEEP3 = pygame.mixer.Sound(os.path.join(CWDPATH, 'beep3.wav'))
BEEP4 = pygame.mixer.Sound(os.path.join(CWDPATH, 'beep4.wav'))
</code></pre>
<p>This part of the code is what's causing the problem. It runs smoothly when interpreted, but doesn't work when compiled with cxFreeze because <code>os.getcwd()</code> evaluates to the home directory. I have tried putting 'os' in the 'packages' option in the cxFreeze script as well. It doesn't work, and I have worked on this for hours without a solution. How can I get this to work?</p>
|
<p>You can use <code>sys.argv</code>. The first element of the command line arguments is always the program itself:</p>
<pre><code>import os
import sys
CWDPATH = os.path.abspath(os.path.dirname(sys.argv[0]))
</code></pre>
|
python|pygame|cx-freeze
| 3 |
1,905,918 | 40,074,155 |
How to improve distance function in python
|
<p>I am trying to do a classification exercise on email docs (strings containing words). </p>
<p>I defined the distance function as following:</p>
<pre><code>def distance(wordset1, wordset2):
if len(wordset1) < len(wordset2):
return len(wordset2) - len(wordset1)
elif len(wordset1) > len(wordset2):
return len(wordset1) - len(wordset2)
elif len(wordset1) == len(wordset2):
return 0
</code></pre>
<p>However, the accuracy in the end is pretty low (0.8). I guess this is because of the not so accurate distance function. How can I improve the function? Or what are other ways to calculate the "distance" between email docs?</p>
|
<p>One common measure of similarity for use in this situation is the <a href="https://nickgrattan.wordpress.com/2014/02/18/jaccard-similarity-index-for-measuring-document-similarity/" rel="nofollow">Jaccard similarity</a>. It ranges from 0 to 1, where 0 indicates complete dissimilarity and 1 means the two documents are identical. It is defined as </p>
<pre><code>wordSet1 = set(wordSet1)
wordSet2 = set(wordSet2)
sim = len(wordSet1.intersection(wordSet2))/len(wordSet1.union(wordSet2))
</code></pre>
<p>Essentially, it is the ratio of the intersection of the sets of words to the union of the sets of words. This helps control for emails that are of different sizes while still giving a good measure of similarity.</p>
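<p>A short, self-contained example with two toy "emails" (the word lists are made up):</p>
<pre><code>def jaccard(words1, words2):
    words1, words2 = set(words1), set(words2)
    return len(words1.intersection(words2)) / float(len(words1.union(words2)))

doc1 = "please review the attached report".split()
doc2 = "please review the report by friday".split()
print(jaccard(doc1, doc2))  # 0.571... (4 shared words out of 7 distinct)
</code></pre>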
|
python|distance|knn
| 2 |
1,905,919 | 51,755,454 |
Syntax error when importing an excel file into a jupyter notebook
|
<p>I'm having trouble importing an excel file into a Jupyter notebook. I keep getting a syntax error on the last quotation mark. What am I doing wrong?</p>
<pre><code>import pandas as pd
data = pd.read_excel(r "C:\Users\kabir\Documents\Data Science Projects\datasets\Body Fat Data_2016.xls")
File "<ipython-input-13-94de2d4a3dec>", line 2
data = pd.read_excel(r "C:\Users\kabir\Documents\Data Science Projects\datasets\Body Fat Data_2016.xls")
^
SyntaxError: invalid syntax
</code></pre>
|
<p>Try this: </p>
<pre><code>import pandas as pd
data = pd.read_excel(r"C:\Users\kabir\Documents\Data Science Projects\datasets\Body Fat Data_2016.xls")
</code></pre>
<p>You can't put a space between the <code>r</code> and the <code>"</code> for a raw string.</p>
|
python-3.x|jupyter-notebook
| 0 |
1,905,920 | 51,752,140 |
One line string iterator; AttributeError: 'generator' object has no attribute 'replace'
|
<p>Right now this works: </p>
<pre><code>for x in clean_spaces:
final_qry = final_qry.replace(x, ' ')
</code></pre>
<p><code>clean_spaces</code> is a list of multi-character whitespace values extracted by a regex group (<code>\s\s+</code>), each to be replaced in the output string with a single space. </p>
<p><code>final_qry</code> is a SQL query being modified by other actions.</p>
<p>I'd liked to have a cleaner one-line statement like so:</p>
<pre><code>final_qry = (final_qry.replace(x, ' ') for x in clean_spaces)
</code></pre>
<p>However when trying to use that and a similar statement after it, I get the following error:</p>
<pre><code>AttributeError: 'generator' object has no attribute 'replace'
</code></pre>
<p>Is there a way to make the for...in...:...\n...[action] statement into one line without this error?</p>
|
<p>After</p>
<blockquote>
<p>final_qry = (final_qry.replace(x, ' ') for x in clean_spaces)</p>
</blockquote>
<p><code>final_qry</code> is no longer the SQL query it used to be, but a generator expression.</p>
<p>Consider this (a bit simplified) session:</p>
<pre><code>$ python
>>> clean_spaces="cd"
>>> final_qry="abcdefg"
>>> final_qry
'abcdefg'
>>> final_qry = (final_qry.replace(x, ' ') for x in clean_spaces)
>>> final_qry
<generator object <genexpr> at 0x7f85d2828960>
>>> list(final_qry)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <genexpr>
AttributeError: 'generator' object has no attribute 'replace'
</code></pre>
<p>I think you won't be able to solve your problem this way. Use the solution proposed by @laylog:</p>
<pre><code>final_qry = reduce((lambda x, y: x.replace(y, ' ')), clean_spaces, final_qry)
</code></pre>
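<p>On Python 3, <code>reduce</code> has to be imported from <code>functools</code>. A self-contained sketch with made-up inputs:</p>
<pre><code>from functools import reduce

final_qry = 'SELECT  *   FROM t'
clean_spaces = ['   ', '  ']   # example regex matches, longest first

final_qry = reduce(lambda q, x: q.replace(x, ' '), clean_spaces, final_qry)
print(final_qry)   # SELECT * FROM t
</code></pre>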
|
python|python-3.x
| 0 |
1,905,921 | 51,740,700 |
bitwise operation and Two's complement in python
|
<p>I have an array of (short) data which is shifted 4 bits to the left, and it is also a signed number. I need to plot it around zero.</p>
<p>For instance: if the number in the array is <code>0b0000000011111111</code> and I shift it to the right by 4, I will get <code>0b000000001111</code>. It is fine. </p>
<p>For instance: if the number in the array is <code>0b100011111111</code> and I shift it to the right by 4, I will get <code>0b000010001111</code>. It is fine, but now it is not a negative number.</p>
<p>can someone help ?</p>
|
<p>You have to write your own implementation of the arithmetic right shift of a 16-bit value, if this is what you need. I suggest this one, which is very easy to understand: </p>
<pre><code>def arithmetic_right_shift_on_16_bits(val, n):
# Get the sign bit
s = val & 0x8000
# Perform the shifts, padding with the sign bit
for _ in range(n):
val >>= 1
val |= s
return val
a = arithmetic_right_shift_on_16_bits(0b0000000011111111, 4)
print(bin(a)) # 0b1111
b = arithmetic_right_shift_on_16_bits(0b1000000011111111, 4)
print(bin(b)) # 0b1111100000001111
</code></pre>
|
python|twos-complement
| 1 |
1,905,922 | 59,606,228 |
Sparse matrix in LSTM
|
<p>I am building a small LSTM model for binary classification with <code>tf-idf</code> transformation. And I am getting this warning and it's taking a long time to train:</p>
<blockquote>
<p><strong>UserWarning:</strong> Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "</p>
</blockquote>
<p>My code looks like this </p>
<pre><code>xtrain_tfv = tfv.transform(xtrain)
xvalid_tfv = tfv.transform(xvalid)
</code></pre>
<pre><code>model = Sequential()
model.add(Embedding(xtrain_tfv.shape[0], 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(xtrain_tfv, ytrain,
batch_size=32,
epochs=15,
validation_data=(xvalid_tfv, yvalid))
score, acc = model.evaluate(xtest_tfv, ytest,
batch_size=32)
</code></pre>
<p><code>xtrain_tfv</code> has a shape of <code>(6851, 9122)</code>. How do I handle this?</p>
|
<p>You can ignore the warning if this conversion is not what is causing the slowness of your model. </p>
<p>The warning can be explained as below. </p>
<pre><code># dense
array = ['a', None, None, 'c']
# sparse
array = [(0, 'a'), (3, 'c')]
</code></pre>
<p>So as you can see, if you have a lot of empty entries a sparse array will be much more efficient than a dense one, but if all entries are filled in, dense is far more efficient. In your case, somewhere in the TensorFlow graph a sparse array is being converted to a dense one of indeterminate size. The warning is just saying that it is possible to waste a lot of memory like this. But it might not be a problem at all if the sparse array is not too big / already quite dense. </p>
<p>You have mentioned that you have solved the issue using one-hot encoding. But if your vocabulary size is large, one-hot encoding is not suggested since it creates a sparse matrix of large dimension. </p>
<p>You can follow the below method to convert your text data into sequences of same length. </p>
<pre><code>import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
# prepare tokenizer
t = Tokenizer()
t.fit_on_texts(sentences)
vocab_size = 10000 # depends on your data
# integer encode the documents
encoded_docs = t.texts_to_sequences(sentences)
max_length = max_len # the sequence length you want to pad/truncate to
X = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
</code></pre>
<p>For the embedding layer, you can use pre-trained word embeddings, which will create a dense vector. You can follow <a href="https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_lstm_imdb#create_sample_embeddings" rel="nofollow noreferrer">TensorFlow's documentation</a> for the same.</p>
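<p>A hedged sketch of how the padded integer sequences would then feed the model from the question (layer sizes are the question's values; <code>vocab_size</code>, <code>max_length</code>, <code>X</code> and <code>ytrain</code> come from the snippets above):</p>
<pre class="lang-python prettyprint-override"><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential()
# input_dim is the vocabulary size, not the number of training rows
model.add(Embedding(input_dim=vocab_size, output_dim=128, input_length=max_length))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(X, ytrain, batch_size=32, epochs=15)   # X from pad_sequences above
</code></pre>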
|
python|tensorflow|keras
| 0 |
1,905,923 | 69,089,390 |
How to position text in a matplotlib plot without mentioning any coordinates?
|
<p>My plot is very dynamic and the function generating the plot has inputs that make the plot's scale vary. I need to position text outside the plot and do not want to provide any coordinates.</p>
<p>I am using the ax.text() function, with horizontal and vertical alignment. Right now I am using the maximum and minimum limits of the axis, divided by 4 times the number of ticks, to find the x-coordinate at which to place the text. But as the scale of the plot varies, the position gets shifted. That's what I am trying to fix.</p>
<pre><code>ax.text(
date_max+xaxis_shift,
thresh + 0.01 * y_range,
"Trendline surpasses threshold: " + x_cross.strftime("%Y-%m-%d"),
{"color": "r", "fontsize": 10},
ha="right",
va="bottom",
rotation="vertical",
)
</code></pre>
|
<p>If you are only plotting a single axes per figure, you may want to use <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.figtext.html" rel="nofollow noreferrer"><code>figtext</code></a> instead of <code>text</code>; that way you can hard-code figure coordinates and don't have to think about data coordinates.</p>
|
python|matplotlib|plot
| 1 |
1,905,924 | 67,209,437 |
Selecting Rows in Pandas within a certain value radius
|
<p>This is my first question so I don't know how to phrase it properly.
My question is that I have two lists, and I need to select the values (or the indices of those values) from the first list that lie within a certain region/radius of the values in the second list. For example, I have:</p>
<pre><code>a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0]
b = [1.1, 2.2, 3.3, 7.7, 8.8, 9.9]
radius = 0.3
</code></pre>
<p>And as a result I want to extract the values from "a" and get an output of the values like so:</p>
<pre><code>1, 2, 3, 8, 9, 10
</code></pre>
<p>To be more precise, I have a large .csv file whose columns I read with pandas, and I want to create a new .csv file with only the values (or the indices of those values) for which the conditions are met.</p>
<p>(I don't think the data of these .csv files is important at the moment, but I will upload them if necessary)</p>
|
<p>This should work, however it is inefficient.</p>
<pre><code>a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0]
b = [1.1, 2.2, 3.3, 7.7, 8.8, 9.9]
radius = 0.3
gdlist=[]
for itera in a:
for ele in [f for f in b if (itera>= (f-radius) and itera<= (f+radius))]:
gdlist.append(itera)
</code></pre>
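<p>A vectorised alternative with numpy, sketched on the example lists above:</p>
<pre><code>import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0])
b = np.array([1.1, 2.2, 3.3, 7.7, 8.8, 9.9])
radius = 0.3

# keep every value of a that lies within `radius` of at least one value of b
mask = (np.abs(a[:, None] - b[None, :]) <= radius).any(axis=1)
print(a[mask])            # [ 1.  2.  3.  8.  9. 10.]
print(np.where(mask)[0])  # indices: [0 1 2 7 8 9]
</code></pre>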
|
python|pandas|dataframe
| 0 |
1,905,925 | 36,435,236 |
Python decorator function execution
|
<p>I have the decorator demonstration code below. If I execute it without explicitly calling the <code>greet</code> function, it executes the <code>print</code> statement inside the decorator function and outputs <code>Inside decorator</code>.</p>
<p>I am unable to understand this behavior of the decorator. How is <code>time_decorator</code> called even though I didn't call the <code>greet</code> function?</p>
<p>I am using Python 3.</p>
<pre><code>def time_decorator(original_func):
print('Inside decorator')
def wrapper(*args, **kwargs):
start = time.clock()
result = original_func(*args, **kwargs)
end = time.clock()
print('{0} is executed in {1}'.format(original_func.__name__, end-start))
return result
return wrapper
@time_decorator
def greet(name):
return 'Hello {0}'.format(name)
</code></pre>
|
<p>Decorators are called at <strong>start</strong> time <em>(when the python interpreter reads the code as the program starts)</em>, not at <strong>runtime</strong> <em>(when the decorated function is actually called)</em>.</p>
<p>At runtime, it is the wrapped function <code>wrapper</code> which is called and which itself calls the decorated function and returns its result.</p>
<p>So this is totally normal that the <code>print</code> line gets executed.</p>
<p>If, for example, you decorate 10 functions, you will see the print output 10 times. No need to even call the decorated functions for this to happen.</p>
<p>Move the <code>print</code> inside <code>wrapper</code> and this won't happen anymore.</p>
<p><code>Decorators</code> as well as <code>metaclasses</code> are part of what is called <strong>meta-programming</strong> <em>(modify / create code, from existing code)</em>. This is a really fascinating aspect of programming which takes time to understand but offers amazing possibilities.</p>
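<p>A minimal sketch of the same decorator with the <code>print</code> moved inside <code>wrapper</code>: nothing is printed until <code>greet</code> is actually called (<code>time.perf_counter</code> is used here in place of <code>time.clock</code>, which newer Python versions removed):</p>
<pre><code>import time

def time_decorator(original_func):
    def wrapper(*args, **kwargs):
        print('Inside decorator')    # now runs once per call, not at definition time
        start = time.perf_counter()
        result = original_func(*args, **kwargs)
        end = time.perf_counter()
        print('{0} is executed in {1}'.format(original_func.__name__, end - start))
        return result
    return wrapper

@time_decorator
def greet(name):
    return 'Hello {0}'.format(name)

greet('world')   # only now do the two print statements run
</code></pre>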
|
python|decorator
| 13 |
1,905,926 | 36,651,719 |
Strip nested dict of non zero values
|
<p>I'm trying to strip a nested dict (only 1 level deep, e.g. <code>some_dict = {'a':{}, 'b':{}}</code>) of all zero and None values.<br>
However I'm not sure how to reassemble the dict properly; the code below gives me a key error. </p>
<pre><code>def strip_nested_dict(self, some_dict):
new_dict = {}
for sub_dict_key, sub_dict in some_dict.items():
for key, value in sub_dict.items():
if value:
new_dict[sub_dict_key][key] = value
return new_dict
</code></pre>
|
<p>You need to create the nested dictionary before accessing it:</p>
<pre><code>for sub_dict_key, sub_dict in some_dict.items():
new_dict[sub_dict_key] = {} # Add this line
for key, value in sub_dict.items():
# no changes
</code></pre>
<p><em>(In order for <code>new_dict[sub_dict_key][key]</code> to work, <code>new_dict</code> must be a dictionary, & <code>new_dict[sub_dict_key]</code> also has to be a dictionary.)</em></p>
|
python|dictionary
| 1 |
1,905,927 | 13,505,819 |
Python Split path recursively
|
<p>I am trying to split a path given as a string into sub-parts using the "/" as a delimiter <strong>recursively</strong> and passed into a tuple. For ex: "E:/John/2012/practice/question11" should be ('E:', 'John', '2012', 'practice', 'question11').</p>
<p>So I've passed every character excluding the "/" into a tuple, but that is not how I want the sub-parts joined, as displayed in the example. This is a practice question from homework and I would appreciate help, as I am trying to learn recursion.</p>
<p>Thank You so much</p>
|
<p>Something like this</p>
<pre><code>>>> import os
>>> s = "E:/John/2012/practice/question11"
>>> os.path.split(s)
('E:/John/2012/practice', 'question11')
</code></pre>
<p>Notice <code>os.path.split()</code> doesn't split up the whole path as <code>str.split()</code> would</p>
<pre><code>>>> def rec_split(s):
... rest, tail = os.path.split(s)
... if rest == '':
... return tail,
... return rec_split(rest) + (tail,)
...
>>> rec_split(s)
('E:', 'John', '2012', 'practice', 'question11')
</code></pre>
<p>Edit: Although the question was about Windows paths. It's quite easy to modify it for unix/linux paths including those starting with "/"</p>
<pre><code>>>> def rec_split(s):
... rest, tail = os.path.split(s)
... if rest in ('', os.path.sep):
... return tail,
... return rec_split(rest) + (tail,)
</code></pre>
|
python|recursion|path|split|tuples
| 11 |
1,905,928 | 57,933,822 |
Python Regex: Match paragraph numbers
|
<p>I am attempting to match paragraph numbers inside my block of text. Given the following sentence:</p>
<blockquote>
<p>Refer to paragraph C.2.1a.5 for examples.</p>
</blockquote>
<p>I would like to match the word <code>C.2.1a.5</code>.</p>
<p>My current code like so:</p>
<pre><code>([0-9a-zA-Z]{1,2}\.)
</code></pre>
<p>Only matches <code>C.2.1a.</code> and <code>es.</code>, which is not what I want. Is there a way to match the full <code>C.2.1a.5</code> and not match <code>es.</code>?</p>
<p><a href="https://regex101.com/r/cO8lqs/13723" rel="nofollow noreferrer">https://regex101.com/r/cO8lqs/13723</a></p>
<p>I have attempted to use <code>^</code> and <code>$</code>, but doing so returns no matches.</p>
|
<p>You should use following regex to match the paragraph numbers in your text.</p>
<pre><code>\b(?:[0-9a-zA-Z]{1,2}\.)+[0-9a-zA-Z]\b
</code></pre>
<p><a href="https://regex101.com/r/cUd6Ep/1" rel="nofollow noreferrer">Try this demo</a></p>
<p>Here is the explanation,</p>
<ul>
<li><code>\b</code> - Matches a word boundary hence avoiding matching partially in a large word like <code>examples.</code></li>
<li><code>(?:[0-9a-zA-Z]{1,2}\.)+</code> - This matches an alphanumeric text with length one or two as you tried to match in your own regex.</li>
<li><code>[0-9a-zA-Z]</code> - Finally the match ends with one alphanumeric character at the end. In case you want it to match one or two alphanumeric characters at the end too, just add <code>{1,2}</code> after it</li>
<li><code>\b</code> - Matches a word boundary again to ensure it doesn't match partially in a large word.</li>
</ul>
<p><strong>EDIT:</strong></p>
<p>As someone pointed out, in case your text has strings like <code>A.A.A.A.A.A.</code> or <code>A.A.A</code> or even <code>1.2</code> and you don't want to match these strings and only want to match strings that has exactly three dots within it, you should use following regex which is more specific in matching your paragraph numbers.</p>
<pre><code>(?<!\.)\b(?:[0-9a-zA-Z]{1,2}\.){3}[0-9a-zA-Z]\b(?!\.)
</code></pre>
<p>This new regex matches only paragraph numbers having exactly three dots and those negative look ahead/behind ensures it doesn't match partially in large string like <code>A.A.A.A.A.A</code></p>
<p><a href="https://regex101.com/r/cUd6Ep/2" rel="nofollow noreferrer">Updated regex demo</a></p>
<p>Check these python sample codes,</p>
<pre><code>import re
s = 'Refer to paragraph C.2.1a.5 for examples. Refer to paragraph A.A.A.A.A.A.A for examples. Some more A.A.A or like 1.22'
print(re.findall(r'(?<!\.)\b(?:[0-9a-zA-Z]{1,2}\.){3}[0-9a-zA-Z]\b(?!\.)', s))
</code></pre>
<p>Output,</p>
<pre><code>['C.2.1a.5']
</code></pre>
<p>Also for trying to use <code>^</code> and <code>$</code>, they are called start and end anchors respectively, and if you use them in your regex, then they will expect matching start of line and end of line which is not what you really intend to do hence you shouldn't be using them and like you already saw, using them won't work in this case.</p>
|
python|regex
| 2 |
1,905,929 | 43,708,788 |
Store boolean variable in url dispatching
|
<p>The following url definition should pass whether <code>results/</code> is present in the url:</p>
<pre><code>url(r'^(?P<question_id>[0-9]+)/(?P<results>(results/)?)shorten/$', views.shorten, name='shorten')
</code></pre>
<p>Currently it passes <code>results/</code> or <code>None</code> which is sufficient for a simple:</p>
<pre><code>if results:
pass
</code></pre>
<p>But it would be more elegant to have <code>True</code> and <code>False</code>. How could this be done?</p>
|
<p>You could have two URL patterns and pass <code>results</code> in the kwargs:</p>
<pre><code>url(r'^(?P<question_id>[0-9]+)/results/shorten/$', views.shorten, {'results': True}, name='shorten'),
url(r'^(?P<question_id>[0-9]+)/shorten/$', views.shorten, {'results': False}, name='shorten'),
</code></pre>
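<p>The extra dictionary is passed to the view as a keyword argument, so the view can take the boolean directly; a minimal sketch using the names from the question:</p>
<pre><code>def shorten(request, question_id, results=False):
    if results:
        pass  # the results/ variant of the URL was matched
</code></pre>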
<p>If you don't want to do this, then there isn't currently a simple way to cast the <code>results</code> string to a boolean value. You could write a middleware or decorator, but that would be overkill.</p>
|
python|regex|django|url
| 11 |
1,905,930 | 54,369,148 |
Authentication issue, Django
|
<p>Sorry for my English language, I'm from RU.
I'm writing some code (adding information about my clients in a database, selecting this info, displaying it on some pages on my website, and adding the function EDIT for information about clients).</p>
<p>On the page with the full information about clients I'm displaying a link to "edit" information about these clients.</p>
<p>It works, okay, but when I'm wrapping template code around the link to edit:</p>
<pre><code>{% if user.is_authentificated %}
<a href.....>edit</a>
{% endif %}
</code></pre>
<p>THE LINK IS NOT DISPLAYED, but I'm logged in! (Going to the admin panel does not ask me to authenticate again.)</p>
<p>Please, tell me what I am doing wrong?</p>
|
<p>If you want to check whether a user is authenticated, Django provides a built-in template attribute. Note that it is spelled <code>user.is_authenticated</code>, not <code>is_authentificated</code> as in your template, which is why the link never shows. Try your code as below:</p>
<pre><code>{% if user.is_authenticated %}
<a href.....>edit</a>
{% endif %}
</code></pre>
<p>I hope it will work.</p>
|
django|django-forms|django-templates|django-authentication|python-3.7
| 0 |
1,905,931 | 54,665,397 |
flask upload CSV file without saving
|
<p>I am trying to put my text classification model into a Flask application: the user uploads a CSV file, the app reads the data without saving the uploaded .csv file, runs it through my classifier model, and prints the result on the result page. Below is example code of my attempt:</p>
<pre><code>@app.route('/', methods=['GET', 'POST'])
def upload_file():
if request.method == 'POST':
file = request.files['file']
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
file.stream.seek(0)
myfile = file.file
dataframe = pd.read_csv(myfile)
return
else:
return "Not Allowed"
return render_template("home.html")
</code></pre>
<p>This is my form </p>
<pre><code><form action="" method=post enctype=multipart/form-data>
<input type=file name="file[]" multiple>
<input type=submit value=Upload>
</form>
</code></pre>
<p>exception occurred here </p>
<pre><code>NameError: name 'allowed_file' is not defined
</code></pre>
<p>Any idea about this kind of issue ?</p>
|
<p>I think you are following the file-upload pattern from this part of the documentation: (<a href="http://flask.pocoo.org/docs/0.12/patterns/fileuploads/" rel="nofollow noreferrer">http://flask.pocoo.org/docs/0.12/patterns/fileuploads/</a>)</p>
<p>But that snippet relies on a helper you also have to define yourself, the <code>allowed_file</code> function:</p>
<pre><code>def allowed_file(filename):
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS
</code></pre>
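<p>You also need to define the extension whitelist the helper checks against, and since you want to skip saving the upload, pandas can read the file-like object directly. A minimal sketch (the allowed set is an assumption):</p>
<pre><code>ALLOWED_EXTENSIONS = {'csv'}

file = request.files['file']
if file and allowed_file(file.filename):
    dataframe = pd.read_csv(file.stream)   # no file.save(...) needed
</code></pre>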
|
python|flask
| 0 |
1,905,932 | 54,460,622 |
python patch object that is under __init__.py
|
<p>I'm writing a test and I want to mock a list that is located in <code>__init__.py</code>, meaning not under a class.
The object reference is:
<code>project/app/management/commands/__init__.py</code></p>
<p>and <code>__init__.py</code> looks something like:</p>
<pre><code>my_list_of_dict = [
{
'name': 'option1',
'vesion': 0,
},
{
'name': 'option1',
'vesion': 0,
}
]
</code></pre>
<p>If it was under a class I would do something like - </p>
<pre><code>@mock.patch.object(Class, 'my_list_of_dict')
</code></pre>
<p>but it isn't the case.</p>
<p>I tried something like</p>
<pre><code>@mock.patch('project.app.management.commands.my_list_of_dict')
def test(self, mock_list):
mock_list.return_value = [{.....}]
</code></pre>
<p>But it didn't work.</p>
<p>EDIT:</p>
<p>Adding info about the test</p>
<p>This is the test: </p>
<pre><code>@mock.patch('project.app.management.commands.my_list_of_dict')
def test_run_command_with_parameters(self, mock_list_of_dict):
mock_list_of_dict.return_value = [
{
'name': 'other_name',
'vesion': 1
}
]
with mock.patch('django.core.management.call_command', return_value=True,
side_effect=None) as call_command_mock:
c = Command()
c.handle()
</code></pre>
<p>This is part of the Command:</p>
<pre><code>from . import my_list_of_dict
class Command(BaseCommand):
def handle(self, *args, **options):
for dict in my_list_of_dict:
.....
</code></pre>
<p>Now, when the test gets to the <code>handle()</code> part, it gets the original value, not the mocked one.</p>
|
<p>Just give the correct path to your <code>commands</code> module.</p>
<p>Edit (first example was wrong sorry):</p>
<p>Suppose that tests are inside <code>project</code>, in a <code>tests</code> folder, maybe in <code>some_tests.py</code> (and the <code>project</code> folder and every subfolder have an <code>__init__.py</code> inside). Also, you invoke the tests for example with <code>python -m unittest discover</code> from the <code>project</code> folder.</p>
<pre><code>from unittest import mock
from unittest import TestCase
import app.management.commands # only needed for 2nd assert
class TestCase1(TestCase):
@mock.patch('app.management.commands.my_list_of_dict')
def test(self, mock_list):
mock_list.return_value = [None]
self.assertEqual(mock_list(), [None])
self.assertEqual(app.management.commands.my_list_of_dict(),
[None])
</code></pre>
<p>This works over here now.
Patch requires the right path with a class or module name followed by the attribute to patch. </p>
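<p>Note that if the command module itself does <code>from . import my_list_of_dict</code>, it keeps its own reference to the list, and a plain <code>return_value</code> has no effect because the list is iterated, not called. A minimal sketch (the command module name <code>mycommand</code> is an assumption) patches the name where it is used and replaces the object itself; with <code>new=</code> no extra mock argument is passed to the test:</p>
<pre><code>@mock.patch('project.app.management.commands.mycommand.my_list_of_dict',
            new=[{'name': 'other_name', 'vesion': 1}])
def test_run_command_with_parameters(self):
    c = Command()
    c.handle()
</code></pre>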
|
python|testing|mocking
| 0 |
1,905,933 | 39,360,926 |
Matching the structure of a list?
|
<p>For example:</p>
<pre><code>A=[1,[2,3],[4,[5,6]],7]
B=[2,3,4,5,6,7,8]
</code></pre>
<p>How can I get <code>[2,[3,4],[5,[6,7]],8]</code>?</p>
|
<p>You could use a pretty simple recursive function:</p>
<pre><code>def match(struct, source):
try:
return [match(i, source) for i in struct]
except TypeError:
return next(source)
A=[1,[2,3],[4,[5,6]],7]
B=[2,3,4,5,6,7,8]
match(A, iter(B))
# [2, [3, 4], [5, [6, 7]], 8]
</code></pre>
<p>Here is a version of the function that might be a little easier for some people to understand:</p>
<pre><code>def match(struct, source, index=0):
if isinstance(struct, list):
r = []
for item in struct:
next, index = match(item, source, index)
r.append(next)
return r, index
else:
return source[index], index + 1
A=[1,[2,3],[4,[5,6]],7]
B=[2,3,4,5,6,7,8]
match(A, B)
</code></pre>
<p>The basic idea is to loop over the input structure depth first, and consume values from source accordingly. When we hit a number, we can simply take one number from source. If we hit a list we need to apply this algorithm to that list. Along the way need to keep track of how many items we've consumed.</p>
<p>The first version of the algorithm does all this, but in a slightly different way. <code>iter(B)</code> creates an iterator that tracks how many items from B have been consumed and provides the next item when I call <code>next(source)</code>, so I don't have to track the index explicitly. The try/except checks to see if I can loop over <code>struct</code>. If I can, a list is returned; if I cannot, the except block gets executed and <code>next(source)</code> is returned.</p>
|
python|list
| 9 |
1,905,934 | 52,718,526 |
How to merge all csv files in a folder to single csv based on columns?
|
<p>Given a folder with multiple csv files with different column lengths,</p>
<p>I have to merge them into a single csv file using python pandas, printing the file name as one column.</p>
<p>Input: <a href="https://www.dropbox.com/sh/1mbgjtrr6t069w1/AADC3ZrRZf33QBil63m1mxz_a?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/sh/1mbgjtrr6t069w1/AADC3ZrRZf33QBil63m1mxz_a?dl=0</a></p>
<p>Output: </p>
<pre><code>Id Snack Price SheetName
5 Orange 55 Sheet1
7 Apple 53 Sheet1
8 Muskmelon 33 Sheet1
11 Orange Sheet2
12 Green Apple Sheet2
13 Muskmelon Sheet2
</code></pre>
|
<p>You can use:</p>
<pre><code>files = glob.glob('files/*.csv')
dfs = [pd.read_csv(fp).assign(SheetName=os.path.basename(fp).split('.')[0]) for fp in files]
df = pd.concat(dfs, ignore_index=True)
print (df)
Id Price SheetName Snack
0 11 NaN Sheet 2 Orange
1 12 NaN Sheet 2 Green Apple
2 13 NaN Sheet 2 Muskmelon
3 5 55.0 Sheet1 Orange
4 7 53.0 Sheet1 Apple
5 8 33.0 Sheet1 Muskmelon
</code></pre>
<p>EDIT:</p>
<pre><code>dfs = []
for fp in files:
df = pd.read_csv(fp).assign(SheetName=os.path.basename(fp).split('.')[0])
#another code
dfs.append(df)
</code></pre>
|
python|pandas
| 1 |
1,905,935 | 47,670,361 |
ReferenceError: weakly-referenced object no longer exists
|
<p>I have two files: <code>test.py</code> and <code>test.kv</code>. When I call the <code>insert_update_account()</code> function from the <code>.kv</code> file, it gives an error:</p>
<pre><code>File "kivy/weakproxy.pyx", line 30, in kivy.weakproxy.WeakProxy.__getattr__ (kivy/weakproxy.c:1144)
File "kivy/weakproxy.pyx", line 26, in kivy.weakproxy.WeakProxy.__ref__ (kivy/weakproxy.c:1043)
ReferenceError: weakly-referenced object no longer exists<br/>
</code></pre>
<p>If I comment the line <code>self.display_account()</code> in <code>insert_update_account()</code> function then there is no error.</p>
<h1>test.py</h1>
<pre><code>import kivy
kivy.require('1.9.0') # replace with your current kivy version !
import sqlite3 as lite
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.properties import BooleanProperty, ListProperty, StringProperty, ObjectProperty, NumericProperty
from kivy.lang import Builder
from kivy.core.window import Window
Window.maximize()
con = lite.connect('test.db')
#con = lite.connect(path + 'fact.db')
con.text_factory = str
cur = con.cursor()
class MainMenu(BoxLayout):
def display_account(self):
self.dropdown.dismiss()
self.remove_widgets()
self.rvaccount = TEST()
self.content_area.add_widget(self.rvaccount)
self.cur.close()
self.con.close()
def insert_update_account(self, obj):
cur.execute("UPDATE table SET test1=?, test2=? WHERE test3=?",
(obj.col_data[1], obj.col_data[2], obj.col_data[0]))
con.commit()
self.display_account()
class TEST(BoxLayout):
data_items = ListProperty([])
col1 = ListProperty()
col2 = ListProperty()
mode = StringProperty("")
def __init__(self, **kwargs):
super(TEST, self).__init__(**kwargs)
self.get_data()
def update(self):
self.col1 = [{'test1': str(x[0]), 'test2': str(x[1]), 'key': 'test1'} for x in self.data_items]
self.col2 = [{'test1': str(x[0]), 'test2': str(x[1]), 'key': 'test2'} for x in self.data_items]
def get_data(self):
cur.execute("SELECT * from table")
rows = cur.fetchall()
print(rows)
i = 0
for row in rows:
self.data_items_city.append([row[0], row[1], i])
i += 1
print(self.data_items_city)
self.update()
class TestApp(App):
title = "test"
def build(self):
self.root = Builder.load_file('test.kv')
return MainMenu()
if __name__ == '__main__':
TestApp().run()
</code></pre>
<p>Can someone help me?</p>
|
<p>It seems to me you are mixing the cursor <code>cur</code> and connection <code>con</code> in your class <code>MainMenu</code>: you've defined them in global scope, and you are also using the same names within the scope of the class, so the variables get mixed freely in your code.</p>
<p>You should try acquiring a connection and cursor explicitly within the MainMenu class. Something like below should make sure that you are acquiring a new connection every time, and that your code is not mixing the variables out of scope.</p>
<pre><code>class MainMenu(BoxLayout):
    def __init__(self, **kwargs):
        super(MainMenu, self).__init__(**kwargs)
        self.con = lite.connect('test.db')
        self.cur = self.con.cursor()
def display_account(self):
self.dropdown.dismiss()
self.remove_widgets()
self.rvaccount = TEST()
self.content_area.add_widget(self.rvaccount)
self.cur.close()
self.con.close()
def insert_update_account(self, obj):
self.cur.execute("UPDATE table SET test1=?, test2=? WHERE test3=?",
(obj.col_data[1], obj.col_data[2], obj.col_data[0]))
self.con.commit()
self.display_account()
</code></pre>
|
python|python-3.x|python-2.7|kivy
| 2 |
1,905,936 | 37,183,948 |
Scrapy can not crawl link - comment of vnexpress website
|
<p>I'm a newbie to Scrapy & Python. I am trying to get the comments from the following URL, but the result is always empty: <a href="http://vnexpress.net/tin-tuc/oto-xe-may/toyota-camry-2016-dinh-loi-tui-khi-khong-bung-3386676.html" rel="nofollow">http://vnexpress.net/tin-tuc/oto-xe-may/toyota-camry-2016-dinh-loi-tui-khi-khong-bung-3386676.html</a></p>
<p>Here is my code :</p>
<pre><code>from scrapy.spiders import Spider
from scrapy.selector import Selector
from tutorial.items import TutorialItem
import logging
class TutorialSpider(Spider):
name = "vnexpress"
allowed_domains = ["vnexpress.net"]
start_urls = [
"http://vnexpress.net/tin-tuc/oto-xe-may/toyota-camry-2016-dinh-loi-tui-khi-khong-bung-3386676.html"
]
def parse(self, response):
sel = Selector(response)
commentList = sel.xpath('//div[@class="comment_item"]')
items = []
id = 0;
logging.log(logging.INFO, "TOTAL COMMENT : " + str(len(commentList)))
for comment in commentList:
item = TutorialItem()
id = id + 1
item['id'] = id
item['mainId'] = 0
item['user'] = comment.xpath('//span[@class="left txt_666 txt_11"]/b').extract()
item['time'] = 'N/A'
item['content'] = comment.xpath('//p[@class="full_content"]').extract()
item['like'] = comment.xpath('//span[@class="txt_666 txt_11 right block_like_web"]/a[@class="txt_666 txt_11 total_like"]').extract()
items.append(item)
return items
</code></pre>
<p>Thanks for reading</p>
|
<p>Looks like the comments are loaded into the page with some JavaScript code.</p>
<p>Scrapy does not execute JavaScript on a page, it only downloads HTML pages. Try opening the page with JavaScript disabled in your browser, and you should see the page as Scrapy sees it.</p>
<p>You have a handful of options:</p>
<ul>
<li>reverse-engineer how the comments are loaded into the page, using your browser's developer tools panel, in "network" tab (it could be some XHR call loading HTML or JSON data)</li>
<li>use a (headless) browser to render the page (selenium, casper.js, splash...);
<ul>
<li>e.g. you may want to try this page with <a href="https://splash.readthedocs.io/" rel="nofollow">Splash</a> (one of the JavaScript rendering options for web scraping). This is the HTML you get back from Splash (it contains the comments): <a href="http://pastebin.com/njgCsM9w" rel="nofollow">http://pastebin.com/njgCsM9w</a>. A minimal request sketch follows this list.</li>
</ul></li>
</ul>
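<p>For the Splash route, a minimal sketch; this assumes the <code>scrapy-splash</code> plugin is installed and configured in <code>settings.py</code>, and everything except the URL and the XPath is illustrative:</p>
<pre><code>from scrapy.spiders import Spider
from scrapy_splash import SplashRequest

class TutorialSpider(Spider):
    name = "vnexpress"

    def start_requests(self):
        url = ("http://vnexpress.net/tin-tuc/oto-xe-may/"
               "toyota-camry-2016-dinh-loi-tui-khi-khong-bung-3386676.html")
        # ask Splash to render the page (and its JavaScript) before parsing
        yield SplashRequest(url, self.parse, args={'wait': 2})

    def parse(self, response):
        # response now contains the JavaScript-rendered HTML
        for comment in response.xpath('//div[@class="comment_item"]'):
            pass  # extract the fields here as in the original spider
</code></pre>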
|
python-2.7|scrapy
| 3 |
1,905,937 | 37,148,414 |
Updating a single value in Firebase with python
|
<p>I am a total newbie when it comes to backend. I am working on a very simple webpage that needs one element to be updated every couple minutes or so. I'd like it to make a request to my Firebase database, get a single integer, and change a number on the webpage to that integer.</p>
<p>Right now I am having trouble updating the Firebase with a simple Python program. Here is what my Firebase looks like every time I run my python script: <a href="https://i.gyazo.com/be5e35cd4b59e7de68a086da680adc04.png" rel="noreferrer">Click</a></p>
<p>When I run the script, it adds 6 new random variables with the value I'd like to send to Firebase. Here is what my code looks like so far:</p>
<pre><code>from firebase import firebase
fb = firebase.FirebaseApplication('https://myAssignedDomain.com/', None)
Result = fb.post('test/coffee', {'percentage': 40})
</code></pre>
<p>What do I need to do in order to only change one existing value in Firebase rather than create 6 new random variables?</p>
|
<p>This is how you can update the value of a particular property with version 1.2 of the firebase Python package:</p>
<pre><code>from firebase import firebase
fb = firebase.FirebaseApplication('https://myAssignedDomain.com/', None)
fb.put('test/asdf',"count",4) #"path","property_Name",property_Value
</code></pre>
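<p>To read that single integer back (for example, for the webpage to poll every couple of minutes), the same package exposes a <code>get</code> call. A minimal sketch, assuming the same <code>test/asdf</code> path and <code>count</code> property:</p>
<pre><code>count = fb.get('test/asdf', 'count')  # returns just the stored value
print(count)
</code></pre>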
|
python|firebase|firebase-realtime-database
| 5 |
1,905,938 | 66,172,503 |
Celery as a service - PermissionError: [Errno 13] Permission denied: '/var/run/celery'
|
<p>I'm trying to create a service that runs <code>celery</code> but I came across permissions problem. I saw in many tutorials that pidfile path is <code>/var/run/celery/%n.pid</code> but it seems like my user doesn't have rights to write to <code>run</code>.</p>
<p>When I start the service, this is what it returns:</p>
<pre><code>PermissionError: [Errno 13] Permission denied: '/var/run/celery'
</code></pre>
<p><strong>celery.service</strong></p>
<pre><code>[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=master
Group=master
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/home/master/myproject/
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
</code></pre>
<p><strong>celery.conf</strong></p>
<pre><code>CELERYD_NODES="celery-worker"
CELERY_BIN="/home/master/.virtualenvs/myproject/bin/celery"
# App instance to use
CELERY_APP="myproject"
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=3"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_LEVEL="INFO"
</code></pre>
<p>How to make it work?</p>
|
<p>I saw the same problem, and it turned out that I had created <code>/var/log/celery</code> and <code>/var/run/celery</code> with root privileges; it looked like this</p>
<pre><code>drwxr-xr-x 2 root root 6 Aug 4 09:40 /var/log/celery
</code></pre>
<p>but in <code>/etc/default/celeryd</code> I specified that my celeryd worker should run as some non-root user, say <code>me</code>, so <code>me</code> had no write/execute permission on <code>/var/log/celery</code> (or <code>/var/run/celery</code>), and it got permission denied.</p>
<p>In your case, either create the <code>/var/log/celery</code> and <code>/var/run/celery</code> directories owned by user <code>master</code> and group <code>master</code>, or grant write/execute permission to user <code>master</code> (simply running <code>chmod o+w</code> on them may be insecure, though I did that in testing and it worked).</p>
|
python|linux|permissions|celery|systemd
| 0 |
1,905,939 | 66,013,819 |
Extract data with regular expresion with python
|
<p>I am trying to extract data from a txt file (see a sample text below) using python. Take into account that the title can be in one single line, split into two lines or even split with a blank line in the middle (TITLE1).</p>
<p>What I would like to achieve is to extract the information to store in a table like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Code</th>
<th>Title</th>
<th>Opening date</th>
<th>Deadline</th>
<th>Budget</th>
</tr>
</thead>
<tbody>
<tr>
<td>TITLE-SDFSD-DFDS-SFDS-01-01</td>
<td>This is the title 1 that is split in two lines with a blank line in the middle</td>
<td>15-Apr-21</td>
<td>26-Aug-21</td>
<td>EUR 20.00 million</td>
</tr>
<tr>
<td>TITLE-SDFSD-DFDS-SFDS-01-02</td>
<td>This is the title2 in one single line</td>
<td>15-Mar-21</td>
<td>17-Aug-21</td>
<td>EUR 15.00 million</td>
</tr>
<tr>
<td>TITLE-SDFSD-DFDS-SFDS-01-03</td>
<td>This is the title3 that is too long and takes two lines</td>
<td>15-May-21</td>
<td>26-Sep-21</td>
<td>EUR 5.00 million</td>
</tr>
</tbody>
</table>
</div>
<p>I manage to identify the "codes titles" with this piece of code:</p>
<pre><code>import re
with open('doubt2.txt','r', encoding="utf-8") as f:
f_contents = f.read()
pattern = re.compile(r'TITLE-.+-[0-9]{2}-[0-9]{2}(?!,)\S{1}')
matches = pattern.finditer(f_contents)
for match in matches:
print(match)
</code></pre>
<p>And I get this result:</p>
<pre><code><re.Match object; span=(160, 188), match='TITLE-SDFSD-DFDS-SFDS-01-01:'>
<re.Match object; span=(669, 697), match='TITLE-SDFSD-DFDS-SFDS-01-02;'>
<re.Match object; span=(1066, 1094), match='TITLE-SDFSD-DFDS-SFDS-01-03:'>
</code></pre>
<p>My doubt is how to get the information that I identified with the regular expression and extract the rest of the data. Can you help me, please?</p>
<blockquote>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam id diam
posuere, eleifend diam at, condimentum justo. Pellentesque mollis a
diam id consequat.</p>
<p>TITLE-SDFSD-DFDS-SFDS-01-01: This is the title 1 that</p>
<p>is split into two lines with a blank line in the middle</p>
<p>Conditions Pellentesque blandit scelerisque pellentesque. Sed nec quam
purus. Quisque nec tellus sed neque accumsan lacinia sit amet sit amet
tellus. Etiam venenatis nibh vel pellentesque elementum. Nullam eget
tortor quam. Morbi sed leo et arcu aliquet luctus.</p>
<p>Opening date 15 Apr 2021</p>
<p>Deadline 26 Aug 2021</p>
<p>Indicative budget: The total indicative budget for the topic is EUR
20.00 million.</p>
<p>TITLE-SDFSD-DFDS-SFDS-01-02; This is the title2 in one single line</p>
<p>Conditions Cras egestas consectetur sapien at dignissim. Maecenas
commodo purus nibh, a tempus augue vestibulum feugiat. Vestibulum
dolor neque, sagittis ut tortor et, lobortis faucibus quam.</p>
<p>Opening date 15 March 2021</p>
<p>Deadline 17 Aug 2021</p>
<p>Indicative budget: The total indicative budget for the topic is EUR
15.00 million.</p>
<p>TITLE-SDFSD-DFDS-SFDS-01-03: This is the title3 that is too long and takes two lines</p>
<p>Conditions Cras egestas consectetur sapien at dignissim. Maecenas
commodo purus nibh, a tempus augue vestibulum feugiat. Vestibulum
dolor neque, sagittis ut tortor et, lobortis faucibus quam.</p>
<p>Opening date 15 May 2021</p>
<p>Deadline 26 Sep 2021</p>
<p>Indicative budget: The total indicative budget for the topic is EUR
5.00 million.</p>
</blockquote>
|
<p>Use a regular expression with capturing groups. Use the <code>re.DOTALL</code> flag to allow <code>.*</code> to match across multiple lines, so you can capture multi-line titles. And use lazy quantifiers to avoid the matches being too long.</p>
<pre><code>import csv
import re
pattern = re.compile(r'^(TITLE-.+?-\d{2}-\d{2})\S*\s*(.*?)^Conditions.*?^Opening date (\d{1,2} \w+ \d{4})\s*?^Deadline (\d{1,2} \w+ \d{4})\s*^Indicative budget:.*?(EUR [\d.]+ \w+)', re.MULTILINE | re.DOTALL)
matches = pattern.finditer(f_contents)
with open("result.csv", "w") as outfile:
csvfile = csv.writer(outfile)
csvfile.writerow(['Code', 'Title', 'Opening date', 'Deadline', 'Budget'])
for match in matches:
csvfile.writerow([match.group(1), match.group(2).replace('\n', ' '), match.group(3), match.group(4), match.group(5)])
</code></pre>
<p><a href="https://regex101.com/r/5mZSDf/1" rel="nofollow noreferrer">DEMO</a></p>
|
python|regex|text-extraction
| 2 |
1,905,940 | 7,456,102 |
Calling C functions from python code. Dll not working
|
<p>I wrote the DLL below, called djj.dll; it has a file called try.cpp with the following code:</p>
<pre><code>#include<stdio.h>
int print(){
return 4;
}
</code></pre>
<p>Now, i build this dll and go to python idle. </p>
<p>I type <code>print windll.djj.print</code>. It gives a syntax error. WHY??</p>
|
<p>As Aaron Gallagher said, <code>print</code> is a Python keyword. Also it's unusual for a compiled DLL to use the standard Windows calling convention (i.e. windll). It's more likely to use <a href="http://en.wikipedia.org/wiki/X86_calling_conventions#cdecl" rel="nofollow">cdecl</a> (i.e. cdll). Note too that since try.cpp is compiled as C++, the exported name may be mangled unless the function is declared <code>extern "C"</code>, in which case <code>getattr</code> would not find it under the plain name <code>print</code>. Here's an approach that should work:</p>
<pre><code>import ctypes

djj = ctypes.cdll.LoadLibrary('djj.dll')
my_print = getattr(djj, 'print')
x = my_print() #x is 4
</code></pre>
|
python|c
| 1 |
1,905,941 | 7,520,622 |
Python re module's cache clearing
|
<p>While reading the documentation on the Python <code>re</code> module I decided to have a look at the <code>re.py</code> source code.</p>
<p>When I opened it, I found this:</p>
<pre><code>_cache = {}
_MAXCACHE = 100
def _compile(*key):
cachekey = (type(key[0]),) + key
p = _cache.get(cachekey)
if p is not None:
return p
#...Here I skip some part of irrelevant to the question code...
if len(_cache) >= _MAXCACHE:
_cache.clear()
_cache[cachekey] = p
return p
</code></pre>
<p>Why is the cache cleared using <code>_cache.clear()</code> when it reaches <code>_MAXCACHE</code> entries? </p>
<p>Is it a common approach to clear the cache completely and start from scratch? </p>
<p>Why not just delete the least recently used cached value instead?</p>
|
<p>Here is a quote from one of the developers of a new <code>regex</code> module scheduled for Python 3.3 regarding the caching; this is part of a list of features that separates the new module from the current <code>re</code> module.</p>
<blockquote>
<p>7) Modify the re compiled expression cache to better handle the
thrashing condition. Currently, when regular expressions are compiled,
the result is cached so that if the same expression is compiled again,
it is retrieved from the cache and no extra work has to be done. This
cache supports up to 100 entries. Once the 100th entry is reached, the
cache is cleared and a new compile must occur. The danger, all be it
rare, is that one may compile the 100th expression only to find that one
recompiles it and has to do the same work all over again when it may
have been done 3 expressions ago. By modifying this logic slightly, it
is possible to establish an arbitrary counter that gives a time stamp to
each compiled entry and instead of clearing the entire cache when it
reaches capacity, only eliminate the oldest half of the cache, keeping
the half that is more recent. This should limit the possibility of
thrashing to cases where a very large number of Regular Expressions are
continually recompiled. In addition to this, I will update the limit to
256 entries, meaning that the 128 most recent are kept.</p>
</blockquote>
<p><a href="http://bugs.python.org/issue2636" rel="noreferrer">http://bugs.python.org/issue2636</a></p>
<p>This seems to indicate that it is more likely the laziness of the developer or "an emphasis on readability" that explains the current caching behavior.</p>
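<p>For illustration, a minimal sketch of the "keep the newer half" eviction described in the quote (the timestamp counter and names are assumptions, not the actual patch):</p>
<pre><code>import itertools

_cache = {}
_timestamps = {}
_counter = itertools.count()
_MAXCACHE = 256

def _cache_insert(key, value):
    # instead of clearing everything, drop only the oldest half
    if len(_cache) >= _MAXCACHE:
        oldest_first = sorted(_cache, key=_timestamps.get)
        for old_key in oldest_first[:_MAXCACHE // 2]:
            del _cache[old_key]
            del _timestamps[old_key]
    _cache[key] = value
    _timestamps[key] = next(_counter)
</code></pre>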
|
python|regex|caching
| 5 |
1,905,942 | 72,679,090 |
Python if-else with multiple conditions on different columns to create a new column is giving a syntax error
|
<p>I am trying to build an if-else condition for a data frame, but it keeps giving me invalid syntax. The data is below:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(np.random.randint(0,30,size=10),
columns=["Random"],
index=pd.date_range("20180101", periods=10))
df=df.reset_index()
df['Recommandation']=['No', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'No', 'No', 'Yes', 'No']
df['diff']=[3,2,4,1,6,1,2,2,3,1]
df
</code></pre>
<p>I am trying to create another column in 'new' by using the following condition:</p>
<pre><code>If the 'index' is in the first three date, then, 'new'='random',
elif the 'Recommendation' is yes, than 'new'= 'Value of the previous row of the random column'+'diff'
else: 'new'= 'Value of the previous row of the random column'
</code></pre>
<p>My code is below:</p>
<pre class="lang-py prettyprint-override"><code>def my_fun(df, Recommendation, random, index, diff):
print (x)
if df[(df['index']=='2018-01-01')|(df['index']=='2018-01-02')|(df['index']=='2018-01-03')] :
x = df['random']
elif (df[df['recommendation']=='Yes']):
x = df['random'].shift(1)+df['diff']
else:
x = df['random'].shift(1)
return x
#The expected output:
df['new'] = [22, 20, 10, 31, 26, 6, 27, 5, 10, 13]
df
</code></pre>
|
<p>Following your conditions, the code should use <code>numpy.select</code>, which takes a list of boolean conditions, a list of matching choices, and a default value:</p>
<pre><code>import numpy as np
df['new'] = np.select([df['index'].isin(df['index'].iloc[:3]), df['Recommandation'].eq('Yes')],
[df['Random'], df['diff']+df['Random'].shift(1)],
df['Random'].shift(1)
)
</code></pre>
<p>output:</p>
<pre><code> index Random Recommandation diff new
0 2018-01-01 22 No 3 22.0
1 2018-01-02 21 Yes 2 21.0
2 2018-01-03 29 No 4 29.0
3 2018-01-04 19 Yes 1 30.0
4 2018-01-05 1 Yes 6 25.0
5 2018-01-06 8 Yes 1 2.0
6 2018-01-07 0 No 2 8.0
7 2018-01-08 4 No 2 0.0
8 2018-01-09 27 Yes 3 7.0
9 2018-01-10 27 No 1 27.0
</code></pre>
|
python|pandas|dataframe|if-statement|python-datetime
| 1 |
1,905,943 | 39,813,267 |
Identifying and adding values from dict to another dict, while both contain the similar keys
|
<p>Let's say I had these two dictionaries, both containing similar keys to each other:</p>
<pre><code>d_1 = {'a': 3, 'b': 4, 'c': 1}
d_2 = {'a': 5, 'b': 6, 'c': 9, 'd': 7}
</code></pre>
<p>Now, let's say I want to take the values of <code>d_1</code> and add them to the corresponding values of <code>d_2</code>. How would I add the values of <code>d_1</code>'s <code>a</code> key to the respective key in <code>d_2</code> w/o individually saying <code>d_2['a'] += d_1['a']</code> for each key? How could I go about writing a function that can take two dicts, compare their keys, and add those values to the identical existing key values in another dict? </p>
<p>For example, if I had a class named <code>player</code> and it has an attribute of <code>skills</code> which contains several skills and their values, e.g. <code>strength: 10</code> or <code>defense: 8</code>, etc. Now, if that <code>player</code> were to come across an armor set, or whatever, that buffed specific skills, how could I take that buff dictionary, see what keys it has in common with the <code>player</code>'s <code>skills</code> attribute, and add the respective values?</p>
|
<p>For every key in d_2, check whether it is also in d_1; if so, add the values up.</p>
<pre><code>for key in d_2:
    if key in d_1:
        d_2[key] += d_1[key]
</code></pre>
<p>It seems you want to build a game, so you might not want to mess with the player's attribute directly; adding the values up and putting the outcome into a new dict would be better. What's more, you can add more than two dicts together, as you might have more than one buff.</p>
<pre><code>for key in player:
    outcome[key] = 0
    for buff in buffs:
        if key in buff:
            outcome[key] += buff[key]
    outcome[key] += player[key]
</code></pre>
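<p>For instance, a minimal sketch of the buff idea from your question (the skill and buff names are assumptions):</p>
<pre><code>def apply_buffs(skills, *buffs):
    # start from the base skills and add any matching buff values
    result = dict(skills)
    for buff in buffs:
        for key, value in buff.items():
            if key in result:
                result[key] += value
    return result

player_skills = {'strength': 10, 'defense': 8}
armor_buff = {'defense': 3, 'magic': 5}   # 'magic' is ignored: the player has no such skill
print(apply_buffs(player_skills, armor_buff))  # {'strength': 10, 'defense': 11}
</code></pre>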
|
python|dictionary
| 3 |
1,905,944 | 16,504,499 |
Stop python eval() from removing brackets
|
<p>A string variable is defined <code>clause1 = "((1 & z[0]) != 0)"</code></p>
<p>Its eval() gives <code>BoolRef: 1 & v__a != 0</code>
while I actually need <code>BoolRef: ((1 & v__a) != 0)</code></p>
<p>How to keep the brackets in eval() and evaluate everything else</p>
|
<p>I think you're confusing <code>eval()</code> with the exact type of <code>z[0]</code>, which, I guess, is what does the magic here. I believe that if you try to run <code>((1 & z[0]) != 0)</code> directly, without <code>eval()</code>, you'd get the same answer <code>BoolRef: 1 & v__a != 0</code>. Am I correct? If so, then you need to look in the class BoolRef and fix how it implements <code>__repr__()</code>, to include the extra parentheses in the final string.</p>
|
python|eval
| 0 |
1,905,945 | 16,570,573 |
Optimizing K (ideal # of clusters) Using PyCluster
|
<p>I'm using PyCluster's kMeans to cluster some data -- largely because SciPy's kMeans2() produced an insuperable error. <a href="https://stackoverflow.com/a/2224488/2058922">Mentioned here</a>. Anyhow the PyCluster kMeans worked well, and I am now attempting to optimize the number of kMeans clusters. PyCluster's accompanying literature suggests that I can optimize its kMeans by implementing an EM algorithm -- <a href="http://bonsai.hgc.jp/~mdehoon/software/cluster/cluster.pdf" rel="nofollow noreferrer">bottom of page 13 here</a> -- but I cannot find a single example.</p>
<p>Can someone please point me to a PyCluster k-means optimization problem? Thanks in advance for any help.</p>
|
<p>The manual for PyCluster refers to a different optimization problem than the one you are asking about. While you ask how to determine the optimal number of clusters, the manual deals with how to find the optimal clusters given the total number of clusters. The concept to understand is that k-means, which is a type of an EM (Expectation Maximization problem) algorithm, does not guarantee an optimal clustering solution (where an optimal clustering solution can be defined as an assignment of clusters that minimizes the sum of the square of the distances between each data-point and the mean of its cluster). The way k-means works is this:</p>
<pre><code>set cluster means to equal k randomly generated points
while not converged:
# expectation step:
for each point:
assign it to its expected cluster (cluster whose mean it is closest to)
# maximization step:
for each cluster:
# maximizes likelihood for cluster mean
set cluster mean to be the average of all points assigned to it
</code></pre>
<p>The k-means algorithm will output the best solution given the initialization, but it will not necessarily find the best clustering solution globally. This is what the manual is referring to on the bottom of page 13. The manual says that the kcluster routine will perform EM (which is exactly the k-means algorithm) a number of times and select the optimal clustering. It never refered to the problem of finding the optimal number of clusters.</p>
<p>That said, there are a few heuristics you can use to determine the optimal number of clusters (see for instance <a href="http://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set">Wikipedia</a>):</p>
<ol>
<li>Perhaps the simplest is just to set k=sqrt(n/2), which has often been found to be optimal.</li>
<li>Another approach is to divide your data into two parts, a training set (maybe the first 90% of the data), and a test set (maybe the last 10% of the data). Both sets should be representative of the entire set of data, so you might want to use random.shuffle or random.sample beforehand. Using only the training set, you can apply k-means clustering to find the cluster assignments, from which you can deduce the mean of each cluster. Then, using the test data set, calculate the sum of the squares of the distances between each data point and the mean of its assigned cluster. Finally, if you plot the number of clusters versus the test error, you will (perhaps) find that after a certain value for k, the errors will start increasing, or at least, will stop decreasing. You can then choose the k for which this happens. The use of the test data set will help guarantee that the clustering produced by training is representative of the actual data-set, not the particular training set you happened to sample. If you had n training data points and n clusters, you can of course obtain a perfect clustering on the training set, but the error for the test set might still be large. (A sketch of this heuristic follows the list.)</li>
<li>Or perhaps you can try the more general mixture of Gaussians model. In the mixture of Gaussians model, there are k Gaussian distributions, N_1, ..., N_k, appearing with weights c_1, ..., c_k, where c_1+...+c_k=1. A data-point is drawn from the Gaussian N_i with probability c_i. The k-means is a special type of a mixture of Gaussians model, where each Gaussian is assumed to be spherical with equal covariances, and with all weights equal. One advantage of this model is that if you see that some of the c_i are really small, then that Gaussian hump might not be a real cluster. To reduce complexity (and the risk of over-fitting), you can constrain the Gaussians to be spherical or to have equal covariances, which gives you a clustering mechanism that behaves almost like k-means except that it shows how important each cluster is.</li>
</ol>
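<p>To make heuristic 2 concrete, here is a minimal sketch; it assumes <code>train</code> and <code>test</code> are NumPy arrays and that PyCluster's <code>kcluster</code> is the clustering step. You would plot the returned test error for a range of k and pick the elbow.</p>
<pre><code>import numpy as np
from Pycluster import kcluster  # assumes the PyCluster package from the question

def test_error_for_k(train, test, k):
    # cluster the training data; kcluster returns (clusterid, error, nfound)
    clusterid, _, _ = kcluster(train, nclusters=k, npass=10)
    clusterid = np.asarray(clusterid)
    # centroid of each training cluster
    means = np.array([train[clusterid == c].mean(axis=0) for c in range(k)])
    # squared distance from every test point to its nearest training centroid
    dists = ((test[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return dists.min(axis=1).sum()
</code></pre>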
|
python|c|machine-learning|scipy|k-means
| 7 |
1,905,946 | 38,752,081 |
Communication from Python to Arduino over USB
|
<p>I am trying to communicate with an Arduino. I use python and pyserial for communication over USB. As you can see in the source code below, I am trying to send a bytearray, which contains some information for two ledstrips, to the Arduino. But the Arduino does not receive the right information. It looks like the bytearray is transformed or information is getting lost. </p>
<p>I searched the whole day for a solution, but nothing has worked. Hopefully one of you can help me with this problem.</p>
<p>Thanks in advance.</p>
<p><strong>Python Code</strong></p>
<pre><code>import sys
import serial
import time
HEADER_BYTE_1 = 0xBA
HEADER_BYTE_2 = 0xBE
def main():
ser = serial.Serial('/dev/ttyUSB0', 57600)
message = { 'header': [None]*2, 'colors': [None]*6, 'checksum': 0x00 }
message['header'][0] = HEADER_BYTE_1
message['header'][1] = HEADER_BYTE_2
# first led
message['colors'][0] = 0xFF
message['colors'][1] = 0xFF
message['colors'][2] = 0xFF
# second led
message['colors'][3] = 0x00
message['colors'][4] = 0x00
message['colors'][5] = 0x00
# create checksum
for color in message['colors']:
for bit in bytes(color):
message['checksum'] ^= bit
# write message to arduino
cmd = convert_message_to_protocol(message)
ser.write(cmd)
print(cmd)
time.sleep(5)
# read response from arduino
while True:
response = ser.readline()
print(response)
def convert_message_to_protocol(message):
cmd = bytearray()
for header in message['header']:
cmd.append(header)
for color in message['colors']:
cmd.append(color)
cmd.append(message['checksum'])
return cmd
if __name__ == '__main__':
main()
</code></pre>
<p><strong>Arduino Code</strong></p>
<pre><code>const int kChannel1FirstPin = 3;
const int kChannel1SecondPin = 5;
const int kChannel1ThirdPin = 6;
const int kChannel2FirstPin = 9;
const int kChannel2SecondPin = 10;
const int kChannel2ThirdPin = 11;
// Protocol details (two header bytes, 6 value bytes, checksum)
const int kProtocolHeaderFirstByte = 0xBA;
const int kProtocolHeaderSecondByte = 0xBE;
const int kProtocolHeaderLength = 2;
const int kProtocolBodyLength = 6;
const int kProtocolChecksumLength = 1;
// Buffers and state
bool appearToHaveValidMessage;
byte receivedMessage[6];
void setup() {
// set pins 2 through 13 as outputs:
pinMode(kChannel1FirstPin, OUTPUT);
pinMode(kChannel1SecondPin, OUTPUT);
pinMode(kChannel1ThirdPin, OUTPUT);
pinMode(kChannel2FirstPin, OUTPUT);
pinMode(kChannel2SecondPin, OUTPUT);
pinMode(kChannel2ThirdPin, OUTPUT);
analogWrite(kChannel1FirstPin, 255);
analogWrite(kChannel1SecondPin, 255);
analogWrite(kChannel1ThirdPin, 255);
analogWrite(kChannel2FirstPin, 255);
analogWrite(kChannel2SecondPin, 255);
analogWrite(kChannel2ThirdPin, 255);
appearToHaveValidMessage = false;
// initialize the serial communication:
Serial.begin(57600);
}
void loop () {
int availableBytes = Serial.available();
Serial.println(availableBytes);
if (!appearToHaveValidMessage) {
// If we haven't found a header yet, look for one.
if (availableBytes >= kProtocolHeaderLength) {
Serial.println("right size");
// Read then peek in case we're only one byte away from the header.
byte firstByte = Serial.read();
byte secondByte = Serial.peek();
if (firstByte == kProtocolHeaderFirstByte &&
secondByte == kProtocolHeaderSecondByte) {
Serial.println("Right Header");
// We have a valid header. We might have a valid message!
appearToHaveValidMessage = true;
// Read the second header byte out of the buffer and refresh the buffer count.
Serial.read();
availableBytes = Serial.available();
}
}
}
if (availableBytes >= (kProtocolBodyLength + kProtocolChecksumLength) && appearToHaveValidMessage) {
// Read in the body, calculating the checksum as we go.
byte calculatedChecksum = 0;
for (int i = 0; i < kProtocolBodyLength; i++) {
receivedMessage[i] = Serial.read();
calculatedChecksum ^= receivedMessage[i];
}
byte receivedChecksum = Serial.read();
if (receivedChecksum == calculatedChecksum) {
// Hooray! Push the values to the output pins.
analogWrite(kChannel1FirstPin, receivedMessage[0]);
analogWrite(kChannel1SecondPin, receivedMessage[1]);
analogWrite(kChannel1ThirdPin, receivedMessage[2]);
analogWrite(kChannel2FirstPin, receivedMessage[3]);
analogWrite(kChannel2SecondPin, receivedMessage[4]);
analogWrite(kChannel2ThirdPin, receivedMessage[5]);
Serial.print("OK");
Serial.write(byte(10));
} else {
Serial.print("FAIL");
Serial.write(byte(10));
}
appearToHaveValidMessage = false;
}
}
</code></pre>
<p><strong>Example</strong> </p>
<p>Generated Bytes in Python: <code>b'\xba\xbe\xff\xff\xff\x00\x00\x00\x00'</code></p>
<p>Received Bytes on the Arduino: <code>b'L\xc30\r\n'</code></p>
|
<p>Changing the baud rate to 9600 fixed the communication.</p>
|
python|arduino|pyserial
| 0 |
1,905,947 | 40,641,386 |
Conversion of pandas dataframe to sparse key-item matrix with composite key
|
<p>I have a data frame of 3 columns. Col 1 is a string order number, Col 2 is an integer day, and Col 3 is a product name.
I would like to convert this into a matrix where each row represents a unique order/day combination, and each column represents a 1/0 for the presence of a product name for that combination. </p>
<p>My approach so far makes use of a product dictionary, and a dictionary with a composite key of order # & day.
The final step, which iterates through the original dataframe in order to flip the bits in the matrix to 1s is sloooow. Like 10 minutes for a matrix the size of 363K X 331 and a sparseness of ~97%.</p>
<p>Is there a different approach I should consider?</p>
<p>E.g.,</p>
<pre><code>ord_nb day prod
1 1 A
1 1 B
1 2 B
1 2 C
1 2 D
</code></pre>
<p>would become</p>
<pre><code>A B C D
1 1 0 0
0 1 1 1
</code></pre>
<p>My approach has been to create a dictionary of order/day pairs:</p>
<pre><code>ord_day_dict = {}
print("Making a dictionary of ord-by-day keys...")
gp = df.groupby(['day', 'ord'])
for i,g in enumerate(gp.groups.items()):
ord_day_dict[g[0][0], g[0][1]] = i
</code></pre>
<p>I append the index represention to the original dataframe:</p>
<pre><code>df['ord_day_idx'] = 0 #Create a place holder column
for i, row in df.iterrows(): #populate the column with the index
df.set_value(i,'ord_day_idx',ord_day_dict[(row['day'], row['ord_nb'])])
</code></pre>
<p>I then initialize a matrix the size of my ord/day X unique products:</p>
<pre><code>n_items = df.prod_nm.unique().shape[0] #unique number of products
n_ord_days = len(ord_day_dict) #unique number of ord-by-day combos
df_fac_matrix = np.zeros((n_ord_days, n_items), dtype=np.float64)#-1)
</code></pre>
<p>I convert my products from strings into an index via a dictionary:</p>
<pre><code>prod_dict = dict()
i = 0
for v in df.prod:
if v not in prod_dict:
prod_dict[v] = i
i = i + 1
</code></pre>
<p>And finally iterate through the original dataframe to populate the matrix with 1s where a specific order on a specific day included a specific product.</p>
<pre><code>for line in df.itertuples():
df_fac_matrix[line[4], line[3]] = 1.0 #in the order-by-day index row and the product index column of our ord/day-by-prod matrix, mark a 1
</code></pre>
|
<p>Here is one option you can try:</p>
<pre><code>df.groupby(['ord_nb', 'day'])['prod'].apply(list).apply(lambda x: pd.Series(1, x)).fillna(0)
# A B C D
#ord_nb day
# 1 1 1.0 1.0 0.0 0.0
# 2 0.0 1.0 1.0 1.0
</code></pre>
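<p>An alternative sketch (column names taken from the question) uses <code>pd.crosstab</code> on the two key columns and clips the counts to get a 0/1 presence matrix:</p>
<pre><code>presence = pd.crosstab([df['ord_nb'], df['day']], df['prod']).clip(upper=1)
</code></pre>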
|
python|pandas|dictionary|dataframe|composite-primary-key
| 2 |
1,905,948 | 40,524,490 |
Transforming and Resampling a 3D volume with numpy/scipy
|
<p><strong>UPDATE:</strong></p>
<p><strong>I created a well documented ipython notebook.
If you just want the code, look at the first answer.</strong></p>
<p><strong>Question</strong></p>
<p>I've got a 40x40x40 volume of greyscale values.
This needs to be rotated/shifted/sheared.</p>
<p>Here is a useful collection of homogeneous transformations: <a href="http://www.lfd.uci.edu/~gohlke/code/transformations.py.html" rel="noreferrer">http://www.lfd.uci.edu/~gohlke/code/transformations.py.html</a></p>
<p>I need to treat every voxel in my volume like a pair of (position vector, value).
Then I would transform the position and sample new values for each coordinate from the set of transformed vectors.</p>
<p>The sampling seems rather difficult, and I was glad to find this:
<a href="https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.ndimage.affine_transform.html#scipy.ndimage.affine_transform" rel="noreferrer">https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.ndimage.affine_transform.html#scipy.ndimage.affine_transform</a></p>
<blockquote>
<p>The given matrix and offset are used to find for each point in the
output the corresponding coordinates in the input by an affine
transformation. The value of the input at those coordinates is
determined by spline interpolation of the requested order. Points
outside the boundaries of the input are filled according to the given
mode.</p>
</blockquote>
<p>Sounds perfect.</p>
<p>But the usage is very tricky. <a href="https://stackoverflow.com/questions/20161175/how-can-i-use-scipy-ndimage-interpolation-affine-transform-to-rotate-an-image-ab">Here</a> someone is using that code for rotating an image.
His rotation matrix is 2x2, so that's not in homogenous coordinates.
I tried passing a translation matrix in homogenous coordinates (2D) to the function:</p>
<pre><code>dim =10
arr=np.zeros((dim,dim))
arr[0,0]=1
mat=np.array([[1,0,1],[0,1,0],[0,0,1]])
out3=scipy.ndimage.affine_transform(arr,mat)
print("out3: ",out3)
</code></pre>
<p>Which produces an error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/212590884/PycharmProjects/3DAugmentation/main.py", line 32, in <module>
out3=scipy.ndimage.affine_transform(arr,mat)
File "C:\Users\212590884\AppData\Local\Continuum\Anaconda2\lib\site-packages\scipy\ndimage\interpolation.py", line 417, in affine_transform
raise RuntimeError('affine matrix has wrong number of rows')
RuntimeError: affine matrix has wrong number of rows
</code></pre>
<p>Apparently this doesn't work with homogeneous coordinates.
How can I use this to shift the data?</p>
<p>And this was just in 2D, in 3D I can't even rotate the volume:</p>
<pre><code>dim =10
arr=np.zeros((dim,dim,dim))
arr[0,0]=1
angle=10/180*np.pi
c=np.cos(angle)
s=np.sin(angle)
mat=np.array([[c,-s,0,0],[s,c,0,0],[0,0,1,0],[0,0,0,1]])
out3=scipy.ndimage.affine_transform(arr,mat)
print("out3: ",out3)
</code></pre>
<p>The error message is the same: <code>affine matrix has wrong number of rows</code></p>
<p>Is it possible to use this method to transform my volume ?</p>
<p>I found a collection of helper methods, they offer shift and rotate but not shear:
<a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/ndimage.html" rel="noreferrer">https://docs.scipy.org/doc/scipy-0.14.0/reference/ndimage.html</a></p>
<p>But I would prefer to use a custom transformation matrix.</p>
|
<p>I've found another option: <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.interpolation.map_coordinates.html#scipy.ndimage.interpolation.map_coordinates" rel="nofollow noreferrer">map_coordinates</a></p>
<p>With numpy it is possible to generate a meshgrid of coordinates, then reshape/stack them to form position vectors. These vectors are transformed and converted back into the meshgrid coordinate format. Finally with <code>map_coordinates</code> the sampling problem is solved.</p>
<p>I think that this is a common problem and have created an ipython notebook which explains everything step by step:</p>
<p><a href="http://nbviewer.jupyter.org/gist/lhk/f05ee20b5a826e4c8b9bb3e528348688" rel="nofollow noreferrer">http://nbviewer.jupyter.org/gist/lhk/f05ee20b5a826e4c8b9bb3e528348688</a></p>
<p>There is still one problem: The order of the coordinates is strange. You need to reorder the meshgrids in an unintuitive way. Could be a bug in my code.</p>
<p>Please be aware that this reordering of coordinates influences the axes of the transformations.
If you want to rotate something around the x axis, the corresponding vector is not (1,0,0) but (0,1,0), it's really strange.</p>
<p>But it works, and I think the principle is clear.</p>
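<p>For reference, a minimal sketch of the pipeline described above, doing pull-back sampling with a hypothetical 4x4 homogeneous matrix (whether you invert the matrix depends on your forward vs. pull-back convention, and the axis-ordering caveat from the notebook still applies):</p>
<pre><code>import numpy as np
from scipy.ndimage import map_coordinates

def transform_volume(volume, matrix, order=1):
    # voxel coordinates of the output grid, shape (3, N)
    coords = np.indices(volume.shape).reshape(3, -1)
    homo = np.vstack([coords, np.ones((1, coords.shape[1]))])
    # pull-back: for every output voxel, find where to sample the input
    mapped = np.linalg.inv(matrix).dot(homo)[:3]
    return map_coordinates(volume, mapped, order=order).reshape(volume.shape)
</code></pre>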
|
python|numpy|scipy|geometry-shader|ndimage
| 4 |
1,905,949 | 40,548,257 |
python loop breaks on special chars in django template
|
<p>So I am working with django and an external List of names and values. With a custom template tag I am trying to display some values in my html template.</p>
<p>Here is a example of what the list could look like:</p>
<p>names.txt</p>
<pre><code>hello 4343.5
bye 43233.4
Hëllo 554.3
whatever 4343.8
</code></pre>
<p>My template tag looks like this (simplified names of variables):</p>
<pre><code># -*- coding: utf-8 -*-
from django import template
register = template.Library()
@register.filter(name='operation_name')
def operation_name(member):
with open('/pathtofile/member.txt','r') as f:
for line in f:
if member.member_name in line:
number = float(line.split()[1])
if number is not member.member_number:
member.member_number = number
member.save()
return member.member_number
return 'Not in List'
</code></pre>
<p>It works fine for entries without specials char. But it stops when a name in member.member_names has special chars. So if member.member_names would be Hëllo the entire script just stops. I can't return anything. This is driving me crazy. Even the names without special chars won't be displayed after any name with special chars occurred.</p>
<p>I appreciate any help, thanks in advance. </p>
<p>EDIT:</p>
<p>So this did the trick:</p>
<pre><code>import sys
reload(sys)
sys.setdefaultencoding('utf-8')
</code></pre>
<p>But I don't know if this is a good solution.</p>
|
<p>This may help: try decoding both sides to Unicode before comparing:</p>
<pre><code>if (member.member_name).decode('latin1') in (line).decode('latin1'):
number = float(line.split()[1])
if number is not member.member_number:
member.member_number = number
member.save()
</code></pre>
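<p>Alternatively, you could avoid the <code>sys.setdefaultencoding</code> hack from your edit (which is generally discouraged) by decoding the file while reading it. A sketch, assuming the file is UTF-8 encoded:</p>
<pre><code>import io

with io.open('/pathtofile/member.txt', 'r', encoding='utf-8') as f:
    for line in f:
        # both sides of the comparison are unicode now
        if member.member_name in line:
            number = float(line.split()[1])
</code></pre>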
|
python|django|encoding
| 1 |
1,905,950 | 10,150,368 |
Why is piping output of subprocess so unreliable with Python?
|
<p>(Windows)</p>
<p>I wrote some Python code that calls the program SoX (subprocess module), which outputs the progress on STDERR, if you specify it to do so. I want to get the percentage status from the output. If I call it not from the Python script, it starts immediately and has a smooth progression till 100%.</p>
<p>If I call it from the Python script, it lasts a few seconds till it starts and then it alternates between slow output and fast output. Although I read char by char sometimes there RUSHES out a large block. So I don't understand why at other times I can watch the characters getting more one by one. (It generates 15KiB of data in my test, by the way.)</p>
<p>I have tested the same with mkvmerge and mkvextract. They output percentages, too. Reading STDOUT there is smooth.</p>
<p>This is so unreliable! How can I make the reading of sox's stderr stream smoother, and perhaps prevent the delay at the beginning?</p>
<hr>
<p>How I call and read:</p>
<pre><code>process = subprocess.Popen('sox_call_dummy.bat', stderr = subprocess.PIPE, stdout = subprocess.PIPE)
while True:
char = process.stderr.read(1).encode('string-escape')
sys.stdout.write(char)
</code></pre>
|
<p>As per this closely related thread: <a href="https://stackoverflow.com/questions/1183643/unbuffered-read-from-process-using-subprocess-in-python">Unbuffered read from process using subprocess in Python</a></p>
<pre><code>process = subprocess.Popen('sox_call_dummy.bat',
stderr = subprocess.PIPE, bufsize=0)
while True:
line = process.stderr.readline()
if not line:
break
print line
</code></pre>
<p>Since you aren't reading stdout, I don't think you need a pipe for it.</p>
<p>If you want to try reading char by char as in your original example, try adding a flush each time:</p>
<pre><code>sys.stdout.write(char)
sys.stdout.flush()
</code></pre>
<p>Flushing the stdout every time you write is the manual equivalent of disabling buffering for the python process: <code>python.exe -u <script></code> or setting the env variable <code>PYTHONUNBUFFERED=1</code></p>
|
python|performance|subprocess|pipe|piping
| 1 |
1,905,951 | 10,114,399 |
Pandas: simple 'join' not working?
|
<p>I like to think I'm not an idiot, but maybe I'm wrong. Can anyone explain to me why this isn't working? I can achieve the desired results using 'merge'. But I eventually need to join multiple <code>pandas</code> <code>DataFrames</code> so I need to get this method working.</p>
<pre><code>In [2]: left = pandas.DataFrame({'ST_NAME': ['Oregon', 'Nebraska'], 'value': [4.685, 2.491]})
In [3]: right = pandas.DataFrame({'ST_NAME': ['Oregon', 'Nebraska'], 'value2': [6.218, 0.001]})
In [4]: left.join(right, on='ST_NAME', lsuffix='_left', rsuffix='_right')
Out[4]:
ST_NAME_left value ST_NAME_right value2
0 Oregon 4.685 NaN NaN
1 Nebraska 2.491 NaN NaN
</code></pre>
|
<p>Try using <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging" rel="noreferrer"><code>merge</code></a>:</p>
<pre><code>In [14]: right
Out[14]:
ST_NAME value2
0 Oregon 6.218
1 Nebraska 0.001
In [15]: merge(left, right)
Out[15]:
ST_NAME value value2
0 Nebraska 2.491 0.001
1 Oregon 4.685 6.218
In [18]: merge(left, right, on='ST_NAME', sort=False)
Out[18]:
ST_NAME value value2
0 Oregon 4.685 6.218
1 Nebraska 2.491 0.001
</code></pre>
<p><code>DataFrame.join</code> is a bit of a legacy method and apparently doesn't do column-on-column joins (originally it did index-on-column joins using the <code>on</code> parameter, hence the "legacy" designation).</p>
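<p>If you do end up needing <code>join</code> for several frames at once, a sketch (the extra frame name <code>other</code> is an assumption) is to move the key into the index first, since <code>join</code> accepts a list of index-aligned frames with non-overlapping columns:</p>
<pre><code>joined = left.set_index('ST_NAME').join([right.set_index('ST_NAME'),
                                         other.set_index('ST_NAME')])
</code></pre>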
|
pandas
| 25 |
1,905,952 | 2,110,588 |
Getting invalid image error in Django, but PIL is installed and passes all tests
|
<p>So I've finally successfully installed PIL (after many difficulties) on RHEL5 with Django (development version) and Python 2.6 installed at /opt/python2.6.</p>
<p>Running selftest.py shows that everything appears to be installed correctly:</p>
<pre><code>$ python2.6 selftest.py
57 tests passed.
</code></pre>
<p>I can upload .png files and .gif files without difficulties, but run into problems when trying to upload .jpg files using the ImageField: "Upload a valid image. The file you uploaded was either not an image or a corrupted image."</p>
<p>I saw this <a href="https://stackoverflow.com/questions/1368724/how-does-django-determine-if-an-uploaded-image-is-valid">other question</a> and ran the test to see if PIL would verify the image, and it did:</p>
<pre><code>>>> from PIL import Image
>>> trial_image=Image.open("/tmp/jordanthecoder.jpg")
>>> trial_image.verify()
>>>
</code></pre>
<p>What could be going on? Obviously, allowing JPEG is kind of important. I realize one option is to use a FileField instead and then check to make sure it is one of GIF, PNG, or JPEG, but I'd much rather use the built-in object.</p>
<p>In case this is helpful, here is the verbose display for the shell above:</p>
<pre><code>$ python2.6 -v
Python 2.6.4 (r264:75706, Jan 15 2010, 14:42:33)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/readline.so", 2);
import readline # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/readline.so
>>> from PIL import Image
import PIL # directory /opt/python2.6/lib/python2.6/site-packages/PIL
# /opt/python2.6/lib/python2.6/site-packages/PIL/__init__.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/__init__.py
import PIL # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/__init__.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/Image.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/Image.py
import PIL.Image # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/Image.pyc
# /opt/python2.6/lib/python2.6/lib-tk/FixTk.pyc matches /opt/python2.6/lib/python2.6/lib-tk/FixTk.py
import FixTk # precompiled from /opt/python2.6/lib/python2.6/lib-tk/FixTk.pyc
import ctypes # directory /opt/python2.6/lib/python2.6/ctypes
# /opt/python2.6/lib/python2.6/ctypes/__init__.pyc matches /opt/python2.6/lib/python2.6/ctypes/__init__.py
import ctypes # precompiled from /opt/python2.6/lib/python2.6/ctypes/__init__.pyc
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/_ctypes.so", 2);
import _ctypes # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/_ctypes.so
# /opt/python2.6/lib/python2.6/struct.pyc matches /opt/python2.6/lib/python2.6/struct.py
import struct # precompiled from /opt/python2.6/lib/python2.6/struct.pyc
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/_struct.so", 2);
import _struct # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/_struct.so
# /opt/python2.6/lib/python2.6/ctypes/_endian.pyc matches /opt/python2.6/lib/python2.6/ctypes/_endian.py
import ctypes._endian # precompiled from /opt/python2.6/lib/python2.6/ctypes/_endian.pyc
dlopen("/opt/python2.6/lib/python2.6/site-packages/PIL/_imaging.so", 2);
import PIL._imaging # dynamically loaded from /opt/python2.6/lib/python2.6/site-packages/PIL/_imaging.so
# /opt/python2.6/lib/python2.6/site-packages/PIL/ImageMode.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/ImageMode.py
import PIL.ImageMode # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/ImageMode.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/ImagePalette.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/ImagePalette.py
import PIL.ImagePalette # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/ImagePalette.pyc
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/array.so", 2);
import array # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/array.so
# /opt/python2.6/lib/python2.6/string.pyc matches /opt/python2.6/lib/python2.6/string.py
import string # precompiled from /opt/python2.6/lib/python2.6/string.pyc
# /opt/python2.6/lib/python2.6/re.pyc matches /opt/python2.6/lib/python2.6/re.py
import re # precompiled from /opt/python2.6/lib/python2.6/re.pyc
# /opt/python2.6/lib/python2.6/sre_compile.pyc matches /opt/python2.6/lib/python2.6/sre_compile.py
import sre_compile # precompiled from /opt/python2.6/lib/python2.6/sre_compile.pyc
import _sre # builtin
# /opt/python2.6/lib/python2.6/sre_parse.pyc matches /opt/python2.6/lib/python2.6/sre_parse.py
import sre_parse # precompiled from /opt/python2.6/lib/python2.6/sre_parse.pyc
# /opt/python2.6/lib/python2.6/sre_constants.pyc matches /opt/python2.6/lib/python2.6/sre_constants.py
import sre_constants # precompiled from /opt/python2.6/lib/python2.6/sre_constants.pyc
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/strop.so", 2);
import strop # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/strop.so
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/operator.so", 2);
import operator # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/operator.so
>>> trial_image=Image.open("/tmp/jordanthecoder.jpg")
# /opt/python2.6/lib/python2.6/site-packages/PIL/BmpImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/BmpImagePlugin.py
import PIL.BmpImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/BmpImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/ImageFile.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/ImageFile.py
import PIL.ImageFile # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/ImageFile.pyc
# /opt/python2.6/lib/python2.6/traceback.pyc matches /opt/python2.6/lib/python2.6/traceback.py
import traceback # precompiled from /opt/python2.6/lib/python2.6/traceback.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/GifImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/GifImagePlugin.py
import PIL.GifImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/GifImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/JpegImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/JpegImagePlugin.py
import PIL.JpegImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/JpegImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/PpmImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/PpmImagePlugin.py
import PIL.PpmImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/PpmImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/PngImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/PngImagePlugin.py
import PIL.PngImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/PngImagePlugin.pyc
>>> trial_image.verify()
>>> fake_image = Image.open("/tmp/fakeimage.jpg") #text file ending in .jpg
# /opt/python2.6/lib/python2.6/site-packages/PIL/CurImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/CurImagePlugin.py
import PIL.CurImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/CurImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/ArgImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/ArgImagePlugin.py
import PIL.ArgImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/ArgImagePlugin.pyc
import marshal # builtin
# /opt/python2.6/lib/python2.6/site-packages/PIL/Hdf5StubImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/Hdf5StubImagePlugin.py
import PIL.Hdf5StubImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/Hdf5StubImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/MspImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/MspImagePlugin.py
import PIL.MspImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/MspImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/MicImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/MicImagePlugin.py
import PIL.MicImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/MicImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/TiffImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/TiffImagePlugin.py
import PIL.TiffImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/TiffImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/OleFileIO.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/OleFileIO.py
import PIL.OleFileIO # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/OleFileIO.pyc
# /opt/python2.6/lib/python2.6/StringIO.pyc matches /opt/python2.6/lib/python2.6/StringIO.py
import StringIO # precompiled from /opt/python2.6/lib/python2.6/StringIO.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/FitsStubImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/FitsStubImagePlugin.py
import PIL.FitsStubImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/FitsStubImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/MpegImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/MpegImagePlugin.py
import PIL.MpegImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/MpegImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/PixarImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/PixarImagePlugin.py
import PIL.PixarImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/PixarImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/DcxImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/DcxImagePlugin.py
import PIL.DcxImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/DcxImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/PcxImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/PcxImagePlugin.py
import PIL.PcxImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/PcxImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/WmfImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/WmfImagePlugin.py
import PIL.WmfImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/WmfImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/XVThumbImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/XVThumbImagePlugin.py
import PIL.XVThumbImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/XVThumbImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/XbmImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/XbmImagePlugin.py
import PIL.XbmImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/XbmImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/ImtImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/ImtImagePlugin.py
import PIL.ImtImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/ImtImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/IptcImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/IptcImagePlugin.py
import PIL.IptcImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/IptcImagePlugin.pyc
# /opt/python2.6/lib/python2.6/tempfile.pyc matches /opt/python2.6/lib/python2.6/tempfile.py
import tempfile # precompiled from /opt/python2.6/lib/python2.6/tempfile.pyc
# /opt/python2.6/lib/python2.6/random.pyc matches /opt/python2.6/lib/python2.6/random.py
import random # precompiled from /opt/python2.6/lib/python2.6/random.pyc
# /opt/python2.6/lib/python2.6/__future__.pyc matches /opt/python2.6/lib/python2.6/__future__.py
import __future__ # precompiled from /opt/python2.6/lib/python2.6/__future__.pyc
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/math.so", 2);
import math # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/math.so
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/binascii.so", 2);
import binascii # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/binascii.so
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/_random.so", 2);
import _random # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/_random.so
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/cStringIO.so", 2);
import cStringIO # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/cStringIO.so
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/fcntl.so", 2);
import fcntl # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/fcntl.so
import thread # builtin
# /opt/python2.6/lib/python2.6/site-packages/PIL/GribStubImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/GribStubImagePlugin.py
import PIL.GribStubImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/GribStubImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/TgaImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/TgaImagePlugin.py
import PIL.TgaImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/TgaImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/BufrStubImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/BufrStubImagePlugin.py
import PIL.BufrStubImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/BufrStubImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/FpxImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/FpxImagePlugin.py
import PIL.FpxImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/FpxImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/SgiImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/SgiImagePlugin.py
import PIL.SgiImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/SgiImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/FliImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/FliImagePlugin.py
import PIL.FliImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/FliImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/PcdImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/PcdImagePlugin.py
import PIL.PcdImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/PcdImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/PalmImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/PalmImagePlugin.py
import PIL.PalmImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/PalmImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/XpmImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/XpmImagePlugin.py
import PIL.XpmImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/XpmImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/ImImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/ImImagePlugin.py
import PIL.ImImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/ImImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/SunImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/SunImagePlugin.py
import PIL.SunImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/SunImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/IcnsImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/IcnsImagePlugin.py
import PIL.IcnsImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/IcnsImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/McIdasImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/McIdasImagePlugin.py
import PIL.McIdasImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/McIdasImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/PdfImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/PdfImagePlugin.py
import PIL.PdfImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/PdfImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/GbrImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/GbrImagePlugin.py
import PIL.GbrImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/GbrImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/EpsImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/EpsImagePlugin.py
import PIL.EpsImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/EpsImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/IcoImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/IcoImagePlugin.py
import PIL.IcoImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/IcoImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/SpiderImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/SpiderImagePlugin.py
import PIL.SpiderImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/SpiderImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/PsdImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/PsdImagePlugin.py
import PIL.PsdImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/PsdImagePlugin.pyc
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/python2.6/lib/python2.6/site-packages/PIL/Image.py", line 1916, in open
raise IOError("cannot identify image file")
IOError: cannot identify image file
</code></pre>
<p><strong>UPDATE</strong></p>
<p>Here is the output for $ python manage.py shell (skipping initial import statements):</p>
<pre><code>Python 2.6.4 (r264:75706, Jan 15 2010, 14:42:33)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from PIL import Image
import PIL # directory /opt/python2.6/lib/python2.6/site-packages/PIL
# /opt/python2.6/lib/python2.6/site-packages/PIL/__init__.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/__init__.py
import PIL # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/__init__.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/Image.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/Image.py
import PIL.Image # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/Image.pyc
# /opt/python2.6/lib/python2.6/lib-tk/FixTk.pyc matches /opt/python2.6/lib/python2.6/lib-tk/FixTk.py
import FixTk # precompiled from /opt/python2.6/lib/python2.6/lib-tk/FixTk.pyc
import ctypes # directory /opt/python2.6/lib/python2.6/ctypes
# /opt/python2.6/lib/python2.6/ctypes/__init__.pyc matches /opt/python2.6/lib/python2.6/ctypes/__init__.py
import ctypes # precompiled from /opt/python2.6/lib/python2.6/ctypes/__init__.pyc
dlopen("/opt/python2.6/lib/python2.6/lib-dynload/_ctypes.so", 2);
import _ctypes # dynamically loaded from /opt/python2.6/lib/python2.6/lib-dynload/_ctypes.so
# /opt/python2.6/lib/python2.6/ctypes/_endian.pyc matches /opt/python2.6/lib/python2.6/ctypes/_endian.py
import ctypes._endian # precompiled from /opt/python2.6/lib/python2.6/ctypes/_endian.pyc
dlopen("/opt/python2.6/lib/python2.6/site-packages/PIL/_imaging.so", 2);
import PIL._imaging # dynamically loaded from /opt/python2.6/lib/python2.6/site-packages/PIL/_imaging.so
# /opt/python2.6/lib/python2.6/site-packages/PIL/ImageMode.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/ImageMode.py
import PIL.ImageMode # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/ImageMode.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/ImagePalette.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/ImagePalette.py
import PIL.ImagePalette # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/ImagePalette.pyc
>>> trial_image=Image.open("/tmp/jordanthecoder.jpg")
# /opt/python2.6/lib/python2.6/site-packages/PIL/BmpImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/BmpImagePlugin.py
import PIL.BmpImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/BmpImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/ImageFile.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/ImageFile.py
import PIL.ImageFile # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/ImageFile.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/GifImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/GifImagePlugin.py
import PIL.GifImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/GifImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/JpegImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/JpegImagePlugin.py
import PIL.JpegImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/JpegImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/PpmImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/PpmImagePlugin.py
import PIL.PpmImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/PpmImagePlugin.pyc
# /opt/python2.6/lib/python2.6/site-packages/PIL/PngImagePlugin.pyc matches /opt/python2.6/lib/python2.6/site-packages/PIL/PngImagePlugin.py
import PIL.PngImagePlugin # precompiled from /opt/python2.6/lib/python2.6/site-packages/PIL/PngImagePlugin.pyc
>>> trial_image.verify()
>>>
</code></pre>
<p><strong>UPDATE #2:</strong></p>
<p>Okay, so I decided to bypass ImageField altogether and just see what PIL does on its own. This is the code in question (in my views.py file):</p>
<pre><code>def test_image(request):
i = Image.open("/tmp/jordanthecoder.jpg")
t = i.verify()
return HttpResponse("Image is "+repr(i.info))
</code></pre>
<p>This is the webpage output:</p>
<pre><code>Image is {'jfif_version': (1, 1), 'jfif': 257, 'jfif_unit': 1, 'jfif_density': (72, 72), 'dpi': (72, 72)}
</code></pre>
<p>UPDATE #3</p>
<p>So, these are the modules and paths for the two different systems. I'm not exactly sure why they're so different and what I can do to change the behavior of the web version.</p>
<p>They actually were using different JSON modules; I'm pretty sure that makes no difference. Other than that, here are the differing modules. Assume these modules are all somewhere in /opt/python2.6/...</p>
<p>Only in the web version:</p>
<pre><code>django.contrib.sessions.*, django.core.email, django.core.handlers.*, django.core.mail, django.core.mimetypes, django.core.os, django.core.random, django.core.smtplib, django.core.socket, django.core.time, django.middleware.*, email.*, encodings.ascii, hmac, mimetypes, mod_wsgi, smtplib, uu
</code></pre>
<p>Only in the shell version:</p>
<pre><code>code, codeop, django.core.management.*, readline, rlcompleter, settings, user
</code></pre>
<p>Thanks</p>
<p>** UPDATE #4 **</p>
<p>Looks like the problem is that apache is using the incorrect libjpeg.so, whereas python is using the right one. I've created a more <a href="https://stackoverflow.com/questions/2173103/is-it-possible-to-control-which-libraries-apache-uses">generalized version of the question that isn't specific to Django.</a></p>
|
<p>See possible answer in:</p>
<p><a href="https://stackoverflow.com/questions/2173103/is-it-possible-to-control-which-libraries-apache-uses">Is it possible to control which libraries apache uses?</a></p>
<p>Referencing here so can still get bounty if awarded. :-)</p>
|
python|django|python-imaging-library|django-models
| 2 |
1,905,953 | 32,387,023 |
os.walk() vs os.scandir()
|
<p><strong><a href="https://github.com/benhoyt/scandir" rel="nofollow">os.scandir</a></strong> claims to be a better directory iterator and faster than <strong>os.walk()</strong>. It became part of the stdlib in Python 3. Working in a production environment, what are the things to consider when moving from <strong>os.walk()</strong> to <strong>os.scandir()</strong>?</p>
|
<p>I once used os.scandir() in Python 2.7. It kept crashing because of unusual unicode characters in file names (<code>ù ỳ ǹ</code> and the like). I switched back to os.walk() and everything was fine. I would suggest you test for that if it's a concern.</p>
<p>Apart from that it really is faster, especially on Windows.</p>
|
python|os.walk
| 2 |
1,905,954 | 32,488,384 |
Generate random binary arrays with a specific range of ones
|
<p>I want to generate random binary arrays with a specific range of ones. For example, if I have 6 vectors, each of 6 bits, and a range of 1s from 1-3, then my result would be something like <code>([1,0,0,1,0,0],[1,0,0,0,0,0],[1,1,0,0,0,0],[1,0,1,0,0,1,0], etc)</code>, i.e. some combinations of 0s and 1s where each vector's number of 1s falls within the range I gave.</p>
<p>I am using:</p>
<pre><code>arr = np.array([1] * K + [0] * (N-K))
np.random.shuffle(arr)
</code></pre>
<p>to generate vectors with specific number of 1s and I am using:</p>
<pre><code>arr2 = np.array([1] * (K-K+1) + [0] * (N-1))
np.random.shuffle(arr2)
arr3 = np.array([1] * (K-K+2) + [0] * (N-2))
np.random.shuffle(arr3)
arr4 = np.array([1] * (K-K+3) + [0] * (N-3))
np.random.shuffle(arr4)
arr5 = np.array([1] * (K-K+4) + [0] * (N-4))
np.random.shuffle(arr5)
</code></pre>
<p>and concatenate these together with <code>axis=0</code> to get the vectors I want, but this approach is not efficient and I would like something cleaner and more random.</p>
<p>thanks </p>
|
<p>Did you try this?</p>
<pre><code>arr = np.random.randint(2, size=(r, c))
while arr.sum() != total_of_1:   # total_of_1: the desired number of ones
    arr = np.random.randint(2, size=(r, c))
</code></pre>
<p>where <code>r</code> is the number of rows and <code>c</code> is the number of columns.</p>
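<p>If you want each vector's number of 1s drawn directly from the requested range instead of rejection sampling, a rough sketch building on the shuffle approach already used in the question (the function and argument names are just illustrative) could look like this:</p>
<pre><code>import numpy as np

def random_vectors(num_vectors, length, min_ones, max_ones):
    out = []
    for _ in range(num_vectors):
        k = np.random.randint(min_ones, max_ones + 1)  # how many 1s this vector gets
        v = np.array([1] * k + [0] * (length - k))
        np.random.shuffle(v)                           # randomize their positions
        out.append(v)
    return np.array(out)

# e.g. random_vectors(6, 6, 1, 3) gives six 6-bit vectors with 1 to 3 ones each
</code></pre>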
|
python|python-2.7
| 0 |
1,905,955 | 32,489,360 |
Do I need to use random.seed() with random.uniform() to ensure I get different sequences
|
<p>Do I have to use random.seed() to ensure the random numbers are different, or does random.uniform() already set a new seed every time it is called?
I do not want to repeat sequences, so does it matter whether I use seed() at all?</p>
|
<h2>Case 1:</h2>
<pre><code>import random
random.seed(10)
for i in range(3):
print random.randrange(2000)
</code></pre>
<h2>Output of Case 1:</h2>
<pre><code>$ python b.py
1142
857
1156
$ python b.py
1142
857
1156
</code></pre>
<p>As you can see, in case 1 different sessions (each run of the program is a session) produce the same sequence of random numbers.</p>
<h2>Case 2:</h2>
<pre><code>import random
for i in range(3):
print random.randrange(2000)
</code></pre>
<h2>Output of Case 2:</h2>
<pre><code>$ python b.py
1469
1559
267
$ python b.py
1252
476
1804
</code></pre>
<p>In case 2, on the other hand, the sequences produced in different sessions differ. </p>
<p>This is because when you import random without setting a seed, it picks a seed for you from a system source of randomness (falling back to the current time), so the seed is likely to be different in different sessions. </p>
<p>However, if you override the seed, you will get the same sequence every time, as we see in case 1.</p>
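<p>If you ever do want to reseed explicitly, calling <code>seed()</code> with no argument re-seeds from a system source of randomness (or the current time as a fallback), which gives a fresh, unpredictable sequence:</p>
<pre><code>import random
random.seed()              # no argument: fresh seed from the OS / current time
print random.uniform(0, 1)
</code></pre>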
|
python
| 6 |
1,905,956 | 28,086,495 |
SMBus/I2C in Python keeps triggering receive callback when requesting read
|
<p>I am trying to read some values from an Arduino microcontroller by sending a read request from my PC, but instead of triggering the request callback it triggers the receive callback, which does not make sense to me. I am accessing the I2C bus through SMBus, which also seems to be significantly slower than expected.</p>
<p>Arduino code:</p>
<pre><code>void dataReceive() {
Serial.println("Receive");
}
void dataRequest() {
Serial.println("Request");
Wire.write(1);
}
void setup()
{
Wire.begin(4);
Wire.onReceive(dataReceive);
Wire.onRequest(dataRequest);
}
</code></pre>
<p>PC code:</p>
<pre><code>import smbus
bus = smbus.SMBus(1)
data = bus.read_i2c_block_data(0x04, 0x09, 1)
print data
</code></pre>
<p>I get following error aswell:</p>
<pre><code>Traceback (most recent call last):
File "./i2ctest.py", line 16, in <module>
data = bus.read_i2c_block_data(0x04, 0x09, 1)
IOError: [Errno 11] Resource temporarily unavailable
</code></pre>
<p>I am, however, able to see in the Arduino serial monitor that the <code>dataReceive</code> callback is triggered.</p>
|
<p>Arduino has no repeated start signal in the Wire.h library. Your solution is something like this:</p>
<p>On Arduino side:</p>
<pre><code>void dataReceive(int byteCount) {
  x = 0;
  for (int i = 0; i < byteCount; i++) {
    if (i == 0) {
      x = Wire.read();       // first byte is the "command"
      cmd = "";
    } else {
      char c = Wire.read();  // remaining bytes are the argument
      cmd = cmd + c;
    }
  }
  if (x == 0x09) {
    // Do something arduinoish here with cmd if you need no answer
    // or result from Arduino
    x = 0;
    cmd = "";
  }
}
</code></pre>
<p>This stores the first received byte as the "command" and the rest as the argument part. In your case the command is 0x09 and the argument is 1.</p>
<p>On PC side the python command is this:</p>
<pre><code>bus.write_i2c_block_data(0x05,0x09,buff)
</code></pre>
<p>where <code>buff</code> is a list of byte values to send, e.g. <code>[1]</code> (smbus expects a list of ints rather than a string).</p>
<p>You might need the datarequest event:</p>
<pre><code>void dataRequest() {
x = 0;
Wire.write(0xFF);
}
</code></pre>
<p>This will send back a simple FF. </p>
<p>If you need an answer from the Arduino, process the cmd value in that handler. In that case, on the Python side you will need a bit more:</p>
<pre><code>bus.write_i2c_block_data(0x05,0x09,buff)
tl = bus.read_byte(0x05)
</code></pre>
<p>This sends "1" into command "0x09" to device "0x05". You will then fetch the answer with a read command simply from device "0x05". </p>
|
python|arduino|i2c
| 1 |
1,905,957 | 44,005,652 |
How do I efficiently bin values into overlapping bins using Pandas?
|
<p>I would like to bin all the values from a column of type float into bins that are overlapping. The resulting column could be a series of 1-D vectors with bools - one vector for each value from the original column. The resulting vectors contain <code>True</code> for each bin a value falls into and <code>False</code> for the other bins.</p>
<p>For example, if I have four bins <code>[(0, 10), (7, 20), (15, 30), (30, 60)]</code>, and the original value is 9.5, the resulting vector should be <code>[True, True, False, False]</code>.</p>
<p>I know how to iterate through all the ranges with a custom function using 'apply', but is there a way to perform this binning more efficiently and concisely?</p>
|
<p>Would a simple list comprehension meet your needs?</p>
<pre><code>Bins = [(0, 10), (7, 20), (15, 30), (30, 60)]
Result = [((9.5>=y[0])&(9.5<=y[1])) for y in Bins]
</code></pre>
<p>If your data is stored in column <code>data</code> of a pandas DataFrame (<code>df</code>) then you can define the function:</p>
<pre><code>def in_ranges(x,bins):
return [((x>=y[0])&(x<=y[1])) for y in bins]
</code></pre>
<p>and apply it to the column:</p>
<pre><code>df[data].apply(lambda x: pd.Series(in_ranges(x,Bins),Bins))
</code></pre>
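<p>If performance matters, the same overlapping-bin membership can be computed in one vectorised step with NumPy broadcasting. This is only a sketch and assumes the column of interest is <code>df['data']</code>:</p>
<pre><code>import numpy as np
import pandas as pd

lows = np.array([b[0] for b in Bins])
highs = np.array([b[1] for b in Bins])
vals = df['data'].values[:, None]            # shape (n_rows, 1)

mask = (vals >= lows) & (vals <= highs)      # shape (n_rows, n_bins) of booleans
result = pd.DataFrame(mask, columns=Bins, index=df.index)
</code></pre>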
|
python|pandas|binning
| 4 |
1,905,958 | 44,188,070 |
Determining Thousandths in a number
|
<p>When an aircraft is flying VFR in the US and the heading is east, the altitude must be an odd thousand plus 500 feet (1500, 3500, 5500, etc). If flying west, the altitude must be an even thousand plus 500 feet (2500, 4500, 6500, etc). If I input a given altitude but its thousands part is the wrong parity (odd or even) for the heading, how do I get Python to correct it to the next higher odd or even thousand (1500 becomes 2500, 6500 becomes 7500, etc)? We never round down for altitudes. Thanks!</p>
|
<p>You could divide your altitude by 1000.0 and cast to an int which would drop the decimal: </p>
<p><code>if int(altitude/1000.0) % 2 == 0</code> </p>
<p>Then you can do whatever you want with that information.</p>
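<p>For illustration, a minimal sketch of the full correction, assuming the heading is reduced to "east"/"west" and the altitude already ends in 500 (the names are just illustrative):</p>
<pre><code>def correct_altitude(altitude, heading):
    thousands = int(altitude / 1000)     # e.g. 6500 -> 6
    wants_odd = (heading == "east")      # east: odd thousands, west: even
    if (thousands % 2 == 1) != wants_odd:
        altitude += 1000                 # always round up, never down
    return altitude

# correct_altitude(1500, "west") -> 2500; correct_altitude(6500, "east") -> 7500
</code></pre>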
|
python|rounding
| 0 |
1,905,959 | 32,834,016 |
Python path in .bash_profile not respected
|
<p>Been Googling and searching here to no avail, so forgive me if this is a duplicate.</p>
<p>Basically, I installed Python 3.4 on my machine (Mac running Yosemite 10.10.2), but when I run <code>python</code> in Terminal, it starts up Python2.7.whatever, which I'm assuming is the version that is often mentioned as coming pre-installed on Macs. I've checked my ~/.bash_profile using vim, and here's what it currently contains:</p>
<pre><code># Setting PATH for Python 3.4
# The orginal version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/3.4/bin:${PATH}"
export PATH
# Virtualenv Wrapper stuff
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
source /usr/local/bin/virtualenvwrapper.sh
</code></pre>
<p>Which, according to all the reading I've been doing, should work. But it isn't. Any and all thoughts as to why are appreciated.</p>
|
<p>Usually you would just type <code>python3</code> or <code>python3.4</code> to get a specific version. You should try that.</p>
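<p>To check what the PATH entry above actually resolves to, you can run the following in Terminal (the output depends on your install):</p>
<pre><code>$ which python3
$ python3 --version
</code></pre>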
<p>Most developers also use virtual environments which give you the ability to handle python versions and modules per application. There is an introduction at <a href="http://docs.python-guide.org/en/latest/dev/virtualenvs/" rel="nofollow">http://docs.python-guide.org/en/latest/dev/virtualenvs/</a></p>
|
python|macos|path
| 0 |
1,905,960 | 13,989,166 |
Why are my Pylot graphs blank?
|
<p>I'm using Pylot 1.26 with Python 2.7 on Windows 7 64bit having installed Numpy 1.6.2 and Matplotlib 1.1.0. The test case executes and produces a report but the response time graph is empty (no data) and the throughput graph is just one straight line. </p>
<p>I've tried the 32 bit and 64 bit installers but the result is the same. </p>
|
<p>I had the same problem. I spent some time today debugging a few things, and I realized that in my case the data collected to plot the charts wasn't correct and needed adjusting. What I did was change the time from absolute to relative and dynamically adjust the range of the axis. I'm not that good at Python, so my code doesn't look that great.</p>
|
python|numpy|matplotlib|pylot
| 0 |
1,905,961 | 13,794,178 |
Renaming a file on S3
|
<blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://stackoverflow.com/questions/91846/rails-or-django-or-something-else">Rails or Django? (or something else?)</a><br>
<a href="https://stackoverflow.com/questions/2481685/amazon-s3-boto-how-to-rename-a-file-in-a-bucket">Amazon S3 boto: how to rename a file in a bucket?</a> </p>
</blockquote>
<p>I am using this:
<a href="http://django-storages.readthedocs.org/en/latest/backends/amazon-S3.html" rel="nofollow noreferrer">http://django-storages.readthedocs.org/en/latest/backends/amazon-S3.html</a></p>
<p>I need to rename a file, how can I do that?</p>
<p>I searched the docs thoroughly and couldn't find anything.</p>
|
<p>You may use the copy operation; see <a href="http://docs.amazonwebservices.com/AmazonS3/latest/dev/CopyingObjectsExamples.html">http://docs.amazonwebservices.com/AmazonS3/latest/dev/CopyingObjectsExamples.html</a>. Using the copy operation, you can:</p>
<ul>
<li><p>Create additional copies of objects</p></li>
<li><p><strong>Rename objects by copying them and deleting the original ones</strong></p></li>
<li><p>Move objects across Amazon S3 locations (e.g., Northern California
and EU)</p></li>
</ul>
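<p>A quick sketch of the rename-via-copy idea using classic boto (the library django-storages wraps); the bucket and key names here are placeholders:</p>
<pre><code>import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('my-bucket')

# "rename" = copy to the new key, then delete the old one
bucket.copy_key('new/name.txt', 'my-bucket', 'old/name.txt')
bucket.delete_key('old/name.txt')
</code></pre>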
|
python|django|amazon-s3|boto
| 6 |
1,905,962 | 54,651,410 |
Custom Authentication in django is not working
|
<p>I am new to Django and I want to authenticate users by <code>email</code> or <code>username</code> together with a <code>password</code>, so I wrote a custom authentication backend as shown in the documentation, but it doesn't seem to be called and I have no idea what to do.</p>
<p><strong>settings.py</strong></p>
<pre><code>AUTHENTICATION_BACKENDS = ('accounts.backend.AuthBackend',)
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>def login(request):
if request.method == 'POST':
username_or_email = request.POST['username']
password = request.POST['password']
user = authenticate(username=username_or_email, password=password)
print(user)
if user is not None:
return reverse('task:home')
else:
messages.error(request, "Username or password is invalid")
return render(request, 'accounts/login.html')
else:
return render(request, 'accounts/login.html')
</code></pre>
<p><strong>backend.py</strong></p>
<pre><code>from django.contrib.auth.models import User
from django.db.models import Q
class AuthBackend(object):
supports_object_permissions = True
supports_anonymous_user = False
supports_inactive_user = False
def get_user(self, user_id):
try:
return User.objects.get(pk=user_id)
except User.DoesNotExist:
return None
def authenticate(self, username, password):
print('inside custom auth')
try:
user = User.objects.get(
Q(username=username) | Q(email=username) )
print(user)
except User.DoesNotExist:
return None
print(user)
if user.check_password(password):
return user
else:
return None
</code></pre>
<p>I wrote these <code>print</code> statements in my class to check whether they are being called and written to the console. However, they are not being printed, and the <code>print</code> statement in <code>views.py</code> prints <code>None</code>.</p>
|
<p>You need to <code>extend</code> the <code>ModelBackend</code> from <code>django.contrib.auth.backends</code> </p>
<pre><code>from django.contrib.auth import get_user_model
from django.contrib.auth.backends import ModelBackend
from django.db.models import Q

User = get_user_model()
class AuthBackend(ModelBackend):
supports_object_permissions = True
supports_anonymous_user = False
supports_inactive_user = False
def get_user(self, user_id):
try:
return User.objects.get(pk=user_id)
except User.DoesNotExist:
return None
def authenticate(self, request, username=None, password=None):
print('inside custom auth')
try:
user = User.objects.get(
Q(username=username) | Q(email=username) )
print(user)
except User.DoesNotExist:
return None
print(user)
if user.check_password(password):
return user
else:
return None
</code></pre>
<p>And also in <code>settings.py</code> don't forget to add your custom backend authentication</p>
<pre><code>AUTHENTICATION_BACKENDS = [
'django.contrib.auth.backends.ModelBackend',
'accounts.backend.AuthBackend'
]
</code></pre>
<h3>Another Possible Solution</h3>
<p>From your code, what I am seeing is that you want your <code>email</code> field to be treated as the username of the <code>User</code> model. You can easily modify Django's <code>AbstractUser</code> model like the following:</p>
<pre><code>from django.contrib.auth.models import AbstractUser
class User(AbstractUser):
# your necessary additional fields
USERNAME_FIELD = 'email' # add this line
</code></pre>
<p>Now the <code>email</code> field will be treated as the username field (make sure it is unique). No need to add a custom <code>authentication-backend</code>.</p>
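<p>Note that a custom user model also has to be registered in settings (the <code>accounts</code> app label below is an assumption based on the imports in your question):</p>
<pre><code># settings.py
AUTH_USER_MODEL = 'accounts.User'
</code></pre>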
|
django|python-3.x|django-authentication
| 4 |
1,905,963 | 34,666,482 |
Python test whole script
|
<p>I have a Python script. I use <code>unittest</code> for tests, but how can I test the whole script?</p>
<p>My idea is something like this:</p>
<pre><code>def test_script(self):
output=runScript('test.py --a 5 --b 3')
self.assertEqual(output, '8')
</code></pre>
<p>test.py takes arguments a and b and prints a+b, in this case 8.</p>
|
<p>You can use the <a href="https://docs.python.org/2/library/subprocess.html" rel="nofollow">subprocess</a> library to call a script and capture the output. </p>
<pre><code>import subprocess
p = subprocess.Popen(
['./test.py', '--a', ...],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT
)
print p.stdout.read()
</code></pre>
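<p>A minimal way to wire that into the unittest from the question (assuming test.py is in the working directory and prints the sum followed by a newline):</p>
<pre><code>import subprocess
import sys
import unittest

class ScriptTest(unittest.TestCase):
    def test_script(self):
        output = subprocess.check_output(
            [sys.executable, 'test.py', '--a', '5', '--b', '3'])
        self.assertEqual(output.decode().strip(), '8')

if __name__ == '__main__':
    unittest.main()
</code></pre>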
|
python|unit-testing|python-3.x|testing|automated-tests
| 2 |
1,905,964 | 34,549,601 |
Python SUDS doesn't include parameter in call
|
<p>I am new to Python and suds. Using SOAP UI, the call to my service looks like this:</p>
<pre><code><soapenv:Envelope
xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ns="<URL to service>"
xmlns:ns1="<URL to second namespace>">
<soapenv:Header/>
<soapenv:Body>
<ns:AuthenticateCaller>
<!--Optional:-->
<ns:request>
<ns1:LoanAccountNumber>292206816</ns1:LoanAccountNumber>
</ns:request>
</ns:AuthenticateCaller>
</soapenv:Body>
</soapenv:Envelope>
</code></pre>
<p>I tried the following using suds:</p>
<pre><code>from suds.xsd.doctor import ImportDoctor, Import
imp = Import(<URL to service>)
imp.filter.add(<URL to second namespace>)
doctor = ImportDoctor(imp)
client = Client(url, doctor=doctor)
client.service.AuthenticateCaller(LoanAccountNumber='292206816')
</code></pre>
<p>The generated XML looks like this:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
xmlns:ns0="<URL to service>"
xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/
envelope/">
<SOAP-ENV:Header/>
<ns1:Body>
<ns0:AuthenticateCaller/>
</ns1:Body>
</SOAP-ENV:Envelope>
</code></pre>
<p>It is missing the LoanAccountNumber parameter in the call which is the key to the API. It is also missing the second namespace which I thought ImportDoctor was supposed to fix.</p>
<p>My question is: what am I missing such that the LoanAccountNumber isn't included in the call to the API?</p>
|
<p>It seems that the following instructions will help you:</p>
<p>First of all, print your <code>Client</code> instance, which is <code>client</code> in your code, so you see something like this:</p>
<pre><code>Suds ( https://fedorahosted.org/suds/ ) version: 0.4 GA build: R699-20100913
Service ( YourService_cmV6YW9ubGluZS5uZXQ= ) tns="http://www.yourservice.com/soap/YourService_cmV6YW9ubGluZS5uZXQ="
Prefixes (1)
ns0 = "http://schemas.xmlsoap.org/soap/encoding/"
Ports (1):
(YourService_cmV6YW9ubGluZS5uZXQ=Port)
Methods (2):
your_method(xs:string _your_param)
Types (48):
ns0:Array
ns0:ENTITIES
ns0:ENTITY
ns0:ID
ns0:NOTATION
ns0:Name
ns0:QName
ns0:Struct
ns0:anyURI
ns0:arrayCoordinate
ns0:base64
ns0:base64Binary
ns0:boolean
ns0:byte
ns0:date
ns0:dateTime
ns0:decimal
ns0:double
ns0:duration
ns0:float
ns0:hexBinary
ns0:int
ns0:integer
ns0:language
ns0:long
ns0:negativeInteger
ns0:positiveInteger
ns0:short
ns0:string
ns0:time
ns0:token
</code></pre>
<p>then find your appropriate parameter type, and create your parameter in the following way:</p>
<pre><code>your_param = client.factory.create("ns0:string")
your_param.value = your_value
</code></pre>
<p>(Complex types follow the same idea, just with more involved factory objects.)</p>
<p>Now, you can call your method as follows:</p>
<pre><code>client.service.your_method(your_param)
</code></pre>
<p>And Enjoy!</p>
|
python|parameter-passing|suds
| 1 |
1,905,965 | 34,716,513 |
sorl-thumbnail - some jpegs are not processed
|
<p>I have a few pictures which I upload from the Django admin using sorl.thumbnail.ImageField, and it works perfectly - the image is uploaded to the media folder, thumbnails are created in the cache subdir, and the proper entries are created in thumbnail_kvstore.</p>
<p>I have some images which do not work, though - they are uploaded to the media folder, but no cache entries and no thumbnails are created. No errors or anything are raised.</p>
<pre><code>Django==1.8
Pillow==3.0.0
sorl-thumbnail==12.3
</code></pre>
<p>I obtained this picture by resizing down a bigger one, but the same behaviour applies to the original picture as well.
What should I look into to make those pictures work?</p>
<p><a href="https://i.stack.imgur.com/1qFsm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1qFsm.jpg" alt="E.g. not working image"></a></p>
|
<p>Try to use Pillow==2.9.3 or wait for 3.1.0. <a href="https://github.com/python-pillow/Pillow/pull/1492" rel="nofollow">More info</a>.</p>
|
python|django
| 1 |
1,905,966 | 27,198,287 |
Tkinter. Create multiple buttons with "different" command function
|
<p>first of all, sorry for the title, I couldn't find a better one.</p>
<p>The following code is a minimalized version of a problem I have in my Python program (I am a newbie btw.).</p>
<pre><code>def onClick(i):
print "This is Button: " + str(i)
return
def start():
b = [0 for x in range(5)]
win = Tkinter.Tk()
for i in range(5):
b[i] = Tkinter.Button(win,height=10,width=100,command=lambda : onClick(i))
b[i].pack()
return
</code></pre>
<p>What it does:
Whatever Button I click, it says "This is Button: 4".</p>
<p>What I want:
First button should say "This is Button: 0" and so on.</p>
<p>Is this intended behaviour in Python? If so, why? And how can I fix it?</p>
<p>On the other hand, this works fine:</p>
<pre><code>def start():
x = [0 for x in range(5)]
for i in range(5):
x[i] = lambda:onClick(i)
x[i]()
return
</code></pre>
|
<p>Use default parameter to avoid late-binding issue (Otherwise <code>i</code> is bound when the lambda function is called, not when it is created):</p>
<pre><code>def start():
buttons = []
win = Tkinter.Tk()
for i in range(5):
b = Tkinter.Button(win, height=10, width=100, command=lambda i=i: onClick(i))
b.pack()
buttons.append(b)
</code></pre>
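<p>An equivalent alternative is <code>functools.partial</code>, which freezes the current value of <code>i</code> without needing a lambda:</p>
<pre><code>import functools

b = Tkinter.Button(win, height=10, width=100, command=functools.partial(onClick, i))
</code></pre>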
|
python|python-2.7|user-interface|tkinter|late-binding
| 12 |
1,905,967 | 27,010,584 |
How can I split this counter and export it to .csv?
|
<p>I'm working on this Python (2.7) code, analyzing a .txt file for the 50 most common words. The next step is exporting the words and their number of occurrences to a .csv file. I am exporting to the .csv file, but the code keeps grouping each word and its count together with punctuation and parentheses. I need two columns, with each pair on a new row.</p>
<p>For example: ('the', 329) needs to appear as two different columns, the | 329</p>
<p>I think I can pull it off using regex, but I really don't know how. Any help is appreciated.</p>
<pre><code>import re
import collections
import csv
from collections import Counter
words = re.findall('\w+', open('document.txt').read().lower())
thing = Counter(words).most_common(50)
PDFiles = "PDFiles.csv"
with open(PDFiles, "w") as output:
writer = csv.writer(output, lineterminator='\n')
for val in thing:
writer.writerow(val) # edited
</code></pre>
<p>With the latest edit, the text appears as</p>
<pre><code>tell | 329
0| 65
</code></pre>
|
<p>Try this:</p>
<pre><code>writer.writerow([val[0], val[1]])
</code></pre>
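<p>A sketch of the full writing loop (<code>most_common()</code> already yields <code>(word, count)</code> tuples, so unpacking them gives you the two columns):</p>
<pre><code>with open(PDFiles, "wb") as output:   # "wb" avoids blank rows with Python 2's csv on Windows
    writer = csv.writer(output)
    for word, count in thing:
        writer.writerow([word, count])
</code></pre>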
|
python|csv
| 0 |
1,905,968 | 12,521,675 |
Given module m and code object c, what does "exec c in m.__dict__" do?
|
<p>I'm writing Python 3 code and for some reason I want to run everything just in memory and save no files on disk. I managed to solve almost all of my problems so far by reading answers here, but I'm stuck on these lines:</p>
<pre><code>>>> code = compile(source, filename, 'exec')
>>> exec code in module.__dict__
</code></pre>
<p>I don't really understand what the second line does, since I only associate "in" with loops and membership tests, which is not the case here.</p>
<p>So, what does the second line do? And what is its Python 3 equivalent, since in Python 3 exec is a function, not a keyword?</p>
|
<pre><code>exec code in module.__dict__
</code></pre>
<p>means execute the commands in the file or string called 'code', taking global and local variables referred to in 'code' from <code>module.__dict__</code> and storing local and global variables created in 'code' into the dictionary <code>module.__dict__</code></p>
<p>See <a href="http://docs.python.org/reference/simple_stmts.html#exec" rel="noreferrer">http://docs.python.org/reference/simple_stmts.html#exec</a></p>
<p>Eg:</p>
<pre><code>In [51]: mydict={}
In [52]: exec "val1=100" in mydict
In [53]: mydict['val1']
Out[53]: 100
</code></pre>
<p>Eg2:</p>
<pre><code>In [54]: mydict={}
In [55]: mydict['val2']=200
In [56]: exec "val1=val2" in mydict
In [57]: mydict.keys()
Out[57]: ['__builtins__', 'val2', 'val1']
In [58]: mydict['val2']
Out[58]: 200
In [59]: mydict['val1']
Out[59]: 200
</code></pre>
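<p>The Python 3 equivalent, where <code>exec</code> is a function, is to pass the module's dictionary as the globals argument:</p>
<pre><code>code = compile(source, filename, 'exec')
exec(code, module.__dict__)
</code></pre>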
|
python|exec
| 5 |
1,905,969 | 23,312,182 |
Receiving serial port data: real-time web display + logging (with downsampling)
|
<p>I am working on a small project which involves displaying and recording (for later processing) data received through a serial port connection from some sort of measurement device. I am using a Raspberry Pi to read and store the received information: this is done with a small program written in Python which opens the serial device, reads a frame and stores the data in a MySQL database (there is no need to poll or interact with the device, data is sent automatically).</p>
<p>The serial data is formatted into frames about 2.5kbits long, which are sent repeatedly at 1200baud, which means that a new frame is received about every 2 seconds.</p>
<p>Now, even though the useful data is just a portion of the frame, that is way too much information to store for what I need, so what I'm currently doing is "downsampling" the data by reading a frame only once per minute. Currently this is done via a cron task which calls my logging script every minute.</p>
<p>The problem with my setup is that the PHP webpage used to display (and process) the received data (pulled from the MySQL database) cannot show new data more than once per minute.</p>
<p>Thus here comes my question:</p>
<blockquote>
<p><strong>How would you do to make the webpage show the live data (which doesn't need to be saved), while keeping the logging to the MySQL database @ once per minute?</strong></p>
</blockquote>
<p>I guess the solution would involve some sort of daemon, which stores the data at the specified frequency (once per minute), while keeping the latest received data available for the php webpage (how?). What do you think? Do you have any examples of similar code/applications which I could use as a starting point?</p>
<p>Thanks!</p>
|
<p>I don't know if I understand your problem correctly, but it appears you want to show a non-stop “stream” of data with your PHP script. If that's the case, I'm afraid this won't be so easy.</p>
<p>The basic idea of the HTTP protocol is request/response based. Your browser sends a request and receives a (static) response.</p>
<p>You could build some sort of “streaming” server, but streaming (such as done by youtube.com) is also not much more than periodically sending static chunks of a video file, and the player re-assembles them into a video or audio “stream”.</p>
<p>You could, however, look into concepts like “web sockets” and “long polling”. For example, you could create a long-running PHP script that reads a certain file once every two seconds and outputs the value. (Remember to use <code>flush()</code>, or output will be buffered.)</p>
<p>A smart solution could even output a JavaScript snippet every two seconds, which again would update some sort of <code><div></code> container displaying charts and what not.</p>
<p>There are for example implementations of progress meters implemented with this type of approach.</p>
|
php|python|mysql|logging|serial-port
| 0 |
1,905,970 | 903,582 |
How can I draw automatic graphs using dot in Python on a Mac?
|
<p>I am producing graphs in a Python program, and now I need to visualize them.</p>
<p>I am using Tkinter as GUI to visualize all the other data, and I would like to have a small subwindow inside with the graph of the data.
At the moment I have the data being represented in a .dot file. And then I keep graphviz open, which shows the graph. But this is of course suboptimal. I need to get the graph inside the tk window.</p>
<p>I thought about using graphviz from the command line, but I always run into the same well known bug:</p>
<pre><code>Desktop ibook$ dot -Tpng -O 1.dot
dyld: lazy symbol binding failed: Symbol not found: _pixman_image_create_bits
Referenced from: /usr/local/lib/graphviz/libgvplugin_pango.5.dylib
Expected in: flat namespace
dyld: Symbol not found: _pixman_image_create_bits
Referenced from: /usr/local/lib/graphviz/libgvplugin_pango.5.dylib
Expected in: flat namespace
Trace/BPT trap
</code></pre>
<p>The bug seem to be well known in the Graphviz community:</p>
<p><a href="http://www.graphviz.org/bugs/b1479.html" rel="nofollow noreferrer">http://www.graphviz.org/bugs/b1479.html</a></p>
<p><a href="http://www.graphviz.org/bugs/b1488.html" rel="nofollow noreferrer">http://www.graphviz.org/bugs/b1488.html</a></p>
<p><a href="http://www.graphviz.org/bugs/b1498.html" rel="nofollow noreferrer">http://www.graphviz.org/bugs/b1498.html</a></p>
<p>So since it seems that I cannot use the command line utility I was wondering if anyone knew a direct way to draw a dot graph in Python, without using the command line, or doing something that would incur the same error?</p>
<p>I am programming on a Mac Leopard, python 2.5.2</p>
|
<p>I do not have a mac to test it on, but the <a href="http://networkx.lanl.gov/index.html" rel="nofollow noreferrer">NetworkX</a> package includes methods to <a href="http://networkx.lanl.gov/reference/generated/networkx.read_dot.html" rel="nofollow noreferrer">read .dot files</a> and <a href="http://networkx.lanl.gov/reference/generated/networkx.draw.html" rel="nofollow noreferrer">draw graphs</a> using <a href="http://matplotlib.sourceforge.net/" rel="nofollow noreferrer">matplotlib</a>. You can embed a matplotlib figure in Tk (<a href="http://matplotlib.sourceforge.net/examples/user_interfaces/embedding_in_tk.html" rel="nofollow noreferrer">example 1</a>, <a href="http://matplotlib.sourceforge.net/examples/user_interfaces/embedding_in_tk2.html" rel="nofollow noreferrer">example 2</a>).</p>
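<p>A rough sketch of that route (the dot-reading helper needs pygraphviz or pydot installed, and its exact name varies across NetworkX versions):</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt

G = nx.read_dot('1.dot')   # parse the existing .dot file
nx.draw(G)                 # draw with matplotlib, no Graphviz binary needed
plt.show()                 # or embed the figure in your Tk window
</code></pre>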
|
python|macos|graphviz|dot|dyld
| 2 |
1,905,971 | 714,881 |
How to include external Python code to use in other files?
|
<p>If you have a collection of methods in a file, is there a way to include those files in another file, but call them without any prefix (i.e. file prefix)?</p>
<p>So if I have:</p>
<pre><code>[Math.py]
def Calculate ( num )
</code></pre>
<p>How do I call it like this:</p>
<pre><code>[Tool.py]
using Math.py
for i in range ( 5 ) :
Calculate ( i )
</code></pre>
|
<p>You will need to import the other file as a module like this:</p>
<pre><code>import Math
</code></pre>
<p>If you don't want to prefix your <code>Calculate</code> function with the module name then do this:</p>
<pre><code>from Math import Calculate
</code></pre>
<p>If you want to import all members of a module then do this:</p>
<pre><code>from Math import *
</code></pre>
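<p>For the example in the question that would be (assuming Math.py defines <code>Calculate</code>):</p>
<pre><code>from Math import Calculate

for i in range(5):
    Calculate(i)
</code></pre>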
<p><strong>Edit:</strong> <a href="https://linux.die.net/diveintopython/html/object_oriented_framework/importing_modules.html" rel="noreferrer">Here is a good chapter</a> from <a href="https://linux.die.net/diveintopython/html/toc/index.html" rel="noreferrer">Dive Into Python</a> that goes a bit more in depth on this topic.</p>
|
python
| 159 |
1,905,972 | 41,771,349 |
Merge columns with same header no duplicate columns
|
<p>I've tried looking into many articles explaining how merge, concat, join, etc. work for pandas in Python, or in R generally. Nothing seems to work the way I need when I test it with my data. I'm going to post sample data with arbitrary numbers and headers that has the characteristics of my data, plus how I want it to look in the final product. I have generally tried using Genus as my common column, because that column has the most complete info and all other columns describe it. These are text files.</p>
<p>Dataframe 1:</p>
<pre><code>Genus Data Facts Info
Dog 1 2 N/A
Cat 3 1 N/A
Elephant N/A 3 3
Pig N/A N/A N/A
Mouse N/A N/A N/A
</code></pre>
<p>Dataframe 2:</p>
<pre><code>Genus Info Stats
Dog 2 3
Cat 1 2
Elephant N/A 1
Pig N/A N/A
Mouse N/A N/A
Bird N/A N/A
</code></pre>
<p>Desired Outcome:</p>
<pre><code>Genus Data Facts Info Stats
Dog 1 2 2 3
Cat 3 1 1 2
Elephant N/A 3 3 1
Pig N/A N/A N/A N/A
Mouse N/A N/A N/A N/A
Bird N/A N/A N/A N/A
</code></pre>
<p>Is there any way to create this outcome using either Python or R? I'm kind of new to both and don't know all of the ins and outs, so I may just be missing something or not searching with the right terminology; I've been trying for about 3 weeks now, reading what other people have done in similar situations and trying to build off of them. I can't use Excel because it auto-changes some number inputs into dates and makes other small changes that someone redoing this may not realize they need to fix.</p>
|
<p>Here is how you can do that with pandas in python:</p>
<pre><code>import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.random.randn(3,4), columns=['a','b','c','d'])
df2 = pd.DataFrame(np.random.randn(3,2), columns=['e','f'])
pd.concat([df1, df2], axis=1)
# a b c d e f
# 0 -1.181554 0.918146 0.547498 -0.409452 -1.852066 -0.377525
# 1 0.508757 0.219863 1.945935 1.285512 -0.076156 0.172197
# 2 -0.186153 -1.784728 -0.200416 1.873692 2.097818 0.575256
</code></pre>
<p>(McKinney, Python for Data Analysis, p. 186)</p>
<p>edit:</p>
<p>Oops, I noticed you have an Info column in both dataframes. I think this would work better in that case:</p>
<pre><code>df1.combine_first(df2)
</code></pre>
<blockquote>
<p>you can think of it as "patching" missing data in the calling object with data from the object you pass (McKinney)</p>
</blockquote>
<p>edit:</p>
<p>Another quote from McKinney, p177 provides</p>
<blockquote>
<p><code>pandas.merge</code> connects rows of DataFrames based on one or more keys [like a database join]</p>
<p><code>pandas.concat</code> glues or stacks together objects along an axis</p>
<p><code>combine_first</code> enables splicing together overlapping data to fill in missing values in one object with values from another</p>
</blockquote>
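<p>Applied to the frames in the question, a sketch (assuming the text files are read into <code>df1</code> and <code>df2</code> with the N/A entries parsed as NaN, e.g. <code>pd.read_csv(..., sep='\t', na_values='N/A')</code>) would be:</p>
<pre><code>merged = df1.set_index('Genus').combine_first(df2.set_index('Genus')).reset_index()
</code></pre>
<p>This fills the missing Info values in df1 from df2, keeps rows such as Bird that only exist in df2, and adds the Stats column; the column order may differ from the desired output.</p>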
|
python|r
| 3 |
1,905,973 | 47,103,242 |
python passing command line argument with-in program
|
<p>I have the Python program below, which takes an argument from the command line.</p>
<pre><code>import argparse
import sys
choice=['orange','apple', 'grapes', 'banna']
def select():
parser = argparse.ArgumentParser(prog='one', description='name test')
parser.add_argument(
'fname',
action="store",
type=str,
choices=choice,
help="furits name")
args = parser.parse_args(sys.argv[1:2])
print 'selected name {0}\n'.format(args.fname)
if __name__ == '__main__':
select()
</code></pre>
<p>this works</p>
<pre><code> python s.py apple
selected name apple
</code></pre>
<p>How can I supply the argument inline within the main function? I tried this but it's not working.</p>
<p>I changed the main block to this:</p>
<pre><code>if __name__ == '__main__':
sys.argv[0]='apple'
select()
</code></pre>
<p>getting below error.</p>
<blockquote>
<pre><code>usage: one [-h] {orange,apple,grapes,banna}
one: error: too few arguments
</code></pre>
</blockquote>
<p>How can I achieve this with argparse?</p>
<p>thanks
-SR</p>
|
<p>Your index is wrong: <code>sys.argv[0]</code> will be the path of the Python script. What you want is:</p>
<pre><code>if __name__ == '__main__':
if len(sys.argv) == 1:
sys.argv.append("apple")
select()
</code></pre>
<p>But, this is a weird way of doing things. After a bit more thought, this occurred to me:</p>
<pre><code>choice=['orange','apple', 'grapes', 'banna']
def select():
parser = argparse.ArgumentParser(prog='one', description='name test')
parser.add_argument(
'fname',
nargs='?',
default='apple',
action="store",
type=str,
choices=choice,
help="furits name")
args = parser.parse_args(sys.argv[1:2])
print 'selected name {0}\n'.format(args.fname)
if __name__ == '__main__':
select()
</code></pre>
<p>Note the <code>nargs='?'</code> and <code>default='apple'</code> additions to the call to add_argument(). These make the parameter optional and set the default value to "apple" if no argument is supplied.</p>
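<p>Alternatively, <code>parse_args()</code> accepts an explicit argument list, which avoids touching <code>sys.argv</code> at all:</p>
<pre><code>args = parser.parse_args(['apple'])
</code></pre>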
|
python|python-2.7|argparse
| 1 |
1,905,974 | 71,023,519 |
Randomly changing for loop values
|
<p>I've been working on a deep q learning snake game in my free time, with plans to add genetic algorithm components to it. To that end, I was setting up loops that would allow me to create a given population of snakes that would each run for some number of episodes for a total of some number of generations.</p>
<p>It should be simple. Just some nested for loops. Only, I've been getting some pretty wild results from my for loops.</p>
<p>This is the code in question:</p>
<pre><code>def run(population_size=1, max_episodes=10, max_generations=50):
total_score = 0
agents = [Agent() for i in range(population_size)]
game = SnakeGameAI()
for cur_gen in range(max_generations):
game.generation = cur_gen
for agent_num, agent in enumerate(agents):
# Set colors
game.color1 = agent.color1
game.color2 = agent.color2
# Set agent number
game.agent_num = agent_num
for cur_episode in range(1, max_episodes+1):
# Get old state
state_old = agent.get_state(game)
# Get move
final_move = agent.get_action(state_old)
# Perform move and get new state
reward, done, score = game.play_step(final_move)
state_new = agent.get_state(game)
# Train short memory
agent.train_short_memory(state_old, final_move, reward, state_new, done)
# Remember
agent.remember(state_old, final_move, reward, state_new, done)
# Snake died
if done:
# Train long memory, plot result
game.reset()
agent.episode = cur_episode
game.agent_episode = cur_episode
agent.train_long_memory()
if score > game.top_score:
game.top_score = score
agent.model.save()
total_score += score
game.mean_score = np.round((total_score / cur_episode), 3)
print(f"Agent{game.agent_num}")
print(f"Episode: {cur_episode}")
print(f"Generation: {cur_gen}")
print(f"Score: {score}")
print(f"Top Score: {game.top_score}")
print(f"Mean: {game.mean_score}\n")
</code></pre>
<p>And this is the output it gives:</p>
<pre><code>Agent0
Episode: 3
Generation: 7
Score: 0
Top Score: 0
Mean: 0.0
Agent0
Episode: 3
Generation: 14
Score: 0
Top Score: 0
Mean: 0.0
Agent0
Episode: 7
Generation: 20
Score: 1
Top Score: 1
Mean: 0.143
Agent0
Episode: 10
Generation: 26
Score: 0
Top Score: 1
Mean: 0.1
Agent0
Episode: 6
Generation: 28
Score: 1
Top Score: 1
Mean: 0.333
Agent0
Episode: 5
Generation: 37
Score: 0
Top Score: 1
Mean: 0.4
Agent0
Episode: 3
Generation: 43
Score: 0
Top Score: 1
Mean: 0.667
Agent0
Episode: 1
Generation: 45
Score: 1
Top Score: 1
Mean: 3.0
Agent0
Episode: 2
Generation: 49
Score: 0
Top Score: 1
Mean: 1.5
</code></pre>
<p>The generation number steadily ticks up every second until it hits 49 and ends the loop, while the episode number randomly changes every time the snake dies. It's bizarre. I've never seen anything like this and have no idea what in my code could possibly cause it.</p>
|
<p><strong>Answer:</strong></p>
<p>For everyone who doesn't want to go through the comments where
Eli Harold helped me work this out, the problem was that I had my code treating each episode like a frame of the game. So instead of an episode being the full lifespan of a snake (an entire game), every time the snake took an action was an episode.</p>
<p>Here's what my code looks like now. I added a run loop, which fixed the issue.</p>
<pre><code>def run(population_size=1, max_episodes=10, max_generations=50):
total_score = 0
agents = [Agent() for i in range(population_size)]
game = SnakeGameAI()
for cur_gen in range(max_generations):
game.generation = cur_gen
for agent_num, agent in enumerate(agents):
# Set colors
game.color1 = agent.color1
game.color2 = agent.color2
# Set agent number
game.agent_num = agent_num
for cur_episode in range(1, max_episodes+1):
run = True
while run:
# Get old state
state_old = agent.get_state(game)
# Get move
final_move = agent.get_action(state_old)
# Perform move and get new state
reward, done, score = game.play_step(final_move)
state_new = agent.get_state(game)
# Train short memory
agent.train_short_memory(state_old, final_move, reward, state_new, done)
# Remember
agent.remember(state_old, final_move, reward, state_new, done)
# Snake died
if done:
run = False
# Train long memory, plot result
game.reset()
agent.episode = cur_episode
game.agent_episode = cur_episode
agent.train_long_memory()
if score > game.top_score:
game.top_score = score
agent.model.save()
total_score += score
game.mean_score = np.round((total_score / cur_episode), 3)
print(f"Agent{game.agent_num}")
print(f"Episode: {cur_episode}")
print(f"Generation: {cur_gen}")
print(f"Score: {score}")
print(f"Top Score: {game.top_score}")
print(f"Mean: {game.mean_score}\n")
</code></pre>
|
python|for-loop
| 2 |
1,905,975 | 11,460,192 |
pygame error on Mac OSX 10.7.4
|
<pre><code>**pygame.image.load("ball.PNG") error: File is not a Windows BMP file**
</code></pre>
<p>I am getting the above error every time I try to load any image other than <code>.BMP</code>. I searched the internet for a solution and nothing has worked.
The <code>SDL_image</code> library is where it should be, but Python seems to be ignoring it!</p>
<pre><code>pygame.image.get_extended() // returns 0
</code></pre>
<p>I am running <code>python-2.7.3</code>...and <code>pygame-1.9.2pre-py2.7-macosx10.7</code></p>
<p>If anyone can point me in the right direction it would be very much appreciated.</p>
|
<p>I finally got it to work! Install the Pygame for Python 2.7 build targeting OS X 10.3 instead of the newer combination Apple supplied (python-2.7.3 with pygame-1.9.2pre-py2.7-macosx10.7), and everything works perfectly.</p>
<p>install the Pygame for Python 2.7 for OS X 10.3:
<a href="http://pygame.org/ftp/pygame-1.9.1release-python.org-32bit-py2.7-macosx10.3.dmg" rel="nofollow">http://pygame.org/ftp/pygame-1.9.1release-python.org-32bit-py2.7-macosx10.3.dmg</a></p>
|
python|osx-lion|pygame
| 1 |
1,905,976 | 11,593,964 |
Python setuptools unable to find sub module of library
|
<p>My first attempt at using python setuptools. I am using wxPython in the project. I am using the following import lines</p>
<pre><code>import wx, random
from wx.lib import buttons
</code></pre>
<p>And in my <code>setup.py</code> I have</p>
<pre><code> setup(
name='name',
version='0.2p',
description='...',
author='...',
author_email='...',
packages=['name'],
long_description=open(
path.join(
path.dirname(__file__),
'README'
)
).read(),
install_requires=[
'setuptools',
'MySQL-python',
'wx',
'ObjectListView'
],)
</code></pre>
<p>When I use <code>easy_install</code> on the .egg everything seems fine. But when I run the main method from where the project has been installed, I get the failed import message:</p>
<pre><code>from wx.lib import buttons
ImportError: No module named lib
</code></pre>
<p>Do I need to explicity require the <code>wx.lib</code> module in the setup.py file?</p>
|
<p>The problem has nothing to do with your <code>setup.py</code> file, rather you're missing a step in your import statements. You need to explicitly import the <code>lib</code> module from <code>wx</code>. It should look something like this:</p>
<pre><code>import wx
import wx.lib
from wx.lib import buttons
</code></pre>
<p><strong>Edit</strong>: Actually, there is a problem with the <code>setup.py</code> <code>install_requires</code>. You want to require <code>wxPython</code> and <strong>NOT</strong> <code>wx</code>. <code>wx</code> is an entirely different package in Python's package index. </p>
<p>You do still need that extra <code>import wx.lib</code> in your import statements however.</p>
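<p>For reference, a rough sketch of how the <code>setup()</code> call could look after that change (only the dependency list differs from the question; the other fields are trimmed for brevity):</p>

<pre><code>from setuptools import setup

setup(
    name='name',
    version='0.2p',
    packages=['name'],
    install_requires=[
        'setuptools',
        'MySQL-python',
        'wxPython',        # not 'wx' -- that is an unrelated PyPI package
        'ObjectListView',
    ],
)
</code></pre>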
|
python|wxpython|setuptools
| 2 |
1,905,977 | 46,781,371 |
How to open a file with utf-8 non encoded characters?
|
<p>I want to open a text file (.dat) in Python and I get the following error:
'utf-8' codec can't decode byte 0x92 in position 4484: invalid start byte
The file is supposed to be UTF-8 encoded, so maybe there is some character that cannot be decoded. Is there a way to handle the problem without dealing with each odd character individually? The text file is rather large, and finding the non-UTF-8 characters by hand would take hours.</p>
<p>Here is my code</p>
<pre><code>import codecs
f = codecs.open('compounds.dat', encoding='utf-8')
for line in f:
if "InChI=1S/C11H8O3/c1-6-5-9(13)10-7(11(6)14)3-2-4-8(10)12/h2-5" in line:
print(line)
f.close()
</code></pre>
|
<p>It shouldn't "take you hours" to find the bad byte. The error tells you <em>exactly</em> where it is; it's at index 4484 in your input with a value of <code>0x92</code>; if you did:</p>
<pre><code>with open('compounds.dat', 'rb') as f:
data = f.read()
</code></pre>
<p>the invalid byte would be at <code>data[4484]</code>, and you can slice as you like to figure out what's around it.</p>
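<p>For instance, a quick way to peek at the offending byte and its surroundings (4484 is taken from the error message):</p>

<pre><code>with open('compounds.dat', 'rb') as f:
    data = f.read()

bad = 4484
print(repr(data[bad:bad + 1]))        # the offending byte, 0x92
print(repr(data[bad - 20:bad + 20]))  # some context around it
</code></pre>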
<p>In any event, if you just want to ignore or replace invalid bytes, that's what the <code>errors</code> parameter is for. <a href="https://docs.python.org/2/library/io.html#io.open" rel="noreferrer">Using <code>io.open</code></a> (because <code>codecs.open</code> is subtly broken in many ways, and <code>io.open</code> is both faster and more correct):</p>
<pre><code># If this is Py3, you don't even need the import, just use plain open which is
# an alias for io.open
import io
with io.open('compounds.dat', encoding='utf-8', errors='ignore') as f:
for line in f:
if u"InChI=1S/C11H8O3/c1-6-5-9(13)10-7(11(6)14)3-2-4-8(10)12/h2-5" in line:
print(line)
</code></pre>
<p>will just ignore the invalid bytes (dropping them as if they never existed). You can also pass <code>errors='replace'</code> to insert a replacement character for each garbage byte, so you're not silently dropping data.</p>
|
python|encoding|utf-8
| 8 |
1,905,978 | 37,894,104 |
Trouble with extracting data from a CSV file, sorting it and then writing that sorted list into another CSV file
|
<p>I'm trying to read data from a CSV file, sort it in python and then write it into another CSV file. I'm unsure of how to split the list I've sorted into the correct columns.</p>
<p>The output file prints out the complete list and I don't know how to split the list and output it into the csv file for each column.</p>
<p>Here's a snippet of the CSV file</p>
<pre><code>Jack,M,1998
Bill,F,2006
Kat,F,1999
Jess,F,2009
Alexander,M,1982
</code></pre>
<p>and my code to give some insight on what I'm trying to do.</p>
<pre><code>import csv
import operator
US = open('Test.csv', 'r')#Unsorted
S = open('TestSorted.csv', 'w')#Sorted
def sortinput():
option = input('Sort by name, gender or year?: ')
if option == "name":
choice = 0
elif option == "gender":
choice = 1
elif option == "year":
choice = 2
else:
print('Invalid Input')
csv1 = csv.reader(US, delimiter=',')
sort = sorted(csv1, key=operator.itemgetter(choice))
for eachline in sort:
print (eachline)
with S as csvfile:
fieldnames = ['Name', 'Gender', 'Year']
csv2 = csv.DictWriter(csvfile, fieldnames=fieldnames)
csv2.writeheader()
for eachline in sort:
        csv2.writerow({'Name': sort[0] ,'Gender': sort[1],'Year':sort[2]})
</code></pre>
|
<p>Instead of <code>csv.DictWriter</code>, you could use <code>csv.writer</code>. The loop would look like below</p>
<pre><code> with S as csvfile:
fieldnames = ['Name', 'Gender', 'Year']
csv2 = csv.writer(csvfile,quoting=csv.QUOTE_ALL)
csv2.writerow(fieldnames)
csv2.writerow(sort)
</code></pre>
|
python|sorting|csv
| 1 |
1,905,979 | 38,026,359 |
indexError:list index out of range
|
<p>The program is for quicksort with duplicate keys. The code runs perfectly once or twice and then gives the IndexError the next time, even though the list is not empty. When I print the indices they lie within range. Is it a problem with my computer specifically?</p>
<p>EDIT-added the traceback</p>
<pre><code>import random
def partition(n,lo,hi):
i=lo
lt=lo #index showing the start of all duplicate partitioning keys
gt=hi #index showing the end of all duplicate partitioning keys
x=n[lt]
while(i<=gt):
while(n[i]<=n[lt] and i<=gt):
if(x!=n[lt]):
print("alert!!!")
if(n[i]<n[lt]): #current alement not a duplicate of partitioning alement
if(lt<=i):
n[lt],n[i]=n[i],n[lt]
#print(n)
i+=1
lt+=1
else: #current element is a duplicate partitioning alement
#print(n[i],"=",n[lt])
i+=1
while(n[gt]>n[lt] and i<=gt):
gt-=1
if(i<gt):
n[i],n[gt]=n[gt],n[i]
gt-=1
#print(n)
return gt
def quickSort(n,lo,hi):
#print("called")
if(lo<hi):
print(n)
p=partition(n, lo, hi)
quickSort(n, lo, p-1)
quickSort(n, p+1, hi)
def main():
nums=[]
for i in range(30):
nums.append(random.randrange(100))
print("original array")
print(nums)
k=4
hi=len(nums)-1
#print(k,"th lowest number is ",quickSelect(nums, 0,hi,k))
print(nums)
quickSort(nums,0,hi)
print(nums)
if __name__ == "__main__":
main()
</code></pre>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\S.reddy\workspace\sorter\src\selector\quickSelect.py", line 59, in <module>
main()
File "C:\Users\S.reddy\workspace\sorter\src\selector\quickSelect.py", line 55, in main
quickSort(nums,0,hi)
File "C:\Users\S.reddy\workspace\sorter\src\selector\quickSelect.py", line 43, in quickSort
quickSort(n, p+1, hi)
File "C:\Users\S.reddy\workspace\sorter\src\selector\quickSelect.py", line 41, in quickSort
p=partition(n, lo, hi)
File "C:\Users\S.reddy\workspace\sorter\src\selector\quickSelect.py", line 11, in partition
while(n[i]<=n[lt] and i<=gt):
IndexError: list index out of range
</code></pre>
|
<p>Your code was sometimes getting out of bounds indexes because of the order you were checking your conditions in your inner <code>while</code> loop.</p>
<p>Often an easy best way to debug issues like this is to add <code>try</code> and <code>except</code> blocks to the code, with the <code>except</code> block printing out useful diagnostic values. I used this variation on your loop to figure out the issue:</p>
<pre><code>try:
while(n[i]<=n[lt] and i<=gt):
if(x!=n[lt]):
print("alert!!!")
if(n[i]<n[lt]): #current alement not a duplicate of partitioning alement
if(lt<=i):
n[lt],n[i]=n[i],n[lt]
#print(n)
i+=1
lt+=1
else: #current element is a duplicate partitioning alement
#print(n[i],"=",n[lt])
i+=1
except IndexError:
print(i, gt, len(n))
raise
</code></pre>
<p>You'll see that under certain circumstances, <code>gt</code> will be <code>len(n) - 1</code> and <code>i</code> will be <code>len(n)</code>. In that situation, the first test in <code>while(n[i]<=n[lt] and i<=gt):</code> will raise an <code>IndexError</code> since <code>n[i]</code> is not a valid index.</p>
<p>Instead, you should put the tests in the other order, with the <code>i <= gt</code> first. If that test is <code>False</code>, the <code>and</code> will "short-circuit" and not evaluate the second test, which is the one that would cause the exception. So: use <code>while i <= gt and n[i] <= n[lt]:</code> (The parentheses were unnecessary, so I've removed them and spaced out the terms from the operators. See <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP 8</a> for more recommendations on Python style.)</p>
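<p>A tiny illustration of that short-circuiting, with hypothetical values:</p>

<pre><code>n = [5, 1, 4]
i, gt, lt = 3, 2, 0

# i <= gt is False, so n[i] is never evaluated and no IndexError is raised
print(i <= gt and n[i] <= n[lt])   # False

# in the original order, n[i] would be evaluated first and raise IndexError
# print(n[i] <= n[lt] and i <= gt)
</code></pre>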
|
python
| 1 |
1,905,980 | 43,093,505 |
Python 3 urlopen usage
|
<p>I'm working on a python script that will communicate with the API of a CRM system I'm deploying right now. I can get data from the CRM server, but I can't seem to add (write) a new entry. I suspect I'm doing something silly because I'm fairly new to Python and programming in general, can someone point me in the right direction? The server does not reject the data, but it acts as if I was requesting data from /api/v1.0/payments as opposed to posting new data.</p>
<pre><code>from urllib.request import Request, urlopen
headers = {
'Content-Type': 'application/json',
'X-Auth-App-Key': '[API key]'
}
values = b"""
{
"clientId": 104,
"method": 3,
"checkNumber": "",
"createdDate": "2016-09-12T00:00:00+0000",
"amount": 40,
"note": "",
}
"""
request = Request('http://[SERVER_URL]/api/v1.0/payments', data=values, headers=headers)
response_body = urlopen(request).read()
print(response_body)
</code></pre>
<p>I'm working based on example code from the API documentation here:
<a href="http://docs.ucrm.apiary.io/#reference/payments/payments/post" rel="nofollow noreferrer">http://docs.ucrm.apiary.io/#reference/payments/payments/post</a></p>
<p>Am I using urlopen correctly at the bottom?</p>
|
<p><a href="https://stackoverflow.com/questions/6348499/making-a-post-call-instead-of-get-using-urllib2">This question/answer</a> may be your issue. Basically your POST request is being redirected to /api/v1.0/payments/ (note the trailing slash), when that happens your POST is redirected to a GET request, which is why the server is responding as if you were trying to retrieve all of the payments info.</p>
<p>Another thing to note is that your JSON data is actually invalid, as it contains a trailing <code>,</code> after the 'note' value, so that may be an issue too. I think you may also be missing the <code>Content-Length</code> header in your headers. I'd recommend using the <code>json</code> module to create your JSON data:</p>
<pre><code>values = json.dumps({
"clientId": 104,
"method": 3,
"checkNumber": "",
"createdDate": "2016-09-12T00:00:00+0000",
"amount": 40,
"note": ""
})
headers = {
'Content-Type': 'application/json',
'Content-Length': len(values),
'X-Auth-App-Key': '[API key]'
}
request = Request('http://[SERVER_URL]/api/v1.0/payments/', data=values, headers=headers)
</code></pre>
|
python|api|urlopen
| 0 |
1,905,981 | 43,262,960 |
Web scraper in python to get the list of doctors
|
<p>I am fairly new to Python and I am trying to write a web scraper to get the list of doctors in the US. I have found a number of websites with the database containing the list, including the AMA, but I could not scrape the list into a CSV file.</p>
<p>I am trying to use Pandas and Beautiful soup to do the job.
Please point me in the right direction.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import bs4 as bs
import urllib.request
import pandas as pd
import csv
import io
dataFrames = pd.read_html('link of the website')
for df in dataFrames:
print(df)
with io.open('doctorlist.csv', 'w',encoding="utf-8") as database:
df.to_csv(database, sep='\t',encoding="utf-8")
</code></pre>
</div>
</div>
</p>
|
<p>Firstly, I would recommend that you share the exact URL, but that wouldn't help you so here I am giving you what you need not what you wanted.</p>
<p>You're just starting with Python. When a kid starts his education he has math; he doesn't start with linear algebra or trigonometry, he starts with the basics. <strong>Learn Python basics!</strong> Again, you're using modules that you don't even understand:</p>
<pre><code>import pandas # This one is Python Data Analysis Library
import bs4 # Used for parsing data( through html/xml...)
from urllib import request # Used for making requests such as urlopen (open a URL to get the HTML)
</code></pre>
<p>Giving you what you need: ('Point me to the right direction'):</p>
<p><a href="http://automatetheboringstuff.com" rel="nofollow noreferrer">Great AUTOMATEBORINGSTUFF, for basics quickly to advanced</a></p>
<p><a href="https://www.learnpython.org/" rel="nofollow noreferrer">Learn PYTHON</a> - [Learn the Basics, Data Science Tutorials, Advanced Tutorials]</p>
<p><em>Just a simple Google/YouTube search can give you a really good idea!</em></p>
|
python|pandas|web-scraping|beautifulsoup
| 0 |
1,905,982 | 43,038,937 |
Update Matplotlib 3D Graph Based on Real Time User Input
|
<p>I am trying to get matplotlib to create a dynamic 3d graph based on user input - but I can't get the graph to update. If I use the exact same code but without the "projection='3d'" setting, the program works correctly - but as soon as the graph is changed to display in 3d - it doesn't work.</p>
<p>Any help would be greatly appreciated.</p>
<p>3D Graph Code (graph doesn't update)</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
plt.subplots_adjust(left=0.25, bottom=0.25)
x = np.arange(0.0, 1.0, 0.1)
a0 = 5
b0 = 1
y = a0 * x + b0
z = np.zeros(10)
l, = plt.plot(x, y, z)
# Set size of Axes
plt.axis([0, 1, -10, 10])
# Place Sliders on Graph
ax_a = plt.axes([0.25, 0.1, 0.65, 0.03])
ax_b = plt.axes([0.25, 0.15, 0.65, 0.03])
# Create Sliders & Determine Range
sa = Slider(ax_a, 'a', 0, 10.0, valinit=a0)
sb = Slider(ax_b, 'b', 0, 10.0, valinit=b0)
def update(val):
a = sa.val
b = sb.val
l.set_ydata(a*x+b)
fig.canvas.draw_idle()
sa.on_changed(update)
sb.on_changed(update)
plt.show()
</code></pre>
<p>2D Graph Code (graph updates properly)</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111)
plt.subplots_adjust(left=0.25, bottom=0.25)
x = np.arange(0.0, 1.0, 0.1)
a0 = 5
b0 = 1
y = a0 * x + b0
l, = plt.plot(x, y)
# Set size of Axes
plt.axis([0, 1, -10, 10])
# Place Sliders on Graph
ax_a = plt.axes([0.25, 0.1, 0.65, 0.03])
ax_b = plt.axes([0.25, 0.15, 0.65, 0.03])
# Create Sliders & Determine Range
sa = Slider(ax_a, 'a', 0, 10.0, valinit=a0)
sb = Slider(ax_b, 'b', 0, 10.0, valinit=b0)
def update(val):
a = sa.val
b = sb.val
l.set_ydata(a*x+b)
fig.canvas.draw_idle()
sa.on_changed(update)
sb.on_changed(update)
plt.show()
</code></pre>
|
<p>The line in the 3D case needs to be updated in all 3 dimensions (even the data in some dimension stays the same). In order to do so, you have to set the 2D data using <code>set_data</code> and the third dimension using <code>set_3d_properties</code>. So updating <code>y</code> would look like this:</p>
<pre><code>l.set_data(x, a*x+b)
l.set_3d_properties(z)
</code></pre>
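<p>Dropped into the question's <code>update</code> callback, a sketch could look like this (names taken from the question's code):</p>

<pre><code>def update(val):
    a = sa.val
    b = sb.val
    l.set_data(x, a * x + b)   # update x and y together
    l.set_3d_properties(z)     # re-apply the (unchanged) z values
    fig.canvas.draw_idle()
</code></pre>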
|
matplotlib|3d|widget|slider|python-3.4
| 3 |
1,905,983 | 48,777,014 |
How can we get the default behavior of __repr__()?
|
<p>If someone writes a class in python, and fails to specify their own <code>__repr__()</code> method, then a default one is provided for them. However, suppose we want to write a function which has the same, or similar, behavior to the default <code>__repr__()</code>. However, we want this function to have the behavior of the default <code>__repr__()</code> method even if the actual <code>__repr__()</code> for the class was overloaded. That is, suppose we want to write a function which has the same behavior as a default <code>__repr__()</code> regardless of whether someone overloaded the <code>__repr__()</code> method or not. How might we do it?</p>
<pre><code>class DemoClass:
def __init__(self):
self.var = 4
def __repr__(self):
return str(self.var)
def true_repr(x):
# [magic happens here]
s = "I'm not implemented yet"
return s
obj = DemoClass()
print(obj.__repr__())
print(true_repr(obj))
</code></pre>
<h2>Desired Output:</h2>
<p><code>print(obj.__repr__())</code> prints <code>4</code>, but <code>print(true_repr(obj))</code> prints something like:<br>
<code><__main__.DemoClass object at 0x0000000009F26588></code></p>
|
<p>You can use <code>object.__repr__(obj)</code>. This works because the default <code>repr</code> behavior is defined in <code>object.__repr__</code>.</p>
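<p>Applied to the question's <code>true_repr</code> stub, a minimal sketch would be:</p>

<pre><code>def true_repr(x):
    return object.__repr__(x)

obj = DemoClass()
print(repr(obj))       # 4, from the overloaded __repr__
print(true_repr(obj))  # <__main__.DemoClass object at 0x...>
</code></pre>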
|
python|python-3.x|representation|repr
| 28 |
1,905,984 | 48,000,827 |
Installing imutils in windows 10 with python 3.6
|
<p>I need imutils package to run <a href="https://github.com/datitran/face2face-demo" rel="nofollow noreferrer">https://github.com/datitran/face2face-demo</a></p>
<p>The problem is installing it on my machine. I tried <a href="https://anaconda.org/mlgill/imutils" rel="nofollow noreferrer"><code>conda install -c mlgill imutils</code></a> but it runs into PackageNotFoundError
<a href="https://i.stack.imgur.com/UvIV3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UvIV3.png" alt="enter image description here"></a></p>
<p>I also tried <code>pip install imutils</code> but another error came in. <a href="https://i.stack.imgur.com/iMTAY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iMTAY.png" alt="enter image description here"></a></p>
<p>Can anyone share how you install imutils in windows 10 with Python 3.6.3 :: Anaconda custom (64-bit)</p>
|
<p>You can install pip on the Anaconda platform and then use pip to install imutils:</p>
<ol>
<li><em>conda install pip</em></li>
<li><em>pip install imutils</em></li>
</ol>
<p>The package will get installed.</p>
|
python|window|anaconda
| 4 |
1,905,985 | 47,995,002 |
Panda export to csv
|
<p>I have the code below:</p>
<pre><code>from xlsxwriter import Workbook
import os,shutil
import requests
import pandas
from bs4 import BeautifulSoup
MAX_RETRIES = 20
base_url='https://pagellapolitica.it/politici/sfoggio/9/matteo-renzi?page='
for page in range(1,32,1):
l=[]
session = requests.Session()
adapter = requests.adapters.HTTPAdapter(max_retries=MAX_RETRIES)
session.mount('https://', adapter)
session.mount('http://', adapter)
site=(base_url+str(page)+".html")
# print(site)
c=session.get(site)
r=c.content
soup=BeautifulSoup(r,'html.parser')
all=soup.find_all("div",{"class":"clearfix"})
for d in all:
links=d.find_all("a")
len(links)
l=[]
# workbook = Workbook('bbb.xlsx')
# worksheet = workbook.add_worksheet()
# row +=0
# worksheet.write(row,0,'Link')
# worksheet.write(row,1,'Name ')
# row+=1
for a in links[5:17]:
d={}
href=(a["href"])
basic_url=('https://pagellapolitica.it/')
site =basic_url + href
#print(site)
c=requests.get(site)
r=c.content
soup=BeautifulSoup(r,'html.parser')
Name=soup.find("h3",{"class":"pull-left"}).text
Fact_checking=soup.find("label",{"class":"verdict-analisi"}).text
quote=soup.find("div",{"class":"col-xs-12 col-sm-6 col-smm-6 col-md-5 col-lg-5"}).text
all=soup.find_all("span",{"class":"item"})
Topic=all[0].text
Date=all[2].text
a=all[3].find("a",{"class":"" ""})
Link=a["href"]
Text=soup.find("div",{"class":"col-xs-12 col-md-12 col-lg-12"}).text
d["Name"]=Name
d["Fact_checking"]=Fact_checking
d["Quote"]=quote
d["Economic_topic"]=Topic
d["Date"]=Date
d["Link"]=Link
d["Text"]=Text
l.append(d)
df=pandas.DataFrame(l)
df.to_csv("outing.csv")
</code></pre>
<p>The problem is that when I export the data to CSV I only get 6 rows of results. When I do print(df) and print(l) it prints all the data that I have in the list; however, when I check len(l) I only get a length of 6. Any ideas why this is happening?
Thank you in advance!</p>
|
<p>Consider building a list of dictionaries per page, turning each list into an individual dataframe with <code>pandas.DataFrame()</code>, and then concatenating the list of individual dataframes with <code>pandas.concat()</code> into the final dataframe.</p>
<pre><code>df_list = []
for d in all:
links=d.find_all("a")
len(links)
l=[]
for a in links[5:17]:
d={}
...
d["Name"]=Name
d["Fact_checking"]=Fact_checking
d["Quote"]=quote
d["Economic_topic"]=Topic
d["Date"]=Date
d["Link"]=Link
d["Text"]=Text
l.append(d)
df_list.append(pandas.DataFrame(l))
final_df = pandas.concat(df_list)
final_df.to_csv("outing.csv")
</code></pre>
|
python|pandas|csv
| 2 |
1,905,986 | 48,350,675 |
Matplotlib: How to change the color of a LineCollection according to its coordinates?
|
<p>Consider the following plot:</p>
<p><a href="https://i.stack.imgur.com/Ka8wy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ka8wy.png" alt="enter image description here"></a></p>
<pre><code>fig, ax = plt.subplots(figsize = (14, 6))
ax.set_facecolor('k')
ax.set_xlim(0, 100)
ax.set_ylim(0, 100)
xs = np.arange(60, 70) # xs = np.linspace(60, 70, 100)
ys = np.arange(0, 100, .5) # ys = np.linspace(0, 100, 100)
v = [[[x, y] for x in xs] for y in ys]
lines = LineCollection(v, linewidth = 1, cmap = plt.cm.Greys_r)
lines.set_array(xs)
ax.add_collection(lines)
</code></pre>
<p>How can I change the color of the lines according to their <code>x</code> coordinates (horizontally) so as to create a "shading" effect like this: </p>
<p><a href="https://i.stack.imgur.com/lyu8S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lyu8S.png" alt="enter image description here"></a></p>
<p>Here, the greater <code>x</code> is, the "whiter" the <code>LineCollection</code> is.</p>
<p>Following this reasoning, I thought that specifying <code>lines.set_array(xs)</code> would do the trick but as you can see in my plot the color gradation is still following the y axis. Strangely the pattern is repeating itself, from black to white (every 5) over and over (up to 100).</p>
<p>I think (not sure at all) the problem lies in the <code>v</code> variable that contains the coordinates. The concatenation of <code>x</code> and <code>y</code> might be improper.</p>
|
<p>The shape of the list <code>v</code> you supply to the <code>LineCollection</code> is indeed not suitable to create a gradient in the desired direction. This is because each line in a LineCollection can only have a single color. Here the lines range from x=60 to x=70 and each of those lines has one color. </p>
<p>What you need to do instead is to create a line collection where each line is devided into several segments, each of which can then have its own color. </p>
<p>To this end an array of dimensions <code>(n, m, l)</code>, where <code>n</code> is the number of segments, <code>m</code> is the number of points per segment, and <code>l</code> is the dimension (2D, hence <code>l=2</code>) needs to be used.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.collections import LineCollection
fig, ax = plt.subplots(figsize = (14, 6))
ax.set_facecolor('k')
ax.set_xlim(0, 100)
ax.set_ylim(0, 100)
xs = np.linspace(60, 70, 100)
ys = np.linspace(0, 100, 100)
X,Y = np.meshgrid(xs,ys)
s = X.shape
segs = np.empty(((s[0])*(s[1]-1),2,2))
segs[:,0,0] = X[:,:-1].flatten()
segs[:,1,0] = X[:,1:].flatten()
segs[:,0,1] = Y[:,:-1].flatten()
segs[:,1,1] = Y[:,1:].flatten()
lines = LineCollection(segs, linewidth = 1, cmap = plt.cm.Greys_r)
lines.set_array(X[:,:-1].flatten())
ax.add_collection(lines)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/eWkY6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eWkY6.png" alt="enter image description here"></a></p>
|
python|matplotlib|colors
| 2 |
1,905,987 | 73,540,240 |
How can I fix the send_keys AttributeError?
|
<p>I am trying to write a search on the google search bar.</p>
<pre><code>import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome("C:\\Users\\HP\\PycharmProjects\\SeleniumTest1\\Drivers\\chromedriver.exe")
driver.get("https://www.google.com/")
driver.maximize_window()
driver.find_element_by_name("q").send_keys("LinkedIn login")
driver.find_element_by_name("q").send_keys(Keys.ENTER)
driver.find_element_by_partial_link_text("LinkedIn Login").click()
driver.find_element_by_id("username").send_keys("enter your username")
driver.find_element_by_id("password").send_keys("enter your password")
driver.find_element_by_tag_name("button").click()
time.sleep(5)
print(driver.title)
print(driver.current_url)
driver.close()
</code></pre>
<p>This is the error I keep getting from the console</p>
<pre><code>driver.find_element_by_name("q").send_keys("LinkedIn login")
AttributeError: 'NoneType' object has no attribute 'send_keys'
</code></pre>
|
<p>The current Selenium version uses <code>find_element()</code> to get elements. Try the code below, and change the <code>by</code> argument to whichever locator strategy you are searching with.</p>
<pre><code>driver.find_element(by="name", value = "q").send_keys("LinkedIn login")
</code></pre>
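<p>Equivalently, you can import the <code>By</code> helper class rather than passing the locator strategy as a plain string:</p>

<pre><code>from selenium.webdriver.common.by import By

driver.find_element(By.NAME, "q").send_keys("LinkedIn login")
</code></pre>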
|
python|selenium|selenium-webdriver|selenium-chromedriver|sendkeys
| 0 |
1,905,988 | 17,293,455 |
Writing csv in rows and columns
|
<p>I am so close to being done but can't get my head around this problem.
I am writing to CSV and my code keeps giving me this output.</p>
<pre><code> dict,a,b,c,d
,,,,
list,1,2,3,4
</code></pre>
<p>I want it to be as follows:</p>
<pre><code> dict, list
a,1
b,2
c,3
d,4
</code></pre>
<p>The code is:</p>
<pre><code> ##Opening my dictionary .cvs file
with open('some_file.csv', mode='r') as infile:
reader = csv.reader(infile,)
DICT = {rows[0]:rows[1] for rows in reader if len(rows) == 2}
##Opening my enquiry list .cvs file
datafile = open(self.filename, 'r')
datareader = csv.reader(datafile)
n1 = []
for row in datareader:
n1.append(row)
n = list(itertools.chain(*n1))
headings = ['dict', 'list']
##Writing to .cvs file
with open(asksaveasfilename(), 'w') as fp:
a = csv.writer(fp)
# write row of header names
a.writerow(n)
# build up a list of the values in DICT corresponding to the keys in n
values = []
for name in n:
if name in DICT:
values.append(DICT[name])
else:
values.append("Not Available")
# now write them out
a.writerow(values)
</code></pre>
<p>I tried using <code>writerows</code> but this prints the data wrong also</p>
<pre><code>d,i,c,t
a
b
c
d
l,i,s,t
1
2
3
4
</code></pre>
<hr>
<p><strong>SOLUTION:</strong></p>
<pre><code> for nameValueTuple in zip(n,values):
a.writerow(nameValueTuple)
</code></pre>
<p>Did the trick</p>
|
<h3>Writing the data directly</h3>
<pre><code>import csv
DICT = {a:a*a for a in [1,2,3,4,5]}
n = [2, 5, 99, 3]
headings = ['dict', 'list']
##Writing to .cvs file
with open("try.csv", 'w') as fp:
a = csv.writer(fp)
a.writerow(headings)
for name in n:
if name in DICT:
a.writerow([name, DICT[name]])
else:
a.writerow([name, "Not Available"])
</code></pre>
<p>This will result in <code>try.csv</code> containing:</p>
<pre><code>dict,list
2,4
5,25
99,Not Available
3,9
</code></pre>
<h3>Doing the processing first, then writing the processed rows:</h3>
<p>You can also do the processing and write everything at once:</p>
<pre><code>import csv
DICT = {a:a*a for a in [1,2,3,4,5,6]}
ns = [2,3,99,5]
headings = ['dict', 'list']
ns_squared = [DICT[name] if name in DICT else "NOT_FOUND" for name in ns]
print(ns_squared) #=> [4, 9, 'NOT_FOUND', 25]
rows = zip(ns,ns_squared)
with open("try.csv", 'w') as fp:
a = csv.writer(fp)
a.writerow(headings)
a.writerows(rows)
</code></pre>
<p>This will then result in:</p>
<pre><code>dict,list
2,4
3,9
99,NOT_FOUND
5,25
</code></pre>
<h3>Using zip to turn columns into row</h3>
<p>If you have columns as lists, you can turn these into rows by using the <code>zip()</code> builtin function. For example:</p>
<pre><code>>>> column1 = ["value", 1, 2, 3, 4]
>>> column2 = ["square", 2, 4, 9, 16]
>>> zip(column1,column2)
[('value', 'square'), (1, 2), (2, 4), (3, 9), (4, 16)]
</code></pre>
|
python|list|csv|dictionary|writer
| 1 |
1,905,989 | 17,446,768 |
Automate browser actions - Clicking the submit button errors - "Click succeeded but Load Failed. .."
|
<p>I'm trying to write a code that automatically logs into two websites and goes to a certain page. I use <a href="http://splinter.cobrateam.info/" rel="nofollow">Splinter</a>.</p>
<p>I only get the error with the "Mijn ING Zakelijk" website using <a href="http://www.phantomjs.org" rel="nofollow">PhantomJS</a> as browser type.</p>
<p>Until a few days ago the code ran perfectly fine 20 out of 20 times. But since today I'm getting an error. Sometimes the code runs fine. Other times it does not and gives me the "Click succeeded but Load Failed.." error. Here's the full traceback:</p>
<pre><code>## Attempting to login to Mijn ING Zakelijk, please wait.
- Starting the browser..
- Visiting the url..
- Filling the username form with the defined username..
- Filling the password form with the defined password..
- Clicking the submit button..
Traceback (most recent call last):
File "/Users/###/Dropbox/Python/Test environment 2.7.3/Splinter.py", line 98, in <module>
mijning()
File "/Users/###/Dropbox/Python/Test environment 2.7.3/Splinter.py", line 27, in mijning
attemptLogin(url2, username2, password2, defined_title2, website_name2, browser_type2)
File "/Users/###/Dropbox/Python/Test environment 2.7.3/Splinter.py", line 71, in attemptLogin
browser.find_by_css('.submit').first.click()
File "/Users/###/Library/Python/2.7/lib/python/site-packages/splinter/driver/webdriver/__init__.py", line 344, in click
self._element.click()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py", line 54, in click
self._execute(Command.CLICK_ELEMENT)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py", line 228, in _execute
return self._parent.execute(command, params)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 165, in execute
self.error_handler.check_response(response)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 158, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: u'Error Message => \'Click succeeded but Load Failed. Status: \'fail\'\'\n caused by Request => {"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"81","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:56899","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\\"sessionId\\": \\"c2bbc8a0-e3d2-11e2-b7a8-f765797dc4e7\\", \\"id\\": \\":wdc:1372850513087\\"}","url":"/click","urlParsed":{"anchor":"","query":"","file":"click","directory":"/","path":"/click","relative":"/click","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/click","queryKey":{},"chunks":["click"]},"urlOriginal":"/session/c2bbc8a0-e3d2-11e2-b7a8-f765797dc4e7/element/%3Awdc%3A1372850513087/click"}' ; Screenshot: available via screen
Process finished with exit code 1
</code></pre>
<p>Here's the full code:</p>
<pre><code>## *** Payment Notification and Mail Tool (FPNMT) ##
from splinter import *
from Tkinter import *
def ###():
# Define values
browser_type1 = 'phantomjs' # 'phantomjs' or 'firefox'
url1 = 'http://###.nl/admin'
username1 = '###'
password1 = '###'
defined_title1 = 'Bestellingen'
website_name1 = '###.nl Admin'
attemptLogin(url1, username1, password1, defined_title1, website_name1, browser_type1)
def mijning():
# Define values
browser_type2 = 'phantomjs' # 'phantomjs' or 'firefox'
url2 = 'https://mijnzakelijk.ing.nl/internetbankieren/SesamLoginServlet'
username2 = '###'
password2 = '###'
defined_title2 = 'Saldo informatie'
website_name2 = 'Mijn ING Zakelijk'
attemptLogin(url2, username2, password2, defined_title2, website_name2, browser_type2)
# Functions #
def attemptLogin(url, username, password, defined_title, website_name, browser_type):
print '## Attempting to login to ' + website_name + ', please wait.'
# Start the browser
print '- Starting the browser..'
browser = Browser(browser_type)
# Visit in the url
print '- Visiting the url..'
browser.visit(url)
if website_name == '###.nl Admin':
# Find the username form and fill it with the defined username
print '- Filling the username form with the defined username..'
browser.fill('username', username)
# Find the password form and fill it with the defined password
print '- Filling the password form with the defined password..'
browser.fill('password', password)
# Find the submit button and click
print '- Clicking the submit button..'
browser.click_link_by_text('Inloggen')
# Find, click and display page with order history
print '- Visiting the defined web page..'
current_token = browser.url[57:97]
url_plus_token = 'http://www.###.nl/admin/index.php?route=sale/order' + current_token
browser.visit(url_plus_token)
else:
website_name == 'Mijn ING Zakelijk'
# Find the username form and fill it with the defined username
print '- Filling the username form with the defined username..'
browser.find_by_id('gebruikersnaam').first.find_by_tag('input').fill(username)
# Find the password form and fill it with the defined password
print '- Filling the password form with the defined password..'
browser.find_by_id('wachtwoord').first.find_by_tag('input').fill(password)
# Find the submit button and click
print '- Clicking the submit button..'
browser.find_by_css('.submit').first.click()
# Display page with transaction history
print '- Visiting the defined web page..'
browser.visit('https://mijnzakelijk.ing.nl/mpz/solstartpaginarekeninginfo.do')
# Get page title after successful login
current_title = browser.title
# Check the title of the page to confirm successful login
checkLogin(defined_title, current_title, website_name, browser)
def checkLogin(defined_title, current_title, website_name, browser):
if current_title == defined_title:
print '# Login to', website_name, 'successful.'
print '- Quitting the browser..'
browser.quit()
else:
print '# Login to', website_name, 'failed.'
print '- Quitting the browser..'
browser.quit()
i = 1
while i < 10:
print i
#***()
mijning()
i = i+1
</code></pre>
<p>Any ideas on what's causing this error and how do I solve it?</p>
<p>Thanks.</p>
|
<p>The current version of the ghostdriver source code fixes the issue (there is no longer any "Click succeeded but Load Failed" message" - see <a href="https://github.com/detro/ghostdriver/commit/d0615a547f5a036df3134ef946c33d972c384aac" rel="nofollow">here</a>). The thing is, that version is not yet released (as of 08/19/2013), so you need to get it and then build it yourself. That solved the problem for me (Windows 7, Python 2.7.5, Selenium 2.33). You can find the step-by-step <a href="http://phantomjs.org/build.html" rel="nofollow">here</a>.</p>
<p><strong>UPDATE</strong>:</p>
<p>PhantomJS 1.9.2 just came out and with Ghostdriver 1.0.4, which fixes the problem (check <a href="https://github.com/detro/ghostdriver/blob/master/src/request_handlers/webelement_request_handler.js" rel="nofollow">here</a> - no more "Click succeeded but Load Failed" message). So just upgrade to <a href="http://phantomjs.org/download.html" rel="nofollow">PhantomJS 1.9.2</a> and you should be fine. No need to build anything yourself anymore.</p>
|
python|selenium|webdriver|urllib2|phantomjs
| 5 |
1,905,990 | 17,279,652 |
Mulitple Lines in a single Excel cell
|
<p>What is the easiest method for writing multiple lines into a single cell within Excel using Python? I've tried the csv module without success.</p>
<pre><code>import csv
with open('xyz.csv', 'wb') as outfile:
w = csv.writer(outfile)
w.writerow(['stringa','string_multiline',])
</code></pre>
<p>Also, each of the multiline strings contains characters that are typically used as CSV delimiters, i.e. commas.</p>
<p>Any help would be really appreciated.</p>
|
<p>To figure this out, I created a file in Excel with a single multiline cell.</p>
<p><img src="https://i.stack.imgur.com/34CzR.png" alt="enter image description here"></p>
<p>Then I saved it as CSV and opened it up in a text editor:</p>
<pre><code>"a^Mb"
</code></pre>
<p>It looks like Excel interprets Ctrl-M characters as newlines.</p>
<p>Let’s try that with Python:</p>
<pre><code>#!/usr/bin/env python2.7
import csv
with open('xyz.csv', 'wb') as outfile:
w = csv.writer(outfile)
w.writerow(['stringa','multiline\015string',])
</code></pre>
<p>Yup, that worked!</p>
<p><img src="https://i.stack.imgur.com/LZX6I.png" alt="enter image description here"></p>
|
python
| 10 |
1,905,991 | 64,440,097 |
I am working on a discord.py but my on_member_removed(member) does nothing
|
<p>I have added all the required stuff, such as the intents and the gateway settings, but my bot just doesn't react. I put in a debug print command so that when someone leaves it prints "bot has detected that "+member.mention+" has left the server", but no matter what I do it really does nothing.</p>
<pre><code>import discord
from discord.ext.commands import has_permissions
import asyncio
import logging
intents = discord.Intents.default()
intents.typing = True
intents.presences = True
intents.members = True
def replace_line(file_name, line_num, text):
with open(file_name, 'r') as file:
data = file.readlines()
print(data)
data[line_num] = text
with open(file_name, 'w') as file:
file.writelines( data )
print(data)
def leavechannel(serverid, channel):
flc = open("leavechannels.txt", "r")
checker = (flc.read())
flc.close
x = checker.find(serverid)
print (x)
if x >= 0:
lookup = serverid
with open("leavechannels.txt") as myFile:
for num, line in enumerate(myFile, 0):
if lookup in line:
print (serverid +' found at line:', num)
linecache= num
print(linecache)
replace_line("leavechannels.txt", linecache, "\n"+serverid + " = " + channel)
else:
flc = open("leavechannels.txt", 'a+')
flc.write(+serverid + " = " + channel)
client = discord.Client()
prefix = "ez!"
@client.event
async def on_ready():
print('We have logged in as {0.user}'.format(client))
@client.event
async def on_member_remove(member):
print("recognised that "+member.mention+" has left")
with open("leavechannels.txt") as myFile:
for item in myFile.split("\n"):
if member.server.id in item:
leavechannelcache = item.strip()
embedVar= discord.Embed(title= member.mention + " has left :(", description="Come back plox!", color=12151512)
await discord.Object(id=leavechannelcache).send(embed=embedVar)
print("Sent message to #CHANNEL")
@client.event
async def on_message(message):
if message.author == client.user:
return
if message.content.startswith(prefix+'leavechannel'):
if message.author.guild_permissions.manage_channels or message.author.guild_permissions.administration:
leavechannelhash = message.content.replace(prefix+'leavechannel ', '')
print (leavechannelhash)
await message.channel.send("Server leave channel has been set to "+leavechannelhash)
leavechannel(str(message.guild.id), str(leavechannelhash))
else:
await message.channel.send("Sorry, "+message.author+"; You do not have sufficient Permissions to do this.")
client.run('token')
</code></pre>
<p>I have omitted a lot of content as it is unnecessary (mostly just stuff related to help and hello-world filler commands). The main issue is that I have intents.members and I have activated it in the Discord developer panel, but even after doing so, it doesn't even give me a DEBUG print when someone leaves the server, which obviously points towards something being wrong with its detection of member leaves. Any fixes you can suggest?</p>
|
<p>You need to change your code: if you're on discord.py 1.5.1, there's an intents update, so use this in your code.</p>
<pre><code>from discord import Intents
from discord.ext.commands import Bot
intent = Intents().all()
bot = Bot(command_prefix='prefix', intents=intent)
... # Some code under this line
</code></pre>
<p>Try using <code>Intents().all()</code>.
In your case, pass the intents like this: <code>discord.Client(intents=intent)</code></p>
|
python|discord|bots|member
| 0 |
1,905,992 | 70,643,061 |
class_weight giving worst results in Keras model. What could be the reason?
|
<p>I'm working on an NLP Classification task with imbalanced data and the code:</p>
<pre><code>df['target'] = le.fit_transform(df['CHAPTER'])
Y = df['target'].ravel()
classes = df['target'].nunique()
train_X, val_X, train_y, val_y = train_test_split(X,Y, test_size=0.1, stratify = Y, random_state = SEED)
class_weights = class_weight.compute_class_weight(class_weight = 'balanced',classes = np.unique(train_y),y = train_y)
class_weight_dict = dict(enumerate(class_weights))
vocab_size = 25000
tokenizer = Tokenizer(num_words=vocab_size, filters = ' ')
tokenizer.fit_on_texts(list(train_X))
train_X = tokenizer.texts_to_sequences(train_X)
val_X = tokenizer.texts_to_sequences(val_X)
train_X = pad_sequences(train_X, maxlen=maxlen)
val_X = pad_sequences(val_X, maxlen=maxlen)
</code></pre>
<p>Works fine and giving me an accuracy of around 70% when I do:</p>
<pre><code>history = model.fit(train_X, train_y, batch_size=64, epochs = 30,
validation_split = 0.1,verbose = 1)
</code></pre>
<p>But the moment I use <code>class_weight=class_weight_dict</code> in <code>train</code> , my accuracy drops from 70 to 30%. What could be the possible reason? Am I doing something wrong with the code?</p>
|
<p>When you use the <code>dict(enumerate(class_weights))</code> method, it creates a dictionary with keys starting from zero. In case if you don't have class labels that correspond to zero (or if you don't have any in that range, at all) this can be a problem. Below is a demonstration:</p>
<pre><code>train_y = [1, 1, 1, 2, 2] # training labels: 1 and 2
class_weights = class_weight.compute_class_weight(
class_weight='balanced',
classes=np.unique(train_y),
y=train_y
)
print(class_weights)
# array([0.83333333, 1.25 ])
</code></pre>
<p>Creating the dictionary as you've done it:</p>
<pre><code>class_weight_dict = dict(enumerate(class_weights))
print(class_weight_dict)
# {0: 0.8333333333333334, 1: 1.25}
</code></pre>
<p>There is no class as <code>0</code> and the class weight for class <code>2</code> is missing.</p>
<p>Instead, you should do:</p>
<pre><code>class_weight_dict = {label: weight for label, weight in zip(np.unique(train_y), class_weights)}
print(class_weight_dict)
# {1: 0.8333333333333334, 2: 1.25}
</code></pre>
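<p>The corrected dictionary is then passed to <code>fit</code> exactly as before (reusing the call from the question):</p>

<pre><code>history = model.fit(train_X, train_y, batch_size=64, epochs=30,
                    validation_split=0.1, verbose=1,
                    class_weight=class_weight_dict)
</code></pre>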
|
tensorflow|machine-learning|keras|scikit-learn|deep-learning
| 0 |
1,905,993 | 70,516,228 |
Quicksight Dashboard using existing Template
|
<p>I am trying to create a template in Quicksight, so that it allows me to create dashboards with different datasets, but with the same structure.</p>
<p>I am using boto3 (Python) and the documentation indicates that a template is capable of creating a dashboard using different datasets, as long as the new dataset has the same structure as the dataset with which the template was generated.</p>
<p>However, when I try to create the dashboard, I get the following error:</p>
<pre><code>An error occurred (InvalidParameterValueException) when calling the CreateDashboard operation: Given placeholders [test_2] are not part of template
</code></pre>
<p>It would be helpful if someone could tell me the steps in the code to follow.</p>
<p>Thanks a lot!</p>
|
<p>Follow link to image here
<a href="https://i.stack.imgur.com/69rHj.png" rel="nofollow noreferrer">https://i.stack.imgur.com/69rHj.png</a></p>
<p>See line 32 and the description on line 33.</p>
<p>This had me going for 2 or 3 hours, too. Same error as yourself.
From AWS CLI I derived my QS data set id. That was wrong in my case.
Use the TEMPLATE data set id instead. Issue resolved, dashboard created.</p>
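<p>For illustration, a rough boto3 sketch of the call — every ARN and ID below is made up; the important part is that <code>DataSetPlaceholder</code> must match a placeholder declared when the template was created, not an arbitrary QuickSight dataset id:</p>

<pre><code>import boto3

qs = boto3.client("quicksight")

qs.create_dashboard(
    AwsAccountId="123456789012",
    DashboardId="my-dashboard",
    Name="My dashboard",
    SourceEntity={
        "SourceTemplate": {
            "Arn": "arn:aws:quicksight:us-east-1:123456789012:template/my-template",
            "DataSetReferences": [
                {
                    # must be a placeholder defined on the template itself
                    "DataSetPlaceholder": "placeholder-from-template",
                    "DataSetArn": "arn:aws:quicksight:us-east-1:123456789012:dataset/my-dataset",
                }
            ],
        }
    },
)
</code></pre>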
|
python|amazon-web-services|boto3|dashboard|amazon-quicksight
| 0 |
1,905,994 | 55,615,049 |
Installing postgresql-dev for Postgres 9.6.x in a Dockerfile?
|
<p>I have been fruitlessly searching the internet for 2 days now looking for a way to install postgresql-dev for 9.6 due to an extremely outdated dep I'm trying to run. Unfortunately, running the following Dockerfile commands:</p>
<pre><code>FROM python:2.7-alpine
ENV PYTHONUNBUFFERED 1
RUN mkdir /app/
RUN mkdir ./app/logs/
RUN mkdir ./app/xxx/
WORKDIR /app/xxx/
ADD requirements.txt /app/xxx/
ADD ./ /app/xxx/
RUN apk --update add python py-pip openssl postgresql-dev ca-certificates py-openssl libffi-dev musl-dev openssl-dev wget build-base gcc python-dev py-pip jpeg-dev zlib-dev libxml2 libxslt-dev
ENV LIBRARY_PATH=/lib:/usr/lib
RUN pip install --upgrade pip setuptools
RUN pip install psycopg2==2.4.5
</code></pre>
<p>Gives me the following error:</p>
<pre><code>Collecting psycopg2==2.4.5
Downloading https://files.pythonhosted.org/packages/36/77/894a5dd9f3f55cfc85682d3e6473ee5103d8d418b95baf4019fad3ffa026/psycopg2-2.4.5.tar.gz (719kB)
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info/psycopg2.egg-info
writing pip-egg-info/psycopg2.egg-info/PKG-INFO
writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt
writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt
writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
Error: could not determine PostgreSQL version from '11.2'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-UcoQQZ/psycopg2/
</code></pre>
<p>Which I understand means that I'm installing PostgreSQL 11.2 from postgresql-dev when I need 9.6. I cannot seem to find this apk anywhere, and running postgresql-dev=9.6.5 or its equivalents does not appear to work either. </p>
<p>Is there any way to get this version of postgresql-dev from python2.7 alpine (or any other docker)? I saw that there are postgres docker containers but I'm new to docker and couldn't get them running either (psycopg2 was completely unable to find their installations)</p>
|
<p>The closest version to <code>postgresql-dev</code> 9.6.5 in Alpine repositories is <code>9.6.10-r0</code>, used in Alpine 3.5:
<a href="https://pkgs.alpinelinux.org/package/v3.5/main/x86_64/postgresql-dev" rel="nofollow noreferrer">https://pkgs.alpinelinux.org/package/v3.5/main/x86_64/postgresql-dev</a></p>
<p>Regardless of your Alpine version, you could instruct apk to pick this exact version from the V3.5 apk repository:</p>
<pre><code>apk add postgresql-dev=9.6.10-r0 --repository=http://dl-cdn.alpinelinux.org/alpine/v3.5/main
</code></pre>
|
postgresql|python-2.7|docker|alpine-linux
| 4 |
1,905,995 | 66,659,944 |
Python is giving me both columns of a table I a scraping, but I only want it to give me one of the columns
|
<p>I am using Python to scrape the names of the Alaska Supreme Court justices from Ballotpedia (<a href="https://ballotpedia.org/Alaska_Supreme_Court" rel="nofollow noreferrer">https://ballotpedia.org/Alaska_Supreme_Court</a>). My current code is giving me both the names of the justices as well as the names of the persons in the "Appointed by" column. Here is my current code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
list = ['https://ballotpedia.org/Alaska_Supreme_Court']
temp_dict = {}
for page in list:
r = requests.get(page)
soup = BeautifulSoup(r.content, 'html.parser')
temp_dict[page.split('/')[-1]] = [item.text for item in soup.select("table.wikitable.sortable.jquery-tablesorter a")]
df = pd.DataFrame.from_dict(temp_dict,
orient='index').transpose()
df.to_csv('18-TEST.csv')
</code></pre>
<p>I've been trying to work with this line:</p>
<pre><code>temp_dict[page.split('/')[-1]] = [item.text for item in soup.select("table.wikitable.sortable.jquery-tablesorter a")]
</code></pre>
<p>I'm a little inexperienced using the inspect function on webpages, so I may be trying the wrong thing when I try to put "tr" or "td" (which I am finding under "tbody") after "tablesorter". I'm a bit lost at this point and am having trouble finding resources on this. Would you be able to help me to get python to give me the judge column but not the appointed by column? Thank you!</p>
|
<p>There are different options to get the result.</p>
<h2>Option#1</h2>
<p>Slice the list and pick every second element:</p>
<pre><code>soup.select("table.wikitable.sortable.jquery-tablesorter a")][0::2]
</code></pre>
<p><strong>Example:</strong></p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
lst = ['https://ballotpedia.org/Alaska_Supreme_Court']
temp_dict = {}
for page in lst:
r = requests.get(page)
soup = BeautifulSoup(r.content, 'html.parser')
temp_dict[page.split('/')[-1]] = [item.text for item in soup.select("table.wikitable.sortable.jquery-tablesorter a")][0::2]
pd.DataFrame.from_dict(temp_dict, orient='index').transpose().to_csv('18-TEST.csv', index=False)
</code></pre>
<h2>Option#2</h2>
<p>Make your selection more specific and select only the first <code>td</code> in a <code>tr</code>:</p>
<pre><code>soup.select("table.wikitable.sortable.jquery-tablesorter tr > td:nth-of-type(1)")]
</code></pre>
<p><strong>Example</strong></p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
list = ['https://ballotpedia.org/Alaska_Supreme_Court']
temp_dict = {}
for page in list:
r = requests.get(page)
soup = BeautifulSoup(r.content, 'html.parser')
temp_dict[page.split('/')[-1]] = [item.text for item in soup.select("table.wikitable.sortable.jquery-tablesorter tr > td:nth-of-type(1)")]
pd.DataFrame.from_dict(temp_dict, orient='index').transpose().to_csv('18-TEST.csv', index=False)
</code></pre>
<h2>Option#3</h2>
<p>Use <code>pandas</code> functionality <code>read_html()</code></p>
<p><strong>Example</strong></p>
<pre><code>import pandas as pd
df = pd.read_html('https://ballotpedia.org/Alaska_Supreme_Court')[2]
df.Judge.to_csv('18-TEST.csv', index=False)
</code></pre>
|
python|web-scraping|multiple-columns|web-inspector
| 2 |
1,905,996 | 66,483,306 |
Pip requirements installation fails in Travis due to idna version conflict
|
<p><a href="https://travis-ci.com/github/zobayer1/elastic-migrate/jobs/487243062" rel="nofollow noreferrer">One of my Travis build tests</a> have started to fail with the following error:</p>
<pre><code>The conflict is caused by:
The user requested idna==3.1
requests 2.25.1 depends on idna<3 and >=2.5
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
</code></pre>
<p>However, this runs fine on my local machine. For example:</p>
<pre><code>(venv) C:\Users\Asus\PycharmProjects\elastic-migrate>tox -e py38
GLOB sdist-make: C:\Users\Asus\PycharmProjects\elastic-migrate\setup.py
py38 create: C:\Users\Asus\PycharmProjects\elastic-migrate\.tox\py38
py38 installdeps: -rrequirements.txt
py38 inst: C:\Users\Asus\PycharmProjects\elastic-migrate\.tox\.tmp\package\1\elastic-migrate-0.1.0.dev126+g8e5eb23.zip
py38 installed: appdirs==1.4.4,atomicwrites==1.4.0,attrs==20.3.0,certifi==2020.12.5,cfgv==3.2.0,chardet==4.0.0,click==7.1.2,click-log==0.3.2,codecov==2.1.11,colorama==0.4.4,coverage==5.3.1,distlib==0.3.1,elastic-migrate @ file:///C:/Us
ers/Asus/PycharmProjects/elastic-migrate/.tox/.tmp/package/1/elastic-migrate-0.1.0.dev126%2Bg8e5eb23.zip,filelock==3.0.12,flake8==3.8.4,identify==1.5.10,idna==2.10,importlib-metadata==3.3.0,iniconfig==1.1.1,jsonschema==3.2.0,mccabe==0.
6.1,more-itertools==8.6.0,nodeenv==1.5.0,packaging==20.8,pluggy==0.13.1,pre-commit==2.9.3,py==1.10.0,pycodestyle==2.6.0,pyfakefs==4.3.3,pyflakes==2.2.0,pyparsing==2.4.7,pyrsistent==0.17.3,pytest==6.2.1,pytest-cov==2.10.1,pytest-mock==3
.4.0,PyYAML==5.3.1,requests==2.25.1,requests-mock==1.8.0,setuptools-scm==5.0.1,six==1.15.0,SQLAlchemy==1.3.22,toml==0.10.2,tox==3.20.1,urllib3==1.26.2,validator-collection==1.5.0,virtualenv==20.2.2,wcwidth==0.2.5,zipp==3.4.0
py38 run-test-pre: PYTHONHASHSEED='473'
</code></pre>
<p>For reference:</p>
<ul>
<li>My <a href="https://github.com/zobayer1/elastic-migrate/blob/master/.travis.yml" rel="nofollow noreferrer">.travis.yml</a> file</li>
<li>My <a href="https://github.com/zobayer1/elastic-migrate/blob/master/tox.ini" rel="nofollow noreferrer">tox.ini</a> file</li>
</ul>
<p>This started happening when I tried to add Python 3.9 support to the project, and <a href="https://pyup.io/" rel="nofollow noreferrer">pyup</a> subsequently upgraded the dependencies. After digging into it a little, I found that others are facing <a href="https://github.com/psf/requests/issues/5710" rel="nofollow noreferrer">the same issue</a>. However, I am unable to find a satisfactory way to resolve it. What is the recommended way to handle tox environment dependencies? A single <code>requirements.txt</code> file doesn't seem to be the right way of doing it.</p>
|
<p>Historically, <strong>pip</strong> didn't have a proper dependency resolver. So, if you asked it to install a package without any version flag, you’d be getting the newest version of the package, even if it conflicts with other packages that you had already installed.</p>
<p>However, with <strong>pip 20.3</strong>, this changes, and now <strong>pip</strong> has a stricter dependency resolver. Pip will now complain if any of your sub-dependencies are incompatible.</p>
<p>As a quick fix, you can pin your <strong>idna</strong> version in your <code>requirements.txt</code> to <strong>2.10</strong>, the newest release that still satisfies the <code>requests</code> constraint of an idna version at least 2.5 and below 3. As a longer-term solution, you can adopt a tool like <a href="https://github.com/jazzband/pip-tools" rel="nofollow noreferrer">pip-tools</a>, where you pin your top-level dependencies in a <code>requirements.in</code> file and run a <code>pip-compile</code> command to generate the <code>requirements.txt</code> file. This way there is an explicit delineation between the top-level dependencies and the sub-dependencies, and the tool resolves the sub-dependency conflicts for you.</p>
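<p>As a rough sketch of that workflow (the package names below are illustrative, not the project's actual top-level dependency list), <code>requirements.in</code> would contain only the direct dependencies:</p>
<pre><code># requirements.in: top-level dependencies only
requests
SQLAlchemy
click
</code></pre>
<p>and running <code>pip install pip-tools</code> followed by <code>pip-compile requirements.in</code> regenerates a fully pinned <code>requirements.txt</code>, with sub-dependencies such as <code>idna</code> resolved to mutually compatible versions.</p>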
|
python|pip|travis-ci
| 2 |
1,905,997 | 64,791,178 |
Concatenate row with it's following ones when each row is a list
|
<p>Suppose I have the following DataFrame</p>
<pre><code>dict_test = {'a':[['1','2'], ['t','rba'], ['5','6','20'],['7','9'],['sar']],'b':['John','John','John','Tom','Tom']}
df = pd.DataFrame(dict_test)
</code></pre>
<p><a href="https://i.stack.imgur.com/p15Fb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p15Fb.png" alt="enter image description here" /></a></p>
<p>I've been searching for a way to reshape it in a way such that I end up with</p>
<pre><code>dict_test2 = {'a':[['1','2'], ['1','2','t','rba'], ['1','2','t','rba','5','6','20'],['7','9'],['7','9','sar']],'b':['John','John','John','Tom','Tom']}
df2 = pd.DataFrame(dict_test2)
</code></pre>
<p><a href="https://i.stack.imgur.com/wGUIq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wGUIq.png" alt="
" /></a></p>
<p>Unluckily, I'm not familiar enough with pandas to come up with such a transformation. If anyone has one, or a tip, I will highly appreciate it</p>
|
<p>You can also do:</p>
<pre><code>df['a']=df.groupby('b')['a'].apply(lambda x: x.cumsum())
print(df)
</code></pre>
<p>results:</p>
<pre><code>                          a     b
0                    [1, 2]  John
1            [1, 2, t, rba]  John
2  [1, 2, t, rba, 5, 6, 20]  John
3                    [7, 9]   Tom
4               [7, 9, sar]   Tom
</code></pre>
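<p>The reason <code>cumsum</code> works here (a small illustration added for clarity, not part of the original answer) is that it just applies <code>+</code> cumulatively, and <code>+</code> on Python lists is concatenation, so each row accumulates every earlier list within its group:</p>
<pre><code># plain Python: list "addition" is concatenation, which is what cumsum applies cumulatively
a = ['1', '2']
b = ['t', 'rba']
print(a + b)  # ['1', '2', 't', 'rba']
</code></pre>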
|
python|pandas|dataframe
| 2 |
1,905,998 | 64,833,482 |
Do you need to run brew unlink / brew link before / after brew switch?
|
<p>Do you need to run <code>brew unlink</code> / <code>brew link</code> before / after <code>brew switch</code>?</p>
<p>According to this link <a href="https://docs.brew.sh/Tips-N%27-Tricks#activate-a-previously-installed-version-of-a-formula" rel="nofollow noreferrer">https://docs.brew.sh/Tips-N'-Tricks#activate-a-previously-installed-version-of-a-formula</a>, <code>brew switch</code> activates a previously installed version of a formula.</p>
<p>It looks like I have some 'system' python (installed under <code>/usr/bin/python3</code>) that was not installed with brew; I can see it if I do <code>brew unlink python@3.9</code> and <code>brew unlink python@3.7</code>:</p>
<pre><code>python -V
Python 3.7.3
/usr/bin/python3 -V
Python 3.7.3
which python3
/usr/bin/python3
</code></pre>
<p>Then I switch to python@3.9:</p>
<pre><code>brew switch python@3.9 3.9.0_1
python -V
Python 3.9.0
</code></pre>
<p>Then I switch to python@3.7:</p>
<pre><code>brew switch python@3.7 3.7.9
python -V
Python 3.9.0
</code></pre>
<p>but it still shows me Python 3.9.0, and <code>ls -l /usr/local/bin | grep python3</code> shows python3.9 too.</p>
<p>I also tried to unlink all brew python packages before each brew switch test:</p>
<pre><code>brew unlink python@3.7 && brew unlink python@3.9
brew switch python@3.7 3.7.9
python -V
Python 3.7.3
brew unlink python@3.7 && brew unlink python@3.9
brew switch python@3.9 3.9.0_1
python -V
Python 3.9.0
</code></pre>
<p>So it seems that for some reason it automatically links python@3.9 on <code>brew switch</code> but doesn't do so for python@3.7. Why? Does <code>brew switch</code> run <code>brew unlink</code> / <code>brew link</code> internally, or should that be done manually?</p>
<p>Can I just do <code>brew unlink</code> / <code>brew link</code> to switch python version?</p>
<p><strong>Update:</strong></p>
<p>It seems newer versions of <code>brew</code> don't have <code>switch</code>:</p>
<pre><code>Error: Calling `brew switch` is disabled! Use `brew link` @-versioned formulae instead.
brew --version
Homebrew 2.7.7
Homebrew/homebrew-core (git revision 918f0; last commit 2021-02-04)
Homebrew/homebrew-cask (git revision 2b83c; last commit 2021-02-04)
</code></pre>
<p>So now it should be something like this:</p>
<pre><code>brew unlink python@3.7 && brew unlink python@3.9
brew link python@3.7
python -V
Python 3.7.9
</code></pre>
|
<p><code>brew switch</code> is no longer a brew command.</p>
<pre><code>$ brew --version
Homebrew 3.0.0
Homebrew/homebrew-core (git revision 8d644; last commit 2021-02-10)
$ brew switch mariadb@10.3
Error: Unknown command: switch
</code></pre>
<p>The <a href="https://docs.brew.sh/Tips-N%27-Tricks#activate-a-previously-installed-version-of-a-formula" rel="nofollow noreferrer">Tips and Tricks</a> page you were referring to in your question has also removed any mention of <code>brew switch</code>.</p>
<blockquote>
<p><strong>Installing previous versions of formulae</strong></p>
<p>The supported method of installing specific versions of some formulae
is to see if there is a versioned formula (e.g. gcc@7) available. If
the version you’re looking for isn’t available, consider using brew
extract.</p>
</blockquote>
<p>Likewise, <code>man brew</code> no longer mentions <code>brew switch</code>.</p>
<p>So it seems that your question has been solved upstream by Homebrew maintainers.</p>
<p>Note that <code>brew switch</code> was deprecated since <a href="https://brew.sh/2020/12/01/homebrew-2.6.0/" rel="nofollow noreferrer">Homebrew 2.6.0</a> .</p>
<p><strong>Update:</strong></p>
<p>If we take your example with python@3.7 and python@3.9, we should first inspect the output of <code>brew info python@3.7</code> and <code>brew info python@3.9</code>. We see that python@3.7 is <a href="https://docs.brew.sh/FAQ#what-does-keg-only-mean" rel="nofollow noreferrer">keg-only</a>:</p>
<pre><code>python@3.7 is keg-only, which means it was not symlinked into /usr/local,
because this is an alternate version of another formula.
</code></pre>
<p>which is not the case for python@3.9, because it is currently an alias of the python formula. This means that installing python@3.9 automatically runs <code>brew link python@3.9</code>, while installing python@3.7 does not run <code>brew link python@3.7</code>.</p>
<p>Then to list which files will be linked with python@3.9, you can run <code>brew link --dry-run python@3.9</code></p>
<pre><code>Would link:
/usr/local/bin/2to3
/usr/local/bin/2to3-3.9
/usr/local/bin/easy_install-3.9
/usr/local/bin/idle3
/usr/local/bin/idle3.9
/usr/local/bin/pip3
/usr/local/bin/pip3.9
/usr/local/bin/pydoc3
/usr/local/bin/pydoc3.9
/usr/local/bin/python3
/usr/local/bin/python3-config
/usr/local/bin/python3.9
/usr/local/bin/python3.9-config
/usr/local/bin/wheel3
/usr/local/share/man/man1/python3.1
/usr/local/share/man/man1/python3.9.1
/usr/local/lib/pkgconfig/python-3.9-embed.pc
/usr/local/lib/pkgconfig/python-3.9.pc
/usr/local/lib/pkgconfig/python3-embed.pc
/usr/local/lib/pkgconfig/python3.pc
/usr/local/Frameworks/Python.framework/Headers
/usr/local/Frameworks/Python.framework/Python
/usr/local/Frameworks/Python.framework/Resources
/usr/local/Frameworks/Python.framework/Versions/3.9
/usr/local/Frameworks/Python.framework/Versions/Current
</code></pre>
<p>To list which files will be linked with python@3.7, run <code>brew link --dry-run python@3.7</code>:</p>
<pre><code>Would link:
/usr/local/bin/2to3
/usr/local/bin/2to3-3.7
/usr/local/bin/easy_install-3.7
/usr/local/bin/idle3
/usr/local/bin/idle3.7
/usr/local/bin/pip3
/usr/local/bin/pip3.7
/usr/local/bin/pydoc3
/usr/local/bin/pydoc3.7
/usr/local/bin/python3
/usr/local/bin/python3-config
/usr/local/bin/python3.7
/usr/local/bin/python3.7-config
/usr/local/bin/python3.7m
/usr/local/bin/python3.7m-config
/usr/local/bin/pyvenv
/usr/local/bin/pyvenv-3.7
/usr/local/bin/wheel3
/usr/local/share/man/man1/python3.1
/usr/local/share/man/man1/python3.7.1
/usr/local/lib/pkgconfig/python-3.7.pc
/usr/local/lib/pkgconfig/python-3.7m.pc
/usr/local/lib/pkgconfig/python3.pc
/usr/local/Frameworks/Python.framework/Headers
/usr/local/Frameworks/Python.framework/Python
/usr/local/Frameworks/Python.framework/Resources
/usr/local/Frameworks/Python.framework/Versions/3.7
/usr/local/Frameworks/Python.framework/Versions/Current
</code></pre>
<p>So, to answer your question: one way to switch between python@3.7 and python@3.9 is to use <code>brew link</code> and <code>brew unlink</code>, and yes, this can break things if you have scripts that are compatible with python@3.7 but not with python@3.9, or vice versa.</p>
<p>There is another way to use different versions of python at the same time. This is explained at the end of the output of <code>brew link --dry-run python@3.7</code>:</p>
<pre><code>If you need to have this software first in your PATH instead consider running:
echo 'export PATH="/usr/local/opt/python@3.7/bin:$PATH"' >> ~/.zshrc
</code></pre>
<p>For scripts that need python@3.7, you can set:</p>
<pre><code>export PATH="/usr/local/opt/python@3.7/bin:$PATH"
</code></pre>
<p>and for scripts that need python@3.9, you can set:</p>
<pre><code>export PATH="/usr/local/opt/python@3.9/bin:$PATH"
</code></pre>
<p>Hope this helps.</p>
|
python-3.x|macos|homebrew
| 3 |
1,905,999 | 63,782,470 |
Error occured when migrate mysql in django
|
<p>I installed the MySQL client with <code>pip install mysqlclient</code>, but when I try to connect to MySQL using this code:</p>
<pre><code># settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'django_db',
        'USER': 'root',
        'PASSWORD': '',
        'HOST': '127.0.0.1',
        'PORT': '8080'
    }
}
</code></pre>
<p>the following error is returned:</p>
<pre><code>django.db.utils.OperationalError: (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 0")
</code></pre>
|
<p>This happens because Django couldn't connect to the MySQL database.</p>
<p>Try changing the host to 'localhost'; it points at the same machine, but the connection may behave differently.</p>
<p>Also check which port your MySQL server is actually listening on: MySQL uses port 3306 by default, while 8080 is typically a web-server port, so the <code>PORT</code> setting is the most likely culprit.</p>
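<p>A minimal sketch of the corrected settings block, assuming a default local MySQL installation listening on 3306 (adjust it to however your server is actually configured):</p>
<pre><code># settings.py: point Django at a local MySQL server on its default port
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'django_db',
        'USER': 'root',
        'PASSWORD': '',
        'HOST': '127.0.0.1',
        'PORT': '3306',  # MySQL's default port; 8080 is usually a web server port
    }
}
</code></pre>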
|
mysql|python-3.x|django
| 0 |