Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, 10 to 150 chars) | question (string, 21 to 64.2k chars) | answer (string, 19 to 59.4k chars) | tags (string, 5 to 112 chars) | score (int64, -10 to 17.3k)
---|---|---|---|---|---|---|
1,903,400 | 62,360,684 |
How to increase the size of xticks in pandas plot
|
<p>This is a follow-up to a question I asked <a href="https://stackoverflow.com/questions/62358966/adding-minor-ticks-to-pandas-plot" rel="nofollow noreferrer">here</a>. The code is as follows:</p>
<pre><code>from pandas_datareader import data as web
import matplotlib.pyplot as plt
import matplotlib.dates as md
fig, (ax1, ax2) = plt.subplots(2, 1)
df = web.DataReader('F', 'yahoo')
df2 = web.DataReader('Fb', 'yahoo')
ax = df.plot(figsize=(35,15), ax=ax1)
df2.plot(y = 'Close', figsize=(35,15), ax=ax2)
plt.xticks(fontsize = 25)
for ax in (ax1, ax2):
    ax.xaxis.set_major_locator(md.MonthLocator(bymonth = range(1, 13, 6)))
    ax.xaxis.set_major_formatter(md.DateFormatter('%b\n%Y'))
    ax.xaxis.set_minor_locator(md.MonthLocator())
    plt.setp(ax.xaxis.get_majorticklabels(), rotation = 0 )
plt.show()
</code></pre>
<p>This produces this plot:
<a href="https://i.stack.imgur.com/4yxZJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4yxZJ.png" alt="enter image description here"></a></p>
<p>How can I increase the size of the xticks in both subplots? As you can see, the size was increased for the bottom one only.</p>
<pre><code> [1]: https://stackoverflow.com/questions/62358966/adding-minor-ticks-to-pandas-plot
</code></pre>
|
<p>You can use the <code>tick_params</code> function on the <code>ax</code> instance to control the size of the tick-labels on the x-axis. If you want to control the size of both x and y axis, use <code>axis='both'</code>. You can additionally specify <code>which='major'</code> or <code>which='minor'</code> or <code>which='both'</code> depending on if you want to change major, minor or both tick labels.</p>
<pre><code>for ax in (ax1, ax2):
    # Rest of the code
    ax.tick_params(axis='x', which='both', labelsize=25)
</code></pre>
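A self-contained sketch of the same idea (dummy data and the off-screen Agg backend stand in for the yahoo reader, which isn't needed to demonstrate the tick sizing):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot([1, 2, 3])
ax2.plot([3, 2, 1])
for ax in (ax1, ax2):
    # enlarge the x tick labels on both subplots
    ax.tick_params(axis='x', which='both', labelsize=25)

print(ax1.xaxis.get_majorticklabels()[0].get_size())  # → 25.0
```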
|
python|python-3.x|pandas|matplotlib|xticks
| 7 |
1,903,401 | 35,436,873 |
Python decimal custom context
|
<p>The context manager <code>decimal.localcontext</code> apparently is ignored when used inside another context. The following example illustrates this (Python 2.7):</p>
<pre><code>from decimal import Decimal, Context, localcontext
from contextlib import contextmanager
@contextmanager
def precision_context(precision):
    yield localcontext(Context(prec=precision))

PRECISION=4
SMALL_NUMBER=Decimal('0.0001')

with localcontext(Context(prec=PRECISION)):
    # This is working as it should
    print SMALL_NUMBER + 1 # prints 1.000

with precision_context(PRECISION):
    # But this is not
    print SMALL_NUMBER + 1 # prints 1.0001
</code></pre>
<p>Why this happens, and how to solve it?</p>
|
<p>This happens because you don't actually enter the context manager (invoke the <code>__enter__</code> method). Nothing calls <code>localcontext(Context(prec=precision)).__enter__</code> because</p>
<pre><code>with precision_context(PRECISION):
</code></pre>
<p>only enters the <code>precision_context</code> context manager.</p>
<p>You can solve the problem by adding another <code>with</code> statement:</p>
<pre><code>with precision_context(PRECISION) as ctx:
    # Enter `localcontext(Context(prec=precision))`
    with ctx:
        print(SMALL_NUMBER + 1) # prints 1.000
</code></pre>
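A variation not shown in the original answer: the helper itself can be rewritten to enter the local context inside the generator, so callers only need a single `with` statement:

```python
from decimal import Decimal, Context, localcontext
from contextlib import contextmanager

@contextmanager
def precision_context(precision):
    # enter the decimal context here, so callers get it active automatically
    with localcontext(Context(prec=precision)) as ctx:
        yield ctx

with precision_context(4):
    result = Decimal('0.0001') + 1

print(result)  # → 1.000
```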
|
python|decimal|contextmanager
| 3 |
1,903,402 | 35,741,212 |
django-registration-redux with custom user model registration NotImplementedError
|
<p>I am struggling to make my custom user model work with django-registration-redux. I managed to get all fields displaying properly in the registration view and I can create an account from the Django shell. But when I try to register on the page I get:</p>
<pre><code>NotImplementedError at /accounts/register/
No exception message supplied
Request Method: POST
Request URL: http://127.0.0.1/accounts/register/
Django Version: 1.9.2
Exception Type: NotImplementedError
Exception Location: /usr/local/lib/python3.5/site-packages/registration/views.py in register, line 118
Python Executable: /usr/local/bin/python3.5
Python Version: 3.5.1
Python Path:
['/opt/davinci/davinci-web',
'/usr/local/bin',
'/usr/local/lib/python35.zip',
'/usr/local/lib/python3.5',
'/usr/local/lib/python3.5/plat-linux',
'/usr/local/lib/python3.5/lib-dynload',
'/usr/local/lib/python3.5/site-packages']
</code></pre>
<p>my root.urls.py</p>
<pre><code>from django.conf.urls import url, include
from django.contrib import admin
from home.views import home
urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^$', home, name="home"),
    url(r'^booking/', include('booking.urls'), name="booking"),
    url(r'^accounts/', include('accounts.urls'), name='accounts')
]
</code></pre>
<p>accounts.urls</p>
<pre><code>from django.conf.urls import url, include
from registration.views import RegistrationView
from .forms import UserRegistrationForm
urlpatterns = [
    url(r'^register/$', RegistrationView.as_view(form_class=UserRegistrationForm),
        name='registration_register',),
    url(r'^', include('registration.backends.default.urls')),
]
</code></pre>
<p>accounts.models.py</p>
<pre><code>from django.db import models
from django.utils import timezone
from django.core.mail import send_mail
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth.models import AbstractBaseUser, PermissionsMixin, BaseUserManager
class UserManager(BaseUserManager):
    def _create_user(self, email, password, first_name, last_name,
                     is_staff, is_superuser, **extra_fields):
        now = timezone.now()
        if not email:
            raise ValueError(_('Email is required'))
        email = self.normalize_email(email)
        first_name = first_name.capitalize()
        last_name = last_name.capitalize()
        user = self.model(email=email, first_name=first_name, last_name=last_name,
                          is_staff=is_staff, is_active=False,
                          is_superuser=is_superuser, last_login=now, date_joined=now,
                          **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_user(self, email, password, first_name, last_name, **extra_fields):
        return self._create_user(email, password, first_name, last_name, is_staff=False,
                                 is_superuser=False, **extra_fields)

    def create_superuser(self, email, password, first_name, last_name, **extra_fields):
        user = self._create_user(email, password, first_name, last_name, is_staff=True,
                                 is_superuser=True, **extra_fields)
        user.is_active = True
        user.save(using=self._db)
        return user


class User(AbstractBaseUser, PermissionsMixin):
    email = models.EmailField(_('email'), max_length=100, unique=True,
                              help_text=_('Required. 100 characters or fewer. '
                                          'Letters, numbers and @/./+/-/_ characters'))
    first_name = models.CharField(_('first name'), max_length=30, blank=False, null=True)
    last_name = models.CharField(_('last name'), max_length=30, blank=False, null=True)
    is_staff = models.BooleanField(_('staff status'), default=False)
    is_active = models.BooleanField(_('active'), default=False)
    date_joined = models.DateTimeField(_('date joined'), default=timezone.now)

    USERNAME_FIELD = 'email'
    REQUIRED_FIELDS = ['first_name', 'last_name']

    objects = UserManager()

    class Meta:
        verbose_name = _('user')
        verbose_name_plural = _('user')

    def get_full_name(self):
        full_name = '%s %s' % (self.first_name, self.last_name)
        return full_name.strip()

    def get_short_name(self):
        return self.first_name

    def email_user(self, subject, message, from_email=None):
        send_mail(subject, message, from_email, [self.email])
</code></pre>
<p>accounts.forms.py</p>
<pre><code>from django import forms
from django.utils.translation import ugettext_lazy as _
from registration.forms import RegistrationForm
from .models import User
class UserRegistrationForm(RegistrationForm):
    first_name = forms.CharField(max_length=30, label=_("First name"))
    last_name = forms.CharField(max_length=30, label=_("Last name"))

    class Meta:
        model = User
        fields = ("email", "first_name", "last_name")
</code></pre>
<p>I suspect that I have to implement the register method somewhere, but I have no idea how or where.
I found <a href="https://stackoverflow.com/questions/18975231/notimplementederror-in-django-registration-when-registering-a-user">this answer</a> but I cannot figure out what to do with it.</p>
|
<p>OK, I got it working. Posting the solution here for posterity.
My accounts/views.py:</p>
<pre><code>from django.contrib.sites.shortcuts import get_current_site
from registration.backends.default.views import RegistrationView
from registration.models import RegistrationProfile
from registration import signals
from .forms import UserRegistrationForm
from .models import User
class UserRegistrationView(RegistrationView):
    form_class = UserRegistrationForm

    def register(self, request, form):
        site = get_current_site(request)
        if hasattr(form, 'save'):
            new_user_instance = form.save()
        else:
            # the manager lives on the class, not on an instance
            new_user_instance = (User.objects
                                 .create_user(**form.cleaned_data))
        new_user = RegistrationProfile.objects.create_inactive_user(
            new_user=new_user_instance,
            site=site,
            send_email=self.SEND_ACTIVATION_EMAIL,
            request=request,
        )
        signals.user_registered.send(sender=self.__class__,
                                     user=new_user,
                                     request=request)
        return new_user
</code></pre>
<p>And in accounts/urls.py change</p>
<pre><code>from registration.views import RegistrationView
</code></pre>
<p>to</p>
<pre><code>from .views import UserRegistrationView
</code></pre>
|
python|django|django-registration
| 4 |
1,903,403 | 15,600,561 |
Warning in file statement
|
<p>I am getting the warning <code>Assignment to reserved built-in symbol: file</code>
on this command:</p>
<pre><code>file=open(fileName,'r')
</code></pre>
<p>Any Specific reason?</p>
|
<p><a href="http://docs.python.org/2/library/functions.html#file"><code>file</code></a> is a built-in global name in Python. It's warning you not to redefine it.</p>
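A minimal sketch of the non-shadowing alternative (the file and its contents are made up for illustration):

```python
import os
import tempfile

# create a throwaway file so the example is runnable
fd, file_name = tempfile.mkstemp(suffix=".txt")
os.close(fd)
with open(file_name, 'w') as out:
    out.write("hello")

# use a name other than `file` so the built-in is not shadowed
with open(file_name, 'r') as infile:
    contents = infile.read()

print(contents)  # → hello
os.remove(file_name)
```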
|
python|warnings
| 6 |
1,903,404 | 49,142,612 |
Python Iterate over Folders and combine csv files inside
|
<p>Windows OS - I've got several hundred subdirectories and each subdirectory contains 1 or more .csv files. All the files are identical in structure. I'm trying to loop through each folder and concat all the files in each subdirectory into a new file combining all the .csv files in that subdirectory. </p>
<p>example:</p>
<p>folder1 -> file1.csv, file2.csv, file3.csv -->> file1.csv, file2.csv, file3.csv, combined.csv</p>
<p>folder2 -> file1.csv, file2.csv -->> file1.csv, file2.csv, combined.csv</p>
<p>Very new to coding and getting lost in this. Tried using os.walk but completely failed.</p>
|
<p>The generator produced by <code>os.walk</code> yields three items each iteration: the path of the current directory in the walk, a list of paths representing sub directories that will be traversed next, and a list of filenames contained in the current directory.</p>
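Concretely, a minimal sketch of what each iteration of `os.walk` yields (directory and file names are made up for illustration):

```python
import os
import tempfile

# build a tiny directory tree to walk
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "folder1"))
open(os.path.join(root, "folder1", "file1.csv"), "w").close()

for current_dir, subdirs, files in os.walk(root):
    print(current_dir, subdirs, files)
# first yields (root, ['folder1'], []), then (root/folder1, [], ['file1.csv'])
```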
<p>If for whatever reason you don't want to walk certain file paths, you should remove entries from what I called <code>sub</code> below (the list of sub directories contained in <code>root</code>). This will prevent <code>os.walk</code> from traversing any paths you removed.</p>
<p>My code does not prune the walk. Be sure to update this if you don't want to traverse an entire file subtree.</p>
<p>The following outline should work for this although I haven't been able to test this on Windows. I have no reason to think it'll behave differently.</p>
<pre><code>import os
import sys
def write_files(sources, combined):
    # Want the first header
    with open(sources[0], 'r') as first:
        combined.write(first.read())
    for i in range(1, len(sources)):
        with open(sources[i], 'r') as s:
            # Ignore the rest of the headers
            next(s, None)
            for line in s:
                combined.write(line)

def concatenate_csvs(root_path):
    for root, sub, files in os.walk(root_path):
        filenames = [os.path.join(root, filename) for filename in files
                     if filename.endswith('.csv') and filename != 'combined.csv']
        if not filenames:
            # skip directories that contain no csv files
            continue
        combined_path = os.path.join(root, 'combined.csv')
        with open(combined_path, 'w+') as combined:
            write_files(filenames, combined)

if __name__ == '__main__':
    path = sys.argv[1]
    concatenate_csvs(path)
</code></pre>
|
python|csv
| 0 |
1,903,405 | 49,110,675 |
Could I have used a for loop somehow?
|
<p>Just wondering if there is a way to clean this up a little, since it's a bunch of lines that only change slightly. This is using Python 3.4.3, tkinter, and mysql.connector.</p>
<pre><code> Plantname = tk.StringVar()
self.Plantbox = tk.Entry(self, textvariable=Plantname)
self.Plantbox.grid(row=0, column=0)
self.Name = tk.Label(self, text="Name",width=10)
self.Name.grid(row=1, column=0)
self.Amount = tk.Label(self, text="Amount",width=10)
self.Amount.grid(row=1, column=1)
self.Date = tk.Label(self, text="Date",width=10)
self.Date.grid(row=1, column=2)
self.Planting = tk.Label(self, text="Planting #",width=10)
self.Planting.grid(row=1, column=3)
self.batch = tk.Label(self, text="batch #",width=10)
self.batch.grid(row=1, column=4)
self.Name_2 = tk.Label(self, text="0")
self.Name_2.grid(row=2, column=0)
self.Amount_2 = tk.Label(self, text="0")
self.Amount_2.grid(row=2, column=1)
self.Date_2 = tk.Label(self, text="0")
self.Date_2.grid(row=2, column=2)
</code></pre>
<p>full code:</p>
<p><a href="https://pastebin.com/JPjrtdEg" rel="nofollow noreferrer">https://pastebin.com/JPjrtdEg</a></p>
|
<p>Yes, you can use a loop. There's nothing special about tkinter objects that make them any different than any other python object.</p>
<pre><code>for col, heading in enumerate(("Name", "Amount", "Date", "Planting #", "batch #")):
    label = tk.Label(self, text=heading, width=10)
    label.grid(row=1, column=col)
</code></pre>
|
python|tkinter
| 1 |
1,903,406 | 25,014,667 |
Sympy Lambdify with array inputs
|
<p>I am trying to give an array as input and expect an array as output for the following code.</p>
<pre><code>from sympy import symbols
from sympy.utilities.lambdify import lambdify
import os
from sympy import *
import numpy as np
text=open('expr.txt','r')
expr=text.read()
x,param1,param2=symbols('x param1 param2')
params=np.array([param1,param2])
T=lambdify((x,params),expr,modules='numpy')
data=np.genfromtxt('datafile.csv',delimiter=',')
print T(data[0],[0.29,4.5])
text.close()
</code></pre>
<p>But get the following error.</p>
<pre><code>TypeError: <lambda>() takes exactly 3 arguments (13 given)
</code></pre>
<p>How do I tell sympy that it's a single array? Thanks in advance.</p>
|
<p><strong>1. Solution:</strong>
Your problem is that the function <code>T</code> expects a scalar value, but you are passing a list. Try this instead of <code>print T(data[0],[0.29,4.5])</code> to get a list of results:</p>
<pre><code>print [T(val,[0.29,4.5]) for val in data[0]]
</code></pre>
<p>Or use a wrapper function:</p>
<pre><code>def arrayT(array, params):
    return [T(val, params) for val in array]

print arrayT(data[0], [0.29, 4.5])
</code></pre>
<hr>
<p><strong>2. Solution:</strong> You have to change your mathematical expression. Somehow sympy doesn't work with list of lists, so try this:</p>
<pre><code>expr = "2*y/z*(x**(z-1)-x**(-1-z/2))"
T=lambdify((x,y,z),expr,'numpy')
print T(data[0], 0.29, 4.5)
</code></pre>
|
python|sympy
| 5 |
1,903,407 | 67,852,228 |
Create a dataframe from the model
|
<p>I am writing an application using Django and I ran into a problem. I have models that are as follows:</p>
<pre><code>class Feature(models.Model):
    category = models.ForeignKey(Category, on_delete=models.CASCADE)
    feature_name = models.CharField(max_length=300)
    feature_code = models.CharField(max_length=50, unique=True)
    feature_predictable = models.BooleanField(default=False)

    def __str__(self):
        return self.feature_name


def breed_name_based_upload_to(instance, filename):
    return "breeds/{0}/{1}".format(instance.breed_name, filename)


class Breed(models.Model):
    breed_name = models.CharField(max_length=300)
    breed_features = models.ManyToManyField(Feature)
    breed_image = models.ImageField(default='no_image.png', upload_to=breed_name_based_upload_to)
    breed_visible = models.BooleanField(default=True)

    def __str__(self):
        return self.breed_name


class FeatureValue(models.Model):
    breed = models.ForeignKey(Breed, on_delete=models.CASCADE)
    feature = models.ForeignKey(Feature, on_delete=models.CASCADE)
    feature_value = IntegerRangeField(min_value=1, max_value=3, default=1)

    class Meta:
        unique_together = ('breed', 'feature')
</code></pre>
<p>In the 'Feature' model, I have 3 records with feature_code with values for example 'value1', 'value2', 'value3'. In the 'Breed' model I also have 3 records, with each of these records assigned values for each record from the 'Feature' model (I use the FeatureValue model for assigning values).</p>
<p>Now I need to use the Breed model to create a DataFrame that would look like this:</p>
<pre><code>id  breed_name  value1  value2  value3
0   name1       2       1       3
1   name2       1       2       2
2   name3       3       3       3
<p>At the moment, using this code:</p>
<pre><code>dataframe = pandas.DataFrame().from_records(list(
    Breed.objects.all().values(
        'id',
        'breed_name',
        'featurevalue__feature_value'
    )
))
</code></pre>
<p>I managed to achieve something like this:</p>
<pre><code>id  breed_name  featurevalue__feature_value
0   name1       2
0   name1       1
0   name1       3
1   name2       1
1   name2       2
1   name2       2
2   name3       3
2   name3       3
2   name3       3
<p>How can I fix it?</p>
|
<p>If we start from your example dataframe.</p>
<p>You can enumerate rows in each group of <code>breed_name</code> values.</p>
<pre><code>>>> df["pos"] = df.groupby("breed_name").cumcount()
>>> df["pos"] = "value" + df["pos"].astype("str")
>>> df
id breed_name featurevalue__feature_value pos
0 0 name1 2 value0
1 0 name1 1 value1
2 0 name1 3 value2
3 1 name2 1 value0
4 1 name2 2 value1
5 1 name2 2 value2
6 2 name3 3 value0
7 2 name3 3 value1
8 2 name3 3 value2
</code></pre>
<p>Then pivot the dataframe, delete extra level of columns index and reset row index.</p>
<pre><code>>>> df2 = df.pivot(columns="pos", index=["id", "breed_name"])
>>> df2
featurevalue__feature_value
pos value0 value1 value2
id breed_name
0 name1 2 1 3
1 name2 1 2 2
2 name3 3 3 3
>>> df2 = df2.droplevel(0, axis=1).reset_index()
>>> df2
pos id breed_name value0 value1 value2
0 0 name1 2 1 3
1 1 name2 1 2 2
2 2 name3 3 3 3
</code></pre>
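Putting both steps together, here is a runnable sketch on the hardcoded example data (with 1-based column names, to match the desired value1/value2/value3 output in the question):

```python
import pandas as pd

# the long-format dataframe produced by the ORM query in the question
df = pd.DataFrame({
    "id": [0, 0, 0, 1, 1, 1, 2, 2, 2],
    "breed_name": ["name1"] * 3 + ["name2"] * 3 + ["name3"] * 3,
    "featurevalue__feature_value": [2, 1, 3, 1, 2, 2, 3, 3, 3],
})

# number the rows within each breed, starting at 1
df["pos"] = "value" + (df.groupby("breed_name").cumcount() + 1).astype(str)

wide = df.pivot(index=["id", "breed_name"],
                columns="pos",
                values="featurevalue__feature_value").reset_index()
wide.columns.name = None
print(wide)
```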
|
python|django|pandas|dataframe|model
| 0 |
1,903,408 | 67,815,685 |
How can you restrict user to only input alphabets in Python?
|
<p>I am a beginner trying to learn Python. First question.</p>
<p>Trying to find a way to ask users to input alphabets only.
Wrote this but it doesn't work!
It returns <code>True</code> and then skips the rest before continuing to the <code>else</code> clause.
<code>break</code> doesn't work either.</p>
<p>Can someone point out why?
I assume it's very elementary, but I'm stuck and would appreciate it if someone could pull me out.</p>
<pre><code>while True:
    n = input("write something")
    if print(n.isalpha()) == True:
        print(n)
        break
    else:
        print("Has to be in alphabets only.")
</code></pre>
|
<p>Your problem here is the <code>print</code> function. <code>print</code> does not return anything, so your <code>if</code> statement is always comparing <code>None</code> to <code>True</code>.</p>
<pre><code>while True:
    n = input("write something")
    if n.isalpha():
        print(n)
        break
    else:
        print("Has to be in alphabets only.")
</code></pre>
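A few `str.isalpha()` edge cases worth knowing when validating input this way:

```python
print("hello".isalpha())        # → True
print("hello world".isalpha())  # → False (space is not alphabetic)
print("abc123".isalpha())       # → False (contains digits)
print("".isalpha())             # → False (empty string)
print("café".isalpha())         # → True (Unicode letters count too)
```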
|
python|loops|break|restrict|isalpha
| 1 |
1,903,409 | 30,384,627 |
"Unwrapping" SklearnClassifier Object - NLTK Python
|
<p>I have used the SklearnClassifier() wrapper from the NLTK python package to train a couple of sklearn classifiers (LogisticRegression() and RandomForest()) for a binary classification problem where texts are the features. Is there any functionality that allows one to "unwrap" this object so that one can access things such as parameter estimates (for logistic regression) or the variable importance list from the random forest (or any of the items available from the original sklearn object)? The nltk classifier object can score new instances, so the underlying information must be contained in that object somewhere. Thank you for your thoughts.</p>
|
<p>Your classifier is hidden under the <code>_clf</code> attribute.</p>
<pre><code>classifier = SKLearnClassifier(MLPClassifier())
mlp = classifier._clf
</code></pre>
<p>Documentation found at <a href="http://www.nltk.org/_modules/nltk/classify/scikitlearn.html" rel="nofollow noreferrer">http://www.nltk.org/_modules/nltk/classify/scikitlearn.html</a> :</p>
<pre><code>def __init__(self, estimator, dtype=float, sparse=True):
    """
    :param estimator: scikit-learn classifier object.

    :param dtype: data type used when building feature array.
        scikit-learn estimators work exclusively on numeric data. The
        default value should be fine for almost all situations.

    :param sparse: Whether to use sparse matrices internally.
        The estimator must support these; not all scikit-learn classifiers
        do (see their respective documentation and look for "sparse
        matrix"). The default value is True, since most NLP problems
        involve sparse feature sets. Setting this to False may take a
        great amount of memory.
    :type sparse: boolean.
    """
    self._clf = estimator
    self._encoder = LabelEncoder()
    self._vectorizer = DictVectorizer(dtype=dtype, sparse=sparse)
</code></pre>
|
python|scikit-learn|nltk
| 0 |
1,903,410 | 66,924,831 |
aiohttp - Splitting task while getting large number of HTML pages - RuntimeError: cannot reuse already awaited coroutine
|
<p>I have list of URL links which I get and save to HTML files with following code:</p>
<pre><code>tasksURL = []
async with aiohttp.ClientSession() as session:
    for url in listOfURLs:
        tasksURL.append(self.fetch(session, url))
    allHTMLs = await asyncio.gather(*tasksURL)

i = 0
for html in allHTMLs:
    i += 1
    with open("myPath.html", mode='w', encoding='UTF-8', errors='strict', buffering=1) as f:
        f.write(html)
</code></pre>
<p>Since the URL list can be quite large (up to 60,000), I need to chunk these tasks.</p>
<p>I tried the following solution. I've defined a function that chops the list into smaller chunks:</p>
<pre><code>def chunkList(self, lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]
</code></pre>
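The chunking helper itself is sound; as a standalone sketch (dropping `self`, with the parameter name matching the body):

```python
def chunk_list(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]

chunks = list(chunk_list([1, 2, 3, 4, 5, 6, 7], 3))
print(chunks)  # → [[1, 2, 3], [4, 5, 6], [7]]
```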
<p>And then I use this function to run each chunked piece of <code>listOfURLs</code> like this:</p>
<pre><code>tasksURL = []
chunkedListOfURLs = self.chunkList(listOfURLs, 5)

for URLList in chunkedListOfURLs:
    async with aiohttp.ClientSession() as session:
        for url in URLList:
            tasksURL.append(self.fetch(session, url))
        allHTMLs = await asyncio.gather(*tasksURL)

    for html in allHTMLs:
        with open("myPath.html", mode='w', encoding='UTF-8', errors='strict', buffering=1) as f:
            f.write(html)
</code></pre>
<p>I'm getting this error:</p>
<blockquote>
<p>RuntimeError: cannot reuse already awaited coroutine</p>
</blockquote>
<p>I understand the problem but haven't found a way around it.</p>
|
<p>I would suggest using the <a href="https://docs.python.org/3/library/asyncio-queue.html#asyncio.Queue" rel="nofollow noreferrer">asyncio.Queue</a> in this case. You don't want to create 60k tasks, one for each URL. When you use a queue, you can spawn a set number of workers and limit the queue size:</p>
<blockquote>
<p>If maxsize is less than or equal to zero, the queue size is infinite.
If it is an integer greater than 0, then await put() blocks when the
queue reaches maxsize until an item is removed by get().</p>
</blockquote>
<pre><code>import asyncio
import random

WORKERS = 10

async def worker(q):
    while True:
        url = await q.get()
        t = random.uniform(1, 5)
        print(f"START: {url} ({t:.2f}s)")
        await asyncio.sleep(t)
        print(f"END: {url}")
        q.task_done()

async def main():
    q = asyncio.Queue(maxsize=100)
    tasks = []

    for _ in range(WORKERS):
        tasks.append(asyncio.create_task(worker(q)))

    for i in range(10):
        await q.put(f"http://example.com/{i}")

    await q.join()

    for task in tasks:
        task.cancel()

    await asyncio.gather(*tasks, return_exceptions=True)

if __name__ == "__main__":
    main = asyncio.run(main())
</code></pre>
<p><strong>Test:</strong></p>
<pre><code>$ python test.py
START: http://example.com/0 (1.14s)
START: http://example.com/1 (4.40s)
START: http://example.com/2 (2.48s)
START: http://example.com/3 (4.34s)
START: http://example.com/4 (1.94s)
END: http://example.com/0
START: http://example.com/5 (1.52s)
END: http://example.com/4
START: http://example.com/6 (4.84s)
END: http://example.com/2
START: http://example.com/7 (4.35s)
END: http://example.com/5
START: http://example.com/8 (2.33s)
END: http://example.com/3
START: http://example.com/9 (1.80s)
END: http://example.com/1
END: http://example.com/8
END: http://example.com/9
END: http://example.com/6
END: http://example.com/7
</code></pre>
<p>Btw, writing to files will block your main event loop; either call it via <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor" rel="nofollow noreferrer">run_in_executor</a> or use <a href="https://github.com/Tinche/aiofiles" rel="nofollow noreferrer">aiofiles</a>.</p>
<p><strong>Update Sat 3 Apr 13:49:55 UTC 2021:</strong></p>
<p><strong>Example:</strong></p>
<pre><code>import asyncio
import traceback

import aiohttp

WORKERS = 5
URLS = [
    "http://airbnb.com",
    "http://amazon.co.uk",
    "http://amazon.com",
    "http://baidu.com",
    "http://basecamp.com",
    "http://bing.com",
    "http://djangoproject.com",
    "http://envato.com",
    "http://facebook.com",
    "http://github.com",
    "http://gmail.com",
    "http://google.co.uk",
    "http://google.com",
    "http://google.es",
    "http://google.fr",
    "http://heroku.com",
    "http://instagram.com",
    "http://linkedin.com",
    "http://live.com",
    "http://netflix.com",
    "http://rubyonrails.org",
    "http://shopify.com",
    "http://stackoverflow.com",
    "http://trello.com",
    "http://wordpress.com",
    "http://yahoo.com",
    "http://yandex.ru",
    "http://yiiframework.com",
    "http://youtube.com",
]

class Bot:
    async def fetch(self, client, url):
        async with client.get(url) as r:
            return await r.text()

    async def worker(self, q, client):
        loop = asyncio.get_running_loop()
        while True:
            url = await q.get()
            try:
                html = await self.fetch(client, url)
            except Exception:
                traceback.print_exc()
            else:
                await loop.run_in_executor(None, self.save_to_disk, url, html)
            finally:
                q.task_done()

    def save_to_disk(self, url, html):
        print(f"{url} ({len(html)})")

async def main():
    q = asyncio.Queue(maxsize=100)
    tasks = []

    async with aiohttp.ClientSession() as client:
        bot = Bot()

        for _ in range(WORKERS):
            tasks.append(asyncio.create_task(bot.worker(q, client)))

        for url in URLS:
            await q.put(url)

        await q.join()

        for task in tasks:
            task.cancel()

        await asyncio.gather(*tasks, return_exceptions=True)

if __name__ == "__main__":
    main = asyncio.run(main())
</code></pre>
|
python|python-asyncio|aiohttp
| 2 |
1,903,411 | 66,880,262 |
python multiprocessing start and close processes independently
|
<p>I am trying to run inference with tensorflow using multiprocessing. Each process uses 1 GPU. I have a list of files input_files[]. Every process gets one file, runs model.predict on it and writes the results to file. To move on to the next file, I need to close the process and restart it. This is because tensorflow doesn't let go of memory, so if I use the same process, I get a memory leak.</p>
<p>I have written the code below, which works. I start 5 processes, close them and start another 5. The issue is that all processes need to wait for the slowest one before they can move on. How can I start and close each process independently of the others?</p>
<p>Note that Pool.map is over input_files_small not input_files.</p>
<pre><code># file1 --> start new process --> run prediction --> close process --> file2 --> start new process --> etc.

for i in range(0, len(input_files), num_process):
    input_files_small = input_files[i:i+num_process]
    try:
        process_pool = multiprocessing.Pool(processes=num_process, initializer=init_worker, initargs=(gpu_ids))
        pool_output = process_pool.map(worker_fn, input_files_small)
    finally:
        process_pool.close()
        process_pool.join()
</code></pre>
|
<p>There is no need to re-create the processing pool over and over. First, specify <em>maxtasksperchild=1</em> when creating the pool; this results in a new process being created for each submitted task. And instead of using method <code>map</code>, use method <code>map_async</code>, which will not block. If your worker function does not return results you need, you can use <code>pool.close</code> followed by <code>pool.join()</code> to wait implicitly for these submissions to complete, as follows; otherwise use the second code variation:</p>
<pre class="lang-py prettyprint-override"><code>process_pool = multiprocessing.Pool(processes=num_process, initializer=init_worker, initargs=(gpu_ids), maxtasksperchild=1)

for i in range(0, len(input_files), num_process):
    input_files_small = input_files[i:i+num_process]
    process_pool.map_async(worker_fn, input_files_small)

# wait for all outstanding tasks to complete
process_pool.close()
process_pool.join()
</code></pre>
<p>If you need return values from <code>worker_fn</code>:</p>
<pre class="lang-py prettyprint-override"><code>process_pool = multiprocessing.Pool(processes=num_process, initializer=init_worker, initargs=(gpu_ids), maxtasksperchild=1)
results = []

for i in range(0, len(input_files), num_process):
    input_files_small = input_files[i:i+num_process]
    results.append(process_pool.map_async(worker_fn, input_files_small))

# get return values from map_async
pool_outputs = [result.get() for result in results]
# you do not need process_pool.close() and process_pool.join()
</code></pre>
<p>But, since there may be some "slow" tasks still running from an earlier invocation of <code>map_async</code> when tasks from a later invocation of <code>map_async</code> start up, some of these tasks may still have to wait to run. But at least all of your processes in the pool should stay fairly busy.</p>
<p>If you are expecting exceptions from your worker function and need to handle them in your main process, it gets more complicated.</p>
|
python|tensorflow|asynchronous|multiprocessing
| 1 |
1,903,412 | 64,154,451 |
Why i can't click this button on selenium webdriver python?
|
<p>I cannot click some buttons in the router interface. I was only able to click them using pyautogui, but that method is not practical. How can I click this button with Selenium? I will use this code to reset my IP address.</p>
<p>This is the HTML of the element I want to click:</p>
<pre><code><a href="#" class="edit" id="editBtn" title="Düzenle" onclick="editClick('ppp1.1', 'MyISP_PTM_35')"></a>
</code></pre>
<p>Html Data:
<a href="https://mega.nz/file/2XJyEbCR#xBcEtzYh8QFLWTmSfAqll2V-p-SHiaw4wEz1RAWtso0" rel="nofollow noreferrer">https://mega.nz/file/2XJyEbCR#xBcEtzYh8QFLWTmSfAqll2V-p-SHiaw4wEz1RAWtso0</a></p>
<p>I tried every method, but none worked.</p>
<pre><code>try:
    WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,'#editBtn'))).send_keys("\n")
except:
    WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,'#editBtn'))).send_keys(Keys.ENTER)

try:
    WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,'#editBtn')))[0].send_keys("\n")
except:
    WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,'#editBtn')))[0].send_keys(Keys.ENTER)

try:
    WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,'#editBtn')))[0].click()
except:
    WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,'#editBtn'))).click()
</code></pre>
<p><a href="https://i.stack.imgur.com/PLc3d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PLc3d.png" alt="enter image description here" /></a></p>
|
<p>Try this:</p>
<pre><code>link = driver.find_element_by_link_text('')
link.click()
</code></pre>
<p>Since you want to click a link, maybe this example helps you.</p>
|
python|selenium|request
| 1 |
1,903,413 | 72,309,166 |
plotly python how to show bars where y values are zero
|
<p>I would like to still show all x axis values even when the y values are zero for that bar. What am I doing wrong here?</p>
<pre><code>import pandas as pd
import plotly.express as px
import plotly.io as pio

threshold_one = ['13%', '13%', '13%', "34%", "34%", "34%", "55%", "55%", "55%"]
threshold_two = ["15%", "37.5%", "60%", "15%", "37.5%", "60%", "15%", "37.5%", "60%"]
y_values = [5500,5267,5466,345,356,375,0,0,0]

df = pd.DataFrame({'threshold_one': threshold_one, 'threshold_two': threshold_two, 'y_values': y_values})

fig = px.histogram(df, x="threshold_one", y="y_values",
                   color="threshold_two",
                   barmode = 'group')
fig.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/EPvlQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EPvlQ.png" alt="enter image description here" /></a></p>
<p>As you can see, x axis threshold one = 55% is missing from the x axis, but I would like it to still be there even though the y values are 0.</p>
<p>Thanks in advance</p>
|
<p>Use <code>px.bar</code> instead of <code>px.histogram</code>.</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
import pandas as pd
threshold_one = ['13%', '13%', '13%', "34%", "34%", "34%", "55%", "55%", "55%"]
threshold_two = ["15%", "37.5%", "60%", "15%", "37.5%", "60%", "15%", "37.5%", "60%"]
y_values = [5500,5267,5466,345,356,375,0,0,0]
df = pd.DataFrame({'threshold_one': threshold_one, 'threshold_two': threshold_two, 'y_values': y_values})
fig = px.bar(df, x="threshold_one", y="y_values",
color="threshold_two",
barmode = 'group')
fig.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/pbUBl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pbUBl.png" alt="enter image description here" /></a></p>
|
python|plotly
| 1 |
1,903,414 | 72,142,024 |
How do I replace missing values with NaN
|
<p>I am using the IMDB dataset for machine learning, and it contains a lot of missing values which are entered as '\N'. Specifically, in the startYear column, which contains the movie's release year, I want to convert the values to integers, which I'm not able to do right now. I could drop these values, but I wanted to see why they're missing first. I tried several things but had no success.</p>
<p>This is my latest attempt:</p>
<p><a href="https://i.stack.imgur.com/hDKp8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hDKp8.png" alt="My attempt" /></a></p>
|
<p>Here is a way to do it without using <code>replace</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
df_basics = pd.DataFrame({'startYear':['\\N']*78760+[2017]*18267 + [2018]*18263+[2016]*17837+[2019]*17769+['1996 ','1993 ','2000 ','2019 ','2029 ']})
print(pd.value_counts(df_basics.startYear))
df_basics.loc[df_basics.startYear == '\\N','startYear'] = np.NaN
print(pd.value_counts(df_basics.startYear, dropna=False))
</code></pre>
<p>Output:</p>
<pre><code>NaN 78760
2017 18267
2018 18263
2016 17837
2019 17769
1996 1
1993 1
2000 1
2019 1
2029 1
</code></pre>
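<p>If you do want the column as integers afterwards, one possible follow-up (sketched here on hypothetical sample data, not the real IMDB file) is <code>pd.to_numeric</code> with <code>errors='coerce'</code>, which turns the <code>'\N'</code> sentinel into NaN in one step, combined with pandas' nullable <code>Int64</code> dtype, which can hold integers alongside NaN:</p>

```python
import pandas as pd

# Hypothetical sample mimicking the IMDB startYear column
df_basics = pd.DataFrame({'startYear': ['\\N', '2017', '2018', '\\N']})

# to_numeric with errors='coerce' converts anything non-numeric (the '\N'
# sentinel) to NaN; the nullable Int64 dtype then keeps valid years as integers
df_basics['startYear'] = pd.to_numeric(df_basics['startYear'],
                                       errors='coerce').astype('Int64')
```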
|
python|pandas|missing-data
| 1 |
1,903,415 | 50,296,466 |
Python distributed tasks with multiple queues
|
<p>So the project I am working on requires a distributed task system to process CPU-intensive tasks. This is relatively straightforward: spin up Celery, throw all the tasks in a queue, and have Celery do the rest.</p>
<p>The issue I have is that every user needs their own queue, and items within each user's queue must be processed synchronously. So if there is a task in a user's queue already being processed, wait until it is finished before allowing a worker to pick up the next.</p>
<p>The closest I've come to something like this is having a fixed set of queues, and assigning them to users. Then having the users tasks picked off by celery workers fixed to a certain queue with a concurrency of 1.</p>
<p>The problem with this system is that I can't scale my workers to process a backlog of user tasks.</p>
<p>Is there a way I can configure celery to do what I want, or perhaps another task system exists that does what I want?</p>
<p><strong>Edit:</strong></p>
<p>Currently I use the following command to spawn my celery workers with a concurrency of one on a fixed set of queues</p>
<pre><code>celery multi start 4 -A app.celery -Q:1 queue_1 -Q:2 queue_2 -Q:3 queue_3 -Q:4 queue_4 --logfile=celery.log --concurrency=1
</code></pre>
<p>I then store a queue name on the user object, and when the user starts a process I queue a task to the queue stored on the user object. This gives me my synchronous tasks.</p>
<p>The downside is that when multiple users share a queue, tasks build up and never get processed.</p>
<p>I'd like to have, say, 5 workers and a queue per user object, then have the workers hop across the queues, but never have more than 1 worker on a single queue at a time.</p>
|
<p>I use <code>chain</code> (<a href="http://docs.celeryproject.org/en/latest/userguide/canvas.html#chains" rel="nofollow noreferrer">doc here</a>) to execute tasks in a specific order:</p>
<pre><code>chain = task1_task.si(account_pk) | task2_task.si(account_pk) | task3_task.si(account_pk)
chain()
</code></pre>
<p>So, for a specific user I execute task1; when it's finished I execute task2, and when that finishes, task3.
Each task will be picked up by any available worker :)</p>
<p>For stopping a chain midway:</p>
<pre><code>self.request.callbacks = None
return
</code></pre>
<p>And don't forget to bind your task :</p>
<pre><code>@app.task(bind=True)
def task2_task(self, account_pk):
</code></pre>
|
python|django|celery
| 1 |
1,903,416 | 50,431,294 |
Anaconda3 Custom Install Location: 'conda' Not Working
|
<p>Through carelessness, I have two Anaconda install locations: one on C:\Anaconda3\ (on a smallish SSD with Windows 10 installed) and one on X:\Anaconda3\ (on a large HDD with no OS installed), and I would like to keep the one on X as my only installation. I have X:\Anaconda3, X:\Anaconda3\Scripts, and X:\Anaconda3\Library\bin folders in my PATH.</p>
<p>I've noticed that I can't use any <code>conda</code> commands with my X:\ installation in the PATH variable. I get the usual "conda is not a recognized command" error. But if I change the drive of the folders in PATH to C:\, suddenly everything works great. I've also tried making my current directory be on X:\ while running these commands, and <code>conda</code> still doesn't work.</p>
<p>How might I fix this issue? There must be something I'm missing about how the PATH variable or the <code>conda</code> command works. Thanks!</p>
|
<p>@abarnert Thanks to you I managed to find the issue.</p>
<p>On looking into both install paths, I found that only the C:\ installation had <code>C:\Anaconda3\Scripts\conda.exe</code>. The X:\ installation had <code>X:\Anaconda3\Scripts\conda.exe.c~</code>. I still don't know what that file's for, but I assume it pointed to the <code>conda.exe</code> on C:. Anyway, a complete uninstall of both locations followed by a fresh install on X:\ got everything to work. I can now run commands like <code>conda env list</code> and similar using the installation on X:\</p>
<p>Thanks again!</p>
|
python|windows|path|anaconda|conda
| 1 |
1,903,417 | 26,787,972 |
Python 2.6 compatibility for rundeckrun
|
<p>I'm new to Python and struggling a bit with a piece of code. I am using rundeckrun which is an open source python client for the Rundeck API. There is one bit of code in the client that seems to be locked to python 2.7+ and I really need to get it to work on Python 2.6. I've tried searching but don't even really know what this construct is called so hard to find the 2.6 equivalent for it.</p>
<pre><code>node_attr_keys = (
'name',
'hostname',
'username',
'description',
'osArch',
'osFamily',
'osName',
'editUrl',
'remoteUrl',
)
data = {k: getattr(self, k)
for k in node_attr_keys if getattr(self, k, None) is not None}
</code></pre>
<p>The specific error is:</p>
<pre><code>File "/usr/lib/python2.6/site-packages/rundeckrun-0.1.11-py2.6.egg/rundeck/client.py", line 21, in &lt;module&gt;
    from .api import RundeckApiTolerant, RundeckApi, RundeckNode
File "/usr/lib/python2.6/site-packages/rundeckrun-0.1.11-py2.6.egg/rundeck/api.py", line 135
    for k in node_attr_keys if getattr(self, k, None) is not None}
                                                                 ^
SyntaxError: invalid syntax</code></pre>
|
<p>That is a dictionary comprehension. They are not supported in Python 2.6. The code you provided is roughly equivalent to this code:</p>
<pre><code>node_attr_keys = (
# Same as your code, omitted for brevity
)
data = {}
for k in node_attr_keys:
if getattr(self, k, None) is not None:
data[k] = getattr(self, k)
</code></pre>
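<p>If you prefer to keep it as a single expression, passing a generator expression to <code>dict()</code> also works on Python 2.6 (and 3.x). A sketch with a hypothetical stand-in object, since rundeckrun's real class has more attributes:</p>

```python
# Hypothetical stand-in for the rundeckrun node object
class Node(object):
    name = 'node1'
    hostname = 'example.com'
    username = None  # attributes that are None get filtered out

node_attr_keys = ('name', 'hostname', 'username')
obj = Node()

# dict() + generator expression: the 2.6-compatible one-liner equivalent
# of the dict comprehension
data = dict((k, getattr(obj, k))
            for k in node_attr_keys
            if getattr(obj, k, None) is not None)
```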
|
python|python-2.7|python-2.6|rundeck
| 1 |
1,903,418 | 61,284,659 |
Python: How can I read csv and clean my data in a loop
|
<p>I have the following code below which reads in csv and cleans them for 5 different financial assets. The code seems repetitive and inefficient. Is there a way I can do this in a <strong>loop</strong> instead of having to declare the 5 assets variables one-by-one?</p>
<pre><code>import pandas as pd
# Input symbols
stock_symbol = "VTI"
ltb_symbol = "TLT"
stb_symbol = "IEI"
gld_symbol = "GLD"
comms_symbol = "DBC"
session_type = "open" # open, high, low or close
# Read csv for all symbols in Dataframes
stock_df = pd.read_csv("BATS_{}, 1D.csv".format(stock_symbol))
ltb_df = pd.read_csv("BATS_{}, 1D.csv".format(ltb_symbol))
stb_df = pd.read_csv("BATS_{}, 1D.csv".format(stb_symbol))
gld_df = pd.read_csv("BATS_{}, 1D.csv".format(gld_symbol))
comms_df = pd.read_csv("BATS_{}, 1D.csv".format(comms_symbol))
#Clean Dataframes
stock_df = stock_df.rename({'{}'.format(session_type): '{}'.format(stock_symbol)}, axis=1)\
.loc[:,['time','{}'.format(stock_symbol)]]
ltb_df = ltb_df.rename({'{}'.format(session_type): '{}'.format(ltb_symbol)}, axis=1)\
.loc[:,['time','{}'.format(ltb_symbol)]]
stb_df = stb_df.rename({'{}'.format(session_type): '{}'.format(stb_symbol)}, axis=1)\
.loc[:,['time','{}'.format(stb_symbol)]]
gld_df = gld_df.rename({'{}'.format(session_type): '{}'.format(gld_symbol)}, axis=1)\
.loc[:,['time','{}'.format(gld_symbol)]]
comms_df = comms_df.rename({'{}'.format(session_type): '{}'.format(comms_symbol)}, axis=1)\
.loc[:,['time','{}'.format(comms_symbol)]]
</code></pre>
|
<p>Create dictionary of <code>DataFrames</code>:</p>
<pre><code>symbols = [stock_symbol,ltb_symbol,stb_symbol,gld_symbol,comms_symbol]
session_type = "open" # open, high, low or close
# Read csv for all symbols in Dataframes
dfs = {s: pd.read_csv("BATS_{}, 1D.csv".format(s))
          .rename({session_type: s}, axis=1)
          .loc[:, ['time', s]]
       for s in symbols}
</code></pre>
<p>You can also filter only 2 columns in <code>read_csv</code> by <code>usecols</code> parameter:</p>
<pre><code>dfs = {s: pd.read_csv("BATS_{}, 1D.csv".format(s), usecols=['time',session_type]).rename({session_type: stock_symbol}, axis=1) for s in symbols}
</code></pre>
<p>And finally, each DataFrame can be selected from the dictionary by its key:</p>
<pre><code>print (dfs['VTI'])
</code></pre>
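<p>One convenience of the dictionary form: once each symbol's frame has the shape <code>['time', &lt;symbol&gt;]</code>, they can all be merged into a single wide frame on <code>'time'</code>. A sketch on hypothetical miniature data (not the real CSV files):</p>

```python
import pandas as pd
from functools import reduce

# Hypothetical miniature frames with the ['time', <symbol>] shape
# that the cleaning step produces
dfs = {
    'VTI': pd.DataFrame({'time': [1, 2], 'VTI': [150.0, 151.0]}),
    'TLT': pd.DataFrame({'time': [1, 2], 'TLT': [140.0, 141.0]}),
}

# Merge every symbol's frame on 'time' into one wide frame
wide = reduce(lambda left, right: left.merge(right, on='time'), dfs.values())
```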
|
python|pandas
| 3 |
1,903,419 | 61,383,004 |
Server-Sent Events in Flask work locally but not on production (linode)
|
<p>I'm trying to get a progress bar to work in Flask. I use <a href="https://html.spec.whatwg.org/multipage/server-sent-events.html" rel="nofollow noreferrer">Server-Sent Events</a> for that. When I run it on the local development server, everything works well: I can see the numbers being added in real time in the /progress window, and the progress bar works with no problems.</p>
<p>But if I run it on a Linux server (Linode), the browser window hangs for 10 seconds and after that the progress bar jumps to 100. I am a beginner and do not understand why it works on the local machine but not on the remote server. Could somebody please explain, and also suggest a practical solution?</p>
<p>Flask - app.py</p>
<pre><code>@app.route('/progress')
def progress():
def progress_func():
x = 0
while x < 100:
time.sleep(1)
x = x + 10
yield 'data:' + str(x) + "\n\n"
return Response(progress_func(), mimetype='text/event-stream')
</code></pre>
<p>js</p>
<pre><code>var source = new EventSource("/progress");
source.onmessage = function(event) {
$('.progress-bar').css('width', event.data+'%').attr('aria-valuenow', event.data);
};
</code></pre>
<p>index.html</p>
<pre><code><div>
<div class="progress" style="width: 100%; height: 6px;">
<div class="progress-bar bg-success" role="progressbar" style="width: 6px" aria-valuenow="0" aria-valuemin="0" aria-valuemax="100"></div>
</div>
</code></pre>
|
<p>In my experience that problem can be caused by the reverse proxy between flask and the frontend.</p>
<p>If you use nginx you need to set <code>proxy_buffering</code> to <code>off</code> in order to be able to use SSE</p>
<p>EDIT:</p>
<p>While looking at <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering" rel="nofollow noreferrer">http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering</a> I noticed that you can achieve the same result by setting the <code>X-Accel-Buffering</code> header to <code>no</code> in the flask response. This solution is better since it's limited to this specific response.</p>
|
javascript|python|flask|server
| 1 |
1,903,420 | 57,909,348 |
How to use localized times for plotting? The datetime series is localized to my timezone but plotting is still done with the original times (Pandas)
|
<p>Dataframe <em>df</em> has a column "<em>start</em>" with a dtype of datetime64[ns]:</p>
<pre><code>df.head()
start energy
0 2017-09-06 09:38:24 4787.329
1 2017-09-06 14:30:02 3448.111
2 2017-09-07 08:49:46 6748.579
3 2017-09-07 07:14:35 4216.576
4 2017-09-07 13:21:49 5695.689
</code></pre>
<p>Next, the column is localized:</p>
<pre><code>df["start"] = df["start"].dt.tz_localize(tz = "EET")
start energy
0 2017-09-06 09:38:24+03:00 4787.329
1 2017-09-06 14:30:02+03:00 3448.111
2 2017-09-07 08:49:46+03:00 6748.579
3 2017-09-07 07:14:35+03:00 4216.576
4 2017-09-07 13:21:49+03:00 5695.689
</code></pre>
<p><strong>Desired outcome:</strong>
I want to use the localized times for plotting (for example 12:38:24 in the first line, instead of the original 09:38:24). </p>
<p><strong>Problem:</strong>
Any plotting that I do uses the original times (for example 09:38:24 in the first line).</p>
<p>How do I plot (and do any other further analysis) using only the localized times (the times in my timezone, EET)? </p>
<p>I would just add timedelta(hours = 3) to the series, but it won't help, because due to daylight saving time the difference is only 2 hours for part of the year.</p>
<p>I also don't want to make the "start" column into index.</p>
|
<p>I found the answer:
First, the column must be localized to UTC. Then, it has to be converted into EET.</p>
<p>So all in one line:</p>
<pre><code>df["start"] = df["start"].dt.tz_localize("UTC").dt.tz_convert(tz = "EET")
</code></pre>
<p>Result:</p>
<pre><code> start energy
0 2017-09-06 12:38:24+03:00 4787.329
1 2017-09-06 17:30:02+03:00 3448.111
2 2017-09-07 11:49:46+03:00 6748.579
3 2017-09-07 10:14:35+03:00 4216.576
4 2017-09-07 16:21:49+03:00 5695.689
</code></pre>
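<p>The localize-then-convert idiom can be checked on a single timestamp: 09:38 UTC becomes 12:38 in EET's summer (UTC+3) offset, matching the first row above.</p>

```python
import pandas as pd

# One timestamp from the question's data, treated as UTC and converted to EET
s = pd.Series(pd.to_datetime(['2017-09-06 09:38:24']))
s_eet = s.dt.tz_localize('UTC').dt.tz_convert('EET')
```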
|
python|pandas|datetime
| 1 |
1,903,421 | 18,662,511 |
Flask-Frozen: How to generate a 404 page?
|
<p>I am trying to generate a 404 page for my Flask-Frozen application. Currently, this is my only error-handling logic in views.py</p>
<pre><code>@app.errorhandler(404)
def page_not_found(e):
return render_template('404.html'), 404
</code></pre>
<p>Apparently, that isn't enough code to do the trick. Any suggestions?</p>
|
<p>There are two things you need to do:</p>
<ol>
<li><p>Add a call to <a href="http://pythonhosted.org/Frozen-Flask/#flask_frozen.Freezer.register_generator" rel="nofollow"><code>freezer.register_generator</code></a> that returns at least one URL that will result in your 404 page:</p>
<pre><code>@freezer.register_generator
def error_handlers():
yield "/404"
</code></pre></li>
<li><p>Setup your web server to respond to 404 errors with your static page (the example is for Apache):</p>
<pre><code>ErrorDocument 404 /404.html
</code></pre></li>
</ol>
|
python|flask
| 4 |
1,903,422 | 18,265,376 |
Why I can log in amazon website using python mechanize, but not requests or urllib2
|
<p>I can use the following piece of python code found from <a href="http://chase-seibert.github.io/blog/2011/01/15/backup-your-amazon-order-history-with-python.html" rel="noreferrer">here</a> to log into amazon.com:</p>
<pre><code>import mechanize
br = mechanize.Browser()
br.set_handle_robots(False)
br.addheaders = [("User-agent", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]
sign_in = br.open('https://www.amazon.com/gp/sign-in.html')
br.select_form(name="sign-in")
br["email"] = 'test@test.com'
br["password"] = 'test4test'
logged_in = br.submit()
orders_html = br.open("https://www.amazon.com/gp/css/history/orders/view.html?orderFilter=year-%s&startAtIndex=1000" % 2013)
</code></pre>
<p>But following two pieces using requests module and urllib2 do not work. </p>
<pre><code>import requests
import sys
username = "test@test.com"
password = "test4test"
login_data = ({
    'email' : username,
    'password' : password,
'flex_password': 'true'})
url = 'https://www.amazon.com/gp/sign-in.html'
agent = {'User-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.57 Safari/537.1'}
session = requests.session(config={'verbose': sys.stderr}, headers = agent)
r = session.get('http://www.amazon.com')
r1 = session.post(url, data=login_data, cookies=r.cookies)
r2 = session.post("https://www.amazon.com/gp/css/history/orders/view.html?orderFilter=year-2013&startAtIndex=1000", cookies = r1.cookies)
</code></pre>
<h2>#</h2>
<pre><code>import urllib2
import urllib
import cookielib
amazon_username = "test@test.com"
amazon_password = "test4test"
url = 'https://www.amazon.com/gp/sign-in.html'
cookie = cookielib.CookieJar()
login_data = urllib.urlencode({'email' : amazon_username, 'password' : amazon_password,})
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))
opener.addheaders = [('User-agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.57 Safari/537.1')]
opener.open('www.amazon.com')
response = opener.open(url, login_data)
response = opener.open("https://www.amazon.com/gp/css/history/orders/view.html?orderFilter=year-%s&startAtIndex=1000" % 2013, login_data)
</code></pre>
<p>What did I do wrong in posting the Amazon login form? This is the first time I have posted a form. Any help is appreciated.</p>
<p>I prefer to use urllib2 or requests because all my other code uses these two modules.</p>
<p>Moreover, can anybody comment on the speed and performance differences between mechanize, requests, and urllib2, and any other advantages of mechanize over the other two?</p>
<p>~~~~~~~~~~~New~~~~~~~~~~~~
Following C.C.'s instructions, I can now log in with urllib2. But when I try to do the same with requests, it still does not work. Can anyone give me a clue?</p>
<pre><code>import requests
import sys
fb_username = "test@test.com"
fb_password = "xxxx"
login_data = ({
'email' : fb_username,
'password' : fb_password,
'action': 'sign-in'})
url = 'https://www.amazon.com/gp/sign-in.html'
agent = {'User-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.57 Safari/537.1'}
session = requests.session(config={'verbose': sys.stderr}, headers = agent)
r = session.get(url)
r1 = session.post('https://www.amazon.com/gp/flex/sign-in/select.html', data=login_data, cookies=r.cookies)
b = r1.text
</code></pre>
|
<p>Regarding your <code>urllib2</code> approach, you are missing 2 things.</p>
<p>First, if you look at the source of <code>sign-in.html</code>, it shows that
</p>
<pre><code><form name="sign-in" id="sign-in" action="/gp/flex/sign-in/select.html" method="POST">
</code></pre>
<p>Meaning the form should be submitted to <code>select.html</code>.</p>
<p>Second, besides email & password, you also need to select whether you are an existing user or not:
</p>
<pre><code><input id="newCust" type="radio" name="action" value="new-user"...>
...
<input id="returningCust" type="radio" name="action" value="sign-in"...>
</code></pre>
<p>It should look something like this:</p>
<pre><code>import cookielib
import urllib
import urllib2
amazon_username = ...
amazon_password = ...
login_data = urllib.urlencode({'action': 'sign-in',
'email': amazon_username,
'password': amazon_password,
})
cookie = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))
opener.addheaders = [('User-agent', ...)]
response = opener.open('https://www.amazon.com/gp/sign-in.html')
print(response.getcode())
response = opener.open('https://www.amazon.com/gp/flex/sign-in/select.html', login_data)
print(response.getcode())
response = opener.open("https://www.amazon.com/") # it should show that you are logged in
print(response.getcode())
</code></pre>
|
python|web|authentication|mechanize
| 2 |
1,903,423 | 71,622,555 |
Change conditional class instance of child class
|
<p>I have a parent class called <code>Organism</code> which has a class attribute called <code>isIntelligent</code> and I create a variable called <code>fitness</code> based on the value of the class attribute <code>isIntelligent</code> likewise:</p>
<pre class="lang-py prettyprint-override"><code>class Organism:
isIntelligent = False
def __init__(self):
#some variables defined here
if isIntelligent:
@property
def fitness(self):
return 0
</code></pre>
<p>Now, I want to create two child classes, for example <code>Rabbit</code> and <code>Fox</code>. I want Rabbits to be intelligent and Foxes not to be, and accordingly they should have the <code>fitness</code> property.</p>
<p>I tried to change the value of the <code>isIntelligent</code> variable inside the child classes likewise:</p>
<pre class="lang-py prettyprint-override"><code>class Rabbit(Organism):
isIntelligent = True
class Fox(Organism):
isIntelligent = False
</code></pre>
<p>However, when I run the code, I expect the Rabbit objects to have the <code>fitness</code> variable, but they do not.</p>
<p>What would be the correct way to go about this ?</p>
|
<p>Code that isn't in a method is executed when the class is defined, so it can't be dependent on the instance. You need to put the condition inside the property method.</p>
<pre><code>@property
def fitness(self):
if self.isIntelligent:
return 0
else:
return 1
</code></pre>
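<p>Putting that property together with the subclasses from the question, every instance now exposes <code>fitness</code>, and the value follows the class attribute:</p>

```python
class Organism(object):
    isIntelligent = False

    @property
    def fitness(self):
        # Evaluated per instance at access time, so subclass overrides
        # of isIntelligent take effect
        if self.isIntelligent:
            return 0
        else:
            return 1

class Rabbit(Organism):
    isIntelligent = True

class Fox(Organism):
    isIntelligent = False
```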
|
python|class|inheritance
| 1 |
1,903,424 | 71,569,093 |
KeyError: -1 from trying to combine two appended dfs
|
<p>I am trying to combine two appended CSV files, but I keep getting a KeyError: -1 in return. I'm not sure why, as I have been following a coding tutorial and it works perfectly fine for the author. The two CSV file groups have different formats, so this code removes the 2 empty columns from one of the formats.</p>
<pre><code>import glob
import os
import pandas as pd
appended_data = []
for f in glob.glob ('C:\\Users\\xxx\\PycharmProjects\\xxx\\raw\\*.csv'):
df = pd.read_csv(f, header = None)
appended_data.append(df)
df = pd.concat(appended_data)
df.rename(columns={0: "volume",
1: "weighted volume",
2: "open",
3: "close",
4: "high",
5: "low",
6: "timestamp",
7: "transactions",
8: "date"},
inplace=True)
appended_data_new = []
for f in glob.glob ('C:\\Users\\xxxxxx\\PycharmProjects\\xxxxxx\\*.csv'):
df_new = pd.read_csv(f, header = None)
appended_data_new.append(df_new)
df_new = pd.concat(appended_data_new)
df_new.dropna(axis=1, inplace = True)
df_new.rename(columns={0: "volume",
1: "weighted volume",
2: "open",
3: "close",
4: "high",
5: "low",
6: "timestamp",
7: "transactions",
10: "date"},
inplace=True)
df_final = pd.concat([df, df_new])
path = 'C:\\Users\\xxxxxx\\PycharmProjects\\xxxxxx\\check point\\'
path = path + 'cp1-'+ df_final.iloc[-1][-1][0:10] +'.csv'
df_final.to_csv(path, header=True, index=None)
</code></pre>
<p>The following traceback error occurs:</p>
<pre><code>> Traceback (most recent call last): File
> "C:/Users/xxxxxx/PycharmProjects/xxxxxx/Clean_data.py", line 46, in
> <module>
> path = 'C:\\Users\\xxxxxx\\PycharmProjects\\xxxxxx\\check point\\' + 'cp1-'+ df_final.iloc[-1][-1][0:10] +'.csv' File
> "C:\Users\xxxxxx\PycharmProjects\xxxxxx\venv\lib\site-packages\pandas\core\series.py",
> line 942, in __getitem__
> return self._get_value(key) File "C:\Users\xxxxxx\PycharmProjects\xxxxxx\venv\lib\site-packages\pandas\core\series.py",
> line 1051, in _get_value
> loc = self.index.get_loc(label) File "C:\Users\xxxxxx\PycharmProjects\xxxxxx\venv\lib\site-packages\pandas\core\indexes\base.py",
> line 3363, in get_loc
> raise KeyError(key) from err KeyError: -1
>
> Process finished with exit code 1
</code></pre>
<p><a href="https://i.stack.imgur.com/9Fwju.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Fwju.png" alt="Format 2" /></a></p>
<p><a href="https://i.stack.imgur.com/8HHdl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8HHdl.png" alt="Format 1" /></a></p>
|
<p>The indexing into your data frame elements to make the file name is causing the error. In this line:</p>
<pre><code>path = path + 'cp1-'+ df_final.iloc[-1][-1][0:10] +'.csv'
</code></pre>
<p>To get the last date from your data frame using <code>iloc</code>, you need to put both the row index and the column index inside the brackets.</p>
<pre><code>df_final.iloc[-1, -1]
</code></pre>
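<p>A minimal demonstration on hypothetical data with the date in the last column, as in the question:</p>

```python
import pandas as pd

# Hypothetical frame: the date string sits in the last column
df_final = pd.DataFrame({'volume': [100, 200],
                         'date': ['2020-01-01', '2020-02-01']})

# Row index and column index both go inside the brackets:
# last row, last column
last_date = df_final.iloc[-1, -1]
```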
|
python|dataframe
| 1 |
1,903,425 | 55,442,972 |
Random Module Methods Python
|
<p>How can I search for all the random module methods in python?
I tried with this code:</p>
<pre><code>def random():
import random
random.helpm
random()
</code></pre>
<p>but it is not working.</p>
|
<p>Once you've imported the module, you can just do:</p>
<p><code>help(modulename)</code></p>
<p>...to get the docs on all the functions at once, interactively. Or you can use:</p>
<p><code>dir(modulename)</code></p>
<p>...to simply list the names of all the functions and variables defined in the module.</p>
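<p>For example, with the <code>random</code> module itself:</p>

```python
import random

# dir() returns all attribute names the module defines; the public
# functions are the ones that don't start with an underscore
public_names = [name for name in dir(random) if not name.startswith('_')]
```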
|
python|python-2.7|random
| 1 |
1,903,426 | 55,371,246 |
Using Python with pygerrit2 Library to make API call to Gerrit From Power BI Desktop
|
<p>I've recently started using Power BI. I am trying to get data from the Gerrit REST API using Python. The following code works fine when I run it locally on my machine.</p>
<pre><code> from requests.auth import HTTPDigestAuth
from pygerrit2.rest import GerritRestAPI
auth = HTTPDigestAuth('####', '##############')
rest = GerritRestAPI(url='https://gerrit.****.com', auth=auth)
changes = rest.get("/projects/?d")
</code></pre>
<p>In Power BI it doesn't cause any errors, but there are no results in the resulting Navigator Pane.</p>
<p>This seems to be the same problem outlined in this forum <a href="https://community.powerbi.com/t5/Desktop/Load-JSON-as-source-via-Python/td-p/485375" rel="nofollow noreferrer">https://community.powerbi.com/t5/Desktop/Load-JSON-as-source-via-Python/td-p/485375</a></p>
<p>But I don't see any real resolution.</p>
<p>Is there any other way I can accomplish this?</p>
|
<p>I think you must change from:</p>
<pre><code>rest = GerritRestAPI(url='https://gerrit.****.com', auth=auth)
</code></pre>
<p>To:</p>
<pre><code>rest = GerritRestAPI(url='https://gerrit.****.com/a', auth=auth)
</code></pre>
<p>Without the "/a" the authentication doesn't work and you're getting an empty project list.</p>
<p>See more details about the authentication in the Gerrit documentation <a href="https://gerrit-review.googlesource.com/Documentation/rest-api.html#authentication" rel="nofollow noreferrer">here</a>.</p>
|
python-3.x|rest|powerbi|gerrit
| 0 |
1,903,427 | 57,297,429 |
How to populate list in main window based on event in Dialog
|
<p>I am using PyQt5, where I am trying to create teams and work on a league system.</p>
<p>I have created Action Buttons which open Dialog boxes.
I want to populate certain lists from the database based on a team name I choose from my dialog window.</p>
<p>I think I am stuck, because I cannot understand how to communicate between the two.</p>
<p>When I try to add a new team, I want that all lists in my main window get appropriately filled. But how do I pass this information from the dialog box to the main window and also close the dialog box immediately after that?</p>
<p>Here is the code:</p>
<pre><code>import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtWidgets import QInputDialog, QLineEdit, QDialog, QWidget, QPushButton, QHBoxLayout, QVBoxLayout, QApplication, QComboBox
import sqlite3
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
#removed because too big
#full code in link
def setupEvents(self, MainWindow):
self.actionNew_Team.triggered.connect(self.newTeam)
self.actionOpen_Team.triggered.connect(self.openTeam)
def newTeam(self, MainWindow):
n = NewTeamDialog()
n.exec_()
def openTeam(self, MainWindow):
o = OpenTeamDialog()
o.exec_()
def saveTeam(self, MainWindow):
pass
class NewTeamDialog(QDialog):
def __init__(self):
super(NewTeamDialog, self).__init__()
self.setWindowTitle("Create New Team")
self.setFixedWidth(300)
self.setFixedHeight(100)
self.nameInput = QLineEdit()
self.nameInput.setPlaceholderText("Enter Team Name")
self.addBtn = QPushButton()
self.addBtn.setText("Add Team")
self.addBtn.clicked.connect(self.addTeam)
layout = QVBoxLayout()
layout.addWidget(self.nameInput)
layout.addWidget(self.addBtn)
self.setLayout(layout)
def addTeam(self):
name = self.nameInput.text()
conn = sqlite3.connect("example.db")
c = conn.cursor()
c.execute('SELECT * FROM batsmen')
print(c.fetchall())
conn.close()
self.close()
</code></pre>
<p>LINK: <a href="https://www.paste.org/99817" rel="nofollow noreferrer">https://www.paste.org/99817</a></p>
|
<p>Is this what you are looking for?</p>
<p>You are creating the <code>OpenTeamDialog</code> and <code>NewTeamDialog</code> classes through methods, but the dialogs don't know anything about the <code>Window</code> class so you must pass it as a parameter when initializing them so that you can access all of its widgets.</p>
<p>This is assuming that your database is the same, meaning there are the following tables:
ALLROUNDERS, BATSMEN, BOWLERS, WICKETKEEPER, and a column called TEAM.</p>
<p>Also, I am setting up the UI in another class, so I would delete anything besides the UI part that you got from Qt Designer in the Ui_MainWindow class.</p>
<p>There is most likely a better way to implement this but this is just a crude design based on your needs.</p>
<pre><code>class Window(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setupUi(self)
        self.setupEvents()

    def setupEvents(self):
        self.actionNew_Team.triggered.connect(self.newTeam)
        self.actionOpen_Team.triggered.connect(self.openTeam)

    def newTeam(self):
        n = NewTeamDialog(self)
        n.exec_()

    def openTeam(self):
        o = OpenTeamDialog(self)
        o.exec_()

    def saveTeam(self):
        pass


class NewTeamDialog(QtWidgets.QDialog):
    def __init__(self, window, parent=None):
        super(NewTeamDialog, self).__init__(parent)
        self.setAttribute(QtCore.Qt.WA_DeleteOnClose)
        self.window = window
        self.setWindowTitle("Create New Team")
        self.setFixedWidth(300)
        self.setFixedHeight(100)
        self.dropDown_TeamType = QtWidgets.QComboBox()
        # Added dropdown to choose which team goes in the team type
        self.dropDown_TeamType.addItems(["ALLROUNDERS", "BATSMEN", "BOWLERS", "WICKETKEEPER"])
        self.nameInput = QtWidgets.QLineEdit()
        self.nameInput.setPlaceholderText("Enter Team Name")
        self.addBtn = QtWidgets.QPushButton()
        self.addBtn.setText("Add Team")
        self.addBtn.clicked.connect(self.addTeam)
        layout = QtWidgets.QVBoxLayout()
        layout.addWidget(self.dropDown_TeamType)
        layout.addWidget(self.nameInput)
        layout.addWidget(self.addBtn)
        self.setLayout(layout)

    def addTeam(self):
        name = self.nameInput.text()
        team_type = self.dropDown_TeamType.currentText()
        conn = sqlite3.connect("example.db")
        c = conn.cursor()
        # adds team to the database using the currently selected dropdown item
        c.execute("SELECT TEAM FROM {0} WHERE TEAM=?;".format(team_type), (name.title(),))
        exists = c.fetchall()
        if not exists:
            c.execute("INSERT INTO {0} VALUES('{1}');".format(team_type, name.title()))
            conn.commit()
        conn.close()
        self.close()
        if team_type == "BATSMEN":
            item = QtWidgets.QListWidgetItem(name)
            self.window.listWidget_3.addItem(item)
        elif team_type == "ALLROUNDERS":
            item = QtWidgets.QListWidgetItem(name)
            self.window.listWidget_4.addItem(item)
        elif team_type == "BOWLERS":
            item = QtWidgets.QListWidgetItem(name)
            self.window.listWidget_5.addItem(item)
        elif team_type == "WICKETKEEPER":
            item = QtWidgets.QListWidgetItem(name)
            self.window.listWidget_6.addItem(item)


class OpenTeamDialog(QtWidgets.QDialog):
    def __init__(self, window, parent=None):
        super(OpenTeamDialog, self).__init__(parent)
        self.setAttribute(QtCore.Qt.WA_DeleteOnClose)
        self.window = window
        self.setWindowTitle("Open Saved Team")
        self.setFixedWidth(300)
        self.setFixedHeight(100)
        self.dropDown_TeamType = QtWidgets.QComboBox()
        self.dropDown_TeamType.addItems(["ALLROUNDERS", "BATSMEN", "BOWLERS", "WICKETKEEPER"])
        self.dropDown = QtWidgets.QComboBox()
        self.dropDown_TeamType.currentIndexChanged.connect(self.itemChanged)
        self.addBtn = QtWidgets.QPushButton()
        self.addBtn.setText("Add Team")
        self.addBtn.clicked.connect(self.openTeam)
        layout = QtWidgets.QVBoxLayout()
        layout.addWidget(self.dropDown_TeamType)
        layout.addWidget(self.dropDown)
        layout.addWidget(self.addBtn)
        self.setLayout(layout)
        conn = sqlite3.connect("example.db")
        conn.row_factory = lambda cursor, row: row[0]
        c = conn.cursor()
        c.execute("SELECT TEAM FROM ALLROUNDERS")
        result = c.fetchall()
        self.dropDown.addItems(result)
        conn.close()

    def itemChanged(self):
        # refills the dropdown from the database 'TEAM' column whenever the team type changes
        team_type = self.dropDown_TeamType.currentText()
        self.dropDown.clear()
        conn = sqlite3.connect("example.db")
        conn.row_factory = lambda cursor, row: row[0]
        c = conn.cursor()
        c.execute("SELECT TEAM FROM {0}".format(team_type))
        result = c.fetchall()
        self.dropDown.addItems(result)
        conn.close()

    def openTeam(self):
        team_type = self.dropDown_TeamType.currentText()
        team_name = self.dropDown.currentText()
        self.close()
        if team_type == "BATSMEN":
            item = QtWidgets.QListWidgetItem(team_name)
            self.window.listWidget_3.addItem(item)
        elif team_type == "ALLROUNDERS":
            item = QtWidgets.QListWidgetItem(team_name)
            self.window.listWidget_4.addItem(item)
        elif team_type == "BOWLERS":
            item = QtWidgets.QListWidgetItem(team_name)
            self.window.listWidget_5.addItem(item)
        elif team_type == "WICKETKEEPER":
            item = QtWidgets.QListWidgetItem(team_name)
            self.window.listWidget_6.addItem(item)


class EvaluateDialog(QtWidgets.QDialog):
    pass


if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = Window()
    MainWindow.show()
    sys.exit(app.exec_())
</code></pre>
<p>Here is a .py file you can use to create the same database:</p>
<pre><code>import sqlite3
def createTables():
    connection = sqlite3.connect("example.db")
    connection.execute("CREATE TABLE ALLROUNDERS(TEAM TEXT NOT NULL)")
    connection.execute("CREATE TABLE BATSMEN(TEAM TEXT NOT NULL)")
    connection.execute("CREATE TABLE BOWLERS(TEAM TEXT NOT NULL)")
    connection.execute("CREATE TABLE WICKETKEEPER(TEAM TEXT NOT NULL)")
    connection.execute("INSERT INTO ALLROUNDERS VALUES(?)", ('Empty',))
    connection.execute("INSERT INTO BATSMEN VALUES(?)", ('Empty',))
    connection.execute("INSERT INTO BOWLERS VALUES(?)", ('Empty',))
    connection.execute("INSERT INTO WICKETKEEPER VALUES(?)", ('Empty',))
    connection.commit()
    result = connection.execute("SELECT * FROM BATSMEN")
    connection.close()


createTables()
</code></pre>
|
python|pyqt|pyqt5
| 0 |
1,903,428 | 57,605,074 |
Downloading JavaScript-loaded audio using Python
|
<p>I'm trying to write a script to automate the downloading of english audio files from a website, using Python. </p>
<p>The audio plays/loads on click, but I don't know how to "capture" the file as it loads and download it. I don't know JavaScript.</p>
<p>The website: <a href="https://context.reverso.net/traduzione/inglese-italiano/pull+the+rug" rel="nofollow noreferrer">https://context.reverso.net/traduzione/inglese-italiano/pull+the+rug</a></p>
<p>For example, the first play button:</p>
<pre><code><button data-id="OPENSUBTITLES-2018.EN-IT_13515521" class="voice icon
stopped" title="Pronuncia" data-lang="en"></button>
</code></pre>
<p>loads this URL:</p>
<p><a href="https://voice2.reverso.net/RestPronunciation.svc/v1/output=json/GetVoiceStream/voiceName=Heather22k?inputText=cHVsbCB0aGUgcnVnCgoKWW91IGp1c3QgbGV0IGhlciBrbm93IHRoYXQgeW91IGNhbiBwdWxsIHRoZSBydWcu" rel="nofollow noreferrer">https://voice2.reverso.net/RestPronunciation.svc/v1/output=json/GetVoiceStream/voiceName=Heather22k?inputText=cHVsbCB0aGUgcnVnCgoKWW91IGp1c3QgbGV0IGhlciBrbm93IHRoYXQgeW91IGNhbiBwdWxsIHRoZSBydWcu</a></p>
<p>Copying that into the browser, an audio file loads that can be downloaded manually. I want to download it automatically.</p>
<p>Thanks!</p>
|
<p>Resolved. The URL's <code>inputText</code> parameter is generated by base64-encoding the text of the translation. </p>
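<p>For example, a rough sketch of building such a URL (the payload layout — phrase, blank lines, then the example sentence — is an assumption from decoding the sample link by hand, and <code>voiceName=Heather22k</code> is taken from the question; the site may differ in details):</p>

```python
import base64

phrase = "pull the rug"
sentence = "You just let her know that you can pull the rug."
payload = phrase + "\n\n\n" + sentence  # assumed layout of the decoded inputText

# base64-encode the payload and splice it into the pronunciation URL
token = base64.b64encode(payload.encode("utf-8")).decode("ascii")
url = ("https://voice2.reverso.net/RestPronunciation.svc/v1/output=json"
       "/GetVoiceStream/voiceName=Heather22k?inputText=" + token)

# round trip: decoding the token recovers the original text
assert base64.b64decode(token).decode("utf-8") == payload
```

The resulting URL can then be fetched with <code>requests</code> and the response body saved as an audio file.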
|
javascript|python-3.x|selenium|web-scraping|python-requests
| 2 |
1,903,429 | 42,572,378 |
selenium scraping returns empty string after first few elements
|
<p>I am scraping a website using selenium in python. The xpath is able to find the 20 elements, which contain the search results. However, the content is available only for the first 6 elements, and the rest has empty strings. This is true for all the pages of the results</p>
<p>The xpath used:</p>
<pre><code>results = driver.find_elements_by_xpath("//li[contains(@class, 'search-result search-result__occluded-item ember-view')]")
</code></pre>
<p>xpath finds 20 elements in chrome</p>
<p><a href="https://i.stack.imgur.com/gcy4k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gcy4k.png" alt="enter image description here"></a></p>
<p>Text inside the results</p>
<pre><code>[tt.text for tt in results]
</code></pre>
<p>anonymized output:</p>
<pre><code>['Abcddwedwada',
'Asefdasdfaca',
'Asdaafcascac',
'Asdadaacjkhi',
'Sfskjfbsfvbkd',
'Fjsbfksjnsvas',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'']
</code></pre>
<p>I have tried extracting the id of the 20 elements and used <code>driver.find_element_by_id</code>, but still I get empty strings after the first 6 elements.</p>
|
<p>Try this:</p>
<pre><code>[str(tt.text) for tt in results if str(tt.text) != '']
</code></pre>
<p>OR</p>
<pre><code> [tt.text for tt in results if len(tt.text) > 0]
</code></pre>
|
python|selenium|xpath|automated-tests
| 1 |
1,903,430 | 59,475,986 |
I have tried many methods and I couldn't find solution. So how to convert Python file to executables?
|
<ul>
<li>Python : 3.5</li>
<li>PYinstaller : 3.5</li>
<li>Win64 </li>
<li>cx_Freeze : 6.0</li>
</ul>
<p>According to the above information, I have tried to convert a Python project to an exe, but it does not work. </p>
<p>First I have tried pyinstaller but the process throw some error:</p>
<p><img src="https://i.stack.imgur.com/XUIUE.jpg" alt="when trying pyinstaller"></p>
<p>After that, I tried cx_Freeze and it works, but the exe runs on some computers and not on every computer with the same platform. </p>
<p><img src="https://i.stack.imgur.com/lIM6o.png" alt="when trying cx_Freeze"></p>
<p>I don't know what I can do. I looked for google and stackoverflow but there are unsolved problems or I couldn't see the solution.</p>
<p>Later I have tried to change python version but doesn't work again. Computers that have tried to running the exe, have same OS platform, I'm sure.</p>
<p>By the way, <a href="https://i.stack.imgur.com/o1iP8.png" rel="nofollow noreferrer">if you receive the following cx_Freeze error</a>, you can resolve it like this: </p>
<pre><code>Build\exe.win-amd64-3.7\lib\scipy\spatial\cKDTree.cp37-win_amd64
</code></pre>
<p>change to </p>
<pre><code>Build\exe.win-amd64-3.7\lib\scipy\spatial\ckdtree.cp37-win_amd64
</code></pre>
<p>Program uses the following modules : tkinter, pydicom, skimage, PIL, cv2, etc.</p>
<p>Originally the program had 2 files containing code, but I combined them into a single file because I came across this sentence: "It's worth noting that pyinstaller can compile to a single .exe file".</p>
<p>What do you suggest I do? Thanks for your help.</p>
<p>Edit: I have tried "Auto-py-to-exe" but I got an error ("Fatal Error: Failed to execute script").</p>
<p>Edit2: I tried running it outside Anaconda. I think it works, but I'm still testing.</p>
<p>Edit3: I tried changing the Python version; the GUI opened on another computer, but the program did not work properly. The program works on my computer but not on other computers.</p>
|
<p>If you are not able to convert python file to executable format, you can try using auto py-to-exe library.</p>
<p>This library contains a GUI format to convert .py files to .exe</p>
<p>Here you can find auto py-to-exe with usage instructions,</p>
<p><a href="https://pypi.org/project/auto-py-to-exe/" rel="nofollow noreferrer">https://pypi.org/project/auto-py-to-exe/</a></p>
|
python-3.x|tkinter|pyinstaller|executable|cx-freeze
| 0 |
1,903,431 | 54,070,285 |
How to convert Dataframe into key-Value pair in given format
|
<p>I have one Dataframe like this </p>
<pre><code> Number String
0 12 Hi
1 34 how
2 35 are
</code></pre>
<p>Now i want to convert this into like given format</p>
<pre><code>[(Number=12, String='hi'),(Number=34, String='How'),(Number=35, String='are')]
</code></pre>
<p>I tried this </p>
<pre><code>tuples = [tup for tup in df.itertuples()]
</code></pre>
<p>which is returning me this result</p>
<pre><code>[Pandas(Index=0, Number=12, String='hi'),
Pandas(Index=1, Number=34, String='How'),
Pandas(Index=2, Number=35, String='are')]
</code></pre>
<p>any suggestion?</p>
|
<p>You can get a list of tuples with a one-liner:</p>
<p><code>[(k, v) for k,v in df.set_index('Number')['String'].to_dict().items()]</code></p>
<p>Or a tuple of tuples:</p>
<p><code>tuple((k, v) for k,v in df.set_index('Number')['String'].to_dict().items())</code></p>
<pre><code>> ((12, 'Hi'), (34, 'how'), (35, 'are'))
</code></pre>
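<p>As a side note, pandas can produce the same list of plain tuples directly — a sketch using <code>itertuples</code> with <code>name=None</code>, which suppresses the <code>Pandas(...)</code> named tuples from the question:</p>

```python
import pandas as pd

df = pd.DataFrame({"Number": [12, 34, 35],
                   "String": ["Hi", "how", "are"]})

# index=False drops the Index field; name=None yields plain tuples
tuples = list(df.itertuples(index=False, name=None))
assert tuples == [(12, 'Hi'), (34, 'how'), (35, 'are')]
```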
|
python-3.x|pandas|list|dataframe|tuples
| 0 |
1,903,432 | 53,844,326 |
import module from other folder python (errors)
|
<p>I am new to Python. I am using the following directory structure and am trying to import module OrgRepo into Func1. I am using a virtualenv and vs code as my IDE.</p>
<p><code>
src/
├── Functions
│ ├── Func1
│ └── Func2
└── Shared
├── __init__.py
├── Repositories
│ ├── __init__.py
│ ├── OrgRepo
└── Utilities
├── __init__.py
└── DictUtil
</code>
I have also tried this without <code>__init__.py</code>.</p>
<p>This is my PATH:</p>
<p><code>['/Users/username/Documents/Projects/myproject/name/src/Functions/Func1', '/Users/username/anaconda3/envs/my_virtual_env/lib/python37.zip', '/Users/username/anaconda3/envs/my_virtual_env/lib/python3.7', '/Users/username/anaconda3/envs/my_virtual_env/lib/python3.7/lib-dynload', '/Users/username/.local/lib/python3.7/site-packages', '/Users/username/anaconda3/envs/my_virtual_env/lib/python3.7/site-packages']
</code></p>
<p>I have tried the following in order to import OrgRepo into Func1:</p>
<p>1: <code>from .Shared.Repositories import OrgRepo</code></p>
<p><code>ModuleNotFoundError: No module named '__main__.Shared'; '__main__' is not a package</code></p>
<p>2: <code>from ..Shared.Repositories import OrgRepo</code></p>
<p><code>ValueError: attempted relative import beyond top-level package</code></p>
<p>3: <code>from src.Shared.Repositories import OrgRepo</code></p>
<p><code>ModuleNotFoundError: No module named 'src'</code></p>
<p>4: <code>from Shared.Repositories import OrgRepo</code></p>
<p><code>ModuleNotFoundError: No module named 'Shared'</code></p>
<p>5: I am using VS Code and when I try to save the file:</p>
<p>It automatically changes to this:
<code>import OrgRepo
import DictionaryUtilities
import datetime
import json
import sys
sys.path.insert(0, 'src/Repositories/')
</code></p>
<p>6:
<code>import sys
sys.path.insert(
0, '/Users/username/Documents/Projects/project/m/src/Repositories')
import OrgRepo</code></p>
<p>and this: </p>
<p><code>sys.path.insert(0, 'Repositories')</code>
<code>sys.path.insert(0, .'Repositories')</code>
<code>sys.path.insert(0, ..'Repositories')</code></p>
<p>Upon running or saving, vs code changes it to this:
<code>import OrgRepo
import sys
sys.path.insert(
0, '/Users/username/Documents/Projects/project/m/src/Repositories')
</code>
and received this error:</p>
<p><code>ModuleNotFoundError: No module named 'OrgRepo'</code></p>
<p>I was able to install this with PIP and import it, but that does not suit our needs.</p>
<p>I have read these posts:
<a href="https://stackoverflow.com/questions/4383571/importing-files-from-different-folder">Importing files from different folder</a></p>
<p><a href="https://stackoverflow.com/questions/20693687/python-import-modules-folder-structures">Python import modules, folder structures</a></p>
<p><a href="https://stackoverflow.com/questions/43059505/how-to-import-multiple-python-modules-from-other-directories">How to Import Multiple Python Modules from Other Directories</a></p>
<p><a href="https://stackoverflow.com/questions/22774433/how-to-properly-import-python-modules-from-an-adjacent-folder">How to properly import python modules from an adjacent folder?</a></p>
<p>I tried to read/understand a few other posts as well . . . I even tried to bang on the Flux Capacitor a few times to no avail . . </p>
<p>EDIT: I am using this code to upload as an AWS Lambda function. While the sys.path solution works locally it makes it does not fit into my workflow. This requires me to change the sys.path to import while uploading and causes problems with Intellisense. I would like to be able to be able to import the module directly. e.g. <code>import OrgRepo</code> so Intellisense does not throw errors and I can zip and upload my package to AWS. I have no problem uploading my package to AWS when I am able to import <code><module_name></code>.</p>
<p>I activate my environment in Anaconda and then export the following PYTHONPATH environment variable:</p>
<p><code>export PYTHONPATH=src/shared/repositories:src/shared/utilities</code></p>
<p>I also tried <code>export PYTHONPATH=$PATH:src/shared/repositories:src/shared/utilities</code></p>
<p>This worked for a period of time and now I am getting <code>PYTHON[unresolved-import]</code> with IntelliSense. I do not appear to get this error when I try to run the script from the directory above <code>/src</code>.</p>
<p>I would be so grateful if somebody can show me how I can import my modules using the standard <code>import <module></code> and have it consistently work.</p>
|
<p>I think what you're trying to do is something like the below. I've cleaned up some of the directory names (typically directories are lowercase with underscores and Class names are uppercased), added <code>.py</code> extensions to the python files, and tried to create a minimalistic environment to replicate your scenario. Hopefully this is helpful. </p>
<h3>Setup the environment</h3>
<pre><code>$ mkdir src; mkdir src/functions; touch src/functions/func1.py; mkdir src/shared; mkdir src/shared/repositories; touch src/shared/repositories/org_repo.py
$ tree
.
└── src
├── functions
│ └── func1.py
└── shared
└── repositories
└── org_repo.py
# src/shared/repositories/org_repo.py
def a_cool_func():
    print(f'hello from {__file__}')


# src/functions/func1.py
import pathlib
import sys

file_path = pathlib.Path(__file__)
path = file_path.resolve().parents[2]
sys.path.append(str(path))

from src.shared.repositories.org_repo import a_cool_func

print(f'hello from {__file__}')
a_cool_func()
</code></pre>
<h3>Run it</h3>
<pre><code># note that there is no significance to /Users/username/tmp/tmp/
# this is just the directory where the test environment was setup
$ python src/functions/func1.py
hello from src/functions/func1.py
hello from /Users/username/tmp/tmp/src/shared/repositories/org_repo.py
</code></pre>
|
python|python-import|importerror
| 0 |
1,903,433 | 58,517,303 |
Can't import Weka in Python
|
<p>I've tried import weka package in Pycharm. But it got an error.</p>
<p>This is for Python 3.7.4 on windows 10. And I've installed java bridge and Weka successfully.</p>
<pre><code>Package Version
------------------- -------
arff 0.9
javabridge 1.0.18
numpy 1.17.3
pandas 0.25.2
pip 19.3.1
python-dateutil 2.8.0
python-weka-wrapper 0.3.15
pytz 2019.3
setuptools 40.8.0
six 1.12.0
weka 1.0.6
</code></pre>
<p>and I type:
<code>import weka.core</code>
and get the error:</p>
<pre><code> File "C:/Users/dell/PycharmProjects/lab2/Source.py", line 2, in <module>
import weka.core
ModuleNotFoundError: No module named 'weka.core'
</code></pre>
<p>So how to fix it? Thank you</p>
|
<p><code>weka.core</code> is not a Python module, as the error states.</p>
<p>What are you trying to achieve with this import?</p>
<p>If you are trying to import the <code>jvm</code> module (eg for starting/stopping the JVM), then do something like <code>import weka.core.jvm as jvm</code>.</p>
|
python|weka
| 0 |
1,903,434 | 65,307,881 |
ANN regression NaN loss values
|
<p>I'm trying to train an ANN to perform regression. The dataset consists of ~10k data points for 7 parameters. I'm using <a href="https://www.tensorflow.org/tutorials/keras/regression#full_model" rel="nofollow noreferrer">this basic regression example from Tensorflow.</a></p>
<p>However, during training, the loss values return NaN values from the very first epoch to the last. I've tried different regularization techniques (Normalization, dropout, L1/L2 weight regularization) without any success.</p>
<p>Any suggestions on what could be the source of this, and how to solve it?</p>
<p>Below are some useful code-snippets and images:</p>
<p><strong>Model</strong></p>
<pre><code>def build_and_compile_model(norm):
    model = keras.Sequential([
        norm,
        layers.Dense(10, activation='relu',
                     kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4),
                     bias_regularizer=regularizers.l2(1e-4),
                     activity_regularizer=regularizers.l2(1e-5)),
        layers.Dense(1)
    ])
    model.add(Dropout(0.2, input_shape=(6,)))
    model.compile(loss='mean_absolute_error',
                  optimizer=tf.keras.optimizers.Adam(0.0001))
    return model


dnn_model = build_and_compile_model(normalizer)
dnn_model.summary()
</code></pre>
<p><strong>Training</strong></p>
<pre><code>%%time
history = dnn_model.fit(
    train_features, train_labels,
    validation_split=0.2,
    verbose=0, epochs=100)

hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
print(hist.head())
print(hist.tail())
# PRINT:
loss val_loss epoch
0 NaN NaN 0
1 NaN NaN 1
2 NaN NaN 2
3 NaN NaN 3
4 NaN NaN 4
loss val_loss epoch
95 NaN NaN 95
96 NaN NaN 96
97 NaN NaN 97
98 NaN NaN 98
99 NaN NaN 99
Wall time: 26.1 s
</code></pre>
<p><strong>Data inspection</strong></p>
<p>Last row is output parameter.</p>
<p><a href="https://i.stack.imgur.com/hGYOh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hGYOh.png" alt="enter image description here" /></a></p>
|
<p>I think the problem is that you have added a Dropout layer after the Dense(1) layer. This makes Dropout the top (output) layer of your model. Remove it and it should work.</p>
|
tensorflow|keras|deep-learning|neural-network|regression
| 0 |
1,903,435 | 22,674,423 |
Ensure value range for python list
|
<p>I want a function that takes a list and returns that list with any elements less than 0 or greater than "upper" removed.</p>
<p>I figured out how to do it with a list comprehension, but I can't figure out why this didn't work:</p>
<pre><code>dim = 4
def ensure_values(l, upper=dim**2):
    for i in l:
        if i < 0 or i >= upper:
            l.remove(i)
    return l
l = [0,2,-3,5]
ensure_values(l)
[0,2,-3,5]
</code></pre>
<p>I expected [0,2,5].</p>
|
<p>Best practice in Python would be for your function to return a new list that contains only the desired elements. This is best done with a list comprehension.</p>
<pre><code>dim = 4
def ensure_values(lst, upper=dim**2):
    return [n for n in lst if 0 < n <= upper]
lst = [0,2,-3,5]
assert ensure_values(lst) == [2, 5]
</code></pre>
<p>Note that I was able to use the expression <code>0 < n <= upper</code>. That's not legal in C or most other languages, but in Python it is legal and does what one would expect it to do.</p>
<p>Also, I recommend against using <code>l</code> as a variable name. It looks almost exactly like a <code>1</code> or an <code>I</code> in many fonts. Recommended practice is to use <code>lst</code> or <code>L</code> for a short name for a list.</p>
<p>The use of <code>l</code> as a variable name is specifically dis-recommended in the PEP 8 coding style guidelines for Python. Here's a link:</p>
<p><a href="http://legacy.python.org/dev/peps/pep-0008/#id29" rel="nofollow">http://legacy.python.org/dev/peps/pep-0008/#id29</a></p>
|
python|list
| 3 |
1,903,436 | 45,662,253 |
Can I run Keras model on gpu?
|
<p>I'm running a Keras model, with a submission deadline of 36 hours, if I train my model on the cpu it will take approx 50 hours, is there a way to run Keras on gpu?</p>
<p>I'm using Tensorflow backend and running it on my Jupyter notebook, without anaconda installed.</p>
|
<p>Yes you can run keras models on GPU. Few things you will have to check first.</p>
<ol>
<li>your system has GPU (Nvidia. As AMD doesn't work yet)</li>
<li>You have installed the GPU version of tensorflow</li>
<li>You have installed CUDA <a href="https://www.tensorflow.org/install/install_linux" rel="noreferrer">installation instructions</a></li>
<li>Verify that tensorflow is running with GPU <a href="https://stackoverflow.com/questions/38009682/how-to-tell-if-tensorflow-is-using-gpu-acceleration-from-inside-python-shell">check if GPU is working</a></li>
</ol>
<p><code>sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))</code></p>
<p>for TF > v2.0</p>
<p><code>sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))</code></p>
<p>(Thanks @nbro and @Ferro for pointing this out in the comments)</p>
<p>OR</p>
<pre><code>from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code></pre>
<p>output will be something like this:</p>
<pre><code>[
name: "/cpu:0"device_type: "CPU",
name: "/gpu:0"device_type: "GPU"
]
</code></pre>
<p>Once all this is done your model will run on GPU:</p>
<p>To Check if keras(>=2.1.1) is using GPU:</p>
<pre><code>from keras import backend as K
K.tensorflow_backend._get_available_gpus()
</code></pre>
<p>All the best.</p>
|
python|tensorflow|keras|jupyter
| 213 |
1,903,437 | 68,830,510 |
ValueError when checking if tuple occurs in list of tuples
|
<p>When I am running my code I suddenly get an unexpected error:
<code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</code></p>
<p>I am trying to check if a tuple occurs within a list:</p>
<pre><code>concat_tuples = [(7, 18), (7, [0, 10, 19]), (7, 16)]
to_explode = [c for c in concat_tuples
              if any(isinstance(x, list) and len(x) > 1 for x in c)]
# >> to_explode = [(7, [0, 10, 19])]
not_explode = [x for x in concat_tuples if x not in to_explode]
</code></pre>
<p>However, my last line of code fails in my script for the first value (and probably also for the other values). The weird thing is that it works in my Python console, but not in my script (pytests). What could be going wrong in my script?</p>
<p><strong>What I have tried</strong></p>
<ul>
<li>Checking existence in list with <code>list.index()</code>. This also fails with the same error</li>
<li>Checked types of both x and to_explode, they're a tuple and list of tuples respectively</li>
<li>Reformatted the code: list comprehension to regular for-loop, still no success</li>
<li>Run the code in Python console, which works</li>
</ul>
|
<p>It turned out that most of the time the tuples contained plain Python integers, but sometimes they contained numpy int32 objects, which caused the error. I fixed it by casting everything to strings.</p>
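<p>A pure-Python sketch of that workaround (comparing by <code>str()</code> sidesteps element-wise <code>==</code> on array-like members; the sample data mirrors the question):</p>

```python
concat_tuples = [(7, 18), (7, [0, 10, 19])]

# detect tuples that contain a multi-element list
to_explode = [c for c in concat_tuples
              if any(isinstance(x, list) and len(x) > 1 for x in c)]

# compare by string instead of ==, so no ambiguous truth values arise
explode_keys = {str(c) for c in to_explode}
not_explode = [c for c in concat_tuples if str(c) not in explode_keys]

assert not_explode == [(7, 18)]
```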
|
python|list|if-statement|tuples|valueerror
| 0 |
1,903,438 | 6,701,714 |
Numpy - Replace a number with NaN
|
<p>I am looking to replace a number with NaN in numpy and am looking for a function like numpy.nan_to_num, except in reverse.</p>
<p>The number is likely to change as different arrays are processed because each can have a uniquely define NoDataValue. I have see people using dictionaries, but the arrays are large and filled with both positive and negative floats. I suspect that it is not efficient to try to load all of these into anything to create keys.</p>
<p>I tried using the following and numpy requiring that I use any() or all(). I realize that I need to iterate element wise, but hope that a built-in function can achieve this.</p>
<pre><code>def replaceNoData(scanBlock, NDV):
    for n, i in enumerate(scanBlock):
        if i == NDV:
            scanBlock[n] = numpy.nan
</code></pre>
<p>NDV is GDAL's no data value and array is a numpy array.</p>
<p>Is a masked array the way to go perhaps?</p>
|
<pre><code>A[A==NDV]=numpy.nan
</code></pre>
<p>A==NDV will produce a boolean array that can be used as an index for A</p>
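<p>For example (a small sketch; note that NaN can only be stored in a float array, so cast integer rasters with <code>A.astype(float)</code> first):</p>

```python
import numpy as np

A = np.array([[1.0, -9999.0],
              [3.5, 2.0]])
NDV = -9999.0  # the band's NoDataValue

# boolean mask selects every NoData cell and replaces it in place
A[A == NDV] = np.nan

assert np.isnan(A[0, 1])
assert A[1, 1] == 2.0
```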
|
python|arrays|numpy|nan|gdal
| 71 |
1,903,439 | 25,701,036 |
Force label line wrap
|
<p>I'm using a Gtk.Label within a tooltip to display text. The text is usually rather short, but does occasionally become very long, in which case the label expands horizontally to fill the entire screen width, which results in a ridiculously wide tooltip. Is there a way to force the label's text into more lines?</p>
<p>I had two ideas:</p>
<p>1) set the label's maximum width. Gtk doesn't seem to support this.</p>
<p>2) set the label's maximum line length. Gtk doesn't seem to support this either.</p>
<p>Which means I'm fresh out of ideas. Is there any way to do this?</p>
|
<p>Use <a href="http://www.pygtk.org/pygtk2reference/class-gtklabel.html#method-gtklabel--set-line-wrap" rel="nofollow">gtk.Label.set_line_wrap</a> combined with <a href="http://www.pygtk.org/pygtk2reference/class-gtklabel.html#method-gtklabel--set-max-width-chars" rel="nofollow">gtk.Label.set_max_width_chars</a>.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>label.set_line_wrap(True)
label.set_max_width_chars(20)
</code></pre>
|
python|size|gtk3
| 1 |
1,903,440 | 62,027,638 |
Access property object (for replacing setter method)
|
<p>Consider the following code taken from <a href="https://docs.python.org/3/howto/descriptor.html#properties" rel="nofollow noreferrer">the official documentation</a></p>
<pre><code>class test:
_x = 10
def getx(self): return self._x
def setx(self, value): self._x = value
x = property(getx, setx)
</code></pre>
<p>as already explained in many other questions, this is 100% equivalent to </p>
<pre><code>class test:
_x = 10
@property
def x(self):
return self._x
@x.setter
def x(self, val):
self._x = val
</code></pre>
<p>I would like to access <strong>the property</strong> <code>x</code> (and not the <code>int</code> in <code>_x</code>) in order to change the value of <code>x.setter</code>. </p>
<p>However doing <code>type(test().x)</code> returns <code>int</code> rather than <code>property</code> indicating that what <code>test().x</code> returns is <code>_x</code> and not the property <code>x</code>. Indeed, trying to do access <code>test().x.setter</code> returns a <code>AttributeError: 'int' object has no attribute 'setter'</code>.</p>
<p>I understand that 99.9% of the time, when someone does <code>test().x</code> he wants to access the value associated with the property <code>x</code>. This is exactly what properties are meant for. </p>
<p>However, how can I do it in the 0.01% of cases when I want to access the <code>property</code> object rather than the value returned by the <code>getter</code>?</p>
|
<p><code>x</code> is a <em>class</em> attribute, whose <code>__get__</code> method receives a reference to the object when invoked on an instance of the class. You need to get a reference to the class first, then you can get the actual <code>property</code> object without invoking the getter.</p>
<pre><code>>>> t = test()
>>> t.x
10
>>> type(t).x
<property object at ....>
</code></pre>
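<p>Once you have the property object, you can also rebuild it with a different setter — a minimal sketch (the <code>doubling_setter</code> here is a made-up replacement for illustration):</p>

```python
class Test:
    _x = 10

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, val):
        self._x = val


t = Test()
prop = type(t).x                 # the property object itself, getter not invoked
assert isinstance(prop, property)

# property.setter returns a *new* property with the same getter
def doubling_setter(self, val):
    self._x = val * 2

Test.x = prop.setter(doubling_setter)
t.x = 5
assert t._x == 10                # the replacement setter was used
```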
|
python|properties
| 1 |
1,903,441 | 23,631,506 |
'Polygon' object does not support indexing
|
<p>I am trying to render an SVG map using Kartograph.py. It throws me the TypeError. Here is the python code:</p>
<pre><code>import kartograph
from kartograph import Kartograph
import sys
from kartograph.options import read_map_config
css = open("stylesheet.css").read()
K = Kartograph()
cfg = read_map_config(open("config.json"))
K.generate(cfg, outfile='dd.svg', format='svg', stylesheet=css)
</code></pre>
<p>Here is the error it throws</p>
<pre><code>Traceback (most recent call last):
File "<pyshell#33>", line 1, in <module>
K.generate(cfg, outfile='dd.svg', format='svg', stylesheet=css)
File "C:\Python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\kartograph.py", line 46, in generate
_map = Map(opts, self.layerCache, format=format)
File "C:\Python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\map.py", line 61, in __init__
layer.get_features()
File "C:\Python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\maplayer.py", line 81, in get_features
charset=layer.options['charset']
File "C:\Python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\layersource\shplayer.py", line 121, in get_features
geom = shape2geometry(shp, ignore_holes=ignore_holes, min_area=min_area, bbox=bbox, proj=self.proj)
File "C:\Python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\layersource\shplayer.py", line 153, in shape2geometry
geom = shape2polygon(shp, ignore_holes=ignore_holes, min_area=min_area, proj=proj)
File "C:\Python27\lib\site-packages\kartograph.py-0.6.8-py2.7.egg\kartograph\layersource\shplayer.py", line 217, in shape2polygon
poly = MultiPolygon(polygons)
File "C:\Python27\lib\site-packages\shapely\geometry\multipolygon.py", line 74, in __init__
self._geom, self._ndim = geos_multipolygon_from_polygons(polygons)
File "C:\Python27\lib\site-packages\shapely\geometry\multipolygon.py", line 30, in geos_multipolygon_from_polygons
N = len(ob[0][0][0])
TypeError: 'Polygon' object does not support indexing
</code></pre>
|
<p>I had a look at shapely and it seems like you are using an outdated version.
Update your current install:</p>
<pre><code>pip install -U shapely
</code></pre>
|
python-2.7|gis|shapely|kartograph
| 0 |
1,903,442 | 23,866,833 |
What's the full specification for implementing a custom scikit-learn estimator?
|
<p>I'm rolling my own predictor and want to use it like I would use any of the scikit routines (e.g. RandomForestRegressor). I have a class containing <code>fit</code> and <code>predict</code> methods that seem to work fine. However, when I try to use some of the scikit methods, such as cross validation, I get errors like:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\sklearn\cross_validation.py", line 1152, in cross_val_
score
for train, test in cv)
File "C:\Python27\lib\site-packages\sklearn\externals\joblib\parallel.py", line 516, in __
call__
for function, args, kwargs in iterable:
File "C:\Python27\lib\site-packages\sklearn\cross_validation.py", line 1152, in <genexpr>
for train, test in cv)
File "C:\Python27\lib\site-packages\sklearn\base.py", line 43, in clone
% (repr(estimator), type(estimator)))
TypeError: Cannot clone object '<__main__.Custom instance at 0x033A6990>' (type <type 'inst
ance'>): it does not seem to be a scikit-learn estimator a it does not implement a 'get_para
ms' methods.
</code></pre>
<p>I see that it wants me to implement some methods (presumably <code>get_params</code> as well as maybe <code>set_params</code> and <code>score</code>) but I'm not sure what the right specification for making these methods is. Is there some information available on this topic? Thanks.</p>
|
<p>Full instructions are available in the <a href="http://scikit-learn.org/stable/developers/contributing.html#rolling-your-own-estimator" rel="noreferrer">scikit-learn docs</a>, and the principles behind the API are set out in <a href="http://arxiv.org/abs/1309.0238" rel="noreferrer">this paper by yours truly et al.</a> In short, besides <code>fit</code>, what you need for an estimator are <code>get_params</code> and <code>set_params</code> that return (as a <code>dict</code>) and set (from kwargs) the hyperparameters of the estimator, i.e. the parameters of the learning algorithm itself (as opposed to the data parameters it learns). These parameters should match the <code>__init__</code> parameters.</p>
<p>Both methods can be obtained by inheriting from the classes in <code>sklearn.base</code>, but you can provide them yourself if you don't want your code to be dependent on scikit-learn.</p>
<p>Note that input validation should be done in <code>fit</code>, not the constructor, because otherwise you can still set invalid parameters in <code>set_params</code> and have <code>fit</code> fail in unexpected ways.</p>
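<p>A minimal sketch of that contract without depending on scikit-learn at all (the hyperparameter names <code>alpha</code> and <code>max_iter</code> are made up for illustration):</p>

```python
class Custom:
    """Toy estimator showing the get_params/set_params contract."""

    def __init__(self, alpha=1.0, max_iter=100):
        # store hyperparameters under the same names as the __init__
        # arguments, so the estimator can be rebuilt from get_params()
        self.alpha = alpha
        self.max_iter = max_iter

    def get_params(self, deep=True):
        # return the hyperparameters (not learned attributes) as a dict
        return {"alpha": self.alpha, "max_iter": self.max_iter}

    def set_params(self, **params):
        # set hyperparameters from keyword arguments and return self
        for name, value in params.items():
            setattr(self, name, value)
        return self

    def fit(self, X, y):
        # validate inputs and learn data parameters here, then return self
        return self

est = Custom(alpha=0.5)
rebuilt = type(est)(**est.get_params())  # essentially what sklearn's clone() does
```

<p>In practice, inheriting from <code>sklearn.base.BaseEstimator</code> gives you both methods for free, provided <code>__init__</code> does nothing but store its arguments.</p>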
|
python|scikit-learn
| 17 |
1,903,443 | 46,575,166 |
Django Python str() argument 2 must be str, not int
|
<p>I have a 12x12 dictionary with values, which are 0 for all.</p>
<pre><code>matchfield = {}
for i in range(12):
for j in range(12):
matchfield[str((i, j))] = 0
</code></pre>
<p>I want to set some values to 1 with the following snippet (it checks whether the surrounding fields are free):</p>
<pre><code>length = 4
m = randint(1, 10-length)
n = randint(1, 10)
for x in range(m-1, m+length+1):
for y in range(n-1, n+1):
if not matchfield[str((x, y))]:
for k in range(length):
matchfield[str((m+k, n))] = 1
</code></pre>
<p>If I test this in the Python console, everything works and the 4 selected values are set to 1, but in my Django view function I got a TypeError on the following line:</p>
<pre><code>matchfield[str((m+k, n))] = 1</code></pre>
<pre><code>Environment:
Request Method: GET
Request URL: https://www.maik-kusmat.de/schiffeversenken/start/
Django Version: 1.11.5
Python Version: 3.5.3
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'django.contrib.flatpages',
'accounts',
'home',
'contact',
'kopfrechnen',
'braces',
'ckeditor',
'ckeditor_uploader',
'battleship',
'hangman']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware']
Traceback:
File "/home/pi/Dev/mkenergy/lib/python3.5/site-packages/django/core/handlers/exception.py" in inner
41. response = get_response(request)
File "/home/pi/Dev/mkenergy/lib/python3.5/site-packages/django/core/handlers/base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "/home/pi/Dev/mkenergy/lib/python3.5/site-packages/django/core/handlers/base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/pi/Dev/mkenergy/lib/python3.5/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
23. return view_func(request, *args, **kwargs)
File "/home/pi/Dev/mkenergy/src/battleship/views.py" in battleship_start
36. matchfield[str((m+k, n))] = 1
Exception Type: TypeError at /schiffeversenken/start/
Exception Value: str() argument 2 must be str, not int
</code></pre>
<p>Did I miss something? I do not understand the error.</p>
|
<p>I would expect the <code>argument 2 must be str</code> error to occur if you had </p>
<pre><code>matchfield[str(m+k, n)]
</code></pre>
<p>In Python 3, the second argument to <a href="https://docs.python.org/3/library/functions.html#func-str" rel="nofollow noreferrer"><code>str</code></a> is the encoding, so an integer <code>n</code> would cause that error.</p>
<p>However, your traceback shows <code>matchfield[str((m+k, n))]</code>, which shouldn't cause that error. Try restarting the Django server to make sure you're running the current code.</p>
<p>At first, I suggested that you use tuples as dictionary keys, e.g.</p>
<pre><code>matchfield[(i, j)] = 0
</code></pre>
<p>However, if you are serializing <code>matchfield</code> to json then that won't work because the keys need to be strings.</p>
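<p>As a side note, a sketch of why the <code>str((i, j))</code> keys matter if the dict is serialized: tuple keys work in plain Python but fail under <code>json.dumps</code>, while string keys round-trip:</p>

```python
import json

# tuple keys: natural for a grid, but json.dumps rejects them
grid = {(i, j): 0 for i in range(2) for j in range(2)}
try:
    json.dumps(grid)
    serialized = True
except TypeError:
    serialized = False  # JSON object keys must be strings

# string keys built with str((i, j)) survive a JSON round-trip
grid_str = {str((i, j)): 0 for i in range(2) for j in range(2)}
restored = json.loads(json.dumps(grid_str))
```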
|
python|django
| 1 |
1,903,444 | 70,305,833 |
RuntimeError: shape '[128, -1]' is invalid for input of size 378 pytorch
|
<p>I'm running a spiking neural network for data that has 21 features with a batch size of 128. I get the following error after many iterations of training (this error doesn't arise immediately!):</p>
<p><code>RuntimeError: shape '[128, -1]' is invalid for input of size 378 pytorch</code></p>
<p>When I went to go print out what the shapes of the tensors are before, I get the following:</p>
<pre><code>Train
torch.Size([128, 21])
Test
torch.Size([128, 21])
</code></pre>
<p>This is my network:</p>
<pre><code>class SpikingNeuralNetwork(nn.Module):
"""
Parameters in SpikingNeuralNetwork class:
1. number_inputs: Number of inputs to the SNN.
2. number_hidden: Number of hidden layers.
3. number_outputs: Number of output classes.
4. beta: Decay rate.
"""
def __init__(self, number_inputs, number_hidden, number_outputs, beta):
super().__init__()
self.number_inputs = number_inputs
self.number_hidden = number_hidden
self.number_outputs = number_outputs
self.beta = beta
# Initialize layers
self.fc1 = nn.Linear(self.number_inputs, self.number_hidden) # Applies linear transformation to all input points
self.lif1 = snn.Leaky(beta = self.beta) # Integrates weighted input over time, emitting a spike if threshold condition is met
self.fc2 = nn.Linear(self.number_hidden, self.number_outputs) # Applies linear transformation to output spikes of lif1
self.lif2 = snn.Leaky(beta = self.beta) # Another spiking neuron, integrating the weighted spikes over time
"""
Forward propagation of SNN. The code below function will only be called once the input argument x
is explicitly passed into net.
@param x: input passed into the network
@return layer of output after applying final spiking neuron
"""
def forward(self, x):
num_steps = 25
# Initialize hidden states at t = 0
mem1 = self.lif1.init_leaky()
mem2 = self.lif2.init_leaky()
# Record the final layer
spk2_rec = []
mem2_rec = []
for step in range(num_steps):
cur1 = self.fc1(x)
spk1, mem1 = self.lif1(cur1, mem1)
cur2 = self.fc2(spk1)
spk2, mem2 = self.lif2(cur2, mem2)
spk2_rec.append(spk2)
mem2_rec.append(mem2)
return torch.stack(spk2_rec, dim = 0), torch.stack(mem2_rec, dim = 0)
</code></pre>
<p>This is my training loop:</p>
<pre><code>def training_loop(net, train_loader, test_loader, dtype, device, optimizer):
num_epochs = 1
loss_history = []
test_loss_history = []
counter = 0
# Temporal dynamics
num_steps = 25
# Outer training loop
for epoch in range(num_epochs):
iter_counter = 0
train_batch = iter(train_loader)
# Minibatch training loop
for data, targets in train_batch:
data = data.to(device)
targets = targets.to(device)
# Forward pass
net.train()
print("Train")
print(data.size())
spk_rec, mem_rec = net(data.view(batch_size, -1))
# Initialize the loss and sum over time
loss_val = torch.zeros((1), dtype = dtype, device = device)
for step in range(num_steps):
loss_val += loss_function(mem_rec[step], targets.long().flatten().to(device))
# Gradient calculation and weight update
optimizer.zero_grad()
loss_val.backward()
optimizer.step()
# Store loss history for future plotting
loss_history.append(loss_val.item())
# Test set
with torch.no_grad():
net.eval()
test_data, test_targets = next(iter(test_loader))
test_data = test_data.to(device)
test_targets = test_targets.to(device)
# Test set forward pass
print("Test")
print(test_data.size())
test_spk, test_mem = net(test_data.view(batch_size, -1))
# Test set loss
test_loss = torch.zeros((1), dtype = dtype, device = device)
for step in range(num_steps):
test_loss += loss_function(test_mem[step], test_targets.long().flatten().to(device))
test_loss_history.append(test_loss.item())
# Print train/test loss and accuracy
if counter % 50 == 0:
train_printer(epoch, iter_counter, counter, loss_history, data, targets, test_data, test_targets)
counter = counter + 1
iter_counter = iter_counter + 1
return loss_history, test_loss_history
</code></pre>
<p>The error occurs on <code>spk_rec, mem_rec = net(data.view(batch_size, -1))</code>.</p>
<p>The code was adopted from <a href="https://snntorch.readthedocs.io/en/latest/tutorials/tutorial_5.html" rel="nofollow noreferrer">https://snntorch.readthedocs.io/en/latest/tutorials/tutorial_5.html</a>, where it was originally used for the MNIST dataset. However, I am not working with an image dataset. I am working with a dataset that has 21 features and predicts just one target (with 100 classes). I tried to change <code>data.view(batch_size, -1)</code> and <code>test_data.view(batch_size, -1)</code> to <code>data.view(batch_size, 21)</code> and <code>test_data.view(batch_size, 21)</code> based on some other forum answers that I saw, and my program is running for now through the training loop. Does anyone have any suggestions for how I can run through the training with no errors?</p>
<p>EDIT: I now get the error <code>RuntimeError: shape '[128, 21]' is invalid for input of size 378</code> from <code>spk_rec, mem_rec = net(data.view(batch_size, -1))</code>.</p>
<p>Here are my DataLoaders:</p>
<pre><code> train_loader = DataLoader(dataset = train, batch_size = batch_size, shuffle = True)
test_loader = DataLoader(dataset = test, batch_size = batch_size, shuffle = True)
</code></pre>
<p>My batch size is 128.</p>
|
<p>Trying to run your code myself to reproduce the problem, I found I was also missing some pieces: the net parameters and <code>snn.Leaky</code>. Here is the self-contained version I put together:</p>
<pre><code>import torch
import snntorch as snn  # missing in the original snippet; provides snn.Leaky
from torch import nn
from torch.utils.data import DataLoader
class SpikingNeuralNetwork(nn.Module):
"""
Parameters in SpikingNeuralNetwork class:
1. number_inputs: Number of inputs to the SNN.
2. number_hidden: Number of hidden layers.
3. number_outputs: Number of output classes.
4. beta: Decay rate.
"""
def __init__(self, number_inputs, number_hidden, number_outputs, beta):
super().__init__()
self.number_inputs = number_inputs
self.number_hidden = number_hidden
self.number_outputs = number_outputs
self.beta = beta
# Initialize layers
self.fc1 = nn.Linear(self.number_inputs,
self.number_hidden) # Applies linear transformation to all input points
self.lif1 = snn.Leaky(
beta=self.beta) # Integrates weighted input over time, emitting a spike if threshold condition is met
self.fc2 = nn.Linear(self.number_hidden,
self.number_outputs) # Applies linear transformation to output spikes of lif1
self.lif2 = snn.Leaky(beta=self.beta) # Another spiking neuron, integrating the weighted spikes over time
"""
Forward propagation of SNN. The code below function will only be called once the input argument x
is explicitly passed into net.
@param x: input passed into the network
@return layer of output after applying final spiking neuron
"""
def forward(self, x):
num_steps = 25
# Initialize hidden states at t = 0
mem1 = self.lif1.init_leaky()
mem2 = self.lif2.init_leaky()
# Record the final layer
spk2_rec = []
mem2_rec = []
for step in range(num_steps):
cur1 = self.fc1(x)
spk1, mem1 = self.lif1(cur1, mem1)
cur2 = self.fc2(spk1)
spk2, mem2 = self.lif2(cur2, mem2)
spk2_rec.append(spk2)
mem2_rec.append(mem2)
return torch.stack(spk2_rec, dim=0), torch.stack(mem2_rec, dim=0)
batch_size = 2
train = torch.rand(128, 21)
test = torch.rand(128, 21)
train_loader = DataLoader(dataset=train, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(dataset=test, batch_size=batch_size, shuffle=True)
# illustrative sizes: 21 input features, 100 output classes
net = SpikingNeuralNetwork(number_inputs=21, number_hidden=100, number_outputs=100, beta=0.9)
loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.1)  # torch.optim, not nn.optim
def training_loop(net, train_loader, test_loader, dtype, device, optimizer):
num_epochs = 1
loss_history = []
test_loss_history = []
counter = 0
# Temporal dynamics
num_steps = 25
# Outer training loop
for epoch in range(num_epochs):
iter_counter = 0
train_batch = iter(train_loader)
# Minibatch training loop
for data, targets in train_batch:
data = data.to(device)
targets = targets.to(device)
# Forward pass
net.train()
print("Train")
print(data.size())
spk_rec, mem_rec = net(data.view(batch_size, -1))
# Initialize the loss and sum over time
loss_val = torch.zeros((1), dtype=dtype, device=device)
for step in range(num_steps):
loss_val += loss_function(mem_rec[step], targets.long().flatten().to(device))
# Gradient calculation and weight update
optimizer.zero_grad()
loss_val.backward()
optimizer.step()
# Store loss history for future plotting
loss_history.append(loss_val.item())
# Test set
with torch.no_grad():
net.eval()
test_data, test_targets = next(iter(test_loader))
test_data = test_data.to(device)
test_targets = test_targets.to(device)
# Test set forward pass
print("Test")
print(test_data.size())
test_spk, test_mem = net(test_data.view(batch_size, -1))
# Test set loss
test_loss = torch.zeros((1), dtype=dtype, device=device)
for step in range(num_steps):
test_loss += loss_function(test_mem[step], test_targets.long().flatten().to(device))
test_loss_history.append(test_loss.item())
# Print train/test loss and accuracy
if counter % 50 == 0:
train_printer(epoch, iter_counter, counter, loss_history, data, targets, test_data, test_targets)
counter = counter + 1
iter_counter = iter_counter + 1
return loss_history, test_loss_history
</code></pre>
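<p>A side note on where the number 378 comes from, assuming 21 features as in the question: 378 = 18 * 21, so the failing batch held only 18 samples, i.e. the last, shorter batch of the epoch. A sketch of the arithmetic and the usual fixes:</p>

```python
# why view(128, -1) failed: the dataset length isn't a multiple of the
# batch size, so the epoch's final batch is smaller than 128
num_features = 21
failing_size = 378
last_batch = failing_size // num_features
assert last_batch == 18          # only 18 samples in the short final batch
assert failing_size % 128 != 0   # so shape [128, -1] cannot hold 378 values

# typical fixes (hypothetical, mirroring the question's loader):
#   spk_rec, mem_rec = net(data.view(data.size(0), -1))   # use the real batch size
#   DataLoader(dataset=train, batch_size=128, shuffle=True, drop_last=True)
```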
|
deep-learning|pytorch|runtime-error|dimensions|pytorch-dataloader
| 0 |
1,903,445 | 53,492,159 |
Speed up pandas pairwise creation
|
<p>I am looking to speed up a pandas dataframe groupby function to do a pairwise comparison. </p>
<p>For a given dataframe, it has columns [x1, x2, x3, x4] with many rows. (there are millions of rows)</p>
<p>I want to group-by [ x1]. (there will be tens of thousands of groups)</p>
<p>Then take the first row of every group-by, duplicate the row N number of times, where N is the number of rows in the group-by.
Rename the column headers to: [y1, y2, y3, y4]
then merge it with the original group.</p>
<p>My original table with header:</p>
<pre><code>[x1, x2, x3, x4]
[1, 'p', 45, 62]
[1, 'k', 12, 84]
</code></pre>
<p>Turn to:</p>
<pre><code>[y1, y2, y3, y4, x1, x2, x3, x4]
[1, 'p', 45, 62, 1, 'p', 45, 62]
[1, 'p', 45, 62, 1, 'k', 12, 84]
</code></pre>
<p>I can multi-process it, but it is still pretty slow. Here is my current version:</p>
<pre><code>for name, group in dataframe.groupby(['x1']):
# take first row and make dataframe
duplicated_row = pd.concat([group.iloc[[0]]]*len(group), ignore_index = True)
# create new headers
new_headers = [x.replace('v2', 'v1') for x in list(duplicated_row)]
column_names2 = dict(zip(list(duplicated_row), new_headers))
# rename headers
duplicated_row = duplicated_row.rename(index=str, columns=column_names2)
duplicated_row = duplicated_row.reset_index(drop=True)
# concat two dataframes
full_df = pd.concat([duplicated_row, group.reset_index(drop=True)], axis = 1)
</code></pre>
<p>Are there any functions I can pull from pandas which are native C to speed this? or vectorize this somehow? (at entire dataframe, or by the groupby level)</p>
|
<p>Use <code>groupby</code> and <code>transform</code>, and <code>concat</code> the results.</p>
<pre><code>i = df['x1'].rename('y1')
j = df.groupby('x1').transform('first')
j.columns = 'y' + j.columns.str[1:]
df = pd.concat([i, j, df], axis=1)
print(df)
y1 y2 y3 y4 x1 x2 x3 x4
0 1 p 45 62 1 p 45 62
1 1 p 45 62 1 k 12 84
</code></pre>
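<p>To sanity-check the per-group behaviour, here is a quick sketch extending the sample to a second group (the extra row is made up):</p>

```python
import pandas as pd

df = pd.DataFrame([[1, 'p', 45, 62], [1, 'k', 12, 84], [2, 'q', 7, 9]],
                  columns=['x1', 'x2', 'x3', 'x4'])

i = df['x1'].rename('y1')
j = df.groupby('x1').transform('first')  # broadcasts each group's first row
j.columns = 'y' + j.columns.str[1:]
out = pd.concat([i, j, df], axis=1)
print(out)
```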
|
python|pandas|pandas-groupby
| 3 |
1,903,446 | 53,440,754 |
Heroku postgresql operator does not exist
|
<p>I deployed my site to Heroku running postgresql. Before, I had it on the flask development environment running sqlite. The app ran fine when using the schedule view, but when I access the schedule view from Heroku, I get an error. </p>
<p><strong>CLASS</strong></p>
<pre><code>class Task(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(80), index=True)
description = db.Column(db.String(200), index=True)
priority = db.Column(db.Integer)
is_complete = db.Column(db.Boolean) ####might be trouble
url = db.Column(db.String(200), index=True)
est_dur = db.Column(db.Integer)
time_quad = db.Column(db.Integer)
timestamp = db.Column(db.DateTime, index=True, default=datetime.utcnow)
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
</code></pre>
<p><strong>ROUTE</strong></p>
<pre><code>@bp.route('/schedule')
#@login_required
def schedule():
currentUser = current_user.id
q1 = Task.query.filter_by(time_quad=1).filter_by(user_id=currentUser).filter_by(is_complete=0).all()
q2 = Task.query.filter_by(time_quad=2).filter_by(user_id=currentUser).filter_by(is_complete=0).all()
q3 = Task.query.filter_by(time_quad=3).filter_by(user_id=currentUser).filter_by(is_complete=0).all()
q4 = Task.query.filter_by(time_quad=4).filter_by(user_id=currentUser).filter_by(is_complete=0).all()
taskAll = q1 + q2 + q3 + q4
print("current user" + str(currentUser))
return render_template('schedule.html', taskList = taskAll)
</code></pre>
<p><strong>ERROR</strong></p>
<pre><code>Exception on /schedule [GET]
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/app/.heroku/python/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
cursor.execute(statement, parameters)
psycopg2.ProgrammingError: operator does not exist: boolean = integer
LINE 3: ....time_quad = 1 AND task.user_id = 1 AND task.is_complete = 0
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
[SQL: 'SELECT task.id AS task_id, task.name AS task_name, task.description AS task_description, task.priority AS task_priority, task.is_complete AS task_is_complete, task.url AS task_url, task.est_dur AS task_est_dur, task.time_quad AS task_time_quad, task.timestamp AS task_timestamp, task.user_id AS task_user_id \nFROM task \nWHERE task.time_quad = %(time_quad_1)s AND task.user_id = %(user_id_1)s AND task.is_complete = %(is_complete_1)s'] [parameters: {'time_quad_1': 1, 'user_id_1': 1, 'is_complete_1': 0}] (Background on this error at: http://sqlalche.me/e/f405)
</code></pre>
|
<p>PostgreSQL has a real boolean type but SQLite doesn't; SQLite generally uses one and zero for true and false (respectively). When you say this:</p>
<pre><code>filter_by(is_complete=0)
</code></pre>
<p>the <code>0</code> will be interpreted by SQLite as "false" but by PostgreSQL as just the number zero; hence the complaint at Heroku about not being able to compare a boolean and an integer in <code>task.is_complete = 0</code>. If you mean "false", say so:</p>
<pre><code>filter_by(is_complete=False)
</code></pre>
<p>That should be converted to zero when talking to SQLite and the proper boolean <code>'f'</code> (or <code>false</code>) when talking to PostgreSQL.</p>
<p>Once you fix that, I strongly recommend that you install PostgreSQL in your development environment if that's the database you're going to be deploying on. You'll have a much better time of things if you develop, test, and deploy on the same stack.</p>
|
python|postgresql|heroku|heroku-postgres
| 0 |
1,903,447 | 53,403,923 |
Creating a histogram that sorts array values into bins, and shows the the frequency of items in the bins
|
<p>For an array of values between 0 and 1, I want to create a histogram of 5 bins, where bin 1 shows the frequency (# of times) numbers between 0-0.2 show up in the array, bin 2 shows the frequency of numbers between 0.2-0.4, bin 3: 0.4-0.6, bin 4: 0.6-0.8, and bin 5: 0.8-1.</p>
<pre><code>import numpy as np
arr = np.array([0.5, 0.1, 0.05, 0.67, 0.8, 0.9, 1, 0.22, 0.25])
y, other_stuff = np.histogram(arr, bins=5)
x = range(0,5)
graph = plt.bar(x,height=y)
plt.show()
</code></pre>
|
<p>I think you are looking for matplotlib's <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html" rel="nofollow noreferrer"><code>hist</code></a> method.</p>
<p>With your sample array the code would look like: </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

plt.hist(arr, bins=np.linspace(0, 1, 6), ec='black')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/3ciMf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3ciMf.png" alt="enter image description here"></a></p>
|
python|arrays|numpy|matplotlib|histogram
| 3 |
1,903,448 | 45,738,630 |
Summing array entries along a particular line, python
|
<p>I have a 2D array, and would like to sum its entries along a particular line. It should basically be like <code>numpy.sum()</code>, not along a column or row but rather along a line (given by an equation).</p>
<p>I don't really know where to start from. There is <a href="https://stackoverflow.com/questions/25328473/sum-elements-along-a-line-of-numpy-array">this answer</a> which uses a Radon transform (though I haven't managed to properly install the skimage package).</p>
<p>Is there any built-in function I can start from?</p>
|
<p>Here's what I've come up with:</p>
<pre><code>array = [[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]]
def points_on_line(x0,y0, x1,y1):
dx = x1 - x0
dy = y1 - y0
D = 2*dy - dx
y = y0
for x in range(x0, x1):
yield (x,y)
if D > 0:
y = y + 1
D = D - 2*dx
D = D + 2*dy
print(sum([array[y][x] for x, y in points_on_line(0,0, 5, 4)]))
</code></pre>
<p>This uses <a href="https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm" rel="nofollow noreferrer">Bresenham's Line Algorithm</a> to find the points lying on the line between two points. It's not perfect though, and won't return <strong>all</strong> the points that it touches. This should be a good jumping-off point, though!</p>
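<p>A hedged workaround for that caveat: sample the segment densely and record each distinct cell it passes through (the helper name, sample count, and endpoints are made up for illustration):</p>

```python
def cells_on_line(x0, y0, x1, y1, samples=1000):
    """Collect every distinct (x, y) cell a densely sampled segment crosses."""
    cells = []
    for k in range(samples + 1):
        t = k / samples
        cell = (round(x0 + t * (x1 - x0)), round(y0 + t * (y1 - y0)))
        if not cells or cells[-1] != cell:
            cells.append(cell)
    return cells

# 6x5 grid as above; endpoints chosen to stay inside its bounds
array = [[1, 2, 3, 4, 5] for _ in range(6)]
line_sum = sum(array[y][x] for x, y in cells_on_line(0, 0, 4, 5))
```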
|
python|arrays|numpy
| 0 |
1,903,449 | 54,853,639 |
vimeo api python using keywords to search for videos
|
<p>I'm trying to use the Vimeo API with Python, but I'm stuck trying to find videos using keywords.</p>
<p>What I have is this, after successfully registering with Vimeo:</p>
<pre><code>import vimeo
client = vimeo.VimeoClient(
token='my_token',
key='my_key',
secret='my_secret_key'
)
about_me = client.get('/me',params={'fields':'uri,name'})
json.loads(about_me.text)
</code></pre>
<p>These return my user credentials. Now I want to use a similar approach to get videos using query keywords, like on <a href="https://developer.vimeo.com/api/reference/videos?version=3.4#search_videos" rel="nofollow noreferrer">their page</a>, but I cannot get it to work.
So, I want to have a JSON returned with movies based on keywords (like 'interstellar trailer'), not the URI or id of Vimeo.
But for some reason, I can't get the query keyword to work, and from the linked page above, I cannot figure out how to implement it in my code.</p>
<p>27-02-19: UPDATE: I figured it out, and will update my solution here, soon.</p>
|
<p>The <code>/me</code> <a href="https://developer.vimeo.com/api/reference/users#get_user" rel="nofollow noreferrer">endpoint</a> only returns the user object for the authenticated user.</p>
<p>To get and search for videos, use the <code>/me/videos</code> or <code>/videos</code> <a href="https://developer.vimeo.com/api/reference/videos?version=3.4#search_videos" rel="nofollow noreferrer">endpoint</a> (depending on if you are searching for videos on your own account, or searching for videos from other users public on Vimeo). </p>
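<p>As a sketch, and assuming a client with the public scope, a keyword search request could be built like this (the field list is illustrative; <code>query</code> is the parameter that drives keyword search on <code>/videos</code>):</p>

```python
def build_video_search(query, per_page=5):
    """Build path and params for a keyword search on the /videos endpoint."""
    # `query` is the keyword-search parameter; the field list is illustrative
    return "/videos", {"query": query, "per_page": per_page, "fields": "uri,name,link"}

path, params = build_video_search("interstellar trailer")
# response = client.get(path, params=params)
# results = json.loads(response.text)
```

<p>The tuple built here would be passed as <code>client.get(path, params=params)</code>, mirroring the <code>/me</code> call in the question.</p>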
|
python|api|vimeo|vimeo-api
| 0 |
1,903,450 | 73,721,837 |
Is there a way to store the results of a loop into a Sequence variable in Python
|
<p>I have a list of names</p>
<pre><code>names = ["David","Olivia","Charlotte"]
</code></pre>
<p>I want to measure the length of each name in this list, store the integer values in a list like L=[], and use the max function to determine which name is the longest. Something like below:</p>
<pre><code>L = []
for x in names:
L = len(x)
print(max(L))
</code></pre>
<p>but I get an error in the terminal that the defined L variable is not iterable. How do I store the results in the L variable as a sequence?</p>
|
<p><code>append</code> to the list, don't overwrite it.</p>
<pre class="lang-py prettyprint-override"><code>L: list[int] = []
for x in names:
L.append(len(x))
print(max(L))
</code></pre>
<hr />
<p>Actually, it's more efficient to create the list “on the fly”:</p>
<pre class="lang-py prettyprint-override"><code>L: list[int] = [len(x) for x in names]
print(max(L))
</code></pre>
<p>... or not to create a list at all:</p>
<pre class="lang-py prettyprint-override"><code>print(max(len(x) for x in names))
</code></pre>
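<p>Since the stated goal was to determine which name is the longest, not just how long it is, <code>max</code> with a key function returns the name itself:</p>

```python
names = ["David", "Olivia", "Charlotte"]

# key=len makes max compare the names by length and return the name itself
longest = max(names, key=len)
print(longest)  # → Charlotte
```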
|
python|for-loop
| 2 |
1,903,451 | 12,770,913 |
Randint error in Python?
|
<p>I'm writing up the code for the RSA algorithm as a project. For those who are familiar with this encryption system, the function below calculates the value of phi(n). However, when I run this, it comes up with this error:</p>
<pre><code> Traceback (most recent call last):
File "C:\Python27\RSA.py", line 127, in <module>
phi_n()
File "C:\Python27\RSA.py", line 30, in phi_n
prime_f = prime_list[random.randint(0,length)]
File "C:\Python27\lib\random.py", line 241, in randint
return self.randrange(a, b+1)
File "C:\Python27\lib\random.py", line 217, in randrange
raise ValueError, "empty range for randrange() (%d,%d, %d)" % (istart, istop, width)
ValueError: empty range for randrange() (0,0, 0)
</code></pre>
<p>I don't fully understand why this error is coming up. Here is my code for the phi_n function: </p>
<pre><code>def phi_n():
global prime_list
length = len(prime_list) - 1
prime_f = prime_list[random.randint(0,length)]
prime_s = prime_list[random.randint(0,length)]
global pq
n = prime_f * prime_s
global phi_n
phi_n = (prime_f -1) * (prime_s -1)
return phi_n
</code></pre>
<p>Comment will be appreciated. </p>
<p>Thanks </p>
|
<p>The problem is <code>prime_list</code> is empty and therefore <code>length</code> will equal <code>-1</code> resulting in the call <code>random.randint(0, -1)</code> which is invalid for obvious reasons.</p>
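<p>A sketch of a defensive fix (assuming <code>prime_list</code> should have been populated earlier): guard against the empty list, and prefer <code>random.choice</code> for picking a random element:</p>

```python
import random

prime_list = [2, 3, 5, 7, 11]  # hypothetical: must be populated before use

if prime_list:
    # random.choice picks one element; it is equivalent to
    # prime_list[random.randint(0, len(prime_list) - 1)]
    prime_f = random.choice(prime_list)
    prime_s = random.choice(prime_list)
    phi = (prime_f - 1) * (prime_s - 1)
else:
    raise ValueError("prime_list is empty; populate it before calling phi_n()")
```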
|
python|rsa
| 5 |
1,903,452 | 21,632,587 |
How to know using Django if server is secure (uses https)
|
<p>I work on a Django based app, and I want to know if there's a way to know if my server uses http connections or https.</p>
<p>I know that using</p>
<pre><code>import socket
if socket.gethostname().startswith('****'):
</code></pre>
<p>I can get the hostname, is it possible to do something like that so I can get to know if the hosting uses a ssl certificate?</p>
<p><strong>PS:</strong> I'm a rookie here, so I'm asking to see if it's possible and, if it is, how I should do it.
Thanks</p>
|
<p>it's completely possible:</p>
<pre><code>def some_request_function(request):
if request.is_secure():
#You are safe!
else:
#You are NOT safe!
</code></pre>
<p>More details:
<a href="https://docs.djangoproject.com/en/2.0/ref/request-response/#django.http.HttpRequest.is_secure" rel="noreferrer">https://docs.djangoproject.com/en/2.0/ref/request-response/#django.http.HttpRequest.is_secure</a></p>
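<p>One caveat worth adding: behind a TLS-terminating proxy or load balancer, <code>is_secure()</code> can report <code>False</code> even for HTTPS requests, because the proxy forwards plain HTTP to Django. The documented fix is the <code>SECURE_PROXY_SSL_HEADER</code> setting, sketched below for <code>settings.py</code> (only set it if your proxy really controls that header):</p>

```python
# settings.py -- trust the proxy's X-Forwarded-Proto header to decide HTTPS
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
```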
|
python|django|ssl|https|ssl-certificate
| 13 |
1,903,453 | 24,885,827 |
Python tkinter: How can I ensure only ONE child window is created onclick and not a new window every time the button is clicked?
|
<p>Currently learning tkinter and have come to a dead end.
Each time I click one of the buttons in my GUI (after typing 'username' and 'password' into the log in screen) a new child window is created and appears. As it should. What I would now like to do is to ensure that only one window is created and any subsequent clicks do not create yet more windows. How could this be done?</p>
<pre><code>from tkinter import *
class Main():
def __init__(self, master):
self.master = master
self.master.title("Main Window")
self.button1 = Button(self.master, text="Click Me 1", command = self.Open1)
self.button1.grid(row=0, column=0, sticky=W)
self.button2 = Button(self.master, text="Click Me 2", command = self.Open2)
self.button2.grid(row=0, column=2, sticky=W)
self.button3 = Button(self.master, text="Close", command = self.Close)
self.button3.grid(row=1, column=0, sticky=W)
def Login(self):
login_win = Toplevel(self.master)
login_window = Login(login_win)
login_window.Focus()
#main_window.Hide()
def Open1(self):
second_window = Toplevel(self.master)
window2 = Second(second_window)
def Open2(self):
third_window = Toplevel(self.master)
window3 = Third(third_window)
def Close(self):
self.master.destroy()
#def Hide(self):
#self.master.withdraw()
#def Appear(self):
#self.master.deiconify()
class Second():
def __init__(self, master):
self.master = master
self.master.title("Child 1 of Main")
self.master.geometry("300x300")
self.button4 = Button(self.master, text="Click Me 1", command = self.Open_Child)
self.button4.grid(row=0, column=0, sticky=W)
def Open_Child(self):
second_child_window = Toplevel(self.master)
window4 = Second_Child(second_child_window)
class Third():
def __init__(self, master):
self.master = master
self.master.geometry("300x300")
self.master.title("Child 2 of Main")
class Second_Child():
def __init__(self, master):
self.master = master
self.master.geometry("400x300")
self.master.title("Child of 'Child 1 of Main'")
class Login():
def __init__(self, window):
self.window = window
self.window.title("Current Library")
Label(self.window, text="Log in to use this program:").grid(row=0, column=0, sticky=W)
self.userbox=Entry(self.window, width=20, bg="light green")
self.userbox.grid(row=1, column=0, sticky=W)
self.passbox=Entry(self.window, width=20, bg="light green")
self.passbox.grid(row=2, column=0, sticky=W)
Button(self.window, text="Submit", width=5, command=self.clicked).grid(row=3, column=0, sticky=W)
def Focus(self):
self.window.attributes('-topmost', 1)
self.window.grab_set()
def clicked(self):
username = self.userbox.get()
password = self.passbox.get()
if password == "password" and username == "username":
self.correct = True
self.window.grab_release()
self.window.destroy()
else:
pass
root_window = Tk()
root_window.iconbitmap(default='transparent.ico')
root_window.geometry("200x100")
main_window = Main(root_window)
main_window.Login()
root_window.mainloop()
</code></pre>
<p>Inside the functions which are called when the buttons are clicked, could I add an IF statement: IF the window object does not exist, THEN instantiate the window object, ELSE pass?</p>
<pre><code>def Open1(self):
if window2 == False:
second_window = Toplevel(self.master)
window2 = Second(second_window)
else:
pass
</code></pre>
<p>If this is the correct logic then how can I check if a window object has already been created because the above doesn't work as I am referencing the object before it is created?</p>
<p>Many thanks.</p>
|
<p>Initialize the child window variable in Main's <code>__init__</code>.</p>
<pre><code>self.child_window = None
</code></pre>
<p>Then you can check whether it exists in <code>Open1</code>.</p>
<pre><code>def Open1(self):
if not self.child_window:
self.child_window = Second(Toplevel(self.master))
</code></pre>
<p>By the way, if you intend for Second to act like its own window, it's a bit awkward that you have to create a Toplevel every time you want to create a Second. Conventionally, you would make <code>Second</code> a subclass of <code>Toplevel</code>, so it can be created on its own. Something like:</p>
<pre><code>class Second(Toplevel):
def __init__(self, master, *args, **kargs):
Toplevel.__init__(self, master, *args, **kargs)
#rest of initialization goes here.
#Use `self` everywhere you previously used `self.master`.
</code></pre>
<p>Now you could just do:</p>
<pre><code>def Open1(self):
if not self.child_window:
self.child_window = Second(self.master)
</code></pre>
|
python|tkinter
| 5 |
1,903,454 | 38,304,691 |
drop rows with errors for pandas data coercion
|
<p>I have a dataframe, for which I need to convert columns to floats and ints, that has bad rows, i.e., values in a column that should be a float or an integer are instead string values.</p>
<p>If I use <code>df.bad.astype(float)</code>, I get an error, this is expected. </p>
<p>If I use <code>df.bad.astype(float, errors='coerce')</code>, or <code>pd.to_numeric(df.bad, errors='coerce')</code>, bad values are replaced with <code>np.NaN</code>, also according to spec and reasonable. </p>
<p>There is also <code>errors='ignore'</code>, another option that ignores the errors and leaves the erroring values alone. </p>
<p>But actually, I want to not ignore the errors, but drop the rows with bad values. How can I do this? </p>
<p>I can ignore the errors and do some type checking, but that's not an ideal solution, and there might be something more idiomatic to do this. </p>
<h3>Example</h3>
<pre><code>test = pd.DataFrame(["3", "4", "problem"], columns=["bad"])
test.bad.astype(float) ## ValueError: could not convert string to float: 'problem'
</code></pre>
<p>I want something like this:</p>
<pre><code>pd.to_numeric(df.bad, errors='drop')
</code></pre>
<p>And this returns dataframe with only the 2 good rows.</p>
|
<p>Since the bad values are replaced with <code>np.NaN</code>, wouldn't it simply be <code>df.dropna()</code> to get rid of the bad rows now?</p>
<p>EDIT:
Since you need to not drop the initial NaNs, maybe you could use <code>df.fillna()</code> prior to using <code>pd.to_numeric</code></p>
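<p>A minimal sketch of the coerce-then-drop approach, using the example data from the question (assuming the column is named <code>bad</code>):</p>

```python
import pandas as pd

test = pd.DataFrame(["3", "4", "problem"], columns=["bad"])

# coerce: unconvertible strings become NaN, then drop only those rows
test["bad"] = pd.to_numeric(test["bad"], errors="coerce")
cleaned = test.dropna(subset=["bad"])

print(cleaned["bad"].tolist())  # [3.0, 4.0]
```

If the column may contain legitimate NaNs you want to keep, record their positions before coercing and only drop the newly created NaNs.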
|
python|pandas|coercion
| 3 |
1,903,455 | 38,276,130 |
Python Adding one hour to time.time()
|
<p>Hi i want to add one hour to Python time.time().</p>
<p>My current way of doing it is :</p>
<pre><code>t = int(time.time())
expiration_time = t + 3600
</code></pre>
<p>Is this considered bad for any reasons?
If so is there a better way of doing this easily.</p>
|
<p>It's not considered bad for any reason. I do it this way all the time. Here is an example:</p>
<pre><code>import time
t0 = time.time()
print time.strftime("%I %M %p",time.localtime(t0))
03 31 PM
t1 = t0 + 60*60
print time.strftime("%I %M %p",time.localtime(t1))
04 31 PM
</code></pre>
<p>Here are other ways of doing it using 'datetime'</p>
<pre><code>import datetime
t1 = datetime.datetime.now() + datetime.timedelta(hours=1)
t2 = datetime.datetime.now() + datetime.timedelta(minutes=60)
</code></pre>
|
python|python-2.7
| 18 |
1,903,456 | 31,139,948 |
flask admin custom QueryAjaxModelLoader
|
<p>From what I understand, Flask Admin supports AJAX use for foreign key model loading. The <a href="https://flask-admin.readthedocs.org/en/latest/api/mod_model/">Flask Admin - Model Documentation</a> covers the basics under the heading <code>form_ajax_refs</code>. I have managed to use this successfully on many occasions, however I am having issues with the level of customisation that I hope to achieve. Let me elaborate.</p>
<p>I have a <code>Product</code> model, an <code>Organisation</code> model and a join table to relate them, defined as so:</p>
<pre><code>class Product(Base):
__tablename__ = "products"
product_uuid = Column(UUID(as_uuid=True), primary_key=True)
title = Column(String, nullable=False)
description = Column(String, nullable=False)
last_seen = Column(DateTime(timezone=True), nullable=False, index=True)
price = Column(Numeric(precision=7, scale=2), nullable=False, index=True)
class Organisation(Base):
__tablename__ = "organisations"
org_id = Column(String, primary_key=True)
org_name = Column(String, nullable=False)
products = relationship(
Product,
secondary="organisation_products",
backref="organisations"
)
organisation_products_table = Table(
"organisation_products",
Base.metadata,
Column("org_id", String, ForeignKey("organisations.org_id"), nullable=False),
Column("product_uuid", UUID(as_uuid=True), ForeignKey("products.product_uuid"), nullable=False),
UniqueConstraint("org_id", "product_uuid"),
)
</code></pre>
<p>In a Flask Admin Model view of a model called <code>CuratedList</code> that has a foreign key constraint to the <code>Product</code> model, I am using <code>form_ajax_refs</code> in the form create view, to allow selection of dynamically loaded <code>Product</code> items.</p>
<pre><code>form_ajax_refs = {"products": {"fields": (Product.title,)}}
</code></pre>
<p>This works nicely to show me ALL rows of the <code>Product</code> model.</p>
<p>My current requirement, however, is to <strong>only use the AJAX model loader to show products with a specific org_id, for example "Google".</strong></p>
<h2>Attempt No. 1</h2>
<p>Override <code>get_query</code> function of the <code>ModelView</code> class to join on <code>organisation_products_table</code> and filter by <code>org_id</code>. This looks something like this:</p>
<pre><code>def get_query(self):
return (
self.session.query(CuratedList)
.join(
curated_list_items_table,
curated_list_items_table.c.list_uuid == CuratedList.list_uuid
)
.join(
Product,
Product.product_uuid == curated_list_items_table.c.product_uuid
)
.join(
organisation_products_table,
organisation_products_table.c.product_uuid == Product.product_uuid
)
.filter(CuratedList.org_id == "Google")
.filter(organisation_products_table.c.org_id == "Google")
)
</code></pre>
<p>Unfortunately, this does not solve the issue, and returns the same behaviour as:</p>
<pre><code>def get_query(self):
return (
self.session.query(CuratedList)
.filter(CuratedList.org_id == self._org_id)
)
</code></pre>
<p>It does not affect the behaviour of <code>form_ajax_refs</code>.</p>
<h2>Attempt No.2</h2>
<p>The <a href="https://flask-admin.readthedocs.org/en/latest/api/mod_model/">Flask Admin - Model Documentation</a> mentions another way of using <code>form_ajax_refs</code>, which involves using the <code>QueryAjaxModelLoader</code> class.</p>
<p>In my second attempt, I subclass the <code>QueryAjaxModelLoader</code> class and try to override the values of it's <code>model</code>, <code>session</code> or <code>fields</code> variables. Something like this:</p>
<pre><code>class ProductAjaxModelLoader(QueryAjaxModelLoader):
def __init__(self, name, session, model, **options):
super(ProductAjaxModelLoader, self).__init__(name, session, model, **options)
fields = (
session.query(model.title)
.join(organisation_products_table)
.filter(organisation_products_table.c.org_id == "Google")
).all()
self.fields = fields
self.model = model
self.session = session
</code></pre>
<p>And then instead of the previous <code>form_ajax_refs</code> approach, I use my new <code>AjaxModelLoader</code> like so:</p>
<pre><code>form_ajax_refs = {
"products": ProductAjaxModelLoader(
"products", db.session, Product, fields=['title']
)
}
</code></pre>
<p>Unfortunately, whether overriding the values of <code>session</code> or <code>model</code> with my query returns no products from the AJAX loader, and overriding <code>fields</code> still returns all products; not just products of org_id "Google".</p>
<h2>What I Hope to Not Resort to</h2>
<p>I would like to be able to achieve this <strong>without</strong> having to create a new model for each org, as this will prove to be non-scalable and of bad design.</p>
<p>Any suggestions welcomed. Thanks.</p>
|
<p>Thanks to Joe's comment on my original question, I have formulated a working solution:</p>
<p>Override <code>AjaxModelLoader</code> function <code>get_list</code> like so:</p>
<pre><code>def get_list(self, term, offset=0, limit=DEFAULT_PAGE_SIZE):
filters = list(
field.ilike(u'%%%s%%' % term) for field in self._cached_fields
)
filters.append(Organisation.org_id == "Google")
return (
db.session.query(Product)
.join(organisation_products_table)
.join(Organisation)
.filter(*filters)
.all()
)
</code></pre>
|
python|flask|sqlalchemy|flask-admin
| 9 |
1,903,457 | 31,203,983 |
Sort list with Python
|
<p>I have an error with Python 2.7.
I am trying to sort a list of elements.
Here is my code: </p>
<pre><code>index=7
print(len(myList)) #print 16
sortedList = sorted(myList,key=lambda x: float(x[index]),reverse=True)
</code></pre>
<p>I can't understand why I am having this error, my index is less than the list length... Any ideas? </p>
<pre><code> sortedList = sorted(myList,key=lambda x: float(x[index]),reverse=True)
IndexError: list index out of range
</code></pre>
|
<p>The issue is not that your <code>index</code> exceeds <code>len(myList)</code>. In this function</p>
<pre><code> sortedList = sorted(myList,key=lambda x: float(x[index]),reverse=True)
</code></pre>
<p>each item of the list is passed as <code>x</code>, so the key function tries to access <code>x[7]</code>; not every inner item is guaranteed to have 8 elements.</p>
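<p>A small reproduction of the failure mode (hypothetical data): the outer list is long enough, but one of the <em>inner</em> items is not:</p>

```python
index = 7
# hypothetical data: 3 rows, but the last row has fewer than 8 elements
myList = [list(range(8)), list(range(8)), [1, 2, 3]]

try:
    sorted(myList, key=lambda x: float(x[index]), reverse=True)
except IndexError as e:
    print("IndexError:", e)  # the short row has no element at index 7
```

Checking <code>min(len(x) for x in myList)</code> is a quick way to confirm whether this is the cause.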
|
python|sorting
| 2 |
1,903,458 | 31,186,453 |
IndexError when dropping rows in Dataframe
|
<p>I have the following code:</p>
<pre><code>for (i1, row1), (i2, row2) in pairwise(df.iterrows()):
if row1['months_to_maturity'] == row2['months_to_maturity'] and
row1['coupon'] == row2['coupon']:
df = df.drop(df.index[[i1]])
</code></pre>
<p>What I am trying to do is to get rid of rows if the following condition is satisfied</p>
<pre><code>row1['months_to_maturity'] == row2['months_to_maturity'] and
row1['coupon'] == row2['coupon']
</code></pre>
<p>The <code>pairwise(df.iterrows())</code> method gives the current row and the next row of the <code>dataframe</code>. </p>
<p>Unfortunately, when I execute the code above I get this error</p>
<blockquote>
<p>IndexError: index 12 is out of bounds for axis 1 with size 12</p>
</blockquote>
<p>I did <code>print(len(df.index))</code> at the beginning of this section and got <code>12</code> printed, so I am slightly confused why <code>IndexError</code> raises.</p>
|
<p>It seems to me that you are iterating over rows, matching a condition and then dropping rows based on the condition that is met. I don't think this is the optimal way of doing what you are trying to do.</p>
<p>I am going to recommend that you do things completely differently. Try this, given dataframe df,</p>
<pre><code>df = pd.DataFrame({'a': [1,2,3,4,4,4,5,5,5]})
df['b'] = df.a
print (df)
a b
0 1 1
1 2 2
2 3 3
3 4 4
4 4 4
5 4 4
6 5 5
7 5 5
8 5 5
</code></pre>
<p>To get to the next row, I could do,</p>
<pre><code>df_next = df.shift()
print (df_next)
a b
0 NaN NaN
1 1 1
2 2 2
3 3 3
4 4 4
5 4 4
6 4 4
7 5 5
8 5 5
</code></pre>
<p>To find the matching rows and drop them, I could do,</p>
<pre><code>df2 = df.drop(df.index[(df.b==df_next.b) & (df.a==df_next.a)])
a b
0 1 1
1 2 2
2 3 3
3 4 4
6 5 5
</code></pre>
<p>Effectively, this boils down to two lines of code,</p>
<pre><code>df_next = df.shift()
df2 = df.drop(df.index[(df.b==df_next.b) & (df.a==df_next.a)])
</code></pre>
<p>This is the magic of pandas</p>
|
python|pandas
| 2 |
1,903,459 | 40,226,526 |
Unable to build the TensorFlow model using bazel build
|
<p>I have set up the TensorFlow server in a docker machine running on a windows machine by following the instructions from <a href="https://tensorflow.github.io/serving/serving_basic" rel="nofollow">https://tensorflow.github.io/serving/serving_basic</a>. I am successfully able to build and run the mnist_model. However, when I am trying to build the model for the wide_n_deep_tutorial example by running the following command "bazel build //tensorflow_serving/example:wide_n_deep_tutorial.py" the model is not successfully built as there are no files generated in bazel-bin folder. </p>
<p><a href="https://i.stack.imgur.com/95Cu3.png" rel="nofollow"><img src="https://i.stack.imgur.com/95Cu3.png" alt="enter image description here"></a></p>
<p>Since there is no error message displayed while building the model, I am unable to figure out the problem. I would really appreciate if someone can help me debug and solve the problem.</p>
|
<p>Adding the target for the wide and deep model to the BUILD file solves the problem.</p>
<p>Added the following to the BUILD file:</p>
<pre><code>py_binary(
name = "wide_n_deep_model",
srcs = [
"wide_n_deep_model.py",
],
deps = [
"//tensorflow_serving/apis:predict_proto_py_pb2",
"//tensorflow_serving/apis:prediction_service_proto_py_pb2",
"@org_tensorflow//tensorflow:tensorflow_py",
],
)
</code></pre>
|
tensorflow|bazel|tensorflow-serving
| 0 |
1,903,460 | 40,114,100 |
Uploading different versions (python 2.7 vs 3.5) to PyPI
|
<p>I have a package I'm uploading to PyPI with two different versions of the code: one for Python 2.7 and one for Python 3.5.</p>
<p>What is the standard for uploading this to PyPI? Do I use two separate <code>setup.py</code> files? </p>
<p>When users run <code>pip install mypackage</code> will it automatically download the correct version? </p>
|
<p>TL;DR: add <code>python_requires</code> on <code>setup.py</code>. Use <a href="https://github.com/pypa/twine" rel="nofollow noreferrer">twine</a> to upload the package to PyPI.</p>
<p>Take IPython as an example: its 6.0.0+ releases support Python 3.3+ only, while the 5.x series still supports Python 2.x. If you install it using <code>pip</code> >= 9.0.1, <code>pip install ipython</code> will select the latest 5.x for Python 2 and the latest 6.x for Python 3.</p>
<h1>Modify <code>setup.py</code></h1>
<p>First, you need to set the <a href="https://www.python.org/dev/peps/pep-0345/#requires-python" rel="nofollow noreferrer">Requires-Python</a> (<a href="https://www.python.org/dev/peps/pep-0440/#version-specifiers" rel="nofollow noreferrer">PEP 440</a>) metadata in <code>setup.py</code> by passing the <code>python_requires</code> argument to the <code>setup()</code> function.</p>
<p>For example, the following is the <code>setup.py</code> for the Python 2.7 version:</p>
<pre><code>setup(
name='some-package',
version='2.3.3',
...,
python_requires='==2.7.*'
)
</code></pre>
<p>For Python 3.5+, just change it to <code>python_requires='>=3.5'</code>.</p>
<p>Of course, the two packages must have different version numbers. Otherwise PyPI will reject one.</p>
<p>You can use two separate <code>setup.py</code> files to do this,
or just use one file and set <code>python_requires</code> argument dynamically.</p>
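<p>A minimal sketch of the dynamic variant (assumption: you build each release with the interpreter it targets, so the running interpreter tells you which version specifier applies):</p>

```python
import sys

# choose the Requires-Python specifier based on the interpreter
# building this release (assumed build workflow, not the only option)
if sys.version_info[0] == 2:
    python_requires = '==2.7.*'
else:
    python_requires = '>=3.5'

print(python_requires)
# then: setup(..., python_requires=python_requires)
```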
<h1>Upload to PyPI</h1>
<p><code>python setup.py sdist upload</code> seems to not upload package meta-data (which contains <em>Requires-Python</em>) to PyPI.</p>
<p>I found the easiest way to do it correctly is using <a href="https://github.com/pypa/twine" rel="nofollow noreferrer">twine</a>.</p>
<blockquote>
<ol>
<li><p>Create some distributions in the normal way:</p>
<pre><code>$ python setup.py sdist bdist_wheel
</code></pre></li>
<li><p>Register your project (if necessary)</p></li>
<li><p>Upload with twine</p>
<pre><code>$ twine upload dist/*
</code></pre></li>
</ol>
</blockquote>
<p>Repeat step 1 & 3 for both Python 2 and 3 version.</p>
|
python|pip|pypi
| 10 |
1,903,461 | 40,170,101 |
Python, read uart and post to MQTT, has extra spaces
|
<p>I'm reading a string from a microcontroller to Raspberry Pi using Python. The string looks like this:</p>
<blockquote>
<p>5050313 9</p>
</blockquote>
<p>I then split this up into MQTT topic and payload. The value left of the " " is the topic, and the one right of " " is the payload. My code adds extra new lines to the MQTT topic. How can I avoid these new lines? I've even tried <code>rstrip()</code> on the payload. Here's the code:</p>
<pre><code>import serial
import time
import paho.mqtt.publish as publish
def readlineCR(port):
rv = ""
while True:
ch = port.read()
rv += ch
if ch=='\r\n' or ch=='':
return rv
port = serial.Serial("/dev/ttyAMA0", baudrate=115200, timeout=3.0)
while True:
rcv = port.readline()
print(rcv)
if len(rcv) > 4:
mytopic, mypayload = rcv.split(" ")
mypayload.rstrip()
publish.single(mytopic, mypayload, hostname="localhost")
</code></pre>
<p>If I subscribe to that topic, I get this exactly:</p>
<blockquote>
<p>pi@raspberrypi:/media/pycode $ mosquitto_sub -h localhost -t
50C51C570B00</p>
<p>97</p>
<p>98</p>
<p>99</p>
</blockquote>
<p>There shouldn't be any extra lines between the numbers. It should just be </p>
<p>97
98
99</p>
<p>Any ideas where these new lines are coming from?</p>
|
<p>You didn't save the result of <code>mypayload.rstrip()</code> into a variable; string methods return a new string and leave <code>mypayload</code> itself unaffected. Look at this example:</p>
<pre><code>>>> s='\r\n97\r\n'
>>> s.strip()
'97'
>>> s
'\r\n97\r\n'
</code></pre>
<p>So your code should be:</p>
<pre><code>if len(rcv) > 4:
mytopic, mypayload = rcv.split(" ")
v=mypayload.strip()
publish.single(mytopic, v, hostname="localhost")
</code></pre>
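<p>Putting the fix together with the question's parsing step (hypothetical sample line from the serial port):</p>

```python
# hypothetical serial line: topic, a space, payload, then CRLF
rcv = "50C51C570B00 97\r\n"

mytopic, mypayload = rcv.split(" ")
mypayload = mypayload.strip()   # reassign: str methods return a new string

print(repr(mytopic), repr(mypayload))  # '50C51C570B00' '97'
```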
|
python|newline|mqtt
| 0 |
1,903,462 | 29,046,955 |
How to install and use beautifulsoup4
|
<p>I want to use BeautifulSoup on python3.4.3 (windows 7). I installed it with cmd.exe. But as I try to import it I get an error saying no module named <code>BeautifulSoup</code> or <code>BeautifulSoup4</code>.</p>
<p>Could you please help me? This is how I installed <code>beautifulsoup4</code>:</p>
<pre><code>C:\Python34\Scripts\pip.exe install beautifulsoup4
</code></pre>
|
<p>The module is named <code>bs4</code>:</p>
<pre><code>from bs4 import BeautifulSoup
</code></pre>
<p>Make sure you use the <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">correct documentation</a> for BeautifulSoup version 4, it lists details like these.</p>
<p>Yes, the project name (<code>beautifulsoup4</code>) differs from the module you import!</p>
|
python|python-3.x|beautifulsoup
| 2 |
1,903,463 | 8,597,126 |
cx_Freeze/ldap: ImportError: DLL Load Failed %1 is not a valid Win32 application
|
<p>I am using cx_Freeze to convert my python program to an exe. It all runs fine when it is a .py however when I come to run the exe, I get the following traceback; </p>
<pre><code>Traceback (most recent call last):
File "UCA_Starter.py", line 45, in <module>
File "UCA_Starter.py", line 39, in main
File "C:\Python26\Scripts\ClientSelector.py", line 20, in <module>
import login_d
File "C:\Python26\Scripts\login_d.py", line 6, in <module>
import ad_auth
File "C:\Python26\Scripts\ad_auth.py", line 1, in <module>
import ldap
File "C:\Python26\lib\site-packages\ldap\__init__.py", line 22, in <module>
from _ldap import *
File "ExtensionLoader_ldap__ldap.py", line 12, in <module>
ImportError: DLL load failed: %1 is not a valid Win32 application.
</code></pre>
<p>I have googled the problem but I am still not sure what it even means or if it is a problem with cx_Freeze or the module or if i'm just missing dll's. Any help would be much appreciated. Thanks in advance!</p>
|
<p>Try installing the 32-bit version of cx_Freeze. That worked for me.</p>
|
python|ldap|cx-freeze
| 3 |
1,903,464 | 58,624,641 |
How to prevent PytestCollectionWarning when testing class Testament via pytest
|
<p><strong>Update to more general case:</strong>
How can I prevent a PytestCollectionWarning when testing a Class Testament via pytest? Simple example for testament.py:</p>
<pre><code>class Testament():
def __init__(self, name):
self.name = name
def check(self):
return True
</code></pre>
<p>And the test_testament.py</p>
<pre><code>from testament.testament import Testament
def test_send():
testament = Testament("Paul")
assert testament.check()
</code></pre>
<p>This creates a PytestCollectionWarning when run with pytest. Is there a way to suppress this warning for the imported module without turning all warnings off?</p>
|
<p>You can set a <code>__test__ = False</code> attribute in classes that pytest should ignore:</p>
<pre><code>class Testament:
__test__ = False
</code></pre>
|
python|pytest
| 46 |
1,903,465 | 52,124,659 |
What should be 'y_train' in Keras LSTM?
|
<p>I refer to the example given at the Keras website <a href="https://keras.io/getting-started/sequential-model-guide/" rel="nofollow noreferrer">here</a>:</p>
<pre><code>from keras.models import Sequential
from keras.layers import LSTM, Dense
import numpy as np
data_dim = 16
timesteps = 8
num_classes = 10
# expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(LSTM(32, return_sequences=True,
input_shape=(timesteps, data_dim))) # returns a sequence of vectors of dimension 32
model.add(LSTM(32, return_sequences=True)) # returns a sequence of vectors of dimension 32
model.add(LSTM(32)) # return a single vector of dimension 32
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
# Generate dummy training data
x_train = np.random.random((1000, timesteps, data_dim))
y_train = np.random.random((1000, num_classes))
# Generate dummy validation data
x_val = np.random.random((100, timesteps, data_dim))
y_val = np.random.random((100, num_classes))
model.fit(x_train, y_train, batch_size=64, epochs=5, validation_data=(x_val, y_val))
</code></pre>
<p>For a real-world example, what should be y_train and y_val? Should they be the same as x_train and x_val respectively, since they come from the same sequence?</p>
<p>Also, how should I understand data_dim and num_classes?</p>
|
<p>Since your parameter <code>return_sequences = True</code>, your LSTM layers are fed numpy arrays of shape <code>[batch_size, time_steps, input_features]</code> and pass a full sequence on to the next layer. <code>data_dim</code> is simply the number of distinct features your model takes as input at each timestep, and <code>num_classes</code> is the number of output categories. Your <code>y_train</code> will be of shape <code>[1000, 10]</code>: one class vector per sample, so it is not the same as <code>x_train</code>.</p>
<p>The key to understanding the excerpt of code you provided is that setting the parameter <code>return_sequences = True</code> enables the LSTM layer to propagate <em>sequences</em> of values to subsequent layers in the network. Note that the final LSTM layer that precedes the 10-way softmax does not set <code>return_sequences = True</code>. This is because the Dense layer cannot handle a sequence of inputs - hence, the <code>time_steps</code> dimension is collapsed and the Dense layer receives a vector of inputs, which it can process without issue. </p>
|
python|machine-learning|neural-network|keras|lstm
| 0 |
1,903,466 | 52,283,000 |
Deep learning Keras model CTC_Loss gives loss = infinity
|
<p>I have a CRNN model for text recognition; it was published on GitHub and trained on English.</p>
<p>Now I'm doing the same thing using this algorithm, but for Arabic.</p>
<p>My ctc function is:</p>
<pre><code>def ctc_lambda_func(args):
y_pred, labels, input_length, label_length = args
# the 2 is critical here since the first couple outputs of the RNN
# tend to be garbage:
y_pred = y_pred[:, 2:, :]
return K.ctc_batch_cost(labels, y_pred, input_length, label_length)
</code></pre>
<p>My Model is:</p>
<pre><code>def get_Model(training):
img_w = 128
img_h = 64
# Network parameters
conv_filters = 16
kernel_size = (3, 3)
pool_size = 2
time_dense_size = 32
rnn_size = 128
if K.image_data_format() == 'channels_first':
input_shape = (1, img_w, img_h)
else:
input_shape = (img_w, img_h, 1)
# Initialising the CNN
act = 'relu'
input_data = Input(name='the_input', shape=input_shape, dtype='float32')
inner = Conv2D(conv_filters, kernel_size, padding='same',
activation=act, kernel_initializer='he_normal',
name='conv1')(input_data)
inner = MaxPooling2D(pool_size=(pool_size, pool_size), name='max1')(inner)
inner = Conv2D(conv_filters, kernel_size, padding='same',
activation=act, kernel_initializer='he_normal',
name='conv2')(inner)
inner = MaxPooling2D(pool_size=(pool_size, pool_size), name='max2')(inner)
conv_to_rnn_dims = (img_w // (pool_size ** 2), (img_h // (pool_size ** 2)) * conv_filters)
inner = Reshape(target_shape=conv_to_rnn_dims, name='reshape')(inner)
# cuts down input size going into RNN:
inner = Dense(time_dense_size, activation=act, name='dense1')(inner)
# Two layers of bidirectional GRUs
# GRU seems to work as well, if not better than LSTM:
gru_1 = GRU(rnn_size, return_sequences=True, kernel_initializer='he_normal', name='gru1')(inner)
gru_1b = GRU(rnn_size, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru1_b')(inner)
gru1_merged = add([gru_1, gru_1b])
gru_2 = GRU(rnn_size, return_sequences=True, kernel_initializer='he_normal', name='gru2')(gru1_merged)
gru_2b = GRU(rnn_size, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru2_b')(gru1_merged)
# transforms RNN output to character activations:
inner = Dense(num_classes+1, kernel_initializer='he_normal',
name='dense2')(concatenate([gru_2, gru_2b]))
y_pred = Activation('softmax', name='softmax')(inner)
Model(inputs=input_data, outputs=y_pred).summary()
labels = Input(name='the_labels', shape=[30], dtype='float32')
input_length = Input(name='input_length', shape=[1], dtype='int64')
label_length = Input(name='label_length', shape=[1], dtype='int64')
# Keras doesn't currently support loss funcs with extra parameters
# so CTC loss is implemented in a lambda layer
loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([y_pred, labels, input_length, label_length])
# clipnorm seems to speeds up convergence
# the loss calc occurs elsewhere, so use a dummy lambda func for the loss
if training:
return Model(inputs=[input_data, labels, input_length, label_length], outputs=loss_out)
return Model(inputs=[input_data], outputs=y_pred)
</code></pre>
<p>Then i compile it with SGD optimizer (Tried SGD,adam)</p>
<pre><code>sgd = SGD(lr=0.0000002, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=sgd)
</code></pre>
<p>Then i fit the model with my training set (Images of words up to 30 characters) into (sequence of labels of 30</p>
<pre><code>model.fit_generator(generator=tiger_train.next_batch(),
steps_per_epoch=int(tiger_train.n / batch_size),
epochs=30,
callbacks=[checkpoint],
validation_data=tiger_val.next_batch(),
validation_steps=int(tiger_val.n / val_batch_size))
</code></pre>
<p>Once training starts, it gives me loss = inf; after many searches, I didn't find any similar problem.</p>
<p>So my question is: how can I solve this? What can make <code>ctc_loss</code> compute an infinite cost?</p>
<p>Thanks in advance</p>
|
<p>I found the problem: it was a dimensions problem.</p>
<p>For <code>CRNN OCR</code> using a <code>CTC layer</code>, if you are detecting a sequence of <code>length n</code>, you should have an image at least <code>(2*n-1)</code> timesteps wide. The wider the better, until you reach the best image/timesteps ratio that lets the <code>CTC layer</code> recognize the letters correctly. If the image width is less than <code>(2*n-1)</code>, it will give a nan loss.</p>
|
python-3.x|keras|deep-learning|conv-neural-network|rnn
| 1 |
1,903,467 | 52,021,746 |
Remove all whitespaces
|
<p>I have a regular expression that searches for a special class and outputs a tag.</p>
<pre><code>(?<=<div\ class="value.*?">\s+).*?(?=\s+</div>)
</code></pre>
<p>The problem is that it leaves whitespaces at the beginning of the tag</p>
<p><strong>Example:</strong></p>
<pre><code><div class="value odd"> THIS IS MY TAG </div>
</code></pre>
<p>For now my expression removes the whitespace only after the tag, but not at the beginning.</p>
<p><strong>How I can remove it at the beginning?</strong></p>
<p>I need to get only: <code>THIS IS MY TAG</code></p>
|
<p>You do not need any look-aheads or look-behinds. A correct regex is:</p>
<pre><code>'<div class="value.*?">\s+(.*?)\s+</div>'
</code></pre>
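<p>In Python, for example (a sketch using the question's sample HTML; <code>\s*</code> instead of <code>\s+</code> also tolerates tags with no surrounding whitespace):</p>

```python
import re

html = '<div class="value odd"> THIS IS MY TAG </div>'

# the capture group holds the tag text; \s* eats surrounding whitespace
m = re.search(r'<div class="value.*?">\s*(.*?)\s*</div>', html)
print(m.group(1))  # THIS IS MY TAG
```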
|
python|regex
| -1 |
1,903,468 | 51,669,102 |
how to pass data to html page using flask?
|
I have a small application where input is taken from the user and, based on it, data is displayed back in the HTML. I have to send data from Flask to display in the HTML but am unable to find a way to do it. There's no error that I encountered.
<p>[Here is my code:][1]</p>
<pre><code>
from flask import Flask,render_template
app = Flask(__name__)
data=[
{
'name':'Audrin',
'place': 'kaka',
'mob': '7736'
},
{
'name': 'Stuvard',
'place': 'Goa',
'mob' : '546464'
}
]
@app.route("/")
def home():
return render_template('home.html')
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
|
<p>Given that you're using Flask, I'll assume you use jinja2 templates. You can then do the following in your flask app:</p>
<pre><code>return render_template('home.html', data=data)
</code></pre>
<p>And parse <code>data</code> in your HTML template:</p>
<pre><code><ul>
{% for item in data %}
<li>{{item.name}}</li>
<li>{{item.place}}</li>
<li>{{item.mob}}</li>
{% endfor %}
</ul>
</code></pre>
|
python|python-3.x|flask
| 19 |
1,903,469 | 51,590,334 |
How to increase iterrows operations speed in pandas
|
<p>I have a data frame with thousands of entries that contain the regression results on different variable combinations. The combinations for the regression are formed using a list of single variables and the itertools combinations function.</p>
<p>I am now looking at a way to remove variable combinations that represent similar measures. I have made a list of all the variables that cannot occur together. My code iterates over the data frame containing the combinations and uses the <code>collections.Counter</code> function to count the number of elements within each row that are within the duplicate list. If two or more of the elements are within the row, then the row is not copied to a new cleaned data frame. My code is below:</p>
<pre><code>import pandas as pd
import itertools
import random
from collections import Counter
def remove_elements(data, dup_col, duplicate_list):
"""
Removes items from a dataframe that contain multiple items from the duplicate list.
Arguments:
data (dataframe): Dataframe containing the data
dup_col (string): Column name for within data to check for duplicates
duplicate_list (list): List of duplicate items
Returns:
cleaned dataframe
"""
df_cleaned = pd.DataFrame()
for idx, row in df.iterrows():
if any(ele in duplicate_list for ele in row[dup_col]):
lenduplicate = sum(Counter(set.intersection(set(duplicate_list), set(row[dup_col]))).values())
if lenduplicate > 1:
continue
else:
df_cleaned = df_cleaned.append(row)
else:
df_cleaned = df_cleaned.append(row)
return df_cleaned
#%% Create Data
df = pd.DataFrame()
models = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
df['model'] = [list(x) for x in itertools.combinations(models, 4)]
df['result'] = [random.randint(1,101) for i in range(0,len(df['model']))]
# Run Function
same_elements = ['A', 'D', 'G']
df = remove_elements(df, 'model', same_elements)
</code></pre>
<p>The function seems to work okay, but on a data frame which contains thousands of entries it is taking 20 to 30 minutes. Is it possible to make this operation faster?</p>
<p>Any advice welcome.</p>
<p>BJR</p>
|
<p>You should be able to use <code>np.intersect1d</code> to create a mask where the length of that intersection is two or more, and boolean-flip it to retain only the desired rows from your original dataframe.</p>
<pre><code>df[~df.model.apply(lambda L: len(np.intersect1d(L, same_elements)) >= 2)]
</code></pre>
<p>which gives you output identical to your existing code:</p>
<pre><code> model result
1 [A, B, C, E] 39
2 [A, B, C, F] 62
7 [A, B, E, F] 10
13 [A, C, E, F] 28
20 [B, C, D, E] 38
21 [B, C, D, F] 33
24 [B, C, E, G] 9
25 [B, C, F, G] 11
26 [B, D, E, F] 73
29 [B, E, F, G] 1
30 [C, D, E, F] 96
33 [C, E, F, G] 77
</code></pre>
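<p>For intuition, the same "fewer than two overlaps" filter expressed with plain Python sets (hypothetical two-row sample, no numpy):</p>

```python
same_elements = {'A', 'D', 'G'}
# hypothetical sample rows: (model combination, result)
rows = [
    (['A', 'B', 'C', 'E'], 39),   # one overlap with same_elements -> keep
    (['A', 'B', 'D', 'E'], 10),   # two overlaps -> drop
]

kept = [r for r in rows if len(set(r[0]) & same_elements) < 2]
print(kept)  # [(['A', 'B', 'C', 'E'], 39)]
```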
|
python|performance|pandas
| 1 |
1,903,470 | 51,809,507 |
IndexError: list index out of range with csv
|
<p>I have a CSV file that contains multiple columns.
When I run this code on the first column it runs fine,
but when I run it on another column it displays this error:<br>
<code>IndexError: list index out of range</code></p>
<pre><code>array_of_ids = []
with open('reactions/by_ids.csv','r',newline='') as f:
reader = csv.reader(f)
for row in reader:
array_of_ids.append(row[2])
</code></pre>
<p>so, <code>row[0]</code> works, <code>row[2]</code> does not work!!</p>
|
<p>You should take a look at your CSV file in a file editor and ensure that each line contains at least 3 entries. A common problem is when your CSV file contains a blank link e.g. the last line. </p>
<p>You could always add <code>if len(row) == 3:</code> before doing your append, this would then have the effect of skipping over any lines that are not correctly formatted, for example:</p>
<pre><code>import csv
array_of_ids = []
with open('reactions/by_ids.csv', 'r', newline='') as f:
reader = csv.reader(f)
for row in reader:
if len(row) == 3:
array_of_ids.append(row[2])
</code></pre>
|
python|csv
| 1 |
1,903,471 | 18,774,811 |
Add multiple numbers taken from user input and append it to a list
|
<p>I just have this little program here that I want to use to take an input from a user (as if entering a password). I have a list with the password in it, I then take the user's input and place it in an empty list. Then I compare that list to the one with the password in it; if it matches it will return "good". However I've only been able to do this using one digit. How would I allow the user to use multiple integers? And is this an efficient way of doing this sort of thing? Is there a quicker, more efficient method? Thanks. </p>
<pre><code>class KeyCode(object):
def access(self):
room_code = [1]
print "Before you enter you must provide the room code: "
attempt = []
code = int(raw_input('>>'))
attempt.append(code)
if attempt == room_code:
print "Good"
else:
return 'death'
class Boxing_room(KeyCode):
def enter(self):
print "This is the boxing studio"
return 'Gymnast_room'
</code></pre>
|
<p>Lists aren't necessarily needed. You can just compare strings, or if your code is only numbers, integers. </p>
<p>Also, a class isn't really helpful here (unless it's here just to learn a bit about them). A function will suffice:</p>
<pre><code>def access():
room_code = 12534
code = int(raw_input('Enter the code: '))
if code == room_code:
return 'good'
return 'death'
</code></pre>
|
python|list|append
| 1 |
1,903,472 | 69,211,482 |
how to convert string to dict with double quotes in python?
|
<p>I want to convert a string to a dict with double quotes.</p>
<p>here is example</p>
<pre class="lang-py prettyprint-override"><code>"{'a':'b'}"
</code></pre>
<p>I tried replacing the quotes and then using json.dumps,
but it did not solve the problem:</p>
<pre class="lang-py prettyprint-override"><code>a = "{'a':'b'}"
b = a.replace("'", "\"") # '{"id":"b"}'
c = json.dumps(c, ensure_ascii=False) # '"{\"id\\":\"b\"}"'
</code></pre>
<p>I want the result to look like this:</p>
<pre class="lang-py prettyprint-override"><code>{"a":"b"}
</code></pre>
<p>How can I solve this?</p>
|
<p>That's not JSON.</p>
<pre><code>>>> import json
>>> foo = "{'a':'b'}"
>>> json.loads(foo)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.9/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.9/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
</code></pre>
<p>But it does look a lot like python</p>
<pre><code>>>> import ast
>>> ast.literal_eval(foo)
{'a': 'b'}
</code></pre>
<p>Whether this works for you depends on where this string came from and what its serialization rules are.</p>
|
python|dictionary
| 2 |
1,903,473 | 62,074,341 |
Append is overwriting existing data in list of classes
|
<p>I'm making a sudoku solving program in Python. </p>
<p>I have a <strong>class</strong>, an <strong>inner class</strong>, a <strong>list</strong>, <em>constructors</em> and a <em>function</em> that would make the <strong>list</strong> of <strong>Row</strong> objects.</p>
<pre><code> class Table:
rows = []
def __init__(self):
initRows()
class Row:
numbers = []
def __init__(self):
"stuff that stores a list of integers in **numbers** list"
def initRows(self, numbers):
for i in range(9):
temp_row = self.Row(i, numbers)
self.rows.append(temp_row)
</code></pre>
<p>The program goes like this:</p>
<ol>
<li>When a <strong>Table</strong> object is created it automatically tries to make a 9 length list of <strong>Row</strong> objects with the <em>initRows()</em> function.</li>
<li>In the <em>initRows()</em> function we just create a temporary object of class <strong>Row</strong> and instantly <em>append()</em> it to the <em>rows</em> list.</li>
<li>When we create a <strong>Row</strong> object it just stores the given numbers for the given row of the sudoku table.</li>
<li>If I <em>print()</em> the <em>numbers</em> list of each temporary <strong>Row</strong> object after it is created then it gives the correct values.</li>
</ol>
<pre><code>[0, 0, 0, 7, 4, 0, 0, 0, 6]
[4, 0, 6, 8, 0, 0, 5, 0, 7]
[7, 0, 0, 0, 9, 0, 0, 0, 4]
[0, 3, 0, 9, 8, 4, 7, 0, 0]
[8, 2, 0, 6, 1, 3, 4, 0, 9]
[0, 4, 0, 0, 0, 0, 3, 0, 0]
[0, 6, 2, 3, 7, 0, 0, 0, 5]
[0, 0, 5, 4, 0, 9, 0, 0, 0]
[0, 7, 0, 0, 6, 1, 2, 0, 8]
</code></pre>
<ol start="5">
<li>But when I try to <em>print()</em> the values of each <strong>Row</strong> object after the <em>for loop</em> loops once or more or after the initialization is finished then the list is filled with only the last <strong>Row</strong> object</li>
</ol>
<pre><code>[0, 7, 0, 0, 6, 1, 2, 0, 8]
[0, 7, 0, 0, 6, 1, 2, 0, 8]
[0, 7, 0, 0, 6, 1, 2, 0, 8]
[0, 7, 0, 0, 6, 1, 2, 0, 8]
[0, 7, 0, 0, 6, 1, 2, 0, 8]
[0, 7, 0, 0, 6, 1, 2, 0, 8]
[0, 7, 0, 0, 6, 1, 2, 0, 8]
[0, 7, 0, 0, 6, 1, 2, 0, 8]
[0, 7, 0, 0, 6, 1, 2, 0, 8]
[0, 7, 0, 0, 6, 1, 2, 0, 8]
</code></pre>
<p>I was searching on the internet for <strong>hours</strong> and found that people have issues with the <em>append()</em> function of the list class but nothing helped.</p>
<p>So my question is: How could I make this work?</p>
<p><em>(If any other information/part of code is needed: ask away!)</em></p>
|
<p>It turns out in the <strong>Row</strong> class the <strong><em>numbers</em></strong> list was a <em>class</em> attribute (meaning it's shared across all objects of the <strong>Row</strong> class) rather than an <em>instance</em> attribute. </p>
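<p>A minimal sketch of the difference (with hypothetical class names, not the original code): moving the list assignment into <code>__init__</code> gives each object its own list, so <code>append</code> no longer writes to a list shared by every instance.</p>

```python
class SharedRow:
    numbers = []               # class attribute: one list shared by ALL instances
    def add(self, n):
        self.numbers.append(n)

class OwnRow:
    def __init__(self):
        self.numbers = []      # instance attribute: a fresh list per object
    def add(self, n):
        self.numbers.append(n)

a, b = SharedRow(), SharedRow()
a.add(1)
print(b.numbers)  # [1] -- b sees the value appended through a

c, d = OwnRow(), OwnRow()
c.add(1)
print(d.numbers)  # [] -- d has its own, still-empty list
```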
|
python|python-3.x|append|inner-classes
| 1 |
1,903,474 | 67,288,225 |
Snowflake: SQL compilation error: error line invalid identifier '"dateutc"'
|
<p>I'm moving data from Postgres to Snowflake. Originally it worked; however, I added:</p>
<pre><code>df_postgres["dateutc"]= pd.to_datetime(df_postgres["dateutc"])
</code></pre>
<p>because the date format was loading incorrectly into Snowflake, and now I see this error:</p>
<blockquote>
<p>SQL compilation error: error line 1 at position 87 invalid identifier
'"dateutc"'</p>
</blockquote>
<p>Here is my code:</p>
<pre><code>from sqlalchemy import create_engine
import pandas as pd
import glob
import os
from config import postgres_user, postgres_pass, host,port, postgres_db, snow_user, snow_pass,snow_account,snow_warehouse
from snowflake.connector.pandas_tools import pd_writer
from snowflake.sqlalchemy import URL
from sqlalchemy.dialects import registry
registry.register('snowflake', 'snowflake.sqlalchemy', 'dialect')
engine = create_engine(f'postgresql://{postgres_user}:{postgres_pass}@{host}:{port}/{postgres_db}')
conn = engine.connect()
#reads query
df_postgres = pd.read_sql("SELECT * FROM rok.my_table", conn)
#dropping these columns
drop_cols=['RPM', 'RPT']
df_postgres.drop(drop_cols, inplace=True, axis=1)
#changed columns to lowercase
df_postgres.columns = df_postgres.columns.str.lower()
df_postgres["dateutc"]= pd.to_datetime(df_postgres["dateutc"])
print(df_postgres.dateutc.dtype)
sf_conn = create_engine(URL(
account = snow_account,
user = snow_user,
password = snow_pass,
database = 'test',
schema = 'my_schema',
warehouse = 'test',
role = 'test',
))
df_postgres.to_sql(name='my_table',
index = False,
con = sf_conn,
if_exists = 'append',
chunksize = 300,
method = pd_writer)
</code></pre>
|
<p>Moving Ilja's answer from comment to answer for completeness:</p>
<ul>
<li>Snowflake is case sensitive.</li>
<li>When writing "unquoted" SQL, Snowflake will convert table names and fields to uppercase.</li>
<li>This usually works, until someone decides to start quoting their identifiers in SQL.</li>
<li><code>pd_writer</code> adds quotes to identifiers.</li>
<li>Hence when you have <code>df_postgres["dateutc"]</code> it remains in lowercase when its transformed into a fully quoted query.</li>
<li>Writing <code>df_postgres["DATEUTC"]</code> in Python should fix the issue.</li>
</ul>
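<p>A quick way to apply this to every column (a sketch, assuming the pandas DataFrame from the question) is to uppercase the names before calling <code>to_sql</code>, so the quoted identifiers match Snowflake's unquoted uppercase default. Shown here on a plain list so the sketch is self-contained; with pandas this would be <code>df_postgres.columns = df_postgres.columns.str.upper()</code>:</p>

```python
# Hypothetical lowercase column names as they come out of Postgres
columns = ["dateutc", "some", "header"]
# Uppercase them so pd_writer's quoted identifiers match Snowflake's
# unquoted (uppercase) default
columns = [name.upper() for name in columns]
print(columns)  # ['DATEUTC', 'SOME', 'HEADER']
```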
|
python|sqlalchemy|snowflake-cloud-data-platform
| 3 |
1,903,475 | 67,387,051 |
Terminal and console Python --> different results
|
<p>I get different results from the same code in the python3 interpreter and the terminal.
I'm running my code via the terminal command
python3 "/home/marcinanbarbarzynca/Pulpit/aplikacja wag/moduł komunikacji serial/serial.py"
and pasting the same program into the interpreter.
It works in the interpreter but not in the terminal.</p>
<p>The code:</p>
<pre><code>#!/usr/local/bin/python3
import sys
print("Version ",sys.version)
print("Python version")
print (sys.version)
print("Version info.")
print (sys.version_info)
print("Sys.executable")
# Is it the same python interpreter?
import sys
print(sys.executable)
# Is it the same working directory?
import os
print(os.getcwd())
# Are there any discrepancies in sys.path?
# this is the list python searches, sequentially, for import locations
# some environment variables can fcuk with this list
print(sys.path)
import serial.tools.list_ports
ports = serial.tools.list_ports.comports()
for port, desc, hwid in sorted(ports):
print("{}: {} [{}]".format(port, desc, hwid))
sleep(1)
</code></pre>
<p>The result in terminal by running python3 /full/file/path/serial.py:</p>
<pre><code> Version 3.9.5 (default, May 4 2021, 15:58:12)
[GCC 7.5.0]
Python version
3.9.5 (default, May 4 2021, 15:58:12)
[GCC 7.5.0]
Version info.
sys.version_info(major=3, minor=9, micro=5, releaselevel='final', serial=0)
Sys.executable
/usr/local/bin/python3
/home/marcinanbarbarzynca/Pobrane/py3/Python-3.9.5
['/home/marcinanbarbarzynca/Pulpit/aplikacja wag/moduł komunikacji serial', '/usr/local/lib/python39.zip', '/usr/local/lib/python3.9', '/usr/local/lib/python3.9/lib-dynload', '/home/marcinanbarbarzynca/.local/lib/python3.9/site-packages', '/usr/local/lib/python3.9/site-packages']
Version 3.9.5 (default, May 4 2021, 15:58:12)
[GCC 7.5.0]
Python version
3.9.5 (default, May 4 2021, 15:58:12)
[GCC 7.5.0]
Version info.
sys.version_info(major=3, minor=9, micro=5, releaselevel='final', serial=0)
Sys.executable
/usr/local/bin/python3
/home/marcinanbarbarzynca/Pobrane/py3/Python-3.9.5
['/home/marcinanbarbarzynca/Pulpit/aplikacja wag/moduł komunikacji serial', '/usr/local/lib/python39.zip', '/usr/local/lib/python3.9', '/usr/local/lib/python3.9/lib-dynload', '/home/marcinanbarbarzynca/.local/lib/python3.9/site-packages', '/usr/local/lib/python3.9/site-packages']
Traceback (most recent call last):
File "/home/marcinanbarbarzynca/Pulpit/aplikacja wag/moduł komunikacji serial/serial.py", line 25, in <module>
import serial.tools.list_ports
File "/home/marcinanbarbarzynca/Pulpit/aplikacja wag/moduł komunikacji serial/serial.py", line 25, in <module>
import serial.tools.list_ports
ModuleNotFoundError: No module named 'serial.tools'; 'serial' is not a package
</code></pre>
<p>And result from python3 interpreter when i run in terminal python3 command:</p>
<pre><code>Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> #!/usr/local/bin/python3
... import sys
>>> print("Version ",sys.version)
Version 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0]
>>>
>>> print("Python version")
Python version
>>> print (sys.version)
3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0]
>>> print("Version info.")
Version info.
>>> print (sys.version_info)
sys.version_info(major=3, minor=6, micro=9, releaselevel='final', serial=0)
>>> print("Sys.executable")
Sys.executable
>>> # Is it the same python interpreter?
... import sys
>>> print(sys.executable)
/usr/local/bin/python3
>>>
>>> # Is it the same working directory?
... import os
>>> print(os.getcwd())
/home/marcinanbarbarzynca
>>>
>>> # Are there any discrepancies in sys.path?
... # this is the list python searches, sequentially, for import locations
... # some environment variables can fcuk with this list
... print(sys.path)
['', '/usr/lib/python36.zip', '/usr/lib/python3.6', '/usr/lib/python3.6/lib-dynload', '/home/marcinanbarbarzynca/.local/lib/python3.6/site-packages', '/usr/local/lib/python3.6/dist-packages', '/usr/lib/python3/dist-packages']
>>>
>>>
>>>
>>> import serial.tools.list_ports
>>> ports = serial.tools.list_ports.comports()
>>>
>>> for port, desc, hwid in sorted(ports):
... print("{}: {} [{}]".format(port, desc, hwid))
...
/dev/ttyS0: ttyS0 [PNP0501]
/dev/ttyUSB0: USB Serial [USB VID:PID=1A86:7523 LOCATION=5-2]
>>> sleep(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'sleep' is not defined
>>> sleep(1)
</code></pre>
<p>Even though the versions differ a bit, the script should run from the terminal and import its modules normally. How can I fix this?</p>
|
<p>I've got a new 64-bit Ubuntu install with
Python 3.9.4.
I've done python3 -m pip install pyserial and it did install.</p>
<p>I'm still getting the same results.</p>
<p>When I paste the code into the interpreter I get the normal result.
When I try to run it via the terminal it can't import the modules.</p>
<pre><code> Version 3.9.4 (default, Apr 4 2021, 19:38:44)
[GCC 10.2.1 20210401]
Python version
3.9.4 (default, Apr 4 2021, 19:38:44)
[GCC 10.2.1 20210401]
Version info.
sys.version_info(major=3, minor=9, micro=4, releaselevel='final', serial=0)
Sys.executable
/usr/bin/python3
/home/marcinanbarbarzynca/Desktop/pytongi
['/home/marcinanbarbarzynca/Desktop/pytongi', '/usr/lib/python39.zip', '/usr/lib/python3.9', '/usr/lib/python3.9/lib-dynload', '/home/marcinanbarbarzynca/.local/lib/python3.9/site-packages', '/usr/local/lib/python3.9/dist-packages', '/usr/lib/python3/dist-packages']
Version 3.9.4 (default, Apr 4 2021, 19:38:44)
[GCC 10.2.1 20210401]
Python version
3.9.4 (default, Apr 4 2021, 19:38:44)
[GCC 10.2.1 20210401]
Version info.
sys.version_info(major=3, minor=9, micro=4, releaselevel='final', serial=0)
Sys.executable
/usr/bin/python3
/home/marcinanbarbarzynca/Desktop/pytongi
['/home/marcinanbarbarzynca/Desktop/pytongi', '/usr/lib/python39.zip', '/usr/lib/python3.9', '/usr/lib/python3.9/lib-dynload', '/home/marcinanbarbarzynca/.local/lib/python3.9/site-packages', '/usr/local/lib/python3.9/dist-packages', '/usr/lib/python3/dist-packages']
Traceback (most recent call last):
File "/home/marcinanbarbarzynca/Desktop/pytongi/serial.py", line 25, in <module>
import serial.tools.list_ports
File "/home/marcinanbarbarzynca/Desktop/pytongi/serial.py", line 25, in <module>
import serial.tools.list_ports
ModuleNotFoundError: No module named 'serial.tools'; 'serial' is not a package
</code></pre>
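<p>A likely cause, visible in the traceback (<code>'serial' is not a package</code>): the script itself is named <code>serial.py</code>, so <code>import serial</code> finds the script instead of the installed pyserial package. Renaming the file (and deleting any leftover <code>__pycache__</code> next to it) should fix it. A self-contained sketch reproducing the same class of error with the stdlib <code>json</code> package:</p>

```python
import os
import subprocess
import sys
import tempfile

# A script named after a package shadows the real package, so submodule
# imports fail with "'X' is not a package" -- the same error as in the question.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "json.py")  # shadows the stdlib 'json' package
    with open(path, "w") as f:
        f.write("import json.tool\n")
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True)
    # Last line of the traceback names the shadowing problem, e.g.
    # ModuleNotFoundError: No module named 'json.tool'; 'json' is not a package
    print(result.stderr.splitlines()[-1])
```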
|
python|terminal|console
| 0 |
1,903,476 | 63,498,441 |
Why does my program return an IndexError?
|
<p>I am trying to write a function that splits a 500-line CSV file into 10-line segments for further use, but when the for loop reaches the end of the dataframe I get an <code>IndexError</code> saying <code>index 500 is out of bounds for axis 0 with size 500</code>. The error happens at the <code>close_next</code> line. I tried to use <code>try</code>/<code>except</code>, but I get the same error, this time at <code>open.append(float(pd[start][0]))</code>. I can't see my mistake and can't seem to fix this.</p>
<pre><code>
ad = genfromtxt('BNBBTC.csv', delimiter=',' ,dtype=str)
### this flips the data frame and converts it into an array
pd = np.flipud(ad)
def some_function(start, finish):
open = []
high = []
low = []
close = []
volume = []
date = []
for x in range(finish-start):
open.append(float(pd[start][0]))
high.append(float(pd[start][1]))
low.append(float(pd[start][2]))
close.append(float(pd[start][3]))
volume.append(float(pd[start][4]))
date.append(pd[start][5])
start = start + 1
print ("start - finish " , start , finish)
close_next = float(pd[finish][3])
print (close_next)
iter = 0
for x in range(len(pd)-5):
some_function( iter, iter+10)
iter = iter + 5
</code></pre>
|
<p>This is not the answer to the problem statement. However, I am going to point out areas that may need some work in your code.</p>
<p><strong>Below is the original code (by Sec Team)</strong> with my comments</p>
<pre><code>def some_function(start, finish):
open = [] #please dont use open, close, and date as a variable
high = [] #you are initializing all these lists to empty list
low = [] #each time it comes into the function
close = [] #you are not returning the lists back from the function
volume = [] #nor are you assigning the values from these lists
date = [] #to any global variables. so what happens after you
#have processed all these values? You throw them away?
for x in range(finish-start):
#if you ignore all the code below except start = start + 1
#and just print x, finish, and start, you will find that this loop
#iterates only 10 times. So why not give range(10)?
open.append(float(pd[start][0]))
high.append(float(pd[start][1]))
low.append(float(pd[start][2]))
close.append(float(pd[start][3]))
volume.append(float(pd[start][4]))
date.append(pd[start][5])
start = start + 1
print ("start - finish " , start , finish)
#since this loop runs 10 times, you get 10 print statements
#where start increments each time and gets closer finish
#the final iteration ends at 1 less than finish
#When x = 0 (first call), start & finish values will be 0,10 then 1,10 then 2,10.... 9,10
#When x = 1 (second call), start & finish values will be 5,15 then 6,15 then 7,15... 14,15
#it will keep iterating thru the loop a few times
#when x = 98 (99th call), start & finish values will be 490, 500.
#The values for the 99 call time will be (490, 500), (491, 500),
#(492, 500), .... (499,500)
#you are only on your 99th call (x = 98)
#remember: your for loop for x expects you to go thru till 495
#this loop goes successfully as you are referring to [start] variable
#when you come to this point, the value for finish is 500
#The range for pd is 0 thru 499 (per your original comment. It has only 500 rows
#the code will break here because we don't have pd[500][3]
#Is this what you were expecting to do? I am not sure what you are trying to do
#hopefully by showing you what's happening inside your loop, it helps you understand
#where to fix your code
close_next = float(pd[finish][3])
print (close_next)
iter = 0
for x in range(len(pd)-5):
some_function( iter, iter+10)
iter = iter + 5
#as I told you, the for loop is sending values to some_function the wrong way
#the start and finish values may need to change or your implementation
    #has to change. Whatever it is, you need to rework the entire code to
    #ensure it does not hit the IndexError
</code></pre>
<p>Hope this helps you understand how your code works.</p>
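<p>One way to avoid the crash — a sketch under the assumption that you want overlapping 10-row windows stepping by 5 and also need the row right after each window — is to stop while <code>finish</code> is still a valid index:</p>

```python
rows = list(range(500))  # stand-in for the 500-row pd array

windows = []
start = 0
while start + 10 < len(rows):      # keeps finish (start + 10) a valid index
    window = rows[start:start + 10]
    close_next = rows[start + 10]  # safe: never reaches rows[500]
    windows.append((window, close_next))
    start += 5

print(len(windows))  # 98 windows instead of crashing on the 99th
```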
|
python
| 0 |
1,903,477 | 36,526,219 |
How to use a web service as a datasource in Spotfire
|
<p>There is a use case in which we would like to add columns from the data of a webservice to our original sql data table.</p>
<p>If anybody has done that, please comment.</p>
|
<p>Shadowfax is correct that you should review the How to Ask guide.</p>
<p>that said, Spotfire offers this feature in two ways:</p>
<ol>
<li><p>use IronPython scripting attached to an action control to retrieve the data. this is a very rigid solution that offers no caching, and the data must be retrieved and placed in memory each time the document is opened. I'll leave you to the search feature here on SO; I've posted a sample document somewhere.</p></li>
<li><p>the ideal solution is to use a separate product called Spotfire Advanced Data Services. this data federation layer can mashup data and perform advanced, custom caching based on your needs. the data is then exposed as an information link in Spotfire Server. you'll need to talk to your TIBCO sales rep about this. </p></li>
</ol>
|
sql-server|web-services|ironpython|spotfire
| 2 |
1,903,478 | 21,989,312 |
Django CMS error: modules are not found despite being installed
|
<p>I am trying to install beta version 3 of Django CMS.</p>
<p>Although all modules and dependencies and everything is there, whenever I try to <code>syncdb</code>, I get errors like this:</p>
<pre><code>ImportError cms.plugins.file: No module named plugins.file
</code></pre>
<p>If I uncomment it, the next one gives the error:</p>
<pre><code>ImportError cms.plugins.flash: No module named plugins.flash
</code></pre>
<p>Only when I uncomment them all can I install.</p>
<p>What might be the problem?</p>
|
<p>the core plugins have been removed from the cms.</p>
<p><a href="http://docs.django-cms.org/en/develop/upgrade/3.0.html#plugins-removed" rel="nofollow">http://docs.django-cms.org/en/develop/upgrade/3.0.html#plugins-removed</a></p>
|
python|django|django-cms
| 3 |
1,903,479 | 22,243,382 |
Counting Groups and Mobile users in Google Apps Domain
|
<p>I am trying to count groups and mobile devices in a multi-domain Google Apps domain using a Python script. I am hoping to count the groups and mobile devices in each sub-domain. I have used the information here <a href="https://developers.google.com/google-apps/provisioning/#retrieving_all_groups_in_a_domain" rel="nofollow">https://developers.google.com/google-apps/provisioning/#retrieving_all_groups_in_a_domain</a> and I am not getting the information I want.</p>
|
<p>The Provisioning API that you linked to is deprecated and should not be used for new development.</p>
<p>The new Admin SDK Directory API should allow you to perform these operations.</p>
<p>To get a list of all groups where the primary address is one of your subdomains, use the <a href="https://developers.google.com/admin-sdk/directory/v1/reference/groups/list" rel="nofollow">groups.list</a> API call and specify the domain parameter with your subdomain.</p>
<p>To get a list of mobile devices associated with users in the subdomain, use the <a href="https://developers.google.com/admin-sdk/directory/v1/reference/mobiledevices/list" rel="nofollow">mobiledevices.list</a> API call and specify customerId=my_customer as well as query=email:subdomain.com parameter.</p>
|
python-2.7|google-apps|gdata
| 0 |
1,903,480 | 16,929,529 |
New to Python, NameError: name is not defined (after creating class)
|
<p>Not sure why this is happening or how to fix it. I'm new to Python and any help is appreciated.</p>
<pre><code>class Sentence:
def __init__(self, s):
self.s= s
x=s[:-1]
self.L= list(x.split())
def __getitem__(self,idx):
return (self.L[idx])
s= Sentence('What a beautiful morning!')
getitem(s, 2)
</code></pre>
<p>NameError: name 'getitem' is not defined</p>
|
<p>From <a href="http://docs.python.org/2/reference/datamodel.html#object.__getitem__" rel="nofollow">the docs</a>:</p>
<blockquote>
<h3><code>object.__getitem__(self, key)</code></h3>
<p>Called to implement evaluation of <code>self[key]</code>. ...</p>
</blockquote>
<p>By implementing <code>__getitem__</code>, you can use the bracket notation to retrieve items:</p>
<pre><code>s[2]
</code></pre>
<p>Or by calling <code>__getitem__</code> explicitly (I wouldn't do it):</p>
<pre><code>s.__getitem__(2)
</code></pre>
|
python
| 2 |
1,903,481 | 17,127,302 |
Internationalize units system
|
<p>I am going to internationalize an application written in Python with GTK+. There is a need to internationalize the units system, mostly volumes from "fl. oz" to "litres", ounces to grams, etc. I am looking for a tool or library that can be helpful. Python has the gettext module by default, but I'm not sure if it will be helpful.</p>
<p>In the best case, I would like to have a tool that reads a unit, automatically gets the locale, and returns the localized unit.</p>
<p>Any help is appreciated.</p>
<hr>
<p>Of course, value calculation would be great, too.</p>
|
<p>It's awfully hard to prove a negative. But I suspect you are out of luck.</p>
<p>The most comprehensive collection of culture-specific names, preferences, and units that I know of is the Unicode <a href="http://cldr.unicode.org/index" rel="nofollow">Common Locale Data Repository (CLDR)</a> project. Looking at the <a href="http://www.unicode.org/repos/cldr-aux/charts/23.1/summary/root.html" rel="nofollow">summary of its most recent, v23.1, release</a>, I can see entries for currencies, time zone cities, names of time intervals, phrases for "in {0} months" and "{0} seconds ago"... but not cultural conventions for units of mass and volume. </p>
<p>If the CLDR doesn't have it, I'd be surprised if there is any off the shelf collection that does have it.</p>
<p>What you will need is:</p>
<ul>
<li>Data about which units various locales favour, for various kinds of measurement (mass, volume, etc.) Good luck getting single answers to such questions; in western Canada I see people using a mish-mash of pounds and kilograms, metres and feet, °F and °C, in everyday life. This is the data which may not be collected anywhere.</li>
<li>Names used for the various units in various locales. Whether we use metres or feet in this part of Canada, you'll want to have words for both units in all the languages your users use (English, French, simplified Chinese, Urdu,...). Any localiser could get you these word lists.</li>
<li>A mechanism to manage and retrieve the names of the various units, by locale. <a href="http://docs.python.org/2/library/gettext.html" rel="nofollow">Gettext</a> can do this for you. </li>
<li>A mechanism to manage and retrieve the identifiers for the various units, by locale. I haven't heard of any mechanism that does this directly, but you could write one yourself, or maybe repurpose Gettext.</li>
<li>Application logic to take a locale and a kind of unit, retrieve the identifier for the unit in that locale, and look up the locale's name for that unit in the target locale.</li>
<li>Application logic to convert quantities from one unit to another. I suspect this would be simple to write yourself. Finding the conversion coefficients shouldn't be hard.</li>
<li>Data about how various locales format numbers, and a mechanism to manage and retrieve that data. CLDR does do this for you. There has been some work on using <a href="http://cldr.unicode.org/index/cldr-spec/json" rel="nofollow">JSON bindings of CLDR in Python</a>.</li>
</ul>
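<p>The conversion logic at the end of that list really is the easy part — a sketch with made-up locale preferences and a rounded coefficient (illustrative only, not CLDR data):</p>

```python
# Hypothetical locale -> preferred volume unit table (illustrative only)
PREFERRED_VOLUME_UNIT = {"en_US": "fl_oz", "de_DE": "litre"}
# Conversion factors to a common base unit (litres); fl_oz factor is rounded
TO_LITRES = {"fl_oz": 0.0295735, "litre": 1.0}

def localize_volume(value, unit, locale):
    """Convert a volume to the unit preferred in `locale` (falls back to `unit`)."""
    target = PREFERRED_VOLUME_UNIT.get(locale, unit)
    litres = value * TO_LITRES[unit]
    return litres / TO_LITRES[target], target

print(localize_volume(12, "fl_oz", "de_DE"))
```

<p>The hard part, as argued above, is filling tables like <code>PREFERRED_VOLUME_UNIT</code> with defensible per-locale data.</p>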
<p>As a closing thought: your question presupposes "a need to internationalize units system". Have you studied your users and your product to see how they use units and how the product uses units, so that you are sure an automatic change of units is appropriate? I live in a country that is officially metric, but is overshadowed by a country that is emphatically <em>not</em> metric. I would rather have apps present grams, or pounds, or whatever unit the content author decided. If I want to convert, sometimes a precise conversion is good, and sometimes an approximate conversion is more useful ("2 pounds" can sometimes be a better rendering of "1 kilo" than "2 pounds 3.2 oz"). Before you invest too much work in this tool you seek, I suggest investing some time in your requirements.</p>
|
python|localization|internationalization|gettext
| 1 |
1,903,482 | 43,667,155 |
Datetime with milliseconds or microseconds AND timezone offset
|
<p>This is the representation desired for dates:</p>
<pre><code>>>> tz = pytz.timezone('US/Central')
>>> datefmt = '%Y-%m-%d %H:%M:%S.%f%z(%Z)'
>>> datetime.now(tz).strftime(datefmt)
'2017-04-27 15:09:59.606921-0500(CDT)'
</code></pre>
<p>This is how it's logged (Python 3.6.0 on Linux):</p>
<pre><code>>>> logrecord_format = '%(asctime)s %(levelname)s %(message)s'
>>> logging.basicConfig(format=logrecord_format, datefmt=datefmt)
>>> logging.error('ruh-roh!')
2017-04-27 15:10:35.%f-0500(CDT) ERROR ruh-roh!
</code></pre>
<p>It's not filling the microseconds properly. I've tried changing the <code>logrecord_format</code> to a few other things, but I could not figure it out - how to configure the logger to show microseconds and timezone in the correct way to match the <code>strftime</code> output exactly?</p>
<hr>
<p><strong><em>Edit</strong>:</em> I could settle for milliseconds with offset, i.e. <code>2017-04-27 15:09:59,606-0500(CDT)</code>. Is that possible? <code>logging</code> provides <code>%(msecs)03d</code> directive, but I can't seem to get the timezone offset to appear <em>after</em> the milliseconds. </p>
|
<p>Personally, instead of integrating the timezone into the date format, I add it directly to the logged message format. Usually, the timezone should not change during the program execution.</p>
<pre><code>import logging
import time
tz = time.strftime('%z')
fmt = '%(asctime)s' + tz + ' %(levelname)s %(message)s'
logging.basicConfig(format=fmt)
logging.error("This is an error message.")
# 2017-07-28 19:34:53,336+0200 ERROR This is an error message.
</code></pre>
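<p>For the milliseconds-plus-offset variant from the edit, one option (a sketch, not the only approach) is a small <code>Formatter</code> subclass that overrides <code>formatTime</code> so the offset lands after the milliseconds, using the same <code>time.strftime('%z')</code> trick as above:</p>

```python
import logging
import time

class TZFormatter(logging.Formatter):
    """Render asctime as 'YYYY-mm-dd HH:MM:SS,mmm+ZZZZ' (offset after msecs)."""
    def formatTime(self, record, datefmt=None):
        base = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(record.created))
        offset = time.strftime('%z')  # current local UTC offset
        return '%s,%03d%s' % (base, record.msecs, offset)

handler = logging.StreamHandler()
handler.setFormatter(TZFormatter('%(asctime)s %(levelname)s %(message)s'))
logger = logging.getLogger('demo')
logger.addHandler(handler)
logger.error('ruh-roh!')
# e.g. 2017-04-27 15:09:59,606-0500 ERROR ruh-roh!
```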
|
python|python-3.x|datetime|logging|timezone
| 7 |
1,903,483 | 54,263,679 |
Is Django needed for every virtual environment?
|
<p>Is it necessary to install a separate Django for each project?
If yes, then why?
When I check the Django version while the virtualenv is not activated,
there is no error, as below:</p>
<pre><code>E:\web\python\django_1>python -m django --version
2.1.4
</code></pre>
<p>And when i activate virtual env i get error as below </p>
<pre><code>(django_1-Gx7XQ45n) E:\web\python\django_1>python -m django --version
C:\Users\usr_name\.virtualenvs\django_1-Gx7XQ45n\Scripts\python.exe: No module named django
</code></pre>
<p>Why this is happening?</p>
|
<p>Basically yes, you'll need to install Django for each project when you are using virtual environments. That's because different projects often have different requirements and different Django versions, and each virtual environment keeps its own isolated set of packages.</p>
<p>When "python -m django --version" prints 2.1.4 outside the virtualenv, it is finding the Django installed system-wide; inside a fresh virtual environment, only packages installed into that environment are visible.</p>
<p>So you need to install Django in each new virtual environment.</p>
<p>With the environment activated, install it with "pip install django". </p>
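<p>The isolation can be demonstrated with the stdlib <code>venv</code> module — a sketch (POSIX paths assumed): a freshly created environment does not see packages installed elsewhere, which is why Django must be installed per environment.</p>

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    env_dir = os.path.join(d, "env")
    # --without-pip keeps the sketch fast; real environments would include pip
    subprocess.run([sys.executable, "-m", "venv", "--without-pip", env_dir],
                   check=True)
    env_python = os.path.join(env_dir, "bin", "python")  # Scripts\python.exe on Windows
    result = subprocess.run([env_python, "-c", "import django"],
                            capture_output=True, text=True)
    print(result.returncode)  # non-zero: django is not visible inside the fresh venv
```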
|
django|python-3.x
| 3 |
1,903,484 | 54,520,291 |
Try different versions of website addresses - Python
|
<p>I am using variables from the list "names" to request value points from a website:</p>
<pre><code>names = ['A', 'B', 'C', 'D', 'E']
</code></pre>
<p>Actually I have a lot more values in names than this.</p>
<p>The plan was to iterate over the following address and fill in my variables like this:</p>
<pre><code>def get_values(name):
res = requests.get('www.example.com/tb/' + name)
for name in names:
get_values(name)
</code></pre>
<p>The problem is that part of the address changes between three different values (tb, az and dm); they are always the same for the given names:</p>
<ul>
<li>www.example.com/tb/A</li>
<li>www.example.com/tb/B</li>
<li>www.example.com/az/C</li>
<li>www.example.com/dm/D</li>
<li>www.example.com/dm/E</li>
</ul>
<p>For this reason, in my code above only the values for A and B get downloaded. (And it is not practical to assign the variables to the names or vice versa.)</p>
<p>So my plan to get the correct URL was to solve this problem with if/else:</p>
<pre><code>try:
r = requests.get('www.example.com/tb/' + stock)
if r.status_code == 200:
url = 'www.example.com/tb/' + stock
else:
r = requests.get('www.example.com/az/' + stock)
if r.status_code == 200:
url = 'www.example.com/az/' + stock
else:
url = 'www.example.com/dm/' + stock
except:
pass
correctUrl = requests.get(url)
</code></pre>
<p>This only gives me the values for one path (e.g. tz).
I have also tried to find a solution with try/except and with some variations of try/except and if/else, but it's not working. </p>
<p>It would be nice if someone could advise me on how to verify the correct address for each name in my list, or what the most Pythonic way to do this is. Unfortunately I was not able to find an approach on Stack Overflow or Google.</p>
|
<p>You could iterate over the possible URLs until you get a code <code>200</code>:</p>
<pre><code>def get_url(name):
for sub in ('tb', 'az', 'dm'):
url = f'www.example.com/{sub}/{name}'
r = requests.get(url)
if r.status_code == 200:
return url
return None
for name in names:
url = get_url(name)
print(url)
</code></pre>
<blockquote>
<p>Obviously, it would be much better if you could store those paths within your names, something like [(tb, A), (tb, B), (az, C), ...]. But I assume you can't?</p>
</blockquote>
|
python|web|verify
| 1 |
1,903,485 | 39,235,974 |
Why is my code returning "name 'Button' is not defined"?
|
<p>My code is giving me this error and I can't for the life of me figure out why it's telling me "NameError: name 'Button' is not defined". In Tkinter I thought Button was supposed to add a button?</p>
<pre><code>import Tkinter
gameConsole = Tkinter.Tk()
#code to add widgets will go below
#creates the "number 1" Button
b1 = Button(win,text="One")
gameConsole.wm_title("Console")
gameConsole.mainloop()
</code></pre>
|
<p>A few options available to source the namespace:</p>
<ul>
<li><p><code>from Tkinter import Button</code> Import the specific class.</p></li>
<li><p><code>import Tkinter</code> -> <code>b1 = Tkinter.Button(win,text="One")</code> Specify the namespace inline.</p></li>
<li><p><code>from Tkinter import *</code> Imports everything from the module.</p></li>
</ul>
<p>Note that <code>win</code> is also undefined in your snippet; the parent widget should be <code>gameConsole</code>, e.g. <code>b1 = Tkinter.Button(gameConsole, text="One")</code>.</p>
|
python|tkinter|pycharm
| 4 |
1,903,486 | 52,824,825 |
Merging multiple CSV rows with mostly empty fields into single row per timestamp
|
<p>I've got a problem converting an "ugly" csv into a "pretty" one.
e.g., I have:</p>
<pre><code>something,epochtime,time-human-readable,some,header,for,the,values,here
same,time-a,don-t_care,a,b,,,,
same,time-a,don-t_care,,,,,c,
same,time-a,don-t_care,,,,,,d
same,time-a,don-t_care,,,e,f,,
same,time-b,don-t_care,g,h,,,,
same,time-b,don-t_care,,,i,j,,
same,time-b,don-t_care,,,,,,k
same,time-b,don-t_care,,,,,l,
same,time-c,don-t_care,,,m,n,,
same,time-c,don-t_care,,,,,o,
same,time-c,don-t_care,p,q,,,,
same,time-c,don-t_care,,,,,,r
</code></pre>
<p>But what I need is:</p>
<pre><code>something,epochtime,time-human-readable,some,header,for,the,values,here
same,time-a,don-t_care,a,b,e,f,c,d
same,time-b,don-t_care,g,h,i,j,l,k
same,time-c,don-t_care,p,q,m,n,o,r
</code></pre>
<p>Data behaviour:</p>
<ul>
<li>Columns in question contain signed-integer or float (except first and third column which are of type string and not part of the problem).</li>
<li>Always exactly 1 value per column and epochtime. (One could interpret empty fields as 0 and sum all values in one column belonging to a single epochtime.) </li>
<li>Values to one epochtime spread across the same number of lines every time.</li>
<li>Values belonging to a single epochtime <em><strong>might</strong></em> always appear spread across the rows in the same pattern (unlike the example) ... but that's not guaranteed.</li>
</ul>
<p>I tried to solve this problem with my limited skill using sed / awk but to no avail.</p>
<p>Any solution that can be executed by crontab is welcome, while bash / sed / awk / perl / python or any "apt-get install ..." capable command-line tool is preferred. Host OS is XUbuntu 16.04 LTS.</p>
<p>Addendum: (2018-10-16 13:55 UTC)</p>
<ul>
<li>Rows are sorted chronological according to epochtime</li>
<li>Values are grouped by epochtime</li>
<li>Even though first and third column contain string, it consists of Letters, Numbers and <code>-</code> or <code>_</code>, no whitespace or <code>,</code> --> no string-headache<br>
i.e. <code>dummy,1539697764,2018-10-16_13-49-24,p,q,,,,</code></li>
</ul>
|
<pre><code>$ cat tst.awk
BEGIN { FS=OFS="," }
$2 != prev { if (NR>1) prt(); prev=$2 }
{
for (i=1; i<=NF; i++) {
if ($i != "") {
rec[i] = $i
}
}
}
END { prt() }
function prt() {
for (i=1; i<=NF; i++) {
printf "%s%s", rec[i], (i<NF ? OFS : ORS)
}
delete rec
}
$ awk -f tst.awk file
something,epochtime,time-human-readable,some,header,for,the,values,here
same,time-a,don-t_care,a,b,e,f,c,d
same,time-b,don-t_care,g,h,i,j,l,k
same,time-c,don-t_care,p,q,m,n,o,r
</code></pre>
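<p>Since Python is also on your preferred list: a pandas sketch does the same merge. The key point is that <code>groupby(...).first()</code> returns the first non-empty value per column within each group (empty CSV fields become NaN and are skipped). The sample data is embedded here so the snippet is self-contained; in real use replace the <code>io.StringIO</code> with your file path.</p>

```python
import io
import pandas as pd

raw = """something,epochtime,time-human-readable,some,header,for,the,values,here
same,time-a,don-t_care,a,b,,,,
same,time-a,don-t_care,,,,,c,
same,time-a,don-t_care,,,,,,d
same,time-a,don-t_care,,,e,f,,
same,time-b,don-t_care,g,h,,,,
same,time-b,don-t_care,,,i,j,,
same,time-b,don-t_care,,,,,,k
same,time-b,don-t_care,,,,,l,
"""

# read everything as strings so values pass through untouched;
# empty fields become NaN, which groupby(...).first() skips
df = pd.read_csv(io.StringIO(raw), dtype=str)

merged = (df.groupby(['something', 'epochtime', 'time-human-readable'], sort=False)
            .first()
            .reset_index())

print(merged.to_csv(index=False))
```

<p><code>sort=False</code> preserves the chronological order of the epochtimes as they appear in the file.</p>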
|
bash|python-2.7|perl|awk|sed
| 2 |
1,903,487 | 47,806,862 |
How to use bootbox for Django DeleteView?
|
<p>I am using Django DeleteView for deleting objects. First, I implemented delete add confirm dialog redirecting another html page. Now, I want to add bootbox pop up. But i don't understand where to add code. Please help</p>
<p>models.py</p>
<pre><code>class Review(models.Model):
    review_description = models.TextField(max_length=500)
    user = models.ForeignKey(settings.AUTH_USER_MODEL, default=1)
    book = models.ForeignKey(Book, on_delete=models.CASCADE)
    updated = models.DateTimeField(auto_now=True, auto_now_add=False)
    timestamp = models.DateTimeField(auto_now=False, auto_now_add=True)
</code></pre>
<p>views.py</p>
<pre><code>class ReviewDelete(DeleteView):
    model = Review
    template_name = "confirm_delete.html"

    def get_success_url(self, *args, **kwargs):
        review = get_object_or_404(Review, pk=self.kwargs['pk'])
        return reverse("books:detail", args=(review.book.id,))
</code></pre>
<p>confirm_delete.html</p>
<pre><code>{% extends "base.html" %}
{% block content %}
<h1>Delete</h1>
<p>Are you sure you want to delete {{ review }}?</p>
<form action="{% url "delete" pk=review.id %}" method="POST">
{% csrf_token %}
<input type="submit" value="Yes, delete." />
<a href="{% url "books:detail" id=review.book.id %}">No, cancel.</a>
</form>
{% endblock %}
</code></pre>
<p>book_details.html</p>
<pre><code><a href="{% url "delete" pk=review.id %}" class="badge badge-danger">Delete</a>
{# <a href="{% url "delete" pk=review.id %}" class="badge badge-danger">Delete</a>#}
</code></pre>
<p>base.html</p>
<pre><code><script type="text/javascript">
$(document).ready(function () {
    $("#review-delete-btn").click(function (event) {
        event.preventDefault();
        bootbox.confirm({
            title: "Destroy planet?",
            message: "Do you want to delete? This cannot be undone.",
            buttons: {
                cancel: {
                    label: '<i class="fa fa-times"></i> Cancel'
                },
                confirm: {
                    label: '<i class="fa fa-check"></i> Confirm'
                }
            },
            callback: function (result) {
                console.log('This was logged in the callback: ' + result);
            }
        });
    });
});
</script>
</code></pre>
<p>urls.py</p>
<pre><code>url(r'^reviews/(?P<pk>\d+)/delete/$', ReviewDelete.as_view(), name='delete'),
</code></pre>
|
<p>Assuming your server page doesn't perform redirects when completing the delete, you just need to add an AJAX call to the confirm callback. Something like this:</p>
<pre class="lang-js prettyprint-override"><code><script>
$(function () {
    $("#review-delete-btn").on('click', function (event) {
        event.preventDefault();
        // for referencing this later in this function
        var _button = $(this);
        bootbox.confirm({
            title: "Destroy planet?",
            message: "Do you want to delete? This cannot be undone.",
            buttons: {
                cancel: {
                    label: '<i class="fa fa-times"></i> Cancel'
                },
                confirm: {
                    label: '<i class="fa fa-check"></i> Confirm'
                }
            },
            callback: function (result) {
                // result will be a Boolean value
                if (result) {
                    // this encodes the form data
                    var data = _button.closest('form').serialize();
                    $.post('your delete URL here', data)
                        .done(function (response, status, jqxhr) {
                            // status code 200 response
                        })
                        .fail(function (jqxhr, status, errorThrown) {
                            // any other status code, including 30x (redirects)
                        });
                }
            }
        });
    });
});
</script>
</code></pre>
<p>You will probably want to review the documentation for <a href="https://api.jquery.com/jquery.post/" rel="nofollow noreferrer">$.post</a> and <a href="https://api.jquery.com/serialize/" rel="nofollow noreferrer">serialize</a>. I would also recommend working your way through the <a href="https://api.jquery.com/category/ajax/" rel="nofollow noreferrer">AJAX</a> topics - $.post is a convenience method for $.ajax, so you should know how to use both.</p>
|
python|django|bootbox
| 1 |
1,903,488 | 47,811,878 |
Format text extracted from HTML
|
<p>I am getting the text from this HTML with one line:</p>
<pre><code><label class="product_title">
"TEXT 1"
<br>
"TEXT2"
</label>
</code></pre>
<p>My code is:</p>
<pre><code>title = amazon.find_element_by_css_selector(
'div > div > label').get_attribute('innerText')
</code></pre>
<p><strong>Current output:</strong></p>
<pre><code>TEXT
TEXT1
</code></pre>
<p><strong>Desired output:</strong></p>
<pre><code>TEXT TEXT1
</code></pre>
<p><strong>Question</strong></p>
<p>How do I obtain my desired output?</p>
|
<p>You can replace the new line character with space as below :</p>
<pre><code>title = amazon.find_element_by_css_selector('div > div > label').get_attribute('innerText').replace("\n", " ")
</code></pre>
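<p>If the extracted text may contain other stray whitespace (tabs, multiple spaces, trailing newlines), a regex normalization is a bit more robust; shown here on a plain stand-in string, so no Selenium is needed:</p>

```python
import re

raw = 'TEXT 1\nTEXT2'                      # stand-in for the innerText value
clean = re.sub(r'\s+', ' ', raw).strip()   # collapse any whitespace run to one space
print(clean)  # TEXT 1 TEXT2
```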
|
python-3.x|selenium
| 0 |
1,903,489 | 37,561,348 |
Attempting to scrape some information from Amazon. While implementing the next page link, a lot of data is missing
|
<pre><code>class A1Spider(scrapy.Spider):
    name = "amazon"
    allowed_domains = ["www.amazon.com"]
    start_urls = (
        'http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=golf+balls',
    )

    def __init__(self):
        self.page = 0

    def parse(self, response):
        self.page += 1
        # have to view the response from scrapy to determine the xpath. it is different from what the browser sees.
        # xpath
        url_x = '//*[starts-with(@id,"result_")]/div/div[3]/div[1]/a/@href'
        url = response.xpath(url_x).extract()
        print len(url)
        for i in url:
            yield scrapy.Request(i, callback=self.parse_item)

        # next page
        NextBottom = response.xpath('//*[@id="pagnNextLink"]/@href').extract_first()
        NextBottom_a = response.urljoin(NextBottom)
        # print NextBottom_a
        # if self.page <= 1:
        #     yield scrapy.Request(NextBottom_a)
</code></pre>
<p>The last few lines are used to request the next page.
When I scrape only the <code>start_url</code> (the first page of the search results), all 24 items show up.
When I use these lines to go to the next page, most of the items on the first page are missing, and from the second page as well.</p>
<p>What are the possible reasons for this? I was thinking I had encountered a robot check; however, it works perfectly when scraping the first page only.</p>
|
<p>I think you need to:</p>
<pre><code>yield scrapy.Request(NextBottom_a, callback=self.parse, dont_filter=True)
</code></pre>
<p>to get to the next pages - while using <code>dont_filter</code> ensuring that scrapy doesn't decide you already called this url elsewhere and you don't actually need it.</p>
|
python|scrapy|web-crawler
| 0 |
1,903,490 | 34,191,392 |
Django 1.9: Should I avoid importing models during `django.setup()`?
|
<p>Porting my app to django 1.9, I got the scary <code>django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet</code></p>
<p>Basically my stacktrace is:</p>
<pre><code> manage.py
execute_from_command_line(sys.argv)
django/core/management:352, in execute_from_command_line
utility.execute()
django/core/management/__init__.py:326, in execute
django.setup()
django/__init__.py:18, in setup
apps.populate(settings.INSTALLED_APPS)
django/apps/registry.py:85, in populate
app_config = AppConfig.create(entry)
django/apps/config.py:90, in create
module = import_module(entry)
python2.7/importlib/__init__.py:37, in import_module
__import__(name)
myapp/mylib/__init__.py:52, in <module>
from django.contrib.contenttypes.models import ContentType #<= The important part
django/contrib/contenttypes/models.py:159, in <module>
class ContentType(models.Model):
django/db/models/base.py:94, in __new__
app_config = apps.get_containing_app_config(module)
django/apps/registry.p:239, in get_containing_app_config
self.check_apps_ready()
django/apps/registry.py:124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
<p>My main question here :</p>
<p><strong>Should I import my models in the <code>__init__.py</code> of my django apps ?</strong></p>
<p>It seems to trigger the <code>django.models.ModelBase</code> metaclass, which checks if the app is ready before creating the model.</p>
|
<blockquote>
<p>Should I import my models in the <code>__init__.py</code> of my django apps ?</p>
</blockquote>
<p>No, you <strong>must</strong> not import any model in the <code>__init__.py</code> file of any installed app. This is no longer possible in 1.9.</p>
<p>From the <a href="https://docs.djangoproject.com/en/1.9/releases/1.9/#features-removed-in-1-9" rel="noreferrer">release notes</a>:</p>
<blockquote>
<p>All models need to be defined inside an installed application or
declare an explicit app_label. Furthermore, it isn’t possible to
import them before their application is loaded. <strong>In particular, it
isn’t possible to import models inside the root package of an
application.</strong></p>
</blockquote>
|
python|django|django-models|django-1.9
| 13 |
1,903,491 | 7,434,408 |
Fatal Python error: PyEval_RestoreThread: NULL tstate
|
<p>Does somebody know what this error means?</p>
<pre><code>Fatal Python error: PyEval_RestoreThread: NULL tstate
</code></pre>
<p>In my application, this error is printed when I destroy the main window. I am using multiple threads to run different jobs at the same time.</p>
<p>I really don't have any idea what this is.</p>
<p>If someone has run into the same problem before, please help me.</p>
<p>Below is a code to show how to reproduce this error. (I tried to make smallest code that I could)</p>
<pre><code>#!/usr/bin/env python
import gtk
import threading
import sys


class Test(threading.Thread):
    """A subclass of threading.Thread, with a kill() method."""

    def __init__(self, *args, **keywords):
        threading.Thread.__init__(self, *args, **keywords)
        gtk.gdk.threads_init()
        self.killed = False

    def start(self):
        """Start the thread."""
        self.__run_backup = self.run
        self.run = self.__run  # Force the Thread to install our trace.
        threading.Thread.start(self)

    def __run(self):
        """Hacked run function, which installs the trace."""
        sys.settrace(self.globaltrace)
        self.__run_backup()
        self.run = self.__run_backup

    def globaltrace(self, frame, why, arg):
        if why == 'call':
            return self.localtrace
        else:
            return None

    def localtrace(self, frame, why, arg):
        if self.killed:
            if why == 'line':
                raise SystemExit()
        return self.localtrace

    def kill(self):
        self.killed = True


class Window(gtk.Window):
    """Main window"""

    def __init__(self):
        """Create a main window and all your children"""
        super(Window, self).__init__()
        self.connect('destroy', gtk.main_quit)
        button = gtk.Button("Click and after, close window")
        button.connect("clicked", self.on_item_run)
        self.add(button)
        self.show_all()

    def on_item_run(self, widget):
        t = Test()
        t.start()


if __name__ == "__main__":
    window = Window()
    gtk.gdk.threads_enter()
    gtk.main()
    gtk.gdk.threads_leave()
</code></pre>
<p>Thanks a lot..</p>
|
<p>What version of gtk are you using? <a href="http://twistedmatrix.com/trac/ticket/994" rel="nofollow">This link</a> seems to indicate it's a threading bug that was fixed in 2.0.1.</p>
|
python|fatal-error
| 1 |
1,903,492 | 39,654,022 |
How is Filter Concatenation implemented in Tensorflow?
|
<p>Looking at <strong>GoogleNet</strong> architecture you can see such blocks:
<a href="https://i.stack.imgur.com/QUtQb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QUtQb.png" alt="enter image description here"></a></p>
<p>convolution operation is <strong>tf.nn.conv2d()</strong> </p>
<p>pooling is <strong>tf.nn.max_pool()</strong></p>
<p>But I cannot find in examples and tutorials how is Filter Concatenation implemented in TF?</p>
|
<p>Tensorflow has <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/array_ops.html#concat" rel="noreferrer">tf.concat</a>:</p>
<pre><code>concatenated_tensor = tf.concat(3, [branch1, branch2, branch3, branch4])
</code></pre>
<p>where branchX are the result tensors of the different paths.</p>
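<p>The effect of concatenating along the channel axis can be illustrated with plain NumPy (hypothetical NHWC shapes; all branches must share batch and spatial dimensions, and only the channel dimension may differ):</p>

```python
import numpy as np

# four branch outputs with identical batch/spatial dims, different channel counts
b1 = np.zeros((8, 28, 28, 64))
b2 = np.zeros((8, 28, 28, 128))
b3 = np.zeros((8, 28, 28, 32))
b4 = np.zeros((8, 28, 28, 32))

# filter concatenation: stack the feature maps along the channel axis
merged = np.concatenate([b1, b2, b3, b4], axis=3)
print(merged.shape)  # (8, 28, 28, 256)
```

<p>(As an aside: in TensorFlow 1.0 and later the argument order of <code>tf.concat</code> changed to <code>tf.concat(values, axis)</code>, i.e. <code>tf.concat([branch1, branch2, branch3, branch4], 3)</code>.)</p>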
|
neural-network|tensorflow|deep-learning|convolution|conv-neural-network
| 5 |
1,903,493 | 39,839,119 |
Rank of a Permutation
|
<p>So there was a question I wasn't able to solve, mainly because of computing power or lack thereof. I was wondering how to code this so that I can actually run it on my computer. The gist of the question is:</p>
<p>Let's say you have a string <code>'xyz'</code>, and you want to find all unique permutations of this string. Then you sort them and find the index of <code>'xyz'</code> among the unique permutations. This seemed simple enough, but once you get a really long string, my computer gives up.</p>
<pre><code>from itertools import permutations

def find_rank(n):
    perms = [''.join(p) for p in permutations(n)]
    perms = sorted(set(perms))
    loc = perms.index(n)
    return loc
</code></pre>
<p>But if I want to run this code on a string that's 100 letters long, there are just way too many permutations for my computer to handle. </p>
|
<p>This problem can be easily solved by first simplifying it and thinking recursively.</p>
<p>So let's first assume that all the elements in the input sequence are unique, then the set of "unique" permutations is simply the set of permutations.</p>
<p>Now to find the rank of the sequence <code>a_1, a_2, a_3, ..., a_n</code> into its set of permutations we can:</p>
<ol>
<li><p>Sort the sequence to obtain <code>b_1, b_2, ..., b_n</code>. This permutation by definition has rank <code>0</code>.</p></li>
<li><p>Now we compare <code>a_1</code> and <code>b_1</code>. If they are the same then we can simply remove them from the problem: the rank of <code>a_1, a_2, ..., a_n</code> will be the same as the rank of just <code>a_2, ..., a_n</code>.</p></li>
<li><p>Otherwise <code>b_1 < a_1</code>, but then <strong>all</strong> permutations that start with <code>b_1</code> are going to be smaller than <code>a_1, a_2, ..., a_n</code>. The number of such permutations is easy to compute, it's just <code>(n-1)! = (n-1)*(n-2)*(n-3)*...*1</code>.</p>
<p>But then we can continue looking at our sequence <code>b_1, ..., b_n</code>. If <code>b_2 < a_1</code>, again all permutations starting with <code>b_2</code> will be smaller.
So we should add <code>(n-1)!</code> again to our rank.</p>
<p>We do this until we find an index <code>j</code> where <code>b_j == a_j</code>, and then we end up at point 2.</p></li>
</ol>
<p>This can be implemented quite easily:</p>
<pre><code>import math

def permutation_rank(seq):
    ref = sorted(seq)
    if ref == seq:
        return 0
    else:
        rank = 0
        f = math.factorial(len(seq)-1)
        for x in ref:
            if x < seq[0]:
                rank += f
            else:
                rank += permutation_rank(seq[1:]) if seq[1:] else 0
                return rank
</code></pre>
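<p>A quick brute-force cross-check (repeating the function so the snippet is self-contained; feasible only for short strings, which is exactly when <code>itertools.permutations</code> is still affordable) confirms the recursion:</p>

```python
import math
from itertools import permutations

def permutation_rank(seq):
    # same recursion as above: count (n-1)! per smaller leading symbol,
    # then recurse on the suffix once we reach seq[0] itself
    ref = sorted(seq)
    if ref == seq:
        return 0
    rank = 0
    f = math.factorial(len(seq) - 1)
    for x in ref:
        if x < seq[0]:
            rank += f
        else:
            rank += permutation_rank(seq[1:]) if seq[1:] else 0
            return rank

def brute_rank(s):
    # rank by literally enumerating and sorting all permutations
    return sorted(''.join(p) for p in permutations(s)).index(s)

for s in ('xyz', 'zyx', 'bdca', 'dcba'):
    assert permutation_rank(s) == brute_rank(s), s
print('all ranks match')
```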
<p>The solution is pretty fast:</p>
<pre><code>In [24]: import string
    ...: import random
    ...: seq = list(string.ascii_lowercase)
    ...: random.shuffle(seq)
    ...: print(*seq)
    ...: print(permutation_rank(seq))
    ...:
r q n c d w s k a z b e m g u f i o l t j x p h y v
273956214557578232851005079
</code></pre>
<p>On the issue of equal elements: the point where they come into play is that <code>(n-1)!</code> is the number of permutations, considering each element as different from the others. If you have a sequence of length <code>n</code>, made of symbols <code>s_1, ..., s_k</code>, and symbol <code>s_j</code> appears <code>c_j</code> times, then the number of unique permutations is <code>n! / (c_1! * c_2! * ... * c_k!)</code>.</p>
<p>This means that instead of just adding <code>(n-1)!</code> we have to divide it by that number, and also we want to decrease by one the count <code>c_t</code> of the current symbol we are considering.</p>
<p>This can be done in this way:</p>
<pre><code>import math
from collections import Counter
from functools import reduce
from operator import mul

def permutation_rank(seq):
    ref = sorted(seq)
    counts = Counter(ref)
    if ref == seq:
        return 0
    else:
        rank = 0
        f = math.factorial(len(seq)-1)
        for x in sorted(set(ref)):
            if x < seq[0]:
                counts_copy = counts.copy()
                counts_copy[x] -= 1
                rank += f//(reduce(mul, (math.factorial(c) for c in counts_copy.values()), 1))
            else:
                rank += permutation_rank(seq[1:]) if seq[1:] else 0
                return rank
</code></pre>
<p>I'm pretty sure there is a way to avoid copying the counts dictionary, but right now I'm tired so I'll leave that as an exercise for the reader.</p>
<p>For reference, the final result:</p>
<pre><code>In [44]: for i,x in enumerate(sorted(set(it.permutations('aabc')))):
    ...:     print(i, x, permutation_rank(x))
    ...:
0 ('a', 'a', 'b', 'c') 0
1 ('a', 'a', 'c', 'b') 1
2 ('a', 'b', 'a', 'c') 2
3 ('a', 'b', 'c', 'a') 3
4 ('a', 'c', 'a', 'b') 4
5 ('a', 'c', 'b', 'a') 5
6 ('b', 'a', 'a', 'c') 6
7 ('b', 'a', 'c', 'a') 7
8 ('b', 'c', 'a', 'a') 8
9 ('c', 'a', 'a', 'b') 9
10 ('c', 'a', 'b', 'a') 10
11 ('c', 'b', 'a', 'a') 11
</code></pre>
<p>And to show that it is efficient:</p>
<pre><code>In [45]: permutation_rank('zuibibzboofpaoibpaybfyab')
Out[45]: 246218968687554178
</code></pre>
|
python|algorithm|performance|permutation|combinatorics
| 4 |
1,903,494 | 16,051,478 |
How to enable App sandbox for a python app on OSX (The python app is targeted for Mac App Store)
|
<p>First of all, the Python app is targeted at the Mac App Store,
and Apple emailed me that my app was not sandbox-enabled. </p>
<p>As below:</p>
<blockquote>
<p>Dear developer,</p>
<p>We have discovered one or more issues with your recent delivery for "<strong>InternetWorks</strong>". To process your delivery, the following issues must be corrected:</p>
<p><strong>App sandbox not enabled</strong> - The following executables must include the "<strong>com.apple.security.app-sandbox</strong>" entitlement with a Boolean value of true in the <strong>entitlements property list</strong>. Refer to the App Sandbox page for more information on sandboxing your app.</p>
<p>InternetWorks.app/Contents/MacOS/InternetWorks</p>
<p>Once these issues have been corrected, go to the Version Details page and click "Ready to Upload Binary." Continue through the submission process until the app status is "Waiting for Upload." You can then deliver the corrected binary.</p>
<p>Regards,</p>
<p>The App Store team</p>
</blockquote>
<p>Since my app is written in Python, I have no idea how to make <strong>an entitlements property list</strong> for it. </p>
<p>From Apple's documentation, the entitlements property list is created inside an Xcode project and is somehow related to the executable file. But again, my app is written in Python; it is essentially just a couple of *.py files, and I have packed it into a "my_app_name.app" executable using the py2app/cx_Freeze tools.</p>
<p>So, is it possible to manually create an entitlements property list file for my already-packed Python app (the My_App_name.app file), and how can I do it without an Xcode project?</p>
<p>Thanks in advance.</p>
|
<p>For some context the interface to configure your <code>.entitlements</code> <a href="/questions/tagged/plist" class="post-tag" title="show questions tagged 'plist'" rel="tag">plist</a> in <a href="/questions/tagged/xcode4" class="post-tag" title="show questions tagged 'xcode4'" rel="tag">xcode4</a> looks like this <img src="https://i.imgur.com/jiqAaO6m.png" alt="screenshot">.</p>
<p>It automatically creates a file in your <code>$(SOURCE_ROOT)</code> called <code>AppName.entitlements</code>.</p>
<p><code>.plist</code>s are the XML files Apple uses for almost everything that would be a text config file on a 'traditional' unix. If you're developing in Python on OSX you may find the Python <a href="http://docs.python.org/2/library/plistlib.html" rel="nofollow noreferrer">plistlib</a> helpful. The Apple blessed way to edit them is using XCode itself (<a href="https://stackoverflow.com/questions/5268389/property-list-editor-after-installing-xcode-4">before XCode4 there was a stand-alone Property List Editor.app</a>).</p>
<p>In this case the default they want merely contains:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>com.apple.security.app-sandbox</key>
<true/>
</dict>
</plist>
</code></pre>
<p>I suspect if you simply paste that into a file named <code>InternetWorks.entitlements</code> you'll resolve their issue. Although you may need to request more of <a href="http://developer.apple.com/library/ios/#documentation/Miscellaneous/Reference/EntitlementKeyReference/Chapters/EnablingAppSandbox.html#//apple_ref/doc/uid/TP40011195-CH4-SW5" rel="nofollow noreferrer">the available permissions</a>.</p>
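<p>If you'd rather generate that file from Python itself, the <code>plistlib</code> module mentioned above can emit the same XML (Python 3 spelling shown; on Python 2 the call is <code>plistlib.writePlist(entitlements, path)</code>):</p>

```python
import plistlib

# the single entitlement Apple's email asks for; add more keys as needed
entitlements = {'com.apple.security.app-sandbox': True}

with open('InternetWorks.entitlements', 'wb') as fp:
    plistlib.dump(entitlements, fp)  # writes the XML plist shown above
```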
|
python|sandbox|mac-app-store
| 0 |
1,903,495 | 38,713,816 |
Eclipse Raspberry Pi 3 remote debugging ImportError: No module named 'bluetooth'
|
<p>I am trying to do a Bluetooth scan on my Raspberry Pi 3. I am using Eclipse remote debugging for coding. The Python version is 3.4.</p>
<pre><code>import sys
sys.path.append(r'C:\Users\SachithW\Downloads\eclipse-java-mars-2-win32-x86_64\eclipse\plugins\org.python.pydev_5.1.2.201606231256\pysrc')
import pydevd
pydevd.settrace('192.168.1.11') # replace IP with address
# of Eclipse host machine
import bluetooth
</code></pre>
<p>I have installed "Python bluez" and "Bluetooth" in the raspberry pi device. </p>
<pre><code> sudo apt-get install bluetooth
sudo apt-get install bluez
sudo apt-get install python-bluez
pip install pybluez
</code></pre>
<p>But when I run the code it gives me this error massage.</p>
<pre><code>Traceback (most recent call last):
  File "D:\eclipse\RemoteSystemsTempFiles\192.168.1.4\home\pi\pi_projects\BT_multiple.py", line 6, in &lt;module&gt;
    import bluetooth
ImportError: No module named 'bluetooth'
</code></pre>
<p>What is the cause of this error? How do I fix it?</p>
|
<p>Did you do this:</p>
<pre><code>sudo apt-get install python-bluez
</code></pre>
<p>I have also seen this in tutorials:</p>
<pre><code>pip install pybluez
</code></pre>
<p>Note that <code>python-bluez</code> and plain <code>pip</code> target Python 2. Since you are running Python 3.4, install the module for that interpreter instead, e.g. <code>sudo pip3 install pybluez</code>.</p>
|
python|eclipse|bluetooth|raspberry-pi|remote-debugging
| 0 |
1,903,496 | 38,757,357 |
How do I set a boundary that my image cannot go through in Pygame? How do I keep an image from going behind another?
|
<p>This is the code for my pygame</p>
<pre><code>import pygame
import os

img_path = os.path.join('C:/Desktop/Python Stuff', 'Angry Birds.jpg')


class pic(object):
    def __init__(self):
        """ The constructor of the class """
        self.image = pygame.image.load(img_path)
        # the bird's position
        self.x = 0
        self.y = 0

    def handle_keys(self):
        """ Handles Keys """
        key = pygame.key.get_pressed()
        dist = 5
        if key[pygame.K_DOWN]:  # down key
            self.y += dist  # move down
        elif key[pygame.K_UP]:  # up key
            self.y -= dist  # move up
        if key[pygame.K_RIGHT]:  # right key
            self.x += dist  # move right
        elif key[pygame.K_LEFT]:  # left key
            self.x -= dist  # move left

    def draw(self, surface):
        """ Draw on surface """
        # blit yourself at your current position
        surface.blit(self.image, (self.x, self.y))
</code></pre>
<p>This is the screen size. Is this where I should restrict the image's boundaries? </p>
<pre><code>pygame.init()
screen=pygame.display.set_mode([1500,850])
Pic=pic()
pygame.display.set_caption('Angry Birds')
</code></pre>
<p>This is the image that I want to have a boundary for</p>
<pre><code>pic = pygame.image.load('Angry Birds.jpg')
keep_going = True
while keep_going:
    event = pygame.event.poll()
    if event.type == pygame.QUIT:
        pygame.quit()
        running = False
    Pic.handle_keys()
    screen.blit(pic, (-200, 0))
    Pic.draw(screen)
</code></pre>
<p>This image is what the 'Angry Birds' image is going behind. How do I stop it from going behind this image? </p>
<pre><code>tux=pygame.image.load('Rock Lee.gif')
screen.blit(tux,(500,600))
screen.blit(tux,(500,400))
screen.blit(tux,(500,0))
screen.blit(tux,(900,200))
screen.blit(tux,(900,400))
screen.blit(tux,(900,600))
screen.blit(tux,(1300,0))
screen.blit(tux,(1300,200))
screen.blit(tux,(1300,600))
pygame.display.get_surface([1500,850]).get_size([1500,850])
pygame.display.update()
</code></pre>
|
<h1>A) Keep rect on screen</h1>
<p>The simplest way would be using <a href="http://www.pygame.org/docs/ref/rect.html#pygame.Rect.clamp_ip" rel="nofollow" title="Rect.clamp_ip(rect)"><code>Rect.clamp_ip(rect)</code></a> on a <a href="http://www.pygame.org/docs/ref/sprite.html#pygame.sprite.Sprite" rel="nofollow" title="Sprite"><code>Sprite</code></a></p>
<pre><code>screen_size = pygame.Rect(0, 0, 1500, 850)
# right after you move the bird
bird.rect.clamp_ip(screen_size)
</code></pre>
<h1>B) rect on rect collision</h1>
<pre><code># Where .xvel and .yvel are bird's movement per frame
new_rect = bird.rect.move(bird.xvel, bird.yvel)
if not new_rect.colliderect(other_bird.rect):
    bird.rect = new_rect
else:
    print("Ouch!")
</code></pre>
|
python|pygame
| 1 |
1,903,497 | 68,237,372 |
Python find max under constraint
|
<p>I'm learning python on my own and I'm unable to find the right solution for a specific problem:</p>
<p>I get x $.
I can buy from a list of different items, each of which has a certain price (cost) and provides a particular gain (gain).
I want to get the maximum gain for the x $.
There is only 1 of each item.
Let's say:</p>
<pre><code>dollars = 10
cost = [5, 4, 1, 10]
gain = [7, 6, 4, 12]
</code></pre>
<p>here => the max gain is 17</p>
<p>With a naïve solution based on permutations, I managed to find a solution when the number of items is low. But when the number of items grows, the time increases and the computer crashes.</p>
<p>Is there a typical algorithm to solve that kind of problem?</p>
|
<p>You mentioned not being interested in the solution's code in one of your comments, so I'll only be explaining the algorithm. This problem is better known as the <a href="https://en.wikipedia.org/wiki/Knapsack_problem" rel="nofollow noreferrer">0-1 knapsack problem</a>.
A typical approach to solving it is using <a href="https://en.wikipedia.org/wiki/Dynamic_programming" rel="nofollow noreferrer">dynamic programming</a>:</p>
<ul>
<li>let's define a value that we'll call <code>m(i, c)</code>, which is the max gain you can get by spending up to <code>c</code> $ and only buying items among the first <code>i</code> of your list.
You've got:</li>
<li><code>m(0, c) = 0</code> (If you can't buy any item, you won't be getting any gain).</li>
<li><code>m(i, c) = m(i-1, c)</code> if <code>cost[i]>c</code> (if the new item is more than the cost limit, you won't be able to buy it anyways)</li>
<li><code>m(i, c) = max(m(i-1, c), m(i-1, c-cost[i]) + gain[i])</code> if <code>cost[i]<=c</code> (you're now able to buy item <code>i</code>. Either you do buy it, or you don't, and the best gain you can get out of it is the maximum of those two alternatives)</li>
</ul>
<p>To get the best price, all you have to do is compute <code>m(len(cost), dollars)</code>. You can for example do so with a <code>for</code> loop where you'll compute <code>m(i, dollars)</code> for every <code>i</code> up to <code>len(cost)</code>, by filling a list of <code>m</code> values. To figure out which items were actually bought and not only the max gain, you'll have to save them in a separate list as you're filling out <code>m</code>.</p>
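<p>For completeness, in case code turns out useful after all: the recurrence above can be sketched bottom-up. This version keeps only a one-dimensional array over budgets, iterating the budget downwards so each item is used at most once:</p>

```python
def max_gain(dollars, cost, gain):
    # best[c] = maximum gain achievable with budget c, using items seen so far
    best = [0] * (dollars + 1)
    for price, value in zip(cost, gain):
        # iterate budgets downwards so this item can't be counted twice
        for c in range(dollars, price - 1, -1):
            best[c] = max(best[c], best[c - price] + value)
    return best[dollars]

print(max_gain(10, [5, 4, 1, 10], [7, 6, 4, 12]))  # 17
```

<p>This runs in O(n · dollars) time, so a list of 100 items is no problem; to recover which items were bought, keep the full two-dimensional <code>m(i, c)</code> table and walk it backwards.</p>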
|
python|algorithm|optimization|knapsack-problem
| 0 |
1,903,498 | 26,320,638 |
Converting Pandas DataFrame to Orange Table
|
<p>I notice that this is an <a href="https://github.com/biolab/orange3/issues/68">issue on GitHub already</a>. Does anyone have any code that converts a Pandas DataFrame to an Orange Table?</p>
<p>Explicitly, I have the following table. </p>
<pre><code> user hotel star_rating user home_continent gender
0 1 39 4.0 1 2 female
1 1 44 3.0 1 2 female
2 2 63 4.5 2 3 female
3 2 2 2.0 2 3 female
4 3 26 4.0 3 1 male
5 3 37 5.0 3 1 male
6 3 63 4.5 3 1 male
</code></pre>
|
<p>The documentation of the Orange package doesn't cover all the details. <code>Table.__init__(Domain, numpy.ndarray)</code> works only for <code>int</code> and <code>float</code>, according to <code>lib_kernel.cpp</code>. </p>
<p>They really should provide a C-level interface for <code>pandas.DataFrames</code>, or at least <code>numpy.dtype("str")</code> support.</p>
<p><strong>Update</strong>: Added <code>table2df</code>; <code>df2table</code> performance improved greatly by utilizing numpy for int and float. </p>
<p>Keep this piece of script in your orange python script collections, now you are equipped with pandas in your orange environment.</p>
<p><strong>Usage</strong>: <code>a_pandas_dataframe = table2df( a_orange_table )</code> , <code>a_orange_table = df2table( a_pandas_dataframe )</code> </p>
<p><strong>Note</strong>: This script works only in Python 2.x, refer to @DustinTang 's <a href="https://stackoverflow.com/a/44859770/2281436">answer</a> for Python 3.x compatible script.</p>
<pre><code>import pandas as pd
import numpy as np
import Orange
#### For those who are familiar with pandas
#### Correspondence:
#### value <-> Orange.data.Value
#### NaN <-> ["?", "~", "."] # Don't know, Don't care, Other
#### dtype <-> Orange.feature.Descriptor
#### category, int <-> Orange.feature.Discrete # category: > pandas 0.15
#### int, float <-> Orange.feature.Continuous # Continuous = core.FloatVariable
#### # refer to feature/__init__.py
#### str <-> Orange.feature.String
#### object <-> Orange.feature.Python
#### DataFrame.dtypes <-> Orange.data.Domain
#### DataFrame.DataFrame <-> Orange.data.Table = Orange.orange.ExampleTable
#### # You will need this if you are reading sources
def series2descriptor(d, discrete=False):
if d.dtype is np.dtype("float"):
return Orange.feature.Continuous(str(d.name))
elif d.dtype is np.dtype("int"):
return Orange.feature.Continuous(str(d.name), number_of_decimals=0)
else:
t = d.unique()
if discrete or len(t) < len(d) / 2:
t.sort()
return Orange.feature.Discrete(str(d.name), values=list(t.astype("str")))
else:
return Orange.feature.String(str(d.name))
def df2domain(df):
featurelist = [series2descriptor(df.icol(col)) for col in xrange(len(df.columns))]
return Orange.data.Domain(featurelist)
def df2table(df):
# It seems they are using native python object/lists internally for Orange.data types (?)
# And I didn't find a constructor suitable for pandas.DataFrame since it may carry
# multiple dtypes
# --> the best approximate is Orange.data.Table.__init__(domain, numpy.ndarray),
# --> but the dtype of numpy array can only be "int" and "float"
# --> * refer to src/orange/lib_kernel.cpp 3059:
# --> * if (((*vi)->varType != TValue::INTVAR) && ((*vi)->varType != TValue::FLOATVAR))
# --> Documents never mentioned >_<
# So we use numpy constructor for those int/float columns, python list constructor for other
tdomain = df2domain(df)
ttables = [series2table(df.icol(i), tdomain[i]) for i in xrange(len(df.columns))]
return Orange.data.Table(ttables)
# For performance concerns, here are my results
# dtndarray = np.random.rand(100000, 100)
# dtlist = list(dtndarray)
# tdomain = Orange.data.Domain([Orange.feature.Continuous("var" + str(i)) for i in xrange(100)])
# tinsts = [Orange.data.Instance(tdomain, list(dtlist[i]) )for i in xrange(len(dtlist))]
# t = Orange.data.Table(tdomain, tinsts)
#
# timeit list(dtndarray) # 45.6ms
# timeit [Orange.data.Instance(tdomain, list(dtlist[i])) for i in xrange(len(dtlist))] # 3.28s
# timeit Orange.data.Table(tdomain, tinsts) # 280ms
# timeit Orange.data.Table(tdomain, dtndarray) # 380ms
#
# As illustrated above, utilizing constructor with ndarray can greatly improve performance
# So one may conceive better converter based on these results
def series2table(series, variable):
if series.dtype is np.dtype("int") or series.dtype is np.dtype("float"):
# Use numpy
        # Table.__init__(Domain, numpy.ndarray)
return Orange.data.Table(Orange.data.Domain(variable), series.values[:, np.newaxis])
else:
# Build instance list
# Table.__init__(Domain, list_of_instances)
tdomain = Orange.data.Domain(variable)
tinsts = [Orange.data.Instance(tdomain, [i]) for i in series]
return Orange.data.Table(tdomain, tinsts)
# 5x performance
def column2df(col):
if type(col.domain[0]) is Orange.feature.Continuous:
return (col.domain[0].name, pd.Series(col.to_numpy()[0].flatten()))
else:
tmp = pd.Series(np.array(list(col)).flatten()) # type(tmp) -> np.array( dtype=list (Orange.data.Value) )
tmp = tmp.apply(lambda x: str(x[0]))
return (col.domain[0].name, tmp)
def table2df(tab):
# Orange.data.Table().to_numpy() cannot handle strings
# So we must build the array column by column,
# When it comes to strings, python list is used
series = [column2df(tab.select(i)) for i in xrange(len(tab.domain))]
series_name = [i[0] for i in series] # To keep the order of variables unchanged
series_data = dict(series)
print series_data
return pd.DataFrame(series_data, columns=series_name)
</code></pre>
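<p>For intuition, the type-guessing rule inside <code>series2descriptor</code> can be isolated: a non-numeric column is treated as discrete (categorical) when its number of unique values is less than half the column length, otherwise as a free-form string column. A minimal stdlib-only sketch of that heuristic (the function name <code>looks_discrete</code> and the sample data are my own, not part of Orange):</p>

```python
# Stdlib-only sketch of the discrete-vs-string heuristic used by
# series2descriptor above: a non-numeric column counts as discrete
# (categorical) when it has fewer unique values than half its length;
# otherwise it is treated as a free-form string column.
def looks_discrete(values, force_discrete=False):
    unique_values = set(values)
    return force_discrete or len(unique_values) < len(values) / 2

genders = ["female", "female", "male", "male", "male", "female"]
user_ids = ["u1", "u2", "u3", "u4", "u5", "u6"]

print(looks_discrete(genders))   # 2 unique out of 6 -> discrete (True)
print(looks_discrete(user_ids))  # all unique -> string column (False)
```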
|
python|pandas|dataframe|orange
| 19 |
1,903,499 | 60,273,799 |
Left shift values in a TensorArray in Tensorflow v1.15
|
<p>I'm trying to left shift values in a <code>TensorArray</code> using another <code>TensorArray</code>, using the following code:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
an_array = tf.TensorArray(dtype=tf.float32, size=2, dynamic_size=False, clear_after_read=False, element_shape=(1, 2), name="First")
old_array = tf.TensorArray(dtype=tf.float32, size=2, dynamic_size=False, clear_after_read=False, element_shape=(1, 2), name="Second")
old_array = old_array.write(0, 2.*tf.ones((1, 2)))
old_array = old_array.write(1, 3.*tf.ones((1, 2)))
for _ in range(1, 5):
val = tf.random.normal(shape=(1, 2))
an_array = an_array.write(0, old_array.read(1))
an_array = an_array.write(1, val)
old_array = an_array.identity()
print(tf.Session().run([an_array.stack(), old_array.stack()]))
</code></pre>
<p>I load the <code>old_array</code> with some initial values. In the loop I read the second element of the <code>old_array</code> array and give it to the first element of the <code>an_array</code> array. The new value is then written to the second element of the <code>an_array</code> array. The contents of <code>an_array</code> are copied over to <code>old_array</code>.</p>
<p>In the first iteration the first element of <code>an_array</code> is, strictly speaking, not a left-shifted version of the array but comes from the initial conditions; later iterations are expected to produce the left-shifted version of <code>an_array</code>.</p>
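<p>For reference, the update I'm after is the classic shift-register pattern; a plain-Python sketch of what each iteration should do, independent of TensorFlow (the function name <code>shift_left</code> and the sample values are purely illustrative):</p>

```python
# Plain-Python sketch of the intended shift-register update: each
# iteration drops the oldest element, shifts the rest left, and
# appends the newest value at the end.
def shift_left(old, new_value):
    return old[1:] + [new_value]

state = [2.0, 3.0]          # initial conditions, as in old_array
for new_value in [10.0, 20.0, 30.0]:
    state = shift_left(state, new_value)

print(state)  # [20.0, 30.0]
```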
<p>When I run this script, I get the error</p>
<blockquote>
<p>tensorflow.python.framework.errors_impl.InvalidArgumentError: TensorArray First_1: Could not write to TensorArray index 0 because it has already been written to.</p>
</blockquote>
<p>Can someone point out what is wrong with the code? Thanks.</p>
|
<p>I got the same error when running the code on <code>tensorflow 1.15.2</code>, but it worked fine on <code>tensorflow 2.2.0</code>. This seems to be a <code>tensorflow 1.15.2</code> issue. </p>
<p>Below are the run results for tensorflow version 2.2.0.</p>
<p><strong>Tensorflow 2.2.0 -</strong></p>
<pre><code>#%tensorflow_version 2.x
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
print(tf.__version__)
an_array = tf.TensorArray(dtype=tf.float32, size=2, dynamic_size=False, clear_after_read=False, element_shape=(1, 2), name="First")
old_array = tf.TensorArray(dtype=tf.float32, size=2, dynamic_size=False, clear_after_read=False, element_shape=(1, 2), name="Second")
old_array = old_array.write(0, 2.*tf.ones((1, 2)))
old_array = old_array.write(1, 3.*tf.ones((1, 2)))
for _ in range(1, 5):
val = tf.random.normal(shape=(1, 2))
an_array = an_array.write(0, old_array.read(1))
an_array = an_array.write(1, val)
old_array = an_array.identity()
print(tf.compat.v1.Session().run([an_array.stack(), old_array.stack()]))
</code></pre>
<p><strong>Output -</strong></p>
<pre><code>2.2.0
[array([[[ 0.6800341 , -0.72848713]],
[[ 0.6648014 , -0.40344566]]], dtype=float32), array([[[ 0.6800341 , -0.72848713]],
[[ 0.6648014 , -0.40344566]]], dtype=float32)]
</code></pre>
<p>Hope this answers your question. Happy Learning.</p>
|
python|arrays|python-3.x|tensorflow
| 0 |