Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, 10 to 150 chars) | question (string, 21 to 64.2k chars) | answer (string, 19 to 59.4k chars) | tags (string, 5 to 112 chars) | score (int64, -10 to 17.3k)
---|---|---|---|---|---|---
1,902,100 | 34,171,568 |
return value from python script to shell script
|
<p>I am new to Python. I am creating a Python script that returns the string "hello world", and a shell script that calls the Python script.</p>
<ol>
<li>I need to pass arguments from the shell to Python.</li>
<li>I need to print the value returned from Python in the shell script.</li>
</ol>
<p>This is my code:</p>
<p><strong>shellscript1.sh</strong></p>
<pre><code>#!/bin/bash
# script for testing
clear
echo "............script started............"
sleep 1
python python/pythonScript1.py
exit
</code></pre>
<p><strong>pythonScript1.py</strong></p>
<pre><code>#!/usr/bin/python
import sys

print "Starting python script!"
try:
    sys.exit('helloWorld1')
except:
    sys.exit('helloWorld2')
</code></pre>
|
<p>You can't return a message as the exit code, only numbers. In bash the exit code is accessible via <code>$?</code>. You can also use <code>sys.argv</code> to access the command-line arguments:</p>
<pre><code>import sys

if sys.argv[1] == 'hi':
    print 'Salaam'
sys.exit(0)
</code></pre>
<p>in shell:</p>
<pre><code>#!/bin/bash
# script for testing
clear
echo "............script started............"
sleep 1
result=`python python/pythonScript1.py "hi"`
if [ "$result" == "Salaam" ]; then
    echo "script returned correct response"
fi
</code></pre>
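<p>As a side note, the same capture-the-output round trip can be sketched entirely from Python 3 with <code>subprocess</code>; the inline child script below is a hypothetical stand-in for <code>pythonScript1.py</code>:</p>

```python
import subprocess
import sys

# Child one-liner standing in for pythonScript1.py: prints a greeting based on argv[1]
child = "import sys; print('Salaam' if sys.argv[1] == 'hi' else 'nope')"

# Equivalent of: result=`python pythonScript1.py "hi"` in the shell script
proc = subprocess.run([sys.executable, "-c", child, "hi"],
                      capture_output=True, text=True)

result = proc.stdout.strip()
print(result)           # Salaam
print(proc.returncode)  # 0 -- the numeric exit status that bash exposes as $?
```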
|
python|linux|bash|shell
| 57 |
1,902,101 | 39,606,589 |
PySpark DataFrame - Join on multiple columns dynamically
|
<p>Let's say I have two DataFrames on Spark:</p>
<pre><code>firstdf = sqlContext.createDataFrame([{'firstdf-id':1,'firstdf-column1':2,'firstdf-column2':3,'firstdf-column3':4}, \
    {'firstdf-id':2,'firstdf-column1':3,'firstdf-column2':4,'firstdf-column3':5}])

seconddf = sqlContext.createDataFrame([{'seconddf-id':1,'seconddf-column1':2,'seconddf-column2':4,'seconddf-column3':5}, \
    {'seconddf-id':2,'seconddf-column1':6,'seconddf-column2':7,'seconddf-column3':8}])
</code></pre>
<p>Now I want to join them by multiple columns (any number bigger than one).</p>
<p>What I have is an array of columns of the first DataFrame and an array of columns of the second DataFrame, these arrays have the same size, and I want to join by the columns specified in these arrays. For example:</p>
<pre><code>columnsFirstDf = ['firstdf-id', 'firstdf-column1']
columnsSecondDf = ['seconddf-id', 'seconddf-column1']
</code></pre>
<p>Since these arrays have variable sizes I can't use this kind of approach:</p>
<pre><code>from pyspark.sql.functions import *

firstdf.join(seconddf, \
    (col(columnsFirstDf[0]) == col(columnsSecondDf[0])) &
    (col(columnsFirstDf[1]) == col(columnsSecondDf[1])), \
    'inner'
)
</code></pre>
<p>Is there any way that I can join on multiple columns dynamically?</p>
|
<p>Why not use a simple comprehension:</p>
<pre><code>firstdf.join(
    seconddf,
    [col(f) == col(s) for (f, s) in zip(columnsFirstDf, columnsSecondDf)],
    "inner"
)
</code></pre>
<p>Since the conditions are combined with a logical AND anyway, it is enough to provide a list of conditions without the <code>&amp;</code> operator.</p>
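<p>The pairing step itself can be checked in plain Python (no Spark session needed); <code>zip</code> pairs the two lists positionally, yielding one join condition per pair:</p>

```python
columnsFirstDf = ['firstdf-id', 'firstdf-column1']
columnsSecondDf = ['seconddf-id', 'seconddf-column1']

# Each pair becomes one equality condition; join() ANDs the list together.
pairs = [(f, s) for (f, s) in zip(columnsFirstDf, columnsSecondDf)]
print(pairs)
# [('firstdf-id', 'seconddf-id'), ('firstdf-column1', 'seconddf-column1')]
```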
|
python|apache-spark|dataframe|pyspark|apache-spark-sql
| 15 |
1,902,102 | 38,666,464 |
Calculating time between text field interactions
|
<p>I have a dataset of text field interactions across several dozen users of my application across the span of several months. I'm trying to calculate the average time between keystrokes in pandas. The data look something like this:</p>
<pre><code>timestamp      before_text         after_text
1453481138188  NULL                a
1453481138600  a                   ab
1453481138900  ab                  abc
1453481139400  abc                 abcd
1453484000000  Enter some numbers  1
1453484000100  1                   12
1453484000600  12                  123
</code></pre>
<p><code>timestamp</code> contains the unix time that the user pressed the key, <code>before_text</code> is what the text field contained before the user hit the key, and <code>after_text</code> is what the field looked like after the keystroke.</p>
<p>What's the best way to go about doing this? I know that's not as simple as doing something like:</p>
<pre><code>(df["timestamp"] - df["timestamp"].shift()).mean()
</code></pre>
<p>because this will calculate a very large time difference on the boundary between two interactions. It seems like the best way to do this would be to pass some function of each row to <code>df.groupby</code> so that I can apply the above snippet to each row. If I had this <code>magic_function</code> I could do something like:</p>
<pre><code>df.groupby(magic_function).apply(lambda x: x["timestamp"] - x["timestamp"].shift()).mean()
</code></pre>
<p>What's a good way to implement <code>magic_function</code>, or am I thinking about this all wrong?</p>
|
<p>I'd do it by calculating the text difference between 'before' and 'after'. If the difference is greater than some threshold, then that is a new session.</p>
<p>It requires <code>from Levenshtein import distance as ld</code>. I installed it via <code>pip</code> like so:</p>
<pre><code>pip install python-levenshtein
</code></pre>
<p>Then:</p>
<pre><code>from Levenshtein import distance as ld
import pandas as pd

# taking just these two columns and transposing and back filling.
# I back fill for one reason, to fill that pesky NA with after text.
before_after = df[['before_text', 'after_text']].T.bfill()
distances = before_after.apply(lambda x: ld(*x))

# threshold should be how much distance constitutes an obvious break in sessions.
threshold = 2
magic_function = (distances > threshold).cumsum()

df.groupby(magic_function) \
  .apply(lambda x: x["timestamp"] - x["timestamp"].shift()) \
  .mean()

362.4
</code></pre>
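<p>If installing <code>python-levenshtein</code> is not an option, the same session-splitting idea can be sketched with nothing but the standard library (a small hand-rolled edit distance, applied to the sample data above):</p>

```python
def ld(a, b):
    # classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

rows = [(1453481138188, None, "a"), (1453481138600, "a", "ab"),
        (1453481138900, "ab", "abc"), (1453481139400, "abc", "abcd"),
        (1453484000000, "Enter some numbers", "1"),
        (1453484000100, "1", "12"), (1453484000600, "12", "123")]

threshold = 2
gaps, prev_ts = [], None
for ts, before, after in rows:
    # a large edit between before/after marks the start of a new session
    new_session = before is None or ld(before, after) > threshold
    if not new_session and prev_ts is not None:
        gaps.append(ts - prev_ts)
    prev_ts = ts

mean_gap = sum(gaps) / len(gaps)
print(mean_gap)   # 362.4 -- matching the pandas result above
```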
|
python|pandas|time-series
| 2 |
1,902,103 | 38,512,485 |
Highlight specific points in matplotlib scatterplot
|
<p>I have a CSV with 12 columns of data. <a href="http://i.stack.imgur.com/7FGZE.png" rel="nofollow">I'm focusing on these 4 columns</a> </p>
<p>Right now I've plotted "Pass def" and "Rush def". I want to be able to highlight specific points on the scatter plot. For example, I want to highlight 1995 DAL point on the plot and change that point to a color of yellow. </p>
<p>I've started with a for loop but I'm not sure where to go. Any help would be great.</p>
<p>Here is my code: </p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import csv
import random

df = pd.read_csv('teamdef.csv')

x = df["Pass Def."]
y = df["Rush Def."]
z = df["Season"]

points = []
for point in df["Season"]:
    if point == 2015.0:
        print(point)

plt.figure(figsize=(19,10))
plt.scatter(x,y,facecolors='black',alpha=.55, s=100)
plt.xlim(-.6,.55)
plt.ylim(-.4,.25)
plt.xlabel("Pass DVOA")
plt.ylabel("Rush DVOA")
plt.title("Pass v. Rush DVOA")
plt.show()
</code></pre>
|
<p>You can layer multiple scatters, so the easiest way is probably </p>
<pre class="lang-py prettyprint-override"><code>plt.scatter(x,y,facecolors='black',alpha=.55, s=100)
plt.scatter(x, 2015.0, color="yellow")
</code></pre>
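<p>A sketch of a more targeted highlight, assuming the CSV has (hypothetical) <code>Team</code> and <code>Season</code> columns to identify the 1995 DAL row; a boolean mask selects just that row's coordinates for the second, yellow scatter layer:</p>

```python
import pandas as pd

# Hypothetical miniature of teamdef.csv -- column names are assumptions.
df = pd.DataFrame({"Season": [1994.0, 1995.0, 1995.0],
                   "Team": ["DAL", "DAL", "SF"],
                   "Pass Def.": [0.10, -0.20, 0.05],
                   "Rush Def.": [0.00, -0.10, 0.20]})

# Mask for the point(s) to highlight
mask = (df["Season"] == 1995.0) & (df["Team"] == "DAL")
highlight_x = df.loc[mask, "Pass Def."].tolist()
highlight_y = df.loc[mask, "Rush Def."].tolist()
print(highlight_x, highlight_y)   # [-0.2] [-0.1]

# Layered on top of the base scatter:
# plt.scatter(df["Pass Def."], df["Rush Def."], facecolors="black")
# plt.scatter(highlight_x, highlight_y, color="yellow")
```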
|
python|pandas|csv|matplotlib|scatter-plot
| 8 |
1,902,104 | 40,347,986 |
decorate a python function with a method from a class object
|
<p>I'm looking to use a decorator on a function. The thing is, the decorator function is defined inside a class, and it has to be specific to an object of that class. For example:</p>
<pre><code>class Foo:
    def __init__(self):
        self.a = 1

    def set_a(self, val):
        self.a = val

    def bar(func):
        def _args(*args, **kwargs):
            func(*args, **kwargs) + self.a
        return _args

if __name__ == "__main__":
    foo = Foo()
    foo2 = Foo()
    foo.set_a = 2

    @foo.bar
    def multip(a, b):
        return a * b
    multip(1, 2)  # here i would expect answer to be 4

    foo.set_a = 3
    multip(1, 2)  # here i would expect answer to be 5

    @foo2.bar
    def multip2(a, b):
        return a * b
    multip2(1, 2)  # here i would expect the answer to be 3
</code></pre>
|
<p>So you can do what you are looking for, but I'm not sure I would consider this a good decorator. Why not just go for a class decorator and use <code>__init__()</code> and <code>__call__()</code>?</p>
<p>Anyway here's your fixed code:</p>
<pre><code>class Foo:
    def __init__(self, a=1):
        self.a = a

    def bar(self, func):
        #       ^^^^ you need self
        def _args(*args, **kwargs):
            return func(*args, **kwargs) + self.a
            # ^^^^^^ You need to return here.
        return _args

foo = Foo()

@foo.bar
def multip(a, b):
    return a * b

multip(1, 2)
# 3
foo.a = 2
multip(1, 2)
# 4
foo.a = 3
multip(1, 2)
# 5
</code></pre>
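<p>The class-decorator alternative mentioned above could look something like this (a sketch with <code>__init__()</code> and <code>__call__()</code>; the <code>Adder</code> name is made up for illustration):</p>

```python
class Adder:
    """Decorator object: adds its current `a` to whatever the wrapped function returns."""
    def __init__(self, a=1):
        self.a = a

    def __call__(self, func):
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs) + self.a
        return wrapper

adder = Adder(a=1)

@adder
def multip(a, b):
    return a * b

print(multip(1, 2))   # 3
adder.a = 2
print(multip(1, 2))   # 4 -- the decorator instance's state is picked up live
```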
|
python|class|decorator
| 0 |
1,902,105 | 51,656,758 |
How to get variable from another class without instantiation?
|
<p>How would I pass a variable from another class without instantiating the class? The reason I do not want to instantiate the class is that I would have to pass self.master, which would mess up the window of the class I am passing the variable to.</p>
<pre><code>class MainPageGUI:
    def __init__(self, master):
        self.master = master
        self.master.title("Jans Corp")
        self.master.configure(background='lightgrey')
        self.master.geometry("1200x800")

        listbox = tk.Listbox(self.master, width=150, height=35) # variable i would like to use in the other class
        listbox.place(x=150, y=130)
</code></pre>
<p>Class I would like to pass the variable to:</p>
<pre><code>class NewEmployee:
    def __init__(self, master): # Creating basic GUI to add employees
        self.master = master
        self.master.title("Jans Corp")
        self.master.configure(background="lightgrey")
        self.master.geometry("300x500")

        aa = MainPageGUI(self.master) ## my attempt at it, its wrong as the class get
        self.listbox = self.aa.listbox
</code></pre>
|
<p>In general terms, the answer to "How to get variable from another class without instantiation?" is "you can't".</p>
<p>Your code example doesn't provide enough information to give a more concrete example. We don't know, for example, how, when, or where you create the instance of <code>MainPageGUI</code>, or how, when, and where you create an instance of <code>NewEmployee</code>.</p>
<p>I'm going to assume you've already created an instance of <code>MainPageGUI</code> before creating a <code>NewEmployee</code>.</p>
<p>In your case, you're trying to access something in <code>MainPageGUI</code> from another class. You don't want to create <em>another</em> <code>MainPageGUI</code>. Instead, what you need is a reference to the original <code>MainPageGUI</code>. Since that class must be instantiated somewhere, you simply need to pass that instance down when creating a new <code>NewEmployee</code>.</p>
<p>That means that you need to define <code>NewEmployee</code> something like this:</p>
<pre><code>class NewEmployee:
def __init__(self, master, main_gui):
self.main_gui = main_gui
...
</code></pre>
<p>Then, anywhere in <code>NewEmployee</code> where you need to reference the listbox, you would use <code>self.main_gui.listbox</code>.</p>
<p>Of course, this also requires that <code>MainGUI</code> actually defines <code>self.listbox</code>. Right now your code does <code>listbox = tk.Listbox(...)</code> when it needs to be <code>self.listbox = tk.Listbox(...)</code>.</p>
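<p>Stripped of tkinter, the pattern looks like this; the point is that <code>NewEmployee</code> stores a reference to the one <code>MainPageGUI</code> that already exists rather than building a second one:</p>

```python
class MainPageGUI:
    def __init__(self):
        # stands in for: self.listbox = tk.Listbox(self.master, ...)
        self.listbox = ["existing listbox widget"]

class NewEmployee:
    def __init__(self, main_gui):
        # keep a reference to the already-created MainPageGUI instance
        self.main_gui = main_gui

main = MainPageGUI()
emp = NewEmployee(main)
print(emp.main_gui.listbox is main.listbox)   # True -- same object, no second GUI
```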
|
python|python-3.x|tkinter
| 1 |
1,902,106 | 10,031,945 |
Trouble using django.template Context in unittest
|
<p>I got a little confusing situation here when I use the Context from django.template.</p>
<p>The following works in the python shell:</p>
<pre><code>>>> from django.template import Context, Template
>>> b=Template('TEST').render(Context())
>>> print b
TEST
</code></pre>
<p>When I use the very same Code in a unittest, I get the follwing Error:</p>
<pre><code>Traceback (most recent call last):
  File "/newsletterapi/tests.py", line 25, in setUp
    b = Template('TEST').render(Context())
  File "/opt/python2.7/lib/python2.7/site-packages/django/template/base.py", line 121, in render
    context.render_context.push()
AttributeError: 'Context' object has no attribute 'render_context'
</code></pre>
<p>The unittest looks like this:</p>
<pre><code>from django.test import TestCase
from myproject.newsletterapi.models import Newsletter
from django.utils.termcolors import colorize
from django.db import IntegrityError
from django.template import Template, Context

import random
import datetime
from decimal import *
import string

class NewsletterTest(TestCase):

    def setUp(self):
        b = Template('TEST').render(Context()) # this is line 25
        self.newsletter = Newsletter(body=b)
        self.newsletter.save()
        ### ... continues here
</code></pre>
<p>Does anyone have an idea why this works in the shell but not in the unittest? I appreciate every hint.</p>
|
<p>OK, I got the solution:</p>
<pre><code>from decimal import *
</code></pre>
<p>is the bad one: the <code>decimal</code> module also exports a <code>Context</code> class, and the star import shadows Django's <code>Context</code>.
Thanks to anyone reading!</p>
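<p>The shadowing is easy to reproduce without Django; any earlier <code>Context</code> binding is silently replaced by the star import (the stand-in class below plays the role of <code>django.template.Context</code>):</p>

```python
class Context:              # stand-in for django.template.Context
    pass

TemplateContext = Context   # keep a second reference for comparison

from decimal import *       # decimal's __all__ also exports a Context class

import decimal
print(Context is decimal.Context)     # True  -- the star import won
print(Context is TemplateContext)     # False -- the original binding is gone
```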
|
python|django|unit-testing|django-1.3
| 7 |
1,902,107 | 26,272,684 |
How can I show/hide toolbar depending on mouse movements and mouse position inside window?
|
<p>Hi, I am using Python and GTK+. In my GUI I have 2 toolbars. I want to show the first toolbar only when the user moves the mouse, then hide it again after a few seconds; as for the second toolbar, I want to show it when the mouse is at particular x,y coordinates. How can I achieve this?</p>
<p>EDIT:</p>
<p>I am creating some kind of media player, so I want the toolbars to disappear while the user is not moving the mouse (in the case of the playerMenu toolbar), or when the user doesn't move it to a specific location (in the case of the ribbonBar toolbar). I am using GTK+; here is my code for the toolbars:</p>
<pre><code>class Player(Gtk.Window):
    def __init__(self):
        Gtk.Window.__init__(self)

    def build_UI(self):
        container = Gtk.Box(orientation=Gtk.Orientation.VERTICAL)
        ribbonBar = Gtk.Toolbar()
        playerMenu = Gtk.Toolbar()

    def mouse_moved(self):
        #TO-DO here I should check cordinates for example I want to see if mouse.y=window.height-50px and I would like to show ribbonaBar
        #after that Gdk.threads_add_timeout(1000,4000,ribbonBar.hide)
        #TO-DO here I show playerMenu toolbar if mouse is moved
        # smt like playerMenu.show()
        #after that I would call Gdk.threads_add_timeout(1000,4000,playerMenu.hide)
        # to hide it again after 4 seconds
</code></pre>
<p>I should connect my window to some mouse event, but I don't know the event name, and how can I get mouse.x and mouse.y?</p>
|
<p>Why do you want to do this? Trying to use widgets that disappear when you're not moving the mouse is rather annoying, IMHO.</p>
<p>But anyway...</p>
<p>To toggle the visibility of a widget use the show() and hide() methods, or map() and unmap() if you don't want the other widgets in your window to move around. To handle timing, use gobject.timeout_add(), and you'll need to connect() your window to "motion_notify_event" and set the appropriate event masks: gtk.gdk.POINTER_MOTION_MASK and probably gtk.gdk.POINTER_MOTION_HINT_MASK. The Event object that your motion_notify callback receives will contain x,y mouse coordinates.</p>
<p>At least, that's how I'd do it in GTK2; I don't know GTK3.</p>
<p>If you want more specific help you need to post some code.</p>
<hr>
<p>I see that you've posted some code, but it doesn't have a lot of detail... But I understand that GTK can be a bit overwhelming. I haven't used it much in the last 5 years, so I'm a bit rusty, but I just started getting into it again a couple of months ago and thought your question would give me some good practice. :)</p>
<p>I won't claim that the code below is the best way to do this, but it works. And hopefully someone who is a GTK expert will come along with some improvements.</p>
<p>This program builds a simple Toolbar with a few buttons. It puts the Toolbar into a Frame to make it look nicer, and it puts the Frame into an Eventbox so we can receive events for everything in the Frame, i.e., the Toolbar and its ToolItems. The Toolbar only appears when the mouse pointer isn't moving and disappears after a few seconds, unless the pointer is hovering over the Toolbar.</p>
<p>This code also shows you how to get and process mouse x,y coordinates.</p>
<pre><code>#!/usr/bin/env python

''' A framed toolbar that disappears when the pointer isn't moving
    or hovering in the toolbar.

    A response to the question at
    http://stackoverflow.com/questions/26272684/how-can-i-show-hide-toolbar-depending-on-mouse-movements-and-mouse-position-insi

    Written by PM 2Ring 2014.10.09
'''

import pygtk
pygtk.require('2.0')
import gtk
import gobject

if gtk.pygtk_version < (2, 4, 0):
    print 'pygtk 2.4 or better required, aborting.'
    exit(1)

class ToolbarDemo(object):
    def button_cb(self, widget, data=None):
        #print "Button '%s' %s clicked" % (data, widget)
        print "Button '%s' clicked" % data
        return True

    def show_toolbar(self, show):
        if show:
            #self.frame.show()
            self.frame.map()
        else:
            #self.frame.hide()
            self.frame.unmap()

    def timeout_cb(self):
        self.show_toolbar(self.in_toolbar)
        if not self.in_toolbar:
            self.timer = False
        return self.in_toolbar

    def start_timer(self, interval):
        self.timer = True
        #Timer will restart if callback returns True
        gobject.timeout_add(interval, self.timeout_cb)

    def motion_notify_cb(self, widget, event):
        if not self.timer:
            #print (event.x, event.y)
            self.show_toolbar(True)
            self.start_timer(self.time_interval)
        return True

    def eventbox_cb(self, widget, event):
        in_toolbar = event.type == gtk.gdk.ENTER_NOTIFY
        #print event, in_toolbar
        self.in_toolbar = in_toolbar
        #### self.show_toolbar(in_toolbar) does BAD things :)
        if in_toolbar:
            self.show_toolbar(True)
        return True

    def quit(self, widget):
        gtk.main_quit()

    def __init__(self):
        #Is pointer over the toolbar Event box?
        self.in_toolbar = False
        #Is pointer motion timer running?
        self.timer = False
        #Time in milliseconds after pointer stops before toolbar is hidden
        self.time_interval = 3000

        self.window = win = gtk.Window(gtk.WINDOW_TOPLEVEL)
        width = gtk.gdk.screen_width() // 2
        height = gtk.gdk.screen_height() // 5
        win.set_size_request(width, height)
        win.set_title("Magic Toolbar demo")
        win.set_border_width(10)
        win.connect("destroy", self.quit)

        #self.motion_handler = win.connect("motion_notify_event", self.motion_notify_cb)
        win.connect("motion_notify_event", self.motion_notify_cb)
        win.add_events(gtk.gdk.POINTER_MOTION_MASK |
                       gtk.gdk.POINTER_MOTION_HINT_MASK)

        box = gtk.VBox()
        box.show()
        win.add(box)

        #An EventBox to capture events inside Frame,
        # i.e., for the Toolbar and its child widgets.
        ebox = gtk.EventBox()
        ebox.show()
        ebox.set_above_child(True)
        ebox.connect("enter_notify_event", self.eventbox_cb)
        ebox.connect("leave_notify_event", self.eventbox_cb)
        box.pack_start(ebox, expand=False)

        self.frame = frame = gtk.Frame()
        frame.show()
        ebox.add(frame)

        toolbar = gtk.Toolbar()
        #toolbar.set_border_width(5)
        toolbar.show()
        frame.add(toolbar)

        def make_toolbutton(text):
            button = gtk.ToolButton(None, label=text)
            #button.set_expand(True)
            button.connect('clicked', self.button_cb, text)
            button.show()
            return button

        def make_toolsep():
            sep = gtk.SeparatorToolItem()
            sep.set_expand(True)
            #sep.set_draw(False)
            sep.show()
            return sep

        for i in xrange(5):
            button = make_toolbutton('ToolButton%s' % (chr(65+i)))
            toolbar.insert(button, -1)
            #toolbar.insert(make_toolsep(), -1)

        for i in xrange(1, 9, 2):
            toolbar.insert(make_toolsep(), i)

        button = gtk.Button('_Quit')
        button.show()
        box.pack_end(button, False)
        button.connect("clicked", self.quit)

        win.show()
        frame.unmap()

def main():
    ToolbarDemo()
    gtk.main()

if __name__ == "__main__":
    main()
</code></pre>
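<p>The timing logic is independent of GTK, so here is a toolkit-free sketch of the show-on-motion / hide-after-idle state machine used above (time is passed in explicitly so it can be exercised without a main loop; the class name is made up):</p>

```python
class AutoHide:
    """Show a widget on pointer motion; hide it after `delay` seconds of inactivity."""
    def __init__(self, delay):
        self.delay = delay
        self.visible = False
        self._last_motion = None

    def on_motion(self, now):
        # a motion event always (re)shows the widget and restarts the clock
        self.visible = True
        self._last_motion = now

    def tick(self, now):
        # called periodically, like the gobject.timeout_add() callback above
        if self.visible and now - self._last_motion >= self.delay:
            self.visible = False

bar = AutoHide(delay=3.0)
bar.on_motion(now=0.0)
bar.tick(now=1.0)
print(bar.visible)   # True  -- still inside the 3 s grace period
bar.tick(now=3.5)
print(bar.visible)   # False -- hidden after 3 s without motion
```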
|
python|user-interface|gtk|gtk3
| 1 |
1,902,108 | 60,226,662 |
downloading specific files resides in s3 subfolder into my local machine using boto3
|
<p>I have a file that resides in an S3 bucket subfolder. Bucket name: "testbucket", folder name: "folder1", file name: "sample.csv". I want to download it to my local machine: "/Users/sameer/desktop/folder1".</p>
<p>What is the most efficient way to download it if the file size is more than 3 GB?</p>
|
<p>Use the AWS CLI</p>
<pre class="lang-sh prettyprint-override"><code>aws s3 cp s3://testbucket/folder1/sample.csv /Users/sameer/desktop/folder1
</code></pre>
<p>if you must use boto3:</p>
<pre class="lang-py prettyprint-override"><code>import boto3
s3_client = boto3.client('s3')
s3_client.download_file('testbucket', 'folder1/sample.csv', '/Users/sameer/desktop/folder1/sample.csv')
</code></pre>
|
python|amazon-web-services|amazon-s3|boto3
| 1 |
1,902,109 | 1,938,755 |
Checking if A is superclass of B in Python
|
<pre><code>class p1(object): pass
class p2(p1): pass
</code></pre>
<p>So p2 is the subclass of p1. Is there a way to find out programmatically that p1 is [one of] the superclass[es] of p2 ?</p>
|
<p>Using <code>&lt;class&gt;.__bases__</code> seems to be what you're looking for...</p>
<pre><code>>>> class p1(object): pass
>>> class p2(p1): pass
>>> p2.__bases__
(<class '__main__.p1'>,)
</code></pre>
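<p>For the programmatic check the question asks about, the built-in <code>issubclass()</code> covers indirect bases too, and <code>__mro__</code> lists every ancestor:</p>

```python
class p1(object): pass
class p2(p1): pass

print(issubclass(p2, p1))   # True  -- p1 is a (possibly indirect) base of p2
print(p1 in p2.__mro__)     # True  -- p1 appears in p2's method resolution order
print(issubclass(p1, p2))   # False -- the relation is one-way
```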
|
python|reflection|superclass
| 46 |
1,902,110 | 63,120,662 |
Python pandas - How to get a data from a column in a dataframe with the data from an other column
|
<p>I have a dataframe, and with a value inside the dataframe I need to get the values in the same row from other columns. In my example I need to get the value from the column Total that corresponds to A1, A2 and A3. I did the following:</p>
<pre><code>df=(['A1', 140000],['A2', 100000],['A3', 400000])
df=pd.DataFrame({'A': ['A1', 'A2', 'A3'], 'Total': [14000, 10000, 40000]})
CA1 = df.loc[df['A']=='A1']['Total']
CA2 = df.loc[df['A']=='A2']['Total']
CT = df.loc[df['A']=='A3']['Total']
print(CA1)
print(CA2)
print(CT)
</code></pre>
<p>but I get this result, and I would need to get only the values (14000, 10000, 40000). How could I do it?</p>
<pre><code>0    14000
Name: Total, dtype: int64
1    10000
Name: Total, dtype: int64
2    40000
Name: Total, dtype: int64
</code></pre>
|
<p>I created a data frame with an 'A4' element (to show filtering):</p>
<pre><code>df = pd.DataFrame({'A': ['A1', 'A2', 'A3', 'A4'],
'Total': [14000, 10000, 40000, 1]})
df
</code></pre>
<p>Then I converted the column 'A' to an index, selected 'A1' to (and including) 'A3', converted to a Series, and then to a list:</p>
<pre><code>df.set_index('A').sort_index().loc['A1':'A3'].squeeze().to_list()
</code></pre>
<p>Result is:</p>
<pre><code>[14000, 10000, 40000]
</code></pre>
<p>If you want the grand total as a scalar, change from <code>.squeeze().to_list()</code> to <code>.sum()</code></p>
<p>UPDATE</p>
<p>You can obtain the sum like this. The function <code>squeeze()</code> converts a data frame with 1 column to a series. The sum of a series is a scalar.</p>
<pre><code>scalar = df.set_index('A').sort_index().loc['A1':'A3'].squeeze().sum()
print(scalar)
64000
</code></pre>
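<p>If only the single scalar per label is wanted (as in the original <code>CA1</code>/<code>CA2</code> attempts), one alternative sketch is to select with <code>.loc</code> and collapse the one-element Series with <code>.item()</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': ['A1', 'A2', 'A3'], 'Total': [14000, 10000, 40000]})

# Row mask plus column label gives a one-element Series; .item() unwraps it.
ca1 = df.loc[df['A'] == 'A1', 'Total'].item()
print(ca1)   # 14000
```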
|
python|pandas
| 0 |
1,902,111 | 63,044,800 |
can't print variable from previous function in python
|
<p>I have got a database program to keep data in, and I can't solve this problem:</p>
<p>I have got two functions. When you input A into the program, the function called addy() starts and asks for more input into a variable, then it returns to the main screen. Then the user can input S, which starts show(), and then it's supposed to show what you have added into the variable.</p>
<p>PROBLEM:
It's not getting the value from the previous definition.</p>
<p>CODE:</p>
<pre><code>def addy():
    os.system('cls')
    addel = input('what is the name of the operating system?: \n')
    os.system('cls')
    time.sleep(1)
    print(addel + ' Has been added to the database!')
    time.sleep(2)
    program()

def show():
    print('Heres a list of the operating systems you have added:')
    time.sleep(5)
    program()

addel = addy()
print(addel) # this should print the value from the previous function
</code></pre>
|
<p>There are 2 reasons why:</p>
<ol>
<li><code>addel</code> is a local variable, not a global one. Therefore, you can only use it inside your <code>addy</code> function.</li>
<li>Assuming your intent was to use the value anyway (which is what it seems), you wrote</li>
</ol>
<pre><code>addel = addy()
</code></pre>
<p>but the function <code>addy</code> has no return value, so your code won't work.
To fix this, write</p>
<pre><code>return addel
</code></pre>
<p>as the last line in your <code>addy</code> function. Then it will work, because now the function has a return value.</p>
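<p>A minimal runnable sketch of the fix (with a hard-coded value standing in for the <code>input()</code> call):</p>

```python
def addy():
    addel = 'Windows'   # stands in for: input('what is the name of the operating system?: \n')
    return addel        # hand the value back to the caller

def show():
    addel = addy()      # capture the returned value
    print(addel)

show()   # prints: Windows
```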
|
python|python-3.x
| 0 |
1,902,112 | 32,322,771 |
What is the downloader option --restrict-filenames for python youtube-dl
|
<p>Hello all, I was wondering what the option is in the Python-based use of <code>youtube-dl</code> that corresponds to this terminal argument: <code>--restrict-filenames</code>. What needs to be added to the options passed in Python?</p>
<p>Thanks in advance, Ondeckshooting</p>
|
<p>Per the documentation, that option does not require an argument. So a command
such as this will suffice:</p>
<pre><code>youtube-dl --restrict-filenames 73VCKpU9ZnA
</code></pre>
<p>Here is the option detail:</p>
<blockquote>
<p>Restrict filenames to only ASCII characters, and avoid "&" and spaces in
filenames</p>
</blockquote>
<p>As far as what ASCII is, this script will reveal:</p>
<pre><code>#!/usr/bin/awk -f
BEGIN {
    while (z++ < 0x7e) {
        $0 = sprintf("%c", z)
        if (/[[:graph:]]/) printf $0
    }
}
</code></pre>
<p>Result</p>
<pre class="lang-none prettyprint-override"><code>!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~
</code></pre>
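<p>When embedding <code>youtube-dl</code> from Python instead of the CLI, command-line flags map to keys in the options dict passed to <code>YoutubeDL</code>; to my knowledge <code>--restrict-filenames</code> corresponds to the <code>restrictfilenames</code> key (sketch below; the download call is left commented since it needs the package installed and network access):</p>

```python
# Options dict for youtube_dl.YoutubeDL -- the key name mirrors the CLI flag
ydl_opts = {'restrictfilenames': True}

# with youtube_dl.YoutubeDL(ydl_opts) as ydl:   # requires youtube_dl installed
#     ydl.download(['73VCKpU9ZnA'])
print(ydl_opts['restrictfilenames'])   # True
```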
|
python|macos|youtube-dl
| 2 |
1,902,113 | 44,301,045 |
Wraps gives TypeError when used in a decorator
|
<p>I created a decorator to print the name of the function it decorates and it works:</p>
<pre><code>>>> def debug(func):
...     msg = func.__qualname__
...     def wrapper(*args, **kwargs):
...         print(msg)
...         return func(*args, **kwargs)
...     return wrapper
...
>>> @debug
... def add(x, y):
...     return x+y
...
>>> add(1,2)
add
3
</code></pre>
<p>Now I wanted to apply the wraps decorator to the wrapper, but when I did I got the error "TypeError: update_wrapper() got multiple values for argument 'wrapped'":</p>
<pre><code>>>> from functools import wraps
>>>
>>> def debug(func):
...     msg = func.__qualname__
...     @wraps
...     def wrapper(*args, **kwargs):
...         print(msg)
...         return func(*args, **kwargs)
...     return wrapper
...
>>> @debug
... def add(x, y):
...     return x+y
...
>>> add(1,2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: update_wrapper() got multiple values for argument 'wrapped'
>>>
</code></pre>
<p>What am I doing wrong, and why does the error occur?</p>
|
<p>Got it. Sorry, the issue was that I used <code>wraps</code> incorrectly as a decorator: it must be applied as <code>@wraps(func)</code>, not as a bare <code>@wraps</code>. Here is the correct code:</p>
<pre><code>def debug(func):
    msg = func.__qualname__
    @wraps(func)
    def wrapper(*args, **kwargs):
        print(msg)
        return func(*args, **kwargs)
    return wrapper
</code></pre>
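<p>With the corrected form, <code>@wraps(func)</code> also copies the wrapped function's metadata onto the wrapper, which is its whole point:</p>

```python
from functools import wraps

def debug(func):
    msg = func.__qualname__
    @wraps(func)                 # wraps takes the *wrapped* function as its argument
    def wrapper(*args, **kwargs):
        print(msg)
        return func(*args, **kwargs)
    return wrapper

@debug
def add(x, y):
    return x + y

print(add(1, 2))       # prints "add", then 3
print(add.__name__)    # 'add' -- preserved by @wraps
```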
|
python|python-3.x|python-decorators
| 9 |
1,902,114 | 44,089,314 |
python operators for numpy.ndarray like class
|
<p>I have a class that has a <code>numpy.ndarray</code> as a member and behaves similarly to <code>ndarray</code> by overloading <code>__getitem__</code> and <code>__getattr__</code>:</p>
<pre><code>class Foo(object):
    def __init__(self, values):
        # numpy.ndarray
        self._values = values

    def __getitem__(self, key):
        return self._values[key]

    def __getattr__(self, name):
        return getattr(self._values, name)
</code></pre>
<p>Thus I can use the numpy attributes like <code>shape</code>, <code>size</code>, ... directly on an object of this class. I can also do things like <code>obj.__add__(1)</code>, which will add 1 to <code>obj._values</code>. However, if I try <code>obj + 1</code> it raises "unsupported operand type(s)". I would like to get the same behaviour for <code>obj + 1</code> as for <code>obj.__add__(1)</code>. Is this possible without adding <code>__add__</code> to <code>Foo</code>?</p>
|
<p>I can see what you're trying to do here, but it's not going to work the way you think it should. This is a very non-obvious subtlety in Python. </p>
<p>What you're thinking is that when you do <code>obj + 1</code>, Python actually calls <code>obj.__add__(1)</code> and that, failing to find an <code>__add__</code> attribute on <code>obj</code>, it will fall through to its <code>__getattr__</code>. </p>
<p>But this is not exactly how it works for arithmetic operators, the implementation of which is actually significantly more complicated. In this case if <code>obj</code> does not have an <code>__add__</code> method, it will attempt to call the right-hand operand's <code>__radd__</code> (for "right add") method to see if <code>1</code> knows how to add with the left-hand operator. It does not so you get an exception.</p>
<p>There are other subtleties involving type slots that I won't get into. </p>
<p>If you want your class to act as a proxy for an <code>ndarray</code> you have a few options. It really depends on what you're actually trying to accomplish, which you might consider asking in a separate question. You might just be able to <em>subclass</em> <code>ndarray</code> directly and implement your additional functionality in the subclass. </p>
<p>If you don't want a subclass of <code>ndarray</code> you might also consider using a proxy, such as the <code>ObjectProxy</code> from <a href="http://wrapt.readthedocs.io/en/latest/wrappers.html" rel="nofollow noreferrer">wrapt</a>. This may or may not be what you want. It will make an object that walks, talks, quacks like, and is even named <code>ndarray</code>, though you can still subclass <code>ObjectProxy</code> to override methods that you don't want proxied.</p>
<p>Otherwise there's the tedious manual method. </p>
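<p>The special-method subtlety is easy to demonstrate: implicit operator dispatch looks <code>__add__</code> up on the <em>type</em>, so the instance-level <code>__getattr__</code> fallback never fires (plain <code>int</code> stands in for the ndarray here):</p>

```python
class Foo(object):
    def __init__(self, values):
        self._values = values
    def __getattr__(self, name):
        # consulted only for instance attribute lookups that fail normally
        return getattr(self._values, name)

f = Foo(10)
print(f.__add__(1))   # 11 -- explicit attribute access does go through __getattr__

try:
    f + 1             # the + operator looks __add__ up on type(f), not the instance
    raised = False
except TypeError:
    raised = True
print(raised)         # True -- __getattr__ was never consulted
```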
|
python|numpy
| 0 |
1,902,115 | 32,858,126 |
How to use native Cpython extensions in Jython
|
<p>I have a Python script in which I used Python packages like numpy, scipy etc. When I try to run the script using Jython, it gives an exception ("Import error").</p>
<p>My python code is:</p>
<pre><code>import numpy as np
#import pandas as pd
#import statsmodels.api as sm
#import matplotlib.pyplot as plt
#from patsy import dmatrices
#from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
import pandas
import pickle
#from numpy import genfromtxt, savetxt
import csv

trainingData = "/home/gauge/Documents/TrainingData1/Training4.csv"
testFile = "/home/gauge/Documents/TrainingData1/TestCase1.csv"
PredictionOutput = "/home/gauge/Documents/TrainingData1/result4.csv"

def make_x_and_y(filepath):
    x_y = []
    with open(filepath, 'rt') as f:
        reader = csv.reader(f)
        for idx, row in enumerate(reader):
            if idx < 0: continue
            x_y.append([row[2],row[3],row[4],row[5],row[6],row[7],row[8]])
            #x_y.append([row[2],row[3],row[4],row[5],row[6],row[8]])
    #print x
    X = [i[:-1] for i in x_y]
    y = [i[-1] for i in x_y]
    X = np.array(X, dtype='f8')
    #print file_path
    y = np.array(y, dtype='f8')
    #print X.shape, y.shape
    return X, y

def build_model(filepath):
    X, y = make_x_and_y(filepath)
    target = np.array(y, dtype='f8')
    train = np.array(X, dtype='f8')
    model = RandomForestClassifier(n_estimators=150, max_features=5, random_state=1, max_depth=10)
    model.fit(train, target)
    file_object = open("/home/gauge/Documents/pickle/model.pkl", 'wb')
    pickle.dump(model, file_object, -1)
    return model

def predict():
    #for index in range(10,200):
    model = build_model(trainingData)
    X = []
    data = []
    with open(testFile, 'rt') as f:
        reader = csv.reader(f)
        for idx, row in enumerate(reader):
            if idx < 0: continue
            data.append([row[0],row[1],row[2],row[3],row[4],row[5],row[6],row[7]])
            X.append([row[2],row[3],row[4],row[5],row[6],row[7]])
    X = np.array(X, dtype='f8')
    if (len(X) != 0):
        predicted = model.predict(X)
        # prob = model.predict_proba(X)[0]
        #print prob
        file_table = pandas.read_csv("/home/gauge/Documents/TrainingData1/testdata2.csv", sep=",", quoting=1)
        list = []
        list = file_table['Correct']
        #print list
        count = 0
        count1 = 0
        with open(PredictionOutput, 'w') as fp:
            a = csv.writer(fp)
            for idx, p in enumerate(predicted):
                #print X[idx]
                """
                if list[idx]==0 and int(p)==1:
                    count+=1
                elif list[idx]==1 and int(p)==0:
                    count1+=1
                print "FP -",count,"FN -",count1
                """
                prob = model.predict_proba(X[idx])[0][0]
                prob1 = model.predict_proba(X[idx])[0][1]
                print prob, prob1
                #a.writerows([[str(data[idx][0]),str(data[idx][1]),int(p)]])
                if prob1 >= 0.90:
                    a.writerows([[str(data[idx][0]),str(data[idx][1]),int(p),prob,prob1]])
                    if list[idx]==0:
                        count+=1
                else:
                    a.writerows([[str(data[idx][0]),str(data[idx][1]),0,prob,prob1]])
                    if list[idx]==1:
                        count1+=1
        print "FP -", count, "FN -", count1

predict()
</code></pre>
<p>The Jython code:</p>
<pre><code>package com.gauge.ie.Jython;

import org.python.core.PyInteger;
import org.python.core.PyObject;
import org.python.util.PythonInterpreter;

public class PyJava
{
    public static void main(String args[])
    {
        PythonInterpreter py = new PythonInterpreter();
        py.execfile("/home/gauge/Spyder/Classifier.py");
        PyObject obj = py.get("a");
        System.out.println("val: " + obj.toString());
    }
}
</code></pre>
|
<p>You can't use C extensions from Jython directly, because they are bound to <a href="https://en.wikipedia.org/wiki/CPython" rel="nofollow noreferrer">CPython</a> implementation. Jython is very different inside and it's not compatible with CPython on C API level.</p>
<p>If you want to connect Jython to CPython C extensions, you need some kind of <a href="https://lwn.net/Articles/641103/" rel="nofollow noreferrer">compatibility layer</a> between them. But AFAIK there is no reliable, production-ready library which does that.</p>
<p>See also this old question for some alternatives: <a href="https://stackoverflow.com/questions/3097466/using-numpy-and-cpython-with-jython">Using NumPy and Cpython with Jython</a></p>
|
python|numpy|jython|random-forest
| 4 |
1,902,116 | 14,154,851 |
any python min like function which gives a list as result
|
<pre><code>>>> lst
[('BFD', 0), ('NORTHLANDER', 3), ('HP', 23), ('VOLT', 3)]
>>> min([x for x in lst if x[1]!=0], key=lambda x: x[1])
('NORTHLANDER', 3)
>>>
</code></pre>
<p>Here min() only returns one tuple. It should actually return:</p>
<pre><code>[('NORTHLANDER', 3), ('VOLT', 3)]
</code></pre>
<p>Any in-built function to this effect?</p>
|
<p>It's a simple two-step solution to first compute the min, then collect all tuples having the min value, so write your own function to do this. It's a rather specialized operation, not what is expected of a general-purpose <code>min()</code> function.</p>
<p>Find an element with the minimum value:</p>
<pre><code>>>> lstm = min([x for x in lst if x[1] > 0], key = lambda x: x[1])
>>> lstm
('NORTHLANDER', 3)
</code></pre>
<p>Then just form a new list taking elements from <code>lst</code> where the value is that of <code>lstm</code>:</p>
<pre><code>>>> [y for y in lst if y[1] == lstm[1]]
[('NORTHLANDER', 3), ('VOLT', 3)]
</code></pre>
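If you prefer a single pass over the list instead of the two-step approach, a small helper can collect all minima while scanning once (a sketch; <code>min_all</code> is an illustrative name, not a built-in):

```python
def min_all(items, key=lambda x: x):
    """Return a list of ALL elements whose key equals the minimum key."""
    best, result = None, []
    for item in items:
        k = key(item)
        if best is None or k < best:
            best, result = k, [item]   # new minimum found: restart the result list
        elif k == best:
            result.append(item)        # a tie with the current minimum
    return result

lst = [('BFD', 0), ('NORTHLANDER', 3), ('HP', 23), ('VOLT', 3)]
print(min_all((x for x in lst if x[1] != 0), key=lambda x: x[1]))
# [('NORTHLANDER', 3), ('VOLT', 3)]
```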
|
python|min
| 8 |
1,902,117 | 34,798,397 |
Scraping asp and java script generated table with python 2.7, beautiful soup and selenium
|
<p>I need to scrape a JavaScript generated table and write some of the data to a csv file. I am restricted to python 2.7, Beautiful Soup and/or Selenium. The closest code that will do part of what I need is in question <a href="https://stackoverflow.com/questions/14529849/python-scraping-javascript-using-selenium-and-beautiful-soup">14529849</a>, but all I am getting in return is an empty list.
The site I an looking at is:</p>
<p><a href="http://hydromet.lcra.org/repframe.html" rel="nofollow noreferrer">http://hydromet.lcra.org/repframe.html</a></p>
<p>with the source:
<a href="http://hydromet.lcra.org/repstage.asp" rel="nofollow noreferrer">http://hydromet.lcra.org/repstage.asp</a></p>
<p>For example, one of the records looks like this:</p>
<pre><code> <tr>
<td class="flagmay"><a href="javascript:dataWin('STAGE','119901','Colorado River at Winchell')" class="tablink">Colorado River at Winchell</a></td>
<td align="left" class="flagmay">Jan 12 2016 5:55PM</td><td align="right" class="flagmay">2.48</td><td align="right" class="flagmay">4.7</td></tr>
</code></pre>
<p></p>
<p>and what I am trying to write to csv, should look like:</p>
<p>Station| StationID| Time | Stage| Flow</p>
<p>Colorado River at Winchell | 119901 | Jan 12 2016 5:55PM | 2.48 | 4.7</p>
<p>Can anyone please give me any pointers?
Thank you in advance.</p>
|
<p>Try this:</p>
<p>I'm using the <code>pandas</code>, <code>requests</code> and <code>BeautifulSoup4</code> libraries, and tested that the code works with Python <code>2.7.11</code> and <code>3.5.1</code>.</p>
<pre><code>import requests
import pandas
from bs4 import BeautifulSoup
url = 'http://hydromet.lcra.org/repstage.asp'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
tables = soup.find_all('table')
# convert the html table data into pandas data frames, skip the heading so that it is easier to add a column
df = pandas.read_html(str(tables[1]), skiprows={0}, flavor="bs4")[0]
# loop over the table to find out station id and store it in a dict obj
a_links = soup.find_all('a', attrs={'class': 'tablink'})
stnid_dict = {}
for a_link in a_links:
    cid = ((a_link['href'].split("dataWin('STAGE','"))[1].split("','")[0])
    stnid_dict[a_link.text] = cid
# add the station id column from the stnid_dict object above
df.loc[:, (len(df.columns)+1)] = df.loc[:, 0].apply(lambda x: stnid_dict[x])
df.columns = ['Station', 'Time', 'Stage', 'Flow', 'StationID']
# added custom order of columns to add in csv, and to skip row numbers in the output file
df.to_csv('station.csv', columns=['Station', 'StationID', 'Time', 'Stage', 'Flow'], index=False)
</code></pre>
<p>This script will create a CSV file called <code>station.csv</code> at the same location as the script.</p>
|
javascript|asp.net|python-2.7|selenium|beautifulsoup
| 1 |
1,902,118 | 27,255,560 |
Training a Machine Learning predictor
|
<p>I have been trying to build a prediction model using a user’s data. Model’s input is documents’ metadata (date published, title etc) and document label is that user’s preference (like/dislike). I would like to ask some questions that I have come across hoping for some answers:</p>
<ol>
<li>There are way more liked documents than disliked. I read somewhere that if somebody trains a model using way more inputs of one label than the other, this affects the performance in a bad way (the model tends to classify everything to the label/outcome that has the majority of inputs).</li>
<li><p>Is it possible for the input to an ML algorithm (e.g. logistic regression) to be hybrid in terms of numbers and words, and how could that be done? Something like:</p>
<p>input = [18,23,1,0,’cryptography’] with label = [‘Like’]</p>
<p>Also can we use a vector ( that represents a word, using tfidf etc) as an input feature (e.g. 50-dimensions vector) ?</p></li>
<li>In order to construct a prediction model using textual data, is the only way to derive a dictionary out of every word mentioned in our documents and then construct a binary input that dictates whether a term is mentioned or not? Using such a version, though, we lose the weight of the term in the collection, right?
Can we use something like a word2vec vector as a single input in a supervised learning model?</li>
</ol>
<p>Thank you for your time.</p>
|
<ol>
<li><p>You either need to under-sample the bigger class (take a small random sample to match the size of the smaller class), over-sample the smaller class (bootstrap sample), or use an algorithm that supports unbalanced data - and for that you'll need to read the documentation.</p></li>
<li><p>You need to turn your words into a word vector. Columns are all the unique words in your corpus. Rows are the documents. Cell values are one of: whether the word appears in the document, the number of times it appears, the relative frequency of its appearance, or its TFIDF score. You can then have these columns along with your other non-word columns.</p></li>
</ol>
<p>Now you probably have more columns than rows, meaning you'll get a singularity with matrix-based algorithms, in which case you need something like SVM or Naive Bayes.</p>
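As a sketch of point 2, here is the binary word-vector idea combined with numeric metadata columns (the documents and numbers below are invented purely for illustration; a TFIDF weighting would replace the 0/1 indicators):

```python
import numpy as np

docs = ["cryptography paper", "machine learning", "applied cryptography"]
numeric = np.array([[18, 23], [4, 7], [9, 1]], dtype=float)  # made-up metadata columns

# Columns: unique words in the corpus; rows: documents; cells: word present or not
vocab = sorted({w for d in docs for w in d.split()})
binary = np.array([[1.0 if w in d.split() else 0.0 for w in vocab] for d in docs])

# One row per document: metadata followed by the word indicators
X = np.hstack([numeric, binary])
```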
|
python|machine-learning|language-features|feature-selection
| 1 |
1,902,119 | 27,268,244 |
generator is yielded but the the yielded variable is not printed
|
<p>While using generators, we yield a variable whose value is saved and will resume with that saved value when we give the <code>next()</code> statement. Is there a way where we can do this but not actually print the value of the yielded variable?</p>
<pre><code>def foo():
n = 0
print("This is where we start")
yield n
n += 1
print("This is first")
yield n
n += 1
print("This is second")
yield n
a = foo()
next(a)
This is where we start
0
next(a)
This is first
1
</code></pre>
<ol>
<li>This is a very naive way of using generators(implementing them) and doesn't show their effectiveness.</li>
<li>I know that this can also be done using iterators where the value won't be printed, but just wondering if it can be done with generators.</li>
</ol>
|
<p>You are using the <em>Python interactive interpreter</em> to call <code>next()</code>, and it is a function of that shell to print return values. What you are seeing has <em>nothing to do with generators</em>.</p>
<p>Simply assign the return value of the <code>next()</code> call to a variable to not have it echoed:</p>
<pre><code>ignored = next(a)
</code></pre>
<p>or run your code as a script.</p>
<p>Note that generators are paused <em>immediately</em>; no code inside is run until you call <code>next()</code> on it. At that point the code runs until a <code>yield</code> expression is reached; its value is returned and the generator is paused again. At no point is the <code>yield</code> result 'saved'.</p>
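To see both points at once, run the following as a script rather than in the interactive shell (nothing is echoed, and the body only runs on the first <code>next()</code>):

```python
def gen():
    print("running now")
    yield 1

g = gen()         # nothing is printed yet: the generator body has not started
value = next(g)   # prints "running now"; value is 1 and is not echoed in a script
```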
|
python|generator|yield
| 2 |
1,902,120 | 7,776,907 |
SQL join or R's merge() function in NumPy?
|
<p>Is there an implementation where I can join two arrays based on their keys? Speaking of which, is the canonical way to store keys in one of the NumPy columns (NumPy doesn't have an 'id' or 'rownames' attribute)?</p>
|
<p>If you want to use only numpy, you can use <strong>structured arrays</strong> and the <code>lib.recfunctions.join_by</code> function (see <a href="http://pyopengl.sourceforge.net/pydoc/numpy.lib.recfunctions.html">http://pyopengl.sourceforge.net/pydoc/numpy.lib.recfunctions.html</a>). A little example:</p>
<pre><code>In [1]: import numpy as np
...: import numpy.lib.recfunctions as rfn
...: a = np.array([(1, 10.), (2, 20.), (3, 30.)], dtype=[('id', int), ('A', float)])
...: b = np.array([(2, 200.), (3, 300.), (4, 400.)], dtype=[('id', int), ('B', float)])
In [2]: rfn.join_by('id', a, b, jointype='inner', usemask=False)
Out[2]:
array([(2, 20.0, 200.0), (3, 30.0, 300.0)],
dtype=[('id', '<i4'), ('A', '<f8'), ('B', '<f8')])
</code></pre>
<p>Another option is to use <strong>pandas</strong> (<a href="http://pandas.sourceforge.net/index.html">documentation</a>). I have no experience with it, but it provides more powerful data structures and functionality than standard numpy, "to make working with “relational” or “labeled” data both easy and intuitive". And it certainly has joining and merging functions (for example see <a href="http://pandas.sourceforge.net/merging.html#joining-on-a-key">http://pandas.sourceforge.net/merging.html#joining-on-a-key</a>).</p>
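For completeness, the same inner join expressed with pandas would look roughly like this (a sketch with made-up frames mirroring the structured-array example above):

```python
import pandas as pd

a = pd.DataFrame({'id': [1, 2, 3], 'A': [10.0, 20.0, 30.0]})
b = pd.DataFrame({'id': [2, 3, 4], 'B': [200.0, 300.0, 400.0]})

# Inner join on the shared key column: only ids 2 and 3 survive
merged = pd.merge(a, b, on='id', how='inner')
```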
|
python|sql|numpy
| 17 |
1,902,121 | 47,367,882 |
Python: How to convert a dictionary of lists to a JSON object?
|
<p>I am new to the 'json' library thingy and having trouble converting a dictionary of lists to a JSON object, below are the dictionary I got:</p>
<pre><code>import json
data = {
'title' : ['Seven days', 'Not Today', 'Bad Moms'],
'date' : ['July 17', 'Aug 18', 'Jan 19']
}
json_data = json.dumps(data)
print(json_data)
</code></pre>
<p>Here was the result I got:</p>
<pre><code>{"title" : ['Seven days', 'Not Today', 'Bad Moms'], "date" : ['July 17', 'Aug 18', 'Jan 19']}
</code></pre>
<p>How to get it structured it in this way:</p>
<pre><code>{"title" : "Seven days","date" : "July 17"}, {"title" : "Not Today","date" : "Aug 18"}, {"title" : "Bad Mom","date" : "Jan 19"}
</code></pre>
<p>Thank you.</p>
|
<p>You can convert your data like this:</p>
<pre><code>d = [{'title': t, 'date': d} for t, d in zip(data['title'], data['date'])]
#[{'title': 'Seven days', 'date': 'July 17'},
# {'title': 'Not Today', 'date': 'Aug 18'},
# {'title': 'Bad Moms', 'date': 'Jan 19'}]
</code></pre>
<p>Dumping this to json will result in some string like:</p>
<pre><code>'[{"title": "Seven days", "date": "July 17"}, {"title": "Not Today", "date": "Aug 18"}, {"title": "Bad Moms", "date": "Jan 19"}]'
</code></pre>
<p>If you want your json to have a guaranteed order with regard to the keys in each object, you can use:</p>
<pre><code>from collections import OrderedDict
d = [OrderedDict([('title', t), ('date', d)]) for t, d in zip(data['title'], data['date'])]
</code></pre>
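If the dictionary may contain more than these two keys, the same reshaping generalizes without naming the keys explicitly (a sketch; it assumes all the lists have equal length):

```python
data = {
    'title': ['Seven days', 'Not Today', 'Bad Moms'],
    'date': ['July 17', 'Aug 18', 'Jan 19'],
}

keys = list(data)  # ['title', 'date']
# zip(*data.values()) walks the lists in lockstep, one record per position
d = [dict(zip(keys, values)) for values in zip(*data.values())]
```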
|
python|json|list|dictionary
| 4 |
1,902,122 | 70,788,375 |
How to search for a substring in a string, but from a certain index onward
|
<p>I'm not an expert in python and since it's work related i may have to change some variables and info.</p>
<p>So I have a long string from an API request in which I'm looking for the information of users, and to make sure I only have the user's info, I use their ID with .find(ID), which shows me the index where the ID is in the string (the middle of the user info). After that, what I want is to go up the string and search for the tag that starts the information, <code><resource></code>, and down to the tag that ends the information, <code></resource></code>.</p>
<pre><code>id_position = string.find(id)
upper = string[:id_position].rfind("<resource>")  # goes in reverse order up the string and finds the first match; works fine
lower = string[id_position].find("</resource>")
</code></pre>
<p>But instead of going normally from the id position down till it finds the first match, the last line actually searches the entire original string, as in searching from the first position <strong>string[0]</strong> onwards.</p>
<p>When I print</p>
<pre><code>string[:id_position] + string[id_position:] == string
</code></pre>
<p>it shows it's true, so I'm guessing the find function doesn't help me like I think it should.</p>
<p><strong>So my question is, how can I search for my specific substring after the index of the id?</strong></p>
<p>I know it's hard to understand, but I hope some of you know what I mean.</p>
<p>for reference,the data looks like this</p>
<pre><code>.<resource>
name=
ip=
.</resource>
.<resource>
name2=
ip2=
</resource>
.<resource>
name=
ip2=
.</resource>
</code></pre>
|
<p>Have you tried using <code>find()</code>s optional parameters <code>start</code> and <code>end</code>?</p>
<pre><code>id_position = string.find(id)
upper = string.find("</resource>", id_position) + len("</resource>")
lower = string.rfind("<resource>", 0, id_position)
print(repr(string[lower:upper]))
</code></pre>
<pre class="lang-sh prettyprint-override"><code>> '<resource>\n name2=\n ip2=\n</resource>'
</code></pre>
|
python|string|substring
| 0 |
1,902,123 | 70,892,197 |
What is the fastest way in python to identify the greatest cumulated differences in python array or list?
|
<p>Let's say I have the following list, corresponding to stock prices in time:</p>
<pre><code>prices = [1, 3, 7, 10, 9, 8, 5, 3, 6, 8, 12, 9, 6, 10, 13, 8, 4, 11]
</code></pre>
<p>and I want to identify the following greatest cumulative difference overall, like the greatest return:</p>
<pre><code>[(1,10), (3,12), (6,13), (4,11)]
</code></pre>
<p>as cumulative difference is = 9+9+7+7 = 32</p>
<p>Now from this result, I want to assign a long position (1) between thoses prices, and a short position (0) the rest of the time:</p>
<pre><code>prices = [1, 3, 7, 10, 9, 8, 5, 3, 6, 8, 12, 9, 6, 10, 13, 8, 4, 11]
position = [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1]
</code></pre>
<p>So far, I managed to identify the greatest differences in the list with this code:</p>
<pre><code>print(nlargest(20, combinations(prices, 2), key = lambda x: abs(x[0]-x[1])))
[(1, 13), (1, 12), (1, 11), (3, 13), (3, 13), (1, 10), (1, 10), (3, 12), (3, 12), (13, 4)]
</code></pre>
<p>but I can't make it take into consideration that it should "read" the list and not further use the previous elements when trying a combination, as time goes.</p>
<p>I also tried this, which identifies in a vectorial way, the only greatest difference:</p>
<pre><code>cummin = np.minimum.accumulate
print(np.max(prices - cummin(prices)))
12
</code></pre>
<p>and also this, which seems to be my closest guess:</p>
<pre><code>l = np.array(list)
a = np.tile(l,(len(l),1))
print(a - a.T)
[[ 0 2 6 9 8 7 4 2 5 7 11 8 5 9 12 7 3 10]
[ -2 0 4 7 6 5 2 0 3 5 9 6 3 7 10 5 1 8]
[ -6 -4 0 3 2 1 -2 -4 -1 1 5 2 -1 3 6 1 -3 4]
[ -9 -7 -3 0 -1 -2 -5 -7 -4 -2 2 -1 -4 0 3 -2 -6 1]
[ -8 -6 -2 1 0 -1 -4 -6 -3 -1 3 0 -3 1 4 -1 -5 2]
[ -7 -5 -1 2 1 0 -3 -5 -2 0 4 1 -2 2 5 0 -4 3]
[ -4 -2 2 5 4 3 0 -2 1 3 7 4 1 5 8 3 -1 6]
[ -2 0 4 7 6 5 2 0 3 5 9 6 3 7 10 5 1 8]
[ -5 -3 1 4 3 2 -1 -3 0 2 6 3 0 4 7 2 -2 5]
[ -7 -5 -1 2 1 0 -3 -5 -2 0 4 1 -2 2 5 0 -4 3]
[-11 -9 -5 -2 -3 -4 -7 -9 -6 -4 0 -3 -6 -2 1 -4 -8 -1]
[ -8 -6 -2 1 0 -1 -4 -6 -3 -1 3 0 -3 1 4 -1 -5 2]
[ -5 -3 1 4 3 2 -1 -3 0 2 6 3 0 4 7 2 -2 5]
[ -9 -7 -3 0 -1 -2 -5 -7 -4 -2 2 -1 -4 0 3 -2 -6 1]
[-12 -10 -6 -3 -4 -5 -8 -10 -7 -5 -1 -4 -7 -3 0 -5 -9 -2]
[ -7 -5 -1 2 1 0 -3 -5 -2 0 4 1 -2 2 5 0 -4 3]
[ -3 -1 3 6 5 4 1 -1 2 4 8 5 2 6 9 4 0 7]
[-10 -8 -4 -1 -2 -3 -6 -8 -5 -3 1 -2 -5 -1 2 -3 -7 0]]
</code></pre>
<p>but I can't manage to identify from this matrice the best combination which would have been these trades <code>[(1,10), (3,12), (6,13), (4,11)]</code></p>
<p>Can anyone please help me?</p>
|
<p>You can get your desired pairs using <a href="https://docs.python.org/3/library/itertools.html#itertools.groupby" rel="nofollow noreferrer"><code>itertools.groupby</code></a> as:</p>
<pre><code>from itertools import groupby
prices = [1, 3, 7, 10, 9, 8, 5, 3, 6, 8, 12, 9, 6, 10, 13, 8, 4, 11]
incr_pairs = [list(g) for i, g in groupby(zip(prices, prices[1:]), lambda x: x[0] < x[1]) if i]
greatest_diff = [(x[0][0], x[-1][-1]) for x in incr_pairs]
</code></pre>
<p>where <code>greatest_diff</code> will give you:</p>
<pre><code>[(1, 10), (3, 12), (6, 13), (4, 11)]
</code></pre>
<p>Now to get your cumulative diff, you can use <a href="https://docs.python.org/3/library/functions.html#sum" rel="nofollow noreferrer"><code>sum</code></a> as:</p>
<pre><code>my_sum = sum(j-i for i, j in greatest_diff)
# where `my_sum` will give you: 32
</code></pre>
|
python|numpy
| 2 |
1,902,124 | 11,581,086 |
Google app engine bulkloader environment variables
|
<p>I rely on python's <code>os.environ</code> to work out what configs my application should use (such as different API keys for different hosts).</p>
<p>It seems that bulkloader doesn't have access to these variables; is there any way I can tell what the current version of my application or the current host is when bulkloader is running?</p>
<p>Usually I do this in my <code>config_helper</code>:</p>
<pre><code>env = os.environ[ 'CURRENT_VERSION_ID' ].split( '.' )[ 0 ]
</code></pre>
<p>And bulkloader has reported a KeyError regarding <code>CURRENT_VERSION_ID</code>, so I used this:</p>
<pre><code>if os.environ.get('HTTP_HOST'):
host = os.environ['HTTP_HOST']
else:
host = os.environ['SERVER_NAME']
if host is not None:
if host.find( 'locahost' ):
env = 'local'
elif host.find( 'prod-server' ):
env = 'prod'
elif host.find( 'dev-server' ):
env = 'dev'
elif host.find( 'stage-server' ):
env = 'stage'
os.environ[ 'CURRENT_VERSION_ID' ] = env + '.1'
</code></pre>
<p>However bulkloader complains that <code>SERVER_NAME</code> is an invalid object meaning that it also can't find <code>HTTP_HOST</code>.</p>
<p>Any other ideas?</p>
|
<p>Environment variables like HTTP_HOST and CURRENT_VERSION_ID are only available when your app is running as a web application.</p>
<p>Probably you can just pass the variables with env command as follows:</p>
<pre><code>$ env CURRENT_VERSION_ID=local.1 bulkloader ....
</code></pre>
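On the Python side, a defensive lookup avoids the KeyError entirely when the variable is missing (a sketch; the <code>'local.1'</code> fallback is an assumed default, adjust to your own convention):

```python
import os

# Fall back to a local default when running outside the web runtime (e.g. bulkloader)
version = os.environ.get('CURRENT_VERSION_ID', 'local.1')
env = version.split('.')[0]
```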
|
python|google-app-engine|bulkloader
| 1 |
1,902,125 | 46,700,258 |
Python : How to interpret the result of logistic regression by sm.Logit
|
<p>When I run a logistic regression using <a href="https://www.statsmodels.org/dev/generated/statsmodels.discrete.discrete_model.Logit.html" rel="nofollow noreferrer"><code>sm.Logit</code></a> (from the <a href="https://www.statsmodels.org/stable/index.html" rel="nofollow noreferrer"><code>statsmodel</code></a> library), part of the result looks like this:</p>
<pre><code>Pseudo R-squ.: 0.4335
Log-Likelihood: -291.08
LL-Null: -513.87
LLR p-value: 2.978e-96
</code></pre>
<p>How could I explain the significance of the model, or its explanatory power? Which indicator should I use? I have searched online and there isn't much information about <code>Pseudo R2</code> and <code>LLR p-value</code>. I'm confused and don't know how to judge the performance of my model based on these numbers.</p>
|
<p>From <a href="https://www.oreilly.com/library/view/hands-on-machine-learning/9781789346411/" rel="nofollow noreferrer">Hands-On Machine Learning for Algorithmic Trading</a>:</p>
<blockquote>
<ul>
<li><code>Log-Likelihood</code>: this is the maximized value of the log-likelihood function.</li>
<li><code>LL-Null</code>: this is the result of the maximized log-likelihood function when only an intercept is included. It forms the basis for the pseudo-<img src="https://chart.googleapis.com/chart?cht=tx&chl=R%5E2" alt="R^2" /> statistic and the Log-Likelihood Ratio (LRR) test (see below)</li>
<li><code>pseudo</code>-<img src="https://chart.googleapis.com/chart?cht=tx&chl=R%5E2" alt="R^2" />: this is a substitute of the familiar <img src="https://chart.googleapis.com/chart?cht=tx&chl=R%5E2" alt="R^2" /> available under least squares. It is computed based on the ratio of the maximized log-likelihood function for the null model <code>m0</code> and the full model <code>m1</code> as follows:</li>
</ul>
</blockquote>
<p><a href="https://i.stack.imgur.com/YLls8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YLls8.png" alt="pseudo-R^2" /></a><br />
<sub>(source: <a href="https://chart.googleapis.com/chart?cht=tx&chl=%5Crho%5E2%20%3D%201%20-%20%5Cfrac%7B%5Clog%20%5Cmathcal%7BL%7D(m_1%5E*)%7D%7B%5Clog%20%5Cmathcal%7BL%7D(m_0%5E*)%7D" rel="nofollow noreferrer">googleapis.com</a>)</sub></p>
<blockquote>
<p>The values vary from 0 (when the model does not improve the likelihood) to 1 (where the model fits perfectly and the log-likelihood is maximized at 0). Consequently, higher values indicate a better fit.</p>
<ul>
<li><code>LLR</code>: The LLR test generally compares a more restricted model and is computed as:</li>
</ul>
</blockquote>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=%5Cmathrm%7BLLR%7D%20=%20-2%20%5Clog%28%5Cfrac%7B%5Cmathcal%7BL%7D%28m_0%5E*%29%7D%7B%5Cmathcal%7BL%7D%28m_1%5E*%29%7D%29%20=%202%28%5Clog%20%5Cmathcal%7BL%7D%28m_1%5E*%29%20-%20%5Clog%20%5Cmathcal%7BL%7D%28m_0%5E*%29%29" alt="llr" /></p>
<blockquote>
<p>The null hypothesis is that the restricted model performs better, but a low p-value suggests that we can reject this hypothesis and prefer the full model over the null model. This is similar to the F-test for linear regression (where we can also use the LLR test when we estimate the model using MLE).</p>
<ul>
<li><p><code>z-statistic</code>: plays the same role as the t-statistic in the linear regression output and is equally computed as the ratio of the coefficient estimate and its standard error.</p>
</li>
<li><p><code>p-values</code>: these indicate the probability of observing the test statistic assuming the null hypothesis <img src="https://chart.googleapis.com/chart?cht=tx&chl=H_0:%20%5Cbeta%20=%200" alt="H0" /> that the population coefficient is zero.</p>
</li>
</ul>
</blockquote>
<p>As you can see (and the way I understand it), many of these metrics are counterparts to those of the linear regression case. Furthermore, as Rose already point out, I would recommend checking <a href="https://web.archive.org/web/20220331180133/https://www.statsmodels.org/stable/index.html" rel="nofollow noreferrer">the statsmodel documentation</a>.</p>
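You can sanity-check these formulas directly against the numbers in the question (in statsmodels the fitted result also exposes them as <code>result.prsquared</code>, <code>result.llf</code>, <code>result.llnull</code> and <code>result.llr_pvalue</code>):

```python
llf = -291.08     # "Log-Likelihood" reported by sm.Logit
llnull = -513.87  # "LL-Null"

# Pseudo R-squared: 1 - llf/llnull, which reproduces the reported 0.4335 up to rounding
pseudo_r2 = 1 - llf / llnull

# Likelihood-ratio statistic: 2 * (llf - llnull); its chi-squared tail gives the LLR p-value
llr = 2 * (llf - llnull)
```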
|
python|statistics|regression
| 10 |
1,902,126 | 47,075,240 |
Combining pyperclip copy-to-clipboard with pyautogui paste?
|
<p>I want to paste some text loaded from Python into a browser field:
any method to load something into the clipboard, which I can then paste using Ctrl+V, would do. Currently I see pyperclip.paste() only pastes the text into the console, instead of where I want it. Pressing Ctrl+V after running <code>pyperclip.copy('sometext')</code> does nothing.</p>
<pre><code>import pyautogui
import pyperclip
def click():
    try:
        pyautogui.click()
    except:
        pass
pyperclip.copy('sometext')
pyautogui.moveTo(4796, 714)
click()
pyperclip.paste()
pyautogui.hotkey('ctrl', 'v', interval = 0.15)
</code></pre>
<p>What am I doing wrong here? An alternative method would be just as welcome as a fix - preferably one that avoids using <code>pyautogui.typewrite()</code> as it takes a long time for lots of text</p>
<p>Update: seems to be a problem with <code>pyperclip.copy('sometext')</code> not putting or overwriting <code>'sometext'</code> into the clipboard. the pyperclip paste function works as it should, and so does the pyautogui Ctrl+V</p>
|
<p>Try using <code>pyautogui.typewrite</code> instead:</p>
<pre><code>import pyautogui
def click():
try:
pyautogui.click()
except:
pass
pyautogui.moveTo(4796, 714)
click()
pyautogui.typewrite('sometext')
</code></pre>
<p>You can find useful information <a href="https://automatetheboringstuff.com/chapter18/" rel="nofollow noreferrer">here</a>.</p>
|
python|clipboard|pyautogui|pyperclip
| 5 |
1,902,127 | 37,930,500 |
How to put dowloaded JSON data into variables in python
|
<pre><code>import requests
import json
import csv
# These our are demo API keys, you can use them!
#location = ""
api_key = 'simplyrets'
api_secret = 'simplyrets'
#api_url = 'https://api.simplyrets.com/properties?q=%s&limit=1' % (location)
api_url = 'https://api.simplyrets.com/properties'
response = requests.get(api_url, auth=(api_key, api_secret))
response.raise_for_status()
houseData = json.loads(response.text)
#different parameters we need to know
p = houseData['property']
roof = p["roof"]
cooling = p["cooling"]
style = p["style"]
area = p["area"]
bathsFull = p["bathsFull"]
bathsHalf = p["bathsHalf"]
</code></pre>
<hr>
<p>This is a snippet of the code that I am working with to try and take the information from the JSON provided by the API and put them into variables that I can actually use. </p>
<p>I thought that when you loaded it with <code>json.loads()</code> it would become a dictionary. </p>
<p>Yet it is telling me that I cannot do <code>p = houseData['property']</code> because "<code>list indices must be integers, not str</code>". </p>
<p><strong>Am I wrong that houseData should be a dictionary?</strong> </p>
|
<p>There are hundreds of properties returned, all of which are in a list.</p>
<p>You'll need to specify which property you want, so for the first one:</p>
<pre><code>p = houseData[0]['property']
</code></pre>
|
python|json|dictionary
| 1 |
1,902,128 | 67,852,134 |
How to see if username already exists in UserCreationField - Django
|
<p>I am creating a simple registration field in Django for my website. I want to see if a username already exists and display an error if it does. For example, if I had an account with the username of <strong>hi</strong>, and another user tried to create an account with the username of <strong>hi</strong>, after they click on the submit button, I want to raise an error. Right now, if I was to create an account with a username that already exists, Django will not create the account but does not display an error. My code is down bellow.</p>
<p><strong>Views.py</strong></p>
<pre><code>def index(request,*args, **kwargs):
return render(request, "index.html", {} )
def register(request, ):
form = CreateUserForm()
if request.method == "POST":
form = CreateUserForm(request.POST)
if form.is_valid():
form.save()
username = form.cleaned_data.get('username')
messages.success(request, f'Your account has been successfully created, {username} ')
return redirect('login')
context = {'form': form}
return render(request, "register.html", context )
def login(request,):
return render(request,"login.html")
</code></pre>
<p><strong>Forms.py</strong></p>
<pre><code>class CreateUserForm(UserCreationForm):
username = forms.CharField(required=True, max_length=30, )
email = forms.EmailField(required=True)
first_name = forms.CharField(required=True, max_length=50)
last_name = forms.CharField(required=True, max_length=50)
class Meta:
model = User
fields = ['username', 'email', 'first_name', 'last_name', 'password1', 'password2',]
</code></pre>
<p><strong>I don't know if you need this but here is my register.html:</strong></p>
<pre><code><!--Might need to inherit from base.html-->
{% load crispy_forms_tags %}
{% load static %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:400,700">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css">
<link rel="stylesheet" href="{% static "registerstyles.css" %}">
<title>GoodDeed - Register</title>
<script src="https://code.jquery.com/jquery-3.5.1.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/js/bootstrap.min.js"></script>
</head>
<body>
<div class="signup-form" style=" position:absolute; top: -1.5%; left: 38%;">
<form action="" method="post">
<h2 style=>Sign Up</h2>
<p style="font-size: xx-small; color: #f2f2f2;">---</p>
{% csrf_token %}
{{form|crispy}}
<button type="submit" class="btn btn-primary btn-lg">Sign Up</button>
<a href="/login" style="position: absolute; top: 90%; left: 50%; font-size: 16px !important;">Already a user? Sign in!</a>
</div>
</form>
<!--Sidebar bellow-->
</body>
</html>
</code></pre>
<h2>Thank you to everyone who helps!</h2>
<p>***<em><strong>EDIT</strong></em></p>
<p>NEW FORMS.PY CODE***</p>
<pre><code>class CreateUserForm(UserCreationForm):
username = forms.CharField(required=True, max_length=30, )
email = forms.EmailField(required=True)
first_name = forms.CharField(required=True, max_length=50)
last_name = forms.CharField(required=True, max_length=50)
class Meta:
model = User
fields = ['username', 'email', 'first_name', 'last_name', 'password1', 'password2',]
def clean(self):
cleaned_data=super.clean()
if User.objects.filter(username=cleaned_data["username"].exists():
raise ValidationError("The username is taken, please try another one")
</code></pre>
|
<p>In your form implement the <code>clean</code> function as shown <a href="https://docs.djangoproject.com/en/3.2/ref/forms/validation/#cleaning-and-validating-fields-that-depend-on-each-other" rel="nofollow noreferrer" title="here">in the documentation</a></p>
<pre><code>class CreateUserForm(UserCreationForm):
.....
def clean(self):
cleaned_data=super().clean()
if User.objects.filter(username=cleaned_data["username"].exists():
raise ValidationError("The username is taken, please try another one")
</code></pre>
|
python|html|css|django
| 2 |
1,902,129 | 30,248,266 |
Problems interpolating and evaluating numpy array at arbitrary points with Scipy
|
<p>I am trying to replicate some of the functionality of Matlab's interp2. I know somewhat similar questions have been asked before, but none apply to my specific case. </p>
<p>I have a distance map (available at this Google drive location):
<a href="https://drive.google.com/open?id=0B6acq_amk5e3X0Q5UG1ya1VhSlE&authuser=0" rel="nofollow">https://drive.google.com/open?id=0B6acq_amk5e3X0Q5UG1ya1VhSlE&authuser=0</a></p>
<p>Values are normalized in the range 0-1. Size is 200 rows by 300 columns.</p>
<p>I can load it up with this code snippet:</p>
<pre><code>import numpy as np
dstnc1=np.load('dstnc.npy')
</code></pre>
<p>Coordinates are defined by the next snippet:</p>
<pre><code>xmin = 0.
xmax = 9000.
ymin = 0.
ymax = 6000.
r1,c1 = dstnc1.shape
x = np.linspace(xmin,xmax,c1)
y = np.linspace(ymin, ymax,r1)
</code></pre>
<p>I have three map points defined by vectors xnew1, ynew1 with this snippet:</p>
<pre><code>xnew1=[3700.540199,3845.940199,3983.240199]
ynew1=[1782.8611,1769.862,1694.862]
</code></pre>
<p>I check their location with respect to the distance map with this:</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure(figsize=(20, 16))
ax = fig.add_subplot(1, 1, 1)
plt.imshow(dstnc1, cmap=my_cmap_r,vmin=0,vmax=0.3,
extent=[0, 9000, 0, 6000], origin='upper')
plt.scatter(xnew1, ynew1, s=50, linewidths=0.15)
plt.show()
</code></pre>
<p>They plot in the correct location. Now I would like to extract the
distance value at those three points. I tried first <code>interp2d</code>.</p>
<pre><code>from scipy.interpolate import interp2d
x1 = np.linspace(xmin,xmax,c1)
y1 = np.linspace(ymin,ymax,r1)
f = interp2d(x1, y1, dstnc1, kind='cubic')
</code></pre>
<p>but when I try to evaluate with:</p>
<pre><code>test=f(xnew1,ynew1)
</code></pre>
<p>I get this error:</p>
<pre><code>--------------------
ValueError Traceback (most recent call last)
<ipython-input-299-d0f42e609b23> in <module>()
----> 1 test=f(xnew1,ynew1)
C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\interpolate.pyc
in __call__(self, x, y, dx, dy)
270 (self.y_min, self.y_max)))
271
--> 272 z = fitpack.bisplev(x, y, self.tck, dx, dy)
273 z = atleast_2d(z)
274 z = transpose(z)
C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\fitpack.pyc
in bisplev(x, y, tck, dx, dy)
1027 z,ier = _fitpack._bispev(tx,ty,c,kx,ky,x,y,dx,dy)
1028 if ier == 10:
-> 1029 raise ValueError("Invalid input data")
1030 if ier:
1031 raise TypeError("An error occurred")
ValueError: Invalid input data
</code></pre>
<p>If I try <code>RectBivariateSpline</code>:</p>
<pre><code>from scipy.interpolate import RectBivariateSpline
x2 = np.linspace(xmin,xmax,r1)
y2 = np.linspace(ymin,ymax,c1)
f = RectBivariateSpline(x2, y2, dstnc1)
</code></pre>
<p>I get this error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-302-d0f42e609b23> in <module>()
----> 1 test=f(xnew1,ynew1)
C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\fitpack2.pyc
in __call__(self, x, y, mth, dx, dy, grid)
643 z,ier = dfitpack.bispev(tx,ty,c,kx,ky,x,y)
644 if not ier == 0:
--> 645 raise ValueError("Error code returned by
bispev: %s" % ier)
646 else:
647 # standard Numpy broadcasting
ValueError: Error code returned by bispev: 10
</code></pre>
<p>Any suggestion as to whether I am using the wrong functions or the right
function with wrong syntax, and how I may fix it is appreciated. Thank you.</p>
<h3>UPDATE</h3>
<p>I am running Python 2.7.9 and Scipy 0.14.0 (on Continuum Anaconda).
As posted on the Scipy mailing list <a href="http://mail.scipy.org/pipermail/scipy-user/2015-May/036536.html" rel="nofollow">here</a>, the documentation seems confusing, being a mix of Scipy 0.14.0 and the next version. Can anybody suggest a workaround or the correct syntax for version 0.14.0?</p>
<h3>UPDATE 2</h3>
<p>I tried</p>
<pre><code>xnew1=np.array([3700.540199,3845.940199,3983.240199])
ynew1=np.array([1782.8611,1769.862,1694.862])
</code></pre>
<p>as suggested in a comment, but the error remains.</p>
|
<p>This syntax worked with <code>RectBivariateSpline</code>:</p>
<pre><code>import scipy as sp
import scipy.interpolate

x2 = np.linspace(xmin, xmax, c1)
y2 = np.linspace(ymin, ymax, r1)
f2 = sp.interpolate.RectBivariateSpline(x2, y2, dstnc1.T, kx=1, ky=1)  # note the transpose
</code></pre>
<p>I can then evaluate at new points with this: </p>
<pre><code>out2 = f2.ev(xnew1,ynew1)
</code></pre>
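<p>The axis-ordering issue that makes the transpose necessary can be illustrated without SciPy: the array is stored rows-first (rows correspond to y, columns to x), while query points are given as (x, y). Below is a stdlib-only bilinear interpolation sketch (a simplified illustration, not SciPy's implementation; index-unit coordinates, no edge handling):</p>

```python
import math

def bilinear(grid, x, y):
    """Interpolate grid[row][col], with row <-> y and col <-> x, at (x, y)
    given in index units. Minimal sketch: no bounds checking at the far edge."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy

grid = [[0.0, 1.0],
        [2.0, 3.0]]              # value = 2*row + col
print(bilinear(grid, 0.5, 0.5))  # 1.5
```

<p>Note that the row index comes from y and the column index from x, which is exactly the mismatch the <code>dstnc1.T</code> transpose resolves when <code>RectBivariateSpline</code> expects the first axis to correspond to x.</p>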
<p>For interp2d I am stuck as I am not able to bypass the firewall at my office to update Anaconda (Windows). I may be able to at home on a Mac installation, in which case, if I get the syntax right, I will add to this answer.</p>
|
python|scipy|interpolation
| 1 |
1,902,130 | 27,658,210 |
python - Selectively choosing nucleotide sequences from a fasta file?
|
<p>Using biopython how can I snip genes of my interest from a fasta file if the gene names are stored in a text file?</p>
<pre><code>#extract genes
f1 = open('ortholog1.txt','r')
f2 = open('all.fasta','r')
f3 = open('ortholog1.fasta','w')
genes = [line.rstrip('\n') for line in f1.readlines()]
i=0
for seq_record in SeqIO.parse(f2, "fasta"):
if genes[i] == seq_record.id:
print genes[i]
f3.write('>'+genes[i])
i=i+1
if i==18:
break
f3.write('\n')
f3.write(str(seq_record.seq))
f3.write('\n')
f2.close()
f3.close()
</code></pre>
<p>I was trying the above code. But it has some mistakes and is not generic, since like <code>ortholog1.txt</code> (which contain gene names) there are 5 more similar files. Also the number of genes in each file varies (not 18 always as here). Here <code>all.fasta</code> is the file which contains all genes. <code>ortholog1.fasta</code> must contain the snipped nucleotide sequence.</p>
|
<p>Basically, you can make Biopython do all the work. </p>
<p>I'm going to guess that the gene names in "ortholog1.txt" are exactly the same as in the fasta file, and that there is one gene name per line. If not, you'd need to tweak them as necessary to make them line up.</p>
<pre><code>from Bio import SeqIO
with open('ortholog1.txt','r') as f:
orthologs_txt = f.read()
orthologs = orthologs_txt.splitlines()
genes_to_keep = []
for record in SeqIO.parse(open('all.fasta','r'), 'fasta'):
if record.description in orthologs:
genes_to_keep.append(record)
with open('ortholog1.fasta','w') as f:
SeqIO.write(genes_to_keep, f, 'fasta')
</code></pre>
<p>Edit: Here is one way to keep the output genes in the same order as in the orthologs file:</p>
<pre><code>from Bio import SeqIO
with open('all.fasta','r') as fasta_file:
    record_dict = SeqIO.to_dict(SeqIO.parse(fasta_file, 'fasta'))
with open('ortholog1.txt','r') as text_file:
orthologs_txt = text_file.read()
genes_to_keep = []
for ortholog in orthologs_txt.splitlines():
try:
genes_to_keep.append( record_dict[ortholog] )
except KeyError:
pass
with open('ortholog1.fasta','w') as output_file:
SeqIO.write(genes_to_keep, output_file, 'fasta')
</code></pre>
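<p>For environments without Biopython, the same filtering idea can be sketched with a minimal stdlib-only FASTA parser (illustrative only; it assumes well-formed input and is no substitute for <code>SeqIO</code>):</p>

```python
# Minimal FASTA filtering sketch without Biopython (assumes well-formed input).
def parse_fasta(text):
    """Yield (header, sequence) pairs from FASTA-formatted text."""
    header, seq = None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:].strip(), []
        elif line.strip():
            seq.append(line.strip())
    if header is not None:
        yield header, "".join(seq)

fasta = ">geneA\nATGC\nGGTT\n>geneB\nTTAA\n"
wanted = {"geneA"}
kept = {h: s for h, s in parse_fasta(fasta) if h in wanted}
print(kept)  # {'geneA': 'ATGCGGTT'}
```

<p>Building the dict first mirrors <code>SeqIO.to_dict</code>: it allows lookup by gene name, so the output order can follow the ortholog list rather than the FASTA file.</p>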
|
python|bioinformatics|biopython
| 1 |
1,902,131 | 65,622,705 |
How to make my rectangle rotate with a rotating sprite
|
<p>So I have been having this problem with my rectangle/projectile rotation: I want my rectangle/projectile to rotate with my rotating sprite, but the code I'm trying is not working and gives me this error: <code>'pygame.Surface' object has no attribute 'x'</code>. I have tried moving the code around, changing the code to avoid the error, and using a hitbox, but I still keep getting the error. These are my two sprites:</p>
<p><a href="https://i.stack.imgur.com/W2CkY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W2CkY.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/28LAE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/28LAE.png" alt="enter image description here" /></a></p>
<p>The code I'm trying</p>
<pre><code> self.dist = 100
dx = self.pin.x + self.dist*math.cos(-self.pin.angle*(math.pi/180)) -65 # why offset needed ?
dy = self.pin.y + self.dist*math.sin(-self.pin.angle*(math.pi/180)) -50 # why offset needed ?
self.rect.topleft = (dx,dy)
pygame.draw.rect(window,self.color,self.rect)
</code></pre>
<p>My full code</p>
<pre><code>import pygame,math,random
pygame.init()
# Windowing screen width and height
width = 500
height = 500
window = pygame.display.set_mode((width,height))
# Name of window
pygame.display.set_caption("Game")
# The Background
background = pygame.image.load("img/BG.png")
def blitRotate(surf, image, pos, originPos, angle):
# calcaulate the axis aligned bounding box of the rotated image
w, h = image.get_size()
sin_a, cos_a = math.sin(math.radians(angle)), math.cos(math.radians(angle))
min_x, min_y = min([0, sin_a*h, cos_a*w, sin_a*h + cos_a*w]), max([0, sin_a*w, -cos_a*h, sin_a*w - cos_a*h])
# calculate the translation of the pivot
pivot = pygame.math.Vector2(originPos[0], -originPos[1])
pivot_rotate = pivot.rotate(angle)
pivot_move = pivot_rotate - pivot
# calculate the upper left origin of the rotated image
origin = (pos[0] - originPos[0] + min_x - pivot_move[0], pos[1] - originPos[1] - min_y + pivot_move[1])
# get a rotated image
rotated_image = pygame.transform.rotate(image, angle)
# rotate and blit the image
surf.blit(rotated_image, origin)
# Player class
class Player:
def __init__(self,x,y,width,height,color):
self.x = x
self.y = y
self.width = width
self.height = height
self.color = color
self.speed = 4
self.cannon = pygame.image.load("img/Cannon.png")
self.cannon = pygame.transform.scale(self.cannon,(self.cannon.get_width()//2, self.cannon.get_height()//2))
self.rect = pygame.Rect(x,y,width,height)
self.hitbox = (self.x,self.y,30,30)
self.image = self.cannon
self.rect = self.image.get_rect(center = (self.x, self.y))
self.look_at_pos = (self.x, self.y)
self.isLookingAtPlayer = False
self.look_at_pos = (x,y)
self.angle = 0
def get_rect(self):
self.rect.topleft = (self.x,self.y)
return self.rect
def draw(self):
self.rect.topleft = (self.x,self.y)
pygame.draw.rect(window,self.color,self.hitbox)
player_rect = self.cannon.get_rect(center = self.get_rect().center)
player_rect.centerx -= 0
player_rect.centery += 90
# Another part of cannon rotating
dx = self.look_at_pos[0] - self.rect.centerx
dy = self.look_at_pos[1] - self.rect.centery
angle = (180/math.pi) * math.atan2(-dy, dx) - 90
gun_size = self.image.get_size()
pivot_abs = player_rect.centerx, player_rect.top + 13
pivot_rel = (gun_size[0] // 2, 105)
pygame.draw.rect(window,self.color,self.rect)
blitRotate(window, self.image,pivot_abs, pivot_rel, angle)
def lookAt( self, coordinate ):
self.look_at_pos = coordinate
# Players gun
class projectile(object):
def __init__(self,x,y,dirx,diry,color):
self.x = x
self.y = y
self.dirx = dirx
self.diry = diry
self.pin = pygame.image.load("img/Pin.png")
self.pin = pygame.transform.scale(self.pin,(self.pin.get_width()//6, self.pin.get_height()//6))
self.rect = self.pin.get_rect()
self.topleft = ( self.x, self.y )
self.speed = 10
self.color = color
self.hitbox = (self.x + 20, self.y, 30,40)
def move(self):
self.x += self.dirx * self.speed
self.y += self.diry * self.speed
def draw(self):
self.rect.topleft = (round(self.x), round(self.y))
window.blit(self.pin,self.rect)
self.hitbox = (self.x + 20, self.y,30,30)
# For rotating the the projectile
self.dist = 100
dx = self.pin.x + self.dist*math.cos(-self.pin.angle*(math.pi/180)) -65 # why offset needed ?
dy = self.pin.y + self.dist*math.sin(-self.pin.angle*(math.pi/180)) -50 # why offset needed ?
self.rect.topleft = (dx,dy)
pygame.draw.rect(window,self.color,self.rect)
# The color white
white = (255,255,255)
# The xy cords, width, height and color of my classes[]
playerman = Player(350,385,34,75,white)
# This is where my balloons get hit by the bullet and disappers
# redrawing window
def redrawwindow():
window.fill((0,0,0))
# Drawing the window in
window.blit(background,(0,0))
# drawing the player in window
playerman.draw()
# Drawing the players bullet
for bullet in bullets:
bullet.draw()
# Frames for game
fps = 30
clock = pygame.time.Clock()
#projectile empty list
bullets = []
# main loop
run = True
while run:
clock.tick(fps)
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
if event.type == pygame.MOUSEBUTTONDOWN:
if len(bullets) < 6700:
mousex , mousey = pygame.mouse.get_pos()
start_x , start_y = playerman.rect.x + 12, playerman.rect.y - 3
mouse_x , mouse_y = event.pos
dir_x , dir_y = mouse_x - start_x , mouse_y - start_y
distance = math.sqrt(dir_x**2 + dir_y**2)
if distance > 0:
new_bullet = projectile(start_x, start_y, dir_x/distance , dir_y/distance, (0,0,0))
bullets.append(new_bullet)
for bullet in bullets[:]:
bullet.move()
if bullet.x < 0 or bullet.x > 900 or bullet.y < 0 or bullet.y > 900:
bullets.pop(bullets.index(bullet))
# gun rotation
mousex, mousey = pygame.mouse.get_pos()
if not playerman.isLookingAtPlayer:
playerman.lookAt((mousex, mousey))
# telling game that key means when a key get pressed
keys = pygame.key.get_pressed()
# The player moving when the key a is pressed
if keys[pygame.K_a] and playerman.x > playerman.speed:
playerman.x -= playerman.speed
# The player moving when the key d is pressed
if keys[pygame.K_d] and playerman.x < 500 - playerman.width - playerman.speed:
playerman.x += playerman.speed
# Calling the redraw function
redrawwindow()
# updating game
pygame.display.update()
# quiting the game
pygame.quit()
</code></pre>
|
<p>You are already moving the pin in the <code>projectile.move</code> method. In the <code>projectile.draw</code> method you just have to rotate the pin. See <a href="https://stackoverflow.com/questions/4183208/how-do-i-rotate-an-image-around-its-center-using-pygame/54714144#54714144">How do I rotate an image around its center using PyGame?</a> and <a href="https://stackoverflow.com/questions/58603835/how-to-rotate-an-imageplayer-to-the-mouse-direction/58604116#58604116">How to rotate an image(player) to the mouse direction?</a>:</p>
<pre class="lang-py prettyprint-override"><code>
class projectile(object):
# [...]
def draw(self):
self.rect.center = (round(self.x), round(self.y))
angle = math.degrees(math.atan2(-self.diry, self.dirx)) - 90
rotated_pin = pygame.transform.rotate(self.pin, angle)
rotated_rect = rotated_pin.get_rect(center = self.rect.center)
pygame.draw.rect(window,self.color, rotated_rect)
window.blit(rotated_pin, rotated_rect)
self.hitbox = (self.x + 20, self.y,30,30)
</code></pre>
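<p>The angle computation can be checked in isolation: <code>-diry</code> flips pygame's downward y axis, and the <code>-90</code> offset accounts for the pin artwork pointing up while angle 0 in <code>atan2</code> points right. A stdlib-only sketch (the helper name <code>pin_angle</code> is illustrative):</p>

```python
import math

def pin_angle(dirx, diry):
    """Rotation angle for a sprite drawn pointing up, given a screen-space
    direction vector (pygame's y axis points down, hence -diry)."""
    return math.degrees(math.atan2(-diry, dirx)) - 90

print(pin_angle(1, 0))   # -90.0  (moving right -> rotate 90 degrees clockwise)
print(pin_angle(0, -1))  # 0.0    (moving up the screen -> no rotation)
```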
<p>You have to find a starting position of the pin. The pin should start somewhere on top of the blowpipe. Add a <code>get_pivot</code> method to the <code>Player</code> class:</p>
<pre class="lang-py prettyprint-override"><code>class Player:
# [...]
def get_pivot(self):
player_rect = self.cannon.get_rect(center = self.get_rect().center)
return player_rect.centerx, player_rect.top + 103
</code></pre>
<p>Add a <code>get_angle</code> method that calculates the angle of the blowpipe:</p>
<pre class="lang-py prettyprint-override"><code>class Player:
# [...]
def get_angle(self):
pivot_abs = self.get_pivot()
dx = self.look_at_pos[0] - pivot_abs[0]
dy = self.look_at_pos[1] - pivot_abs[1]
return math.degrees(math.atan2(-dy, dx))
</code></pre>
<p>Use this methods to compute the top of the blow pipes:</p>
<pre class="lang-py prettyprint-override"><code>class Player:
# [...]
def get_top(self):
pivot_x, pivot_y = self.get_pivot()
angle = self.get_angle()
length = 100
top_x = pivot_x + length * math.cos(math.radians(angle))
top_y = pivot_y - length * math.sin(math.radians(angle))
return top_x, top_y
</code></pre>
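<p>The trigonometry in <code>get_top</code> can also be verified in isolation; here is a stdlib-only sketch (the name <code>muzzle_point</code> and the sample numbers are illustrative):</p>

```python
import math

def muzzle_point(pivot_x, pivot_y, angle_deg, length):
    """Point `length` pixels from the pivot along `angle_deg`, in screen
    coordinates (the y axis points down, so the sine term is subtracted)."""
    x = pivot_x + length * math.cos(math.radians(angle_deg))
    y = pivot_y - length * math.sin(math.radians(angle_deg))
    return x, y

# 90 degrees means straight up the screen: x unchanged, y reduced by length.
print(muzzle_point(100, 200, 90, 50))  # approximately (100.0, 150.0)
```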
<p>You can also use the <code>get_pivot</code> and <code>get_angle</code> methods in the <code>draw</code> method:</p>
<pre class="lang-py prettyprint-override"><code>class Player:
# [...]
def draw(self):
self.rect.topleft = (self.x,self.y)
pygame.draw.rect(window,self.color,self.hitbox)
gun_size = self.image.get_size()
pivot_abs = self.get_pivot()
pivot_rel = (gun_size[0] // 2, 105)
angle = self.get_angle() - 90
pygame.draw.rect(window,self.color,self.rect)
blitRotate(window, self.image,pivot_abs, pivot_rel, angle)
</code></pre>
<p>Use <code>get_top</code> to set the starting position of the pin:</p>
<pre class="lang-py prettyprint-override"><code>mousex, mousey = pygame.mouse.get_pos()
start_x, start_y = playerman.get_top()
mouse_x, mouse_y = event.pos
dir_x, dir_y = mouse_x - start_x , mouse_y - start_y
distance = math.sqrt(dir_x**2 + dir_y**2)
if distance > 0:
new_bullet = projectile(start_x, start_y, dir_x/distance, dir_y/distance, (0,0,0))
bullets.append(new_bullet)
</code></pre>
<p>Draw the pins before the blowpipe so it looks like the pins are coming out of the blowpipe:</p>
<pre class="lang-py prettyprint-override"><code>def redrawwindow():
window.fill((0,0,0))
# Drawing the window in
window.blit(background,(0,0))
# Drawing the players bullet
for bullet in bullets:
bullet.draw()
# drawing the player in window
playerman.draw()
</code></pre>
<hr />
<p>Complete example:</p>
<p><a href="https://i.stack.imgur.com/ocqjL.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ocqjL.gif" alt="" /></a></p>
<pre class="lang-py prettyprint-override"><code>import pygame,math,random
pygame.init()
# Windowing screen width and height
width = 500
height = 500
window = pygame.display.set_mode((width,height))
# Name of window
pygame.display.set_caption("Game")
# The Background
background = pygame.image.load("img/BG.png")
def blitRotate(surf, image, pos, originPos, angle):
# calcaulate the axis aligned bounding box of the rotated image
w, h = image.get_size()
sin_a, cos_a = math.sin(math.radians(angle)), math.cos(math.radians(angle))
min_x, min_y = min([0, sin_a*h, cos_a*w, sin_a*h + cos_a*w]), max([0, sin_a*w, -cos_a*h, sin_a*w - cos_a*h])
# calculate the translation of the pivot
pivot = pygame.math.Vector2(originPos[0], -originPos[1])
pivot_rotate = pivot.rotate(angle)
pivot_move = pivot_rotate - pivot
# calculate the upper left origin of the rotated image
origin = (pos[0] - originPos[0] + min_x - pivot_move[0], pos[1] - originPos[1] - min_y + pivot_move[1])
# get a rotated image
rotated_image = pygame.transform.rotate(image, angle)
# rotate and blit the image
surf.blit(rotated_image, origin)
# Player class
class Player:
def __init__(self,x,y,width,height,color):
self.x = x
self.y = y
self.width = width
self.height = height
self.color = color
self.speed = 4
self.cannon = pygame.image.load("img/Cannon.png")
self.cannon = pygame.transform.scale(self.cannon,(self.cannon.get_width()//2, self.cannon.get_height()//2))
self.rect = pygame.Rect(x,y,width,height)
self.hitbox = (self.x,self.y,30,30)
self.image = self.cannon
self.rect = self.image.get_rect(center = (self.x, self.y))
self.look_at_pos = (self.x, self.y)
self.isLookingAtPlayer = False
self.look_at_pos = (x,y)
self.angle = 0
def get_rect(self):
self.rect.topleft = (self.x,self.y)
return self.rect
def get_pivot(self):
player_rect = self.cannon.get_rect(center = self.get_rect().center)
return player_rect.centerx, player_rect.top + 103
def get_angle(self):
pivot_abs = self.get_pivot()
dx = self.look_at_pos[0] - pivot_abs[0]
dy = self.look_at_pos[1] - pivot_abs[1]
return math.degrees(math.atan2(-dy, dx))
def get_top(self):
pivot_x, pivot_y = self.get_pivot()
angle = self.get_angle()
length = 100
top_x = pivot_x + length * math.cos(math.radians(angle))
top_y = pivot_y - length * math.sin(math.radians(angle))
return top_x, top_y
def draw(self):
self.rect.topleft = (self.x,self.y)
pygame.draw.rect(window,self.color,self.hitbox)
gun_size = self.image.get_size()
pivot_abs = self.get_pivot()
pivot_rel = (gun_size[0] // 2, 105)
angle = self.get_angle() - 90
pygame.draw.rect(window,self.color,self.rect)
blitRotate(window, self.image,pivot_abs, pivot_rel, angle)
def lookAt( self, coordinate ):
self.look_at_pos = coordinate
# Players gun
class projectile(object):
def __init__(self,x,y,dirx,diry,color):
self.x = x
self.y = y
self.dirx = dirx
self.diry = diry
self.pin = pygame.image.load("img/Pin.png")
self.pin = pygame.transform.scale(self.pin,(self.pin.get_width()//6, self.pin.get_height()//6))
self.rect = self.pin.get_rect()
self.center = ( self.x, self.y )
self.speed = 10
self.color = color
self.hitbox = (self.x + 20, self.y, 30,40)
def move(self):
self.x += self.dirx * self.speed
self.y += self.diry * self.speed
def draw(self):
self.rect.center = (round(self.x), round(self.y))
angle = math.degrees(math.atan2(-self.diry, self.dirx)) - 90
rotated_pin = pygame.transform.rotate(self.pin, angle)
rotated_rect = rotated_pin.get_rect(center = self.rect.center)
pygame.draw.rect(window,self.color, rotated_rect)
window.blit(rotated_pin, rotated_rect)
self.hitbox = (self.x + 20, self.y,30,30)
# The color white
white = (255,255,255)
# The xy cords, width, height and color of my classes[]
playerman = Player(350,385,34,75,white)
# This is where my balloons get hit by the bullet and disappers
# redrawing window
def redrawwindow():
window.fill((0,0,0))
# Drawing the window in
window.blit(background,(0,0))
# Drawing the players bullet
for bullet in bullets:
bullet.draw()
# drawing the player in window
playerman.draw()
# Frames for game
fps = 30
clock = pygame.time.Clock()
#projectile empty list
bullets = []
# main loop
run = True
while run:
clock.tick(fps)
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
if event.type == pygame.MOUSEBUTTONDOWN:
if len(bullets) < 6700:
mousex, mousey = pygame.mouse.get_pos()
start_x, start_y = playerman.get_top()
mouse_x, mouse_y = event.pos
dir_x, dir_y = mouse_x - start_x , mouse_y - start_y
distance = math.sqrt(dir_x**2 + dir_y**2)
if distance > 0:
new_bullet = projectile(start_x, start_y, dir_x/distance, dir_y/distance, (0,0,0))
bullets.append(new_bullet)
for bullet in bullets[:]:
bullet.move()
if bullet.x < 0 or bullet.x > 900 or bullet.y < 0 or bullet.y > 900:
bullets.pop(bullets.index(bullet))
# gun rotation
mousex, mousey = pygame.mouse.get_pos()
if not playerman.isLookingAtPlayer:
playerman.lookAt((mousex, mousey))
# telling game that key means when a key get pressed
keys = pygame.key.get_pressed()
# The player moving when the key a is pressed
if keys[pygame.K_a] and playerman.x > playerman.speed:
playerman.x -= playerman.speed
# The player moving when the key d is pressed
if keys[pygame.K_d] and playerman.x < 500 - playerman.width - playerman.speed:
playerman.x += playerman.speed
# Calling the redraw function
redrawwindow()
# updating game
pygame.display.update()
# quiting the game
pygame.quit()
</code></pre>
|
python|pygame
| 0 |
1,902,132 | 65,804,060 |
My Python Output Repeats Itself Hundreds of Times
|
<p>I am learning to scrape data from a website. I have a problem when printing my code using jupyter notebook. My output repeats itself many times and I don't know how to fix it. Here's my trial code:</p>
<pre><code>from bs4 import BeautifulSoup
from bs4.element import Comment
import urllib
from urllib.request import urlopen
import re
url = ('https://jogjapolitan.harianjogja.com/read/2020/10/06/510/1051831/omnibus-law-cipta-kerja-disahkan-buruh-di-jogja-berkabung')
html = urlopen(url).read()
soup = BeautifulSoup(html, 'html.parser')
results = soup.find('div', class_="entry_content")
datas = (''.join(results.stripped_strings))
for data in datas:
match = re.findall('(?:"(.*?)")', datas)
if match:
print(match[1])
</code></pre>
<p>I thought it was fine until I print it out:</p>
<pre><code>Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
</code></pre>
<p>And hundreds more....</p>
<p>All answers are so much appreciated. Thanks!</p>
|
<p>The <code>for data in datas</code> loop is causing your issue. <strong>There is no issue with your regex, contrary to what some other answers have suggested.</strong> Let's take a look at your code's execution.</p>
<p>I've corrected the code at the bottom of the post, but first I want to talk you through the issue.</p>
<p>When you create the variable <code>datas</code>, you join multiple things into a single string. As a demonstration, run <code>print(type(datas))</code>. This means your data has already been put together.</p>
<p>When you loop through the string, you actually end up looping through each letter of that string. As a test, try running this:</p>
<pre><code>for data in datas:
print(data)
</code></pre>
<p>The program will output <em>each letter</em> on a new line, like so:</p>
<pre><code>H
a
r
i
a
...
</code></pre>
<p><strong>The Solution</strong>
All you need to do is remove that for loop and everything will work fine. Here is what your final code should look like:</p>
<pre><code>from bs4 import BeautifulSoup
from urllib.request import urlopen
import re
url = ('https://jogjapolitan.harianjogja.com/read/2020/10/06/510/1051831/omnibus-law-cipta-kerja-disahkan-buruh-di-jogja-berkabung')
html = urlopen(url).read()
soup = BeautifulSoup(html, 'html.parser')
results = soup.find('div', class_="entry_content")
datas = (''.join(results.stripped_strings))
#for data in datas:
match = re.findall('(?:"(.*?)")', datas)
if match:
print(match[1])
</code></pre>
<p>This should give you the result:</p>
<pre><code>Aksi tersebut akan berlangsung mulai hari ini hingga puncaknya pada tanggal 8 Oktober di Tugu Pal Putih Jogja. Kemudian, langkah yang dilakukan kita akan ada aksi unjuk rasa besar-besaran serentak di seluruh kota di Indonesia tanggal 8 Oktober. Kita juga akan menggalang kekuatan dengan elemen-elemen masyarakat dan akan buat aksi simultan mulai 6 sampai 8 Oktober,
</code></pre>
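<p>The single-call behaviour of <code>re.findall</code> — and the character-by-character behaviour of iterating a string — can be verified on a small stdlib-only example (the sample string is illustrative):</p>

```python
import re

# `datas` is already one string; re.findall scans it in a single call,
# so looping over the string's characters is unnecessary.
datas = 'intro "first quote" middle "second quote" end'

# Iterating a string yields single characters, not records:
assert [c for c in "Hi"] == ["H", "i"]

match = re.findall(r'"(.*?)"', datas)
print(match)     # ['first quote', 'second quote']
print(match[1])  # second quote
```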
|
python|jupyter-notebook
| 1 |
1,902,133 | 36,972,113 |
Optimizing function computation in a pandas column?
|
<p>Let's assume that I have the following pandas dataframe:</p>
<pre><code>id |opinion
1 |Hi how are you?
...
n-1|Hello!
</code></pre>
<p>I would like to create a new pandas <a href="https://en.wikipedia.org/wiki/Part-of-speech_tagging" rel="nofollow">POS-tagged</a> column like this:</p>
<pre><code>id| opinion |POS-tagged_opinions
1 |Hi how are you?|hi\tUH\thi
how\tWRB\thow
are\tVBP\tbe
you\tPP\tyou
?\tSENT\t?
.....
n-1| Hello |Hello\tUH\tHello
!\tSENT\t!
</code></pre>
<p>From the documentation a tutorial, I tried several approaches. Particularly:</p>
<pre><code>df.apply(postag_cell, axis=1)
</code></pre>
<p>and</p>
<pre><code>df['content'].map(postag_cell)
</code></pre>
<p>Therefore, I created this POS-tag cell function:</p>
<pre><code>import pandas as pd
df = pd.read_csv('/Users/user/Desktop/data2.csv', sep='|')
print df.head()
def postag_cell(pandas_cell):
import pprint # For proper print of sequences.
import treetaggerwrapper
tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')
#2) tag your text.
y = [i.decode('UTF-8') if isinstance(i, basestring) else i for i in [pandas_cell]]
tags = tagger.tag_text(y)
#3) use the tags list... (list of string output from TreeTagger).
return tags
#df.apply(postag_cell(), axis=1)
#df['content'].map(postag_cell())
df['POS-tagged_opinions'] = (df['content'].apply(postag_cell))
print df.head()
</code></pre>
<p>The above function returns the following:</p>
<pre><code>user:~/PycharmProjects/misc_tests$ time python tagging\ with\ pandas.py
id| opinion |POS-tagged_opinions
1 |Hi how are you?|[hi\tUH\thi
how\tWRB\thow
are\tVBP\tbe
you\tPP\tyou
?\tSENT\t?]
.....
n-1| Hello |Hello\tUH\tHello
!\tSENT\t!
--- 9.53674316406e-07 seconds ---
real 18m22.038s
user 16m33.236s
sys 1m39.066s
</code></pre>
<p>The problem is that with a large number of <a href="http://www.mediafire.com/download/5bowh8tzff91fvd/new_corpus.csv" rel="nofollow">opinions</a> it takes a lot of time:</p>
<p><strong>How can I perform POS-tagging more efficiently and in a more pythonic way with pandas and treetagger?</strong> I believe that this issue is due to my limited pandas knowledge, since I was able to tag the opinions very quickly with treetagger alone, outside of a pandas dataframe.</p>
|
<p>There are some obvious modifications that can be done to gain a reasonable amount of time (such as moving the imports and the instantiation of the TreeTagger class out of the <code>postag_cell</code> function). Then the code can be parallelized. However, the majority of the work is done by treetagger itself. As I don't know anything about this software, I can't tell if it can be further optimized.</p>
<h2>The minimal working code:</h2>
<pre><code>import pandas as pd
import treetaggerwrapper
input_file = 'new_corpus.csv'
output_file = 'output.csv'
def postag_string(s):
'''Returns tagged text from string s'''
if isinstance(s, basestring):
s = s.decode('UTF-8')
return tagger.tag_text(s)
# Reading in the file
all_lines = []
with open(input_file) as f:
for line in f:
all_lines.append(line.strip().split('|', 1))
df = pd.DataFrame(all_lines[1:], columns = all_lines[0])
tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')
df['POS-tagged_content'] = df['content'].apply(postag_string)
# Format fix:
def fix_format(x):
'''x - a list or an array'''
# With encoding:
out = list(tuple(i.encode().split('\t')) for i in x)
# or without:
# out = list(tuple(i.split('\t')) for i in x)
return out
df['POS-tagged_content'] = df['POS-tagged_content'].apply(fix_format)
df.to_csv(output_file, sep = '|')
</code></pre>
<p>I'm not using <code>pd.read_csv(filename, sep = '|')</code> because your input file is "misformatted" - it contains unescaped characters <code>|</code> in some text opinions.</p>
<p>(<strong>Update:</strong>) After format fix, the output file looks like this:</p>
<pre><code>$ cat output_example.csv
|id|content|POS-tagged_content
0|cv01.txt|How are you?|[('How', 'WRB', 'How'), ('are', 'VBP', 'be'), ('you', 'PP', 'you'), ('?', 'SENT', '?')]
1|cv02.txt|Hello!|[('Hello', 'UH', 'Hello'), ('!', 'SENT', '!')]
2|cv03.txt|"She said ""OK""."|"[('She', 'PP', 'she'), ('said', 'VVD', 'say'), ('""', '``', '""'), ('OK', 'UH', 'OK'), ('""', ""''"", '""'), ('.', 'SENT', '.')]"
</code></pre>
<p>If the formatting is not exactly what you want, we can work it out.</p>
<h2>Parallelized code</h2>
<p>It may give some speedup but don't expect miracles. The overhead coming from the multiprocessing setup may even exceed the gains. You can experiment with the number of processes <code>nproc</code> (here, set by default to the number of CPUs; setting more than that is inefficient).</p>
<p>Treetaggerwrapper has its own multiprocess <a href="http://treetaggerwrapper.readthedocs.io/en/latest/#polls-of-taggers-threads" rel="nofollow">class</a>. I suspect that it does more or less the same thing as the code below, so I didn't try it.</p>
<pre><code>import pandas as pd
import numpy as np
import treetaggerwrapper
import multiprocessing as mp
input_file = 'new_corpus.csv'
output_file = 'output2.csv'
def postag_string_mp(s):
'''
Returns tagged text for string s.
"pool_tagger" is a global name, defined in each subprocess.
'''
if isinstance(s, basestring):
s = s.decode('UTF-8')
return pool_tagger.tag_text(s)
''' Reading in the file '''
all_lines = []
with open(input_file) as f:
for line in f:
all_lines.append(line.strip().split('|', 1))
df = pd.DataFrame(all_lines[1:], columns = all_lines[0])
''' Multiprocessing '''
# Number of processes can be adjusted for better performance:
nproc = mp.cpu_count()
# Function to be run at the start of every subprocess.
# Each subprocess will have its own TreeTagger called pool_tagger.
def init():
global pool_tagger
pool_tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')
# The actual job done in subprocesses:
def run(df):
return df.apply(postag_string_mp)
# Splitting the input
lst_split = np.array_split(df['content'], nproc)
pool = mp.Pool(processes = nproc, initializer = init)
lst_out = pool.map(run, lst_split)
pool.close()
pool.join()
# Concatenating the output from subprocesses
df['POS-tagged_content'] = pd.concat(lst_out)
# Format fix:
def fix_format(x):
'''x - a list or an array'''
# With encoding:
out = list(tuple(i.encode().split('\t')) for i in x)
    # or without:
# out = list(tuple(i.split('\t')) for i in x)
return out
df['POS-tagged_content'] = df['POS-tagged_content'].apply(fix_format)
df.to_csv(output_file, sep = '|')
</code></pre>
<p><strong>Update</strong></p>
<p>In Python 3, all strings are by default in unicode, so you can save some trouble and time with decoding/encoding. (In the code below, I also use pure numpy arrays instead of data frames in child processes - but the impact of this change is insignificant.)</p>
<pre><code># Python3 code:
import pandas as pd
import numpy as np
import treetaggerwrapper
import multiprocessing as mp
input_file = 'new_corpus.csv'
output_file = 'output3.csv'
''' Reading in the file '''
all_lines = []
with open(input_file) as f:
for line in f:
all_lines.append(line.strip().split('|', 1))
df = pd.DataFrame(all_lines[1:], columns = all_lines[0])
''' Multiprocessing '''
# Number of processes can be adjusted for better performance:
nproc = mp.cpu_count()
# Function to be run at the start of every subprocess.
# Each subprocess will have its own TreeTagger called pool_tagger.
def init():
global pool_tagger
pool_tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')
# The actual job done in subprocesses:
def run(arr):
out = np.empty_like(arr)
for i in range(len(arr)):
out[i] = pool_tagger.tag_text(arr[i])
return out
# Splitting the input
lst_split = np.array_split(df.values[:,1], nproc)
with mp.Pool(processes = nproc, initializer = init) as p:
lst_out = p.map(run, lst_split)
# Concatenating the output from subprocesses
df['POS-tagged_content'] = np.concatenate(lst_out)
# Format fix:
def fix_format(x):
'''x - a list or an array'''
out = list(tuple(i.split('\t')) for i in x)
return out
df['POS-tagged_content'] = df['POS-tagged_content'].apply(fix_format)
df.to_csv(output_file, sep = '|')
</code></pre>
<p>After single runs (so, not really statistically significant), I'm getting these timings on your file:</p>
<pre><code>$ time python2.7 treetagger_minimal.py
real 0m59.783s
user 0m50.697s
sys 0m16.657s
$ time python2.7 treetagger_mp.py
real 0m48.798s
user 1m15.503s
sys 0m22.300s
$ time python3 treetagger_mp3.py
real 0m39.746s
user 1m25.340s
sys 0m21.157s
</code></pre>
<p>If the only use of the pandas dataframe <code>df</code> is to save everything back to a file, then the next step would be removing pandas from the code altogether. But again, the gain would be insignificant in comparison with treetagger's work time.</p>
|
python|python-2.7|pandas|nlp|treetagger
| 1 |
1,902,134 | 48,799,718 |
pandas pivot table to stacked bar chart
|
<p>I'm trying to take this pivot table and create a stacked bar chart of wins and losses by battle type.</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(1)
df = pd.DataFrame({'attacker_outcome':np.random.choice(['win', 'loss'], 20, replace=True),
'battle_type':np.random.choice(['pitched battle', 'siege', 'ambush', 'razing'], 20, replace=True)})
attacker_outcome battle_type
0 loss ambush
1 loss siege
2 win ambush
3 loss siege
4 loss siege
5 win ambush
6 win siege
7 win razing
8 loss siege
9 loss ambush
10 loss razing
11 loss siege
12 win razing
13 loss razing
14 win ambush
15 win pitched battle
16 loss ambush
17 loss siege
18 win pitched battle
19 loss siege
</code></pre>
<p>I tried to initialize a new column, then <code>groupby</code> and <code>count</code>, but I'm starting to get lost here. This is what I'm getting:</p>
<pre><code>df.assign(count =1 ).groupby(['attacker_outcome', 'battle_type']).count().plot.bar(stacked=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/gFZSX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gFZSX.png" alt="barchart"></a></p>
<p>Any help is appreciated!</p>
|
<p>You can accomplish this through grouping and unstacking:</p>
<pre><code>df.groupby('battle_type')['attacker_outcome']\
.value_counts()\
.unstack(level=1)\
.plot.bar(stacked=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/F6D4k.png" rel="noreferrer"><img src="https://i.stack.imgur.com/F6D4k.png" alt="enter image description here"></a></p>
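<p>For reference, a minimal end-to-end sketch using the random data from the question (the seed makes it reproducible). The intermediate table produced by <code>unstack</code> has one row per battle type and one column per outcome, which is exactly the shape <code>plot.bar(stacked=True)</code> expects:</p>

```python
import numpy as np
import pandas as pd

np.random.seed(1)
df = pd.DataFrame({
    'attacker_outcome': np.random.choice(['win', 'loss'], 20),
    'battle_type': np.random.choice(
        ['pitched battle', 'siege', 'ambush', 'razing'], 20),
})

# Rows: battle types; columns: outcomes; cells: counts.
counts = (df.groupby('battle_type')['attacker_outcome']
            .value_counts()
            .unstack(level=1, fill_value=0))
print(counts)
```

<p>Calling <code>counts.plot.bar(stacked=True)</code> on this table then draws one stacked bar per battle type.</p>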
|
python|pandas|matplotlib
| 7 |
1,902,135 | 20,103,011 |
compare lines within a single file using python
|
<p>I have a text file that has space delimited data as follows:</p>
<pre><code>aaa bbb 10
aaa bbb 5
aaa bbb 6
aaa bbb 2
aaa ccc 4
aaa ccc 11
aaa ccc 7
aaa ddd 9
aaa ddd 13
aaa ddd 12
aaa ddd 19
xxx yyy 20
xxx yyy 4
xxx yyy 6
xxx yyy 8
xxx yyy 12
xxx zzz 10
xxx zzz 11
xxx zzz 4
xxx zzz 5
xxx zzz 6
</code></pre>
<p>I'm not sure how to explain this in words but I want to write the lines with the biggest numerical value to a separate file.</p>
<p>The output should look like:</p>
<pre><code>aaa bbb 10
aaa ccc 11
aaa ddd 19
xxx yyy 20
xxx zzz 11
</code></pre>
<p>Here is some code I've tried, but hasn't worked</p>
<pre><code>for line in r.readlines()[1:]:
z = re.split(' ', line)
a = []
a.append(z)
for i in xrange(len(a)):
if z[0] == a[i][0] and z[1] == a[i][1]:
if z[7] > a[i][7]:
del a[i]
a.append(z)
for x in a:
p.write(' '.join(x))
</code></pre>
<p>I didn't make this clear originally in the question (I'm trying not to give out too much information about the data I'm working with), but there are 8 "columns" in this file. The first three are alpha-numeric, the fourth is an integer, and the last four are floats. I need to use the very last column (a float) as the value to maximize. Sorry about this!</p>
<p>Another Solution</p>
<pre><code>allLines = r.readlines()
bestOf = re.split(' ', allLines[1])
f = open("results_filtered.txt", 'a')
for line in allLines[2:]:
z = re.split(' ', line)
if z[0] == bestOf[0] and z[1] == bestOf[1]:
# match, compare signals
if z[7] > bestOf[7]:
bestOf = z
else:
# no match, next set
f.write(' '.join(bestOf))
bestOf = z
</code></pre>
|
<p>If the lines are not sorted, track the maxima using a dictionary or <a href="http://docs.python.org/2/library/collections.html#collections.defaultdict" rel="nofollow"><code>collections.defaultdict</code></a> to track the maximum values:</p>
<pre><code>from collections import defaultdict
maxima = defaultdict(int)
with open(inputfilename, 'r') as ifh:
for line in ifh:
key, value = line.rsplit(None, 1)
value = int(value)
if value > maxima[key]:
maxima[key] = value
with open(outputfilename, 'w') as ofh:
for key in sorted(maxima):
        ofh.write('{} {}\n'.format(key, maxima[key]))
</code></pre>
<p>A regular dictionary would work too; you'd use <code>maxima = {}</code> and <code>if value > maxima.get(key, 0):</code> instead.</p>
<p>In the above sample code I use <a href="http://docs.python.org/2/library/stdtypes.html#str.rsplit" rel="nofollow"><code>str.rsplit()</code></a> to split on the last whitespace in the line that separates two words; this ensures that we grab <em>just</em> the integer value at the end of the line. The rest of the line is used as the key.</p>
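<p>For example, with a line from the question's file:</p>

```python
line = "aaa bbb 10\n"
# rsplit(None, 1) splits once, from the right, on any run of whitespace;
# with a None separator it also ignores the trailing newline.
key, value = line.rsplit(None, 1)
```

<p>Here <code>key</code> is <code>'aaa bbb'</code> and <code>value</code> is <code>'10'</code>.</p>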
<p>If the 'key' is only taken from <em>part</em> of the line, then split out the line further, and store both the maximum and the line. If the values are really floats, you may want to start out with <code>float('-inf')</code> as the 'maximum' for that key so far:</p>
<pre><code>from collections import defaultdict
maxima = defaultdict(lambda: (float('-inf'), ''))
with open(inputfilename, 'r') as ifh:
for line in ifh:
columns = line.rsplit(None, 1)
key = tuple(columns[:2]) # first two columns are the key
value = float(columns[-1]) # last column is the value
if value > maxima[key][0]:
maxima[key] = (value, line)
with open(outputfilename, 'w') as ofh:
for key in sorted(maxima):
# write the tracked lines
ofh.write(maxima[key][1])
</code></pre>
<p>Now both the maximum value <em>and</em> the whole line with that maximum are stored, per key. What you pick for a key is up to you; I picked the first two columns.</p>
|
python|loops|python-2.7
| 2 |
1,902,136 | 4,158,367 |
more than 9 subplots in matplotlib
|
<p>Is it possible to get more than 9 subplots in matplotlib?</p>
<p>I am using the subplot command <code>pylab.subplot(449)</code>; how can I get a <code>4410</code> to work?</p>
<p>Thank you very much. </p>
|
<p>It was easier than I expected, I just did: <code>pylab.subplot(4,4,10)</code> and it worked.</p>
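<p>The three-digit shorthand <code>subplot(449)</code> only works while every argument is a single digit, so anything past the 9th axes needs the comma form. A minimal sketch of a 4×4 grid (using the non-interactive <code>Agg</code> backend here so it runs headless):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; drop this line for interactive use
import matplotlib.pylab as pylab

for i in range(1, 17):      # 16 subplots in a 4x4 grid
    pylab.subplot(4, 4, i)  # comma form: rows, cols, index
    pylab.title(str(i))
```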
|
python|charts|matplotlib
| 72 |
1,902,137 | 51,384,210 |
How to group by and make sum from different sources in a pandas dataframe?
|
<p>I have a dataframe <code>df</code> that contains the number of transactions between companies.</p>
<pre><code>df Receiver Payer Amount
0 0045 xx04 300
1 5400 zz03 600
2 5400 0045 100
3 xx04 5400 400
</code></pre>
<p>For each company I would like to count the <code>in</code> and <code>out</code> amounts, distinguishing counterparties whose IDs contain only digits from those with non-numeric characters. I would like to return something like:</p>
<pre><code>df1 ID In_0 In_1 Out_0 Out_1
0 0045 0 300 100 0
1 5400 100 600 0 400
2 zz03 0 0 600 0
3 xx04 400 0 300 0
</code></pre>
<p>For now I have just tried a simple <code>groupby</code> to get the total amount between each pair of companies, for instance:</p>
<pre><code>df.groupby(['Receiver', 'Payer'], as_index = False)['Amount'].sum()
</code></pre>
|
<p>I think you need to add a little logic and reshape your dataframe.</p>
<pre><code>df_out = df.rename(columns={'Receiver':'IN','Payer':'OUT'})
df_out['IN_TYPE'] = df_out['OUT'].str.contains(r'\D').astype(int).astype(str)
df_out['OUT_TYPE'] = df_out['IN'].str.contains(r'\D').astype(int).astype(str)
df_out = df_out.melt(['df','Amount','IN_TYPE','OUT_TYPE'], value_name='ID')
df_out['Cols'] = df_out['variable']+'_'+np.where(df_out['variable']=='IN',df_out['IN_TYPE'],df_out['OUT_TYPE'])
df_out = df_out.groupby(['ID','Cols'])['Amount'].sum().unstack().fillna(0).reset_index()
print(df_out)
</code></pre>
<p>Output:</p>
<pre><code>Cols ID IN_0 IN_1 OUT_0 OUT_1
0 0045 0.0 300.0 100.0 0.0
1 5400 100.0 600.0 0.0 400.0
2 xx04 400.0 0.0 300.0 0.0
3 zz03 0.0 0.0 600.0 0.0
</code></pre>
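<p>An alternative sketch (my own take, not the only way): build each side with <code>groupby</code>/<code>unstack</code> and concatenate, classifying a counterparty as type 1 when its ID contains any non-digit character:</p>

```python
import pandas as pd

df = pd.DataFrame({'Receiver': ['0045', '5400', '5400', 'xx04'],
                   'Payer':    ['xx04', 'zz03', '0045', '5400'],
                   'Amount':   [300, 600, 100, 400]})

def flows(df, side, counterparty, prefix):
    # 0 = counterparty ID is all digits, 1 = it contains non-digits
    ctype = df[counterparty].str.contains(r'\D').astype(int)
    out = df.groupby([df[side], ctype])['Amount'].sum().unstack(fill_value=0)
    out.columns = ['%s_%d' % (prefix, c) for c in out.columns]
    return out

res = (pd.concat([flows(df, 'Receiver', 'Payer', 'In'),
                  flows(df, 'Payer', 'Receiver', 'Out')], axis=1)
         .fillna(0).astype(int)
         .rename_axis('ID').reset_index())
print(res)
```

<p>The outer-joined <code>concat</code> keeps companies that only ever appear on one side (like <code>zz03</code>), and <code>fillna(0)</code> zeroes the missing flows.</p>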
|
python|pandas|group-by
| 1 |
1,902,138 | 51,398,927 |
How to check what a character is before a word in a string in python
|
<p>How would I isolate the numbers before <code>"chaos"</code> and <code>"exalted"</code> when the words may change in this string?</p>
<blockquote>
<p>@From ProhtienArc: Hi, I'd like to buy your 69.5 chaos for my 1 exalted in Incursion.</p>
</blockquote>
<p>I'm also using sikuli if there's a way to do it with that.</p>
|
<p>There are many approaches.
The simplest (which will explode if the words you look for aren't there):</p>
<p>Given a string:</p>
<pre><code>s="@From ProhtienArc: Hi, I'd like to buy your 69.5 chaos for my 1 exalted in Incursion."
</code></pre>
<p>Split it (on spaces):</p>
<pre><code>L=s.split()
</code></pre>
<p>You can then search the list</p>
<pre><code>['@From', 'ProhtienArc:', 'Hi,', "I'd", 'like', 'to', 'buy', 'your', '69.5', 'chaos', 'for', 'my', '1', 'exalted', 'in', 'Incursion.']
</code></pre>
<p>like this</p>
<pre><code>>>> L.index("chaos")
9
</code></pre>
<p>and use that index to find the token before:</p>
<pre><code>>>> L[L.index("chaos")-1]
'69.5'
</code></pre>
<p>This is a string so you then need to convert it to a number.</p>
<p>Do likewise with the other word.
Watch out for not finding the words or them being at the initial index.</p>
<p>And more generally, you could use any word in place of <code>"chaos"</code>.</p>
<pre><code>def number_before(word, split_words):
    return split_words[split_words.index(word)-1]
</code></pre>
<p>Split you string and call it:</p>
<pre><code>as_string = number_before("chaos", s.split())
number = float(as_string)
</code></pre>
<p>You need to decide what to do if the word isn't there.</p>
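<p>One defensive variant (a sketch; returning <code>None</code> for the missing cases is my choice — adjust to whatever suits your program):</p>

```python
def number_before(word, tokens):
    try:
        i = tokens.index(word)
    except ValueError:
        return None            # word not in the message
    if i == 0:
        return None            # nothing before the word
    return float(tokens[i - 1])

s = ("@From ProhtienArc: Hi, I'd like to buy your 69.5 chaos "
     "for my 1 exalted in Incursion.")
tokens = s.split()
```

<p>With this, <code>number_before("chaos", tokens)</code> gives <code>69.5</code> and <code>number_before("exalted", tokens)</code> gives <code>1.0</code>.</p>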
|
python|sikuli
| 0 |
1,902,139 | 17,533,511 |
Removing duplicate rows?
|
<p>Does anyone know how I might go about removing duplicate rows in the following data, where the duplicate rows are those with the same name? The catch is that I want to keep the phone numbers, emails, etc that are different in a duplicate entry.</p>
<p>This data is a tab-delimited text file.</p>
<p>Thx!</p>
<pre><code>name phone email website
Diane Grant Albrecht M.S.
Lannister G. Cersei M.A.T., CEP 111-222-3333 cersei@got.com www.got.com
Argle D. Bargle Ed.M.
Sam D. Man Ed.M. 000-000-1111 dman123@gmail.com www.daManWithThePlan.com
Sam D. Man Ed.M.
Sam D. Man Ed.M. 111-222-333 dman123@gmail.com www.daManWithThePlan.com
D G Bamf M.S.
Amy Tramy Lamy Ph.D.
</code></pre>
<p>Ideal output:</p>
<pre><code>name phone email website
Diane Grant Albrecht M.S.
Lannister G. Cersei M.A.T., CEP 111-222-3333 cersei@got.com www.got.com
Argle D. Bargle Ed.M.
Sam D. Man Ed.M. 000-000-1111, 111-222-333 dman123@gmail.com www.daManWithThePlan.com
D G Bamf M.S.
Amy Tramy Lamy Ph.D.
</code></pre>
<hr>
<p>FOLLOW-UP:</p>
<p>Thoughts on this:</p>
<pre><code>from collections import defaultdict
import csv
import re
input = open('ieca_first_col_fake_text.txt', 'rU')
for row in input:
row.split('\t')
print row
# default to empty set for phone, email, website, area, degrees
extracted_data = defaultdict(lambda: [set(), set(), set()])
data_set = {}
for entry in input:
for index, value in enumerate(entry):
if index == 0:
data_set = extracted_data[name]
elif value:
data_set[index - 1].add(value)
print data_set
</code></pre>
<p>data_set is empty ('{}') </p>
|
<p>When you parse the data, use a dictionary where the names are the keys and each value is a list for each additional value, each of which is in turn a set. This will work fine as long as you don't need to maintain any associations between the data by row.</p>
<pre><code>from collections import defaultdict
extracted_data = defaultdict(lambda: [set(), set(), set()])
# Splitting of data depends upon your input format
for entry in input:
# Assume split() returns a 4-length iterable containing name,
# phone, email, and url where the value is falsy if not present
for index, value in enumerate(split(entry)):
        if index == 0:
            data_set = extracted_data[value]  # value holds the name here
elif value:
data_set[index - 1].add(value)
</code></pre>
|
python
| 2 |
1,902,140 | 64,442,332 |
Why won't this Beautiful Soup code parse the text I am targeting?
|
<p>I am trying to select the Properties section header in this 10K filing; once it is selected, I intend to grab the text in that section (i.e. all text between the Properties and Legal Proceedings section headers).</p>
<p>When I run the code below I get the IndexError 'list index out of range', but I don't understand why, since the text "PROPERTIES" appears to be within a 'p' tag. I have also tried using 'id="ITEM_2_PROPERTIES"' instead of text= but that didn't work either.</p>
<p>Where am I going wrong?</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = 'https://www.sec.gov/ix?doc=/Archives/edgar/data/1318605/000156459020004475/tsla-10k_20191231.htm'
soup = BeautifulSoup(requests.get(url).content, 'lxml')
properties_header = soup.find_all('p', text="PROPERTIES")[0]
print(properties_header)
</code></pre>
|
<p>It's because you're making a request to a <code>JS</code> rendered site, so there's no such <code>p</code> with text <code>PROPERTIES</code>.</p>
<p>However, if you change your target URL, there's one:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = 'https://www.sec.gov/Archives/edgar/data/1318605/000156459020004475/tsla-10k_20191231.htm'
soup = BeautifulSoup(requests.get(url).content, 'lxml')
properties_header = soup.find_all('p', text="PROPERTIES")
print(properties_header)
</code></pre>
<p>Output:</p>
<pre><code>[<p id="ITEM_2_PROPERTIES" style="margin-bottom:0pt;margin-top:0pt;font-weight:bold;font-style:normal;text-transform:none;font-variant: normal;font-family:Times New Roman;font-size:10pt;">PROPERTIES</p>]
</code></pre>
<p>I got the new target URL from the Developer Tool. This comes up when you turn <code>JS</code> back on. So, I guess you should target that URL for your future requests.</p>
|
python|web-scraping|beautifulsoup|edgar
| 1 |
1,902,141 | 70,403,203 |
Why does implementing multiprocessing makes my program slower
|
<p>I'm trying to implement multiprocessing in my code to make it faster.</p>
<p>To make it easier to understand I will just say the program fits an observed curve using a linear combination of a library of curves and from that measures properties of the observed curve.</p>
<p>I have to do this for over 400 curves and in order to estimate the errors of these properties I perform a Monte Carlo simulation, which means I have to iterate a number of times each calculation.</p>
<p>This takes a lot of time and work, and since I believe it is a CPU-bound task, I figured I'd use multiprocessing in the error-estimation step. Here's a simplification of my code:</p>
<p>Without multiprocessing</p>
<pre><code>import numpy as np
import fitting_package
import multiprocessing
from collections import defaultdict
def estimate_errors(best_fit_curve, signal_to_noise, fit_kwargs, iterations=100):
results = defaultdict(list)
def fit(best_fit_curve, signal_to_noise, fit_kwargs, results):
# Here noise is added to simulate a new curve (Monte Carlo simulation)
        noise = best_fit_curve/signal_to_noise
simulated_curve = np.random.normal(best_fit_curve, noise)
# The arguments from the original fit (outside the error estimation) are passed to the fitting
fit_kwargs.update({'curve' : simulated_curve})
# The fit is performed and it returns the properties packed together
solutions = fitting_package(**fit_kwargs)
# There are more properties so this is a simplification
property_1, property_2 = solutions
aux_dict = {'property_1' : property_1, 'property_2' : property_2}
for key, value in aux_dict.items():
            results[key].append(value)
for _ in range(iterations):
fit(best_fit_curve, signal_to_noise, fit_kwargs, results)
return results
</code></pre>
<p>With multiprocessing</p>
<pre><code>def estimate_errors(best_fit_curve, signal_to_noise, fit_kwargs, iterations=100):
    def fit(best_fit_curve, signal_to_noise, fit_kwargs, queue):
        results = queue.get()
        noise = best_fit_curve/signal_to_noise
        simulated_curve = np.random.normal(best_fit_curve, noise)
fit_kwargs.update({'curve' : simulated_curve})
solutions = fitting_package(**fit_kwargs)
property_1, property_2 = solutions
aux_dict = {'property_1' : property_1, 'property_2' : property_2}
for key, value in aux_dict.items():
            results[key].append(value)
queue.put(results)
process_list = []
queue = multiprocessing.Queue()
queue.put(defaultdict(list))
for _ in range(iterations):
process = multiprocessing.Process(target=fit, args=(best_fit_curve, signal_to_noise, fit_kwargs, queue))
process.start()
process_list.append(process)
for p in process_list:
p.join()
results = queue.get()
return results
</code></pre>
<p>I thought using multiprocessing would save time, but it actually takes more than double than the other way to do it. Why is this? Is there anyway I can make it faster with multiprocessing?</p>
|
<blockquote>
<p>I thought using multiprocessing would save time, but it actually takes more than double than the other way to do it. Why is this?</p>
</blockquote>
<p>Starting a process takes a long time (at least in computer terms). It also uses a lot of memory.</p>
<p>In your code, you are starting 100 separate Python interpreters in 100 separate OS processes. That takes a really long time, so unless each process runs a very long time, the time it takes to start the process is going to dominate the time it actually does useful work.</p>
<p>In addition to that, unless you actually have 100 un-used CPU cores, those 100 processes will just spend most of their time waiting for each other to finish. Even worse, since they all have the same priority, the OS will try to give each of them a fair amount of time, so it will run them for a bit of time, then suspend them, run others for a bit of time, suspend them, etc. All this scheduling <em>also</em> takes time.</p>
<p>Having more parallel workloads than parallel resources <em>cannot</em> speed up your program, since they have to wait to be executed one-after-another anyway.</p>
<p>Parallelism will only speed up your program if the time for the parallel tasks is not dominated by the time of creating, managing, scheduling, and re-joining the parallel tasks.</p>
|
python-3.x|python-multiprocessing
| 1 |
1,902,142 | 70,579,083 |
What happens if I set the pool_connections parameter to 0 for request sessions?
|
<pre><code>s = requests.Session()
s.mount('https://', HTTPAdapter(pool_connections=1))
s.get('https://www.baidu.com')
</code></pre>
<ul>
<li>In the above code what will happen if I set the pool_connections to 0 instead of 1?</li>
<li>Will it work normally like requests.get() method?</li>
<li>Also if it works like requests.get() then can I mount pool_connections=0 for some specific domain prefixes?</li>
</ul>
|
<p>Found the answer: upon setting <code>pool_connections=0</code>, the request fails with the following error:</p>
<p><em>requests.exceptions.ConnectionError: HTTPSConnectionPool(host='baidu.com', port=443): Pool is closed.</em></p>
|
python|http|session|python-requests|connection-pooling
| 0 |
1,902,143 | 66,606,001 |
Error: Tensorflow preprocessing layers not converting to Tensorflow lite
|
<p>Using the example at
<a href="https://www.tensorflow.org/tutorials/structured_data/preprocessing_layers" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/structured_data/preprocessing_layers</a></p>
<p>I created a model with my own data. I want to save it in Tensorflow lite format. I am saving as SavedModel, but while converting, I encountered many error codes. The last error code I encountered;</p>
<pre><code>WARNING:tensorflow:AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f4f61cd0560> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: closure mismatch, requested ('signature_function', 'signature_key'), but source function had ()
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f4f61cd0560> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: closure mismatch, requested ('signature_function', 'signature_key'), but source function had ()
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
INFO:tensorflow:Assets written to: /tmp/test_saved_model/assets
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
212 model body, the input/output will be quantized as well.
--> 213 inference_type: Data type for the activations. The default value is int8.
214 enable_numeric_verify: Experimental. Subject to change. Bool indicating
4 frames
Exception: <unknown>:0: error: loc("integer_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc("string_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(callsite(callsite("model/string_lookup_1/string_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/integer_lookup_1/integer_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.AddV2 {device = ""}
tf.DenseBincount {T = f32, Tidx = i64, binary_output = true, device = ""}
tf.Mul {device = ""}Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
tf.LookupTableFindV2 {device = "/job:localhost/replica:0/task:0/device:CPU:0"}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "table_704", use_node_name_sharing = false, value_dtype = i64}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "table_615", use_node_name_sharing = false, value_dtype = i64}
During handling of the above exception, another exception occurred:
ConverterError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
214 enable_numeric_verify: Experimental. Subject to change. Bool indicating
215 whether to add NumericVerify ops into the debug mode quantized model.
--> 216
217 Returns:
218 Quantized model in serialized form (e.g. a TFLITE model) with floating-point
ConverterError: <unknown>:0: error: loc("integer_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc("string_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(callsite(callsite("model/string_lookup_1/string_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/integer_lookup_1/integer_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.AddV2 {device = ""}
tf.DenseBincount {T = f32, Tidx = i64, binary_output = true, device = ""}
tf.Mul {device = ""}Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
tf.LookupTableFindV2 {device = "/job:localhost/replica:0/task:0/device:CPU:0"}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "table_704", use_node_name_sharing = false, value_dtype = i64}
tf.MutableHashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "table_615", use_node_name_sharing = false, value_dtype = i64}
</code></pre>
<p>Code:</p>
<pre><code>
# Save the model into temp directory
export_dir = "/tmp/test_saved_model"
tf.saved_model.save(model, export_dir)
# Convert the model into TF Lite.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()
#save model
tflite_model_files = pathlib.Path('/tmp/save_model_tflite.tflite')
tflite_model_files.write_bytes(tflite_model)
</code></pre>
<p>What is the cause of this error? My goal is to embed this model in an app with React Native. Thank you.</p>
|
<p>Looking at your trace, it seems like you have some HashTable ops. You need to set <code>converter.allow_custom_ops = True</code> in order to convert this model.</p>
<pre><code>export_dir = "/content/test_saved_model"
tf.saved_model.save(model, export_dir)
# Convert the model into TF Lite.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.allow_custom_ops = True
tflite_model = converter.convert()
#save model
tflite_model_files = pathlib.Path('/content/save_model_tflite.tflite')
tflite_model_files.write_bytes(tflite_model)
</code></pre>
|
tensorflow|tensorflow-lite|keras-layer|converters|data-preprocessing
| 2 |
1,902,144 | 63,786,514 |
How double equal == syntax work in SQLAlchemy
|
<p>How does the following code work?</p>
<pre class="lang-py prettyprint-override"><code>result = session.query(Customers).filter(Customers.id == 2)
</code></pre>
<p>Based on my non-solid Python knowledge, isn't <code>Customers.id == 2</code> invalid syntax to use in a function call? Or will such a <code>foo == bar</code> expression be evaluated to a boolean first, and then <code>.filter(True/False)</code> be called?</p>
|
<p>Re your statement:</p>
<blockquote>
<p>Or such a <code>foo == bar</code> syntax will be eval to a boolean first</p>
</blockquote>
<p>You can <em>override</em> the equality operator by using the <code>__eq__</code> dunder method (and other operators with equally suitably-named methods).</p>
<hr />
<p>For example, the following code shows how to confuse anyone that uses the class, unless they're quantum physicists well versed in the fact that things can be both dead and alive at the same time:</p>
<pre><code>class Confusing(object):
def __eq__(self, other): return True
def __ne__(self, other): return True
deadCat = Confusing()
liveCat = Confusing()
if (deadCat == liveCat and deadCat != liveCat):
print("Superposition of meow and deathly silence")
</code></pre>
<hr />
<p>In your case, <code>Customers.id</code> is a <code>Column</code> type. I haven't looked at the specific SqlAlchemy code, but <code>Column.__eq__()</code> is almost <em>certainly</em> going to be returning some filtering object, rather than a simple <code>True/False</code> value, which will then be passed to the <code>filter()</code> method of the thing returned from <code>query()</code>.</p>
<p>Here's a greatly simplified variant of that concept, when you have a key/value collection that you can add to, and the result of an equality check is a list of all the keys that have the requested value:</p>
<pre><code>class KeyValCollection(object):
def __init__(self):
self._keyVals = []
def add(self, key, val):
self._keyVals += [(key, val)]
def __eq__(self, compVal):
return [item[0] for item in self._keyVals if item[1] == compVal]
x = KeyValCollection()
x.add("pax", 42)
x.add("arthur", 43)
x.add("bill", 42)
x.add("carl", 44)
x.add("david", 42)
print(x == 42)
</code></pre>
<p>As expected, the output of that is neither <code>True</code> nor <code>False</code>:</p>
<pre><code>['pax', 'bill', 'david']
</code></pre>
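<p>To make the analogy closer to SQLAlchemy itself: <code>Column.__eq__</code> returns an expression object that merely <em>records</em> the comparison, and <code>filter()</code> interprets it later. A minimal sketch (the class names here are illustrative, not SQLAlchemy's real internals):</p>

```python
class BinaryExpression:
    """Records a comparison instead of evaluating it."""
    def __init__(self, left, op, right):
        self.left, self.op, self.right = left, op, right

    def __repr__(self):
        return f"{self.left.name} {self.op} {self.right!r}"


class Column:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # Not True/False: build an object describing "this column == other"
        return BinaryExpression(self, "=", other)


id_col = Column("id")
expr = (id_col == 2)     # a BinaryExpression, not a bool
print(repr(expr))        # id = 2
```

<p>So <code>filter(Customers.id == 2)</code> receives a description of the comparison, which SQLAlchemy later compiles into the SQL <code>WHERE</code> clause.</p>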
|
python|sqlalchemy
| 5 |
1,902,145 | 68,575,132 |
tensorflow-gpu can't execute certain cells as t = tf.Variable(5), but detects my gpu
|
<p>Here are my specs :</p>
<p>Ubuntu 20.04</p>
<p>tried tensorflow-gpu version 2.0 to 2.5</p>
<p>cuda and cudnn downloaded with conda : cuda 10.1, cudnn 7.6.5</p>
<p>I followed this tutorial <a href="https://www.youtube.com/watch?v=tPq6NIboLSc" rel="nofollow noreferrer">https://www.youtube.com/watch?v=tPq6NIboLSc</a> (it lasts 5 minutes, and some comments say it works in the past month) step by step, even try the solutions in the comments :</p>
<pre><code>conda create -n tfgpu python=3.7
conda activate tfgpu
conda install tensorflow-gpu=2.1
pip uninstall tensorflow
pip uninstall tensorflow-estimator
pip uninstall tensorboard
pip uninstall tensorboard-plugin-wit
pip install tensorflow==2.3
pip check
</code></pre>
<p>to downgrade to many versions, I have tried all the possible combinations and versions, by rebuilding environments from scratch.</p>
<p>In some versions, tf detects my gpu :</p>
<pre><code>print(device_lib.list_local_devices())
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 14590583484823824053
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 7716553310653404229
physical_device_desc: "device: XLA_CPU device"
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 1505951744
locality {
bus_id: 1
links {
}
}
incarnation: 3827512444124672980
physical_device_desc: "device: 0, name: GeForce 840M, pci bus id: 0000:04:00.0, compute capability: 5.0"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 11245957731175040440
physical_device_desc: "device: XLA_GPU device"
]
tf.config.list_physical_devices()
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
PhysicalDevice(name='/physical_device:XLA_CPU:0', device_type='XLA_CPU'),
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'),
PhysicalDevice(name='/physical_device:XLA_GPU:0', device_type='XLA_GPU')]
tf.config.list_logical_devices()
[LogicalDevice(name='/device:CPU:0', device_type='CPU'),
LogicalDevice(name='/device:XLA_CPU:0', device_type='XLA_CPU'),
LogicalDevice(name='/device:GPU:0', device_type='GPU'),
LogicalDevice(name='/device:XLA_GPU:0', device_type='XLA_GPU')]
</code></pre>
<p>However, when it detects my GPU, there are some cells it doesn't execute: a star '*' stays in the brackets '[]' at the left of the cell and lasts until I restart the kernel (more than 10 minutes, which is not normal).
For instance when I run <code>t = tf.Variable(5)</code>, or when I try to train a model.</p>
<p>I am not used to asking without searching; this is my first question here, because I have spent the last weekend looking for a solution, and I need to find it quickly since I am in an internship.</p>
<p>Also, I used the same tutorial on windows 10 and it had worked, but now I am on another laptop from my work, and I don't have my previous one.</p>
<p>If someone who knows how it actually works can spend 5 minutes to follow the tutorial and explain to me what to do to solve it, it would be wonderful.</p>
|
<p>My previous solution was deleted; I am sorry, I am new to Stack Overflow.</p>
<p>I got a solution using Docker to create an 'image' and then a 'container' to run TensorFlow 2.4.0 on my GPU. A friend of mine succeeded in installing it on my Ubuntu 20.04 OS.</p>
<p>I'll leave his very useful website here in case anyone gets the same problem as me; everything you need is indicated: <a href="https://dinhanhthi.com/docker/" rel="nofollow noreferrer">https://dinhanhthi.com/docker/</a></p>
<p>All you need to do is copy/ paste the commands one by one in your terminal:</p>
<p>Firstly the third paragraph which is 'installation'</p>
<p>Then the sixth to check whether everything has gone right</p>
<p>Then build an 'image' with the ninth and a 'container' with the tenth.
An image is kind of like a virtual machine, like when you create a virtual Linux distribution from Windows.
A container is more like an Anaconda environment you can customize, so that you don't have to recreate an image from scratch every time (an image is about 7 GB).</p>
<p>After that go to <a href="https://dinhanhthi.com/docker-gpu/#install-nvidia-docker2" rel="nofollow noreferrer">https://dinhanhthi.com/docker-gpu/#install-nvidia-docker2</a> and follow the 3rd, 4th and 5th steps to install docker-gpu, check the installation and install nvidia-docker2.</p>
<p>Once everything is done, all you have to do when you start your computer is:</p>
<pre><code># Start your computer
# Check if docker is running
docker ps
# if something like below comes out, it's running:
# CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
# Run the container
docker start docker_ai
# If you wanna check currently running containers
docker ps
# Stop container
docker stop docker_ai
# Enter the running container (to install packages, for example)
docker exec -it <container_name> bash
# Inside this, if you wanna install some package, use pip
pip install <package_name>
</code></pre>
|
python|tensorflow|gpu
| 0 |
1,902,146 | 63,624,660 |
Response from Python requests to a file(Data Format Issue/Question)
|
<p>This is the python script, which im using to call an api and get the data into a file.</p>
<pre><code>import requests
url = "XXXX"
payload = {}
headers= {}
response = requests.request("GET", url, headers=headers, data = payload)
response.raise_for_status()
file = open("/u/users/xxxxx/Offers.csv", "w")
file.write(response.text.encode('utf8'))
file.close()
</code></pre>
<p>The data is coming to the file in below format</p>
<pre><code>[{"Id":"XXXXXX","Name":"XXXX"}]
</code></pre>
<p>But I need the data in the below format</p>
<pre><code>Id Name
XXX XXXXX
</code></pre>
<p>I want to use this data to load again into a table. Also, I want to implement error handling and set up an e-mail notification for when the job completes or fails. Can someone help me achieve this?</p>
|
<p>You can use pandas to achieve this:</p>
<pre><code>import pandas as pd

x = [{"Id": "XXXXXX", "Name": "XXXX"}]
df = pd.DataFrame(x)

# to_csv returns None when given a path; pass sep='\t' for tab-separated output
df.to_csv('mycsv.csv', index=False)
</code></pre>
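<p>If pulling in pandas just for this is undesirable, the standard-library <code>csv</code> module gives the same result. A sketch, assuming the response body parses to a list of dicts (the field names here match the example; the buffer stands in for the output file):</p>

```python
import csv
import io

# Hypothetical parsed response, e.g. data = response.json()
data = [{"Id": "XXXXXX", "Name": "XXXX"}]

buf = io.StringIO()  # could be open("Offers.csv", "w", newline="") instead
writer = csv.DictWriter(buf, fieldnames=["Id", "Name"])
writer.writeheader()      # Id,Name header row
writer.writerows(data)    # one row per record

csv_text = buf.getvalue()
print(csv_text)
```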
|
python|api
| 1 |
1,902,147 | 63,498,627 |
Incorrect freezing of weights maskrcnn Tensorflow 2 in object_detection_API
|
<p>I am training the maskrcnn inception v2 model on the Tensorflow version for further work with OpenVino. After training the model, I freeze the model using a script in object_detection_API directory:</p>
<p><strong>python exporter_main_v2.py \
--trained_checkpoint_dir training <br />
--output_directory inference_graph <br />
--pipeline_config_path training/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config</strong></p>
<p>After this script, I get the saved model and pipeline files, which should be used in OpenVInO in the future
The following error occurs when uploading the received files to model optimizer:</p>
<p>Model Optimizer version:
2020-08-20 11:37:05.425293: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:</p>
<ol>
<li>frozen graph in text or binary format</li>
<li>inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format</li>
<li>meta graph</li>
</ol>
<p>Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ (<a href="https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html" rel="nofollow noreferrer">https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html</a>), question #43.</p>
<p>I trained the model by following the example from the linked article, using my own dataset: <a href="https://gilberttanner.com/blog/train-a-mask-r-cnn-model-with-the-tensorflow-object-detection-api" rel="nofollow noreferrer">https://gilberttanner.com/blog/train-a-mask-r-cnn-model-with-the-tensorflow-object-detection-api</a></p>
<p>On gpu, the model starts and works, but I need to get the converted model for OpenVINO</p>
|
<p>Run the <code>mo_tf.py</code> script with a path to the SavedModel directory:</p>
<pre><code>python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY></code></pre>
|
tensorflow|openvino
| 0 |
1,902,148 | 65,948,188 |
Pandas: can only use .dt accessor with datetimelike values
|
<p>I have a Pandas dataframe that looks as follows:</p>
<pre><code>key system impl_date
1 madison 2021-01-27T13:16:18.000-0600
2 madison 2021-01-27T13:15:04.000-0600
3 lexington 2021-01-27T13:08:27.000-0600
4 park 2021-01-27T13:05:42.000-0600
</code></pre>
<p>The <code>impl_date</code> column is (I think!) a string because earlier in the script I apply the following:</p>
<pre><code>df = df.applymap(str)
</code></pre>
<p>I want to take the <code>impl_date</code> column and strip the time element, resulting in a date that takes the following form:</p>
<pre><code>yyyy-mm-dd
</code></pre>
<p>To do so, I use the following:</p>
<pre><code>df['impl_date'] = pd.to_datetime(df['impl_date']).dt.strftime('%Y-%m-%d')
</code></pre>
<p>But, this fails with the following error message:</p>
<pre><code>AttributeError: Can only use .dt accessor with datetimelike values
</code></pre>
<p>So, I tried the following:</p>
<pre><code>df['impl_date'] = pd.to_datetime(df['impl_date'], errors='coerce').dt.strftime('%Y-%m-%d')
</code></pre>
<p>This fails with the same error message.</p>
<p><code>df.dtypes</code> gives the following:</p>
<pre><code>key object
system object
impl_date object
dtype: object
</code></pre>
<p><code>type(df)</code> gives:</p>
<pre><code>pandas.core.series.Series
</code></pre>
<p>And, <code>df.info()</code> gives:</p>
<pre><code># Column Non-Null Count Dtype
- ------ -------------- -----
0 key 6453 non-null object
1 system 6453 non-null object
2 impl_date 6453 non-null object
</code></pre>
<p>Given that (I think!) the <code>impl_date</code> is represented as a string, what's the best way to transform this column to a <code>yyyy-mm-dd</code> format?</p>
<p>Thanks!</p>
|
<p>Your data may contain different timezones, like this:</p>
<pre><code>key system impl_date
1 madison 2021-01-27T13:16:18.000-0600
2 madison 2021-01-27T13:15:04.000-0600
3 lexington 2021-01-27T13:08:27.000-0600
5 park 2021-01-27T13:05:42.000-0500 # here
</code></pre>
<p>One option is to pass <code>utc=True</code> to <code>to_datetime</code>:</p>
<pre><code>pd.to_datetime(df['impl_date'], errors='coerce', utc=True).dt.strftime('%Y-%m-%d')
</code></pre>
<p>And you get:</p>
<pre><code>0 2021-01-27
1 2021-01-27
2 2021-01-27
3 2021-01-27
Name: impl_date, dtype: object
</code></pre>
|
python|pandas
| 4 |
1,902,149 | 66,117,169 |
Read 3 Column csv into a nested Dictonary
|
<p>I have been following a few examples of reading a CSV file. I have /some/path/file.csv as:</p>
<pre><code>type,id,password
db,db_admin,admin123
db,db_user,user123
mw,mw_admin,admin456
mw,mw_user,user456
fe,fe_admin,admin789
fe,fe_user,user789
</code></pre>
<p>Code:</p>
<pre><code>import csv
from collections import defaultdict
types = defaultdict(list)
with open(r'/some/path/file.csv', 'r') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
types[row['type']].append({row['id'], row['password']})
print(types)
</code></pre>
<p>Output:</p>
<pre><code>{'db': [{'db_admin': 'admin123', 'db_user': 'user123'}],
'mw': [{'mw_admin': 'admin456', 'mw_user': 'user456'}],
'fe': [{'fe_admin': 'admin789', 'fe_user': 'user789'}]}
</code></pre>
<p>I wish to have it this way to pass it forward to a method. How can I iterate these in a loop and pass 'db' and {'db_admin': 'admin123', 'db_user': 'user123'} into a method?</p>
<pre><code>{'db': {'db_admin': 'admin123', 'db_user': 'user123'},
'mw': {'mw_admin': 'admin456', 'mw_user': 'user456'},
'fe': {'fe_admin': 'admin789', 'fe_user': 'user789'}}
</code></pre>
|
<p>You can use <code>setdefault</code> and create a shared entry for all the rows with the same <code>type</code> in the csv file:</p>
<pre class="lang-py prettyprint-override"><code>import csv
types = {}
with open('file.csv', 'r') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
types.setdefault(row['type'], {})[row['id']] = row['password']
print(types)
</code></pre>
<p>Output</p>
<pre class="lang-sh prettyprint-override"><code>{'db': {'db_admin': 'admin123', 'db_user': 'user123'}, 'mw': {'mw_admin': 'admin456', 'mw_user': 'user456'}, 'fe': {'fe_admin': 'admin789', 'fe_user': 'user789'}}
</code></pre>
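<p>To then iterate the result and pass each type together with its credentials into a method, <code>dict.items()</code> does the job (<code>process</code> below is a placeholder for your own method, and <code>types</code> is hardcoded here to the dict produced above):</p>

```python
def process(component, credentials):
    # Placeholder for the real method
    print(component, credentials)

types = {'db': {'db_admin': 'admin123', 'db_user': 'user123'},
         'mw': {'mw_admin': 'admin456', 'mw_user': 'user456'},
         'fe': {'fe_admin': 'admin789', 'fe_user': 'user789'}}

for component, credentials in types.items():
    process(component, credentials)   # e.g. 'db', {'db_admin': 'admin123', ...}
```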
|
python-3.x|csv
| 2 |
1,902,150 | 65,989,510 |
How to merge back dataframe from a pivot operation?
|
<p>pivot is a elegant operation in pandas.</p>
<p>but is there any methods can merge pivot dataframe back?</p>
<p>let me list an example:</p>
<pre><code>In [10]: df = pd.DataFrame([['a','2019', 1], ['b', '2019', 2], ['c', '2019',2], ['d','2019',3], ['e', '2009',1], ['f', '2012', 3]])
In [11]: df
Out[11]:
0 1 2
0 a 2019 1
1 b 2019 2
2 c 2019 2
3 d 2019 3
4 e 2009 1
5 f 2012 3
In [12]: df.columns = ['name', 'year', 'value1']
In [13]: df['value2'] = 4
In [14]: df
Out[14]:
name year value1 value2
0 a 2019 1 4
1 b 2019 2 4
2 c 2019 2 4
3 d 2019 3 4
4 e 2009 1 4
5 f 2012 3 4
</code></pre>
<p>here i created a dataframe, then i use pivot function:</p>
<pre><code>In [15]: a = df.pivot('name', 'year', 'value1')
Out[15]:
year 2009 2012 2019
name
a NaN NaN 1.0
b NaN NaN 2.0
c NaN NaN 2.0
d NaN NaN 3.0
e 1.0 NaN NaN
f NaN 3.0 NaN
In [16]: b = df.pivot('name', 'year', 'value2')
Out[16]:
year 2009 2012 2019
name
a NaN NaN 4.0
b NaN NaN 4.0
c NaN NaN 4.0
d NaN NaN 4.0
e 4.0 NaN NaN
f NaN 4.0 NaN
</code></pre>
<p>as i expected, i have two good dataframe which only contains value1 and value2.</p>
<p>my question is: how can i get <code>df</code> back from <code>a</code> and <code>b</code>?</p>
<p>is there any elegant methods?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>DataFrame.unstack</code></a>:</p>
<pre><code>df = pd.concat([a.stack(), b.stack()], keys=('value1','value2')).unstack(0).reset_index()
print (df)
name year value1 value2
0 a 2019 1.0 4.0
1 b 2019 2.0 4.0
2 c 2019 2.0 4.0
3 d 2019 3.0 4.0
4 e 2009 1.0 4.0
5 f 2012 3.0 4.0
</code></pre>
|
python|pandas|pivot
| 4 |
1,902,151 | 66,829,412 |
pygame : while runs once and stops
|
<p>I want to make a program that draws six numbers.
When the A key is pressed, it runs once and stops. What should I do if I want it to run many times?</p>
<pre><code>numT = []
team = random.randint(1, 6)
if event.type == pygame.KEYDOWN:
if event.key == event.key == ord('a'):
for i in range(6):
while team in numT:
team = random.randint(1, 6)
numT.append(team)
num1 = font.render(str(numT), True, (255, 255, 255))
screen.blit(num1, (200, 117))
</code></pre>
<p>The result that I want(on the pygame screen) : example</p>
<pre><code>[1, 3, 5, 2, 6, 4]
</code></pre>
|
<p>If you want to generate a new sequence, you have to clear the old one with <code>numT = []</code> or <code>numT.clear()</code>.<br />
However, the code can be simplified using <a href="https://docs.python.org/3/library/random.html" rel="nofollow noreferrer"><code>random.shuffle()</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>if event.type == pygame.KEYDOWN:
if event.key == pygame.K_a:
numT = list(range(1, 7))
random.shuffle(numT)
num1 = font.render(str(numT), True, (255, 255, 255))
</code></pre>
|
python|pygame
| 1 |
1,902,152 | 50,719,086 |
CPython: Dynamic module does not define module export function error
|
<p>I just compiled my Python wrapper for C++ classes successfully. However, I am getting the following message when I try to load my module into Python (through <code>import cell</code>):</p>
<pre><code>ImportError: dynamic module does not define module export function (PyInit_cell)
</code></pre>
<p>I checked that the system is using Python 3 in all cases, so this is not a Python version problem.<br>
Below is my <strong>setup.py</strong> file:</p>
<pre><code>from distutils.core import setup, Extension
from Cython.Build import cythonize
setup(ext_modules = cythonize(Extension(
"cell",
sources=["cell.pyx", "cell.cc"],
language="c++",
extra_compile_args=["-std=c++11"],
)))
</code></pre>
<p>Below is the dump for the generated <strong>.so</strong> file:</p>
<pre><code>0000000000201020 B __bss_start
0000000000201020 b completed.7594
w __cxa_finalize@@GLIBC_2.2.5
0000000000000530 t deregister_tm_clones
00000000000005c0 t __do_global_dtors_aux
0000000000200de8 t __do_global_dtors_aux_fini_array_entry
0000000000201018 d __dso_handle
0000000000200df8 d _DYNAMIC
0000000000201020 D _edata
0000000000201028 B _end
0000000000000630 T _fini
0000000000000600 t frame_dummy
0000000000200de0 t __frame_dummy_init_array_entry
0000000000000640 r __FRAME_END__
0000000000201000 d _GLOBAL_OFFSET_TABLE_
w __gmon_start__
00000000000004e8 T _init
w _ITM_deregisterTMCloneTable
w _ITM_registerTMCloneTable
0000000000200df0 d __JCR_END__
0000000000200df0 d __JCR_LIST__
w _Jv_RegisterClasses
0000000000000570 t register_tm_clones
0000000000201020 d __TMC_END__
</code></pre>
<p>I really don't understand why the module is not loaded into Python, because there was no error during the build process.</p>
<p>Any help would be appreciated!</p>
|
<p>You should not call your extension/module <code>cell.pyx</code>, call it differently - for example <code>cycell.pyx</code>.</p>
<p>Why? The following steps are taken while extension is built</p>
<ol>
<li>Cython generates file <code>cell.cpp</code> out of <code>cell.pyx</code>.</li>
<li>Compiler compiles <code>cell.cpp</code> to the object file <code>cell.o</code>.</li>
<li>Compiler compiles <code>cell.cc</code> to the object file <code>cell.o</code> and overwrites the object file created from <code>cell.pyx</code>.</li>
<li>Linker links both <code>cell.o</code> files (but it is in reality only one) - in the result there is nothing what was defined in <code>cell.pyx</code>/<code>cell.cpp</code> in particular <code>PyInit_cell</code>.</li>
</ol>
<p>By renaming the Cython-file you avoid that the object-file is overwritten.</p>
<p>Clearly, another option would be to rename your c++-file.</p>
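<p>With the Cython file renamed as suggested, the <code>setup.py</code> from the question would become (a sketch; only the module and source names change):</p>

```python
from distutils.core import setup, Extension
from Cython.Build import cythonize

setup(ext_modules=cythonize(Extension(
    "cycell",                            # module name now matches cycell.pyx
    sources=["cycell.pyx", "cell.cc"],   # distinct basenames -> distinct object files
    language="c++",
    extra_compile_args=["-std=c++11"],
)))
```

<p>The import in Python then becomes <code>import cycell</code>.</p>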
|
python|cython|cpython
| 3 |
1,902,153 | 58,137,537 |
How to get the frequency counts of columns in pandas?
|
<p>I was wondering how to get the frequency count of items in a pandas DataFrame, as in the following example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'A': [1,1,2,3,5,2],
'B': [10,10,10,300,400,500],
'C': ['p','p','q','q','q','q']})
print(df)
A B C
0 1 10 p
1 1 10 p
2 2 10 q
3 3 300 q
4 5 400 q
5 2 500 q
</code></pre>
<h1>Required output</h1>
<pre><code> A B C
(1,2) (10,3) ('p', 2)
(2,2) (300,1) ('q', 4)
(3,1) (400,1)
(5,1) (500,1)
</code></pre>
|
<p>You can also try:</p>
<pre><code>s=df.stack().groupby(df.stack()).transform('count').unstack()
final=pd.concat([df,s])
final.groupby(final.index).agg(tuple)
</code></pre>
<hr>
<pre><code> A B C
0 (1, 2) (10, 3) (p, 2)
1 (1, 2) (10, 3) (p, 2)
2 (2, 2) (10, 3) (q, 4)
3 (3, 1) (300, 1) (q, 4)
4 (5, 1) (400, 1) (q, 4)
5 (2, 2) (500, 1) (q, 4)
</code></pre>
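<p>For the same example, an alternative that avoids stacking: map each column onto its own <code>value_counts()</code> and zip values with counts. This is a sketch producing the same per-cell <code>(value, count)</code> tuples, column by column:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 2, 3, 5, 2],
                   'B': [10, 10, 10, 300, 400, 500],
                   'C': ['p', 'p', 'q', 'q', 'q', 'q']})

out = df.copy()
for col in df.columns:
    counts = df[col].map(df[col].value_counts())  # frequency of each value
    out[col] = list(zip(df[col], counts))

print(out)
```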
|
python|pandas
| 1 |
1,902,154 | 69,551,000 |
Difficult case with TypeError: only size-1 arrays can be converted to Python scalars
|
<p>I am trying to use <strong>plotly</strong> to graph a huge function which is in turn a sum of other functions - I want to append the smaller functions to arrays and add them up to the final function. Process with simple functions works, e.g.:</p>
<pre><code>import math
import numpy as np
import plotly.graph_objects as go

pi = math.pi
X, Y, Z = np.mgrid[-2:2:100j, -2:2:100j, -2:2:100j]
x = [pi*2, 4]
values = []
values1 = pi*X*X + Y*Y
values2 = Z*Z
values.append(values1*x[0])
values.append(values2*x[1])
valuestot = values[0] + values[1]
fig = go.Figure(data=go.Isosurface(
x=X.flatten(),
y=Y.flatten(),
z=Z.flatten(),
value=valuestot.flatten(),
isomin=-0.1,
isomax=0.1
))
fig.show()
</code></pre>
<p>However, problem occurs when I use a longer function (respective to values1), for example:</p>
<pre><code>n1s2 = nns21 * (((3 * nas21 / pi) ** (1 / 4)) * math.exp( -nas21 * ((X + cXN1) ** 3 + (Y + cYN1) ** 2 + (Z + cZN1) ** 2))) + nns22 * ( ((2 * nas22 / pi) ** (1 / 4)) * math.exp( -nas22 * ((X + cXN1) ** 2 + (Y + cYN1) ** 3 + (Z + cZN1) ** 2)))
</code></pre>
<p>gives the <strong>TypeError: only size-1 arrays can be converted to Python scalars</strong> error for the line of the function in the code.
I tried all the solutions with np.vectorize() given on different forums, yet that does not help, as python presumably reads the long line as a list of items. How could I make the line read as a single function which is recognized by python? Thank you for your help.</p>
|
<p>Succeeded - got a suggestion from a brilliant CS student to change <code>math.exp()</code> to <code>np.exp()</code>. This works because <code>np.exp</code> is applied element-wise to the mgrid arrays (X, Y, Z), while <code>math.exp</code> only accepts a single scalar.</p>
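<p>A small demonstration of the difference: <code>np.exp</code> is a ufunc that operates on whole arrays, while <code>math.exp</code> raises exactly the TypeError from the question when given one:</p>

```python
import math
import numpy as np

x = np.array([0.0, 1.0, 2.0])

elementwise = np.exp(x)   # works: exp of each element
print(elementwise)

try:
    math.exp(x)           # fails: math.exp needs a single number
except TypeError as err:
    print(err)            # the "size-1 arrays" TypeError from the question
```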
|
python|arrays|function|plotly|scalar
| 0 |
1,902,155 | 54,245,381 |
Gremlin Python - Connection to Multi-Node JanusGraph Cluster
|
<p>We are looking to build a multi-node JanusGraph cluster:</p>
<p>10.74.19.32 (e.g. IP)</p>
<p>10.74.19.33 (e.g. IP)</p>
<p>Our application is written in python and uses the gremlin python driver</p>
<pre><code>session = client.Client('ws://10.74.19.32:8182/gremlin', 'g',
message_serializer=GraphSONSerializersV3d0(),
username=remote_graph_user,
password=remote_graph_password, pool_size=8)
</code></pre>
<p>We could not find examples of how to connect round-robin between the two JanusGraph servers 10.74.19.32 and 10.74.19.33.</p>
<p>Should we put this behind a load balancer URL so that, once the connection is opened, the Python app stays with the same server until the connection is closed or interrupted?</p>
<p><strong>Should we do</strong></p>
<pre><code>session = client.Client('ws://vanity_url:8182/gremlin', 'g',
message_serializer=GraphSONSerializersV3d0(),
username=remote_graph_user,
password=remote_graph_password, pool_size=8)
</code></pre>
|
<p>You're already on the right track. You'll want to set up a load balancer in front of the Gremlin servers. This isn't something that gremlin-server will handle.</p>
|
python|gremlin|janusgraph
| 3 |
1,902,156 | 58,254,502 |
How to get to another page after saving form by CreateView
|
<p>I want to get to the order_list page after adding a new order.</p>
<p>I tried both the reverse and reverse_lazy methods, and also setting the page address in success_url directly, like <code>success_url = 'orders/order_list'</code> or <code>success_url = 'order_list'</code>, but it always returns an HTTP 405 error.</p>
<p>views.py</p>
<pre><code>from django.shortcuts import render
from django.urls import reverse_lazy
from django.views import View
from django.views.generic import ListView, DetailView, CreateView
from django.http import HttpResponse, HttpResponseRedirect
from django.contrib.auth.mixins import PermissionRequiredMixin, LoginRequiredMixin
from .models import Order
from .forms import CreateOrder
from django.contrib.auth.decorators import login_required
# Create your views here.
class OrderCreateView(LoginRequiredMixin, PermissionRequiredMixin, CreateView):
login_url = '/login_required'
permission_required = 'orders.add-order'
model = Order
success_url = reverse_lazy('orders:order_list')
fields = ['airport', 'direction', 'adress', 'client', 'telephone', 'flight_number', 'plane', 'pick_up', 'gate', 'driver']
</code></pre>
<p>urls.py</p>
<pre><code>from django.contrib import admin
from django.urls import path
from django.contrib.auth import views as auth_views
from orders.views import OrderCreateView, OrderListView, AboutView, LoginRequiredView
urlpatterns = [
path('admin/', admin.site.urls),
path('add_order/', OrderCreateView.as_view(template_name="orders/add_order.html"), name="add_order"),
path('order_list/', OrderListView.as_view(), name="order_list"),
path('login/', auth_views.LoginView.as_view(template_name="pages/login.html"), name="login"),
path('logout/', auth_views.LogoutView.as_view(template_name="pages/logout.html"), name="logout"),
path('about/', AboutView.as_view(), name="about"),
path('login_required/', LoginRequiredView.as_view(), name='login_required')
]
</code></pre>
<p>add_order.html</p>
<pre><code>{% extends 'base.html' %}
{% load static %}
{% load crispy_forms_tags %}
{% block content %}
<div class="container" style="width: 40%; height: 80%;">
<div class="page header">
<h1>Add new order</h1>
</div>
<form action="/order_list/" method="post">
{% csrf_token %}
{{ form|crispy }}
<button type="submit" class="btn btn-success">Save order</button>
</form>
</div>
{% endblock %}
</code></pre>
<p>Any ideas what am I doing wrong ?</p>
|
<p>Change:</p>
<pre><code>success_url = reverse_lazy('orders:order_list')
</code></pre>
<p>To:</p>
<pre><code>success_url = reverse_lazy('order_list')
</code></pre>
<hr>
<p>And change:</p>
<pre><code><form action="/order_list/" method="post">
</code></pre>
<p>To:</p>
<pre><code><form action="/add_order/" method="post">
</code></pre>
<p><strong>Note</strong>: You are using hardcoded URL which is not recommended. Use the <a href="https://docs.djangoproject.com/en/2.2/ref/templates/builtins/#url" rel="nofollow noreferrer">url</a> template tag. </p>
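<p>For illustration, assuming the URL name <code>add_order</code> defined in the urls.py above, the hardcoded form action can be written with the <code>url</code> template tag as:</p>

```
<form action="{% url 'add_order' %}" method="post">
```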
|
python|django|django-generic-views
| 1 |
1,902,157 | 41,493,813 |
ValueError on writing lxml text
|
<p>I have the following block to write an xml tag. Sometimes the name is already in the correct form (that is, it won't error), and sometimes it is not</p>
<pre><code>if 'Name' in title_data:
name = etree.SubElement(info, 'Name')
try:
name.text = title_data['Name']
except ValueError:
name.text = title_data['Name'].decode('utf-8')
</code></pre>
<p>Is there a way to simplify this? For example, something along the lines of:</p>
<pre><code>name.text = title_data['Name'] if (**something**) else title_data['Name'].decode('utf-8')
</code></pre>
|
<p>I assume that you want to avoid having to write similar code for every element you want to set. This has the smell of trying to treat the symptom rather than the cause, but if nothing else, you can simply break that out into a helper function:</p>
<pre><code>def assign_text(field, text):
try:
field.text = text
except ValueError:
field.text = text.decode("utf-8")
# ...
if "Name" in title_data:
name = etree.SubElement(info, "Name")
assign_text(name, title_data["Name"] or None)
</code></pre>
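<p>Under Python 3 the same fallback can be folded into one normalizing helper. This is a sketch independent of lxml, assuming the bytes are UTF-8 encoded:</p>

```python
def to_text(value):
    """Return value as str, decoding UTF-8 bytes when needed."""
    if isinstance(value, bytes):
        return value.decode("utf-8")
    return value

# usage: name.text = to_text(title_data["Name"])
print(to_text(b"caf\xc3\xa9"))  # café
print(to_text("plain str"))     # plain str
```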
|
python|xml|unicode|lxml
| 1 |
1,902,158 | 44,724,948 |
TensorFlow - dense vector to one-hot
|
<p>Suppose I have the following tensor:</p>
<pre><code>T = [[0.1, 0.3, 0.7],
[0.2, 0.5, 0.3],
[0.1, 0.1, 0.8]]
</code></pre>
<p>I want to transform this into a one-hot tensor, such that the indexes with the maximum value over dimension 0 get set to 1 and all the other ones get set to zero, like this:</p>
<pre><code>T_onehot = [[0, 0, 1],
[0, 1, 0],
[0, 0, 1]]
</code></pre>
<p>I know there's <code>tf.argmax</code> to get the indices of the largest elements in the tensor, but is there any method which allows me to do what I want to do in one step?</p>
|
<p>I don't know if there's a way to do this in one step, but there's a <code>one_hot</code> function in tensorflow:</p>
<pre><code>import tensorflow as tf
T = tf.constant([[0.1, 0.3, 0.7], [0.2, 0.5, 0.3], [0.1, 0.1, 0.8]])
T_onehot = tf.one_hot(tf.argmax(T, 1), T.shape[1])
tf.InteractiveSession()
print(T_onehot.eval())
# [[ 0. 0. 1.]
# [ 0. 1. 0.]
# [ 0. 0. 1.]]
</code></pre>
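<p>If you just want to sanity-check the expected result without starting a TensorFlow session, the same argmax-then-one-hot construction in plain NumPy is:</p>

```python
import numpy as np

T = np.array([[0.1, 0.3, 0.7],
              [0.2, 0.5, 0.3],
              [0.1, 0.1, 0.8]])
# select rows of the identity matrix using the per-row argmax
T_onehot = np.eye(T.shape[1])[T.argmax(axis=1)]
print(T_onehot)
```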
|
python|tensorflow
| 1 |
1,902,159 | 20,543,360 |
Pandas - Replace Integers with Floats
|
<p>I want to "recode" an integer variable into a float variable in Pandas. But this does not seem to work like I would have expected. Basically I have a 1-6 Scale which I want to assign new values to.</p>
<p>My current approach with an example:</p>
<pre><code>df2 = pd.DataFrame({
'A' : [1,2,3,4,5]
})
df2['B'] = df2['A'].replace([1, 2, 3, 4, 5], [1, 0.85, 0.70, 0.55, 0.40])
print df2
</code></pre>
<p>Result:</p>
<pre><code> A B
0 1 1
1 2 0
2 3 0
3 4 0
4 5 0
</code></pre>
<p>What is a correct way of doing this?</p>
|
<pre><code>>>> df2['A'].astype(float).replace([1, 2, 3, 4, 5], [1, 0.85, 0.70, 0.55, 0.40])
0 1.00
1 0.85
2 0.70
3 0.55
4 0.40
Name: A, dtype: float64
</code></pre>
<p>May be more appropriate way to do this is to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html#pandas.Series.map" rel="nofollow"><code>pandas.Series.map()</code></a>:</p>
<pre><code>>>> df2['A'].map(dict(zip([1, 2, 3, 4, 5], [1, 0.85, 0.70, 0.55, 0.40])))
0 1.00
1 0.85
2 0.70
3 0.55
4 0.40
Name: A, dtype: float64
</code></pre>
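<p>One difference to keep in mind between the two approaches: <code>map</code> turns values missing from the mapping into <code>NaN</code>, while <code>replace</code> leaves them untouched:</p>

```python
import pandas as pd

s = pd.Series([1, 2, 9])        # 9 is not in the mapping
mapping = {1: 1.0, 2: 0.85}

mapped = s.map(mapping)         # 9 becomes NaN
replaced = s.replace(mapping)   # 9 stays 9
```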
|
python|pandas
| 4 |
1,902,160 | 46,249,113 |
Python in Maya: main function calls other function while passing variables?
|
<p>I am an animator who is a beginner at coding...I appreciate your help!</p>
<p>I am writing a Python script to automatically duplicate controller objects in Maya. It creates controllers for each bone in the character's hand.</p>
<p>It worked fine as a single long function, but I want to have a Main Function, "handCtrls()", call functions within it for the various tasks. I also need to pass variables to the sub-functions, because I am iterating on the variable "i".</p>
<p>I get an error on the line which calls the function "nullGrouper(side, boney, i, ctrl)", and the error says that "global variable 'ctrl' is undefined". But it is defined in the "ctrlDup" function, and I used "return ctrl", which I thought would send that variable back to 'handCtrls()'.</p>
<p>here is the code:</p>
<pre><code>import maya.cmds as c
def ctrlDup(side, boney, i, ctrlOrig):
    #Select and duplicate the original Control, and place it at the appropriate boney
c.select(ctrlOrig)
ctrl = c.duplicate(ctrlOrig, n=side + boney + str(i) + '_CTRL')
print(ctrl)
print('ctrlDup passed')
return ctrl
def nullGrouper(side, boney, i, ctrl):
#Create null group at same position as Control
print(ctrl)
print(ctrl[0])
nullGrp = c.group(n=side + boney + str(i) + '_GRP', empty=1, world=1)
print(nullGrp)
c.parentConstraint(ctrl, nullGrp, n='nullGrp_parentConstr_temp')
c.delete('nullGrp_parentConstr_temp')
print('nullGrouper passed')
return ctrl, nullGrp
def handCtrls():
#First select the Controller to be duplicated
sel = c.ls(sl=1)
ctrlOrig = sel[0]
#List sides
sideList = ['L_', 'R_']
#List bones of part
boneyList = ['thumb', 'index', 'middle', 'ring', 'pinky']
#Now, iterate across finger joints, up to 3, group controls, and parent them
for side in sideList:
for boney in boneyList:
for i in range(1, 4, 1):
#Check if boney is thumb, and joint number is 3
if (boney!='thumb') or (i!=3):
ctrlDup(side, boney, i, ctrlOrig)
nullGrouper(side, boney, i, ctrl)
else:
pass
print("It's done.")
handCtrls()
</code></pre>
<p>There are several 'print' commands just to check if the variable is getting passed. Thank you!</p>
|
<p>It is because you are not storing the return value of <code>ctrlDup</code>; assign it to <code>ctrl</code> before passing it on: </p>
<pre><code>...
ctrl = ctrlDup(side, boney, i, ctrlOrig)
nullGrouper(side, boney, i, ctrl)
</code></pre>
|
python|animation|maya
| 0 |
1,902,161 | 53,692,787 |
Define Subgroup in Brian2 (library in python)
|
<p>I tried to test a spiking neural network in Python using Brian2. I received this error:</p>
<pre><code> File "C:\ProgramData\Anaconda3\envs\snn\lib\site-packages\brian2\groups\group.py", line 393, in __getattr__
raise AttributeError('No attribute with name ' + name)
</code></pre>
<p>AttributeError: No attribute with name subgroup</p>
<p>My main problem is in making a subgroup of <code>G</code> (<code>NeuronGroup</code>). The first subgroup is excitatory neurons and the second one is inhibitory neurons.</p>
<pre><code>G = NeuronGroup(4000, model=eqs, threshold=Vt, reset=Vr)
Ge = G.subgroup(3200) # Excitatory neurons
Gi = G.subgroup(800) # Inhibitory neurons
</code></pre>
<p>Can anyone help me to solve this error? Thanks.
The code for this SNN (spiking neural network) is:</p>
<pre><code>import brian2
from brian2 import *
from brian2 import start_scope
taum = 20 * ms # membrane time constant
taue = 5 * ms # excitatory synaptic time constant
taui = 10 * ms # inhibitory synaptic time constant
Vt = -50 * mV # spike threshold
Vr = -60 * mV # reset value
El = -49 * mV # resting potential
we = (60 * 0.27 / 10) * mV # excitatory synaptic weight
wi = (20 * 4.5 / 10) * mV # inhibitory synaptic weight
eqs = Equations('''
dV/dt = (ge-gi-(V-El))/taum : volt
dge/dt = -ge/taue : volt
dgi/dt = -gi/taui : volt
''')
G = NeuronGroup(4000, model=eqs, threshold=Vt, reset=Vr)
Ge = G.subgroup(3200) # Excitatory neurons
Gi = G.subgroup(800) # Inhibitory neurons
Ce = Connection(Ge, G, 'ge', sparseness=0.02, weight=we)
Ci = Connection(Gi, G, 'gi', sparseness=0.02, weight=wi)
M = SpikeMonitor(G)
MV = StateMonitor(G, 'V', record=0)
Mge = StateMonitor(G, 'ge', record=0)
Mgi = StateMonitor(G, 'gi', record=0)
G.V = Vr + (Vt - Vr) * rand(len(G))
run(500 * ms)
subplot(211)
raster_plot(M, title='The CUBA network', newfigure=False)
subplot(223)
plot(MV.times / ms, MV[0] / mV)
xlabel('Time (ms)')
ylabel('V (mV)')
show()
subplot(224)
plot(Mge.times / ms, Mge[0] / mV)
plot(Mgi.times / ms, Mgi[0] / mV)
xlabel('Time (ms)')
ylabel('ge and gi (mV)')
legend(('ge', 'gi'), 'upper right')
show()
</code></pre>
|
<p>In <code>Brian2</code>, subgroups are defined by slicing the parent group.</p>
<p>For example, if we have a <code>NeuronGroup</code> <code>P</code> and want two subgroups of it:</p>
<pre><code>P = NeuronGroup(4000, model=eqs, threshold='v>-20*mV', refractory=3*ms, method='exponential_euler')
Pe = P[:3200]
Pi = P[3200:]
</code></pre>
<p>In my question I used the API of Brian (1), not Brian2, so I received the error.
Be careful not to mix Brian and Brian2 instructions!</p>
|
python|anaconda
| 2 |
1,902,162 | 38,470,838 |
Click button, then scrape data on seemingly static webpage?
|
<p>I'm trying to scrape the player statistics in the <code>Totals</code> table at this link: <a href="http://www.basketball-reference.com/players/j/jordami01.html" rel="nofollow">http://www.basketball-reference.com/players/j/jordami01.html</a>. It's much more difficult to scrape the data as-is when you first appear on that site, so you have the option of clicking 'CSV' right above the table. This format would be much easier to digest.</p>
<p>I'm having trouble with the following:</p>
<pre><code>import urllib2
from bs4 import BeautifulSoup
from selenium import webdriver
player_link = "http://www.basketball-reference.com/players/j/jordami01.html"
browser = webdriver.Firefox()
browser.get(player_link)
elem = browser.find_element_by_xpath("//span[@class='tooltip' and @onlick='table2csv('totals')']")
elem.click()
</code></pre>
<p>When I run this, a Firefox window pops up, but the code never changes the table from its original format to CSV. The CSV table only pops up in the source code after I click CSV (obviously). How can I get <code>selenium</code> to click that CSV button and then BS to scrape the data?</p>
|
<p><em>You don't need <code>BeautifulSoup</code> here</em>. Click the <code>CSV</code> button with selenium, extract the contents of the appeared <code>pre</code> element with CSV data and parse it with <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">built-in <code>csv</code> module:</a></p>
<pre><code>import csv
from StringIO import StringIO
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
player_link = "http://www.basketball-reference.com/players/j/jordami01.html"
browser = webdriver.Firefox()
wait = WebDriverWait(browser, 10)
browser.set_page_load_timeout(10)
# stop load after a timeout
try:
browser.get(player_link)
except TimeoutException:
browser.execute_script("window.stop();")
# click "CSV"
elem = wait.until(EC.presence_of_element_located((By.XPATH, "//div[@class='table_heading']//span[. = 'CSV']")))
elem.click()
# get CSV data
csv_data = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "pre#csv_totals"))).text.encode("utf-8")
browser.close()
# read CSV
reader = csv.reader(StringIO(csv_data))
for line in reader:
print(line)
</code></pre>
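<p>Note the snippet above targets Python 2; on Python 3, <code>StringIO</code> lives in the <code>io</code> module, and the CSV-parsing step (shown here on a small inline sample) becomes:</p>

```python
import csv
import io

csv_data = "Season,Age,Tm\n1984-85,21,CHI\n1985-86,22,CHI\n"
reader = csv.reader(io.StringIO(csv_data))
rows = list(reader)
print(rows[0])  # ['Season', 'Age', 'Tm']
```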
|
python|selenium
| 3 |
1,902,163 | 47,058,011 |
histogram2d in tensorflow
|
<p>I want to create a histogram2d in Tensorflow. Something like this:
<a href="https://i.stack.imgur.com/B2b3w.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B2b3w.jpg" alt="example 2d histogram"></a></p>
<p>Preferably in Tensorboard, but it's fine if there's a simple solution without tensorboard.</p>
<p>Tensorboard seems to only plot 1D histograms/distributions. I have a solution using np.histogram2d and images, which I'll add as an answer, but it's far from ideal since I cannot show the axis or observe quantitative values.</p>
|
<p>My solution involves using numpy's <code>np.histogram2d</code>, using <code>tf.py_func</code> to embed it within tensorflow, and then plotting the bin heights as a grayscale image via <code>tf.summary.image</code>.</p>
<pre><code>def _histogram_2d(a,b):
"""
takes two tensors of the same shape and computes the 2d histogram of their pairs
"""
ar = a.reshape(-1)
br = b.reshape(-1)
aux = np.histogram2d(ar, br)
return aux[0].astype(np.float32), aux[1].astype(np.float32), aux[2].astype(np.float32)
[H, xedges, yedges] = tf.py_func(_histogram_2d, [a, b], [tf.float32, tf.float32, tf.float32])
tf.summary.image('/2d_hist', tf.expand_dims(tf.expand_dims(H,0),-1))
</code></pre>
<p>You get something like this:
<a href="https://i.stack.imgur.com/JkzC2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JkzC2.png" alt="tboard_image_2dhist"></a></p>
<p>It does the job, but there may be something better.</p>
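<p>For reference, the <code>np.histogram2d</code> call that the snippet wraps behaves like this on its own (small example with 2 bins per axis):</p>

```python
import numpy as np

a = np.array([0.0, 0.0, 1.0, 1.0])
b = np.array([0.0, 1.0, 0.0, 1.0])
H, xedges, yedges = np.histogram2d(a, b, bins=2)
# one point falls in each of the four bins
print(H)
```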
|
plot|tensorflow|tensorboard
| 2 |
1,902,164 | 64,275,053 |
Unable to make predictions from Keras model due to CUDA errors
|
<p>I am new to Python. I followed <a href="https://towardsdatascience.com/time-series-forecasting-with-recurrent-neural-networks-74674e289816" rel="nofollow noreferrer">this website</a> as a guide to do some future predictions. After I did everything, the graph did not show up and I got these errors:</p>
<pre><code>2020-10-09 08:27:09.619051: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-10-09 08:27:09.620905: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2020-10-09 08:27:16.105403: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2020-10-09 08:27:16.107108: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2020-10-09 08:27:16.110329: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: kwk-tech-05
2020-10-09 08:27:16.110968: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: kwk-tech-05
2020-10-09 08:27:16.111746: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-09 08:27:16.119507: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2624680e450 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-09 08:27:16.120408: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
</code></pre>
<p>This is the code I wrote:</p>
<pre><code>import pandas as pd
import numpy as np
import tensorflow as tf
import keras
from keras.preprocessing.sequence import TimeseriesGenerator
from keras.models import Sequential
from keras.layers import LSTM, Dense
import plotly.graph_objects as go
df = pd.read_excel('T:/Python/NRN-Netze_Python/Stromverbrauch2Jahren.xlsx')
print(df.info())
df['Datum'] = pd.to_datetime(df['Datum'])
df.set_axis(df['Datum'], inplace = True)
df.drop(columns = ['Datum_u_Uhrzeit', 'Stunden', 'Minuten', 'Uhrzeit'], inplace = True)
value = df['Werte'].values
value = value.reshape((-1, 1))
split_percent = 0.80
split = int(split_percent * len(value))
value_train = value[:split]
value_test = value[split:]
date_train = df['Datum'][:split]
date_test = df['Datum'][split:]
print('')
print(len(value_train))
print(len(value_test))
look_back = 15
train_generator = TimeseriesGenerator(value_train, value_train, length = look_back, batch_size = 20)
test_generator = TimeseriesGenerator(value_test, value_test, length = look_back, batch_size = 1)
model = Sequential()
model.add(LSTM(10, activation = 'relu', input_shape = (look_back, 1)))
model.add(Dense(1))
model.compile(optimizer = 'adam', loss = 'mse')
num_epochs = 25
model.fit(train_generator, epochs = num_epochs, verbose = 1)
prediction = model.predict(test_generator)
value_train = value_train.reshape((-1))
value_test = value_test.reshape((-1))
prediction = prediction.reshape((-1))
trace1 = go.Scatter(
x = date_train,
y = value_train,
mode = 'lines',
name = 'Original'
)
trace2 = go.Scatter(
x = date_test,
y = value_test,
mode = 'lines',
name = 'Prediction'
)
layout = go.Layout(
title = 'Strom Verbrauch In Der Wasserversorgung',
xaxis = {'title' : 'Datum'},
yaxis = {'title' : 'Werte'}
)
value = value.reshape((-1))
def predict(num_prediction, model):
prediction_list = value[-look_back:]
for _ in range(num_prediction):
x = prediction_list[-look_back:]
x = x.reshape((1, look_back, 1))
out = model.predict(x)[0][0]
prediction_list = np.append(prediction_list, out)
prediction_list = prediction_list[look_back - 1:]
return prediction_list
def predict_dates(num_prediction):
last_date = df['Datum'].values[-1]
prediction_dates = pd.date_range(last_date, periods = num_prediction + 1).tolist()
return prediction_dates
num_prediction = 365
forecast = predict(num_prediction, model)
forecast_dates = predict_dates(num_prediction)
fig = go.Figure(data = [trace1, trace2], layout = layout)
fig.show()
</code></pre>
|
<p>From the error message, it looks like the CUDA driver for your graphics card is not installed. Note that those lines are warnings rather than fatal errors: if the machine has no NVIDIA GPU, TensorFlow simply falls back to the CPU (you can verify what it sees with <code>tf.config.list_physical_devices('GPU')</code>).</p>
|
python|pandas|tensorflow
| 0 |
1,902,165 | 70,662,767 |
I want the result of a user input to cause a different variable to be used
|
<p>Disclaimer I'm only 10 days into learning Python (little to no experience in any language prior to this)</p>
<p>I'm having difficulty conceptualizing my problem and thus, a solution for it.</p>
<p>I currently have the following code:</p>
<pre><code>spacer = "_|_"
spacer_btm = " | "
blank = "_"
blank_btm = " "
print(blank + spacer + blank + spacer + blank)
print(blank + spacer + blank + spacer + blank)
print(blank_btm + spacer_btm + blank_btm + spacer_btm + blank_btm)
</code></pre>
<p>This is going to print out a grid. Originally i had the <strong>blank</strong> and <strong>blank_btm</strong> variables split into 9 separate variables, each denoting a space in the grid that I want to change. The problem I have is that since these are strings, they are immutable. I want to change that blank space/underscore value for something else based on the result of a user input. Ideally the value that replaces the <strong>blank</strong> or <strong>blank_btm</strong> would also be a string, but I'm confounded as to what sort of process I should use to get there.</p>
<p>For example, if the input is <strong>1</strong>, let's say that would mean the top-left box in the grid gets <strong>"A"</strong> (or literally anything as a string).</p>
<p>For reference the previous code I had was</p>
<pre><code>spacer = str("_|_")
spacer_btm = " | "
a1 = str("_")
a2 = str("_")
a3 = str("_")
b4 = str("_")
b5 = str("_")
b6 = str("_")
c7 = str(" ")
c8 = str(" ")
c9 = str(" ")
print(a1 + spacer + a2 + spacer + a3)
print(b4 + spacer + b5 + spacer + b6)
print(c7 + spacer_btm + c8 + spacer_btm + c9)
</code></pre>
|
<p>The closest solution to your code I can think of is the following one.</p>
<pre><code>spacer = "_|_"
spacer_btm = " | "
blank = "_"
blank_btm = " "
print(("A" if input() == "1" else blank) + spacer + blank + spacer + blank)
print(blank + spacer + blank + spacer + blank)
print(blank_btm + spacer_btm + blank_btm + spacer_btm + blank_btm)
</code></pre>
<p>When Python executes the <code>input</code> builtin function, the program hangs until you press <kbd>ENTER</kbd>.</p>
<p>I would not do it like this but I don't want to give you a solution you would not understand. For example, I suppose you need to prompt the user repeatedly, in a read-eval-print loop, but do you know what a <code>while</code> loop is?</p>
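<p>Since you ran into string immutability, here is a sketch of the list-based structure most people end up with for a grid. Lists are mutable, so updating one cell is a single assignment; the rendering is simplified compared to your spacers, and the input handling is just illustrative:</p>

```python
board = [["_", "_", "_"],
         ["_", "_", "_"],
         [" ", " ", " "]]

def render(board):
    # join cells into rows, rows into the printed grid
    return "\n".join("|".join(row) for row in board)

choice = "1"            # pretend the user typed 1
if choice == "1":
    board[0][0] = "A"   # top-left box gets "A"
print(render(board))
```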
|
python|variables
| 0 |
1,902,166 | 72,988,605 |
Pandas covert one dataframe to another
|
<p>I am having trouble with this. I would like to ask how to convert the following raw data into the result data below. Thanks</p>
<p>Raw data</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Board</th>
<th>Slot No.</th>
</tr>
</thead>
<tbody>
<tr>
<td>55</td>
<td>WD22UBBPe4</td>
<td>3</td>
</tr>
<tr>
<td>14</td>
<td>WD22UBBPd6</td>
<td>2</td>
</tr>
<tr>
<td>14</td>
<td>QWL1WBBPF4</td>
<td>3</td>
</tr>
<tr>
<td>14</td>
<td>QWL1WBBPD2</td>
<td>0</td>
</tr>
<tr>
<td>14</td>
<td>WD22LBBPD2</td>
<td>1</td>
</tr>
<tr>
<td>16</td>
<td>QWL1WBBPD2</td>
<td>4</td>
</tr>
<tr>
<td>16</td>
<td>QWL1WBBPD2</td>
<td>3</td>
</tr>
<tr>
<td>16</td>
<td>WD22UBBPd6</td>
<td>2</td>
</tr>
<tr>
<td>16</td>
<td>QWL1WBBPD2</td>
<td>0</td>
</tr>
<tr>
<td>16</td>
<td>QWL1WBBPD2</td>
<td>1</td>
</tr>
<tr>
<td>72</td>
<td>QWL1WBBPD2</td>
<td>0</td>
</tr>
<tr>
<td>72</td>
<td>WD22LBBPD2</td>
<td>1</td>
</tr>
<tr>
<td>72</td>
<td>WD22UBBPd6</td>
<td>2</td>
</tr>
<tr>
<td>72</td>
<td>QWL1WBBPD2</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
<p>Result</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Slot 0</th>
<th>Slot 1</th>
<th>Slot 2</th>
<th>Slot 3</th>
<th>Slot 4</th>
<th>Slot 5</th>
</tr>
</thead>
<tbody>
<tr>
<td>55</td>
<td></td>
<td></td>
<td></td>
<td>WD22UBBPe4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>14</td>
<td>QWL1WBBPD2</td>
<td>WD22LBBPD2</td>
<td>WD22UBBPd6</td>
<td>QWL1WBBPF4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>16</td>
<td>QWL1WBBPD2</td>
<td>QWL1WBBPD2</td>
<td>WD22UBBPd6</td>
<td>QWL1WBBPD2</td>
<td>QWL1WBBPD2</td>
<td></td>
</tr>
<tr>
<td>72</td>
<td>QWL1WBBPD2</td>
<td>WD22LBBPD2</td>
<td>WD22UBBPd6</td>
<td>QWL1WBBPD2</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
|
<p>Here is one way to do it</p>
<pre><code>df.pivot(index='Name', columns='Slot No.').add_prefix('Slot ').fillna('').reset_index()
</code></pre>
<pre><code>
Name Slot Board
Slot No. Slot 0 Slot 1 Slot 2 Slot 3 Slot 4
0 14 QWL1WBBPD2 WD22LBBPD2 WD22UBBPd6 QWL1WBBPF4
1 16 QWL1WBBPD2 QWL1WBBPD2 WD22UBBPd6 QWL1WBBPD2 QWL1WBBPD2
2 55 WD22UBBPe4
3 72 QWL1WBBPD2 WD22LBBPD2 WD22UBBPd6 QWL1WBBPD2
</code></pre>
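<p>The column index in that output is a MultiIndex; if you want flat <code>Slot 0 … Slot N</code> headers exactly as in the desired result, pass <code>values='Board'</code> and rename the columns afterwards, e.g. (using an abbreviated sample of your data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Name": [55, 14, 14],
    "Board": ["WD22UBBPe4", "WD22UBBPd6", "QWL1WBBPF4"],
    "Slot No.": [3, 2, 3],
})
out = df.pivot(index="Name", columns="Slot No.", values="Board")
out.columns = [f"Slot {c}" for c in out.columns]   # flatten to plain labels
out = out.fillna("").reset_index()
print(out)
```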
|
python|pandas
| 2 |
1,902,167 | 72,993,451 |
I want python to skip the list if it doesn't have more than 3 parts (space separated) while reading a file line
|
<p>I'm making Python read a file and turn every line of the file into a space-separated list. Currently I'm facing an issue where I want Python to read a list only if its content has more than 3 parts.
I'm trying <code>if int(len(list[3])) == 3:</code> and then reading the 3 parts of the list, but the program is giving the error of</p>
<blockquote>
<p>`IndexError: list index out of range</p>
</blockquote>
<p>This error is usually raised when I access something that doesn't exist, but the line of code that reads the 3rd part shouldn't run on a list without 3+ parts.</p>
|
<p>I think it is this. Note that in your check <code>int(len(list[3])) == 3</code>, the indexing <code>list[3]</code> is evaluated <em>before</em> <code>len()</code> is called, so the indexing itself raises the <code>IndexError</code> on short lines; test <code>len(list)</code> instead:</p>
<pre class="lang-py prettyprint-override"><code>
def get_file_matrix(file: str):
    with open(file, 'r') as arq:
        lines = arq.readlines()
    # sanitizing
    lines_clear = [line.strip() for line in lines]
    lines_splited = [line.split(' ') for line in lines_clear]
    # keep only the lines that have more than 3 parts
    lines_filtered = [line for line in lines_splited if len(line) > 3]
    return lines_filtered

r = get_file_matrix('test.txt')
print(r)
</code></pre>
|
python|list
| 1 |
1,902,168 | 73,034,919 |
Testing Starknet Cairo contract function with address
|
<p>I built a module with Cairo language and I would like to unit test it. The contract is pretty simple : it manages a list of authorized addresses and provide some "modifier" functions to help.</p>
<p>I have taken the sample unit testing code from the documentation but nothing is referring about sending an account address to a function with python.</p>
<p>How should I proceed ?</p>
<p>Thanks in advance</p>
|
<p>You can spoof an account by providing the caller address to the invoke function like this:</p>
<pre class="lang-py prettyprint-override"><code>await contract.function(
arg1=felt,
arg2=felt2,
).invoke(caller_address=private_to_stark_key(123))
</code></pre>
<p>If you want a more detailed example of how the python unit testing framework works you can check these 2 links</p>
<p><a href="https://github.com/starknet-edu/basecamp/blob/main/camp_4/buidl/tests/test_contract.py" rel="nofollow noreferrer">https://github.com/starknet-edu/basecamp/blob/main/camp_4/buidl/tests/test_contract.py</a>
<a href="https://github.com/starknet-edu/starknet-debug/tree/master/python" rel="nofollow noreferrer">https://github.com/starknet-edu/starknet-debug/tree/master/python</a></p>
|
python|starknet|cairo-lang
| 0 |
1,902,169 | 73,232,171 |
Python. Find value in a complex dictionary by key
|
<p>I have a complicated dictionary from which I need to get the account number and balance.</p>
<pre><code>response = requests.request("GET", url=HOST_GET + str("8af*********************e66b024e"), headers=headers, data=payload)
data = response.json()
</code></pre>
<p>response</p>
<pre><code>{
"reportKey": "8af*********************e66b024e",
"status": "COMPLETE",
"items": [
{
"glAccount": {
"encodedKey": "8af6a********b66017*********00e",
"glCode": "1235802509PLN",
"type": "ASSET",
"usage": "DETAIL",
"name": "Some account 2",
"stripTrailingZeros": false,
"currency": {
"code": "USD"
}
},
"amounts": {
"openingBalance": 18.1200000000
}
},
{
"glAccount": {
"encodedKey": "8a**********10018************2e2",
"glCode": "12188009912PKOUSD01",
"type": "ASSET",
"usage": "DETAIL",
"name": "UAH account(payment channel acct.)",
"stripTrailingZeros": false,
"currency": {
"code": "USD"
}
},
"amounts": {
"openingBalance": 155532.5900000000
}
},
{
"glAccount": {
"encodedKey": "8af*********0719d0179***********f",
"glCode": "1134800294USD",
"type": "ASSET",
"usage": "DETAIL",
"name": "UAH account",
"stripTrailingZeros": false,
"currency": {
"code": "USD"
}
},
"amounts": {
"openingBalance": 455219.9500000000
}
},
{
"glAccount": {
"encodedKey": "8*************5fed***************1a4",
"glCode": "124567376001EUR",
"type": "ASSET",
"usage": "DETAIL",
"name": "EUR account",
"stripTrailingZeros": false,
"currency": {
"code": "EUR"
}
},
"amounts": {
"openingBalance": 14.0000000
}
}
]
}
</code></pre>
<p>I hard-coded the search, but the code must be redone constantly when new accounts appear.</p>
<pre><code>if data["items"][2]["glAccount"]["glCode"] == "1134800294USD":
glaccount = data["items"][2]["glAccount"]["glCode"]
amount1 = data["items"][2]["amounts"]["openingBalance"]
print('Account -', data["items"][2]["glAccount"]["glCode"], "Amount -", data["items"][2]["amounts"]["openingBalance"])
</code></pre>
<p>response</p>
<pre><code>Account - 1134800294USD Amount - 455219.95
</code></pre>
<p>I need to find the right account and then get the amount in the account. I have little experience in this, so please help.</p>
|
<p>You can try the for loop below to get the values; instead of printing, you can save them as variables or build a new dictionary (here <code>data</code> is the parsed JSON from your <code>response.json()</code>).</p>
<pre><code>for item in data["items"]:
    print(f'Account ID: {item["glAccount"]["glCode"]}')
    print(f'Account Balance: {item["amounts"]["openingBalance"]}')
</code></pre>
<p>Example:</p>
<pre><code>accounts = {}
for item in data["items"]:
    # note: wrapping these values in braces would create a set, not a plain value
    account_id = item["glAccount"]["glCode"]
    account_balance = item["amounts"]["openingBalance"]
    accounts[account_id] = account_balance
</code></pre>
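<p>Equivalently, a dict comprehension builds the same lookup table in one expression, after which finding a specific account is a plain key access (sample data abbreviated from your response):</p>

```python
data = {"items": [
    {"glAccount": {"glCode": "1134800294USD"},
     "amounts": {"openingBalance": 455219.95}},
    {"glAccount": {"glCode": "124567376001EUR"},
     "amounts": {"openingBalance": 14.0}},
]}
# glCode -> openingBalance
accounts = {item["glAccount"]["glCode"]: item["amounts"]["openingBalance"]
            for item in data["items"]}
print(accounts["1134800294USD"])  # 455219.95
```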
|
python|dictionary
| 0 |
1,902,170 | 64,667,040 |
Scipy Optimize Curve fit not properly fitting with real data
|
<p>I am trying to fit a decaying exponential function to real world data. I'm having a problem with aligning the function to the actual data.</p>
<p>Here's my code:</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit

def test_func(x, a, b, c):
return a*np.exp(-b*x)*np.sin(c*x)
my_time = np.linspace(0,2.5e-6,25000)
p0 = [60000, 700000, 2841842]
params, params_covariance = curve_fit(test_func, my_time, my_amp,p0)
</code></pre>
<p>My signal and fitted function
<a href="https://i.stack.imgur.com/akaEG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/akaEG.png" alt="My signal and fitted function" /></a></p>
<p>My question: why doesn't the fitted function start where my data starts increasing in amplitude?</p>
|
<p>As I said in my comment, the problem is that your function does not take into account that the exponential curve can be shifted. If you include this shift as an additional parameter, the fit will probably converge.</p>
<pre><code>from scipy.optimize import curve_fit
from matplotlib import pyplot as plt
import numpy as np
def test_func(x, a, b, c, d):
return a*np.exp(-b*(x+d))*np.sin(c*(x+d))
my_time = np.linspace(0,2.5e-6,25000)
#generate fake data
testp0 = [66372, 765189, 2841842, -1.23e-7]
test_amp = test_func(my_time, *testp0)
my_amp = test_func(my_time, *testp0)
my_amp[:2222] = my_amp[2222]
p0 = [600, 700000, 2000, -2e-7]
params, params_covariance = curve_fit(test_func, my_time, test_amp, p0)
print(params)
fit_amp = test_func(my_time, *params)
plt.plot(my_time, my_amp, label="data")
plt.plot(my_time, fit_amp, label="fit")
plt.legend()
plt.show()
</code></pre>
<p>Sample output</p>
<p><a href="https://i.stack.imgur.com/XzNZY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XzNZY.jpg" alt="enter image description here" /></a></p>
|
python|python-3.x|curve-fitting|scipy-optimize
| 0 |
1,902,171 | 64,074,698 |
How to add 5% Gaussian noise to the signal data
|
<p>I want to add 5% Gaussian noise to the multivaraite data.
Here is the approach</p>
<pre><code>import numpy as np
mu, sigma = 0, np.std(data)*0.05
noise = np.random.normal(mu, sigma, data.shape)
noise.shape
</code></pre>
<p>Here is the signal. Is this a correct approach to add 5% Gaussian noise
<a href="https://i.stack.imgur.com/X4vuX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X4vuX.png" alt="enter image description here" /></a></p>
|
<p>I think you are on the right track, noise is additive in nature and if you look at the (SNR) Signal to Noise Ratio calculation</p>
<p><strong>SNR = 20 * log10(p_s / p_n)</strong></p>
<p>which is nothing but</p>
<p><strong>SNR = 20 * (log10(p_s) - log10(p_n))</strong></p>
<p>so on the log scale we are basically <strong>subtracting</strong> the noise term from the signal term</p>
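<p>A quick numeric check of that log identity (base-10 logs, with p_s and p_n taken as amplitude-like quantities):</p>

```python
import numpy as np

p_s, p_n = 100.0, 1.0
snr_ratio = 20 * np.log10(p_s / p_n)                  # ratio form
snr_diff = 20 * (np.log10(p_s) - np.log10(p_n))       # subtraction form
print(snr_ratio, snr_diff)  # 40.0 40.0
```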
<h3>To add noise to the entire signal</h3>
<p>I would do the same as what you have posted</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
np.random.seed(137)
t = np.linspace(0, 10, 100)
p = np.sin(t)
percentage = 0.05
n = np.random.normal(0, p.std(), t.size) * percentage
pn = p + n
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax1.set_title('Noise added to entire signal')
ax1.plot(t, p, label='pure signal')
ax1.plot(t, pn, label='signal+noise')
ax2 = fig.add_subplot(212)
ax2.plot(t, pn - p, label='added noise', c='r')
plt.legend()
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax1.set_title('Noise added to part of the signal')
ax1.plot(t, p, label='pure signal')
random_indices = np.random.randint(0, t.size, int(t.size*percentage))
pr = p.copy()
pr[random_indices] += n[random_indices]
ax1.plot(t, pr, label='signal+noise')
ax2 = fig.add_subplot(212)
ax2.plot(t, pr - p, label='added noise', c='r')
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/ZarSm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZarSm.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/OgYqc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OgYqc.png" alt="enter image description here" /></a></p>
<h2>Note</h2>
<p>One interesting thing I have noticed is that <code>np.random.normal</code> with a very small variance tends to sample mostly positive values, so it is better to sample with a larger variance first and then scale the result down to the desired 5%</p>
|
python|numpy|signal-processing|gaussian
| 2 |
1,902,172 | 53,115,177 |
Tkinter project
|
<p>I need to have an input asking for a file path, and when the user fills in the file path with a file name, the program must save a file under that file name. Under the input I need a text widget and multiple inputs for just one answer. How can I do that?</p>
|
<p>Use <code>filedialog</code> of <code>tkinter</code>,</p>
<p>Whole code demo:</p>
<pre><code>from tkinter import Tk, Label, Button, Text, filedialog
class MyFirstGUI:
def __init__(self, master):
self.master = master
master.title("A simple GUI")
self.text = Text(master)
self.text.pack()
self.save_button = Button(master, text="Save as...", command=self.open)
self.save_button.pack()
def open(self):
self._filetypes = [
('Text', '*.txt'),
('All files', '*'),
]
self.filename = filedialog.asksaveasfilename(defaultextension='.txt',
filetypes = self._filetypes)
f = open(self.filename, 'w')
f.write(self.text.get('1.0', 'end'))
f.close()
root = Tk()
my_gui = MyFirstGUI(root)
root.mainloop()
</code></pre>
<p>So you just need the save-file function above, which uses <code>open</code> in write mode to save (write) the text widget's contents, and that's it.</p>
|
python|python-2.7|tkinter
| 1 |
1,902,173 | 52,936,360 |
How to Remove Integers with Dashes from a list while keeping dashes in objects
|
<p>I have a list filled with the following strings:</p>
<pre><code> list1 = ['01', '02', '03', '04', '05', '101-1', '101-2', '101-3',
          'Name1', 'Name2', 'Name3', 'Name-4', 'Name-5', 'Name-6']
</code></pre>
<p>I need to remove both the regular integers as well as the integers with dashes in them while keeping the Names as well as the Names with dashes in them. I have written the following code so far:</p>
<p>This code removes all of the dashes (but how do I specify only to remove the dashes from the integer strings and not the object strings):</p>
<pre><code>list2 = [i.replace('-','') for i in list1 if i.isdigit()]
</code></pre>
<p>This code removes all integers wrapped in strings:</p>
<pre><code> list3 = [x for x in list2 if not (x.isdigit() or x[0] == '-' and x[1:].isdigit())]
</code></pre>
<p>With the above code, I am able to remove all of the integers, but it also removes all of the 'Names' with dashes in them as well - I need to keep the Names with dashes in them. How can I do this?</p>
|
<p>(Since this is tagged pandas) You can use <code>str.replace</code> + <code>str.isdigit</code>:</p>
<pre><code>s = pd.Series(list1)
s[~s.str.replace('-', '', regex=False).str.isdigit()]
8 Name1
9 Name2
10 Name3
11 Name-4
12 Name-5
13 Name-6
dtype: object
</code></pre>
<p>To get back a list, call <code>.tolist()</code> on the result.</p>
<p>Translating this into pure python, we have the list comp equivalent (look ma, no regex):</p>
<pre><code>>>> [x for x in list1 if not x.replace('-', '').isdigit()]
['Name1', 'Name2', 'Name3', 'Name-4', 'Name-5', 'Name-6']
</code></pre>
|
python|pandas|list
| 4 |
1,902,174 | 65,329,704 |
How to calculate an average stock price depending on periods
|
<p>I am trying to calculate the average opening price for a stock, depending on different periods (week, month, year).</p>
<p>Here you can see a part of my df : <a href="https://i.stack.imgur.com/jREi7.png" rel="nofollow noreferrer">My dataframe</a> (987 rows for the complete df)</p>
<p>Firstly, I am trying to calculate the average opening price week by week. I found a solution, but it is not sustainable (it took my computer 5 minutes to finish the calculations). Here it is:</p>
<pre><code>def average_opening_and_closing_prices(df):
array = [0]
n = df["weekofyear"].count()
j=0
for i in range(0,n):
array[j] = array[j] + kdf["Open"][i]
if i != n-1 and kdf["weekofyear"][i] != kdf["weekofyear"][i+1]:
array.append(0)
j = j+1
for x in array:
print(str(x) + " ")
average_opening_and_closing_prices(AMAZON_df)
</code></pre>
<p>Could you help me improve my solution (mainly its execution time)? Also, I would like to add a column directly to my df which contains the result for each week, instead of putting the results in an array.</p>
<p>I am not allowed to use pandas, I can only use pyspark and koalas.</p>
|
<p>[UPDATED: To include year in the calculation]
As you are looking for the average price per week (and year) and have already added <code>weekofyear</code> to the data frame, pandas itself can do it for you. Just add a column for the year and try <code>df.groupby(['year', 'weekofyear']).mean()</code>.
Sample below:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'weekofyear' : [1, 1, 1, 2, 2, 2, 3, 3, 3],
'year' : [2017, 2017, 2018, 2017, 2017, 2018, 2017, 2017, 2018],
'Open' : [757, 758, 759, 761, 761, 762, 763, 764, 764]
})
result_df = df.groupby(['year', 'weekofyear']).mean()
print(result_df)
</code></pre>
<p>Output</p>
<pre><code>Open
year weekofyear
2017 1 757.5
2 761.0
3 763.5
2018 1 759.0
2 762.0
3 764.0
</code></pre>
|
python|apache-spark|pyspark|spark-koalas
| 2 |
1,902,175 | 72,101,987 |
numpy.savetxt keeps giving an error with 2d list
|
<p>So I've been coding a 2d list that users can edit in python. At the end of this, I need to save the list into a txt file, then I get the error. This is the bit of code that's problematic:</p>
<pre><code>tdlist = [["trash","laundry","dishes"],[1,2,3]]
items = numpy.array(tdlist)
savelist = open("list.txt", "w")
for row in items:
numpy.savetxt("list.txt", row)
savelist.close()
</code></pre>
<p><strong>This is the wall of text error</strong></p>
<pre><code>Traceback (most recent call last):
File "/home/runner/ToDoList/venv/lib/python3.8/site-packages/numpy/lib/npyio.py", line 1450, in savetxt
v = format % tuple(row) + newline
TypeError: must be real number, not numpy.str_
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "main.py", line 113, in <module>
numpy.savetxt("list.txt", row)
File "<__array_function__ internals>", line 180, in savetxt
File "/home/runner/ToDoList/venv/lib/python3.8/site-packages/numpy/lib/npyio.py", line 1452, in savetxt
raise TypeError("Mismatch between array dtype ('%s') and "
TypeError: Mismatch between array dtype ('<U21') and format specifier ('%.18e')
</code></pre>
<p>Any help is great, thanks!</p>
|
<p>There are two problems; the format problem is only one of them. You mixed two ways of writing content to a text file.</p>
<p>Either you open the file, write to it, and close it again. Could be done like this:</p>
<pre><code>tdlist = [["trash", "laundry", "dishes"], [1, 2, 3]]
items = numpy.array(tdlist)
savelist = open("list.txt", "w")
for row in items:
savelist.write(", ".join(row) + "\n") #needs to be a string
savelist.close()
</code></pre>
<p>or you just use <code>numpy.savetxt</code> and write the whole items array in one step to a text file. Could be done like this:</p>
<pre><code>tdlist = [["trash", "laundry", "dishes"], [1, 2, 3]]
items = numpy.array(tdlist)
numpy.savetxt("list.txt", items, fmt="%s") #like in the comments already mentioned, you need to specify a format here
</code></pre>
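<p>If, instead, each task should stay paired with its number as columns in the file, one option (a sketch, not part of the original question) is to transpose the array and let <code>savetxt</code> apply the string format to every column:</p>

```python
import numpy as np

tdlist = [["trash", "laundry", "dishes"], [1, 2, 3]]
items = np.array(tdlist)  # everything becomes a string dtype here ('<U...')

# transpose so each output line pairs a task with its number, e.g. "trash, 1"
np.savetxt("list.txt", items.T, fmt="%s", delimiter=", ")
```

<p>With a single <code>fmt</code> specifier, <code>savetxt</code> repeats it per column and joins with <code>delimiter</code>, so the file contains one "task, number" pair per line.</p>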
|
python|numpy|typeerror
| 1 |
1,902,176 | 68,476,485 |
Random values from a PERT distribution in Python
|
<p>I'd like to generate in Python 10,000 random values from a PERT distribution that has the following parameters low=6898.5, peak= 7338.93, high=7705.87</p>
<p>How can I do that?</p>
|
<p>If you just want to use the standard library, you could do something like:</p>
<pre><code>from random import betavariate
def pert(a, b, c, *, lamb=4):
r = c - a
alpha = 1 + lamb * (b - a) / r
beta = 1 + lamb * (c - b) / r
return a + betavariate(alpha, beta) * r
arr = [pert(6898.5, 7338.93, 7705.87) for _ in range(10_000)]
</code></pre>
<p>Using Numpy is mostly the same:</p>
<pre><code>import numpy as np
def pert(a, b, c, *, size=1, lamb=4):
r = c - a
alpha = 1 + lamb * (b - a) / r
beta = 1 + lamb * (c - b) / r
return a + np.random.beta(alpha, beta, size=size) * r
arr = pert(6898.5, 7338.93, 7705.87, size=10_000)
</code></pre>
<p>but is about 20 times faster (20ms vs 0.8ms).</p>
<p>Either of these can be used to produce similar plots to Severin, e.g.:</p>
<p><a href="https://i.stack.imgur.com/CfkYy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CfkYy.png" alt="KDE via Seaborn" /></a></p>
|
python|random|statistics|distribution
| 5 |
1,902,177 | 68,649,004 |
Grouping of output plots from function
|
<p>Here is something I have been struggling with today. It's a question of how to present data in such a way as to avoid having to scroll down a notebook for ages and losing the ability to compare graphs.</p>
<p>Suppose I have this dataframe:</p>
<pre><code>id type zone d
0 1 a a1 23
1 1 a b1 45
2 1 a c1 23
3 2 a c1 56
4 2 b a1 7
5 2 b b1 5
6 3 b a1 2
7 3 b a1 9
8 3 b b1 43
9 4 c c1 21
10 4 c c1 67
11 5 c b1 34
12 5 c a1 21
13 1 a a1 3
14 1 a b1 4
15 1 a c1 12
16 2 a c1 10
17 2 b a1 33
18 2 b b1 22
19 3 b a1 334
20 3 b a1 22
21 3 b b1 11
22 4 c c1 55
23 4 c c1 88
24 5 c b1 22
25 5 c a1 9
</code></pre>
<p>and the following function</p>
<pre><code>def my_function(df,zone):
df_new = df[df['zone']=="{}".format(zone)]
df_new.hist()
plt.suptitle("distances for zone {}".format(zone))
</code></pre>
<p>I generate one graph per zone doing the following</p>
<pre><code>zoneList = list(set(df['zone'].unique()))
for zone in zoneList:
my_function(df,zone)
</code></pre>
<p>Now this returns a sequence of barplots, one on top of the other. This is not very convenient. What I would like is for these plots to be in a grid, say 2 plots by row.</p>
<p>I tried this:</p>
<pre><code>fig, axs = plt.subplots(2, 2)
for zone in zoneList:
axs = my_function(df,zone)
</code></pre>
<p>but it returns the grid I want and then the same result that I previously got.</p>
<p>How can I solve this?</p>
|
<p>You have created a grid that you want, but you have not used it anywhere.</p>
<p><code>pandas.DataFrame.hist()</code> has an argument <code>ax</code> as:</p>
<blockquote>
<p><strong>ax : Matplotlib axes object, default None</strong></p>
<p>The axes to plot the histogram on.</p>
</blockquote>
<p>This code:</p>
<pre><code>fig, axs = plt.subplots(2, 2)
</code></pre>
<p>returns a Matplotlib figure <code>fig</code> and <code>numpy.array</code> of Matplotlib axes objects <code>axs</code> (1-dimensional if either rows or columns is 1, 2-dimensional otherwise).</p>
<p>So, you need to:</p>
<ol>
<li>loop over the flattened version of <code>axs</code></li>
<li>pass respective <code>ax</code> object to <code>df_new.hist()</code> method</li>
</ol>
<p>Like such:</p>
<pre><code>def my_function(df,zone,ax):
df_new = df[df['zone']=="{}".format(zone)]
df_new.hist(ax = ax)
# set title just for this subplot
ax.set_title("distances for zone {}".format(zone), loc = 'center')
fig, axs = plt.subplots(2, 2)
for zone, ax in zip(zoneList, axs.flatten()):
my_function(df,zone,ax)
</code></pre>
|
python|pandas|matplotlib|plot|seaborn
| 3 |
1,902,178 | 5,257,822 |
How to count the number of currently connected Protocols in Python twisted framework
|
<p>I was trying to count the number of active protocols in twisted but i got an error:</p>
<pre><code>exceptions.AttributeError: Factory instance has no attribute 'numProtocols'
</code></pre>
<p>Below is the code:</p>
<pre><code>class EchoPro(Protocol):
def connectionMade(self):
self.factory.numProtocols = self.factory.numProtocols+1
if self.factory.numProtocols > 100:
self.transport.write("Too many connections, try later")
self.transport.loseConnection()
def connectionLost(self, reason):
self.factory.numProtocols = self.factory.numProtocols-1
def dataReceived(self, data):
self.transport.write(data)
</code></pre>
|
<p>That's because <code>self.factory</code> does not contain the <code>numProtocols</code> attribute. </p>
<p>To customise the protocol's factory you create a Factory for your protocol by subclassing <code>twisted.internet.protocol.Factory</code>.</p>
<p>Example:</p>
<pre><code>from twisted.internet.protocol import Protocol, Factory
from twisted.internet import reactor
class Echo(Protocol):
# ... your implementation as it is now ...
class EchoFactory(Factory): # Factory for your protocol
protocol = Echo
numProtocols = 0
factory = EchoFactory()  # protocol is already set on the class
reactor.listenTCP(8007, factory)
reactor.run()
</code></pre>
<p>Alternatively, you could just modify the factory instance once it is created, <a href="http://twistedmatrix.com/documents/10.1.0/core/howto/servers.html#auto5" rel="noreferrer">as done in the docs</a>.</p>
<p>Example:</p>
<pre><code>from twisted.internet.protocol import Protocol, Factory
from twisted.internet import reactor
class Echo(Protocol):
# ... your implementation as it is now ...
def getEchoFactory():
factory = Factory()
factory.protocol = Echo
factory.numProtocols = 0
return factory
reactor.listenTCP(8007, getEchoFactory())
reactor.run()
</code></pre>
|
python|twisted|twisted.internet
| 5 |
1,902,179 | 62,842,304 |
Deploy exe Kivy with multi folder and files and TensorFlow
|
<p>I'm trying to deploy a Kivy app to Windows with PyInstaller, following this tutorial: <a href="https://kivy.org/doc/stable/guide/packaging-windows.html" rel="nofollow noreferrer">Create a package for Windows</a></p>
<p>But when I try to execute it, it crashes.</p>
<p>I'm trying to use the <code>--onefile</code> command to create it.</p>
<p><strong>This is my Tree folder:</strong></p>
<pre><code>Detector:.
│ camera.py
│ data.json
│ dataControler.py
│ gui.kv
│ Main.py
│ controle.py
│ detector.model
│ detector.spec
│
├───face_detector
│ deploy.prototxt
│ res10_300x300_ssd_iter_140000.caffemodel
│
├───icons
│ agta.jpg
│ ico.png
│ icoagta.ico
│
└───songs
en.mp3
ptbr.mp3
</code></pre>
<p>I changed the detector.spec as explaned in kivy tutorial</p>
<p><strong>detector.spec</strong></p>
<pre><code># -*- mode: python ; coding: utf-8 -*-
from kivy.tools.packaging.pyinstaller_hooks import get_deps_minimal, get_deps_all, hookspath, runtime_hooks
block_cipher = None
a = Analysis(['Main.py'],
pathex=['C:\\Users\\**User**\\Desktop\\detector\\Main.py'],
binaries=[],
datas=[],
hiddenimports=[],
hookspath=hookspath(),
runtime_hooks=runtime_hooks(),
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
**get_deps_all())
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
coll = COLLECT(exe, Tree('detector\\'),
a.binaries,
a.zipfiles,
a.datas,
*[Tree(p) for p in (sdl2.dep_bins + glew.dep_bins)],
strip=False,
upx=True,
name='detector')
exe = EXE(pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='detector',
debug=True,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=True , icon='icons\\icoagta.ico')
</code></pre>
<p>When I execute Main.py it works great. However, when I pack it, it doesn't work.</p>
<p>Does anyone know how to solve? I tried the documentation but still haven't found the solution.</p>
|
<p>Well, after many attempts the solution for me was to <strong>downgrade to tensorflow==1.14</strong>, which is compatible with PyInstaller, and to change my .spec to:</p>
<pre><code># -*- mode: python ; coding: utf-8 -*-
block_cipher = None
from kivy_deps import sdl2, glew, gstreamer
a = Analysis(['C:\\Users\\bababa\\Desktop\\mask\\Main.py'],
pathex=['C:\\Users\\bababa\\Desktop\\mask', 'C:\\Program Files (x86)\\Windows Kits\\10\\Redist\\ucrt\\DLLs\\x86','C:\\Program Files (x86)\\Windows Kits\\10\\Redist\\ucrt\\DLLs\\x64'],
binaries=[],
datas=[],
hiddenimports=[],
hookspath=[],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
[],
exclude_binaries=True,
name='Mask',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=True , icon='C:\\Users\\bababa\\Desktop\\mask\\icons\\icoagta.ico')
coll = COLLECT(exe,Tree('C:\\Users\\bababa\\Desktop\\mask'),
a.binaries,
a.zipfiles,
a.datas,
*[Tree(p) for p in (sdl2.dep_bins + glew.dep_bins + gstreamer.dep_bins)],
strip=False,
upx=True,
upx_exclude=[],
name='Mask')
</code></pre>
<p>Now it work fine</p>
|
python|tensorflow|kivy|pyinstaller|kivy-language
| 0 |
1,902,180 | 62,624,321 |
Manifest error. Line: 1, column: 1, Syntax error in Flask
|
<ul>
<li>I am trying to link manifest.json file to the website I built to convert it to PWA. Have used <code>html/css</code> and <code>python flask</code> for the backend.</li>
<li>I am not getting whether it is the issue of the path or something else. Service worker is being detected and that is working absolutely fine.</li>
<li>But in the Application manifest I am getting this error Manifest is not valid JSON. Line: 1, column: 1, Unexpected token
<a href="https://i.stack.imgur.com/sRldD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sRldD.jpg" alt="enter image description here" /></a></li>
<li><code>manifest.json</code> file</li>
</ul>
<pre><code>
{
"name": "Flask-PWA",
"short_name": "Flask-PWA",
"description": "A progressive webapp template built with Flask",
"theme_color": "transparent",
"background_color": "transparent",
"display": "standalone",
"orientation": "portrait",
"scope": "/",
"start_url": "../templates/login_user.html",
"icons": [
{
"src": "images/icons/icon-72x72.png",
"sizes": "72x72",
"type": "image/png"
},
{
"src": "images/icons/icon-96x96.png",
"sizes": "96x96",
"type": "image/png"
},
{
"src": "images/icons/icon-128x128.png",
"sizes": "128x128",
"type": "image/png"
},
{
"src": "images/icons/icon-144x144.png",
"sizes": "144x144",
"type": "image/png"
},
{
"src": "images/icons/icon-152x152.png",
"sizes": "152x152",
"type": "image/png"
},
{
"src": "images/icons/icon-192x192.png",
"sizes": "192x192",
"type": "image/png"
},
{
"src": "images/icons/icon-384x384.png",
"sizes": "384x384",
"type": "image/png"
},
{
"src": "images/icons/icon-512x512.png",
"sizes": "512x512",
"type": "image/png"
}
]
}
</code></pre>
<ul>
<li>This is the file structure for the manifest</li>
</ul>
<p><a href="https://i.stack.imgur.com/zrHOL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zrHOL.jpg" alt="enter image description here" /></a></p>
|
<h2>change content type</h2>
<p>Check your network tab in the developer console and look for the <code>manifest.json</code> request. If the content type of the response is <code>text/html</code>, then you might need to add an additional header changing <code>Content-Type</code> to <code>application/json</code> in your flask route.</p>
<h2>use python object</h2>
<p>If changing the content type doesn't work, you can write your entire manifest as a python object then <code>jsonify</code> it before returning it to the browser.</p>
<pre class="lang-py prettyprint-override"><code>from flask import jsonify
@app.route('/manifest.json')
def manifest():
return jsonify(manifest_python_object)
</code></pre>
|
python|html|flask|progressive-web-apps|manifest.json
| 1 |
1,902,181 | 61,980,231 |
Mean grouped by two columns with window by 3 months and NaN for less than 3 months
|
<p>I have to apply the mean calculation in this dataset by customer and account, but the mean needs to be applied to 3-month windows within these groups. For account A1200, which doesn't have 3 months of data, the result needs to be <code>NaN</code>.</p>
<pre><code>customer account month invoice
C1000 A1100 2019-10-01 34000
2019-11-01 55000
2019-12-01 80000
A1200 2019-10-01 90000
2019-11-01 55000
A1300 2019-10-01 10000
2019-11-01 10000
2019-12-01 20000
C2000 A2100 2019-10-01 78000
2019-11-01 55000
2019-12-01 80000
</code></pre>
<p>I tried to use this command, but the average looks incorrect.</p>
<pre><code>df_3m.groupby(['customer','account']).mean()
</code></pre>
<p>Are there some ideias in <code>pandas</code> or <code>pyspark</code>?</p>
|
<p>Data</p>
<pre><code>+----------+---------+----------+----------+
| customer | account | month | invoice |
+----------+---------+----------+----------+
| C1000 | A1100 | 01-10-19 | 34000 |
| C1000 | A1100 | 01-11-19 | 55000 |
| C1000 | A1100 | 01-12-19 | 80000 |
| C1000 | A1200 | 01-10-19 | 90000 |
| C1000 | A1200 | 01-11-19 | 55000 |
| C1000 | A1300 | 01-10-19 | 10000 |
| C1000 | A1300 | 01-11-19 | 10000 |
| C1000 | A1300 | 01-12-19 | 20000 |
| C2000 | A2100 | 01-10-19 | 78000 |
| C2000 | A2100 | 01-11-19 | 55000 |
| C2000 | A2100 | 01-12-19 | 80000 |
+----------+---------+----------+----------+
</code></pre>
<p>Your Query </p>
<pre><code>res = df_3m.groupby(['customer','account']).mean()
</code></pre>
<p>Boolean mask selecting accounts that have at least 3 months of data (accounts with fewer become <code>NaN</code> in the final result)</p>
<pre><code>lt_3 = df.groupby(['account']).count() >2
</code></pre>
<p>Final Result </p>
<pre><code>res[lt_3]
</code></pre>
<p>Output</p>
<pre><code>+----------+---------+--------------+
| customer | account | invoice |
+----------+---------+--------------+
| C1000 | A1100 | 56333.333333 |
| | A1200 | NaN |
| | A1300 | 13333.333333 |
| C2000 | A2100 | 71000.000000 |
+----------+---------+--------------+
</code></pre>
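<p>The same result can also be produced in one pass, without a separate count mask, by using a custom aggregation that returns <code>NaN</code> for groups shorter than 3 months (a sketch; the column names are taken from the question, the data is abbreviated):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'customer': ['C1000'] * 8 + ['C2000'] * 3,
    'account':  ['A1100'] * 3 + ['A1200'] * 2 + ['A1300'] * 3 + ['A2100'] * 3,
    'invoice':  [34000, 55000, 80000, 90000, 55000,
                 10000, 10000, 20000, 78000, 55000, 80000],
})

# mean per (customer, account); NaN when fewer than 3 months are present
res = df.groupby(['customer', 'account'])['invoice'].agg(
    lambda s: s.mean() if len(s) >= 3 else np.nan)
```

<p>Here <code>res</code> is a Series indexed by <code>(customer, account)</code> with A1200 as <code>NaN</code>, matching the output above.</p>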
|
python|python-3.x|pandas|pyspark|pandas-groupby
| 3 |
1,902,182 | 61,839,615 |
Can I call a class in python? It's saying that there is an error in last two lines
|
<pre><code>class game:
gamestillgoing=True
if gamestillgoing:
def __init__(self, board, turn):
self.board = ["#", "-", "-", "-",
"-", "-", "-",
"-", "-", "-"]
self.turn = input("Choose your turn X or O?\n")
print("Player {} goes first!".format(turn))
def displayboard(self):
print(self.board[1] + "|" + self.board[2] + "|" + self.board[3])
print(self.board[4] + "|" + self.board[5] + "|" + self.board[6])
print(self.board[7] + "|" + self.board[8] + "|" + self.board[9])
game.displayboard(self)
def choosepositionX(self):
position = int(input("Choose the position from 1-9\n"))
position = int(position)
if self.turn == "X":
self.board[position] = "X"
game.displayboard(self)
else:
while not self.turn == "X":
print("Choose the valid input.")
def choosepositonO(self):
position = int(input("Choose the position from 1-9\n"))
position = int(position)
if self.turn == "O":
self.board[position] = "O"
game.displayboard(self)
else:
while not self.turn == "O":
print("Choose the valid input.")
def checkwinrow(self):
row1 = self.board[1] == self.board[2] == self.board[3]
row2 = self.board[4] == self.board[5] == self.board[6]
row3 = self.board[7] == self.board[8] == self.board[9]
if row1 or row2 or row3 == "X":
print("Player X has won!\n ")
gamestillgoing = False
elif row1 or row2 or row3 == "O":
print("Player O has won!\n")
gamestillgoing = False
else:
gamestillgoing = True
def checkwincolumn(self):
column1 = self.board[1] == self.board[4] == self.board[7]
column2 = self.board[2] == self.board[5] == self.board[8]
column3 = self.board[3] == self.board[6] == self.board[9]
if column1 or column2 or column3 == "X":
print("Player X has won!\n")
gamestillgoing = False
elif column1 or column2 or column3 == "O":
print("Player O has won!\n")
gamestillgoing = False
else:
gamestillgoing = True
def checkwindiagonal(self):
diagonal1 = self.board[1] == self.board[5] == self.board[9]
diagonal2 = self.board[3] == self.board[5] == self.board[7]
if diagonal1 or diagonal2 == "X":
print("Player X has won!\n")
gamestillgoing = False
elif diagonal1 or diagonal2 == "O":
print("Player O has win!\n")
gamestillgoing = False
else:
gamestillgoing = True
def checktie(self):
row1 = self.board[1] == self.board[2] == self.board[3]
row2 = self.board[4] == self.board[5] == self.board[6]
row3 = self.board[7] == self.board[8] == self.board[9]
column1 = self.board[1] == self.board[4] == self.board[7]
column2 = self.board[2] == self.board[5] == self.board[8]
column3 = self.board[3] == self.board[6] == self.board[9]
diagonal1 = self.board[1] == self.board[5] == self.board[9]
diagonal2 = self.board[3] == self.board[5] == self.board[7]
if row1 == row2 == row3 or column1 == column2 == column3 or diagonal1 == diagonal2 != "X" or "O":
print("The game is Tie!")
gamestillgoing = False
else:
gamestillgoing = True
def playerturn(self):
if self.turn == "X":
game.choosepositionX(self)
elif self.turn == "O":
game.choosepositonO(self)
else:
"Choose the valid Input!\n"
game.playerturn(self)
else:
gamestillgoing=False
Start=game()
Start()
</code></pre>
<p>When I try to run this game, it says there is an error in the last two lines.
Can anyone help me solve this problem?
I just need the solution for the errors in the last two lines
(i.e. <code>Start=game()</code> and <code>Start()</code>).
The main problem is in calling the class.</p>
|
<p>No, you can't generally "call" a class in Python, but you can call its methods (including its constructor). If you really must call a class, or an instance thereof, <a href="https://stackoverflow.com/questions/14585987/making-a-class-callable-in-same-instance">you have some extra work to do</a>.</p>
<p>You have many syntax, logic, and architectural errors in your code.</p>
<ol>
<li><p>Your use of <code>gamestillgoing</code> as a class attribute (instead of a member attribute) isn't technically impossible, but it is not advised. You should
make <code>gamestillgoing</code> be a member, or better yet, a property.</p></li>
<li><p>You have an infinite recursion in <code>displayboard()</code>.</p></li>
<li><p>You have a mix of indentations, 2-5 spaces. You should always indent 4 spaces.</p></li>
<li><p>You can't just call your class once. You need to call the <code>playerturn()</code> method repeatedly until <code>gamestillgoing</code> is <code>False</code>.</p></li>
</ol>
<p>I cleaned up those problems. This is far from a working game (eg, it doesn't alternate turns yet - you have do that after each player takes their turn), but this should be enough to get you un-stuck.</p>
<pre class="lang-py prettyprint-override"><code>class game:
def __init__(self):
self._gamestillgoing = True
self.board = ["#", "-", "-", "-",
"-", "-", "-",
"-", "-", "-"]
self.turn = input("Choose your turn X or O?\n")
print("Player {} goes first!".format(self.turn))
@property
def gamestillgoing(self):
return self._gamestillgoing
def displayboard(self):
print(self.board[1] + "|" + self.board[2] + "|" + self.board[3])
print(self.board[4] + "|" + self.board[5] + "|" + self.board[6])
print(self.board[7] + "|" + self.board[8] + "|" + self.board[9])
def choosepositionX(self):
position = int(input("Choose the position from 1-9\n"))
position = int(position)
if self.turn == "X":
self.board[position] = "X"
game.displayboard(self)
else:
while not self.turn == "X":
print("Choose the valid input.")
def choosepositonO(self):
position = int(input("Choose the position from 1-9\n"))
position = int(position)
if self.turn == "O":
self.board[position] = "O"
game.displayboard(self)
else:
while not self.turn == "O":
print("Choose the valid input.")
def checkwinrow(self):
row1 = self.board[1] == self.board[2] == self.board[3]
row2 = self.board[4] == self.board[5] == self.board[6]
row3 = self.board[7] == self.board[8] == self.board[9]
if row1 or row2 or row3 == "X":
print("Player X has won!\n ")
self._gamestillgoing = False
elif row1 or row2 or row3 == "O":
print("Player O has won!\n")
self._gamestillgoing = False
else:
self._gamestillgoing = True
def checkwincolumn(self):
column1 = self.board[1] == self.board[4] == self.board[7]
column2 = self.board[2] == self.board[5] == self.board[8]
column3 = self.board[3] == self.board[6] == self.board[9]
if column1 or column2 or column3 == "X":
print("Player X has won!\n")
self._gamestillgoing = False
elif column1 or column2 or column3 == "O":
print("Player O has won!\n")
self._gamestillgoing = False
else:
self._gamestillgoing = True
def checkwindiagonal(self):
diagonal1 = self.board[1] == self.board[5] == self.board[9]
diagonal2 = self.board[3] == self.board[5] == self.board[7]
if diagonal1 or diagonal2 == "X":
print("Player X has won!\n")
self._gamestillgoing = False
elif diagonal1 or diagonal2 == "O":
print("Player O has win!\n")
self._gamestillgoing = False
else:
self._gamestillgoing = True
def checktie(self):
row1 = self.board[1] == self.board[2] == self.board[3]
row2 = self.board[4] == self.board[5] == self.board[6]
row3 = self.board[7] == self.board[8] == self.board[9]
column1 = self.board[1] == self.board[4] == self.board[7]
column2 = self.board[2] == self.board[5] == self.board[8]
column3 = self.board[3] == self.board[6] == self.board[9]
diagonal1 = self.board[1] == self.board[5] == self.board[9]
diagonal2 = self.board[3] == self.board[5] == self.board[7]
if row1 == row2 == row3 or column1 == column2 == column3 or diagonal1 == diagonal2 != "X" or "O":
print("The game is Tie!")
self._gamestillgoing = False
else:
self._gamestillgoing = True
def playerturn(self):
if self.turn == "X":
self.choosepositionX()
elif self.turn == "O":
self.choosepositonO()
else:
"Choose the valid Input!\n"
self.playerturn(self)
g=game()
while g.gamestillgoing:
g.playerturn()
</code></pre>
|
python|new-operator
| 0 |
1,902,183 | 67,254,276 |
How to build non-web python application image with buildpack?
|
<p>I am new to containers, so the questions below might sound naive.</p>
<p>There are two questions actually.</p>
<ol>
<li><p>I have a non-web python application fully tested in VScode without any error, then I use below Dockerfile to build it locally.</p>
<pre><code> FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./mycode.py"]
</code></pre>
</li>
</ol>
<p>An image was built successfully, but running it ended with a TypeError. I have made sure requirements.txt has the same dependencies as my project environment. The error message is "wrong tuple index", which gives me no clue where the problem in fully tested code could come from. I am stuck here with a weird feeling.</p>
<ol start="2">
<li>I then tried buildpack with Procfile: <code>worker: python mycode.py</code>
An image was built successfully, but docker run could not launch the application with below error. I have no idea about what else beside "worker:" could launch a non-web python application in Procfile. Stuck again!</li>
</ol>
<blockquote>
<p>ERROR: failed to launch: determine start command: when there is no
default process a command is required</p>
</blockquote>
<p>I searched but all are about web application with "web:" in Procfile. Any help on either question will be appreciated.</p>
|
<p>When you start the container, you'll need to pass it the <code>worker</code> process type like this:</p>
<pre><code>$ docker run -it myapp worker
</code></pre>
<p>Then it should run the the command you added to the <code>Procfile</code>.</p>
<p>A few other things:</p>
<ul>
<li>Make sure you're using the <code>heroku/python</code> buildpack or another buildpack that includes <code>Procfile</code> detection.</li>
<li>Confirm in the build output that the <code>worker</code> process type was created</li>
<li>You <em>can</em> put your command as the <code>web:</code> process type if you do not want to add <code>worker</code> to your start command. There's nothing wrong with a <code>web:</code> process that doesn't run a web app</li>
</ul>
|
python|docker|buildpack|non-web
| 0 |
1,902,184 | 70,333,242 |
Python Progress Bar for non-iterable process
|
<p>I'm using this <a href="https://github.com/deepset-ai/haystack/blob/master/tutorials/Tutorial16_Document_Classifier_at_Index_Time.ipynb" rel="nofollow noreferrer">Notebook</a>, where section <strong>Apply DocumentClassifier</strong> is altered as below.</p>
<p><strong>Jupyter Labs</strong>, kernel: <code>conda_mxnet_latest_p37</code>.</p>
<p><a href="https://github.com/tqdm/tqdm" rel="nofollow noreferrer">tqdm</a> is a progress bar wrapper. It seems to work both on <code>for loops</code> and in <code>CLI</code>. However, I would like to use it on line:</p>
<pre><code>classified_docs = doc_classifier.predict(docs_to_classify)
</code></pre>
<p>This is an iterative process, but it happens under the bonnet.</p>
<p><strong>How can I apply tqdm to this line?</strong></p>
<hr />
<p><strong>Code Cell:</strong></p>
<pre><code>doc_dir = "GRIs/" # contains 2 .pdfs
with open('filt_gri.txt', 'r') as filehandle:
tags = [current_place.rstrip() for current_place in filehandle.readlines()]
doc_classifier = TransformersDocumentClassifier(model_name_or_path="cross-encoder/nli-distilroberta-base",
task="zero-shot-classification",
labels=tags,
batch_size=2)
# convert to Document using a fieldmap for custom content fields the classification should run on
docs_to_classify = [Document.from_dict(d) for d in docs_sliding_window]
# classify using gpu, batch_size makes sure we do not run out of memory
classified_docs = doc_classifier.predict(docs_to_classify)
</code></pre>
|
<p>Based on this <a href="https://towardsdatascience.com/learning-to-use-progress-bars-in-python-2dc436de81e5" rel="nofollow noreferrer">TDS Article</a>, all Python progress bar libraries work with <code>for loops</code>. Hypothetically, I could alter the <code>predict()</code> function and append a bar there, but that's simply too much work.</p>
<p>Note: I'm happy to remove this answer if there is indeed a solution for non-iterably <em>"accessible"</em> processes.</p>
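<p>For comparison, where the loop <em>is</em> accessible, the wrapping pattern is trivial. Below is a stdlib-only sketch of the same idea (tqdm does exactly this for you, with a much nicer bar; the function name here is illustrative):</p>

```python
import sys

def with_progress(items):
    """Yield items while printing a crude text progress bar."""
    total = len(items)
    for i, item in enumerate(items, start=1):
        bar = '#' * (20 * i // total)
        sys.stdout.write('\r[%-20s] %d/%d' % (bar, i, total))
        sys.stdout.flush()
        yield item
    sys.stdout.write('\n')

# wrap any iterable, exactly as tqdm would
results = [x * 2 for x in with_progress(range(10))]
```

The point being: a library can only hook the loop if the loop is exposed as an iterable, which <code>predict()</code> does not do.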
|
python-3.x|jupyter-lab|tqdm
| 0 |
1,902,185 | 70,092,884 |
chromedriver is not opening the profile I want it to
|
<p><a href="https://i.stack.imgur.com/odzK4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/odzK4.png" alt="enter image description here" /></a></p>
<p>Even when I use this code, it opens chrome as a guest user.</p>
|
<p>As you are on a <a href="/questions/tagged/windows" class="post-tag" title="show questions tagged 'windows'" rel="tag">windows</a> system, you need to pass the name of the <a href="https://stackoverflow.com/questions/48079120/what-is-the-difference-between-chromedriver-and-webdriver-in-selenium/48080871#48080871">WebDriver</a> variant along with the extension, i.e. <em><code>chromedriver.exe</code></em>.</p>
<p>So you need to change the line:</p>
<pre><code>driver = webdriver.Chrome(executable_path=r'C:\foo\bar\chromedriver', options=options)
</code></pre>
<p>With:</p>
<pre><code>driver = webdriver.Chrome(executable_path=r'C:\foo\bar\chromedriver.exe', options=options)
</code></pre>
|
python|selenium|selenium-chromedriver
| 0 |
1,902,186 | 70,050,598 |
How to delete user message in discord.py?
|
<p>How can I delete a message in discord.py if it contains a word from a word list? I have already tried several methods; some gave an error and some did not, but the messages were not deleted.
Maybe I have set something wrong in Discord? I have turned on manage messages.</p>
|
<p>This code loops through every word you have in a list <code>badwords</code> and searches for them in the message content. If it finds a word that matches the word from the list it deletes it. I used <a href="https://www.programiz.com/python-programming/methods/string/lower" rel="nofollow noreferrer"><code>lower()</code></a> to make sure it doesn't matter how you write your message (<em>badword, BADWORD, BAdWoRd etc.</em>).</p>
<pre class="lang-py prettyprint-override"><code>badwords = ["badword", "another word", "example"]
@client.event
async def on_message(message):
if message.author == client.user:
return
msg = message.content
for x in badwords:
if x in msg.lower():
await message.delete()
break
await client.process_commands(message)
</code></pre>
<p><strong>I'm also pretty sure you have to enable <a href="http://discordpy.readthedocs.io/en/latest/api.html?#discord.Intents" rel="nofollow noreferrer"><code>intents.messages</code></a> to make it work.</strong></p>
|
python|discord.py
| 1 |
1,902,187 | 11,045,711 |
A private list variable is inadvertently shared between instance objects
|
<p>I created many instances of a <code>PlotHandler</code> class. An instance must keep its variables private. But the way I managed them led to a hard to detect problem: <strong>a private list variable is shared between instances</strong>! And that too without any obvious source for the leak.</p>
<p>My debugging told me that the private member function that modifies the list sees the same list, even if they are different objects.</p>
<p>Is this a "gotcha" problem? What is the best way to troubleshoot this?</p>
<hr>
<p>Here are the relevant parts (I hope they are!) of the implementation. Please see the ALL-CAPS comments: </p>
<p>The file implementing PlotHandler:</p>
<pre><code>class PlotHandler(wx.Frame):
__crop_section = None
__projection = None
__crop_xcord = None
_band_data = [] #THIS GETS SHARED
def _on_plot_click(self, xcord): #CALLED BY ANOTHER OBJECT
band = self._analyze_band( xcord )
self._band_data.append(band)
...
</code></pre>
<p>The parent class that it is managing PlotHandlers:</p>
<pre><code>class MainFrame(wx.Frame):
__close_callback__ = None
_plot_handlers = []
def __init__(self, parent, title):
...
def InitUI(self):
...
img_handler = ImageHandler(panel)
self.img_src.register_callback( img_handler.update_image )
#you need to call PlotHandler(parent, cropped)
img_handler.register_sample_callback( self._create_new_plot_handler )
...
def _create_new_plot_handler(self, cropped_sample ):
self._plot_handlers.append( PlotHandler(self, cropped_sample) ) #CREATE THEM
</code></pre>
|
<p>See <a href="https://stackoverflow.com/questions/11040438/class-variables-is-shared-across-all-instances-in-python/11040559#11040559">this question</a>, <a href="https://stackoverflow.com/questions/867219/python-class-members-initialization">this one</a>, and tons of other stuff you can find by googling "Python class variables shared", "Python FAQ class variables", etc.</p>
<p>The short answer is: variables defined directly in the class body are class variables, not instance variables, and are thus shared among instances of the class. If you want instance variables you must assign them from within a method, where you have access to <code>self</code>.</p>
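<p>A minimal sketch of the difference (the class names here are illustrative, not taken from the question's code):</p>

```python
class Shared:
    band_data = []                # class attribute: one list for every instance

class PerInstance:
    def __init__(self):
        self.band_data = []       # instance attribute: a fresh list per object

a, b = Shared(), Shared()
a.band_data.append('x')
print(b.band_data)                # ['x'] -- the append "leaked" to b

c, d = PerInstance(), PerInstance()
c.band_data.append('x')
print(d.band_data)                # [] -- d has its own list
```

So for <code>PlotHandler</code>, the fix is to assign <code>self._band_data = []</code> inside <code>__init__</code> instead of in the class body.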
|
python
| 3 |
1,902,188 | 10,994,975 |
Weird python behaviour with list of dictionaries
|
<p>I have a list of dictionaries <code>listEntries</code>. When I say <code>print listEntries</code> in IPython, I get the following output.</p>
<pre><code>[{'div': ' ',
'id': ' EE 760 ',
'instructor': 'Narayanan H.',
'instructor_type': 'I ',
'name': 'Advanced Network Analysis',
'slot': '3 ',
'student': '28'},
{'div': ' ',
'id': ' EE 764 ',
'instructor': 'Karandikar Abhay',
'instructor_type': 'I ',
'name': 'Wireless & Mobile Communication',
'slot': '1 ',
'student': '36'}]
</code></pre>
<p>But when I do <code>a = listEntries[0]</code> followed by <code>print a</code>, all I get is <code>{}</code>. </p>
<p>UPDATE: When I use the pop method, I get the expected behaviour. I have created this list by parsing a csv file. </p>
<p>Here is the code.</p>
<pre><code>import os
import csv
file1 = open('./data.txt', 'r');
csv1 = csv.reader(file1);
listEntry = list();
for row in csv1 :
# if first row is a number then this row represent course.
course = {}
if row[0].isdigit() :
course['id'] = row[1]
course['name'] = row[2]
course['instructor'] = row[3]
course['instructor_type'] = row[4]
course['slot'] = row[5]
course['div'] = row[6]
course['student'] = row[7]
else : pass
listEntry.append(course);
print listEntry[0]
</code></pre>
<p>Here is file data.txt which I am parsing. I am using python2.7</p>
<pre><code>"Running courses for year 2005-2006 and semester 2 "
" 1991-1992 1992-1993 1993-1994 1994-1995 1995-1996 1996-1997 1997-1998 1998-1999 1999-2000 2000-2001 2001-2002 2002-2003 2003-2004 2004-2005 2005-2006 2006-2007 2007-2008 2008-2009 2009-2010 2010-2011 2011-2012 2012-2013 1 - Autumn 2 - Spring 3 - Summer 4 - winter "
"Instructor Status A=Associate"
"Sr no.","Course Code","Course Name","Instructor Name","Instructor Status","Slot","Division","Enrolled students","Biometric Attendance Enabled?","Registration Limit","Restrictions","Division"
"67"," EE 760 ","Advanced Network Analysis","Narayanan H.","I ","3 "," ","28","-","0","",""
"68"," EE 764 ","Wireless & Mobile Communication","Karandikar Abhay","I ","1 "," ","36","-","0","",""
</code></pre>
|
<p>What you posted works perfectly fine:</p>
<pre><code>In [1]: L = [{'div': ' ',
   ...:       'id': ' EE 760 ',
   ...:       'instructor': 'Narayanan H.',
   ...:       'instructor_type': 'I ',
   ...:       'name': 'Advanced Network Analysis',
   ...:       'slot': '3 ',
   ...:       'student': '28'},
   ...:      {'div': ' ',
   ...:       'id': ' EE 764 ',
   ...:       'instructor': 'Karandikar Abhay',
   ...:       'instructor_type': 'I ',
   ...:       'name': 'Wireless & Mobile Communication',
   ...:       'slot': '1 ',
   ...:       'student': '36'}]
In [5]: print L[0]
{'slot': '3 ', 'name': 'Advanced Network Analysis', 'instructor_type': 'I ', 'student': '28', 'div': ' ', 'instructor': 'Narayanan H.', 'id': ' EE 760 '}
In [6]: print L[1]
{'slot': '1 ', 'name': 'Wireless & Mobile Communication', 'instructor_type': 'I ', 'student': '36', 'div': ' ', 'instructor': 'Karandikar Abhay', 'id': ' EE 764 '}
In [7]: a = L[0]
In [8]: print a
{'slot': '3 ', 'name': 'Advanced Network Analysis', 'instructor_type': 'I ', 'student': '28', 'div': ' ', 'instructor': 'Narayanan H.', 'id': ' EE 760 '}
</code></pre>
|
python
| 3 |
1,902,189 | 11,200,183 |
Creating SubCategories with Django Models
|
<p>I am trying to expand the relationships within my Django Models. I have a system where elements are stored within Categories. How do I structure my <code>models.py</code> so that each category is related to a subcategory? </p>
<p>Here is what my category model looks like:</p>
<pre><code>class Category(models.Model):
site = models.ForeignKey(Site)
template_prefix = models.CharField(max_length=200, blank=True)
name = models.CharField(max_length=200)
slug = models.SlugField()
description = models.TextField(default='')
sortby_fields = models.CharField(max_length=200,
help_text=_(u'A comma separated list of field names that should show up as sorting options.'),
blank=True)
sort_order = models.PositiveIntegerField(default=0)
def __unicode__(self):
return self.name + u' Category'
class Meta:
verbose_name_plural = u'categories'
</code></pre>
<p>Thanks for any suggestions.</p>
|
<p>You can create a foreign key to itself:</p>
<pre><code>class Category(models.Model):
...
parent_category = models.ForeignKey('self', null=True, blank=True)
</code></pre>
<p>Then you can assign any existing <code>Category</code> instance as the <code>parent_category</code> of another instance. Then, if you wanted to find all of the subcategories of a given <code>Category</code> instance, you would do something like:</p>
<pre><code>subcategories = Category.objects.filter(
parent_category__id=target_category.id)
</code></pre>
|
python|django|models|categories
| 18 |
1,902,190 | 70,584,848 |
How to make output file name reflect for loop list in python?
|
<p>I have a function that does an action for every item in a list and then outputs a final product file for each item in that list. What I am trying to do is append a string to each of the output files that reflects the items in the initial list. I will explain more in detail here.</p>
<p>I have the following code:</p>
<pre><code>list = ['Blue', 'Red', 'Green']
df = pd.read_csv('data.csv')
for i in list:
df_new = (df[i] * 2)
df_new.to_csv('Path/to/My/Folder/Name.csv')
</code></pre>
<p>Ok, so what is going on here is that I have a file 'data.csv' that has 3 columns with unique values: the 'Blue' column, the 'Red' column, and the 'Green' column. From all of this code, I want to produce 3 separate output .csv files: one file where the 'Blue' column is multiplied by 2, the next where the 'Red' column is multiplied by 2, and lastly a file where the 'Green' column is multiplied by 2. So what my code does is first write a list of all the column names, then open the .csv file as a dataframe, and then FOR EACH color in the list, multiply that column by 2, and send each product to a new dataframe.</p>
<p>What I am confused about is how to go about naming the output .csv files so I know which is which, specifically which column was multiplied by 2. Specifically I simply want my files named as such: 'Name_Blue.csv', 'Name_Red.csv', and 'Name_Green.csv'. How can I do this so that it works with my for loop code? I am not sure what this iterative naming process would even be called.</p>
|
<p>You need to use a formatted string (a string with an <em>f</em> at the beginning). For example:</p>
<pre><code>name = "foo"
greeting = f'Hello, {name}!'
</code></pre>
<p>Inside those curly brackets is the variable you want to put in the string. So here's the modified code:</p>
<pre><code>colors = ['Blue', 'Red', 'Green']
df = pd.read_csv('data.csv')
for i in colors:
df_new = (df[i] * 2)
df_new.to_csv(f'Path/to/My/Folder/Name_{i}.csv')
</code></pre>
<p>Now the formatted string in the last line will input i (the item in the list) as the name of the file!</p>
|
python|pandas|loops|csv
| 2 |
1,902,191 | 70,469,790 |
Unexpected output - prime numbers
|
<p>I wrote an answer to a program that gets a number from the user and displays whether the number is prime. This is a beginner exercise. <br></p>
<p>Here is the question: <br>
<br></p>
<blockquote>
<p>A prime number is a number that is only evenly divisible by itself and 1. For example, the number 5 is prime because it can only be evenly divided by 1 and 5. The number 6, however, is not prime because it can be divided evenly by 1, 2, 3 and 6. Write a Boolean function named is_prime which takes an integer as an argument and returns true if the argument is a prime number, or false otherwise. Use the function in a program that prompts the user to enter a number then displays a message indicating whether the number is prime.
<br></p>
</blockquote>
<br>
Program I wrote :
<br>
<pre><code>def main():
# Get a number
number = get_number()
# Check the number is a prime
status = is_prime(number)
# Display a message if the number is prime or not
display_prime(status)
def get_number():
number = int(input('Enter a number: '))
return number
def is_prime(number):
if (number % 1 == 0) and (number % number == 0):
status = True
else:
status = False
return status
def display_prime(status):
if is_prime(status):
print('The number is a prime')
else:
print('The number is not a prime.')
main()
</code></pre>
<br>
Here is the output:
<br>
<pre><code>Enter a number: 6
The number is a prime
</code></pre>
<br>
<p>I can't find where I went wrong. Can you help me?
<br>
Thank you in advance.</p>
|
<p>Where you went wrong is you ignored the word "only" in the definition of a prime number:</p>
<blockquote>
<p>A prime number is a number that is <em>only</em> evenly divisible by itself and 1</p>
</blockquote>
<p>Every integer, prime or otherwise, is evenly divisible by itself and 1. The difference is that a prime number is <em>only</em> divisible by itself and 1. Because your function is simply checking that the input is divisible by itself and by 1, <em>every</em> input results in returning true.</p>
<p>What you want to do instead is write a function that determines that the number is <em>not</em> evenly divisible by any other number other than itself and 1.</p>
<p>There are some good examples of how to do that here:
<a href="https://www.programiz.com/python-programming/examples/prime-number" rel="nofollow noreferrer">https://www.programiz.com/python-programming/examples/prime-number</a></p>
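<p>For reference, one common way to write such a check is trial division up to the square root; a minimal sketch (not taken from the linked page):</p>

```python
def is_prime(number):
    """Trial division: a prime is > 1 and has no divisor in [2, sqrt(n)]."""
    if number < 2:
        return False
    for divisor in range(2, int(number ** 0.5) + 1):
        if number % divisor == 0:
            return False
    return True

print(is_prime(5))   # True
print(is_prime(6))   # False
```

This keeps the question's structure (a Boolean <code>is_prime</code> function) while actually testing divisors other than 1 and the number itself.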
|
python|python-3.x|function
| 2 |
1,902,192 | 70,636,079 |
Why would a much lighter Keras model run at the same speed at inference as the much larger original model?
|
<p>I trained a Keras model with the following architecture:</p>
<pre><code>def make_model(input_shape, num_classes):
inputs = keras.Input(shape=input_shape)
# Image augmentation block
x = inputs
# Entry block
x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(x)
x = layers.Conv2D(32, 3, strides=2, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.Conv2D(64, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
previous_block_activation = x # Set aside residual
for size in [128, 256, 512, 728]:
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(size, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(size, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
# Project residual
residual = layers.Conv2D(size, 1, strides=2, padding="same")(
previous_block_activation
)
x = layers.add([x, residual]) # Add back residual
previous_block_activation = x # Set aside next residual
x = layers.SeparableConv2D(1024, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.GlobalAveragePooling2D()(x)
if num_classes == 2:
activation = "sigmoid"
units = 1
else:
activation = "softmax"
units = num_classes
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(units, activation=activation)(x)
return keras.Model(inputs, outputs)
</code></pre>
<p>And that model has over 2 million trainable parameters.</p>
<p>I then trained a much lighter model with only 300,000 trainable parameters:</p>
<pre><code>def make_model(input_shape, num_classes):
inputs = keras.Input(shape=input_shape)
# Image augmentation block
x = inputs
# Entry block
x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(x)
x = layers.Conv2D(64, kernel_size=(7, 7), activation=tf.keras.layers.LeakyReLU(alpha=0.01), padding = "same", input_shape=image_size + (3,))(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Conv2D(192, kernel_size=(3, 3), activation=tf.keras.layers.LeakyReLU(alpha=0.01), padding = "same", input_shape=image_size + (3,))(x)
x = layers.Conv2D(128, kernel_size=(1, 1), activation=tf.keras.layers.LeakyReLU(alpha=0.01), padding = "same", input_shape=image_size + (3,))(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Conv2D(128, kernel_size=(3, 3), activation=tf.keras.layers.LeakyReLU(alpha=0.01), padding = "same", input_shape=image_size + (3,))(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Dropout(0.5)(x)
x = layers.GlobalAveragePooling2D()(x)
if num_classes == 2:
activation = "sigmoid"
units = 1
else:
activation = "softmax"
units = num_classes
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(units, activation=activation)(x)
return keras.Model(inputs, outputs)
</code></pre>
<p>However, the last model (which is much lighter and even accepts a smaller input size) seems to run at the same speed, only classifying 2 images per second. Shouldn't there be a difference in speed, given it's a smaller model? Looking at the code, is there a glaring reason why that wouldn't be the case?</p>
<p>I'm using the same method at inference in both cases:</p>
<pre><code>image_size = (180, 180)
batch_size = 32
model = keras.models.load_model('model_13.h5')
t_end = time.time() + 10
iters = 0
while time.time() < t_end:
img = keras.preprocessing.image.load_img(
"test2.jpg", target_size=image_size
)
img_array = image.img_to_array(img)
#print(img_array.shape)
img_array = tf.expand_dims(img_array, 0) # Create batch axis
predictions = model.predict(img_array)
score = predictions[0]
print(score)
iters += 1
if score < 0.5:
print('Fire')
else:
print('No Fire')
print('TOTAL: ', iters)
</code></pre>
|
<p>The number of parameters is at most an indication of how fast a model trains or runs inference. Speed may depend on many other factors.</p>
<p>Here some examples, which might influence the throughput of your model:</p>
<ol>
<li>The activation function: ReLU activations are faster than e.g. ELU or GELU, which have exponential terms. Not only is computing an exponential slower than a linear function, the gradient is also much more complex to compute; in the case of ReLU the gradient is a constant, the slope of the activation (e.g. 1).</li>
<li>The bit precision used for your data. Some HW accelerators can compute faster in float16 than in float32, and reading fewer bits also decreases latency.</li>
<li>Some layers might not have parameters but perform fixed calculations. Even though no parameters are added to the network's weights, a computation is still performed.</li>
<li>The architecture of your training HW. Certain filter sizes and batch sizes can be computed more efficiently than others.</li>
<li>Sometimes the speed of the computing HW is not the bottleneck, but rather the input pipeline for loading and preprocessing your data.</li>
</ol>
<p>It's hard to tell without testing, but in your particular example I would guess that the following might slow down your inference:</p>
<ol>
<li>large receptive field with a 7x7 conv</li>
<li>leaky_relu is slightly slower than relu</li>
<li>Probably your data input pipeline is the bottleneck, not the inference speed. If the inference speed is much faster than the data preparation, it might appear that both models have the same speed. But in reality the HW is idle and waits for data.</li>
</ol>
<p>To understand what's going on, you could either change some parameters and evaluate the speed, or you could analyze your input pipeline by tracing your hardware using TensorBoard. Here is a small guide: <a href="https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras" rel="nofollow noreferrer">https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras</a></p>
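<p>A quick way to check the pipeline-bottleneck hypothesis without a profiler is to time the two stages separately; a minimal sketch (the two functions below are placeholders for your preprocessing and <code>model.predict</code> calls, not real Keras code):</p>

```python
import time

def load_and_preprocess():      # placeholder for load_img / img_to_array / expand_dims
    return [0.0] * 32400        # roughly a 180x180 grayscale image

def predict(batch):             # placeholder for model.predict
    return sum(batch)

prep_time = infer_time = 0.0
for _ in range(100):
    t0 = time.perf_counter()
    batch = load_and_preprocess()
    t1 = time.perf_counter()
    predict(batch)
    t2 = time.perf_counter()
    prep_time += t1 - t0        # time spent preparing data
    infer_time += t2 - t1       # time spent in the model

print('data prep: %.4f s, inference: %.4f s' % (prep_time, infer_time))
```

If the first number dominates in your real loop, both models will appear equally "slow" regardless of parameter count.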
<p>Best,
Sascha</p>
|
python|tensorflow|machine-learning|keras|computer-vision
| 2 |
1,902,193 | 63,477,746 |
Fabric, fabfile, fab: paramiko.ssh_exception.SSHException: No existing session
|
<p>cf_key is my private ssh key. Please take a look at my traceback and my code to see what I'm doing wrong. I have redacted the actual server name and replaced it with "&lt;servername&gt;". Looks like "key cannot be used for signing." What is wrong with my key?</p>
<p>I executed the following command: <code>fab doit 'cf_key' refresh 'dev'</code> and got this error:</p>
<pre><code>maintenance on
Exception: key cannot be used for signing
Traceback (most recent call last):
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/paramiko/transport.py", line 2109, in run
handler(self.auth_handler, m)
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/paramiko/auth_handler.py", line 298, in _parse_service_accept
sig = self.private_key.sign_ssh_data(blob)
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/paramiko/agent.py", line 418, in sign_ssh_data
raise SSHException("key cannot be used for signing")
paramiko.ssh_exception.SSHException: key cannot be used for signing
Traceback (most recent call last):
File "/home/michael/projects/campaignfinances/venv/bin/fab", line 8, in <module>
sys.exit(program.run())
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/invoke/program.py", line 384, in run
self.execute()
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/invoke/program.py", line 566, in execute
executor.execute(*self.tasks)
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/invoke/executor.py", line 129, in execute
result = call.task(*args, **call.kwargs)
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/invoke/tasks.py", line 127, in __call__
result = self.body(*args, **kwargs)
File "/home/michael/projects/campaignfinances/fabfile.py", line 51, in refresh
conn.run('sudo ln -sf /home/django/sites/{0}.campaignfinances.org/src/project/templates/maintenance.html {0}-maintenance.html'.format(server))
File "<decorator-gen-3>", line 2, in run
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/fabric/connection.py", line 29, in opens
self.open()
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/fabric/connection.py", line 634, in open
self.client.connect(**kwargs)
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/paramiko/client.py", line 446, in connect
passphrase,
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/paramiko/client.py", line 764, in _auth
raise saved_exception
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/paramiko/client.py", line 740, in _auth
self._transport.auth_publickey(username, key)
File "/home/michael/projects/campaignfinances/venv/lib/python3.6/site-packages/paramiko/transport.py", line 1570, in auth_publickey
raise SSHException("No existing session")
paramiko.ssh_exception.SSHException: No existing session
</code></pre>
<p>fabfile.py</p>
<pre><code>###################################################################
#
# Usage:
# fab doit(path_to_ssh_key) refresh('dev|staging')
# fab doit(path_to_ssh_key) maintenanceon('dev|staging')
# fab doit(path_to_ssh_key) maintenanceoff('dev|staging')
#
# fab doit(path_to_ssh_key) productionrefresh
#
# Example: fab doit('c:\users\tom\.ssh\id_rsa.pem') maintenanceon('dev')
#
# If you use a passphrase then add --prompt-for-passphrase
####################################################################
from fabric import task, Connection
@task
def doit(ctx, keypath):
ctx.user = 'django'
ctx.host = '<servername>'
ctx.connect_kwargs.key_filename = ''.format(keypath)
@task
def maintenanceon(ctx, server):
conn = Connection(ctx.host, ctx.user, connect_kwargs=ctx.connect_kwargs)
# create ln to maintenance file
print('maintenance on')
with conn.cd('/usr/share/nginx/html/'):
conn.run('sudo ln -sf /home/django/sites/{0}.<servername>/src/project/templates/maintenance.html {0}-maintenance.html'.format(server))
@task
def maintenanceoff(ctx, server):
conn = Connection(ctx.host, ctx.user, connect_kwargs=ctx.connect_kwargs)
# create ln to maintenance file
print('maintenance off')
with conn.cd('/usr/share/nginx/html/'):
conn.run('sudo unlink {}-maintenance.html'.format(server))
@task
def refresh(ctx, server):
env_command = '. /home/django/sites/{0}.<servername>.com/{0}/bin/activate'.format(server)
conn = Connection(ctx.host, ctx.user, connect_kwargs=ctx.connect_kwargs)
# set to maintenance mode
with conn.cd('/usr/share/nginx/html/'):
print('maintenance on')
conn.run('sudo ln -sf /home/django/sites/{0}.<servername>.com/src/project/templates/maintenance.html {0}-maintenance.html'.format(server))
# refresh install
with conn.cd('/home/django/sites/{}.<servername>.com/src/'.format(server)):
print('git pull')
conn.run('git pull')
# check requirements
with conn.cd('/home/django/sites/{}.<servername>.com/src/requirements/'.format(server)):
print('pip-sync')
conn.run(env_command + '&&' + 'pip-sync {}.txt'.format(server))
# run migrations and collectstatic
with conn.cd('/home/django/sites/{}.<servername>.com/src/project/'.format(server)):
print('migrate')
conn.run(env_command + '&&' + 'python manage.py migrate')
print('collecstatic')
conn.run(env_command + '&&' + 'python manage.py collectstatic --no-input')
# restart server
print('restart server')
conn.sudo('systemctl restart {}.service'.format(server), pty=True)
# maintenance mode off
with conn.cd('/usr/share/nginx/html/'):
print('maintenance off)')
conn.run('sudo unlink {}-maintenance.html'.format(server))
@task
def productionrefresh(ctx):
env_command = '. /home/django/sites/www.<servername>.com/www/bin/activate'
conn = Connection(ctx.host, ctx.user, connect_kwargs=ctx.connect_kwargs)
# set to maintenance mode
with conn.cd('/usr/share/nginx/html/'):
print('Set to maintenance mode')
conn.run('sudo ln -sf /home/django/sites/www.<servername>.com/src/project/templates/maintenance.html prod-maintenance.html')
# refresh install
with conn.cd('/home/django/sites/www.<servername>.com/src/'):
print('Git pull')
conn.run('git pull')
# check requirements
with conn.cd('/home/django/sites/www.<servername>.com/src/requirements/'):
print('pip-sync production.txt')
conn.run(env_command + '&&' + 'pip-sync production.txt')
# run migrations and collectstatic
with conn.cd('/home/django/sites/www.<servername>.com/src/project/'):
print('python manage.py migrate')
conn.run(env_command + '&&' + 'python manage.py migrate')
print('python manage.py collectstatic')
conn.run(env_command + '&&' + 'python manage.py collectstatic --no-input')
# restart server
print('restart production service')
conn.sudo('systemctl restart production.service', pty=True)
# maintenance mode off
with conn.cd('/usr/share/nginx/html/'):
print('maintenance off')
conn.run('sudo unlink prod-maintenance.html')
@task
def productioncollect(ctx):
env_command = '. /home/django/sites/www.<servername>.com/www/bin/activate'
conn = Connection(ctx.host, ctx.user, connect_kwargs=ctx.connect_kwargs)
with conn.cd('/home/django/sites/www.<servername>.com/src/project/'):
conn.run(env_command + '&&' + 'python manage.py collectstatic --no-input')
</code></pre>
|
<p>Are you able to ssh to the server using the openssh client with that key? The output indicates that your key can't be used for signing, which I've never seen before.</p>
<p>Also, test that the key is valid for signing like this:</p>
<pre><code>redkrieg@cortex-0:~$ echo "signme" > testfile
redkrieg@cortex-0:~$ ssh-keygen -Y sign -f cf_key -n testsigning testfile
Signing file testfile
Write signature to testfile.sig
</code></pre>
<p>Make sure testfile.sig has an SSH SIGNATURE block in it after this.</p>
|
python|fabric
| 0 |
1,902,194 | 63,712,100 |
How to type-hint a function return based on input parameter value?
|
<p>How can I type-hint a function in Python based on the <em>value</em> of an input parameter?</p>
<p>For instance, consider the following snippet:</p>
<pre><code>from typing import Iterable
def build(
source: Iterable,
factory: type
) -> ?: # what can I write here?
return factory(source)
as_list = build('hello', list) # -> list ['h', 'e', 'l', 'l', 'o']
as_set = build('hello', set) # -> set {'h', 'e', 'l', 'o'}
</code></pre>
<p>When building <code>as_list</code>, the value of <code>factory</code> is <code>list</code>, and this should be the type annotation.</p>
<p>I am aware of <a href="https://stackoverflow.com/questions/52445559/how-can-i-type-hint-a-function-where-the-return-type-depends-on-the-input-type-o">this other question</a>, but, in that case, the return type depended only on the input <em>types</em>, not on their <em>values</em>.
I would like to have <code>def build(source: Iterable, factory: type) -> factory</code>, but of course this doesn't work.</p>
<p>I am also aware of <a href="https://docs.python.org/3/library/typing.html#typing.Literal" rel="nofollow noreferrer">Literal types</a> in Python 3.8+, and something similar to this could be achieved:</p>
<pre><code>from typing import Iterable, Literal, overload
from enum import Enum
FactoryEnum = Enum('FactoryEnum', 'LIST SET')
@overload
def build(source: Iterable, factory: Literal[FactoryEnum.LIST]) -> list: ...
@overload
def build(source: Iterable, factory: Literal[FactoryEnum.SET]) -> set: ...
</code></pre>
<p>But this solution would make <code>factory</code> useless (I could just define two functions <code>build_list(source) -> list</code> and <code>build_set(source) -> set</code>).</p>
<p>How can this be done?</p>
|
<p>Rather than using <code>type</code>, you could use a <a href="https://docs.python.org/3/library/typing.html#generics" rel="nofollow noreferrer">generic</a> and define the <code>factory</code> as a <code>Callable</code>, as follows:</p>
<pre><code>from typing import Callable, Iterable, TypeVar
T = TypeVar('T')
def build(
source: Iterable,
factory: Callable[[Iterable], T]
) -> T:
return factory(source)
</code></pre>
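<p>Putting the pieces together — assuming the definition above, a type checker such as mypy infers the concrete return type from the <code>factory</code> argument at each call site:</p>

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar('T')

def build(source: Iterable, factory: Callable[[Iterable], T]) -> T:
    # The return type T is bound to whatever type factory produces
    return factory(source)

as_list = build('hello', list)  # mypy infers: list
as_set = build('hello', set)    # mypy infers: set
```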
|
python|python-3.x|type-hinting
| 2 |
1,902,195 | 55,613,728 |
Error attempting to write html tagged text to .txt file
|
<p>Receiving the following error attempting to write dictionary key value containing HTML tags to a text file.</p>
<pre><code>Traceback (most recent call last):
File "/Users/jackboland/PycharmProjects/NLTK_example/JsonToTxt.py", line 11, in <module>
data = json.load(json_data)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc0 in position 10: invalid start byte
</code></pre>
<p>I have a set of JSON files. I am successfully extracting that data to a Python dictionary. Then, from there, I am identifying the dictionary key whose value is the longest and extracting that value to a text file. The code works for all JSON files whose longest dictionary key value is a string. It is throwing the above error for files whose longest dictionary key value is html content.</p>
<pre><code>with open(path + file) as json_data:
data = json.load(json_data)
for value in data.values(): # gets the value of each dictionary key
value = str(value) # converts the value of each dictionary key to a string enabling counting total characters
vLength = len(value) # calculates the length of each value in every dictionary key to enable identifying only the longest of the key values
if vLength > 100: # if the length of a value is over 200 characters, it prints that, this insures capturing the opinion text no matter what dictionary key it is in
f = open(newpath + file[:-5] + ".txt", 'w+')
f.write(value)
f.close()
</code></pre>
<p>The dictionary key values that are strings are parsing from the dictionary and into a text file. It is only the dictionary key values that contain html that are not being written to a text file.</p>
|
<p>Python tries to decode the byte array into a Unicode string. While doing so, it encounters a byte sequence that is not allowed in UTF-8-encoded strings (here, byte 0xc0 at position 10).</p>
<p>Try reading the file in binary mode so that the content of the file remains as bytes.</p>
<pre><code>with open(path + file, 'rb') as json_data:
    # rest of the code
</code></pre>
<p>If this doesn't work, manually specify the encoding.</p>
<p>Example : </p>
<pre><code>with open(path + file, encoding="utf-8") as json_data:
    # rest of the code
</code></pre>
<p>You can get the various encoding formats here.</p>
<p><a href="https://docs.python.org/2.4/lib/standard-encodings.html" rel="nofollow noreferrer">https://docs.python.org/2.4/lib/standard-encodings.html</a></p>
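<p>If the file is mostly valid UTF-8 with only a few stray bytes, a further option (a sketch, not part of the original answer) is to decode with <code>errors='replace'</code>, which substitutes undecodable bytes instead of raising:</p>

```python
import json

# Hypothetical raw file content with an invalid UTF-8 start byte (0xc0)
raw = b'{"key": "abc\xc0def"}'

# Replace the bad byte with U+FFFD instead of raising UnicodeDecodeError
text = raw.decode('utf-8', errors='replace')
data = json.loads(text)
print(data['key'])
```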
|
python
| 0 |
1,902,196 | 55,830,738 |
Django serializer.data contains non existant value
|
<h2>Context</h2>
<p>I have an API endpoint that is used for devices to perform a check-in.</p>
<p>When a device is <strong>not</strong> known:</p>
<ul>
<li>A new <code>CustomerDevice</code> instance is created</li>
<li>A new <code>DeviceStatus</code> instance is created related to the newly created <code>CustomerDevice</code></li>
</ul>
<p>API response:</p>
<pre><code>{
"customer_device_uuid": "646aaff6-debf-4281-bd7f-064dd6dc8ab8",
"device_group": {
"group_uuid": "ebd0990b-aeb5-46a4-9fad-82237a5a5118",
"device_group_name": "Default",
"color": "4286f4",
"is_default": true
}
}
</code></pre>
<p>When a device is known:</p>
<ul>
<li>A new <code>DeviceStatus</code> instance is created related to the device that made the request.</li>
</ul>
<p>API response:</p>
<pre><code>{
"customer_device_uuid": "fbbdf1d1-766d-40a9-961f-2c5a5cb3db6e"
}
</code></pre>
<h2>The problem</h2>
<p>The problem is the API response when a known device performs a check-in. The returned <code>customer_device_uuid</code> does not exist in the database and seems like a randomly generated <code>uuid</code>.</p>
<p>I want the API response for a known device to be the same as the API response for an unknown device.</p>
<p><code>serializer.data</code> contains the random uuid until the <code>else</code> statement. From there <code>serializer.data</code> contains the data I need in my response.</p>
<p>I tried to call <code>self.perform_create</code> in the <code>if</code> block. This results in <code>serializer.data</code> having the correct data for the response. However, this creates a new <code>CustomerDevice</code> and related <code>DeviceStatus</code> for every check-in no matter if the device is known or unknown.</p>
<p>My <code>views.py</code>:</p>
<pre><code>class DeviceCheckinViewSet(viewsets.ModelViewSet):
serializer_class = CheckinSerializer
queryset = CustomerDevice.objects.all()
http_method_names = 'post'
def create(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
# if the device is known -> just log the data
if CustomerDevice.objects.filter(device_id_android=serializer.validated_data['device_id_android']).exists():
self.log_device_status(request)
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
# if device is not known -> link it to the customer and log the device data
else:
self.perform_create(serializer)
customer_device = CustomerDevice.objects.get(device_id_android=request.data['device_id_android'])
DeviceStatus.objects.create(
customer_device_id=customer_device.customer_device_uuid,
disk_space=request.data['disk_space'],
battery_level=request.data['battery_level'],
battery_cycles=request.data['battery_cycles'],
battery_health=request.data['battery_health']
)
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
# creates new instance with default group
def perform_create(self, serializer):
serializer.save(group_uuid=get_default_group())
def log_device_status(request):
DeviceStatus.objects.create(
customer_device_id=request.data['customer_device_uuid'],
disk_space=request.data['disk_space'],
battery_level=request.data['battery_level'],
battery_cycles=request.data['battery_cycles'],
battery_health=request.data['battery_health']
)
</code></pre>
<p>My <code>serializers.py</code></p>
<pre><code>class DeviceGroupSerializer(serializers.ModelSerializer):
class Meta:
model = DeviceGroup
fields = ('group_uuid', 'device_group_name', 'color', 'is_default')
class CheckinSerializer(serializers.ModelSerializer):
device_group = DeviceGroupSerializer(many=False, read_only=True, source='group_uuid')
class Meta:
model = CustomerDevice
fields = ('customer_device_uuid', 'customer_uuid', 'device_id_android', 'device_group')
extra_kwargs = {
'customer_uuid': {'write_only': True},
'device_id_android': {'write_only': True}
}
</code></pre>
<h2>Desired outcome</h2>
<p>The below API response either when a device is known or unknown.</p>
<pre><code>{
"customer_device_uuid": "646aaff6-debf-4281-bd7f-064dd6dc8ab8",
"device_group": {
"group_uuid": "ebd0990b-aeb5-46a4-9fad-82237a5a5118",
"device_group_name": "Default",
"color": "4286f4",
"is_default": true
}
}
</code></pre>
<h2>EDIT</h2>
<p>Added <code>DeviceGroupSerializer()</code> to <code>serializers.py</code></p>
<p>Relevant <code>models.py</code>:</p>
<pre><code># Customer table
class Customer(models.Model):
customer_uuid = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False, db_index=True)
customer_name = models.CharField(max_length=128, unique=True)
users = models.ManyToManyField('auth.User', related_name='customers')
def __str__(self):
return self.customer_name
class Meta:
db_table = 'customer'
# Device group table
class DeviceGroup(models.Model):
group_uuid = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False, db_index=True)
customer_uuid = models.ForeignKey(Customer, on_delete=models.DO_NOTHING)
device_group_name = models.CharField(max_length=20)
color = models.CharField(max_length=8)
is_default = models.BooleanField(default=False)
def __str__(self):
return self.device_group_name
class Meta:
db_table = 'device_group'
# Customer_device table
class CustomerDevice(models.Model):
customer_uuid = models.ForeignKey(Customer, on_delete=models.CASCADE)
customer_device_uuid = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
group_uuid = models.ForeignKey(DeviceGroup, null=True, blank=True, on_delete=models.SET(get_default_group))
device_id_android = models.CharField(max_length=100, blank=True, null=True) # generated by client
def __repr__(self):
return '%r' % self.customer_device_uuid
class Meta:
db_table = 'customer_device'
unique_together = (('customer_uuid', 'customer_device_uuid'),) # composite key
# Device status table
class DeviceStatus(models.Model):
device_status_uuid = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False, db_index=True)
disk_space = models.PositiveIntegerField(blank=True, null=True)
battery_level = models.PositiveIntegerField(blank=True, null=True)
battery_health = models.CharField(max_length=30)
battery_cycles = models.PositiveIntegerField(blank=True, null=True)
customer_device = models.ForeignKey(CustomerDevice, on_delete=models.CASCADE, related_name="customer_device")
def __repr__(self):
return '%r' % self.device_status_uuid
class Meta:
db_table = 'device_status'
</code></pre>
|
<p>I have instantiated a <code>Customer</code> with uuid <code>e73141ab-883c-44bc-8ce2-21b492cab03e</code> and a <code>DeviceGroup</code> with <code>is_default</code> set to <code>True</code> so that <code>get_default_group</code> can assign it.</p>
<p>Now what happens is that when I call the endpoint with:</p>
<pre><code>{
"customer_uuid": "e73141ab-883c-44bc-8ce2-21b492cab03e",
"customer_device_uuid": "444b944a-4bcf-4ea2-8860-1d3ca82fd1d3",
"device_id_android": "111",
"disk_space": 10,
"battery_level": 10,
"battery_health": "good",
"battery_cycles": 10
}
</code></pre>
<p>The response that is yielded is:</p>
<pre><code>{
"customer_device_uuid": "0d53c7d4-ce34-42ce-9dd8-5e7ea07c4438",
"device_group": {
"group_uuid": "5ee2f30e-bfd1-4f31-83ad-753d1d7220ad",
"device_group_name": "",
"color": "",
"is_default": true
}
}
</code></pre>
<p>It is exactly what you needed. My assumption is that your <code>get_default_group</code> is not returning a default group.</p>
<p>The reason the serializer does not yield your desired output the second time is that you are serializing the request data instead of the instance, and the request data doesn't contain a <code>group_uuid</code>. So you need to do the following:</p>
<pre><code># if the device is known -> just log the data
customer_device = CustomerDevice.objects.filter(
device_id_android=serializer.validated_data['device_id_android']).first()
if customer_device:
self.log_device_status(request)
headers = self.get_success_headers(serializer.data)
customer_device_serializer = CheckinSerializer(instance=customer_device)
return Response(customer_device_serializer.data, status=status.HTTP_201_CREATED, headers=headers)
</code></pre>
|
python|django|django-rest-framework
| 1 |
1,902,197 | 56,635,398 |
Remove tuple from list of tuples if tuple's elements aren't in a list of strings
|
<p>I'm working on some code where I need to remove a tuple from a list of tuples if the tuple doesn't contain all strings in a separate list. I've got it working in a for loop, but I'm trying to improve the efficiency of my code. As an example, if I have</p>
<pre class="lang-py prettyprint-override"><code>list_of_tups = [('R', 'S', 'T'), ('A', 'B'), ('L', 'N', 'E'), ('R', 'S', 'T', 'L'), ('R', 'S', 'T', 'L', 'N', 'E')]
needed_strings = ['R', 'S', 'T']
</code></pre>
<p>I want to keep the following tuples in my list:</p>
<pre class="lang-py prettyprint-override"><code>[('R', 'S', 'T'), ('R', 'S', 'T', 'L'), ('R', 'S', 'T', 'L', 'N', 'E')]
</code></pre>
<p>This works in the following for-loop:</p>
<pre class="lang-py prettyprint-override"><code>for s in needed_strings:
for tup in list_of_tups:
if s not in tup:
list_of_tups.remove(tup)
</code></pre>
<p>However, I'd like for this to be done via list comprehension. My attempts at doing this result in a list of tuples where <em>any</em> of the strings, not <em>all</em>, appear in the tuple.</p>
|
<p>You can use <code>all</code> with a nested comprehension:</p>
<pre><code>list_of_tups = [('R', 'S', 'T'), ('A', 'B'), ('L', 'N', 'E'), ('R', 'S', 'T', 'L'), ('R', 'S', 'T', 'L', 'N', 'E')]
needed_strings = ['R', 'S', 'T']
[t for t in list_of_tups if all(c in t for c in needed_strings)]
</code></pre>
<p><strong>result</strong></p>
<pre><code>[('R', 'S', 'T'), ('R', 'S', 'T', 'L'), ('R', 'S', 'T', 'L', 'N', 'E')]
</code></pre>
<p>So long as the lists contain hashable items, an alternative that may be a little easier to read is to make <code>needed_strings</code> a <code>set</code>. Then you can use <code>issubset()</code></p>
<pre><code>list_of_tups = [('R', 'S', 'T'), ('A', 'B'), ('L', 'N', 'E'), ('R', 'S', 'T', 'L'), ('R', 'S', 'T', 'L', 'N', 'E')]
needed_strings = set(['R', 'S', 'T'])
[t for t in list_of_tups if needed_strings.issubset(t)]
</code></pre>
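<p>A side note, not part of the original answer: the <code>for</code> loop in the question removes items from <code>list_of_tups</code> while iterating over it, which can silently skip elements as indices shift — another reason to prefer building a new list with a comprehension. A minimal demonstration of the pitfall:</p>

```python
items = [1, 2, 2, 3]
for x in items:
    if x == 2:
        items.remove(x)  # shifts the remaining elements left

# The second 2 was skipped because the iterator moved past it
print(items)
```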
|
python|list-comprehension
| 5 |
1,902,198 | 56,512,465 |
How to return a copy of an instance of a class?
|
<p>I am currently practising <code>python</code> on code wars, here is a prompt:</p>
<p>Create a <code>Vector</code> object that supports addition, subtraction, dot products, and norms. So, for example:</p>
<pre><code> a = Vector([1, 2, 3])
b = Vector([3, 4, 5])
c = Vector([5, 6, 7, 8])
a.add(b) # should return a new Vector([4, 6, 8])
a.subtract(b) # should return a new Vector([-2, -2, -2])
a.dot(b) # should return 1*3 + 2*4 + 3*5 = 26
a.norm() # should return sqrt(1^2 + 2^2 + 3^2) = sqrt(14)
a.add(c) # raises an exception
</code></pre>
<p>I have written functions add and subtract that pass some of the tests. However, I am running into issues with overwriting my previous list values of 'a' after running the add function. When I go into subtract, the 'a' values in the <code>vector</code> are the summations computed from the previous instance of the add function.</p>
<p>I suspect it's due to this line of code:
<code>return self.__class__(self.list)</code>, causing the instance of the class to overwrite itself.</p>
<p>Kindly please help, I believe I need to return a copy of the instance of the class but don't know how to do it.</p>
<pre><code> class Vector:
def __init__(self, list):
self.list = list #[1,2]
self.copylist = list
def add(self,Vector):
try:
self.list = self.copylist
#take list from other vector
other = Vector.list
#take each value from other Vector and add it to self.list
for index,item in enumerate(Vector.list,0):
self.list[index] = item + self.list[index]
except:
print("Different size vectors")
#return the instance of a class
return self.__class__(self.list)
def subtract(self,Vector):
self.list = self.copylist
other = Vector.list
print(self.list)
print(other)
for index,item in enumerate(Vector.list,0):
self.list[index] = self.list[index] - item
return self.__class__(self.list)
def dot(self,Vector):
self.list = self.copylist
other = Vector.list
#print(self.list)
#print(other)
running_sum =0
for index,item in enumerate(Vector.list,0):
running_sum = running_sum + item * self.list[index]
#print(running_sum, " ", self.list[index], " ", item)
return running_sum
def norm(self):
running_sum = 0
for item in self.list:
running_sum += item**2
return running_sum ** 0.5
def toString(self):
return str(self.list)
def equals(self,Vector):
return self.list == Vector.list
</code></pre>
<p>Here are some of the tests:</p>
<pre><code> a = Vector([1, 2])
b = Vector([3, 4])
test.expect(a.add(b).equals(Vector([4, 6])))
a = Vector([1, 2, 3])
b = Vector([3, 4, 5])
test.expect(a.add(b).equals(Vector([4, 6, 8])))
test.expect(a.subtract(b).equals(Vector([-2, -2, -2]))) #code fails here
test.assert_equals(a.dot(b), 26)
test.assert_equals(a.norm(), 14 ** 0.5)
</code></pre>
|
<p>I think you're making this more complicated than it needs to be. You shouldn't be working with class objects at all. You should just be working with instances of the Vector class. Here's what I think your code should look like:</p>
<pre><code>class Vector:
def __init__(self, initial_elements):
self.elements = list(initial_elements) # make a copy of the incoming list of elements
def add(self, other):
# insure that the two vectors match in length
if len(self.elements) != len(other.elements):
raise Exception("Vector sizes are different")
# copy our elements
r = list(self.elements)
# add the elements from the second vector
for index, item in enumerate(other.elements, 0):
r[index] += item
# return a new vector object defined by the computed elements
return Vector(r)
def subtract(self, other):
# insure that the two vectors match in length
if len(self.elements) != len(other.elements):
raise Exception("Vector sizes are different")
# copy our elements
r = list(self.elements)
# subtract the elements from the second vector
for index, item in enumerate(other.elements, 0):
r[index] -= item
# return a new vector object defined by the computed elements
return Vector(r)
def dot(self, other):
running_sum = 0
for index, item in enumerate(other.elements, 0):
running_sum += item * self.elements[index]
return running_sum
def norm(self):
running_sum = 0
for item in self.elements:
running_sum += item ** 2
return running_sum ** 0.5
def toString(self):
return str(self.elements)
def equals(self, other):
return self.elements == other.elements
def test():
a = Vector([1, 2])
b = Vector([3, 4])
print(a.add(b).equals(Vector([4, 6])))
a = Vector([1, 2, 3])
b = Vector([3, 4, 5])
print(a.add(b).equals(Vector([4, 6, 8])))
print(a.subtract(b).equals(Vector([-2, -2, -2])))
print(a.dot(b) == 26)
print(a.norm() == 14 ** 0.5)
test()
</code></pre>
<p>Result:</p>
<pre><code>True
True
True
True
True
</code></pre>
<p>The general structure of your code is spot on.</p>
<p>One thing to note is that you shouldn't be using <code>list</code> as a variable name, as it is a type name in Python. Also, you don't want to be passing around <code>Vector</code> as a value. You want to be passing instances of <code>Vector</code> and <code>list</code>, with names that do not conflict with these type names.</p>
<p>My solution assumes you want Vector instances to be immutable, so each of your operations will return a new Vector object. You could also have them not be immutable and have, for example, the <code>add</code> method just add the incoming vector into the target vector without creating a new object. I like keeping them immutable. I've been doing more and more of this "functional style" programming lately, where calls to object methods don't modify the target object (don't have side effects), but rather just return a new object.</p>
<p>I like your use of the <code>test</code> class to do your testing. I chose to not deal with this, and just print the results of each test comparison to see that they all come out to <code>True</code>. I'll leave it to you to restore your tests to using a test object with <code>expect</code> and <code>assert_equals</code> methods.</p>
<p>UPDATE: Here is a more compact way to write your <code>add</code> and <code>subtract</code> methods:</p>
<pre><code>def add(self, other):
# insure that the two vectors match in length
if len(self.elements) != len(other.elements):
raise Exception("Vector sizes are different")
return Vector([self.elements[i] + other.elements[i] for i in range(len(self.elements))])
def subtract(self, other):
# insure that the two vectors match in length
if len(self.elements) != len(other.elements):
raise Exception("Vector sizes are different")
return Vector([self.elements[i] - other.elements[i] for i in range(len(self.elements))])
</code></pre>
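<p>An equivalent formulation (same behavior, just a stylistic alternative) pairs the elements with <code>zip</code>, avoiding explicit indexing:</p>

```python
class Vector:
    def __init__(self, initial_elements):
        self.elements = list(initial_elements)

    def _check(self, other):
        # Shared length guard used by both operations
        if len(self.elements) != len(other.elements):
            raise Exception("Vector sizes are different")

    def add(self, other):
        self._check(other)
        # zip pairs corresponding elements from the two vectors
        return Vector([a + b for a, b in zip(self.elements, other.elements)])

    def subtract(self, other):
        self._check(other)
        return Vector([a - b for a, b in zip(self.elements, other.elements)])
```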
|
python
| 3 |
1,902,199 | 69,846,647 |
To sort a list of numbers in an ascending order
|
<p>I am new to Python programming and trying to write code to answer the question below:
<strong>Create a lambda function named sort to sort a list of numbers in an ascending order.</strong>
The expected output is given as:
<strong>sort([6,2,3,9,1,5]) == [1, 2, 3, 5, 6, 9]</strong>
I have written the code below as my response and it's working:</p>
<pre><code>a = [6,2,3,9,1,5]
sort = list(map(lambda a:a,a))
sort.sort()
print(sort)
</code></pre>
<p>My question is: is this a genuine solution, or am I missing something?</p>
|
<p>Well, I don't know if you're missing something, but as per the question, <code>sort</code> must be a <code>lambda</code> function; in your code, however, <code>sort</code> ends up being a sorted <code>list</code> rather than a <code>lambda</code> function.</p>
<p>The following will satisfy the question :-</p>
<pre class="lang-py prettyprint-override"><code>a = [6,2,3,9,1,5]
sort = lambda x : sorted(x)
print(sort(a))
</code></pre>
<p>Here <code>sort</code> is a <code>lambda</code> function that returns a sorted list, as asked in the question.</p>
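<p>The exercise asks specifically for a <code>lambda</code>, but it's worth noting that since <code>sorted</code> is already a function, wrapping it adds nothing — assigning it directly behaves the same:</p>

```python
a = [6, 2, 3, 9, 1, 5]

# sorted() returns a new ascending list and leaves the original untouched
sort = sorted
print(sort(a))
print(a)
```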
|
python
| 0 |