1,902,700
12,577,118
Building a List from a text file in python
<p>I am a Python newbie.</p> <p>I want to read a text file which reads something like this</p> <pre><code>1345.. 245.. ..456 </code></pre> <p>and store it in a list of lists of integers. I want to keep the numbers and replace the periods with 0s. How do I do it?</p> <p>EDIT: Apologies for the ambiguous output spec.</p> <p>P.S. I want the output to be a list of lists:</p> <pre><code>[ [1,3,4,5,0,0], [2,4,5,0,0], [0,0,4,5,6]] </code></pre>
<pre><code>with open('yourfile') as f: lst = [ map(int, x.strip().replace('.','0')) for x in f ] </code></pre> <p>Which is the same thing as the following nested list-comp:</p> <pre><code>lst = [ [int(val) for val in line.strip().replace('.','0')] for line in f] </code></pre> <p>Here I used <code>str.strip</code> to drop the trailing newline (otherwise <code>int('\n')</code> raises a <code>ValueError</code>) and <code>str.replace</code> to change the <code>'.'</code> to <code>'0'</code> before converting each character to an integer.</p>
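In Python 3, `map` returns a lazy iterator, so the list-comprehension form is the one to reach for. A self-contained sketch of the same idea (`parse_grid` is a name invented here for illustration):

```python
def parse_grid(lines):
    # Strip the newline, turn each '.' into '0', then convert every character to an int.
    return [[int(c) for c in line.strip().replace('.', '0')] for line in lines]

print(parse_grid(["1345..\n", "245..\n", "..456\n"]))
# [[1, 3, 4, 5, 0, 0], [2, 4, 5, 0, 0], [0, 0, 4, 5, 6]]
```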
python
4
1,902,701
12,153,285
Esky freezes after escalating permissions on windows 7
<p>We're about to start using <a href="https://github.com/cloudmatrix/esky" rel="nofollow">Esky</a> to deploy updates to our application. On Windows 7, Esky appears to be unable to get the escalated privileges it needs to install an update.</p> <p>I've narrowed it down to this simple script. It asks for escalated permissions, but will either freeze, or crash with the stacktrace below.</p> <h2>Script</h2> <pre><code>import esky import esky.sudo class MyProgram(object): @esky.sudo.allow_from_sudo() def do_stuff(self): pass app = MyProgram() sapp = esky.sudo.SudoProxy(app) sapp.start() sapp.do_stuff() sapp.drop_root() </code></pre> <h2>Stack Trace</h2> <pre><code>$ python test.py Traceback (most recent call last): File "test.py", line 16, in &lt;module&gt; sapp.start() File "c:\Python27\lib\site-packages\esky\sudo\__init__.py", line 125, in start raise RuntimeError("sudo helper process terminated unexpectedly") RuntimeError: sudo helper process terminated unexpectedly $ python test.py Traceback (most recent call last): File "test.py", line 16, in &lt;module&gt; sapp.start() File "c:\Python27\lib\site-packages\esky\sudo\__init__.py", line 140, in start self.close() File "c:\Python27\lib\site-packages\esky\sudo\__init__.py", line 156, in close self.pipe.read() File "c:\Python27\lib\site-packages\esky\sudo\sudo_base.py", line 123, in read raise EOFError EOFError </code></pre> <p>Does anyone know of a solution, or have any suggestions?</p> <p>Using:</p> <ul> <li>python 2.7.3</li> <li>esky 0.9.7</li> </ul>
<p>Three years and no answer; that is very sad :(</p> <p>This is a bug in Esky. Esky works fine for me apart from the fact that escalating privileges fails.</p> <p>I'm used to programming in Python 3, so once I'm done updating Esky to be Python 2 and 3 compatible I'm going to tackle this issue.</p> <p>If anyone wants to solve this problem, let's rock and roll! To the GitHub issue tracker!</p>
python|windows
0
1,902,702
23,315,337
Django, register extended user model
<p>I have extended Django's user model with a model <strong>Teacher</strong>. Teachers have 3 fields:</p> <ul> <li>user (one to one field to user)</li> <li>modules (many to many field to module)</li> <li>confirmed (boolean)</li> </ul> <p>So I have a registration form which works for non-teachers, and copied it for teachers. I'm trying to save the 3 extra fields of teachers when the user registers from the teacher registration form.</p> <p>The registration should work since it's the same form as for normal users, but it has to save the 3 additional fields of <strong>Teacher</strong>.</p> <p>I read that since <code>modules</code> is a <code>ManyToManyField</code> I have to use <code>save_m2m()</code>, but I don't know how to get the value of the field (select multiple) from the form.</p> <p>models.py</p> <pre><code>class Teacher(models.Model): user = models.OneToOneField(User) modules = models.ManyToManyField(Module) confirmed = models.BooleanField(default=False) </code></pre> <p>forms.py</p> <pre><code>class RegistrationFormTeacher(UserCreationForm): email = forms.EmailField(required=True) modules = forms.SelectMultiple() class Meta: model = User fields = ('username', 'email', 'password1', 'password2') def save(self, commit=True): user = super(RegistrationFormTeacher, self).save(commit=False) user.email = self.cleaned_data['email'] if commit: user.save() return user </code></pre> <p>views.py</p> <pre><code>def register_teacher(request): args = {} universities = University.objects.order_by('name').distinct() if request.method == 'POST': form = RegistrationFormTeacher(request.POST) if form.is_valid(): new_teacher = form.save(commit=False) new_teacher.modules = form.cleaned_data['modules'] new_teacher.save() form.save_m2m() return HttpResponseRedirect(reverse('dashboard_teacher')) else: form = RegistrationFormTeacher() args['form'] = form return render_to_response("registration/registration_form_teacher.html", {'universities': universities}, RequestContext(request)) 
</code></pre>
<p>You want to create a <code>Teacher</code> model but your modelform's model is <code>User</code> - so you're not saving what you want in the way you want.</p> <p>The absolute easiest, but not necessarily the most elegant, way to do this is with two separate forms.</p> <pre><code>class UserRegistrationForm(UserCreationForm): class Meta: model = User fields = ('first_name', 'last_name', 'username', 'password1', 'password2') class RegistrationFormTeacher(forms.ModelForm): class Meta: model = Teacher fields = ('modules', 'confirmed') </code></pre> <p>Then when you're processing your RegistrationFormTeacher:</p> <pre><code>if request.method == 'POST': form = RegistrationFormTeacher(request.POST) if form.is_valid(): new_teacher = form.save(commit=False) new_teacher.user = request.user #get the user object however you want - you #can pass the user ID to the view as a parameter and do #User.objects.get(pk=id) or some such, too. new_teacher.save() form.save_m2m() </code></pre>
python|django|django-registration|django-users
2
1,902,703
8,077,806
Python class methods overloading
<p>How can I overload class methods? I failed with:</p> <pre><code>class D(object): def create(self): foo = 100 bar = 'squirrels' baz = 'I have %d insane %s in my head.' % (foo, bar) return baz class C(D): def create(self): super(C, self).create() baz = 'I have %s cute %s in my yard.' % (self.foo, self.bar) C().create() </code></pre> <p>Traceback was:</p> <pre><code>AttributeError: 'C' object has no attribute 'foo' </code></pre>
<p>You have tried to use local variables as instance attributes. Try the following changes:</p> <pre><code>class D(object): def create(self): self.foo = 100 self.bar = 'squirrels' baz = 'I have %d insane %s in my head.' % (self.foo, self.bar) return baz class C(D): def create(self): super(C, self).create() print self.foo self.baz = 'I have %s cute %s in my yard.' % (self.foo, self.bar) C().create() </code></pre>
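To see the fix work end to end, here is a minimal runnable version (using Python 3's `print()` and zero-argument `super()` purely for convenience):

```python
class D:
    def create(self):
        # instance attributes, visible to subclasses after the call
        self.foo = 100
        self.bar = 'squirrels'
        return 'I have %d insane %s in my head.' % (self.foo, self.bar)

class C(D):
    def create(self):
        super().create()  # sets self.foo and self.bar on this instance
        return 'I have %s cute %s in my yard.' % (self.foo, self.bar)

print(C().create())
# I have 100 cute squirrels in my yard.
```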
python|oop
3
1,902,704
33,632,219
Use inherited class method within __init__
<p>I have a parent class that is inherited by several children. I would like to initialize one of the children using the parent's <code>@classmethod</code> initializers. How can I do this? I tried:</p> <pre><code>class Point(object): def __init__(self,x,y): self.x = x self.y = y @classmethod def from_mag_angle(cls,mag,angle): x = mag*cos(angle) y = mag*sin(angle) return cls(x=x,y=y) class PointOnUnitCircle(Point): def __init__(self,angle): Point.from_mag_angle(mag=1,angle=angle) p1 = Point(1,2) p2 = Point.from_mag_angle(2,pi/2) p3 = PointOnUnitCircle(pi/4) p3.x #fail </code></pre>
<p>If you try to write <code>__init__</code> like that, your <code>PointOnUnitCircle</code> has a different interface to <code>Point</code> (as it takes <code>angle</code> rather than <code>x, y</code>) and therefore shouldn't really be a sub-class of it. How about something like:</p> <pre><code>class PointOnUnitCircle(Point): def __init__(self, x, y): if not self._on_unit_circle(x, y): raise ValueError('({}, {}) not on unit circle'.format(x, y)) super(PointOnUnitCircle, self).__init__(x, y) @staticmethod def _on_unit_circle(x, y): """Whether the point x, y lies on the unit circle.""" raise NotImplementedError @classmethod def from_angle(cls, angle): return cls.from_mag_angle(1, angle) @classmethod def from_mag_angle(cls, mag, angle): # note that switching these parameters would allow a default mag=1 if mag != 1: raise ValueError('magnitude must be 1 for unit circle') return super(PointOnUnitCircle, cls).from_mag_angle(1, angle) </code></pre> <p>This keeps the interface the same, adds logic for checking the inputs to the subclass (once you've written it!) and provides a new class method to easily construct a new <code>PointOnUnitCircle</code> from an <code>angle</code>. Rather than </p> <pre><code>p3 = PointOnUnitCircle(pi/4) </code></pre> <p>you have to write </p> <pre><code>p3 = PointOnUnitCircle.from_angle(pi/4) </code></pre>
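A stripped-down, runnable sketch of the suggested design (omitting the on-circle validation, which the answer deliberately leaves unimplemented) shows the `from_angle` class method delegating to the parent constructor:

```python
from math import cos, sin, pi, isclose

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    @classmethod
    def from_mag_angle(cls, mag, angle):
        return cls(mag * cos(angle), mag * sin(angle))

class PointOnUnitCircle(Point):
    @classmethod
    def from_angle(cls, angle):
        # fix the magnitude at 1 and reuse the inherited constructor
        return cls.from_mag_angle(1, angle)

p = PointOnUnitCircle.from_angle(pi / 4)
print(isclose(p.x ** 2 + p.y ** 2, 1.0))
# True
```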
python|inheritance|class-method
3
1,902,705
46,978,264
Delete rows based on list in pandas
<pre><code>node1 node2 weight date 3 6 1 2002 2 7 1 1998 2 7 1 2002 2 8 1 1999 2 15 1 2002 9 15 1 1998 2 16 1 2003 2 18 1 2001 </code></pre> <p>I want to delete rows which have the values <code>[3, 7, 18]</code>.These values can be in any of the rows <code>node1</code> or <code>node2</code>.</p>
<pre><code>In [8]: new = df[~df.filter(regex='^node').isin([3,7,18]).any(axis=1)] In [9]: new Out[9]: node1 node2 weight date 3 2 8 1 1999 4 2 15 1 2002 5 9 15 1 1998 6 2 16 1 2003 </code></pre> <p>step by step:</p> <pre><code>In [164]: df.filter(regex='^node').isin([3,7,18]) Out[164]: node1 node2 0 True False 1 False True 2 False True 3 False False 4 False False 5 False False 6 False False 7 False True In [165]: df.filter(regex='^node').isin([3,7,18]).any(axis=1) Out[165]: 0 True 1 True 2 True 3 False 4 False 5 False 6 False 7 True dtype: bool In [166]: ~df.filter(regex='^node').isin([3,7,18]).any(axis=1) Out[166]: 0 False 1 False 2 False 3 True 4 True 5 True 6 True 7 False dtype: bool In [167]: df[~df.filter(regex='^node').isin([3,7,18]).any(axis=1)] Out[167]: node1 node2 weight date 3 2 8 1 1999 4 2 15 1 2002 5 9 15 1 1998 6 2 16 1 2003 </code></pre>
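Reconstructing the frame from the question makes the filter easy to verify as a plain script (using the keyword form `any(axis=1)`):

```python
import pandas as pd

df = pd.DataFrame({
    'node1':  [3, 2, 2, 2, 2, 9, 2, 2],
    'node2':  [6, 7, 7, 8, 15, 15, 16, 18],
    'weight': [1, 1, 1, 1, 1, 1, 1, 1],
    'date':   [2002, 1998, 2002, 1999, 2002, 1998, 2003, 2001],
})

# True on rows where either node column contains one of the unwanted values
mask = df.filter(regex='^node').isin([3, 7, 18]).any(axis=1)
new = df[~mask]
print(new[['node1', 'node2']].values.tolist())
# [[2, 8], [2, 15], [9, 15], [2, 16]]
```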
python|pandas|dataframe
5
1,902,706
46,832,108
Finding the value corresponding to another value
<p>Well for an assignment (I'm a beginner) I have to find the max. temperature and the corresponding date. This is my code but it's not working. I know that I'm defining date wrong or that I should try another approach but I don't know what to do differently. I get the following error: <strong><em>TypeError: cannot do label indexing on class 'pandas.core.indexes.range.RangeIndex' with these indexers [-1.3] of class 'numpy.float64'</em></strong></p> <p>This is my code: </p> <pre><code>import pandas as pd import matplotlib.pyplot as plt # read data data = pd.read_csv("klimaat.csv") data["TX"] /= 10 maxvalue = data['TX'][0] for i in range(1, len(data["TX"])): if(data["TX"][i] &gt; maxvalue): maxvalue = data["TX"][i] date = data["DATE"][maxvalue] print(maxvalue,date) </code></pre> <p>screenshot of my data file: <a href="https://i.stack.imgur.com/ZnVYy.png" rel="nofollow noreferrer">csv file!</a></p>
<p>There is more than one way to skin a cat -- this approach is not the most efficient, but here is the concept:</p> <p>Place your temperatures into one list, and the dates in another. Find the max temperature and its location within the list. Use the location to find the date that corresponds to the max temperature. Note that the temperatures must be converted to numbers first; taking the <code>max()</code> of strings compares them alphabetically, not numerically.</p> <pre><code>temperatures = [] dates = [] with open('filename.csv', 'r') as input_file: input_file.readline() #this skips the header for line in input_file: sLine = line.strip().split(',') dates.append(sLine[2]) temperatures.append(float(sLine[3])) #convert so max() compares numerically maxtemp = max(temperatures) location = temperatures.index(maxtemp) print(maxtemp) print(location) print(dates[location]) </code></pre>
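Since the question already loads the file with pandas, `idxmax` gives the same result in two lines. The small frame below is a hypothetical stand-in for the real klimaat.csv contents:

```python
import pandas as pd

# hypothetical stand-in for the real klimaat.csv data
data = pd.DataFrame({'DATE': [20170101, 20170102, 20170103],
                     'TX':   [55, 123, 87]})
data['TX'] /= 10

row = data['TX'].idxmax()            # index label of the maximum temperature
print(data.loc[row, 'TX'], data.loc[row, 'DATE'])
# 12.3 20170102
```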
python|pandas|numpy
0
1,902,707
29,857,558
Short for 'for i in range(1,len(a)):' in python
<p>In python, is there a short way of writing <code>for i in range(len(l)):</code>?</p> <p>I know I can use <code>for i,_ in enumerate(l):</code>, but I want to know if there's another way (without the <code>_</code>).</p> <p>Please don't say <code>for v in l</code> because I need the indices (for example to compare consecutive values <code>l[i]==l[i+1]</code>).</p>
<p>If you wanted to compare consecutive elements, you don't need to use indices. You can use <code>zip()</code> instead:</p> <pre><code>for el1, el2 in zip(l, l[1:]): </code></pre> <p>Or use <code>enumerate()</code> anyway:</p> <pre><code>for el2idx, el1 in enumerate(l[:-1], 1): el2 = l[el2idx] </code></pre> <p>If you really must generate <em>just</em> an index, then <code>range(len(l))</code> is short enough; there is no shorter form.</p>
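A quick demonstration of the `zip` pairing for consecutive-element comparison:

```python
l = [1, 1, 2, 3, 3]

# each pair is (l[i], l[i+1]); the slice l[1:] shifts the list by one
pairs = list(zip(l, l[1:]))
print(pairs)
# [(1, 1), (1, 2), (2, 3), (3, 3)]

runs = [a == b for a, b in pairs]
print(runs)
# [True, False, False, True]
```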
python|for-loop|range|iteration|enumerate
5
1,902,708
56,983,367
How to install pip for installing external packages in NAO robot
<p>I have written some code and want to run it on a NAO robot, but unfortunately I used some packages like pygame and boto3 in my code, so to make this code work on NAO I have to install those packages on the robot. Can someone please explain the process?</p> <p>I have tried running the get-pip.py file on NAO over ssh nao@ip, but it can't install. I have also tried copying the package files to NAO, but that doesn't solve the problem. I followed the <a href="https://community.ald.softbankrobotics.com/ja/node/1506" rel="nofollow noreferrer">https://community.ald.softbankrobotics.com/ja/node/1506</a> forum post, but it doesn't solve it either.</p> <p>Below is the console output when I run the get-pip.py file:</p> <p><code> PS C:\Users\hp&gt; ssh nao@169.254.252.60 Password: nao [0] ~ $ su Password: root@nao [0] nao # ls Ashim DigiCertHighAssuranceEVRootCA.pem classes.bin diagnosis naoqi recordings DigiCertHighAssuranceEVRootCA.crt angles.bin couples.bin expo.bin rayons.bin remotes root@nao [0] nao # cd Ashim root@nao [0] Ashim # ls boto3 client_secret.json example.py jmespath pip python-engineio six botocore custom-env get-pip.py mpolly.py pygame python-socketio urllib3 root@nao [0] Ashim # python get-pip.py </code> <code>DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. Collecting pip /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/connection.py:324: SystemTimeWarning: System time is way off (before 2017-06-30). This will probably lead to SSL verification errors /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/util/ssl_.py:354: SNIMissingWarning: An HTTPS request has been made, but the SNI (Server Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. 
You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/util/ssl_.py:150: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '_ssl.c:504: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version'),)': /simple/pip/ /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/connection.py:324: SystemTimeWarning: System time is way off (before 2017-06-30). This will probably lead to SSL verification errors /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/util/ssl_.py:150: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '_ssl.c:504: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version'),)': /simple/pip/ /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/connection.py:324: SystemTimeWarning: System time is way off (before 2017-06-30). This will probably lead to SSL verification errors /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/util/ssl_.py:150: InsecurePlatformWarning: A true SSLContext object is not available. 
This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '_ssl.c:504: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version'),)': /simple/pip/ /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/connection.py:324: SystemTimeWarning: System time is way off (before 2017-06-30). This will probably lead to SSL verification errors /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/util/ssl_.py:150: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '_ssl.c:504: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version'),)': /simple/pip/ /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/connection.py:324: SystemTimeWarning: System time is way off (before 2017-06-30). This will probably lead to SSL verification errors /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/util/ssl_.py:150: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. 
For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '_ssl.c:504: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version'),)': /simple/pip/ /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/connection.py:324: SystemTimeWarning: System time is way off (before 2017-06-30). This will probably lead to SSL verification errors /tmp/tmpdQ8F7J/pip.zip/pip/_vendor/urllib3/util/ssl_.py:150: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError(SSLError(1, '_ssl.c:504: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version'),)) - skipping ERROR: Could not find a version that satisfies the requirement pip (from versions: none) ERROR: No matching distribution found for pip root@nao [err 1] Ashim # </code></p> <p>Also, I tried to update the system time, but this also not solves the problem. 
the output remains the same as before:</p> <pre><code>root@nao [err 127] nao # ntpdate 17 Sep 02:55:02 ntpdate[8986]: no servers can be used, exiting root@nao [err 1] nao # ntpdate -s 0.de.pool.ntp.org root@nao [0] nao # ntpdate 11 Jul 10:51:42 ntpdate[9024]: no servers can be used, exiting root@nao [err 1] nao # cd Ashim </code></pre> <p>I also tried to install the packages without pip, using <code>sudo python setup.py install</code> But in this case it shows the error below:</p> <pre><code>error: could not create '/usr/lib/python2.7/site-packages/awscli-1.16.196-py2.7.egg': No space left on device </code></pre>
<p>You have a:</p> <blockquote> <p>SystemTimeWarning: System time is way off (before 2017-06-30). This will probably lead to SSL verification errors</p> </blockquote> <p>Fix your system time, otherwise SSL verification will fail.</p> <p>It might work similarly to Pepper; follow the instructions <a href="https://stackoverflow.com/questions/53902765/pepper-robot-datetime-rtc-is-out-thus-cannot-sync-apps-from-app-store-ssl-auth">here</a>.</p> <p>You also get the</p> <blockquote> <p>SNIMissingWarning: An HTTPS request has been made, but the SNI (Server Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see <a href="https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings" rel="nofollow noreferrer">https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings</a></p> </blockquote> <p>To fix it you would need to install the <code>requests[security]</code> package.</p> <p>You can also install packages without pip as described <a href="https://stackoverflow.com/questions/13270877/how-to-manually-install-a-pypi-module-without-pip-easy-install">here</a>.</p>
python-2.7|pip|pygame|boto3|nao-robot
1
1,902,709
27,511,760
Relationship between two tables one with 2 foreign keys in flask-sqlalchemy
<p>I have two tables representing users and messages as follows:</p> <pre><code>class Users (db.Model): UserId = db.Column(db.INTEGER, primary_key=True,autoincrement=True) UserName = db.Column(db.String(25),nullable=False, unique=True) Email = db.Column(db.String,nullable=False,unique=True) class Messages (db.Model): MessageId = db.Column(db.INTEGER,primary_key=True,autoincrement=True) SenderId = db.Column(db.INTEGER,db.ForeignKey('Users.UserId'),nullable=False) ReceiverId = db.Column(db.INTEGER,db.ForeignKey('Users.UserId'),nullable=False) Message = db.Column(db.String,nullable=False) </code></pre> <p>I'd like to know: if I want to write the following in the Users class:</p> <blockquote> <p>db.relationship('Messages', backref='MessageSender', lazy='dynamic') # this returns the message sender.</p> <p>db.relationship('Messages', backref='MessageReceiver', lazy='dynamic') # this returns the message receiver.</p> </blockquote> <p>how should I specify that, so that I can backref both foreign keys?</p> <p>Thanks in advance.</p>
<p>Below should work:</p> <pre><code>class Messages(db.Model): MessageId = db.Column(db.INTEGER, primary_key=True, autoincrement=True) SenderId = db.Column(db.INTEGER, db.ForeignKey('users.UserId'), nullable=False) ReceiverId = db.Column(db.INTEGER, db.ForeignKey('users.UserId'), nullable=False) Message = db.Column(db.String, nullable=False) # define relationships sender = db.relationship(Users, foreign_keys=[SenderId], backref='sent') receiver = db.relationship(Users, foreign_keys=[ReceiverId], backref='received') </code></pre> <p>Usage example:</p> <pre><code>u0 = Users(UserName='Lila') u1 = Users( UserName='John', sent=[Messages(Message='hi', receiver=u0)] ) </code></pre>
python|database|sqlalchemy|relationship|flask-sqlalchemy
1
1,902,710
27,566,566
Inplace permutation of a numpy array
<p>I have a quite large one-dimensional numpy array for which I would like to sort a slice in place and also retrieve the permutation vector for other processing.</p> <p>However, the ndarray.sort() method (which is an in-place operation) does not return this vector. I can use the ndarray.argsort() method to get the permutation vector and use it to permute the slice, but I can't figure out how to do that in place.</p> <pre><code>Vslice = V[istart:istop] # This is a view of the slice iperm = Vslice.argsort() V[istart:istop] = Vslice[iperm] # Not an inplace operation... </code></pre> <p>Subsidiary question: why does the following code not modify V, given that we are working on a view of V?</p> <pre><code>Vslice = Vslice[iperm] </code></pre> <p>Best wishes!</p> <p>François</p>
<p>To answer your question of why assignment to view does not modify the original:</p> <p>You need to change <code>Vslice = Vslice[iperm]</code> to <code>Vslice[:] = Vslice[iperm]</code> otherwise you are <em>assigning</em> a new value to <code>Vslice</code> rather than <em>changing</em> the values inside <code>Vslice</code>:</p> <pre><code>&gt;&gt;&gt; a = np.arange(10, 0, -1) &gt;&gt;&gt; a array([10, 9, 8, 7, 6, 5, 4, 3, 2, 1]) &gt;&gt;&gt; b = a[2:-2] &gt;&gt;&gt; b array([8, 7, 6, 5, 4, 3]) &gt;&gt;&gt; i = b.argsort() &gt;&gt;&gt; b[:] = b[i] # change the values inside the view &gt;&gt;&gt; a # note `a` has been sorted in [2:-2] slice array([10, 9, 3, 4, 5, 6, 7, 8, 2, 1]) </code></pre>
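The same session expressed as a plain script; note that only the slice assignment `b[:] = ...` writes through the view into `a`:

```python
import numpy as np

a = np.arange(10, 0, -1)        # [10  9  8  7  6  5  4  3  2  1]
b = a[2:-2]                     # a view of a[2:8]
i = b.argsort()

b[:] = b[i]                     # fancy indexing copies first, then we assign through the view
print(a.tolist())
# [10, 9, 3, 4, 5, 6, 7, 8, 2, 1]
```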
python|arrays|algorithm|numpy
3
1,902,711
27,624,723
multiple args function call in map(lambda) function in python
<p>How would I call this function if the input function is a multiple-argument function:</p> <pre><code>def process_list(_func, _list): return map( lambda x: process_list(_func, x) if type(x)==list else _func(x), _list ) </code></pre> <p>so I can call this <code>newList = process_list(someFunction, inputList)</code> if someFunction is a single-input function like so:</p> <pre><code>def makeRvtDetailLines(crv): detailLine = doc.Create.NewDetailCurve(doc.ActiveView, crv) return detailLine </code></pre> <p>However, if I need to call a function that has more than one input, e.g.:</p> <pre><code>def makeRvtDetailLines(crv, _lineStyle): detailLine = doc.Create.NewDetailCurve(doc.ActiveView, crv) detailLine.LineStyle = _lineStyle return detailLine </code></pre> <p>How do I call <code>newList = process_list(makeRvtDetailLines, inputList)</code>? Where do the arguments for the function go? Thank you for all the help.</p> <p>P.S. This is not a Revit question; it is a Python syntax question.</p>
<p>If <code>_lineStyle</code> is a single object, you could use a (very) slightly more complex version of <code>process_list</code>:</p> <pre><code>def process_list(_func, _list, arg): return map( lambda x: process_list(_func, x, arg) if type(x)==list else _func(x,arg), _list ) </code></pre> <p>If <code>_lineStyle</code> were a list, assuming it is the same length as the <code>crv</code> list, you could <code>zip</code> them together into a single list of (crv, _lineStyle) pairs, and then modify <code>process_list</code> accordingly:</p> <pre><code>def process_list(_func, _list, _argList): ziplist = zip(_list, _argList) return map( lambda x: process_list(_func, x[0], x[1]) if type(x[0])==list else _func(x[0], x[1]), ziplist ) </code></pre>
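A runnable sketch of the single-argument variant, using a list comprehension instead of `map` so it behaves the same on Python 2 and 3; `tag` is a made-up stand-in for `makeRvtDetailLines`, since the real function needs a Revit document:

```python
def process_list(_func, _list, arg):
    # recurse into nested lists, applying _func(element, arg) to the leaves
    return [process_list(_func, x, arg) if type(x) == list else _func(x, arg)
            for x in _list]

def tag(value, style):
    # hypothetical stand-in for makeRvtDetailLines(crv, _lineStyle)
    return (value, style)

print(process_list(tag, [1, [2, 3]], 'dashed'))
# [(1, 'dashed'), [(2, 'dashed'), (3, 'dashed')]]
```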
python|function|syntax|lambda
0
1,902,712
65,888,303
Find intersection of words in two strings in python
<p>I have two strings containing words: <code>'dan esh gah'</code> and <code>'da nesh gah'</code></p> <p>I need the intersection words, which is <code>'gah'</code> in this case.</p> <p>I used this code</p> <pre><code>vocab=['dan esh gah'] gold=['da nesh gah'] s1 = ''.join(vocab) s2=''.join(gold) a=[] track=[] for k in range(len(s1)+1): if k!=0: for ka in range(0,len(s1)+1,k): if s1[ka:ka+k] in s2: track.append((len(s1[ka:ka+k])+1,s1[ka:ka+k])) intersect=max(track)[1] print(intersect) </code></pre> <p>but the answer is wrong:</p> <pre class="lang-none prettyprint-override"><code>sh ga </code></pre> <p>Please help me to solve this problem.</p>
<p>You can do the <em>intersection</em> using <a href="https://docs.python.org/3/library/stdtypes.html#frozenset.intersection" rel="nofollow noreferrer"><code>&amp;</code></a> on <a href="https://docs.python.org/3/library/functions.html#func-set" rel="nofollow noreferrer"><code>set()</code></a> objects:</p> <pre><code>&gt;&gt;&gt; s1='da nesh gah' &gt;&gt;&gt; s2='dan esh gah' &gt;&gt;&gt; set(s1.split()) &amp; set(s2.split()) set(['gah']) </code></pre> <p>Here, I first convert each string to a list of words using <a href="https://docs.python.org/3/library/stdtypes.html#str.split" rel="nofollow noreferrer"><code>str.split()</code></a>. <a href="https://docs.python.org/3/library/functions.html#func-set" rel="nofollow noreferrer"><code>set()</code></a> converts the list to a <a href="https://docs.python.org/3/library/stdtypes.html#set" rel="nofollow noreferrer">set object</a>, on which you can find the intersection of two sets using <a href="https://docs.python.org/3/library/stdtypes.html#frozenset.intersection" rel="nofollow noreferrer"><code>&amp;</code></a>.</p> <p>If you prefer a functional style, you can use <a href="https://docs.python.org/3/library/stdtypes.html#frozenset.intersection" rel="nofollow noreferrer"><code>set().intersection()</code></a> to get the same result:</p> <pre><code>&gt;&gt;&gt; set(s1.split()).intersection(s2.split()) set(['gah']) </code></pre>
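The same idea as a plain script (in Python 3 the set literal prints as `{'gah'}` rather than `set(['gah'])`):

```python
s1 = 'da nesh gah'
s2 = 'dan esh gah'

# split into words, then intersect the two word sets
common = set(s1.split()) & set(s2.split())
print(common)
# {'gah'}
```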
python|string|list|intersection|word
2
1,902,713
72,386,380
Failed to change folder path in python but I do not know why
<p>I want to open the fonts in folder 1 and folder 2. I wrote <code>openOTF.py</code> to open them.</p> <pre><code>Desktop ├── openOTF.py ├── 1 │ ├── CMBSY7.otf │ ├── CMBSY8.otf │ └── CMBSY9.otf ├── 2 │ ├── CMCSC8.otf │ └── CMCSC9.otf </code></pre> <p>This is openOTF.py:</p> <pre class="lang-py prettyprint-override"><code>import fontforge as ff import os folders = [&quot;1/&quot;, &quot;2/&quot;] for folder in folders: os.chdir(folder) print(os.getcwd()) files = os.listdir(&quot;./&quot;) for font in files: print(font) f = ff.open(font) print(f.path) os.chdir(&quot;../&quot;) </code></pre> <p>But this is the output of <code>python open-otf.py</code>, which cannot find <code>CMCSC8.otf</code> in folder <code>2</code>; in fact it wants to search <code>/home/firestar/Desktop/1/CMCSC8.otf</code>:</p> <pre><code>/home/firestar/Desktop/1 CMBSY9.otf /home/firestar/Desktop/1/CMBSY9.otf CMBSY8.otf /home/firestar/Desktop/1/CMBSY8.otf CMBSY7.otf /home/firestar/Desktop/1/CMBSY7.otf /home/firestar/Desktop/2 CMCSC8.otf The requested file, CMCSC8.otf, does not exist Traceback (most recent call last): File &quot;/home/firestar/Desktop/open-otf.py&quot;, line 12, in &lt;module&gt; f = ff.open(font) OSError: Open failed </code></pre> <p>It seems that <code>os.chdir(&quot;../&quot;)</code> changed the path to folder <code>2</code>, but fontforge did not change its path (it is still in folder <code>1</code>).</p>
<pre><code>import fontforge as ff import glob files = glob.glob('*/*.otf') print(files) for font in files: f = ff.open(font) print(f.path) </code></pre> <p>I think <code>glob</code> is the simplest solution: it returns paths that already include the folder (e.g. <code>2/CMCSC8.otf</code>), so you can pass them straight to <code>ff.open</code> without ever calling <code>os.chdir</code>.</p>
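To see what `glob` returns for a layout like the one in the question, here is a sketch that builds a throwaway directory tree (with empty placeholder files, since fontforge itself is not needed to demonstrate the path handling):

```python
import glob
import os
import tempfile

# build a throwaway tree mirroring the question's Desktop layout
root = tempfile.mkdtemp()
for folder, names in [('1', ['CMBSY7.otf']), ('2', ['CMCSC8.otf'])]:
    os.makedirs(os.path.join(root, folder))
    for name in names:
        open(os.path.join(root, folder, name), 'w').close()

files = sorted(glob.glob(os.path.join(root, '*', '*.otf')))
print([os.path.relpath(p, root) for p in files])
```

Each returned path carries its folder prefix, so there is no dependence on the current working directory.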
python|fontforge
0
1,902,714
43,404,028
Using 'like', '<=' and '=' operators when building an SQLAlchemy query filter from dict
<p>I currently have a python dictionary that is created from the data that a user submits through a form. The form fields are optional, but if they are all filled out, then the dictionary (<code>dict_filter</code>) might look like this:</p> <blockquote> <p>{"item_type": "keyboard", "location": "storage1"}</p> </blockquote> <p>I can then query the database as shown:</p> <pre><code>items = Item.query.filter_by(**dict_filter).all()
</code></pre> <p>This works fine and returns all the <code>keyboard</code> items that are currently in <code>storage1</code>, as desired.</p> <p>However, I want to add two new date fields to the form such that a completely filled out form would result in a dictionary similar to the following:</p> <blockquote> <p>{"item_type": "keyboard", "location": "storage1", "purchase_date": 2017-02-18, "next_maintenance": 2018-02-18}</p> </blockquote> <p>Based on this new dict, I would like to do the following:</p> <p>First, use <code>like()</code> when filtering the <code>item_type</code>. I want this so that if a user searches for <code>keyboard</code>, then the results will also include items like <code>mechanical keyboard</code>, for example. I know I can do this individually as shown:</p> <pre><code>val = form.item_type.data
items = Item.query.filter(getattr(Item, 'item_type').like("%%%s%%" % val)).all()
</code></pre> <p>Second, use the '&lt;=' (less than or equal to) operator when dealing with dates, such that if, for example, a user enters a <code>purchase_date</code> in the form, then all the items returned will have a <code>purchase_date</code> before or on the same date as entered by the user. I know I can do this individually as shown:</p> <pre><code>items = Item.query.filter(Item.purchase_date &lt;= form.purchase_date.data)
</code></pre> <p>Note that if both dates are filled out in the form, then the filter should check both dates as shown:</p> <pre><code>items = Item.query.filter(and_(Item.purchase_date &lt;= form.purchase_date.data,
                               Item.next_maintenance &lt;= form.next_maintenance.data))
</code></pre> <p>Third, if the <code>location</code> field is filled out in the form, then the query should check for items with matching locations (as it currently does with the dict). I know I can do this using a dict, as I am currently doing:</p> <pre><code>dict = {"location": "storage1"}
items = Item.query.filter_by(**dict_filter).all()
</code></pre> <p>or</p> <pre><code>items = Item.query.filter_by(location=form.location.data).all()
</code></pre> <p>The greatest challenge that I have is that since the form fields are optional, I have no way of knowing beforehand what combination of filter conditions I'll have to apply. Therefore, it may be possible that for one user's input I'll have to search the db for all <code>screen</code> items in <code>office1</code> with a <code>next_maintenance</code> date before <code>yyyy-mm-dd</code>, while for another user's input I'll have to search the db for all items in all locations regardless of <code>next_maintenance</code> date with a <code>purchase_date</code> before <code>yyyy-mm-dd</code>, and so on. This is precisely why I'm currently using a dict as a filter; it allows me to check if a form field was completed and, if it was, then I add it to the dict and filter only based on form fields with input.</p> <p><strong>With all that being said, how can I combine all three filters discussed above (like, &lt;=, =) into one while also accounting for the fact that not all three filters may always be necessary?</strong></p>
<p>This was not intended to be an answer but a comment, but apparently I can't use a code block in a comment.</p> <p>In case you don't know, you can use multiple <code>filter</code> or <code>filter_by</code> calls by chaining them together like this:</p> <pre><code>Item.query.filter(Item.a &lt; 5).filter(Item.b &gt; 6).all()
</code></pre> <p>Therefore you can store the return value in a variable (it is actually an object of type <code>Query</code>) temporarily and use it later:</p> <pre><code>q = Item.query.filter(Item.a &lt; 5)
if some_condition_value:
    q = q.filter(Item.b &gt; 6)
items = q.all()
</code></pre> <p>You can apply your conditions to the <code>Query</code> object and then you can have optional filters.</p>
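The same "build the filter up conditionally" idea can be sketched without a database at all — plain dicts and predicate functions stand in for the model and the `Query` object here, and the field names are the ones from the question (this is an illustration of the pattern, not SQLAlchemy code):

```python
# In-memory stand-ins for rows of the Item table.
items = [
    {"item_type": "mechanical keyboard", "location": "storage1", "purchase_date": "2017-01-10"},
    {"item_type": "keyboard", "location": "office1", "purchase_date": "2017-03-01"},
    {"item_type": "screen", "location": "storage1", "purchase_date": "2016-12-24"},
]

# Form data: only the fields the user actually filled out are non-empty.
form = {"item_type": "keyboard", "location": "", "purchase_date": "2017-02-18"}

predicates = []
if form.get("item_type"):
    # substring match, like SQL's LIKE '%keyboard%'
    predicates.append(lambda row: form["item_type"] in row["item_type"])
if form.get("location"):
    # exact match, like filter_by(location=...)
    predicates.append(lambda row: row["location"] == form["location"])
if form.get("purchase_date"):
    # ISO-format dates compare correctly as strings
    predicates.append(lambda row: row["purchase_date"] <= form["purchase_date"])

matches = [row for row in items if all(p(row) for p in predicates)]
print([m["item_type"] for m in matches])
```

With SQLAlchemy the shape is identical: start from `Item.query`, and each `if` branch reassigns `q = q.filter(...)` instead of appending a predicate.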
python|sqlalchemy
1
1,902,715
43,250,499
Clearing tkinter canvas containing label
<p>I created the following code:</p> <pre><code>import Tkinter as tk
from Tkinter import *

def b(canvas):
    canvas.delete("all")
    canvas.update()
    print "works"

def main():
    root = Tk()
    canvas=Canvas(root)
    canvas.config(width=400, height=300)
    bb=Button(canvas, text="ssss",command=lambda:b(canvas))
    bb.place(x=100,y=200)
    root.geometry('400x300')
    aa=Label(canvas,text="aaaaa")
    aa.place(x=10,y=200)
    canvas.pack()
    root.mainloop()

if __name__ == '__main__':
    main()
</code></pre> <p>The problem is that after clicking on the button, the label is not deleted, despite the fact that the function that calls <code>canvas.delete("all")</code> runs.</p>
<p>The label is not deleted when <code>canvas.delete("all")</code> is invoked because you have used place to display the label, so it is not an item of the canvas. To make the label a canvas item, you need to replace</p> <pre><code>aa.place(x=10,y=200) </code></pre> <p>by</p> <pre><code>canvas.create_window(10, 200, window=aa) </code></pre> <p>And then <code>canvas.delete("all")</code> will also delete the label.</p>
python-2.7|tkinter
1
1,902,716
43,406,616
How to generate a random star using turtle graphics in Python?
<p>I'm new to turtle graphics in Python, and I'm running into some issues with a particular problem. I'm trying to generate a star that uses a <code>while</code> loop to draw random jagged lines from the center of a circle.</p> <p>Each line should have a distance of 250. I'm using the <code>penup</code>, <code>pendown</code> and <code>setpos</code> commands within the loop to draw these random lines, and each line should be a random color.</p> <p>Here's an idea of what I'm hoping to generate: <a href="https://i.stack.imgur.com/b4cmi.png" rel="nofollow noreferrer">random star</a></p> <p>Here's the code I have so far:</p> <pre><code># tg_random_star.py

from random import randrange
from turtle import *

MAX_ANGLE = 30

def jaggedLine(turtle, pieces, pieceLength):
    for i in range(pieces):
        turtle.forward(pieceLength)
        r = randrange(-MAX_ANGLE, MAX_ANGLE + 1)
        turtle.right(r)

def jumpToCenter(turtle):
    turtle.penup()
    turtle.setpos(0, 0)
    turtle.pendown()

def randomColor(turtle):
    turtle.colormode(255)
    r = randrange(255)  # red component of color
    g = randrange(255)  # green component
    b = randrange(255)  # blue component
    turtle.pencolor(r, g, b)

def main():
    colormode(255)
    t = Turtle()
    jaggedLine(t, 10, 30)
    jumpToCenter(t)
    jaggedLine(t, 10, 30)

if __name__ == "__main__":
    main()
</code></pre> <p>It currently generates 2 lines, but the <code>turtle.pencolor(r, g, b)</code> and the <code>colormode(255)</code> don't seem to be working, as both lines are black. Any idea why these lines aren't in color?</p> <p>Rather than using <code>for i in range(pieces)</code> to draw lines that are based on the number of segments, how can I use a <code>while</code> loop to draw jagged lines that each have a distance of 250? In other words, I want each line to have a distance of 250 before drawing a new line from the center.</p> <p>(Maybe I could use the <code>xcor</code> and <code>ycor</code> methods to find the turtle’s position, then calculate the distance using the distance formula?)</p> <pre><code>def distance(p0, p1):
    return math.sqrt((p0[0] - p1[0])**2 + (p0[1] - p1[1])**2)
</code></pre> <p>Any help or explanation would be appreciated, thank you.</p>
<blockquote> <p>Any idea why these lines aren't in color?</p> </blockquote> <p>That one's easy, it's because you never actually call <code>randomColor()</code></p> <blockquote> <p>Rather than using for i in range(pieces) to draw lines that are based on the number of segments, how can I use a while loop to draw jagged lines that each have a distance of 250?</p> </blockquote> <p>Here we can take advantage of the under utilized <code>.distance()</code> method of turtle to tell us how far we are from center. This is straight line distance, not path travelled distance, which seems to match your target illustration:</p> <pre><code>from turtle import Turtle, Screen
from random import randrange

MAX_ANGLE = 30
MAX_DISTANCE = 250

def jaggedLine(t, pieceLength):
    randomColor(t)

    while t.distance(0, 0) &lt; MAX_DISTANCE:
        angle = randrange(-MAX_ANGLE, MAX_ANGLE + 1)
        t.right(angle)
        t.forward(pieceLength)

def jumpToCenter(t):
    t.penup()
    t.home()  # return turtle to original position
    t.pendown()

def randomColor(t):
    # red, green &amp; blue components of turtle color
    r, g, b = randrange(255), randrange(255), randrange(255)
    t.pencolor(r, g, b)

def main():
    screen = Screen()
    screen.tracer(False)  # because I have no patience
    screen.colormode(255)

    turtle = Turtle()
    turtle.hideturtle()
    turtle.pensize(2)

    for angle in range(0, 360, 2):
        jumpToCenter(turtle)
        turtle.setheading(angle)
        jaggedLine(turtle, 30)

    screen.tracer(True)  # forces an update
    screen.mainloop()

if __name__ == "__main__":
    main()
</code></pre> <p><strong>OUTPUT</strong></p> <p><a href="https://i.stack.imgur.com/noWuk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/noWuk.png" alt="enter image description here"></a></p>
python|python-3.x|colors|turtle-graphics
0
1,902,717
43,195,344
Pyparsing two-dimensional list
<p>I have the following sample data:</p> <pre><code>165 150 238 402 395 571 365 446 284 278 322 282 236
16 5 19 10 12 5 18 22 6 4 5
259 224 249 193 170 151 95 86 101 58 49
6013 7413 8976 10392 12678 9618 9054 8842 9387 11088 11393;
</code></pre> <p>It is the equivalent of a two dimensional array (except each row does not have an equal amount of columns). At the end of each line is a space and then a <code>\n</code>, except for the final entry, which is followed by no space and only a <code>;</code>.</p> <p>Would anyone know the pyparsing grammar to parse this? I've been trying something along the following lines but it will not match.</p> <pre><code>data = Group(OneOrMore(Group(OneOrMore(Word(nums) + SPACE)) + LINE) + \
       Group(OneOrMore(Word(nums) + SPACE)) + Word(nums) + Literal(";")
</code></pre> <p>The desired output would ideally be as follows:</p> <pre><code>[['165', '150', '238', '402', '395', '571', '365', '446', '284', '278', '322', '282', '236'],
 ['16', '5', ... ],
 [...],
 ['6013', ..., '11393']]
</code></pre> <p>Any assistance would be greatly appreciated.</p>
<p>You can use the <code>stopOn</code> argument to <code>OneOrMore</code> to make it stop matching. Then, since newlines are by default skippable whitespace, the next group can start matching, and it will just skip over the newline and start at the next integer.</p> <pre><code>import pyparsing as pp

data_line = pp.Group(pp.OneOrMore(pp.pyparsing_common.integer(), stopOn=pp.LineEnd()))
data_lines = pp.OneOrMore(data_line) + pp.Suppress(';')
</code></pre> <p>Applying this to your sample data:</p> <pre><code>data = """\
165 150 238 402 395 571 365 446 284 278 322 282 236
16 5 19 10 12 5 18 22 6 4 5
259 224 249 193 170 151 95 86 101 58 49
6013 7413 8976 10392 12678 9618 9054 8842 9387 11088 11393;"""

parsed = data_lines.parseString(data)

from pprint import pprint
pprint(parsed.asList())
</code></pre> <p>Prints:</p> <pre><code>[[165, 150, 238, 402, 395, 571, 365, 446, 284, 278, 322, 282, 236],
 [16, 5, 19, 10, 12, 5, 18, 22, 6, 4, 5],
 [259, 224, 249, 193, 170, 151, 95, 86, 101, 58, 49],
 [6013, 7413, 8976, 10392, 12678, 9618, 9054, 8842, 9387, 11088, 11393]]
</code></pre>
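If pulling in pyparsing is not a hard requirement, the same nested-list output can be produced with plain string methods; a stdlib-only sketch on a shortened version of the sample data:

```python
data = """165 150 238 402
16 5 19
259 224 249
6013 7413 11393;"""

# Drop the trailing ';', then split into lines and each line into integers.
rows = [[int(tok) for tok in line.split()]
        for line in data.rstrip(';').splitlines()]
print(rows)
```

One behavioral difference from the pyparsing grammar: this version does no validation, so a stray non-numeric token raises a `ValueError` instead of producing a parse error with a location.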
python|pyparsing
2
1,902,718
36,983,549
Translate json file with Django
<p>I am trying to build a navbar in my HTML from my JSON file, like this:</p> <p>Example of my JSON file:</p> <pre><code>{
    "_comment": "example auto build navbar",
    "TOP": [
        {
            "name": "name1",
            "id": "MLO",
            "title": "Title than i want translate"
        }]
}
</code></pre> <p>In my view.py:</p> <pre><code>def view(request):
    '''
    '''
    with open('IHMWEB/json_file.json') as data_file:
        data = json.load(data_file)

    c = {'user_username': request.session['user_username'], "data": data}
    context = Context(c)
    template = get_template('view.html')
    translation.activate(settings.LANGUAGE_CODE)
    html = template.render(context)
    return HttpResponse(html)
</code></pre> <p>And in my template:</p> <pre><code>{% for menu in data.TOP %}
    &lt;a href="#" id={{menu.id}} title="{{menu.title}}" class="navbar-brand"&gt; {{menu.name}}&lt;/a&gt;
{% endfor %}
</code></pre> <p>How can I translate "title" with gettext and send the translation to my template.html? Is that possible?</p>
<p>It would probably be a better idea to load the translation strings from a Python file and use the regular <code>ugettext()</code> for translation.</p> <p>But, to answer your question: the Django template system is very versatile and can be used on basically any kind of text string. So you can use it to translate your JSON file content as well. However, it's pretty "hackish" and not really recommended.</p> <pre><code>t = Template(open('/path/to/menu.json').read())
c = Context({})
translated_json = t.render(c)
py_obj = json.loads(translated_json)
</code></pre> <p>That should produce a python object out of the template-rendered JSON string. With your <code>menu.json</code> looking like this:</p> <pre><code>{% load i18n %}
{
    "_comment": "example auto build navbar",
    "TOP": [
        {
            "name": "name1",
            "id": "MLO",
            "title": "{% trans 'Title than i want translate' %}"
        }
    ]
}
</code></pre> <p>You load that file into the template renderer, which will then load the i18n module and translate any <code>{% trans %}</code> strings.</p> <p>When running <code>makemessages</code>, remember to include <code>.json</code> files to be searched for translation strings.</p>
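The "render first, parse second" trick is independent of Django; here is a stdlib sketch with `string.Template` standing in for the Django template engine and a plain dict standing in for the gettext catalog (the placeholder syntax and the sample translation are made up purely for illustration):

```python
import json
from string import Template

# $title plays the role of {% trans ... %} in the real Django version.
menu_template = Template("""{
    "TOP": [
        {"name": "name1", "id": "MLO", "title": "$title"}
    ]
}""")

catalog = {"title": "Titre traduit"}  # stand-in for a gettext lookup

rendered = menu_template.substitute(catalog)  # step 1: render the raw text
menu = json.loads(rendered)                   # step 2: parse the result
print(menu["TOP"][0]["title"])
```

The ordering matters: substituting into the raw text before `json.loads` means the JSON structure never has to survive a second serialization round-trip.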
python|json|django|translation
2
1,902,719
37,036,682
Delete from database in Python
<p>I have <code>deleteuser.py</code> where I need to delete one user; here is the code:</p> <pre><code># Import modules for CGI handling
import MySQLdb
import cgi, cgitb

# Open database connection
db = MySQLdb.connect("localhost", "root", "", "moviedb" )

# prepare a cursor object using cursor() method
cursor = db.cursor()

# Create instance of FieldStorage
form = cgi.FieldStorage()

# Get data from fields
iduser = form.getvalue('iddelete')

# execute SQL query using execute() method.
try:
    cursor.execute("""DELETE FROM user WHERE ID = '%s'""",(iduser))
    # Commit your changes in the database
    db.commit()
except:
    db.rollback()

# disconnect from server
db.close()

print "Content-Type: text/plain;charset=utf-8"
print
</code></pre> <p>I get no error but it doesn't work. The database is still the same.</p> <p>Thank you</p>
<p>Here is the answer:</p> <pre><code>#!/Python27/python
# -*- coding: UTF-8 -*-

# Import modules for CGI handling
import MySQLdb
import cgi, cgitb

# Open database connection
db = MySQLdb.connect("localhost", "root", "", "moviedb" )

# prepare a cursor object using cursor() method
cursor = db.cursor()

# Create instance of FieldStorage
form = cgi.FieldStorage()

# Get data from fields
iduser = form.getvalue('iddelete')

# execute SQL query using execute() method.
query = "delete from user where id = '%s' " % iduser
cursor.execute(query)

# Commit your changes in the database
db.commit()

# disconnect from server
db.close()

print "Content-Type: text/plain;charset=utf-8"
print
</code></pre>
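Worth noting: both snippets above build the SQL by string interpolation, which breaks on values containing quotes and is open to SQL injection; with MySQLdb, the driver-safe form is to pass the value separately, e.g. `cursor.execute("DELETE FROM user WHERE id = %s", (iduser,))`. A runnable sketch of the parameterized style using the stdlib `sqlite3` module instead (which uses `?` placeholders rather than `%s`):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO user VALUES (?, ?)", [(1, "alice"), (2, "bob")])

iduser = 1
# The driver fills in the placeholder and escapes the value itself,
# so no quoting is done by hand in the SQL string.
db.execute("DELETE FROM user WHERE id = ?", (iduser,))
db.commit()

remaining = db.execute("SELECT name FROM user ORDER BY id").fetchall()
print(remaining)
```

The same pattern also removes the "nothing happened" class of bug in the question: a value that fails to match simply deletes zero rows, rather than silently comparing against a mis-quoted literal.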
python|mysql|sql|mysql-python
1
1,902,720
48,601,396
Calculating Incremental Entropy for Data that is not real numbers
<p>I have a set of data which has an ID, timestamp, and identifiers. I have to go through it, calculate the entropy, and save some other links for the data. At each step more identifiers are added to the identifiers dictionary and I have to re-compute the entropy and append it. I have a really large amount of data and the program gets stuck due to the growing number of identifiers and their entropy calculation after each step. I read the following solution, but it is about data consisting of numbers: <a href="https://stackoverflow.com/questions/17104673/incremental-entropy-computation">Incremental entropy computation</a></p> <p>I have copied two functions from this page, and the incremental calculation of entropy gives different values than the classical full entropy calculation at every step. Here is the code I have:</p> <pre><code>from math import log

# ---------------------------------------------------------------------#
# Functions copied from https://stackoverflow.com/questions/17104673/incremental-entropy-computation

# maps x to -x*log2(x) for x&gt;0, and to 0 otherwise
h = lambda p: -p*log(p, 2) if p &gt; 0 else 0

# entropy of union of two samples with entropies H1 and H2
def update(H1, S1, H2, S2):
    S = S1+S2
    return 1.0*H1*S1/S+h(1.0*S1/S)+1.0*H2*S2/S+h(1.0*S2/S)

# compute entropy using the classic equation
def entropy(L):
    n = 1.0*sum(L)
    return sum([h(x/n) for x in L])

# ---------------------------------------------------------------------#
# Below is the input data (Actually I read it from a csv file)
input_data = [["1","2008-01-06T02:13:38Z","foo,bar"],
              ["2","2008-01-06T02:12:13Z","bar,blup"],
              ["3","2008-01-06T02:13:55Z","foo,bar"],
              ["4","2008-01-06T02:12:28Z","foo,xy"],
              ["5","2008-01-06T02:12:44Z","foo,bar"],
              ["6","2008-01-06T02:13:00Z","foo,bar"],
              ["7","2008-01-06T02:13:00Z","x,y"]]

total_identifiers = {}  # To store the occurrences of identifiers. Values shows the number of occurrences
all_entropies = []      # Classical way of calculating entropy at every step
updated_entropies = []  # Incremental way of calculating entropy at every step

for item in input_data:
    temp = item[2].split(",")
    identifiers_sum = sum(total_identifiers.values())  # Sum of all identifiers
    old_entropy = 0 if all_entropies[-1:] == [] else all_entropies[-1]  # Get previous entropy calculation

    for identifier in temp:
        S_new = len(temp)  # sum of new samples
        temp_dictionaty = {a:1 for a in temp}  # Store current identifiers and their occurrence
        if identifier not in total_identifiers:
            total_identifiers[identifier] = 1
        else:
            total_identifiers[identifier] += 1

        current_entropy = entropy(total_identifiers.values())  # Entropy for current set of identifiers
        updated_entropy = update(old_entropy, identifiers_sum, current_entropy, S_new)
        updated_entropies.append(updated_entropy)

        entropy_value = entropy(total_identifiers.values())  # Classical entropy calculation for comparison. This step becomes too expensive with big data
        all_entropies.append(entropy_value)

print(total_identifiers)
print('Sum of Total Identifiers: ', identifiers_sum)  # Gives 12 while the sum is 14 ???

print("All Classical Entropies: ", all_entropies)  # print for comparison
print("All Updated Entropies: ", updated_entropies)
</code></pre> <p>The other issue is that when I print "Sum of total_identifiers", it gives <strong>12</strong> instead of <strong>14</strong>! (Due to the very large amount of data, I read the actual file line by line and write the results directly to the disk, and I do not store anything in memory apart from the dictionary of identifiers.)</p>
<p>The code above uses Theorem 4; it seems to me that you want to use Theorem 5 instead (from the paper in the next paragraph).</p> <p>Note, however, that if the number of identifiers is really the problem then the incremental approach below isn't going to work either---at some point the dictionaries are going to get too large.</p> <p>Below you can find a proof-of-concept Python implementation that follows the description from <a href="https://arxiv.org/abs/1403.6348" rel="nofollow noreferrer">Updating Formulas and Algorithms for Computing Entropy and Gini Index from Time-Changing Data Streams</a>.</p> <pre><code>import collections
import math
import random


def log2(p):
    return math.log(p, 2) if p &gt; 0 else 0

CountChange = collections.namedtuple('CountChange', ('label', 'change'))


class EntropyHolder:
    def __init__(self):
        self.counts_ = collections.defaultdict(int)

        self.entropy_ = 0
        self.sum_ = 0

    def update(self, count_changes):
        r = sum([change for _, change in count_changes])
        residual = self._compute_residual(count_changes)

        self.entropy_ = self.sum_ * (self.entropy_ - log2(self.sum_ / (self.sum_ + r))) / (self.sum_ + r) - residual

        self._update_counts(count_changes)

        return self.entropy_

    def _compute_residual(self, count_changes):
        r = sum([change for _, change in count_changes])
        residual = 0

        for label, change in count_changes:
            p_new = (self.counts_[label] + change) / (self.sum_ + r)
            p_old = self.counts_[label] / (self.sum_ + r)

            residual += p_new * log2(p_new) - p_old * log2(p_old)

        return residual

    def _update_counts(self, count_changes):
        for label, change in count_changes:
            self.sum_ += change
            self.counts_[label] += change

    def entropy(self):
        return self.entropy_


def naive_entropy(counts):
    s = sum(counts)
    return sum([-(r/s) * log2(r/s) for r in counts])


if __name__ == '__main__':
    print(naive_entropy([1, 1]))
    print(naive_entropy([1, 1, 1, 1]))

    entropy = EntropyHolder()
    freq = collections.defaultdict(int)
    for _ in range(100):
        index = random.randint(0, 5)
        entropy.update([CountChange(index, 1)])
        freq[index] += 1

    print(naive_entropy(freq.values()))
    print(entropy.entropy())
</code></pre>
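One way to see where the mismatch in the question comes from: the Theorem-4 style `update(H1, S1, H2, S2)` formula only equals the classic entropy of the combined sample when the two samples share no labels. A small self-contained check of that disjoint case:

```python
from math import log

h = lambda p: -p * log(p, 2) if p > 0 else 0

def entropy(counts):
    n = float(sum(counts))
    return sum(h(c / n) for c in counts)

def union_entropy(H1, S1, H2, S2):
    # Entropy of the union of two samples over *disjoint* label sets.
    S = float(S1 + S2)
    return H1 * S1 / S + h(S1 / S) + H2 * S2 / S + h(S2 / S)

# Two samples with no labels in common: {a: 2, b: 1} and {c: 3, d: 1}.
c1, c2 = [2, 1], [3, 1]
incremental = union_entropy(entropy(c1), sum(c1), entropy(c2), sum(c2))
classic = entropy(c1 + c2)
print(round(incremental, 6), round(classic, 6))
```

In the question's loop, the new "sample" (the identifiers of one row) overlaps with labels already counted in `total_identifiers`, so the union formula no longer applies — which is why a residual-based, per-label update is needed there instead.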
python|python-3.x|math|python-3.5|entropy
1
1,902,721
48,699,711
How to replace cells in a larger Pandas dataframe with cells from a smaller dataframe
<p>I have two pandas dataframes:</p> <p>Smaller:</p> <p><a href="https://i.stack.imgur.com/LPOkQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LPOkQ.png" alt="enter image description here"></a></p> <p>Larger: <a href="https://i.stack.imgur.com/kEPyv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kEPyv.png" alt="enter image description here"></a></p> <p>I want to match on the Ticker and Year and then replace the numbers in the First and Last columns with those from the smaller dataframe. </p> <p>I've tried using pd.merge but I succeeded only in adding rows or columns not replacing the specific cells. Can someone please post some code that would achieve this?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> with a left join and <code>suffixes</code>, and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="nofollow noreferrer"><code>combine_first</code></a> with <code>rename</code> to remove the <code>_</code>:</p> <pre><code>df1 = pd.DataFrame({'Ticker':list('abcdef'),
                    'Year':[2013,2014,2013,2014,2013,2014],
                    'C':[7,8,9,4,2,3],
                    'Last':[1,3,5,7,1,0],
                    'First':[5,3,6,9,2,4],
                    'F':list('aaabbb')})

print (df1)
   C  F  First  Last Ticker  Year
0  7  a      5     1      a  2013
1  8  a      3     3      b  2014
2  9  a      6     5      c  2013
3  4  b      9     7      d  2014
4  2  b      2     1      e  2013
5  3  b      4     0      f  2014

df2 = pd.DataFrame({'First':[4,5,4,5],
                    'Last':[7,8,9,4],
                    'Year':[2013,2014,2014,2015],
                    'Ticker':list('aabc')})

print (df2)
   First  Last Ticker  Year
0      4     7      a  2013
1      5     8      a  2014
2      4     9      b  2014
3      5     4      c  2015
</code></pre> <hr> <pre><code>df = df1.merge(df2, suffixes=('_',''), on=['Ticker','Year'], how='left')
df1[['First','Last']] = (df[['First','Last']].combine_first(df[['First_','Last_']]
                                             .rename(columns=lambda x: x.strip('_'))))
print (df1)
   C  F  First  Last Ticker  Year
0  7  a    4.0   7.0      a  2013
1  8  a    4.0   9.0      b  2014
2  9  a    6.0   5.0      c  2013
3  4  b    9.0   7.0      d  2014
4  2  b    2.0   1.0      e  2013
5  3  b    4.0   0.0      f  2014
</code></pre>
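A hedged alternative sketch for the same "overwrite matching cells" task: align both frames on the key columns and let `DataFrame.update` do the in-place overwrite (note that, as with the merge approach, the updated integer columns typically come back as floats):

```python
import pandas as pd

# Small versions of the two frames: df2 carries corrections for some keys.
df1 = pd.DataFrame({'Ticker': ['a', 'b', 'c'],
                    'Year':   [2013, 2014, 2013],
                    'First':  [5, 3, 6],
                    'Last':   [1, 3, 5]})
df2 = pd.DataFrame({'Ticker': ['a', 'b'],
                    'Year':   [2013, 2014],
                    'First':  [4, 4],
                    'Last':   [7, 9]})

df1 = df1.set_index(['Ticker', 'Year'])
df1.update(df2.set_index(['Ticker', 'Year']))  # overwrite where keys match
df1 = df1.reset_index()
print(df1)
```

Rows whose `(Ticker, Year)` pair has no counterpart in the smaller frame are left untouched.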
pandas|replace|find|conditional
1
1,902,722
66,981,494
sympy's ConditionSet object not iterable
<p>I would like to generate a simple set based on a condition to mimic set builder notation, and then enumerate its contents. I tried the following based on an example <a href="https://docs.sympy.org/latest/modules/sets.html" rel="nofollow noreferrer">in the docs</a>:</p> <pre><code>from sympy import Symbol, S, ConditionSet
from sympy.abc import x

new_set = ConditionSet(x, x&lt;7, S.Naturals)
iterable = iter(new_set)
print(&quot;done&quot;)
</code></pre> <p>When I run this I get an error:</p> <pre><code>Traceback (most recent call last):
  File &quot;test.py&quot;, line 5, in &lt;module&gt;
    iterable = iter(new_set)
TypeError: 'ConditionSet' object is not iterable
</code></pre> <p>Why is it that I cannot enumerate the <code>ConditionSet</code> object? It has finite contents, so I would assume this should be possible?</p>
<p>ConditionSet does not try hard to resolve its expression with an infinite base set. For linear or quadratic univariate inequalities, it wouldn't be hard to do so at instantiation, however. But for now you have to do so on your own:</p> <pre><code>&gt;&gt;&gt; c = ConditionSet(x, x &lt; 7, S.Naturals)
&gt;&gt;&gt; solveset(c.args[1], c.args[0], domain=S.Reals).intersection(c.args[-1])
Range(1, 7, 1)
</code></pre> <p>The Range is iterable.</p> <pre><code>&gt;&gt;&gt; next(iter(_))
1
</code></pre> <p>(In theory you should be able to set <code>domain=c.args[-1]</code> but the solver does not handle such sets well, yet.)</p>
python|set|sympy
0
1,902,723
69,437,461
How to make new dataframe of every second or third id with Python Pandas?
<p>I have a data frame with id, username, and date. I sorted the data frame by id. How can I make new data frames that contain every second or third id?</p> <p>Here is my code, where I made a data frame and sorted it by id:</p> <pre><code>import pandas as pd

id = ['11', '11', '11', '15', '15', '15', '23', '23', '25']
username = ['usera','userb','userc','userd','usere','userf','userd','usere','userf']
date = ['2021-05-04','2021-05-05','2021-05-05','2021-05-06','2021-06-07','2021-06-08','2021-07-09','2021-03-09','2021-04-10']

df = pd.DataFrame({'id': id, 'username': username, 'date': date})
dx = df.sort_values(by=['id'], ignore_index=True)  # Sort because the dataframe is not sorted by default
print(dx)
</code></pre> <p>Here is some expected output:</p> <pre><code># dx = get every second value
   id username        date
0  11    usera  2021-05-04
1  11    userb  2021-05-05
2  11    userc  2021-05-05
6  23    userd  2021-07-09
7  23    usere  2021-03-09
....

# Get every third by id:
   id username        date
0  11    usera  2021-05-04
1  11    userb  2021-05-05
2  11    userc  2021-05-05
8  25    userf  2021-04-10
.....
</code></pre> <p>In my task the usernames and the rows here are not relevant, just the ids. I need to get every second or third id in the same dataframe.</p>
<p>You could <code>groupby</code> the cumcount of the groups and transform to a dict:</p> <pre><code>dfs = dict(list(df.groupby(df.groupby('id').cumcount().add(1))))
</code></pre> <p>output:</p> <pre><code>{1:    id username        date
 0  11    usera  2021-05-04
 3  15    userd  2021-05-06
 6  23    userd  2021-07-09
 8  25    userf  2021-04-10,
 2:    id username        date
 1  11    userb  2021-05-05
 4  15    usere  2021-06-07
 7  23    usere  2021-03-09,
 3:    id username        date
 2  11    userc  2021-05-05
 5  15    userf  2021-06-08}
</code></pre> <p>Then each value in the <code>dfs</code> dictionary is a sub-dataframe:</p> <pre><code>&gt;&gt;&gt; dfs[1]  # 1st occurrence
   id username        date
0  11    usera  2021-05-04
3  15    userd  2021-05-06
6  23    userd  2021-07-09
8  25    userf  2021-04-10
</code></pre>
python|pandas
1
1,902,724
48,198,676
Jupyter input, display, print execution order is chaotic
<p>I'm using Jupyter and my Python version is 3.5. In my <code>while</code> loop, execution order is not correct; the <code>input</code> from one iteration is shown <em>before</em> the final <code>print</code> of the previous iteration. This is my code:</p> <pre><code>from IPython.display import display
import pandas as pd

df = pd.DataFrame({'a':[1,2],'b':[3,4]})
while(True):
    a = input(&quot;please input:\n&quot;)
    display(df.head())
    print (a)
</code></pre> <p>The execution result is <a href="https://i.stack.imgur.com/KpzAv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KpzAv.png" alt="screenshot of execution" /></a></p>
<p>I was able to reproduce the behavior under Chrome 63 on OSX. I added several more consecutive <code>print(a)</code> statements and where the input field ends up is random: before them, after them, or in between. My suspicion is that each display and print call sends a request to the server but awaits its result asynchronously, so that <code>input</code> may be called again before the result from <code>print(a)</code> is ready.</p> <p>It is not an elegant solution, but adding a small sleep (<code>time.sleep(.02)</code>) after <code>print(a)</code> fixes the problem for me.</p>
python|pandas|output|jupyter-notebook
4
1,902,725
64,392,813
TypeError at /posts/12/tesing/like/ quote_from_bytes() expected bytes
<p>Well, I am trying to add a like toggle (like button) in my project and got this error. How can I fix this error?</p> <p>view.py</p> <pre><code>class PostLikeToggle(RedirectView):
    def get_redirect_url(self, *args, **kwargs):
        slug = self.kwargs.get('slug')
        print(slug, 'slug')
        pk = self.kwargs.get('pk')
        print(pk, 'pk')
        obj = get_object_or_404(Post, pk=pk, slug=slug)
        print(obj, 'post')
        user = self.request.user
        if user.is_authenticated:
            if user in obj.likes.all():
                obj.likes.remove(user)
            else:
                obj.likes.add(user)
        return redirect(f'/posts/{pk}/{slug}')
</code></pre> <p>traceback:</p> <pre><code>Traceback (most recent call last):
  File &quot;C:\Users\AHMED\anaconda3\lib\site-packages\django\core\handlers\exception.py&quot;, line 34, in inner
    response = get_response(request)
  File &quot;C:\Users\AHMED\anaconda3\lib\site-packages\django\core\handlers\base.py&quot;, line 115, in _get_response
    response = self.process_exception_by_middleware(e, request)
  File &quot;C:\Users\AHMED\anaconda3\lib\site-packages\django\core\handlers\base.py&quot;, line 113, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File &quot;C:\Users\AHMED\anaconda3\lib\site-packages\django\views\generic\base.py&quot;, line 71, in view
    return self.dispatch(request, *args, **kwargs)
  File &quot;C:\Users\AHMED\anaconda3\lib\site-packages\django\views\generic\base.py&quot;, line 97, in dispatch
    return handler(request, *args, **kwargs)
  File &quot;C:\Users\AHMED\anaconda3\lib\site-packages\django\views\generic\base.py&quot;, line 193, in get
    return HttpResponseRedirect(url)
  File &quot;C:\Users\AHMED\anaconda3\lib\site-packages\django\http\response.py&quot;, line 485, in __init__
    self['Location'] = iri_to_uri(redirect_to)
  File &quot;C:\Users\AHMED\anaconda3\lib\site-packages\django\utils\encoding.py&quot;, line 147, in iri_to_uri
    return quote(iri, safe=&quot;/#%[]=:;$&amp;()+,!?*@'~&quot;)
  File &quot;C:\Users\AHMED\anaconda3\lib\urllib\parse.py&quot;, line 839, in quote
    return quote_from_bytes(string, safe)
  File &quot;C:\Users\AHMED\anaconda3\lib\urllib\parse.py&quot;, line 864, in quote_from_bytes
    raise TypeError(&quot;quote_from_bytes() expected bytes&quot;)

Exception Type: TypeError at /posts/12/tesing/like/
Exception Value: quote_from_bytes() expected bytes
</code></pre> <p>If more detail is required, tell me and I will update my question with that information.</p>
<p><a href="https://docs.djangoproject.com/en/3.1/ref/class-based-views/base/#django.views.generic.base.RedirectView.get_redirect_url" rel="nofollow noreferrer"><code>get_redirect_url</code></a> should return a string, not an <code>HttpResponse</code></p> <p>Change it to:</p> <pre><code>return f'/posts/{pk}/{slug}' </code></pre>
python|django|django-models|django-rest-framework|django-views
1
1,902,726
70,496,171
Creating website with button that executes python script
<p>My python program uses an input variable (a number 1-10) each day. Preferably, I want to create a simple website with only 1 input field (number) and a button which then executes the python script. Is there any easy way to do so? I don't have any experience with making websites.</p>
<p>As far as I understand, you wish to create a website which has one input field and a button visible on the frontend, and on click of the button you need to run a Python script.</p> <ol> <li>You can write APIs to perform the action and call the API using jQuery from the frontend.</li> <li>You can render HTML pages using Python, with its own routing techniques; check out the video here <a href="https://www.youtube.com/watch?v=9MHYHgh4jYc" rel="nofollow noreferrer">https://www.youtube.com/watch?v=9MHYHgh4jYc</a></li> </ol>
python|web
0
1,902,727
69,670,697
Trying to sort in an API
<p>im new in this of &quot;Calling APIs&quot;, so i don't understand a lot, I was wondering if anyone can help me.</p> <p>I want to sort the cryptocurrencies from coinmarketcap whit their API, in the Api documentation, says that there is a parameter 'sort', and i can use differents values, like 'name','symbol' and 'recently added'. So if i want to get the last cryptocurrency listed, i have to sort whit 'recently added'. But i dont know how to do it. Here is a piece of code that i can write, but whit that code i get All the cryptocurrencys and i dont know how to sort it. Thanks</p> <pre><code>import requests from requests import Request, Session from requests.exceptions import ConnectionError, Timeout, TooManyRedirects import json url = &quot;https://pro-api.coinmarketcap.com/v1/cryptocurrency/listings/latest&quot; parameters = { 'start':'1', 'limit':'5000', 'convert':'USD' } headers = { 'Accepts': 'application/json', 'X-CMC_PRO_API_KEY': 'private key', } session = Session() session.headers.update(headers) response = session.get(url, params=parameters) print(response.json()) </code></pre>
<p>I guess it's</p> <pre><code>parameters = {
  'start':'1',
  'limit':'5000',
  'convert':'USD',
  'sort':'recently_added'
}
</code></pre> <p>assuming you interpreted the docs right.</p> <p>It actually looks like it wants <code>'sort':'date_added'</code> NOT <code>'recently_added'</code>, based on the docs at <a href="https://coinmarketcap.com/api/documentation/v1/#operation/getV1CryptocurrencyListingsLatest" rel="nofollow noreferrer">https://coinmarketcap.com/api/documentation/v1/#operation/getV1CryptocurrencyListingsLatest</a></p>
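You can check what the extra parameter does to the request without spending API credits, because `requests` simply appends the dict as a query string. A stdlib sketch that builds the same URL locally (no network call is made, and `sort_dir` is an assumption based on the same docs page):

```python
from urllib.parse import urlencode

base = "https://pro-api.coinmarketcap.com/v1/cryptocurrency/listings/latest"
parameters = {
    'start': '1',
    'limit': '5000',
    'convert': 'USD',
    'sort': 'date_added',   # per the /listings/latest docs
    'sort_dir': 'desc',     # assumed: newest listings first
}

url = base + '?' + urlencode(parameters)
print(url)
```

This is the URL that `session.get(url, params=parameters)` would request, so you can eyeball the query string before wiring it into the real call.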
python|api
0
1,902,728
72,894,986
Time series split in python taking into account different products
<p>I have df (pandas) containing temporal data for several products (see below). The products may not start or finish in the same date as the others (eg. prod 1 and 2 series finished before dn, while prod 4 started somewhere between d3 and dn). I want to do a time series split, taking into account each produt. By doing that, I can have the same date on training and test, depending on the product. How do I do that?</p> <pre><code>date prod value d1 p1 10 d1 p2 10 d2 p1 15 d2 p2 12 d3 p1 8 d3 p2 5 d3 p3 7 . dn p2 20 dn p4 10 </code></pre>
<p>you could use:</p> <pre><code>d = {prod: group.set_index('date')['value'] for prod, group in df.groupby('prod')} </code></pre> <p>if you want all products have the same index:</p> <pre><code>d = {prod: group.set_index('date')['value'].reindex(df['date'].unique()) for prod, group in df.groupby('prod')} </code></pre>
python|pandas|time-series|cross-validation
0
1,902,729
55,686,085
Dependent options in docopt
<p>I was wondering if I could have dependent options in docopt. </p> <p>example:</p> <pre><code>""" Description: Flash a system with the manufacturing software from the specifiedx folder. Usage: flash_drop.py (--drop-dir=&lt;DIR&gt;) [--factory-reset=&lt;BOOL&gt;] [--flash-all=&lt;BOOL&gt;] [--flash-system1=&lt;BOOL&gt; | --flash-system2=&lt;BOOL&gt;] flash_drop.py -h | --help flash_drop.py --version Options: -h --help Show this screen. --version Show version. --drop-dir=DIR Path to the drop directory --factory-reset=BOOL Factory reset the chips on all selected devices. [default: False] --flash-all=BOOL Flash all devices. [default: False] --flash-system1=BOOL Flash first system. [default: False] --flash-system2=BOOL Flash second system. [default: False] """ </code></pre> <p>Namely, the value of an option is ignored if a previous option hasn't been selected. So for instance, the value for <code>--flash-system2</code> is ignored unless <code>--flash-system1</code> is set </p>
<p>Not with a single usage pattern, but you can do it with 2 patterns:</p> <pre><code>Usage: flash_drop.py (--drop-dir=&lt;DIR&gt;) [options] [--flash-system1=&lt;BOOL&gt;] flash_drop.py (--drop-dir=&lt;DIR&gt;) [options] --flash-system1=&lt;BOOL&gt; --flash-system2=&lt;BOOL&gt; </code></pre> <p>But probably better with three pattern, easier to read IMO:</p> <pre><code>Usage: flash_drop.py (--drop-dir=&lt;DIR&gt;) [options] flash_drop.py (--drop-dir=&lt;DIR&gt;) [options] --flash-system1=&lt;BOOL&gt; flash_drop.py (--drop-dir=&lt;DIR&gt;) [options] --flash-system1=&lt;BOOL&gt; --flash-system2=&lt;BOOL&gt; flash_drop.py -h | --help flash_drop.py --version </code></pre> <p><a href="http://try.docopt.org/?doc=Description%3A%0D%0A++Flash+a+system+with+the+manufacturing+software+from+the+specifiedx+folder.%0D%0A%0D%0AUsage%3A%0D%0A++flash_drop.py+%28--drop-dir%3D%3CDIR%3E%29+%5Boptions%5D%0D%0A++flash_drop.py+%28--drop-dir%3D%3CDIR%3E%29+%5Boptions%5D+--flash-system1%3D%3CBOOL%3E%0D%0A++flash_drop.py+%28--drop-dir%3D%3CDIR%3E%29+%5Boptions%5D+--flash-system1%3D%3CBOOL%3E+--flash-system2%3D%3CBOOL%3E%0D%0A++flash_drop.py+-h+%7C+--help%0D%0A++flash_drop.py+--version%0D%0A%0D%0AOptions%3A%0D%0A++-h+--help+++++++++++++++++Show+this+screen.%0D%0A++--version+++++++++++++++++Show+version.%0D%0A++--drop-dir%3DDIR++++++++++++Path+to+the+drop+directory%0D%0A++--factory-reset%3DBOOL++++++++++Factory+reset+the+chips+on+all+selected+devices.+%5Bdefault%3A+False%5D%0D%0A++--flash-all%3DBOOL++++++++++++++Flash+all+devices.+%5Bdefault%3A+False%5D%0D%0A++--flash-system1%3DBOOL++++++++++Flash+first+system.+%5Bdefault%3A+False%5D%0D%0A++--flash-system2%3DBOOL++++++++++Flash+second+system.+%5Bdefault%3A+False%5D&amp;argv=--drop-dir%3DDDD+--factory-reset%3Dtrue+--flash-system1%3Dtrue+--flash-system2%3Dtrue" rel="nofollow noreferrer">Live demo</a></p> <hr> <p>P.S.</p> <p>Well, you <em>can</em>, <em>technically</em>, do it with a single pattern, but it starts to get really long...</p> 
<pre><code>Usage: flash_drop.py (--drop-dir=&lt;DIR&gt;) [options] [(--flash-system1=&lt;BOOL&gt;) | (--flash-system1=&lt;BOOL&gt; --flash-system2=&lt;BOOL&gt;)] </code></pre> <p>Lines can be broken, so maybe:</p> <pre><code>Usage: flash_drop.py (--drop-dir=&lt;DIR&gt;) [options] [(--flash-system1=&lt;BOOL&gt;) | (--flash-system1=&lt;BOOL&gt; --flash-system2=&lt;BOOL&gt;)] </code></pre> <p>Personally I prefer the 3-pattern solution.</p>
python|docopt
0
1,902,730
55,977,680
Visualizing a frozen graph_def.pb
<p>I am wondering how to go about visualization of my frozen graph def. I need it to figure out my tensorflow networks input and output nodes. I have already tried several methods to no avail, like the summarize graph tool. Does anyone have an answer for some things that I can try? I am open to clarifying questions, thanks in advance.</p>
<p>You can try to use TensorBoard. It is on the Tensorflow website...</p>
python|tensorflow|tensorboard
0
1,902,731
49,866,204
sort list of dictionaries based on keys inside the list
<p>How can I sort this based on the <code>key</code> in <code>dictionary</code>?</p> <pre><code>df1 = [('f', {'abe': 1}), ('f', {'tbeli': 1}), ('f', {'mos': 1}), ('f', {'esc': 1})] </code></pre> <p>I tried this </p> <pre><code>L1 = [year for (title, year) in (sorted(df1.items(), key=lambda t: t[0]))] </code></pre> <p>I want </p> <pre><code>df1 = [('f', {'abe': 1}), ('f', {'esc': 1}), ('f', {'mos': 1}), ('f', {'tbeli': 1})] </code></pre> <p>Thanks </p>
<p>You could have a separate function to get the only item in an iterable:</p> <pre><code>def only(iterable): x, = iterable return x </code></pre> <p>Dicts are iterables of keys:</p> <pre><code>&gt;&gt;&gt; only({'abe': 1}) 'abe' &gt;&gt;&gt; only({'tbeli': 1}) 'tbeli' </code></pre> <p>so you can use it for your sort:</p> <pre><code>sorted(df1, key=lambda t: only(t[1])) </code></pre>
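Putting the pieces together, a complete runnable version with the sample data from the question:

```python
def only(iterable):
    # Unpack the single element; raises ValueError if there isn't exactly one.
    x, = iterable
    return x

df1 = [('f', {'abe': 1}), ('f', {'tbeli': 1}), ('f', {'mos': 1}), ('f', {'esc': 1})]

# Sort by the sole key of each tuple's dictionary.
result = sorted(df1, key=lambda t: only(t[1]))
print(result)
# [('f', {'abe': 1}), ('f', {'esc': 1}), ('f', {'mos': 1}), ('f', {'tbeli': 1})]
```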
python|sorting|dictionary
2
1,902,732
64,753,909
Python group list to sub-lists lists that are monotonic with equal diff between elements
<pre><code>l = [2,4,6,12,14,16,21,27,29,31] </code></pre> <p>I want to split it to lists, such that each list's elements are a monotonic list with diff of 2 between elements:</p> <pre><code>new_l = [[2,4,6], [12,14,16],[21], [27,29,31]] </code></pre> <p>What is the most efficient way to do this?</p>
<p>You could identify indices where to split and then apply <code>np.split</code> like so:</p> <pre><code>np.split(l, np.flatnonzero(np.diff(l)!=2) + 1) </code></pre> <p>Output:</p> <pre><code>[array([2, 4, 6]), array([12, 14, 16]), array([21]), array([27, 29, 31])] </code></pre> <p>However, working with arrays of different lengths is never efficient, which is why <code>np.split</code> is quite slow.</p>
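If NumPy isn't a requirement, the same split can be sketched in pure Python by starting a new group whenever the gap to the previous element isn't 2 — this avoids the ragged-array overhead the answer mentions:

```python
def split_by_step(values, step=2):
    # Start a new group whenever the difference to the previous element != step.
    groups = []
    for v in values:
        if groups and v - groups[-1][-1] == step:
            groups[-1].append(v)
        else:
            groups.append([v])
    return groups

l = [2, 4, 6, 12, 14, 16, 21, 27, 29, 31]
print(split_by_step(l))
# [[2, 4, 6], [12, 14, 16], [21], [27, 29, 31]]
```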
python|list|numpy|vectorization|numpy-ndarray
1
1,902,733
64,152,302
Avoid running script when refreshing page in Django
<p>I am running my external python script using submit button on my web page</p> <p><strong>script.py:</strong></p> <pre><code>def function(): print('test it') </code></pre> <p><strong>views.py</strong></p> <pre><code>from script import function def func_view(request): if request .method == 'POST': function() return render('my.html',{'run_script':function}) </code></pre> <p>Problem: when I am opening <code>my</code> page, program is running, it's not waiting until I will click this button.</p> <p>additionally: when I add in my <code>if request loop</code> File input, after refreshing web page , Django is adding same file again.</p> <p>What I am doing wrong here?</p>
<p>Problem solved by redirecting after handling the POST (the Post/Redirect/Get pattern), so refreshing the page re-issues a GET instead of re-running the script:</p> <pre><code>from script import function def func_view(request): if request.method == 'POST': function() return redirect(func_view) return render('my.html',{'run_script':function}) </code></pre>
python|django
0
1,902,734
63,946,473
Why is this script printing to a file once and then rerunning and printing to console?
<p>I have wrote code for both a bat file and a script that should use the bat to run the script. It runs the script correctly, but when I tried to get it to save it's output to a file, it runs the script twice, saving once to the file and once to the console. I would like some help figuring this out since I can't find any anywhere else.</p> <p>Python Script-</p> <pre><code>import time import sys def goalshineinput(): #gets gfx names for print global gfxName #name of file without .dds gfxName = input(&quot;Enter GFX name exluding .dds or other file format. Type quit to quit.&quot;) if gfxName in {'quit', 'Quit', 'QUIT'}: #used to exit if quit is typed in exit('Goodbye') else: global gfxNameFile #name of file with .dds gfxNameFile = input(&quot;Enter GFX name including .dds or other file format. Type quit to quit.&quot;) confirmation() def confirmation(): #confirms the user wants those, if not, restarts goalshineinput if gfxNameFile in {'Quit', 'quit', 'QUIT'}: #used to exit if quit is typed in exit('Goodbye') else: print(gfxName) #for testing, combine into complete sentence print(gfxNameFile) #for testing, combine into complete sentence sure = input('Are you sure? Y|N') if sure in {'y', 'Y'}: print('Confirmation Code Works!') goalshinework() elif sure in {'n', 'N'}: goalshineinput() else: print(&quot;Please Enter Y or N&quot;) time.sleep(2) confirmation() def goalshinework(): #creates the thing to be printed print('SpriteType = {') print('\t''name = ''\&quot;' + gfxName + '\&quot;') print('\t''texturefile = ''\&quot;''gfx/interface/goals/' + gfxNameFile + '\&quot;') time.sleep(3) goalshineinput() </code></pre> <p>Bat File-</p> <pre><code>@echo off %1goalshine.py &gt; output.txt @py.exe goalshine.py %* pause </code></pre>
<p>You actually <em>run</em> it twice from the batch file, assuming you haven't provided an argument (the <code>%1</code>) to said batch file (comments added):</p> <pre><code>@echo off %1goalshine.py &gt; output.txt - Run once, capturing output. @py.exe goalshine.py %* - Run again, output to screen. pause </code></pre> <p>If you only want it to run once, you should probably remove one of those lines.</p>
python|python-3.x
0
1,902,735
65,418,357
Python, store results in a dict or similar
<p>I'm not sure if a dict is the best way to do this. If don't, I'd be glad if you guys could tell me the best way to do this.</p> <p>I want to store <code>usernames</code>, and inside of this, I want to store a <code>list of objects</code>. For example.</p> <pre><code> Name: 'Sophy', Lst: [ object1, object2, object3... ], Name: 'Osprey', Lst: [ object1, object2, object3... ], .... </code></pre> <p>Thing is, I'm not sure if a dict is what I'm looking for. and if so, I'm not sure how to make a dict to look like this. I also will need later to iterate through all objects of each user.</p>
<p>This is very open ended; you can do a variety of things, like:</p> <ul> <li><p>make it a dictionary with the username as key and your list as value</p> <pre><code> data = {'Sophy':[ object1, object2, object3... ], 'Osprey':[ object1, object2, object3... ], ... } </code></pre> </li> <li><p>a list of dictionaries</p> <pre><code> data = [ {&quot;Name&quot;: 'Sophy', &quot;Lst&quot;: [ object1, object2, object3... ]}, {&quot;Name&quot;: 'Osprey', &quot;Lst&quot;: [ object1, object2, object3... ]}, ... ] </code></pre> </li> <li><p>a class</p> <pre><code> class User: def __init__(self,name, lst): self.name=name self.lst=lst data = [User('Sophy', [ object1, object2, object3... ]), User('Osprey', [ object1, object2, object3... ]), ... ] </code></pre> </li> </ul> <p>among other things...</p>
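To illustrate the iteration mentioned at the end of the question ("iterate through all objects of each user"), here is a runnable sketch of the class variant; the object strings are stand-ins for the real objects:

```python
class User:
    def __init__(self, name, lst):
        self.name = name
        self.lst = lst

data = [
    User('Sophy', ['object1', 'object2']),
    User('Osprey', ['object3']),
]

# Iterate through all objects of each user.
pairs = [(user.name, obj) for user in data for obj in user.lst]
print(pairs)
# [('Sophy', 'object1'), ('Sophy', 'object2'), ('Osprey', 'object3')]
```

The dictionary variant iterates the same way with `for name, objects in data.items():`.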
python|list|dictionary
2
1,902,736
71,824,804
Python strings split and assign to variable
<p>I am trying to get google images's url using python. I used to google_images_download module to it. I have get code in stack overflow.</p> <pre><code>from io import BytesIO, TextIOWrapper from google_images_download import google_images_download import sys old_stdout = sys.stdout sys.stdout = TextIOWrapper(BytesIO(), sys.stdout.encoding) response = google_images_download.googleimagesdownload() arguments = { &quot;keywords&quot;: &quot;stackoverflow&quot;, &quot;limit&quot;: 3, &quot;print_urls&quot;: True, &quot;size&quot;: &quot;large&quot;, } paths = response.download(arguments) sys.stdout.seek(0) output = sys.stdout.read() sys.stdout.close() sys.stdout = old_stdout for line in output.split('\n'): if line.startswith(&quot;Image URL:&quot;): s = line.replace(&quot;Image URL: &quot;, &quot;&quot;) print(s) </code></pre> <p>print(s) in last line is print three links of images. I want assign that three links to different variables. How can I do it?</p>
<p>Just create a list of URLs:</p> <pre class="lang-py prettyprint-override"><code>url_list = [] for line in output.split('\n'): if line.startswith(&quot;Image URL:&quot;): s = line.replace(&quot;Image URL: &quot;, &quot;&quot;) url_list.append(s) </code></pre>
python
0
1,902,737
68,752,085
Discord.py: Commands got corrupted when implement an error handler
<p>So, I'm creating a music bot that play musics when a user specify a Youtube URL, here's the code:</p> <pre class="lang-py prettyprint-override"><code>@commands.command() # I'm creating this command in a cog by the way async def play(self, ctx, url: str): # Stop the current track if a song was playing. ctx.voice_client.stop() # Just some stuff that help the bot play the specified song FFMPEG_OPTIONS = { &quot;before_options&quot;: &quot;-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5&quot;, &quot;options&quot;: &quot;-vn&quot; } YDL_OPTIONS = {&quot;formats&quot;: &quot;bestaudio&quot;} vc = ctx.voice_client with youtube_dl.YoutubeDL(YDL_OPTIONS) as ydl: info = ydl.extract_info(url, download=False) url2 = info[&quot;formats&quot;][0][&quot;url&quot;] src = await discord.FFmpegOpusAudio.from_probe(url2, **FFMPEG_OPTIONS) vc.play(src) </code></pre> <p>Realizing that my command needs an error handler to check whether the user entered a URL as an argument for the command, if not then send an error message.</p> <pre class="lang-py prettyprint-override"><code>@play.error async def play_error_handler(self, ctx, error): if isinstance(error, commands.MissingRequiredArgument): await ctx.send(&quot;Please pass in a url in order to play a song.&quot;) </code></pre> <p>But then when I tested it, it didn't show me any messages neither on the console nor on the chat; another weird part of this is when I test other commands like ping command or so, it caused the same problem as above. The only thing that worked is the client mention event, basically what it does is it sends a message about the bot's prefix to the user when someone mentions it:</p> <pre class="lang-py prettyprint-override"><code>@client.event # this event is located on the main file async def on_message(message): if client.user.mentioned_in(message): await message.channel.send(f'My prefix here is `.`') </code></pre> <p>I tried re-giving it permissions and regenerate the token but neither of them worked. 
I checked the error handler syntax on this <a href="https://www.youtube.com/watch?v=_2ifplRzQtM&amp;t=306s" rel="nofollow noreferrer">Youtube tutorial</a> and I don't see any problems going on with it (maybe). <br> <br> Why is this happening? Did I missed something important perhaps?</p>
<p>After testing for a while, I realized that my client's mention event is overriding other commands. So I decided to remove it and it worked like normal.</p> <p>for more information, please check: <a href="https://stackoverflow.com/questions/49331096/why-does-on-message-stop-commands-from-working">Why does on_message stop commands from working?</a></p>
python|discord.py
0
1,902,738
5,480,742
Incorporating multiple login systems?
<p>I have something simple right now, userdb schema is:</p> <ul> <li>userid - autoincrement id email</li> <li>email address</li> <li>password</li> </ul> <p>I want to incorporate Facebook and twitter, how would i deal with it on the DB side?</p>
<p>You can do this in many ways: either you store most of the data in a generic user table (as you are about to) and keep the provider details separate.</p> <p>Or you make a design where you can connect multiple logins to the same user. This will end up with something like</p> <ul> <li>id user</li> <li>id facebookuser (nullable)</li> <li>id twitteruser (nullable)</li> </ul> <p>This may get you N e-mail addresses (and still no password! since you aren't the provider of the account); or none at all. It depends how much this user trusts you with each provider.</p> <p>Edit: You might also want to normalize the data without nullables. You can do this by having</p> <ul> <li>id_user</li> <li>id_facebookuser id_user</li> <li>id_twitteruser id_user</li> </ul>
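A minimal sketch of the normalized variant using SQLite; the table and column names here are illustrative assumptions, not prescribed by the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# One generic user row; each provider gets its own link table
# pointing back at users.id, so one user can have several logins.
cur.executescript("""
CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    email TEXT
);
CREATE TABLE facebook_accounts (
    facebook_id TEXT PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id)
);
CREATE TABLE twitter_accounts (
    twitter_id TEXT PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id)
);
""")
cur.execute("INSERT INTO users (email) VALUES (?)", ("user@example.com",))
user_id = cur.lastrowid
cur.execute("INSERT INTO facebook_accounts VALUES (?, ?)", ("fb123", user_id))
cur.execute("INSERT INTO twitter_accounts VALUES (?, ?)", ("tw456", user_id))

# Look up the local user from a Facebook login.
row = cur.execute(
    "SELECT u.email FROM users u "
    "JOIN facebook_accounts f ON f.user_id = u.id "
    "WHERE f.facebook_id = ?", ("fb123",)
).fetchone()
print(row)  # ('user@example.com',)
```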
python|database|database-design|authentication
3
1,902,739
61,612,072
Python Selenium: Find an input location where unique identifier goes blank when input is selected
<p>I'm trying to select the User ID &amp; password inputs on this page: <a href="https://kite.zerodha.com/" rel="nofollow noreferrer">https://kite.zerodha.com/</a></p> <p>The User ID input element looks like this: </p> <p><code>&lt;input type="text" placeholder="User ID" autocorrect="off" maxlength="6" autofocus="autofocus" autocapitalize="characters" animate="true" label="" rules="[object Object]" dynamicwidthsize="8"&gt;</code> </p> <p>However, when I click into the cell, it becomes this:</p> <p><code>&lt;input type="text" placeholder="" autocorrect="off" maxlength="6" autofocus="autofocus" autocapitalize="characters" animate="true" label="" rules="[object Object]" dynamicwidthsize="8"&gt;</code> </p> <p>Essentially, the only identifiable element "placeholder" becomes blank and my script throws an error. It looks like they are running a script that makes it blank on purpose. </p> <p>How can I select these fields in Selenium?</p> <p>Thanks for your help! </p>
<p>Induce <code>WebDriverWait</code>, waiting for <code>element_to_be_clickable()</code>, with the following XPath.</p> <pre><code>driver.get("https://kite.zerodha.com/") WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH,"//label[text()='User ID']/following::input[1]"))).send_keys("KK") WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH,"//label[text()='Password']/following::input[1]"))).send_keys("KK1234") </code></pre> <hr> <p>Or use the following CSS selectors.</p> <pre><code>driver.get("https://kite.zerodha.com/") WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,".uppercase.su-input-group&gt;input"))).send_keys("KK") WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR,"div[class='su-input-group']&gt;input"))).send_keys("KK1234") </code></pre> <p>You need to import the following libraries.</p> <pre><code>from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By </code></pre> <p>Browser snapshot.</p> <p><a href="https://i.stack.imgur.com/vHjjs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vHjjs.png" alt="enter image description here"></a></p>
python-3.x|selenium|selenium-webdriver
1
1,902,740
71,160,897
How to convert excel if else condition in python dataframe column
<p>I am trying to convert excel if else condition in python dataframe columns, can anyone help me out in this:</p> <p><strong>Input: df</strong></p> <pre><code> Name1 Name2 Name3 Name4 Value1 Value2 Value3 Value4 MaxValue 0 John1 John2 John3 John4 10 3 5 7 10 1 Sony1 Sony2 Sony3 Sony4 2 12 4 8 12 2 Mark1 Mark2 Mark3 Mark4 5 13 0 3 13 3 Biky1 Biky2 Biky3 Biky4 7 7 5 44 44 4 Rose1 Rose2 Rose3 Rose4 7 0 9 7 9 </code></pre> <p>Name values may not be ended with 1/2/3 etc this may have different name also.</p> <p><strong>Output: How to calculate the <strong>Final_Name</strong> column</strong></p> <pre><code> Name1 Name2 Name3 Name4 Value1 Value2 Value3 Value4 MaxValue Final_Name 0 John1 John2 John3 John4 10 3 5 7 10 John1 1 Sony1 Sony2 Sony3 Sony4 2 12 4 8 12 Sony2 2 Mark1 Mark2 Mark3 Mark4 5 13 0 3 13 Mark2 3 Biky1 Biky2 Biky3 Biky4 7 7 5 44 44 Biky4 4 Rose1 Rose2 Rose3 Rose4 7 0 9 7 9 Rose3 </code></pre> <p>In excel we, can write something like this:</p> <pre><code>=IF(I2=H2,D2,IF(I2=G2,C2,IF(I2=F2,B2,IF(I2=E2,A2,&quot;&quot;)))) </code></pre> <p><a href="https://i.stack.imgur.com/vKSx8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vKSx8.png" alt="enter image description here" /></a></p>
<p>You can first <code>filter</code> the df into two parts, then use the position of the row-wise maximum value to locate the corresponding Name:</p> <pre><code>v = df.filter(regex = '^Value') name = df.filter(regex = '^Name') df['out'] = name.values[df.index, v.columns.get_indexer(v.idxmax(1))] df Out[188]: Name1 Name2 Name3 Name4 Value1 Value2 Value3 Value4 MaxValue out 0 John1 John2 John3 John4 10 3 5 7 10 John1 1 Sony1 Sony2 Sony3 Sony4 2 12 4 8 12 Sony2 2 Mark1 Mark2 Mark3 Mark4 5 13 0 3 13 Mark2 3 Biky1 Biky2 Biky3 Biky4 7 7 5 44 44 Biky4 4 Rose1 Rose2 Rose3 Rose4 7 0 9 7 9 Rose3 </code></pre>
python-3.x|pandas|dataframe
1
1,902,741
56,454,462
Get a twilio phone number SID, at a later date, if you don't capture it when you purchase twilio number
<p>Is there any easy way to get a phone number SID, at a later date, if you don't capture it when you purchase twilio number.</p> <p>It is easy to capture a phone number sid when purchasing number, but solutions I found for capturing it at later date seem complicated and use loops.</p> <p>Capturing Phone Number SID when buying number:</p> <pre><code>from twilio.rest import Client account_sid = 'accountsid' auth_token = 'your_auth_token' client = Client(account_sid, auth_token) # Purchase the phone number number = client.incoming_phone_numbers \ .create(phone_number=number) print(number.sid) </code></pre>
<p>You can filter with "exact match". </p> <p>For example, if your number is <code>+17775553333</code>, try this code to get the <code>sid</code>.</p> <pre class="lang-py prettyprint-override"><code>from twilio.rest import Client account_sid = 'ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' auth_token = 'your_auth_token' client = Client(account_sid, auth_token) incoming_phone_numbers = client.incoming_phone_numbers.list(phone_number='+17775553333', limit=20) for record in incoming_phone_numbers: print(record.sid) </code></pre>
python|twilio|twilio-api
1
1,902,742
56,561,266
Sort text file lines using python by timestamp
<p>I have a txt file where line 1-5 are all words and line 6 and above has <code>timestamp</code> at the beginning as shown:</p> <pre><code>This is a document1 This is a document2 This is a document3 This is a document4 This is a document5 2019-05-27 07:00:00, value1, value2, value3 2019-05-27 06:38:00, value1, value2, value3 2019-05-27 07:05:00, value1, value2, value3 </code></pre> <p>How can I sort lines 6 to the last line where the earliest time is on top and latest time at below?</p> <p>This is what I have attempted based on another stack overflow question but did not work.</p> <pre><code> lines = sorted(open(outputFile.txt).readlines(), key=lambda line: line[5:-1].split(",")[0]) outFile.close() </code></pre>
<p>If you don't "need" a one-liner, you can do the following:</p> <pre class="lang-py prettyprint-override"><code># Read all lines with open("file.txt") as f: lines = f.readlines() # Keep only from 6th line lines = lines[5:] # Sort based on the date of each line lines.sort(key = lambda l : l.split(',')[0]) </code></pre> <p>Untested, but should work.</p>
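A self-contained version of the same idea, using the sample lines from the question; since the timestamps share one fixed format, plain string comparison already sorts them chronologically:

```python
lines = [
    "This is a document1\n",
    "This is a document2\n",
    "This is a document3\n",
    "This is a document4\n",
    "This is a document5\n",
    "2019-05-27 07:00:00, value1, value2, value3\n",
    "2019-05-27 06:38:00, value1, value2, value3\n",
    "2019-05-27 07:05:00, value1, value2, value3\n",
]

header, data = lines[:5], lines[5:]       # keep the first 5 word lines as-is
data.sort(key=lambda line: line.split(",")[0])  # sort by the timestamp field
sorted_lines = header + data
print(sorted_lines[5])
# 2019-05-27 06:38:00, value1, value2, value3
```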
python
0
1,902,743
65,920,182
Manipulate a List in Python that Contains Other Lists
<p>I am trying to extract information from a list within a list within a list to end up with something like this from the information below: ('h': '0.77584', 'l': '0.77292'), ('h': '0.77521', 'l': '0.77206')</p> <p><code>print(dict)</code></p> <p><code>[{'complete': True, 'volume': 2290, 'time': '2021-01-15', 'mid': {'o': '0.77540', 'h': '0.77584', 'l': '0.77292', 'c': '0.77440'}}, {'complete': True, 'volume': 2312, 'time': '2021-01-15', 'mid': {'o': '0.77436', 'h': '0.77521', 'l': '0.77206', 'c': '0.77206'}}]</code></p> <p>Not sure how to go about it. I tried</p> <p><code>something = list(list(dict.items())[0].items())[3][1]</code></p> <p><code>print(something)</code></p> <p>However, this returned {'o': '0.77540', 'h': '0.77584', 'l': '0.77292', 'c': '0.77440'}</p> <p>How to get the requested data?</p>
<p>You can use the following list and dict comprehension</p> <pre><code>dict = [{'complete': True, 'volume': 2290, 'time': '2021-01-15', 'mid': {'o': '0.77540', 'h': '0.77584', 'l': '0.77292', 'c': '0.77440'}}, {'complete': True, 'volume': 2312, 'time': '2021-01-15', 'mid': {'o': '0.77436', 'h': '0.77521', 'l': '0.77206', 'c': '0.77206'}}] res = [{k:v for k, v in i['mid'].items() if k in 'hl'} for i in dict] print(res) </code></pre> <p>Output</p> <pre><code>[{'h': '0.77584', 'l': '0.77292'}, {'h': '0.77521', 'l': '0.77206'}] </code></pre>
python|list
2
1,902,744
68,953,680
Django DRF and generic relations: how to obtain content_object field from API response?
<p>I have implemented Generic Relations in Django DRF following the official guide and it works well, apart from the fact that I cannot seem to obtain the field content_object from my API response.</p> <p>Basically, I have a model called Document that can either be related to a model called Folder or a model called Collection.</p> <pre><code>class Document(models.Model): limit = models.Q(app_label='folders', model='folder') | models.Q( app_label='collections', model='collection') title = models.CharField(max_length=500) # Field necessari per la Generic Relation content_type = models.ForeignKey( ContentType, on_delete=models.CASCADE, null=True, blank=True, limit_choices_to=limit) object_id = models.PositiveIntegerField(null=True, blank=True) content_object = GenericForeignKey( 'content_type', 'object_id') category = models.CharField(max_length=30, blank=True, null=True) def __str__(self): return self.title class Folder(models.Model): ... documents = GenericRelation(Document) def __str__(self): return self.title class Collection(models.Model): ... documents = GenericRelation(Document) def __str__(self): return self.title </code></pre> <p>Here are my serializers:</p> <pre><code>class ContentObjectRelatedField(serializers.RelatedField): def to_representation(self, value): if isinstance(value, Folder): serializer = FolderSerializer(value) elif isinstance(value, Collection): serializer = CollectionSerializer(value) else: raise Exception('Unexpected type of object') return serializer.data class DocumentSerializer(serializers.ModelSerializer): class Meta: model = Document fields = ('id', 'title', 'content_type', 'object_id', 'category') class FolderSerializer(serializers.ModelSerializer): documents = DocumentSerializer(many=True, read_only=True) class Meta: model = Folder fields = (&quot;id&quot;, &quot;title&quot;, &quot;description&quot;, &quot;documents&quot;) depth = 1 (Collection serializer is essentially the same ad the Folder serializer, with its own fields). 
</code></pre> <p>I was expecting to be able to access the content of the field content_object when retrieving - with a GET request to the API endpoint - the documents. Instead, that field is not available. If I do try to add it to the fields listed in its serializers, it throws an error.</p> <p>How can I access that content so that I know, for each document, to what folder or what collection is belongs exactly?</p> <p>Thanks a lot.</p>
<p>Try this:</p> <pre><code>class ContentObjectRelatedField(serializers.RelatedField): def to_representation(self, value): if isinstance(value, Folder): serializer = FolderForDocumentSerializer(value) elif isinstance(value, Collection): serializer = CollectionForDocumentSerializer(value) # Define CollectionForDocumentSerializer in the same manner as FolderForDocumentSerializer else: raise Exception('Unexpected type of object') return serializer.data class FolderForDocumentSerializer(serializers.ModelSerializer): class Meta: model = Folder fields = (&quot;id&quot;, &quot;title&quot;, &quot;description&quot;) depth = 1 class DocumentSerializer(serializers.ModelSerializer): content_object = ContentObjectRelatedField(read_only=True) class Meta: model = Document fields = ('id', 'title', 'content_object', 'category') # Note that you can use DocumentSerializer and CollectionSerializer, but not in ContentObjectRelatedField.to_representation </code></pre> <p>Your frontend can deduce the type of content_object by inspecting the returned fields.</p>
python|django|django-rest-framework|generic-relations
1
1,902,745
72,770,717
Pandas: Retain column entries after inner join even if there are no common values
<p>I have 3 dataframes. I merge <code>df1</code> and <code>df2</code> through a common column. However, I need to use <code>df3</code> to find what values are allowed for pairs seen in groupby created. I could get this part done too using 2-column merge through inner join, but I also need to se the entries that did not have any common elements. So far what I could do is represented with a model problem here:</p> <pre><code>ch = {'country':['India','India','India','USA','USA','Italy','Italy'],'hotel':['Taj','Oberoi','Hilton','Taj','Hilton','Oberoi','Marriott']} ch_df = pd.DataFrame.from_dict(ch) hm = {'hotel':['Taj','Taj','Taj','Oberoi','Oberoi','Marriott','Marriott','Marriott','Hilton','Hilton'],'menu':['ildi','dosa','soup','soup','ildi','soup','pasta','pizza','pizza','burger']} hm_df = pd.DataFrame.from_dict(hm) cm = {'country':['India','India','India','USA','USA','USA','Italy','Italy'],'menu':['ildi','dosa','soup','dosa','burger','pizza','pizza','pasta']} cm_df = pd.DataFrame.from_dict(cm) chm_df = pd.merge(ch_df, hm_df, left_on='hotel', right_on='hotel') pd.merge(left=chm_df, right=cm_df, on=['country','menu'], how='inner').groupby(['country','hotel'])['menu'].apply(list).reset_index(name='menu items') country hotel menu items 0 India Oberoi [ildi, soup] 1 India Taj [ildi, dosa, soup] 2 Italy Marriott [pasta, pizza] 3 USA Hilton [pizza, burger] 4 USA Taj [dosa] </code></pre> <p>What I need are entries such as:</p> <pre><code>5 Italy Oberoi [] ... </code></pre> <p>One inefficient way is to add to each pair in <code>hm_df</code> an allowed menu item and remove it after groupby. But it looks ugly. What is a more elegant method?</p>
<p>If you need all possible combinations, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>DataFrame.unstack</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a>; to replace non-existent values with empty lists, use the <code>fill_value=[]</code> parameter:</p> <pre><code>df = pd.merge(chm_df, cm_df, on=['country','menu']).groupby(['country','hotel'])['menu'].apply(list).unstack(fill_value=[]).stack().reset_index(name='menu items') print (df) country hotel menu items 0 India Hilton [] 1 India Marriott [] 2 India Oberoi [ildi, soup] 3 India Taj [ildi, dosa, soup] 4 Italy Hilton [] 5 Italy Marriott [pasta, pizza] 6 Italy Oberoi [] 7 Italy Taj [] 8 USA Hilton [pizza, burger] 9 USA Marriott [] 10 USA Oberoi [] 11 USA Taj [dosa] </code></pre> <p>For completeness, if you only need the pairs present in <code>ch_df</code>, with missing values converted to empty lists:</p> <pre><code>df = pd.merge(chm_df, cm_df, on=['country','menu']).groupby(['country','hotel'])['menu'].apply(list).reindex(pd.MultiIndex.from_frame(ch_df), fill_value=[]).reset_index(name='menu items') print (df) country hotel menu items 0 India Taj [ildi, dosa, soup] 1 India Oberoi [ildi, soup] 2 India Hilton [] 3 USA Taj [dosa] 4 USA Hilton [pizza, burger] 5 Italy Oberoi [] 6 Italy Marriott [pasta, pizza] </code></pre>
pandas|join|group-by
1
1,902,746
59,194,385
Search for tweets with Tweepy library
<p>I've been doing extraction of tweets with keywords using Tweepy library for python. It's been only recently that I've noticed that my database include tweets like this: <a href="https://i.stack.imgur.com/9XiAt.png" rel="nofollow noreferrer">tweet example</a>. </p> <p>I searched for "ozone hole" and it returned a tweet whose text doesn't actually include "ozone hole", but "ozone hole" can be found in the title of the news, to which the author of the tweets made a reference.</p> <p>Is there any way to avoid tweets like that and to search for tweets that include my keywords in the actual tweet text?</p> <p>Chunk of my code that searches for tweets: </p> <pre><code>for tweet in tweepy.Cursor(api.search, q="ozone hole", lang="en", #Since="2019-11-27", #until="2019-11-14", tweet_mode='extended').items(): </code></pre>
<p>This is simply how Twitter's search works. If you search for the same query through Twitter's website, you'll see that it comes up with those same results. </p> <p>Note though, that it's likely due to the query showing up in the URL itself, not in the title of that site.</p>
python|api|twitter|tweepy
3
1,902,747
62,920,548
Geopandas how to move plot
<p>i read some shx file and make a geopandas.plot And I have such problem the part of the map are on the left side, how to move the plot to the center?</p> <p><a href="https://i.stack.imgur.com/AApcQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AApcQ.png" alt="geopandas.plot output" /></a></p>
<p>If you don't mind losing the edge, you could always reorient the axis:</p> <pre><code>plt.xlim(0, 2.5) </code></pre> <p>However, you probably want the entire map. Can you provide any more context, such as where you got the data, or any code?</p>
python|dictionary|data-visualization|visualization|geopandas
1
1,902,748
58,718,720
Check if object is in an iterable using "is" identity instead of "==" equality
<pre><code>if object in lst: #do something </code></pre> <p>As far as I can tell, when you execute this statement it is internally checking <code>==</code> between <code>object</code> and every element in <code>lst</code>, which will refer to the <code>__eq__</code> methods of these two objects. This can have the implication of two distinct objects being "equal", which is usually desired if all of their attributes are the same.</p> <p>However, is there a way to Pythonically achieve a predicate such as <code>in</code> where the underlying equality check is <code>is</code> - i.e. we're actually checking if the two references are to the same object?</p>
<p>List membership in Python is dictated by the __contains__ dunder method. You can choose to override this for a custom implementation if you want to use the normal "<code>in</code>" syntax:</p> <pre><code>class my_list(list): def __contains__(self, x): for y in self: if x is y: return True return False 4 in my_list([4, [3,2,1]]) &gt;&gt;&gt; True [3,2,1] in my_list([4, [3,2,1]]) # Because while the lists are "==" equal, they have different pointers. &gt;&gt;&gt; False </code></pre> <p>Otherwise, I'd suggest kaya3's answer of using a generator check.</p>
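For reference, the generator check can be sketched without subclassing `list` (the helper name below is made up for illustration):

```python
def contains_identity(iterable, obj):
    # Like "obj in iterable", but compares with "is" instead of "==".
    return any(item is obj for item in iterable)

a = [3, 2, 1]
lst = [4, a]

print(contains_identity(lst, a))          # True: the very same object
print(contains_identity(lst, [3, 2, 1]))  # False: equal, but a distinct object
```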
python|list|identity|equality
1
1,902,749
25,234,927
Faster numpy.random.shuffle with a length limit?
<p>I am using <code>numpy.random.shuffle</code> to shuffle a list of data. The length of the list is large so I want to randomly sample some of data to do my work. </p> <p>I implement this using the following code:</p> <pre><code># data_list is a numpy array of shape (num_data,) index = np.arange(data_list.size) np.random.shuffle(index) index = index[:len_limit] data = data_list[index] </code></pre> <p>But since index is big, the shuffle is slow. </p> <p><strong>Any advice to improve the performance?</strong> </p>
<p>This is a common problem. I use the following:</p> <p>Drawing with replacement</p> <pre><code>idxs = np.random.randint(0, high=len(data), size=(N,)) result = data[idxs] </code></pre> <p>Drawing without replacement</p> <pre><code>import random idxs = random.sample(xrange(len(data)), N) result = data[idxs] </code></pre> <p>where <code>data</code> is your original dataset and <code>N</code> is the number of desired samples. Either should be faster than shuffling, as long as N &lt;&lt; len(data).</p>
numpy|shuffle
1
1,902,750
25,069,017
using unittest what is b?: self.assertTrue(b'Please login'
<p>This is a question about how assertTrue() works and why this seemingly stray typo doesn't cause problems when I run it as a test. I'm learning Flask and how to unit test, so please bear with me if I get some terminology wrong.</p> <p><strong>Why does the following test pass regardless of presence/absence of 'b' before 'Please login'?</strong></p> <p>Using code that the tutorial gives me:</p> <pre><code> def test_login_page_loads(self): tester = app.test_client(self) response = tester.get('/login', content_type='html/text') self.assertTrue(b'Please login' in response.data) </code></pre> <p>I thought the 'b' in self.assertTrue(b'Please login' in response.data) was a typo, but it passes the test with or without the character there.</p> <p>For reference, this is (most of) what it's testing:</p> <pre><code>&lt;h1&gt;Please login&lt;/h1&gt; &lt;br&gt; &lt;form action="" method="post"&gt; &lt;input type="text" placeholder="Username" name="username" value="{{ request.form.username }}"&gt; &lt;input type="password" placeholder="Password" name="password" value="{{ request.form.password }}"&gt; &lt;input class="btn btn-default" type="submit" value="Login"&gt; &lt;/form&gt; </code></pre> <p>Looking at documents for <a href="https://docs.python.org/2/library/unittest.html#test-cases" rel="nofollow">unittest</a> I didn't see anything special, simple Googling yields nothing valuable, the posts here are related to logical flow issues, and I'm not sure where to go after that.</p> <p>From what I can see there are a few possible answers: 1)'b' is an option for the function assertTrue(), 2) that I don't understand, it's being skipped by the function, 3) or the test is 'evaluating as true' regardless of the input. </p>
<p>b in this case means the string is a bytestring. See the answer to this question for more details:</p> <p><a href="https://stackoverflow.com/questions/6269765/what-does-the-b-character-do-in-front-of-a-string-literal">What does the &#39;b&#39; character do in front of a string literal?</a></p>
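A quick Python 3 illustration of why the prefix matters, using hypothetical data standing in for `response.data`:

```python
# response.data from a Flask test client is a bytes object under Python 3,
# so the needle has to be bytes as well. (Stand-in data, not a real response.)
data = b'<h1>Please login</h1>'

print(b'Please login' in data)   # True

try:
    'Please login' in data       # str needle, bytes haystack
except TypeError as err:
    print('TypeError:', err)     # Python 3 refuses to mix str and bytes
```

If the test passes both with and without the b, the tutorial is most likely being run under Python 2, where str and bytes are the same type.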
python|python-unittest
0
1,902,751
30,621,194
Sub-program error after second execution
<p>I am using the YouTube API to search for a video, obtain some info such as channelID, videoID etc and add the info to a table. The sub-procedure is triggered by a signal from a button click and search input from a line edit.</p> <p>Here is the code:</p> <pre><code>def searchSend(self): search = self.ui.YoutubeSearch.text() self.ui.requestView.setRowCount(0) def youtube_search(options): youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY) # Call the search.list method to retrieve results matching the specified # query term. search_response = youtube.search().list( q=options.q, part="id,snippet", maxResults=options.max_results ).execute() VideoID = [] ChannelID = [] VideoName = [] ChannelName = [] # Add each result to the appropriate list, and then display the lists of # matching videos, channels, and playlists. for search_result in search_response.get("items", []): if search_result["id"]["kind"] == "youtube#video": VideoID.append(search_result["id"]["videoId"]) ChannelID.append(search_result["snippet"]["channelId"]) VideoName.append(search_result["snippet"]["title"]) ChannelName.append(search_result["snippet"]["channelTitle"]) for item in VideoID: rowCount = self.ui.requestView.rowCount() self.ui.requestView.insertRow(rowCount) self.ui.requestView.setItem(rowCount,0,QtGui.QTableWidgetItem(VideoName[rowCount])) self.ui.requestView.setItem(rowCount,1,QtGui.QTableWidgetItem(ChannelName[rowCount])) self.ui.requestView.setItem(rowCount,2,QtGui.QTableWidgetItem(VideoID[rowCount])) argparser.add_argument("--q", default=str(search)) argparser.add_argument("--max-results", default=15) args = argparser.parse_args() try: youtube_search(args) except HttpError: print ("An HTTP error %d occurred:\n%s") % (e.resp.status, e.content) </code></pre> <p>Here is the error second time the sub-procedure is run:</p> <pre><code>Traceback (most recent call last): File "D:\My Documents\Request\mainwindow.py", line 112, in searchSend 
argparser.add_argument("--q", default=str(search)) File "C:\Python33\lib\argparse.py", line 1326, in add_argument return self._add_action(action) File "C:\Python33\lib\argparse.py", line 1686, in _add_action self._optionals._add_action(action) File "C:\Python33\lib\argparse.py", line 1530, in _add_action action = super(_ArgumentGroup, self)._add_action(action) File "C:\Python33\lib\argparse.py", line 1340, in _add_action self._check_conflict(action) File "C:\Python33\lib\argparse.py", line 1479, in _check_conflict conflict_handler(action, confl_optionals) File "C:\Python33\lib\argparse.py", line 1488, in _handle_conflict_error raise ArgumentError(action, message % conflict_string) argparse.ArgumentError: argument --q: conflicting option string: --q </code></pre> <p>Thanks</p>
<p>I think the problem is that when running the sub-procedure again, the argument has already been added by argparser and is stored.</p> <p>You can't add the argument again as it conflicts with the one already stored.</p> <p>Instead, I returned the args value and edited the 'q' value within that.</p>
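One common way around the conflict (a sketch, assuming nothing else depends on the module-level parser; the helper name is made up) is to build a fresh ArgumentParser inside the function, so the argument is never registered twice:

```python
import argparse

def parse_search_args(search):
    # A brand-new parser on every call, so "--q" is only ever added once per parser.
    parser = argparse.ArgumentParser()
    parser.add_argument("--q", default=str(search))
    parser.add_argument("--max-results", default=15)
    # Parse an empty argument list: we only want the defaults here,
    # not whatever happens to be in sys.argv.
    return parser.parse_args([])

first = parse_search_args("ozone hole")
second = parse_search_args("solar flare")  # no ArgumentError on the second call
print(first.q, second.q)
```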
python-3.x|youtube-api
0
1,902,752
42,831,588
Dynamic field name in queryset annotation
<p>I need to rename output field name with incoming variable value. There is a function:</p> <pre><code>def metric_data(request, test_id, metric): metric_name = metric data = ServerMonitoringData.objects. \ filter(test_id=test_id). \ annotate(timestamp=RawSQL("((data-&gt;&gt;%s)::timestamp)", ('timestamp',))).\ annotate(metric=RawSQL("((data-&gt;&gt;%s)::numeric)", (metric,))). \ values('timestamp', "metric") </code></pre> <p>So in this case no matter what value comes with the variable <strong>metric</strong> the output is looking like:</p> <pre><code> {"timestamp": "0:31:02", "metric": "8.82414500398"} </code></pre> <p>I need to have an output with a key names equals to metric variable (if metric == 'CPU_iowait'):</p> <pre><code>{"timestamp": "0:31:02", "CPU_iowait": "8.82414500398"} </code></pre> <p>Tryed to use something like this:</p> <pre><code> metric_name = metric ... annotate(metric_name=F('metric')).\ values('timestamp', metric_name) </code></pre> <p>But it is trying to find 'CPU_iowait' column when exists 'metric_name'. So is there any way to pass field name as a variable ?</p>
<pre><code># use dict to map the metric's name to a RawSQL query # and pass it as keyword argument to `.annotate`. metric_mapping = { metric: RawSQL("((data-&gt;&gt;%s)::numeric)", (metric,)) } queryset.annotate(**metric_mapping) </code></pre>
python|django|django-models|django-orm
3
1,902,753
50,878,915
Python dictionary: creating new dictionary based on values of the first
<p>Let's say I have a dictionary like this:</p> <pre><code>d1 = {'user1': 5, 'user2': 50, 'user3': 75, 'user4': 100} </code></pre> <p>And another dictionary like so:</p> <pre><code>d2 = {5: 1, 50: 2, 75: 3, 100: 4} </code></pre> <p>How would I create a third dictionary like this?</p> <pre><code>d1 = {'user1': 1, 'user2': 2, 'user3': 3, 'user4': 4} </code></pre> <p>(Not manually of course, since the dictionaries are just simple examples).</p>
<p>You can use a <code>dict</code> comprehension for this by using the value from the first dictionary as a key into the second</p> <pre><code>&gt;&gt;&gt; d1 = {'user1': 5, 'user2': 50, 'user3': 75, 'user4': 100} &gt;&gt;&gt; d2 = {5: 1, 50: 2, 75: 3, 100: 4} &gt;&gt;&gt; {key:d2[value] for key, value in d1.items()} {'user1': 1, 'user2': 2, 'user3': 3, 'user4': 4} </code></pre>
python|dictionary
2
1,902,754
50,643,873
How to work with node.js and Dialogflow
<p>Can you show me how to work with Dialogflow with node.js? Where can I find information?</p> <p>For example, here I have found one example in Python, but how can I do the same in node.js?</p> <p>Python code:</p> <pre><code>from telegram.ext import Updater, CommandHandler, MessageHandler, Filters import apiai, json updater = Updater(token='YOUR API TOKEN') # Telegram API token dispatcher = updater.dispatcher def startCommand(bot, update): bot.send_message(chat_id=update.message.chat_id, text='Hi, shall we chat?') def textMessage(bot, update): request = apiai.ApiAI('YOUR API TOKEN').text_request() # Dialogflow API token request.lang = 'ru' # Language of the request request.session_id = 'BatlabAIBot' request.query = update.message.text responseJson = json.loads(request.getresponse().read().decode('utf-8')) response = responseJson['result']['fulfillment']['speech'] if response: bot.send_message(chat_id=update.message.chat_id, text=response) else: bot.send_message(chat_id=update.message.chat_id, text="I didn't quite understand you!") start_command_handler = CommandHandler('start', startCommand) text_message_handler = MessageHandler(Filters.text, textMessage) dispatcher.add_handler(start_command_handler) dispatcher.add_handler(text_message_handler) updater.start_polling(clean=True) updater.idle() </code></pre>
<p>The example in your quote requires apiai (second line). A good starting point is looking for those libraries and how they work. They often have example code and/or references to examples.</p> <p>Take a look at <a href="https://github.com/dialogflow/dialogflow-nodejs-client" rel="nofollow noreferrer">https://github.com/dialogflow/dialogflow-nodejs-client</a> or <a href="https://github.com/dialogflow/dialogflow-nodejs-client-v2" rel="nofollow noreferrer">https://github.com/dialogflow/dialogflow-nodejs-client-v2</a></p> <p>A quick online search for 'dialogflow node.js' gives tons of examples, either in Youtube form, Medium articles or oter instructables. </p>
python|node.js|dialogflow-es
0
1,902,755
58,116,401
DetailedView and expiring items not working in Django
<p>I'm working on a simple application, where elements should expire automatically after 5 minutes.</p> <p>In models.py I have the following:</p> <pre><code>from django.utils import timezone def calc_default_expire(): return timezone.now() + timezone.timedelta(minutes=5) class MyModel(models.Model): uploaded_at = models.DateTimeField(auto_now_add=True) expire_date = models.DateTimeField(default=calc_default_expire) ... </code></pre> <p>In my views.py, I have the following:</p> <pre><code>from django.views.generic import DetailView from .models import MyModel from django.utils import timezone class MyModelDetail(DetailView): model = MyModel queryset = MyModel.objects.filter(expire_date__gt=timezone.now()) </code></pre> <p>I'm getting some strange behaviour. Even after 5 minutes, when I call the url of the expired item, it still gets returned (http code 200).</p> <p>However, when I restart the builtin django dev server, and call the url again, I'm getting a 404, which is the desired result.</p> <p>I see two possible causes:</p> <ul> <li>the built-in webserver is caching some stuff (I doubt this to be honest, I could not find anything in the docs that mentions this behaviour)</li> <li>I'm doing something wrong in my queryset filter (but I'm not seeing it).</li> </ul> <p>Expire_date seems to be calculated correctly when I add new items. Anyone got a clue what I'm missing here?</p> <p><code>USE_TZ = True</code> in my settings.py BTW.</p>
<p>That makes perfect sense, since the <code>timezone.now()</code> is evaluated once, when you start the server. After that, it will thus query with the same datetime each time.</p> <p>You can use <a href="https://docs.djangoproject.com/en/dev/ref/models/database-functions/#now" rel="nofollow noreferrer"><strong><code>Now()</code></strong> [Django-doc]</a> instead, which will then let the database determine the time:</p> <pre><code>from django.db.models.functions import <b>Now</b> class MyModelDetail(DetailView): model = MyModel queryset = MyModel.objects.filter(expire_date__gt=<b>Now()</b>)</code></pre> <p>It will thus not evaluate the time at the moment you call <code>Now()</code>, but use <code>CURRENT_TIMESTAMP</code> (or some other function the database provides) each time you make the query.</p> <p>An alternative is to postpone the query, and thus use <code>get_queryset</code> to construct a queryset each time:</p> <pre><code>class MyModelDetail(DetailView): model = MyModel def <b>get_queryset</b>(self): return MyModel.objects.filter(expire_date__gt=timezone.now())</code></pre>
python|django
1
1,902,756
55,161,366
Unsure whether my version of Python/numpy is using optimized BLAS/LAPACK libraries?
<p>I read <a href="https://stackoverflow.com/questions/7596612/benchmarking-python-vs-c-using-blas-and-numpy">here</a> that it is important to "make sure that numpy uses optimized version of BLAS/LAPACK libraries on your system."</p> <p>When I input:</p> <pre><code>import numpy as np np.__config__.show() </code></pre> <p>I get the following results:</p> <pre><code>blas_mkl_info: NOT AVAILABLE blis_info: NOT AVAILABLE openblas_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/anaconda3/lib'] language = c define_macros = [('HAVE_CBLAS', None)] blas_opt_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/anaconda3/lib'] language = c define_macros = [('HAVE_CBLAS', None)] lapack_mkl_info: NOT AVAILABLE openblas_lapack_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/anaconda3/lib'] language = c define_macros = [('HAVE_CBLAS', None)] lapack_opt_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/anaconda3/lib'] language = c define_macros = [('HAVE_CBLAS', None)] </code></pre> <p>Does this mean my version of numpy is using optimized BLAS/LAPACK libraries, and if not, how can I set numpy so that it does use the optimized version?</p>
<p>Kind of. OpenBLAS is quite alright. I just took the first link I could find on Google looking for "OpenBLAS, ATLAS, MKL comparison".</p> <p><a href="http://markus-beuckelmann.de/blog/boosting-numpy-blas.html" rel="nofollow noreferrer">http://markus-beuckelmann.de/blog/boosting-numpy-blas.html</a></p> <p>Now, this is not the whole story. The differences might be negligible, slight, or large depending on the algorithms you need. There is really not much you can do other than run your own code linked against the different implementations. </p> <p>My favourites on average across all sorts of linear algebraic problems, SVDs, Eigs, real and pseudo inversions, factorisations ... single core / multicore on the different OSes:</p> <p>MacOS: Accelerate framework (comes along with the OS) Linux/Windows: </p> <ol> <li>MKL </li> <li>with great distance but still quite alright: ATLAS and OpenBLAS on par</li> <li>ACML has always been a disappointment to me even on AMD processors</li> </ol> <p>TLDR: Your setup is fine. But if you want to squeeze the last drop of blood out of your CPU / RAM / mainboard combination you need MKL. It comes, of course, with quite a price tag, but if you can get hardware half as expensive in return, it may be worth it. And if you write an open source package, you may use MKL free of charge for development purposes. </p>
python|numpy|anaconda|lapack|blas
2
1,902,757
53,897,584
Move objects from AWS S3 to MediaStore
<p>As a legacy from the previous version of our system, I have around 1 TB of old video files on AWS S3 bucket. Now we decided to migrate to AWS Media Services and all those files should be moved to MediaStore for the access unification.</p> <p><strong>Q:</strong> Is there any way to move the data programmatically from S3 to MediaStore directly?</p> <p>After reading AWS API docs for these services, the best solution I've found is to run a custom Python script on an intermediate EC2 instance and pass the data through it.</p> <p>Also, I have an assumption, based on pricing, data organization and some pieces in docs, that MediaStore built on top of S3. That's why I hope to find a more native way to move the data between them.</p>
<p>I've clarified this with AWS support. There is no way to transfer files directly, although it's a popular request and will probably be implemented.</p> <p>Now I'm doing this with an intermediate EC2 server; the speed of the internal AWS connections between it, S3 and MediaStore is quite good. So I would recommend this way, at least for now.</p>
python|amazon-web-services|amazon-s3|amazon-ec2|aws-mediastore
0
1,902,758
25,715,940
Configuring Django 1.7 and Python 3 on mac osx 10.9.x
<p>I have installed the latest versions of both django and python. The default "python" command is set to 2.7; if I want to use python 3, I have to type "python3". </p> <p>Having to type "python3" and a django command causes problems. For example if I type: "python3 manage.py migrate" , I get an error. The error is:</p> <p>Traceback (most recent call last): File "manage.py", line 8, in from django.core.management import execute_from_command_line ImportError: No module named 'django'</p> <p>Django does not seem to recognize my python 3. How do I get around this? Your help is greatly appreciated.</p>
<p>You need to install <code>django</code> for <code>python 3</code>, <code>pip3 install django</code></p>
python|django|macos|python-3.x
3
1,902,759
36,138,817
Maya GUI freezes during subprocess call
<p>I need to conform some maya scenes we receive from a client to make them compatible to our pipeline. I'd like to batch that action, obviously, and I'm asked to launch the process from within Maya.<br> I've tried two methods already (quite similar to each other), which both work, but the problem is that the Maya GUI freezes until the process is complete. I'd like for the process to be completely transparent for the user so that they can keep workind, and only a message when it's done.<br> Here's what I tried and found until now:<br> This tutorial here : <a href="http://www.toadstorm.com/blog/?p=136" rel="nofollow noreferrer">http://www.toadstorm.com/blog/?p=136</a> led me to write this and save it:</p> <pre><code>filename = sys.argv[1] def createSphere(filename): std.initialize(name='python') try: mc.file(filename, open=True, pmt=False, force=True) sphere = mc.polySphere() [0] mc.file(save=True, force=True) sys.stdout.write(sphere) except Exception, e: sys.stderr.write(str(e)) sys.exit(-1) if float(mc.about(v=True)) &gt;= 2016.0: std.uninitialize() createSphere(filename) </code></pre> <p>Then to call it from within maya that way:</p> <pre><code>mayapyPath = 'C:/Program Files/Autodesk/Maya2016/bin/mayapy.exe' scriptPath = 'P:/WG_MAYA_Users/lbouet/scripts/createSphere.py' filenames = ['file1', 'file2', 'file3', 'file4'] def massCreateSphere(filenames): for filename in filenames: maya = subprocess.Popen(mayapyPath+' '+scriptPath+' '+filename,stdout=subprocess.PIPE,stderr=subprocess.PIPE) out,err = maya.communicate() exitcode = maya.returncode if str(exitcode) != '0': print(err) print 'error opening file: %s' % (filename) else: print 'added sphere %s to %s' % (out,filename) massCreateSphere(filenames) </code></pre> <p>It works fine, but like I said, freezes Maya GUI until the process is over. 
And it's just for creating a sphere, so not nearly close to all the actions I'll actually have to perform on the scenes.<br> I've also tried to run the first script via a .bat file calling mayabatch and running the script, same issue.<br> I found this post (<a href="https://stackoverflow.com/questions/34295749/running-list-of-cmd-exe-commands-from-maya-in-python">Running list of cmd.exe commands from maya in Python</a>) who seems to be exactly what I'm looking for, but I can't see how to adapt it to my situation ?<br> From what I understand the issue might come from calling Popen in a loop (i.e. multiple times), but I really can't see how to do otherwise... I'm thinking maybe saving the second script somewhere on disk too and calling that one from Maya ?</p>
<p>In this case <code>subprocess.communicate()</code> will block until the child process is done, so it is not going to fix your problem on its own. </p> <p>If you just want to kick off the processes and not wait for them to complete -- 'fire and forget' style -- you can just use threads, starting off a new thread for each process. However you'll have to be very careful about reporting back to the user -- if you try to touch the Maya scene or GUI from an outside thread you'll get mysterious, undebuggable errors. <code>print()</code> is <em>usually</em> ok but <code>maya.cmds()</code> is not. If you're only printing messages you can probably get away with <code>maya.utils.executeDeferred()</code> which is discussed <a href="https://stackoverflow.com/questions/16657811/how-to-use-python-maya-multithreading/16661599#16661599">in this question</a> and <a href="https://knowledge.autodesk.com/support/maya/learn-explore/caas/CloudHelp/cloudhelp/2015/ENU/Maya/files/Python-Python-and-threading-htm.html" rel="nofollow noreferrer">in the docs</a>. </p>
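A minimal fire-and-forget sketch of that idea in plain Python (the function names are illustrative; the callback is where any Maya-touching code would have to be routed through `maya.utils.executeDeferred`):

```python
import subprocess
import threading

def run_job(cmd, on_done):
    # Worker: runs one child process; communicate() blocks only this
    # thread, not the caller (i.e. not the Maya GUI thread).
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    on_done(cmd, proc.returncode, out, err)

def launch_all(commands, on_done):
    # Kick off one daemon thread per command and return immediately.
    threads = []
    for cmd in commands:
        t = threading.Thread(target=run_job, args=(cmd, on_done))
        t.daemon = True  # don't keep the host process alive on exit
        t.start()
        threads.append(t)
    return threads
```

The caller gets control back right away; inside Maya, `on_done` must not call `maya.cmds` directly, but printing a completion message is usually fine, as noted above.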
python|python-2.7|maya
0
1,902,760
29,388,710
set properties and union method
<p>I have a question regarding set. I have the following code that illustrate my question.</p> <pre><code>def f2( s ): return { c.upper() for c in s if c.isalpha() } print f2( "A r'a|ccCc^#zZ" ) print f2( "A r'a|ccCc^#zZ" ).union( [( 'B', )] ) print f2( "A r'a|ccCc^#zZ" ).union( [( 'T', )]) </code></pre> <p>The result is:</p> <pre class="lang-none prettyprint-override"><code>set(['A', 'C', 'R', 'Z']) set(['A', ('B',), 'C', 'R', 'Z']) set(['A', 'C', 'R', 'Z', ('T',)]) </code></pre> <p>Why is the set order in that order ? In the first time I can guess it is ordered according to the A-Z (hash function?) But why there is a difference in the position of the tuple in the other lines ?</p>
<p><a href="https://docs.python.org/2/library/sets.html" rel="nofollow">Sets have no order.</a></p> <p>From the documentation:</p> <blockquote> <p>The sets module provides classes for constructing and manipulating <strong>unordered</strong> collections of unique elements. Common uses include membership testing, removing duplicates from a sequence, and computing standard math operations on sets such as intersection, union, difference, and symmetric difference.</p> </blockquote>
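A quick demonstration that the printed order is an implementation detail and has no bearing on set equality:

```python
s1 = {'Z', 'R', 'C', 'A', ('T',)}
s2 = {('T',), 'A', 'C', 'R', 'Z'}

# Built in different orders, possibly displayed in different orders,
# yet equal as sets: membership is all that matters.
print(s1 == s2)  # True

# If a defined order is needed, create one explicitly
# (the tuple is removed first, since str and tuple don't compare):
print(sorted(s1 - {('T',)}))  # ['A', 'C', 'R', 'Z']
```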
python|hash|set|tuples|union
0
1,902,761
29,429,767
cv2 a python - 16 bit fits file in cv2
<p>before sometime I wrote some script which find center of the Sun (with Canny and moments) and center of the image. Here is it <a href="https://stackoverflow.com/questions/19768508/python-opencv-finding-circle-sun-coordinates-of-center-the-circle-from-pictu">python opencv-finding circle (Sun) , coordinates of center the circle from picture</a></p> <p>But now I have this problem. When I used this script on fits file it doesnt work. I have 16 bit monochromatic picture of the Sun <a href="http://files.uloziste.com/d16feb4de5aeda18/" rel="nofollow noreferrer">http://files.uloziste.com/d16feb4de5aeda18/</a>.</p> <p>I know that the picture from my method must be 8 bit (CV_8U)</p> <p>How I can convert this picture to 8 bit and get image depth information too? When I used height, width, depth = im.shape i get error:</p> <p>height, width, depth = im.shape ValueError: need more than 2 values to unpack</p> <p>Here is code for open fits file</p> <pre><code>import numpy as np import cv2 import pyfits hdul = pyfits.open('11_18_46_640_syn_snap_obs_1533.fits') im=hdul[0].data #height, width, depth = im.shape print im.shape thresh = 123 imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(imgray,(5,5),0) edges = cv2.Canny(blur,thresh,thresh*2) contours,hierarchy=cv2.findContours(edges,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) c=len(contours) - 1 cnt = contours[c] cv2.drawContours(im,contours,-1,(0,255,0),-1) #centroid_x = M10/M00 and centroid_y = M01/M00 M = cv2.moments(cnt) x = int(M['m10']/M['m00']) y = int(M['m01']/M['m00']) print x,y cv2.circle(im,(x,y),1,(0,0,255),2) cv2.putText(im,"x[px],y[px]",(10,50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,255)) cv2.putText(im,"center of Sun",(x,y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255)) cv2.putText(im,str(x)+","+str(y),(10,100), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255)) cv2.circle(im,(width/2,height/2),1,(255,0,0),2) cv2.putText(im,"center of image",(width/2,height/2), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,0)) 
cv2.putText(im,str(width/2)+","+str(height/2), (10,150), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,0)) cv2.putText(im,"difference:"+str(width/2-x)+","+str(height/2-y),(400,50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,255,255)) cv2.imshow('contour',im) cv2.waitKey(0) code here </code></pre> <p>Ok here is edit:</p> <p>This code convert my picture to 8 bit. But when I run script I get error: cnt = contours[0] IndexError: list index out of range</p> <p>Why is that? Any suggestion?</p> <p>Code</p> <pre><code>import pyfits import numpy as np import cv2 hdul = pyfits.open('11_18_46_640_syn_snap_obs_1533.fits') hdu=hdul[0].data ma=hdu.max() mi=hdu.min() image = np.array(hdu, copy=True) image.clip(mi,ma, out=image) image -=mi image //= (ma - mi + 1) / 255. im=image.astype(np.uint8) #height, width, depth = im.shape #print im.shape thresh = 123 imgray = cv2.cvtColor(im,cv2.COLOR_GRAY2RGB) blur = cv2.GaussianBlur(im,(5,5),0) edges = cv2.Canny(blur,thresh,thresh*2) contours, hierarchy =cv2.findContours(edges,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) c=len(contours) - 1 cnt = contours[0] cv2.drawContours(im,contours,-1,(0,255,0),-1) #centroid_x = M10/M00 and centroid_y = M01/M00 M = cv2.moments(cnt) x = int(M['m10']/M['m00']) y = int(M['m01']/M['m00']) print x,y cv2.circle(im,(x,y),1,(0,0,255),2) cv2.putText(im,"x[px],y[px]",(10,50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,255)) cv2.putText(im,"center of Sun",(x,y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255)) cv2.putText(im,str(x)+","+str(y),(10,100), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255)) cv2.circle(im,(width/2,height/2),1,(255,0,0),2) cv2.putText(im,"center of image",(width/2,height/2), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,0)) cv2.putText(im,str(width/2)+","+str(height/2), (10,150), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,0)) cv2.putText(im,"difference:"+str(width/2-x)+","+str(height/2-y),(400,50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,255,255)) cv2.imshow('im',im) cv2.waitKey(0) </code></pre>
<p>Well, your initial problem is with understanding <strong>16-bit file format</strong>.<br /> Regular grayscale encodes shade with unsigned 8 bits, thus ranging from 0 to 255. 16-bit image just extends this range, allowing up to 2^16 shades. Therefore it makes no sense to get depth of an image with <code>ndarray.shape</code> - it only has one color channel (i.e. 2-D matrix with single UINT16 value per pixel).</p> <p>As for the second <strong>issue with indexing</strong>:<br /> <code>ndarray.astype('uint8')</code> grabs least significant bits, so, let's say 1460 (0101_1011_0100) will become 180 (1011_0100). My guess is that you then try to find contours with Canny in a different picture from what you expect, with no success. Then you are simply indexing an empty array.</p> <p><strong>Solution</strong><br /> Use <code>cv2.convertScaleAbs(image, alpha=(255/65535))</code> to convert 16-bit to 8-bit</p>
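The difference between truncating and rescaling can be checked with plain integers, no OpenCV required (1460 is just the sample value from above):

```python
value = 1460  # a 16-bit sample: 0b0101_1011_0100

# astype('uint8') keeps only the least significant 8 bits:
truncated = value & 0xFF
print(truncated)  # 180 (0b1011_0100), as described above

# Rescaling maps the full 16-bit range onto 0..255 instead:
scaled = int(value * 255 / 65535)
print(scaled)  # 5, i.e. a dark 16-bit pixel stays a dark 8-bit pixel
```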
python|opencv
0
1,902,762
46,428,586
Storing formulas in a table to be calculated later?
<p>What is the best way to store metric formulas in a database? In the beginning, I just threw the raw columns to a visualization tool and it calculated metrics for me. I quickly learned that there are many (valid) exceptions to the standard rules due to client requirements, etc. I am now considering whether I should create numerator and denominator columns during the ETL/database layer, or right as I send the data to the visualization tool.</p> <p>I was considering using Python evaluate to read a string which would be stored in a Postgres table:</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.eval.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.eval.html</a></p> <p>Pardon the formatting, but I have three columns below. One column to tie back to a specific project, and then two example metrics. </p> <pre><code>id productive_time productive_status 165 "productive_time = talk_time + hold_time + after_call_work_time" "productive_status = status_3_time + status_4_time + status_5_time" 1911 "productive_time = talk_time + hold_time + after_call_work_time + ring_time" "productive_status = status_7_time + status_8_time" </code></pre> <p>Then, in the visualization layer, the metric calculation would simply be <code>SUM(productive_time) / SUM(call_count)</code> compared to having potentially dozens of calculations. </p> <p>Does this make sense, are there other best practices? </p> <p>The alternative is to have massive CASE WHEN statements, I suppose. But there are literally several hundred - over a thousand ids to cover. 
95% of them will be the same though.</p> <p>Edit:</p> <p><a href="https://i.stack.imgur.com/wPzvW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wPzvW.png" alt="enter image description here"></a></p> <pre><code>for x in range(0, len(df['inbound_time_formula'].unique())): df.loc[df['inbound_time_formula'] == df['inbound_time_formula'].unique()[x], 'inbound_time'] = df.eval(df['inbound_time_formula'].unique()[x], inplace=True) </code></pre> <p>I tried to df.eval the dataframe, but it appears to apply to the entire dataframe rather than just the rows where the formula is present. </p>
<p>rules:</p> <pre><code>t=# create table rl(id serial,tm text, sm text); CREATE TABLE t=# insert into rl(tm,sm) values('a+b-c','a*b +c'); INSERT 0 1 </code></pre> <p>data:</p> <pre><code>t=# create table dt(i serial,a int,b int, c int); CREATE TABLE t=# insert into dt(a,b,c) select 1,2,3; INSERT 0 1 </code></pre> <p>example:</p> <pre><code>t=# create or replace function rlf(rid int,did int) returns table (rsm int,rtm int) as $$ begin return query execute format('select '||(select sm from rl where id=rid)||', '||(select tm from rl where id=rid)||' from dt where i=%s',did); end; $$ language plpgsql ; CREATE FUNCTION t=# select * from rlf(1,1); rsm | rtm -----+----- 5 | 0 (1 row) </code></pre> <p>The approach is very much questionable, as you can't avoid SQL injection by definition: you don't parse the rule, you execute it as-is...</p>
postgresql|pandas
0
1,902,763
53,569,879
How to install pydoc on Windows?
<p>I'm trying to install <code>pydoc</code> on my Windows system using CMD as administrator.</p> <p>When I put this command:</p> <pre><code>pip install pydoc </code></pre> <p>I got this error message:</p> <blockquote> <p>Could not find a version that satisfies the requirement pydoc (from versions: )<br> No matching distribution found for pydoc</p> </blockquote>
<p><code>pip</code> installs packages from PyPI and there is no <a href="https://pypi.org/project/pydoc/" rel="nofollow noreferrer">pydoc at PyPI</a>. <a href="https://docs.python.org/3/library/pydoc.html" rel="nofollow noreferrer">pydoc</a> is a module from the standard library, that is, it's always available.</p>
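Since <code>pydoc</code> ships with every Python installation, it can be used right away, for example (the exact header text may vary slightly between Python versions):

```python
import pydoc

# pydoc is part of the standard library -- nothing to pip install.
# render_doc() returns the same text "pydoc len" would display.
text = pydoc.render_doc(len)
print(text.splitlines()[0])   # e.g. "Python Library Documentation: built-in function len ..."
```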
python-3.x|cmd|pip|pydoc
4
1,902,764
21,499,099
What's the difference between using super() on a Class inherited from python object and a Class inherited from another user-defined Class
<p>What's the difference and problems involved in this curiosity:</p> <pre><code>class A(object): def __init__(self): super(A, self).__init__() </code></pre> <p>Than</p> <pre><code>class A(object): def __init__(self): pass class B(A): def __init__(self): super(B, self).__init__() </code></pre> <p>Even if the first example is wrong, it works. I thought it could be a redundancy, but I heard that using super() in a class that's inherited from object is wrong, but why?</p>
<p>super(class, self) is how one interacts with what's called the MRO, or method resolution order.</p> <p>A very important concept to grok. Here's Guido on MRO: <a href="http://python-history.blogspot.com/2010/06/method-resolution-order.html?m=1" rel="nofollow">http://python-history.blogspot.com/2010/06/method-resolution-order.html?m=1</a></p>
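A small illustration of what the MRO resolves super() to, using the two shapes from the question (the explicit two-argument form works the same in Python 2 and 3):

```python
class A(object):
    def __init__(self):
        print("A.__init__")
        super(A, self).__init__()   # next after A in the MRO: object

class B(A):
    def __init__(self):
        print("B.__init__")
        super(B, self).__init__()   # next after B in the MRO: A

# super() walks this list, whether the base is object or user-defined:
print([cls.__name__ for cls in B.__mro__])   # ['B', 'A', 'object']
B()   # prints B.__init__ then A.__init__
```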
python
1
1,902,765
39,882,522
Finding duplicate rows python
<p>I have <code>timestamp</code> and <code>id</code> variables in my dataframe (<code>df</code>)</p> <pre><code>timestamp id 2016-06-09 8:33:37 a1 2016-06-09 8:33:37 a1 2016-06-09 8:33:38 a1 2016-06-09 8:33:39 a1 2016-06-09 8:33:39 a1 2016-06-09 8:33:37 b1 2016-06-09 8:33:38 b1 </code></pre> <p>Each <code>id</code> can't have two timestamps. I have to print these duplicate timestamps for each <code>id</code>. In my above case, the output should be for rows 1,2,4,5</p> <p>The following code will give the duplicate <code>timestamp</code></p> <pre><code>set([x for x in df['timestamp'] if df['timestamp'].count(x) &gt; 1]) </code></pre> <p>How to consider <code>id</code> along with <code>timestamp</code> to have the duplicate rows?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> and get a mask of all duplicated values per group with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.duplicated.html" rel="nofollow"><code>Series.duplicated</code></a>. Then use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p> <pre><code>print (df.groupby(['id'])['timestamp'].apply(lambda x: x.duplicated(keep=False))) 0 True 1 True 2 False 3 True 4 True 5 False 6 False Name: timestamp, dtype: bool print (df[df.groupby(['id'])['timestamp'].apply(lambda x: x.duplicated(keep=False))]) timestamp id 0 2016-06-09 08:33:37 a1 1 2016-06-09 08:33:37 a1 3 2016-06-09 08:33:39 a1 4 2016-06-09 08:33:39 a1 </code></pre>
python|pandas|timestamp|duplicates
1
1,902,766
28,891,452
Python arithmetic operation returns 0
<pre><code>read = True while read: my_input = int(raw_input()) print my_input result = (1/6) * my_input * (my_input + 1) * (my_input +2) if result == 0: print '' read = False break else: print result </code></pre> <p>I wrote this little code snippet to solve 1 + (1+2) + (1+2+3+)... without looping over anything but the result is always 0 for some reason. I am using PyDev on Eclipse but I do not think that's even remotely the issue</p> <p>Thank you</p>
<p>Multiplying by zero always results in zero.</p> <pre><code>&gt;&gt;&gt; a = (1/6) &gt;&gt;&gt; print a 0 </code></pre> <p>This happens because in Python 2, dividing two integers performs integer (floor) division, so <code>1/6</code> evaluates to <code>0</code>.</p> <p>In order to get a float result you can specify the values in decimal notation.</p> <pre><code>&gt;&gt;&gt; a = 1.0/6.0 &gt;&gt;&gt; print a 0.166666666667 </code></pre>
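A minimal way to see both behaviours side by side, using n = 4, where 1 + (1+2) + (1+2+3) + (1+2+3+4) should give 20:

```python
n = 4   # 1 + (1+2) + (1+2+3) + (1+2+3+4) == 20

# Integer division first: 1//6 is 0 (and plain 1/6 is also 0 in Python 2),
# so the whole product collapses to 0.
broken = (1 // 6) * n * (n + 1) * (n + 2)
print(broken)   # 0

# Divide last (or keep the arithmetic in floats) and the formula works:
fixed = n * (n + 1) * (n + 2) / 6.0
print(fixed)    # 20.0
```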
python-2.7
1
1,902,767
51,801,917
Different encoding when using unicode character in Python
<p>I'm having problem in Python when met composition unicode instead of built-in unicode. Here is reproduce code:</p> <pre><code># encoding=utf8 a = ["Địa"] b = ["Địa"] print(a) # ['\xc4\x90i\xcc\xa3a'] print(b) # ['\xc4\x90\xe1\xbb\x8ba'] print("Địa" in a) # False print("Địa" in b) # True </code></pre> <p>How can I convert/normalize them into the same encoder?</p>
<p>You can use <code>unicodedata.normalize()</code>:</p> <pre><code># encoding=utf8 import unicodedata a = ["Địa"] b = ["Địa"] print("Địa" in [unicodedata.normalize('NFC', i) for i in a]) print("Địa" in [unicodedata.normalize('NFC', i) for i in b]) </code></pre> <p>This outputs:</p> <pre><code>True True </code></pre>
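For reference, these are the two Unicode forms hiding in the question's byte dumps: the decomposed string uses a combining dot below (U+0323), which NFC folds into the precomposed ị (U+1ECB):

```python
import unicodedata

decomposed = "\u0110i\u0323a"    # Đ + i + combining dot below + a
precomposed = "\u0110\u1ecba"    # Đ + ị + a

print(decomposed == precomposed)                                # False
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
print(unicodedata.normalize("NFD", precomposed) == decomposed)  # True
```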
python|python-unicode|unicode-normalization
1
1,902,768
22,252,702
py2app error:is a directory
<p>When i execute the following command in terminal:</p> <pre><code>python setup.py py2app </code></pre> <p>it ends with :</p> <pre><code>byte-compiling /Users/gebruiker/Documents/build/bdist.macosx-10.6-universal/python2.6- semi_standalone/app/temp/aem/ae.py to aem/ae.pyc error: Is a directory </code></pre> <p>The error is : error: Is a directory - how can I solve this error ?</p> <p>and no .app will be created in the dist folder...</p> <p>I'm using the following setup.py (and i'm using appscript in my source code) :</p> <pre><code>""" This is a setup.py script generated by py2applet Usage: python setup.py py2app """ from setuptools import setup from appscript import * APP = ['schermen1.py'] DATA_FILES = [] OPTIONS = {'argv_emulation': True} setup( app=APP, data_files=DATA_FILES, options={'py2app': OPTIONS}, setup_requires=['py2app'], ) </code></pre> <p>Does anybody have any clue how I can solve this error ?</p>
<p>I had this problem too. What fixed it for me was specifying to py2app the packages that my python code imported. I did this as a '--package' argument:</p> <pre><code>python setup.py py2app --packages &lt;package&gt; </code></pre>
python|py2app|sourceforge-appscript
0
1,902,769
52,543,420
Nullable columns not getting updated
<p>I am making an api using Flask. I set the image column to nullable = True, however I can't seem to update that column. Here's the code:</p> <pre><code>def register(): try: ''' different details are uploaded except profile_image which is set to nullable = True ''' new_user = User(''' all columns are updated''') db.session.add(new_user) db.session.commit() except: image = request.files['profile_image'] user = User.query.filter_by('''query matched''').first() user.profile_image = image.read() db.session.commit() return jsonify({"message" : "Account successfully created"}) return jsonify(''' json object sent''') </code></pre> <p>Here, although it should have been updated, the column remains null, as checked in the JAvascript based Database Editor (JADE). I have absolutely no idea why it isn't working</p>
<p>Remove the indentation from the code after <code>except</code>. Anything indented under <code>except</code> only gets executed if an exception happens.</p> <pre><code>except: image = request.files['profile_image'] </code></pre>
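To make the control flow concrete, here is a toy version (names are made up, not Flask APIs): the handler body runs only when the try block raised, while dedented code runs on both paths.

```python
def register(insert_fails):
    try:
        if insert_fails:
            raise RuntimeError("simulated failed INSERT")
        result = "new user created"
    except RuntimeError:
        result = "existing user updated"   # runs ONLY when the try block raised
    # Dedented code below the try/except runs on both paths:
    return result

print(register(False))   # new user created
print(register(True))    # existing user updated
```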
python|database|flask|flask-sqlalchemy
0
1,902,770
47,694,757
Get centroids from datapoints in pandas DF for each group
<p>i have a dataframe</p> <pre><code>id code lat long 1 100 22.6 42.3 1 200 23.6 45.3 1 400 21.6 46.3 2 300 22.6 42.3 2 500 22.6 42.3 2 800 22.6 42.3 3 100 22.6 42.3 </code></pre> <p>i want to find the centre points grouping on id column, and return a dataframe :</p> <pre><code>id centre_lat centre_long 1 xx.xx yy.yy 2 xx.xx yy.yy 3 xx.xx yy.yy </code></pre> <p>Since id 3 has only 1 code, therefore the same lat long is the centroid for that id.</p>
<p>IIUC:</p> <pre><code>In [136]: df.groupby('id', as_index=False)['lat','long'].mean() Out[136]: id lat long 0 1 22.6 44.633333 1 2 22.6 42.300000 2 3 22.6 42.300000 </code></pre>
python|pandas|dataframe
2
1,902,771
47,714,969
MongoDB aggregation filter
<p>I'm trying to select some data from a mongodb collection in python, here are some example data :</p> <pre><code>[{"id":1, "planned_timestamp":1512728425, "executed_timestamp":0, "owner":1, "action_type": "read", "action_params":"book A"}, {"id":2, "planned_timestamp":1512728430, "executed_timestamp":0, "owner":1, "action_type": "read", "action_params":"book B"}, {"id":3, "planned_timestamp":1512728435, "executed_timestamp":0, "owner":2, "action_type": "read", "action_params":"book C"}] </code></pre> <p>I want to select all task that has <code>"executed_timestamp":0</code>, <code>"planned_timestamp"</code> lower than the timestamp variable and I would like to have the results as follows:</p> <pre><code>[{"owner":1, "tasks": [{"id":1,"planned_timestamp":1512728425,"action_type": "read","action_params":"book A"},{"id":1,"planned_timestamp":1512728430,"action_type": "read","action_params":"book B"}]}, {"owner":2, "tasks": [{"id":3,"planned_timestamp":1512728435,"action_type": "read","action_params":"book C"}]}] </code></pre> <p>My current request with pymongo is :</p> <pre><code>r = db.task_queue.aggregate( [ { "$group" : { "_id" : "$agent_id", "tasks": { "$push": "$$ROOT" } } } ] ) </code></pre>
<p>What is <code>agent_id</code>?</p> <p>What you need is <code>$match</code>. Try this.</p> <pre><code>timestamp = 1500000000 r = db.task_queue.aggregate( [ {"$match": {"$and": [ { "executed_timestamp": {"$eq": 0}, "planned_timestamp": {"$lte": timestamp} } ]}}, {"$group" : { "_id" : "$owner", "tasks": { "$push": "$$ROOT" } } }, {"$project": { "_id": 0, "owner": "$_id", "tasks": 1 }} ] ) </code></pre>
python|mongodb|aggregation
0
1,902,772
47,687,633
Can I convert my Cython code to Python?
<p>I have written a cython code to help bridge the gap between a 3rdparty library and python. </p> <p>I also written some of my code in cython to improve its performance. </p> <p>Can I convert both of my above use cases into raw python?</p> <p>example of use case one</p> <pre><code>def f(double x): return x**2-x def integrate_f(double a, double b, int N): cdef int i cdef double s, dx s = 0 dx = (b-a)/N for i in range(N): s += f(a+i*dx) return s * dx </code></pre> <p>example of use case 2</p> <pre><code>from libc.stdlib cimport atoi cdef parse_charptr_to_py_int(char* s): assert s is not NULL, "byte string value is NULL" return atoi(s) # note: atoi() has no error detection! </code></pre>
<p>Well, for your first use case the answer is yes. All you would need to do is remove the <code>cdef</code> lines and the C type declarations, like so.</p> <pre><code>def f(x): return x**2-x def integrate_f(a, b, N): s = 0 dx = (b-a)/N for i in range(N): s += f(a+i*dx) return s * dx </code></pre> <p>For your second use case, that's where things get tricky, because you can't just delete the <code>cdef</code> lines or rename <code>cdef</code> to <code>def</code>. Also, since this use case depends on the external C library, it has no direct Python translation.</p> <p>You have 2 options you can use besides Cython.</p> <ul> <li>ctypes - the foreign function library built into standard Python</li> <li>cffi - a library that works similarly to ctypes but simplifies the glue code.</li> </ul> <p>Your usage example using ctypes would look like this</p> <pre><code>def parse_charptr_to_py_int(test): from ctypes import cdll,c_char_p cdll.LoadLibrary("libc.so") return cdll.libc.atoi(c_char_p(test)) </code></pre> <p>Your usage example using cffi would look like this</p> <pre><code>def parse_charptr_to_py_int(test): from cffi import FFI ffi = FFI() ffi.cdef("int atoi(const char *str);") CLib = ffi.dlopen("libc.so") return CLib.atoi(test) </code></pre>
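A runnable variant of the ctypes example, using find_library so it also works where the C library is not literally named libc.so (assumes a Unix-like system where the C library can be located):

```python
import ctypes
import ctypes.util

# Locate the C library by its conventional name "c" (e.g. libc.so.6 on Linux).
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.atoi.argtypes = [ctypes.c_char_p]
libc.atoi.restype = ctypes.c_int

print(libc.atoi(b"42"))   # 42  (note: atoi still has no error detection)
```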
python|cython
2
1,902,773
26,203,122
Python - Using Set-Cookie on for cookie use not work
<p>When I get the Set-Cookie and try to use it, I wont seem that I'm logged in Facebook...</p> <pre><code>import urllib, urllib2 data = urllib.urlencode({"email":"swagexample@hotmail.com", "pass":"password"}) request = urllib2.Request("http://www.facebook.com/login.php", data) request.add_header("User-Agent", "Mozilla 5.0") response = urllib2.urlopen(request) cookie = response.headers.get("Set-Cookie") new_request = urllib2.Request("http://www.facebook.com/login.php") new_request.add_header("User-Agent", "Mozilla 5.0") new_request.add_header("Cookie", cookie) new_response = urllib2.urlopen(new_request) if "Logout" in new_response.read(): print("Logged in.") #No output </code></pre> <p>Why?</p>
<p>First, the <code>Set-Cookie</code> header format is different from the <code>Cookie</code> header.</p> <p>The <code>Set-Cookie</code> header contains additional information (domain, expires, ...), so you need to convert each one before using it in a <code>Cookie</code> header.</p> <pre><code>cookie = '; '.join( x.split(';', 1)[0] for x in response.headers.getheaders("Set-Cookie") ) </code></pre> <p>Even if you do the above, you will still not get what you want, because the default urllib2 handler does not carry cookies across redirects.</p> <p>Why don't you <a href="https://stackoverflow.com/questions/25956080/python-using-cookies-successfully">use <code>urllib2.HTTPCookieProcessor</code> as you did before?</a></p>
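The join above can be exercised with made-up header values, no network needed:

```python
# Hypothetical Set-Cookie headers, as getheaders("Set-Cookie") would return them:
set_cookie_headers = [
    "sid=abc123; Path=/; Expires=Wed, 09 Jun 2021 10:18:14 GMT; HttpOnly",
    "locale=en_US; Domain=.example.com; Path=/",
]

# Keep only the name=value pair from each header and drop the attributes:
cookie = "; ".join(h.split(";", 1)[0] for h in set_cookie_headers)
print(cookie)   # sid=abc123; locale=en_US
```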
python|python-2.7|cookies|urllib2|urllib
1
1,902,774
69,774,689
Categorize column of strings by category name in new column
<p>I am trying to carry out what should be a pretty simple procedure in Python, but I am having trouble searching for help on this, because I don't know how to best put what I am trying to do into searchable words. I am not sure if what I am trying to do is called reclassifying or using a conditional statement or what really. I will show an example of what I am trying to do, which is pretty simple I think. I have the following DataFrame:</p> <pre><code>Color Value ---------------- blue 43 blue 53 blue 25 orange 44 orange 33 orange 35 red 66 red 43 red 65 green 44 green 35 green 24 green 34 </code></pre> <p>Now, what I want to do is categorize these colors based on whether they are primary colors or secondary colors, where of course, blue, and red are primary colors, and orange, and green are secondary colors. And so I want to create the following DataFrame:</p> <pre><code>Color Value Category ------------------------------ blue 43 Primary blue 53 Primary blue 25 Primary orange 44 Secondary orange 33 Secondary orange 35 Secondary red 66 Primary red 43 Primary red 65 Primary green 44 Secondary green 35 Secondary green 24 Secondary green 34 Secondary </code></pre> <p>I am not sure if this involve needing to create a dictionary or if I just use a simple conditional statement to apply to my DataFrame. How can this be done in Python?</p>
<p>You can use a simple <code>np.where</code>:</p> <pre><code>df['Category'] = np.where(df['Color'].str.contains('blue|red'), 'Primary', 'Secondary') </code></pre> <p>or</p> <pre><code>df['Color'].str.contains('blue|red').map({True:'Primary',False:'Secondary'}) </code></pre>
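For completeness, a self-contained sketch on a shortened version of the question's data; note that <code>isin</code> is a safer test than <code>str.contains('blue|red')</code>, which would also match color names that merely contain "red" or "blue":

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Color": ["blue", "orange", "red", "green"],
                   "Value": [43, 44, 66, 44]})

# Exact membership test instead of substring matching:
df["Category"] = np.where(df["Color"].isin(["blue", "red"]),
                          "Primary", "Secondary")
print(df["Category"].tolist())   # ['Primary', 'Secondary', 'Primary', 'Secondary']
```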
python|pandas|dataframe
1
1,902,775
73,028,669
How to click on Search button using Python Selenium
<p>I'm trying to click a search button with the help of selenium webdriver and python</p> <p>Here is the HTML Code</p> <pre><code>&lt;button data-testid=&quot;search-button&quot; tabindex=&quot;4&quot; type=&quot;submit&quot; class=&quot;sc-2ry4jn-0 sc-2ry4jn-2 sc-17kxwsy-0 bWjDpN&quot; xpath=&quot;1&quot;&gt;&lt;div data-testid=&quot;icon-testid&quot; class=&quot;sc- 121424n-0 loEDwb&quot;&gt;&lt;div class=&quot;sc-121424n-2 jFTWvP&quot;&gt;&lt;span class=&quot;sc-1kvy6kt-0 jTNjLr sc- 121424n-3 gCitZe&quot; data-testid=&quot;icon:icon-jameda-SVG-icon-Search&quot; color=&quot;#fff&quot;&gt;&lt;svg&gt;&lt;use data-testid=&quot;svgcontainer-use&quot; xmlns:xlink=&quot;http://www.w3.org/1999/xlink&quot; xlink:href=&quot;#icon-jameda-SVG-icon-Search&quot;&gt;&lt;/use&gt;&lt;/svg&gt;&lt;/span&gt;&lt;/div&gt;&lt;div color=&quot;#fff&quot; class=&quot;sc-121424n-1 hGbob&quot;&gt;Suchen&lt;/div&gt;&lt;/div&gt;&lt;/button&gt; </code></pre> <pre><code>&lt;div data-testid=&quot;icon-testid&quot; class=&quot;sc-121424n-0 loEDwb&quot; xpath=&quot;1&quot;&gt;&lt;div class=&quot;sc- 121424n-2 jFTWvP&quot;&gt;&lt;span class=&quot;sc-1kvy6kt-0 jTNjLr sc-121424n-3 gCitZe&quot; data- testid=&quot;icon:icon-jameda-SVG-icon-Search&quot; color=&quot;#fff&quot;&gt;&lt;svg&gt;&lt;use data- testid=&quot;svgcontainer-use&quot; xmlns:xlink=&quot;http://www.w3.org/1999/xlink&quot; xlink:href=&quot;#icon- jameda-SVG-icon-Search&quot;&gt;&lt;/use&gt;&lt;/svg&gt;&lt;/span&gt;&lt;/div&gt;&lt;div color=&quot;#fff&quot; class=&quot;sc-121424n-1 hGbob&quot;&gt;Suchen&lt;/div&gt;&lt;/div&gt; </code></pre> <pre><code>&lt;div class=&quot;sc-121424n-2 jFTWvP&quot; xpath=&quot;1&quot;&gt;&lt;span class=&quot;sc-1kvy6kt-0 jTNjLr sc-121424n-3 gCitZe&quot; data-testid=&quot;icon:icon-jameda-SVG-icon-Search&quot; color=&quot;#fff&quot;&gt;&lt;svg&gt;&lt;use data- testid=&quot;svgcontainer-use&quot; xmlns:xlink=&quot;http://www.w3.org/1999/xlink&quot; xlink:href=&quot;#icon- 
jameda-SVG-icon-Search&quot;&gt;&lt;/use&gt;&lt;/svg&gt;&lt;/span&gt;&lt;/div&gt; </code></pre> <pre><code>&lt;span class=&quot;sc-1kvy6kt-0 jTNjLr sc-121424n-3 gCitZe&quot; data-testid=&quot;icon:icon-jameda-SVG- icon-Search&quot; color=&quot;#fff&quot; xpath=&quot;1&quot;&gt;&lt;svg&gt;&lt;use data-testid=&quot;svgcontainer-use&quot; xmlns:xlink=&quot;http://www.w3.org/1999/xlink&quot; xlink:href=&quot;#icon-jameda-SVG-icon-Search&quot;&gt; &lt;/use&gt;&lt;/svg&gt;&lt;/span&gt; </code></pre> <p>To see the whole HTML Code visit: <a href="http://www.jameda.de" rel="nofollow noreferrer">www.jameda.de</a> and check out the green search button in the right corner</p> <p>I already tried to click it via <code>CLASS_NAME</code>, <code>XPATH</code>, <code>LINK_TEXT</code> but I always get the following error.</p> <pre><code>no such element: Unable to locate element: </code></pre> <p>Here is my code I used so far:</p> <pre><code>driver.find_element(by=By.CLASS_NAME, value=&quot;sc-2ry4jn-0 sc-2ry4jn-2 sc-17kxwsy-0 bWjDpN&quot;).click() </code></pre> <p>The button is visible when trying to click it.</p>
<p>To click on the element <kbd> Suchen</kbd> you need to induce <a href="https://stackoverflow.com/a/59130336/7429447">WebDriverWait</a> for the <a href="https://stackoverflow.com/a/54194511/7429447"><em>element_to_be_clickable()</em></a> and you can use either of the following <a href="https://stackoverflow.com/a/48056120/7429447"><em>locator strategies</em></a>:</p> <ul> <li><p>Using <em>CSS_SELECTOR</em>:</p> <pre><code>driver.get('https://www.jameda.de/') WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, &quot;button#CybotCookiebotDialogBodyLevelButtonLevelOptinAllowAll&quot;))).click() WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, &quot;span[class^='SearchString']&quot;))).click() </code></pre> </li> <li><p>Using <em>XPATH</em>:</p> <pre><code>driver.get('https://www.jameda.de/') WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, &quot;//button[@id='CybotCookiebotDialogBodyLevelButtonLevelOptinAllowAll']&quot;))).click() WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, &quot;//span[contains(., 'Suchen')]&quot;))).click() </code></pre> </li> <li><p><strong>Note</strong>: You have to add the following imports :</p> <pre><code>from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC </code></pre> </li> <li><p>Browser Snapshot:</p> </li> </ul> <p><a href="https://i.stack.imgur.com/TTY5e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TTY5e.png" alt="jameda" /></a></p>
python|selenium|xpath|css-selectors|webdriverwait
1
1,902,776
72,839,759
Try to find unique numbers in a long list, and return them in order. (prev. version did not work)
<p>You may consider this question is prev. answered, but the last version doesn't work with 2 digits number Given a list of numbers, find and print the elements that appear in it only once. Such elements should be printed in the order in which they occur in the original list.</p> <p>so here's my code (being fixed by other pythoneers)</p> <pre><code>a = [int(s) for s in input().split()] sortedLst = sorted(a) unique = [] uniqueOrd = [] for i in range(len(a) - 2): if sortedLst[i + 1] != sortedLst[i] and sortedLst[i + 1] != sortedLst[i + 2]: unique.append(sortedLst[i + 1]) for num in a: if num in unique: uniqueOrd.append(num) print(*uniqueOrd) </code></pre> <p>but the output does not work with 2 digits number, how can I fix this bug?</p>
<p>You don't have to labor over finding the uniques in a long list yourself; just use <code>Counter</code> from the standard-library <code>collections</code> module. It's faster and more Pythonic.</p> <pre><code>A = [int(s) for s in input().split()] sortedLst = sorted(A) # this is just to confirm your inputs.. from collections import Counter counts = Counter(A) print(counts) uniques = [k for k, v in counts.items() if v == 1] # just check the frequency (v) print(uniques) </code></pre> <p>Try some inputs like: 2 3 1 4 22 33 44 55 33 22</p> <pre><code>[2, 3, 1, 4, 44, 55] </code></pre>
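If the printed order must follow the original list exactly (as the question requires), the counts can be reused for an order-preserving filter:

```python
from collections import Counter

A = [2, 3, 1, 4, 22, 33, 44, 55, 33, 22]
counts = Counter(A)

# Walk the original list so elements keep their original order:
uniques = [x for x in A if counts[x] == 1]
print(uniques)   # [2, 3, 1, 4, 44, 55]
```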
python|list|unique
1
1,902,777
55,593,265
Is it possible to loop through 10^8 possibilities to determine the correct answer?
<p>I have a number which is 615 digits long. Throughout it, there are 8 places where digits are missing. I have to find out what the digits are. There are 10^8 possibilities. </p> <p>It is for an RSA problem. The number in question is the private key, and I am trying to find out what it is. To help me, I have the public key pair (n, e), both of which are also 615 digits long, and also a plaintext and corresponding ciphertext. </p> <p>So the only way to figure out d is to bruteforce it. I am trying to use gmpy2 in python to figure it out. I had to jump through a lot of hoops to get it to work. I do not even know if I correctly did it. I had to download Python2.7 so I could run the gmpy2 installer just to not get an error message. But I think it works now, as I can type </p> <pre><code>&gt;&gt;&gt;import gmpy2 </code></pre> <p>in the terminal and it doesnt give me an error. </p> <p>Before I try to loop through 10^8 possibilities, I want to know if its possible to do so in a relatively short amount of time, considering my situation. I do not want to fry my computer or freeze it trying to compute this. I also want to know if I am using the right tools for this, or is gmpy2 not the correct version, or Python2.7 is not good/fast enough. I am running gmpy2 on Python2.7 on a laptop. </p> <p>In the end I suppose I want to take all 10^8 answers and raise such that C^d = M mod n. So thats an (already) large number to the power of number 615 digits long, 10^8 times. Is this possible? If it is, how can I do this using gmpy2? Is there a more efficient way to compute this? </p> <p>I sincerely apologize if this is not the right place to ask this. Thank you for any help.</p>
<p><strong>You're not going to fry your computer.</strong> </p> <p>It may take a long time to run, but this looks like a straight O(n) problem, so it won't blow up to infinity. As long as it doesn't take an obscene amount of time to check whether one candidate key is valid, this may even take less than a minute to run. Modern machines measure clock speeds in GHz. That's 10^9 cycles per second. And besides, since you say you can't make any inferences about the correct answer from wrong guesses, brute force seems like the only solution.</p>
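One reason the runtime is tractable: each candidate check is a single modular exponentiation, which Python's built-in three-argument pow() (like gmpy2's powmod) computes efficiently. A toy-sized sketch of the per-candidate test, with small made-up primes standing in for the 615-digit values (Python 3.8+ for the modular inverse):

```python
# Toy RSA parameters -- real ones would be ~615 digits, but the check is identical.
p, q = 104729, 1299709          # small primes, for illustration only
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)             # plays the role of the "unknown" private exponent

m = 42                          # known plaintext
c = pow(m, e, n)                # corresponding ciphertext

# This is the test you would run on each of the 10**8 candidate d values:
def candidate_matches(cand_d):
    return pow(c, cand_d, n) == m

print(candidate_matches(d))       # True
print(candidate_matches(d + 1))   # False
```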
python|gmpy
0
1,902,778
73,204,508
How to count and remove consecutive words in df rows?
<p>I want to count and remove consecutive words in string or df rows. I</p> <pre><code>***input :*** str = &quot;ettim ettim deneme karar verdim verdim buna buna buna&quot; ***output :*** output = &quot;ettim deneme karar verdim buna&quot; output2 = { &quot;ettim&quot; : 2, &quot;verdim&quot; :2 , &quot;buna&quot; : 3&quot;} </code></pre> <p>How do I do fastly method with regex or something else</p> <p>Thanks</p>
<p>Try:</p> <pre><code>import regex as re s = 'ettim ettim deneme karar verdim verdim buna buna buna' rgx = re.compile(r'(?&lt;!\S)(\S+)(?:\s+\1)*(?!\S)', re.I) output1 = re.sub(rgx, r'\1', s) output2 = {} for i in re.finditer(rgx, s): if i.group(1) != i.group(0): output2[i.group(1)] = len(re.split(r'\s+', i.group(0))) print(output1) print(output2) </code></pre> <p>Prints:</p> <pre><code>ettim deneme karar verdim buna {'ettim': 2, 'verdim': 2, 'buna': 3} </code></pre> <hr /> <p>Core of the idea above is to use <code>re.compile(r'(?&lt;!\S)(\S+)(?:\s+\1)*(?!\S)', re.I)</code> to match case-insensitive consecutive words. See an online <a href="https://regex101.com/r/tjpDJq/1" rel="nofollow noreferrer">demo</a>.</p> <ul> <li><code>(?&lt;!\S)</code> - Negative lookbehind to assert position is not preceded by a non-whitespace character;</li> <li><code>(\S+)</code> - 1st Capture group to match 1+ non-whitespace characters;</li> <li><code>(?:\s+\1)*</code> - Match 0+ times a non-capture group holding 1+ whitespace characters and a backreference to what is matched previously in 1st group;</li> <li><code>(?!\S)</code> - Negative lookahead to assert position is not followed by a non-whitespace character.</li> </ul> <hr /> <p><strong>EDIT:</strong> I did notice that if the same consecutive words occur multiple times in the same text you may end up overwriting your dictionary's values. To stop that I edited the keys a bit:</p> <pre><code>import regex as re s = 'ettim ettim buna buna deneme karar verdim verdim buna buna buna' rgx = re.compile(r'\b(\S+)(?:\s+\1)*\b', re.I) output1 = re.sub(rgx, r'\1', s) output2 = {} c = 0 for i in re.finditer(rgx, s): if i.group(1) != i.group(0): c = c + 1 output2[str(c) + &quot;-&quot; + i.group(1)] = len(re.split(r'\s+', i.group(0))) print(output1) print(output2) </code></pre> <p>Prints:</p> <pre><code>ettim buna deneme karar verdim buna {'1-ettim': 2, '2-buna': 2, '3-verdim': 2, '4-buna': 3} </code></pre>
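The same pattern also works with the standard-library re module (the third-party regex package imported above is not required for it); a compact variant that counts words by splitting each multi-word match:

```python
import re

s = "ettim ettim deneme karar verdim verdim buna buna buna"
rgx = re.compile(r"(?<!\S)(\S+)(?:\s+\1)*(?!\S)")

# Collapse each run of repeated words to a single word:
deduped = rgx.sub(r"\1", s)

# Count only the runs that actually repeated (full match != single word):
counts = {m.group(1): len(m.group(0).split())
          for m in rgx.finditer(s) if m.group(0) != m.group(1)}

print(deduped)   # ettim deneme karar verdim buna
print(counts)    # {'ettim': 2, 'verdim': 2, 'buna': 3}
```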
python-3.x|string|duplicates
3
1,902,779
50,032,810
Python file closes itself
<p>Whenever I run the snippet here, it will print "False" before immediately following on the line after with "ValueError: I/O operation on closed file" Is there anything with rstrip that will close the file?</p> <pre><code>with open(ffile, 'rb') as f: print f.closed lines = (line.rstrip() for line in f) lines = (line for line in lines if line) </code></pre> <p>This is the entire snippet</p> <pre><code>ffile = sys.argv[1] ifile = sys.argv[2] sha1 = hashlib.sha1() with open(ifile, 'rb') as f: while True: data = f.read(5000) if not data: break sha1.update(data) digest = sha1.hexdigest() digest_int = int(digest, 16) with open(ffile, 'rb') as f: print f.closed lines = (line.rstrip() for line in f) lines = (line for line in lines if line) maxid = 0 for l in lines: node_name = l.split(' ')[0] nextid = l.split(' ')[1] nextid = int(nextid, 16) if (nextid == digest_int): maxid = nextid break elif nextid &lt; digest_int and not("Finger" in node_name): if nextid &gt; maxid: maxid = nextid print str(digest_int) print str(maxid) </code></pre> <p>There is literally no code that closes anything.</p>
<p>The problem isn't in this code, but in some other code farther down that you haven't shown us.</p> <p>What you've written is creating a generator that, when iterated, will yield stripped, non-empty lines out of the file.</p> <p>That's perfectly fine. But if you don't <em>use</em> that generator until after you've closed the file, it will try to get those lines out of a closed file. (Remember, the whole point of generators is that they're <em>lazy</em>—they do all the work as late as possible, using as little memory as possible.)</p> <p>From your comments, it seems like you don't think you're closing the file anywhere. But in fact you are. The whole point of using <code>with</code> statements on files is that they close the file as soon as you exit the <code>with</code> body.</p> <hr> <p>For example, if you do this:</p> <pre><code>with open(ffile, 'rb') as f: print f.closed lines = (line.rstrip() for line in f) lines = (line for line in lines if line) for line in lines: print line </code></pre> <p>… that's an error, probably exactly the same kind of error you're seeing.</p> <hr> <p>But this:</p> <pre><code>with open(ffile, 'rb') as f: print f.closed lines = (line.rstrip() for line in f) lines = (line for line in lines if line) for line in lines: print line </code></pre> <p>… is just fine. You're using <code>lines</code> inside the <code>with</code> statement, while the file is still open.</p> <hr> <p>And this:</p> <pre><code>with open(ffile, 'rb') as f: print f.closed lines = (line.rstrip() for line in f) lines = (line for line in lines if line) lines = list(line) for line in lines: print line </code></pre> <p>… is also fine. You're using the generator inside the <code>with</code> statement, and storing everything in a list, which of course is still around and taking up memory even after the file goes away.</p> <hr> <p>What you want to do is probably some variation on the first fix if possible, some variation on the second otherwise. 
But without seeing any of your code, there's no way of telling you anything more specific.</p>
python|file|io
1
1,902,780
66,575,963
Python - Eliminating NaN values in each row of a numpy array or pandas dataframe
<p>I have a pandas dataframe that currently looks like this</p> <pre><code>|Eriksson| NaN | Boeser | NaN | | NaN | McDavid| NaN | NaN | | ... | ... | ... | ... | </code></pre> <p>I don't care whether its converted to a Numpy array or it remains a Data Frame, but I want an output object where the rows just consist of the non NaN values like this:</p> <pre><code>|Eriksson| Boeser| |McDavid | NaN | </code></pre> <p>(<code>NaN</code> because of the mismatched dimensions.) Is there any way to do this?</p>
<p>I think that this would do the trick for you:</p> <pre><code>df.apply(lambda x: pd.Series(x.dropna().values), axis=1) </code></pre> <p>Example:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame(np.random.randn(5,4)) &gt;&gt;&gt; df.iloc[1,2] = np.NaN &gt;&gt;&gt; df.iloc[0,1] = np.NaN &gt;&gt;&gt; df.iloc[2,1] = np.NaN &gt;&gt;&gt; df.iloc[2,0] = np.NaN &gt;&gt;&gt; df 0 1 2 3 0 -0.162388 NaN -0.299892 0.594846 1 3.165631 -1.190102 NaN -1.234934 2 NaN NaN 0.885439 -1.714365 3 -1.622833 -1.319395 -1.716550 -0.517699 4 0.688479 0.576763 0.645344 0.708909 &gt;&gt;&gt; df.apply(lambda x: pd.Series(x.dropna().values), axis=1) 0 1 2 3 0 -0.162388 -0.299892 0.594846 NaN 1 3.165631 -1.190102 -1.234934 NaN 2 0.885439 -1.714365 NaN NaN 3 -1.622833 -1.319395 -1.716550 -0.517699 4 0.688479 0.576763 0.645344 0.708909 </code></pre>
python|pandas|numpy
1
1,902,781
66,701,490
discord.py Invalid Form Body In embed.image.url: Scheme "<discord.file.file object at 0x000001e151c02360>" is not supported
<p>i tried it</p> <pre><code>data_stream = io.BytesIO() data_stream.seek(0) l = ['Communication', 'dress', 'food', 'culture', 'other'] plt.pie(r, labels=l, autopct='%.1f%%') plt.savefig(str(ctx.author)) chart = discord.File(data_stream, filename=f&quot;{str(ctx.author)}.png&quot;) embed = discord.Embed(title = 'ㅁㄴㅇㄹ', description = 'ㅁㄴㅇㄹd') embed.set_image(url=chart) plt.close() await ctx.send(embed=embed) </code></pre> <p>but error ..</p> <p><strong>Invalid Form Body In embed.image.url: Scheme &quot;&lt;discord.file.file object at 0x000001e151c02360&gt;&quot; is not supported. Scheme must be one of ('http', 'https').</strong></p> <p>how i can fix this bug?</p>
<p>Everything is in the error: you are trying to set the image with a File, but a URL is expected.</p> <p>There is a way to turn your image into a URL, which is explained in the Discord.py <a href="https://discordpy.readthedocs.io/en/latest/faq.html" rel="nofollow noreferrer">FAQ</a> (important to read!).</p> <p>I think that the <a href="https://discordpy.readthedocs.io/en/latest/faq.html#how-do-i-use-a-local-image-file-for-an-embed-image" rel="nofollow noreferrer">How do I use a local image file for an embed image?</a> question corresponds to yours.</p> <p>Have a nice day!</p>
python|discord.py
0
1,902,782
53,101,229
How to iterate through a matrix column in python
<p>I have a matrix with the cell values only <code>0</code> or <code>1</code>.</p> <p>I want to count how many ones or zeros are there in the same row or column to a given cell.</p> <p>For example, the value <code>matrix[r][c]</code> is <code>1</code>, so I want to know how many ones are there in the same row. This code does that:</p> <pre><code>count_in_row = 0 value = matrix[r][c] for i in matrix[r]: if i == value: count_in_row += 1 </code></pre> <p>The for cycle iterates through the same row and counts all ones (cells with the same value).</p> <p>What if I want to do the same process with columns? Will I iterate through the whole matrix or it is possible through just one column?</p> <p>PS: I don't want to use <code>numpy</code>, <code>transpose</code> or <code>zip</code>; better with composite cycle.</p>
<p>You have not specified what the datatype of your matrix is. If it is a list of lists, then there is no way to "get just one column", but the code is still similar (assuming that <code>r</code> and <code>c</code> are of type <code>int</code>):</p> <p>I added the functionality to only count the cells adjacent to the cell in question (above, below, left and right; does NOT consider diagonals); this is done by checking that the difference between indexes is not greater than 1.</p> <pre><code>count_in_row = 0 count_in_col = 0 value = matrix[r][c] for j in range(len(matrix[r])): if abs(j - c) &lt;= 1: # only if it is adjacent if matrix[r][j] == value: count_in_row += 1 for i in range(len(matrix)): if abs(i - r) &lt;= 1: # only if it is adjacent if matrix[i][c] == value: count_in_col += 1 </code></pre> <p>Or if following the way you started it (whole rows and columns, not only adjacent ones):</p> <pre><code>for col_val in matrix[r]: if col_val == value: count_in_row += 1 for row in matrix: if row[c] == value: count_in_col += 1 </code></pre> <hr> <p>If you will be doing this for a lot of cells, then there are better ways to do that (even without <code>numpy</code>, but <code>numpy</code> is definitely a very good option). </p>
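For example, running the whole-row/whole-column version on a small 0/1 matrix (a list of lists, as in the answer):

```python
matrix = [
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
]
r, c = 1, 1              # look at the cell matrix[1][1]
value = matrix[r][c]     # value == 1

# count matching cells in the same row / same column (the cell itself included)
count_in_row = sum(1 for col_val in matrix[r] if col_val == value)
count_in_col = sum(1 for row in matrix if row[c] == value)

print(count_in_row, count_in_col)  # 2 2
```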
python|loops|matrix|cycle
3
1,902,783
65,101,153
SyntaxError: multiple statements found while compiling a single statement - python
<pre><code>endFlag = False while endFlag == False: fruits= ['apple','cherry','banana','kiwi', 'lemon','pear', 'peach','avocado'] num = int(input('please select a fruit with the number associated with it ' )) if num == 0: print (fruits[0]) elif num ==1: print(fruits[1]) elif num == 2: print(fruits[2]) elif num == 3: print(fruits[3]) elif num == 4: print(fruits[4]) elif num == 5: print(fruits[5]) elif num == 6: print (fruits[6]) elif num == 7: print(fruits[7], ',please enjoy your fruit!') else: print('enter another fruit please, that one is not available') VendingMachine = input(&quot;would you like to repeat the program again? Yes/No &quot;) if VendingMachine == 'N': endFlag = True </code></pre> <p>This code shows &quot;SyntaxError: multiple statements found while compiling a single statement&quot; and I need help because I do not know why.</p>
<p>The problem may be the indentation at the end. The if, the elifs and the else should have the same indentation.</p> <p>Also the code can be improved like this:</p> <pre><code>#put fruits outside the loop so it only runs once fruits= ['apple','cherry','banana','kiwi', 'lemon','pear', 'peach','avocado'] endFlag = False while not endFlag: num = int(input('please select a fruit with the number associated with it')) if num &gt;= 0 and num &lt; len(fruits): #You can also use try/except here print(fruits[num]) else: print('enter another fruit please, that one is not available') VendingMachine = input(&quot;would you like to repeat the program again? Yes/No &quot;) if VendingMachine[0] == 'N': #if user input is 'No' your code fails endFlag = True </code></pre>
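A sketch of the try/except variant mentioned in the comment above; the function name and the None return are my own choices, purely illustrative:

```python
fruits = ['apple', 'cherry', 'banana', 'kiwi', 'lemon', 'pear', 'peach', 'avocado']

def pick_fruit(raw):
    """Return the chosen fruit, or None if the input is invalid."""
    try:
        num = int(raw)
        if num < 0:              # block negative indices wrapping around
            raise IndexError
        return fruits[num]
    except (ValueError, IndexError):
        return None

print(pick_fruit('7'))    # avocado
print(pick_fruit('9'))    # None  (out of range)
print(pick_fruit('x'))    # None  (not a number)
```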
python
2
1,902,784
68,747,545
How to calculate cumulative percent change by each group?
<p>I'd like to create a new column to calculate the cumulative percent change by each group</p> <p>Sample dataset:</p> <pre><code>import pandas as pd df = pd.DataFrame({'Group':['A', 'A', 'A', 'B', 'B'], 'Col_1':[100, 200, 300, 400, 500], 'Col_2':[55, 66, 77, 88, 99]}) </code></pre> <p>Methodology: See example below</p> <pre><code>| Group |Col_1 | Col_2 | Cumulative Percent Change | |-------|------|--------|---------------------------------------| | A | 100 | 55 | 1 | | A | 200 | 66 |(66-55)/55 + 1 | | A | 300 | 77 |((77-66)/66) + ((66-55)/55 + 1) | | B | 400 | 88 | 1 | | B | 500 | 99 |((99-88)/88) + 1 | </code></pre>
<p>You need to <code>groupby</code> twice, once to compute the percent change (with <code>pct_change</code>) and once for the cumulative sum+1 (<code>cumsum</code> and <code>add(1)</code>):</p> <pre><code>df['CPC'] = (df.groupby('Group')['Col_2'] .pct_change() .fillna(0) .groupby(df['Group']) .cumsum().add(1) ) </code></pre> <p>output:</p> <pre><code> Group Col_1 Col_2 CPC 0 A 100 55 1.000000 1 A 200 66 1.200000 2 A 300 77 1.366667 3 B 400 88 1.000000 4 B 500 99 1.125000 </code></pre>
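A quick sanity check of the group-A value against the formula given in the question:

```python
col_2 = [55, 66, 77]             # group A values

cpc = 1.0                        # first row of each group starts at 1
for prev, cur in zip(col_2, col_2[1:]):
    cpc += (cur - prev) / prev   # add each successive percent change

print(round(cpc, 6))  # 1.366667, matching row 2 of the output above
```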
pandas|dataframe|group-by
1
1,902,785
68,796,797
What is the best way to include external Python modules when distributing a project?
<p>I'm working on a simple script for a friend who isn't knowledgeable about programming at all. My script uses several libraries installed from external sources via pip (requests, BeautifulSoup, etc.). I would like to send my script to my friend with as little set-up on his end as possible, but can't figure out how to include those external libraries in the repository. Is there a 'proper' or best way to package those libraries so that the user of the script doesn't have to install them manually?</p> <p>I've looked into using venv or a setup.py file, but I'm not sure if either of those is an appropriate approach or how to implement those solutions.</p>
<p>I'd say that the user installing the packages/modules manually is common practice when exploring a distributed project.</p> <p>However, perhaps the concept of a requirements file may be pertinent here.</p> <p>Before pushing your project to your repo (or even after is fine), in your local project directory run a pip freeze command like:</p> <pre><code>pip freeze &gt; requirements.txt </code></pre> <p>(or some variation of that, Google it if that doesn't work)</p> <p>which will <em>freeze</em> the names of your installed modules into a file called <strong>requirements.txt</strong>.</p> <p>When someone wants to run any code from your project when they download the repo, they can quickly install all necessary packages with</p> <pre><code>pip install -r requirements.txt </code></pre> <p>Read Pip Documentation <a href="https://pip.pypa.io/en/stable/user_guide/" rel="nofollow noreferrer">Here</a></p>
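For the libraries mentioned in the question (requests and BeautifulSoup), the generated requirements.txt is just a plain list of pinned package names; the version numbers below are illustrative, yours will be whatever pip freeze reports:

```
requests==2.26.0
beautifulsoup4==4.10.0
```

With that file in the repo, the only setup on the recipient's side is Python, pip, and the single pip install -r requirements.txt command.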
python|pip|package
1
1,902,786
62,713,033
Listbox in Json file
<p>I have a json file that i save my listbox data in. After i choose items (highlight) from the listbox and save my file then load it, the listbox items are stored in my file and they are printed but they are not highlighted. How can I highlight them so i can deselect and select again if i want to change my item selection?</p> <p>THE CODE:</p> <pre><code>import tkinter as tk import json from tkinter.filedialog import askdirectory root = tk.Tk() root.title('Intialization') value = [] def callback(listbox): global value value = [listbox.get(ratio) for ratio in listbox.curselection()] def writeToJSONFile(path, fileName, data): filePathNameWExt = path + '/' + fileName + '.json' with open(filePathNameWExt, 'w') as fp: json.dump(data, fp) def check(): global value data = {} path = askdirectory() data['items'] = value writeToJSONFile(path, 'json', data) def w(): window = tk.Toplevel(root) window.title('Main') global value listbox = tk.Listbox(window, activestyle='dotbox', selectmode=tk.MULTIPLE, exportselection=False) values = [100, 155, 200, 255, 300, 355, 400] for item in values: listbox.insert(tk.END, item) scrollbar = tk.Scrollbar(window) scrollbar.grid(column=0, row=2, sticky='nse', pady=20) listbox.bind('&lt;&lt;ListboxSelect&gt;&gt;', func=lambda z: callback(listbox)) listbox.config(width=13, height=4, yscrollcommand=scrollbar.set) listbox.grid(column=0, row=2, pady=20, sticky='ne') save_config = tk.Button(window, text=&quot;Save Configuration&quot;, bg='green', command=lambda: check()) save_config.grid(column=0, row=3) try: f = open('json.json', &quot;r&quot;) j = json.loads(f.read()) for key, value in j.items(): print(key, &quot;:&quot;, value) value = j['items'] print(j) except FileNotFoundError: print(&quot;No Json File&quot;) window.grab_set() load_btn = tk.Button(root, text=&quot;Load&quot;, command=w) load_btn.place(relx=0.5, rely=0.5, anchor=tk.CENTER) root.mainloop() </code></pre>
<p>After you read this json file,you could get the index of values and make those items selected. Try:</p> <pre><code>import tkinter as tk import json from tkinter.filedialog import askdirectory root = tk.Tk() root.title('Intialization') value = [] def callback(listbox): global value value = [listbox.get(ratio) for ratio in listbox.curselection()] def writeToJSONFile(path, fileName, data): filePathNameWExt = path + '/' + fileName + '.json' with open(filePathNameWExt, 'w') as fp: json.dump(data, fp) def check(): global value data = {} path = askdirectory() data['items'] = value writeToJSONFile(path, 'json', data) def w(): window = tk.Toplevel(root) window.title('Main') global value listbox = tk.Listbox(window, activestyle='dotbox', selectmode=tk.MULTIPLE, exportselection=False) values = [100, 155, 200, 255, 300, 355, 400] for item in values: listbox.insert(tk.END, item) scrollbar = tk.Scrollbar(window) scrollbar.grid(column=0, row=2, sticky='nse', pady=20) listbox.bind('&lt;&lt;ListboxSelect&gt;&gt;', func=lambda z: callback(listbox)) listbox.config(width=13, height=4, yscrollcommand=scrollbar.set) listbox.grid(column=0, row=2, pady=20, sticky='ne') save_config = tk.Button(window, text=&quot;Save Configuration&quot;, bg='green', command=lambda: check()) save_config.grid(column=0, row=3) try: f = open('json.json', &quot;r&quot;) j = json.loads(f.read()) for key, value in j.items(): print(key, &quot;:&quot;, value) value = j['items'] index_list = [values.index(i) for i in value] # get the index for index in index_list: listbox.selection_set(index) # make it selected f.close() except FileNotFoundError: print(&quot;No Json File&quot;) window.grab_set() load_btn = tk.Button(root, text=&quot;Load&quot;, command=w) load_btn.place(relx=0.5, rely=0.5, anchor=tk.CENTER) root.mainloop() </code></pre>
python|json|python-3.x|tkinter
2
1,902,787
62,796,587
How to set the location of bars in python matplotlib?
<p>I use the following code to generate the following image.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt labels = ['G1', 'G2'] men_means = [20, 35] women_means = [25, 32] men_std = [2, 3] women_std = [3, 5] width = 0.25 # the width of the bars: can also be len(x) sequence fig, ax = plt.subplots() ax.bar(labels, men_means, width, yerr=men_std, label='Men') ax.bar(labels, women_means, width, yerr=women_std, bottom=men_means, label='Women') ax.set_ylabel('Scores') ax.set_title('Scores by group and gender') ax.legend() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/KApxc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KApxc.png" alt="enter image description here" /></a></p> <p>The two bars are too far from each other. How can I make them come closer but still be centered? Can I add some padding on the left and right?</p>
<p>You can manually set the positions of the bars in the x axis. You then have to add the tick labels manually:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt men_means = [20, 35] women_means = [25, 32] men_std = [2, 3] women_std = [3, 5] width = 0.25 # the width of the bars: can also be len(x) sequence fig, ax = plt.subplots() # Define positions for each bar... just random here # Change the 2nd argument to move bars around; play with bar widths also positions = (0.5, 0.8) # Here the first argument is the x position for the bars ax.bar(positions, men_means, width, yerr=men_std, label='Men') ax.bar(positions, women_means, width, yerr=women_std, bottom=men_means, label='Women') # Now set the ticks and the corresponding labels labels = ('G1', 'G2') plt.xticks(positions, labels) ax.set_ylabel('Scores') ax.set_title('Scores by group and gender') ax.legend() plt.show() </code></pre> <p>Result:</p> <p><a href="https://i.stack.imgur.com/VNMAU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VNMAU.png" alt="enter image description here" /></a></p> <p>You can play around with the bar widths and the 2nd argument in <code>positions</code> to get the distance you'd like.</p>
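If you'd rather compute evenly spaced, centered positions than hard-code them, here is a small sketch; the center and spacing values are arbitrary choices (not matplotlib defaults) that happen to reproduce the (0.5, 0.8) used above:

```python
n_groups = 2
center, spacing = 0.65, 0.3   # overall center of the group and gap between bars

positions = [center + (i - (n_groups - 1) / 2) * spacing
             for i in range(n_groups)]
print([round(p, 2) for p in positions])  # [0.5, 0.8]
```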
python|matplotlib
1
1,902,788
61,796,338
Position of a dash dropdown inline with other components
<p>I am using plotly Dash and i am having problems with the position of a dash component on the page. It’s the ‘change the year’ dropdown as shown in the picture below. I would like it to be where i show with the arrow, whereas it’s below my first radioitem component. <a href="https://i.stack.imgur.com/59XgW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/59XgW.png" alt="enter image description here"></a></p> <p>My code below:</p> <pre><code>external_stylesheets = ["https://codepen.io/chriddyp/pen/bWLwgP.css"] app = dash.Dash(__name__, external_stylesheets= external_stylesheets ) # determining the layout of the page app.layout = html.Div( [ html.Div( [ html.Label( ['Change continent:'], style={'font-weight': 'bold', 'display': 'inline-block', 'justifyContent': 'center', 'width': '65%', 'height': '100%'}, ), dcc.RadioItems( id='radio_ITEMS', options=[ {'label': 'AMERICA', 'value': 'graph1'}, {'label': 'EUROPE', 'value': 'graph2'}, {'label': 'ASIA', 'value': 'graph3'}], value='graph1', ), ], className='six columns' ), html.Div( [ html.Label( ['Change the variable:'], style={'font-weight': 'bold', 'display': 'inline-block', 'justifyContent': 'center', 'width': '65%', 'height': '100%'}, ), dcc.RadioItems( id='radio_items2', options=[{'label': x, 'value': x} for x in cols1], value='Happiness Score', ), ], className='six columns' ), html.Div( [ html.Label( ['Change the year:'], style={'font-weight': 'bold', 'display': 'inline-block'}), dcc.Dropdown(id="drop", options=[ {"label": "1970", "value": 2015}, {"label": "1980", "value": 2016}, {"label": "1990", "value": 2017}, {"label": "2000", "value": 2018}, {"label": "2010", "value": 2019}], multi=False, value=2015, style={"width": "35%"}, )]), html.Div( dcc.Graph( id='the_graph', style={'height': 600, 'width': 1000, } ), ), ] ,className= 'fifteen columns') @app.callback( Output( 'the_graph', 'figure' ), [Input( 'radio_ITEMS', 'value' ), Input( component_id='radio_items2', component_property='value' ), 
Input('drop', 'value')] </code></pre>
<p>As a rule of thumb, adjust the style of your app using <code>css</code>, not inline <code>style</code>. </p> <p>The issue you're experiencing is that the width of your divs are in sum greater than 100% therefore making them span multiple rows. You can fix using the code provided below.</p> <p><strong>Remove the styling from your code + use broad classes (specific ids if necessary):</strong></p> <pre><code># determining the layout of the page app.layout = html.Div( [ html.Div( classname="my-cols", children = [ html.Label('Change continent:'), dcc.RadioItems( id='radio_ITEMS', options=[ {'label': 'AMERICA', 'value': 'graph1'}, {'label': 'EUROPE', 'value': 'graph2'}, {'label': 'ASIA', 'value': 'graph3'} ], value='graph1', ), ] ), html.Div( classname="my-cols", children = [ html.Label('Change variable:'), dcc.RadioItems( id='radio_items2', # id naming should be consistent... options=[{'label': x, 'value': x} for x in cols1], value='Happiness Score', ), ] ), html.Div( classname="my-cols", children = [ html.Label('Change year:'), dcc.RadioItems( id='radio_items2', # id naming should be consistent... options=[ {"label": "1970", "value": 2015}, {"label": "1980", "value": 2016}, {"label": "1990", "value": 2017}, {"label": "2000", "value": 2018}, {"label": "2010", "value": 2019} ], multi=False, value='Happiness Score', ), ] ), html.Div( dcc.Graph( id='the_graph', ), ) ) # close the app! </code></pre> <p>Create a css file in your project root – or create a folder in the root named <code>styles</code> and add a file in that folder. Name is arbitrary...</p> <pre class="lang-css prettyprint-override"><code>.my-cols{ width: calc(100%/3); float: left; } /* The following formats the content whose id = the_graph*/ @the_graph{ height: 600; width: 1000 } /* Style all labels*/ label{ font-weight: bold; display: inline-block; /* I don't think you need these style configs... 
(justify, w, h)...*/ justifyContent: center; width: 65%; height: 100%; } </code></pre> <p><em>This should solve your problem.</em></p>
python|html|css|python-3.x|plotly-dash
5
1,902,789
61,822,604
How to get values from list of dictionaries?
<p>This is my data set, this is the column I separated from the csv file.</p> <pre><code>0 [{'id': 16, 'name': 'Animation'}, {'id': 35, '... 1 [{'id': 12, 'name': 'Adventure'}, {'id': 14, '... 2 [{'id': 10749, 'name': 'Romance'}, {'id': 35, ... 3 [{'id': 35, 'name': 'Comedy'}, {'id': 18, 'nam... 4 [{'id': 35, 'name': 'Comedy'}] </code></pre> <p>How to get just a list with the content <code>['Animation', 'Adventure', 'Romance', 'Comedy', 'Comedy']</code> as output?</p>
<p>It's unclear if you have a list of lists or just one list.</p> <p>For a single list you can use a list comprehension:</p> <pre><code>dict_list = [{'id': 10749, 'name': 'Romance'}, {'id': 35, 'name': 'Comedy'}] [dict_item['name'] for dict_item in dict_list] </code></pre> <p>Otherwise, you can unnest the first list and then do a list comprehension</p> <pre><code>dict_list = [[{'id': 1, 'name': 'Animation'}, {'id': 2, 'name': 'Comedy'}],[{'id': 3, 'name': 'Romance'}, {'id': 4, 'name': 'Comedy'}]] [dict_item['name'] for dict_item in [dict_item for sublist in dict_list for dict_item in sublist]] </code></pre>
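If, as the expected output suggests, you only want the first genre name from each row, a plain-Python sketch (assuming each cell has already been parsed into a list of dicts; if the csv column is still a string, ast.literal_eval can parse it first):

```python
rows = [
    [{'id': 16, 'name': 'Animation'}, {'id': 35, 'name': 'Comedy'}],
    [{'id': 12, 'name': 'Adventure'}, {'id': 14, 'name': 'Fantasy'}],
    [{'id': 10749, 'name': 'Romance'}, {'id': 35, 'name': 'Comedy'}],
    [{'id': 35, 'name': 'Comedy'}, {'id': 18, 'name': 'Drama'}],
    [{'id': 35, 'name': 'Comedy'}],
]
first_genres = [row[0]['name'] for row in rows]
print(first_genres)  # ['Animation', 'Adventure', 'Romance', 'Comedy', 'Comedy']
```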
python
0
1,902,790
67,536,355
Removing '#' from the scraped links
<p>Hi I am beginner with web scraping. I am trying to scrape all the links from a website and I am successful to some extent.</p> <pre><code>import requests from bs4 import BeautifulSoup url = 'https://www.marian.ac.in/' response = requests.get(url) soup = BeautifulSoup(response.text, 'html.parser') soup.title soup.title.string for link in soup.find_all('a',href=True): print(link['href']) </code></pre> <p>The issue I am facing is the output has '#'.How shall I remove this?</p> <p>Can anyone help with this?</p>
<p>Try the following to get the links that do not start with <code>#</code>. You can choose either of the conditions to meet the requirement:</p> <pre><code>import requests from bs4 import BeautifulSoup url = 'https://www.marian.ac.in/' response = requests.get(url) soup = BeautifulSoup(response.text, 'html.parser') for link in soup.find_all('a',href=True): if link['href'].strip().startswith(&quot;#&quot;):continue # if not link['href'].startswith(&quot;http&quot;):continue print(link['href']) </code></pre>
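A related cleanup, since many of the kept links will be relative paths: urllib.parse.urljoin from the standard library resolves them against the base URL. The hrefs below are made-up samples, not actual links from the site:

```python
from urllib.parse import urljoin

base = 'https://www.marian.ac.in/'
hrefs = ['#', '#top', 'about.php', 'https://example.com/x']

# drop pure fragments, resolve the rest to absolute URLs
cleaned = [urljoin(base, h) for h in hrefs
           if not h.strip().startswith('#')]
print(cleaned)
# ['https://www.marian.ac.in/about.php', 'https://example.com/x']
```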
python|web-scraping
3
1,902,791
60,674,891
Extracting values from a dictionary for a respective key
<p>I have a dictionary in a below-mentioned pattern: </p> <pre><code>dict_one = {1: [2, 3, 4], 2: [3, 4, 4, 5],3 : [2, 5, 6, 6]} </code></pre> <p>I need to get an output such that for each key I have only one value adjacent to it and then finally I need to create a data frame out of it.</p> <p>The output would be similar to:</p> <pre><code>1 2 1 3 1 4 2 3 2 4 2 4 2 5 3 2 3 5 3 6 3 6 </code></pre> <p>Please help me with this. </p> <pre><code>dict_one = {1: [2, 3, 4], 2: [3, 4, 4, 5],3 : [2, 5, 6, 6]} df_column = ['key','value'] for key in dict_one.keys(): value = dict_one.values() row = (key,value) extended_ground_truth = pd.DataFrame.from_dict(row, orient='index', columns=df_column) extended_ground_truth.to_csv("extended_ground_truth.csv", index=None) </code></pre>
<p>You can normalize the data as you iterate the dictionary; a nested generator expression flattens each key's list so that every value gets its own row (indexing with <code>value[0]</code> would keep only the first value per key):</p> <pre><code>df=pd.DataFrame(((key, v) for key, values in dict_one.items() for v in values), columns=["key", "value"]) </code></pre>
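An alternative using pandas itself: on pandas 0.25+, Series.explode turns each key's list into one row per value, matching the expected output in the question:

```python
import pandas as pd

dict_one = {1: [2, 3, 4], 2: [3, 4, 4, 5], 3: [2, 5, 6, 6]}

df = (pd.Series(dict_one, name="value")
        .explode()            # one row per list element
        .rename_axis("key")
        .reset_index())
print(df.head(4))
```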
python|dataframe|dictionary
1
1,902,792
60,423,575
Why can't you omit the arguments to super if you add *args in __init__ definition?
<pre><code>class MyClass: def __init__(*args): print(super()) MyClass() </code></pre> <p>Why does this code raise <code>RuntimeError: super(): no arguments</code>? This is in Python 3.7.4.</p>
<p>Per <a href="https://www.python.org/dev/peps/pep-3135/" rel="nofollow noreferrer">PEP 3135</a>, which introduced "new <code>super</code>" (emphasis mine):</p> <blockquote> <p>The new syntax:</p> <pre><code>super() </code></pre> <p>is equivalent to:</p> <pre><code>super(__class__, &lt;firstarg&gt;) </code></pre> <p>where <code>__class__</code> is the class that the method was defined in, and <strong><code>&lt;firstarg&gt;</code> is the first parameter of the method</strong> (normally <code>self</code> for instance methods, and <code>cls</code> for class methods).</p> </blockquote> <p>There <strong>must</strong> be a specific first parameter for this to work (although it doesn't <em>necessarily</em> have to be called <code>self</code> or <code>cls</code>), it won't use e.g. <code>args[0]</code>.</p> <hr> <p>As to <em>why</em> it needs to be a specific parameter, that's due to <a href="https://github.com/python/cpython/blob/1b55b65638254aa78b005fbf0b71fb02499f1852/Objects/typeobject.c#L7940-L7975" rel="nofollow noreferrer">the implementation</a>; per the comment it uses the <em>"first local variable on the stack"</em>. If <code>co-&gt;co_argcount == 0</code>, as it is when you only specify <code>*args</code>, you get the <code>no arguments</code> error. This behaviour may not be the same in other implementations than CPython.</p> <hr> <h3>Related</h3> <ul> <li><a href="https://stackoverflow.com/q/13126727/3001761">How is super() in Python 3 implemented?</a></li> <li><a href="https://stackoverflow.com/q/19608134/3001761">Why is Python 3.x&#39;s super() magic?</a></li> <li><a href="https://stackoverflow.com/q/39312553/3001761">Get &quot;super(): no arguments&quot; error in one case but not a similar case</a></li> <li><a href="https://stackoverflow.com/q/36993577/3001761">Schr&#246;dinger&#39;s variable: the __class__ cell magically appears if you&#39;re checking for its presence?</a></li> </ul>
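A small demonstration of the rule: the first parameter can have any name, it just has to exist as a named parameter:

```python
class A:
    def __init__(self):
        self.from_a = True

class Broken(A):
    def __init__(*args):          # no named first parameter
        super().__init__()        # RuntimeError: super(): no arguments

class Fixed(A):
    def __init__(me, *args):      # any name works, not just `self`
        super().__init__()

try:
    Broken()
    raised = False
except RuntimeError:
    raised = True

print(raised, Fixed(1, 2).from_a)  # True True
```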
python|python-3.x|class
4
1,902,793
70,163,791
Manim png rendered is cropped
<p>I'm trying to plot a table from numbers upto 100, but when the png is rendered (because there's still no animation), the image is cropped and I don't know if it's necesary to do some zoom out to the scene or what. I tried with the flag -r but it only changes the size of the image, it still looks cropped.</p> <pre><code>from manim import * class DrawTable(Scene): def construct(self): N = 100 ROWS, COLS = 10, 10 vals = np.arange(1,N+1).reshape(ROWS,COLS) table = IntegerTable( vals, include_outer_lines=True ) self.add(table) </code></pre> <p>And the png: <a href="https://i.stack.imgur.com/UUkYh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UUkYh.png" alt="png rendered" /></a></p>
<p>You can use <code>.scale(value)</code> to manually rescale it.</p> <pre><code>table = table.scale(0.5) </code></pre> <p><a href="https://i.stack.imgur.com/ryV6w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ryV6w.png" alt="enter image description here" /></a></p> <hr /> <p>If you use <code>print( dir(table) )</code> then you can see all functions available for <code>table</code> and there is <code>.scale()</code> but also <code>.scale_to_fit_width(width)</code> and <code>.scale_to_fit_height(height)</code> which you can use with <code>config.frame_width</code>, <code>config.frame_height</code>. But you have to choose which one to use. For some (table and screen) sizes you will need <code>fit_width</code> and for others <code>fit_height</code>.</p> <pre><code>table = table.scale_to_fit_width(config.frame_width) </code></pre> <p><a href="https://i.stack.imgur.com/uTV32.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uTV32.png" alt="enter image description here" /></a></p> <pre><code>table = table.scale_to_fit_height(config.frame_height) </code></pre> <p><a href="https://i.stack.imgur.com/bOvdf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bOvdf.png" alt="enter image description here" /></a></p> <hr /> <p>You can also calculate the scale between <code>table.width</code> and <code>config.frame_width</code>, and the scale between <code>table.height</code> and <code>config.frame_height</code>, and use <code>min()</code> or <code>max()</code> to choose the correct scale. 
But again for some (table and screen) sizes you will need <code>min()</code> and for others <code>max()</code>.</p> <pre><code>scale_x = config.frame_width/table.width scale_y = config.frame_height/table.height scale = min([scale_x, scale_y]) table = table.scale(scale) </code></pre> <hr /> <p>Full code for tests.</p> <p>Tested with <a href="https://docs.manim.community/en/stable/index.html" rel="nofollow noreferrer">Manim Community</a> <code>0.12.0</code><br /> Not tested with <a href="https://3b1b.github.io/manim/index.html" rel="nofollow noreferrer">Original Manim created by 3Blue1Brown </a></p> <pre><code>from manim import * class DrawTable(Scene): def construct(self): #print(config) N = 100 ROWS, COLS = 10, 10 vals = np.arange(1,N+1).reshape(ROWS,COLS) table = IntegerTable( vals, include_outer_lines=True ) print(&quot;\n&quot;.join(dir(table))) # display all functions # --- manually --- #table = table.scale(0.5) # --- fit --- #print(config.frame_width, config.frame_height) #table = table.scale_to_fit_width(config.frame_width) #table = table.scale_to_fit_height(config.frame_height) # --- calculate scale --- print(table.width, table.height) scale_x = config.frame_width/table.width scale_y = config.frame_height/table.height scale = min([scale_x, scale_y]) print('scale:', scale_x, scale_y, '-&gt;', scale) table = table.scale(scale) # --- self.add(table) # --- #self.play(table.animate.scale(2.00)) #self.play(table.animate.scale(0.25)) #self.wait(3) </code></pre>
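The min/max choice boils down to plain arithmetic: min() shrinks the object until both dimensions fit inside the frame, while max() scales it until the frame is covered. A standalone sketch, using manim's default frame size of roughly 14.22 x 8 units and a hypothetical table size:

```python
frame_w, frame_h = 14.22, 8.0    # approx. manim default frame size
table_w, table_h = 12.0, 11.0    # hypothetical table, taller than the frame

fit = min(frame_w / table_w, frame_h / table_h)    # guarantees it fits inside
fill = max(frame_w / table_w, frame_h / table_h)   # guarantees the frame is covered

assert table_w * fit <= frame_w and table_h * fit <= frame_h
print(round(fit, 3), round(fill, 3))  # 0.727 1.185
```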
python|manim
2
1,902,794
70,600,067
using setattr within a class method to set something on self
<p>When doing this</p> <pre><code>class Example: def __init__(self, a, b): self.a = a self.b = b def update(self, **kwargs): for key, value in kwargs.items(): getattr(self, key) setattr(self, key, value) </code></pre> <p>.. the update function won't update an instance of this class. I've tried <code>self.__setattr__(key, value)</code> and <code>object.__setattr__(self, key, value)</code> and even tried <code>eval(f&quot;self.{key}={repr(value)}&quot;)</code> which threw an error!</p> <p>UPDATE: The code does work.initially I had coded <code>self.__setattr__(key, value)</code>. Microsoft vscode ( running on linux ) somehow caches things, for a very very long time and despite rotating through several changes of code didn't show any change in the test results i was running. After taking a break to ask this question I ran the answer code below, then reverted to setattr and everything worked from there. really annoyed with that!!</p>
<p>Use <code>__dict__</code> as a shortcut:</p> <pre><code>class Example: def __init__(self, a, b): self.a = a self.b = b def update(self, **kwargs): # you need to do some checks here (if attribute exists or not) self.__dict__.update(kwargs) e = Example(1, 2) print(e.__dict__) e.update(a=3, b=4) print(e.__dict__) </code></pre> <p>Output:</p> <pre><code>{'a': 1, 'b': 2} {'a': 3, 'b': 4} </code></pre>
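One sketch of the check that the "# you need to do some checks here" comment alludes to: reject keys that aren't already attributes of the instance (the error message wording is my own):

```python
class Example:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def update(self, **kwargs):
        for key, value in kwargs.items():
            if not hasattr(self, key):    # only allow existing attributes
                raise AttributeError(f"Example has no attribute {key!r}")
            setattr(self, key, value)

e = Example(1, 2)
e.update(a=10, b=20)
print(e.a, e.b)  # 10 20

try:
    e.update(c=3)
    rejected = False
except AttributeError:
    rejected = True
print(rejected)  # True
```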
python|class|setattr
2
1,902,795
63,704,937
TensorBoard showing lots of 'nodes' from previous models
<p>I am training a model on the MNIST data and I am using tensorboard to visualise the training and validation loss.</p> <p>Here is the code for my current model I am trying:</p> <pre><code>model=tf.keras.models.Sequential() #callback=tf.keras.callbacks.EarlyStopping(monitor='accuracy', min_delta=0, patience=0, verbose=0, mode='auto',restore_best_weights=False) #model.add(tf.keras.layers.InputLayer(input_shape=[28,28])) log_dir = &quot;logs/fit/&quot; + datetime.datetime.now().strftime(&quot;%Y%m%d-%H%M%S&quot;) tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1) reduce_lr=tf.keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.1, patience=5, verbose=0, mode='auto', min_delta=0.0001, cooldown=0, min_lr=0) optimizer=tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=True, name='Adam', clipnorm=5) # if hparams[HP_OPTIMIZER]=='adam': # optimizer=tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=True, # name='Adam', clipnorm=5) # elif hparams[HP_OPTIMIZER]=='sgd': # tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=False, name='SGD', **kwargs) model.add(tf.keras.layers.Flatten(input_shape=[28,28])) l2_new=tf.keras.regularizers.L2( l2=0.05) model.add(tf.keras.layers.BatchNormalization( axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, renorm=True, renorm_clipping=None, renorm_momentum=0.99)) model.add(tf.keras.layers.Dense(300,activation='relu',kernel_initializer=&quot;he_normal&quot;, kernel_regularizer=l2_new, bias_regularizer=l2_new)) model.add(tf.keras.layers.BatchNormalization( axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', 
moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, renorm=True, renorm_clipping=None, renorm_momentum=0.99)) model.add(tf.keras.layers.Dense(300,activation='relu',kernel_initializer=&quot;he_normal&quot;, kernel_regularizer=l2_new, bias_regularizer=l2_new)) model.add(tf.keras.layers.BatchNormalization( axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, renorm=True, renorm_clipping=None, renorm_momentum=0.99)) model.add(tf.keras.layers.Dense(10,activation='softmax')) </code></pre> <p>Sorry if it is a bit messy. I am essentially creating a sequential model with</p> <ol> <li>A Flatten input layer</li> <li>A Batch Norm layer 3.A 300 neuron dense layer</li> <li>A Batch Norm layer</li> <li>A 300 neuron dense layer</li> <li>A Batch Norm layer</li> <li>A Softmax output layer with 10 neurons.</li> </ol> <p>My model also uses the 'Adam' optimizer and learning rate decay.</p> <p>When I view my model under the graphs sub heading in tensorboard, I get the following picture:</p> <p><a href="https://i.stack.imgur.com/B2mhA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B2mhA.png" alt="TensorBoard under graphs subheading" /></a></p> <p>As you can see, there are lots of 'nodes', which I am guessing is because I have trained multiple models. How do I get rid of all the previous attempts.</p> <p>I have tried using <code>del model</code> and <code>tf.keras.backend.clear_session()</code> but they didn't work.</p> <p><strong>Edit</strong>: I have followed the advice of 'Aniket Bote' and deleted the logs. 
Here is the new output:</p> <p><a href="https://i.stack.imgur.com/K2SA0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K2SA0.png" alt="New Output" /></a></p> <p>I am still not sure it is correct. From my code, I don't think my graph should have two branches as shown, and I am still getting that huge stack of batch normalisation 'nodes' on the right.</p>
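<p>For reference, the stale runs can also be cleared programmatically before each training session. This is a minimal sketch, assuming the log files live under the same <code>logs/fit</code> root used for <code>log_dir</code> in the code above:</p>

```python
import os
import shutil

# Remove all previous TensorBoard runs so that only the next
# training session appears in the dashboard.
# Assumes the same root as log_dir = "logs/fit/..." above.
log_root = "logs/fit"
if os.path.isdir(log_root):
    shutil.rmtree(log_root)
os.makedirs(log_root, exist_ok=True)
```

<p>Run this before creating the <code>TensorBoard</code> callback, and each new run starts from an empty log directory.</p>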
<p>The second branch is not a graph in itself but rather a <strong>subgraph</strong>.<br /> TensorFlow builds graphs of the operations it performs in order to speed up the execution of code. If you click on those nodes you can see they are functions that are utilized by the batch normalization layer, not the layer itself. You can see all the layer information on your main graph.</p> <p>If you don't want those nodes you can get rid of them by setting the BatchNormalization's trainable attribute to False.</p> <p>In this case, the layer's weights won't change and TensorFlow no longer needs to compute anything for that layer, i.e. no function nodes will be generated.</p> <p><strong>Code:</strong></p> <pre><code>import tensorflow as tf import numpy as np np.random.seed(100) x = tf.constant(np.random.randint(50, size =(1000,28,28)), dtype = tf.float32) y = tf.constant(np.random.randint(10, size =(1000,)), dtype = tf.int32) model=tf.keras.models.Sequential() log_dir = &quot;logs&quot; tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1, profile_batch = 0) optimizer=tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=True, name='Adam', clipnorm=5) model.add(tf.keras.layers.Flatten(input_shape=[28,28])) l2_new=tf.keras.regularizers.L2( l2=0.05) model.add(tf.keras.layers.BatchNormalization( axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, renorm=True, renorm_clipping=None, renorm_momentum=0.99,trainable = False)) model.add(tf.keras.layers.Dense(300,activation='relu',kernel_initializer=&quot;he_normal&quot;, kernel_regularizer=l2_new, bias_regularizer=l2_new)) model.add(tf.keras.layers.BatchNormalization( axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, 
beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, renorm=True, renorm_clipping=None, renorm_momentum=0.99,trainable = False)) model.add(tf.keras.layers.Dense(300,activation='relu',kernel_initializer=&quot;he_normal&quot;, kernel_regularizer=l2_new, bias_regularizer=l2_new)) model.add(tf.keras.layers.BatchNormalization( axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, renorm=True, renorm_clipping=None, renorm_momentum=0.99,trainable = False)) model.add(tf.keras.layers.Dense(10,activation='softmax')) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) tf.keras.utils.plot_model(model, 'my_first_model.png', show_shapes=True) model.fit(x,y, epochs = 10, callbacks = tensorboard_callback) </code></pre> <p><strong>Output:</strong></p> <pre><code>Epoch 1/10 32/32 [==============================] - 0s 10ms/step - loss: 89.0275 - accuracy: 0.1100 Epoch 2/10 32/32 [==============================] - 0s 9ms/step - loss: 56.7906 - accuracy: 0.1310 Epoch 3/10 32/32 [==============================] - 0s 9ms/step - loss: 48.5681 - accuracy: 0.1490 Epoch 4/10 32/32 [==============================] - 0s 9ms/step - loss: 42.8176 - accuracy: 0.1850 Epoch 5/10 32/32 [==============================] - 0s 9ms/step - loss: 38.5857 - accuracy: 0.2110 Epoch 6/10 32/32 [==============================] - 0s 9ms/step - loss: 35.1675 - accuracy: 0.2540 Epoch 7/10 32/32 [==============================] - 0s 9ms/step - loss: 32.3327 - accuracy: 0.2750 Epoch 8/10 32/32 [==============================] - 0s 9ms/step - loss: 29.8839 - accuracy: 0.3420 Epoch 9/10 32/32 
[==============================] - 0s 9ms/step - loss: 27.7426 - accuracy: 0.3940 Epoch 10/10 32/32 [==============================] - 0s 10ms/step - loss: 25.6565 - accuracy: 0.4930 </code></pre> <p><strong>Tensorboard Graph Image:</strong> <a href="https://i.stack.imgur.com/bcN3T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bcN3T.png" alt="tensorboard" /></a></p>
python|tensorflow|keras|tensorboard
1
1,902,796
55,939,346
Data from input not being accepted as arguments
<p>When I enter data in to my function directly I get the right output but when I use input from the user to fill the list nothing happens. I don't get any errors or output what so ever.</p> <p>The data from input should enter the list and the index from input should be deleted from the list.</p> <pre><code>#!/usr/bin/env python3 #class definitions class record: def __init__(self,telephone,lastname,firstname): self.telephone = telephone self.lastname = lastname self.firstname = firstname def __str__(self): return f"Last name: {self.lastname}, First Name: {self.firstname}, Telephone: {self.telephone}" class PhoneBook: def __init__(self): self.phonebook = [] def addrecord(self, record): self.phonebook.append(record) return self.phonebook.index(record) def deleterecord(self, i): self.phonebook.pop(i-1) def printphonebook(self): x = 1 for entry in self.phonebook: print(x,'. ',entry,sep='') x = x + 1 #Main select = None while select != 'exit': ph = PhoneBook() ph.addrecord(record(515,'fin','matt')) ph.addrecord(record(657,'fisher','bill')) select = input('Main Menu \n1. show phonebook \n2. add record \n3. remove record\nor "exit" to exit program\n') test = False while test == False: if select == '1': ph.printphonebook() test = True elif select == '2': x = int(input('Enter telephone number.\n')) y = str(input('Enter last name.\n')) z = str(input('Enter first name.\n')) ph.addrecord(record(x,y,z)) test = True elif select == '3': i = int(input('Enter the record number youd like to delete.\n')) ph.deleterecord(i) test = True elif select == 'exit': break else: print('Invalid selection. Please try again.') test = True </code></pre> <p>The desired output would be getting the data to correctly enter and exit the list based on my x, y and z inputs and take out the specified index of the list based on the i input.</p>
<p>You create a brand-new <code>PhoneBook()</code> object (and re-add the two sample records) every time your outer while loop runs.<br> You don't see your new entries because they get wiped out on every iteration.</p> <p>Try constructing the <code>PhoneBook</code> once, before the loop, so it persists across menu selections.</p>
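<p>As a minimal sketch of that fix (using stripped-down versions of the classes from the question, with the interactive <code>input()</code> calls replaced by hard-coded stand-ins), the <code>PhoneBook</code> is built once before the menu loop, so records added or removed in one iteration survive into the next:</p>

```python
class Record:
    def __init__(self, telephone, lastname, firstname):
        self.telephone = telephone
        self.lastname = lastname
        self.firstname = firstname

class PhoneBook:
    def __init__(self):
        self.phonebook = []

    def addrecord(self, record):
        self.phonebook.append(record)
        return self.phonebook.index(record)

    def deleterecord(self, i):
        # menu indices are 1-based, list indices are 0-based
        self.phonebook.pop(i - 1)

# Build the phonebook ONCE, before the menu loop starts.
ph = PhoneBook()
ph.addrecord(Record(515, 'fin', 'matt'))
ph.addrecord(Record(657, 'fisher', 'bill'))

# Simulated menu choices (stand-ins for the input() prompts):
ph.addrecord(Record(123, 'doe', 'jane'))   # option 2: add a record
ph.deleterecord(1)                         # option 3: delete record 1
```

<p>With the construction moved out of the loop, changes made through options 2 and 3 persist; only the menu prompt itself is repeated on each pass.</p>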
python
0
1,902,797
56,542,268
Trouble passing variable as parameter between methods
<p>I've created a script in python using a <strong><em>class</em></strong> to log into a website with my credentials. When I run my script, I can see that it successfully logs in. What I can't do is find a suitable way to pass the <code>res.text</code> returned by the <code>login()</code> method to the <code>get_data()</code> method so that I can process it further. I don't wish to do it like <code>return self.get_data(res.text)</code>, as it looks very awkward.</p> <p>The bottom line is: when I run my script, it will automatically log in, as it does now. However, it should fetch data when I use the line <code>scraper.get_data()</code> within the main function.</p> <p>This is my attempt so far:</p> <pre><code>from lxml.html import fromstring import requests class CoffeeGuideBot(object): login_url = "some url" def __init__(self,session,username,password): self.session = session self.usrname = username self.password = password self.login(session,username,password) def login(self,session,username,password): session.headers['User-Agent'] = 'Mozilla/5.0' payload = { "Login1$UserName": username, "Login1$Password": password, "Login1$LoginButton": "Log on" } res = session.post(self.login_url,data=payload) return res.text def get_data(self,htmlcontent): root = fromstring(htmlcontent,"lxml") for iteminfo in root.cssselect("some selector"): print(iteminfo.text) if __name__ == '__main__': session = requests.Session() scraper = CoffeeGuideBot(session,"username","password") #scraper.get_data() #This is how i wish to call this </code></pre> <p><strong><em>What is the ideal way to pass a variable as a parameter between methods?</em></strong></p>
<p>If I understood your requirement correctly, you want to access <code>res.text</code> inside <code>get_data()</code> without passing it as a method argument.</p> <p>There are 2 options IMO.</p> <ol> <li>Store <code>res</code> as an instance variable of <code>CoffeeGuideBot</code> and access it in <code>get_data()</code></li> </ol> <pre><code>def login(self,session,username,password): &lt;some code&gt; self.res = session.post(self.login_url,data=payload) def get_data(self): root = fromstring(self.res.text,"lxml") &lt;other code&gt; </code></pre> <ol start="2"> <li>Almost the same as above, but actually use the return value from <code>login()</code> to store <code>res</code>. In your current code, the value of the <code>return</code> statement is never used.</li> </ol> <pre><code>def __init__(self,session,username,password): &lt;initializations&gt; self.res = self.login(session,username,password) def login(self,session,username,password): &lt;some code&gt; return session.post(self.login_url,data=payload) def get_data(self): root = fromstring(self.res.text,"lxml") &lt;other code&gt; </code></pre>
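<p>For illustration, here is option 1 reduced to a runnable sketch, with the network call replaced by a placeholder string. The class and method names mirror the question; the fake response text is an assumption standing in for <code>res.text</code>:</p>

```python
class CoffeeGuideBot:
    def __init__(self, username, password):
        # login() stores its result on the instance, so get_data()
        # can reach it later without taking a parameter
        self.res_text = self.login(username, password)

    def login(self, username, password):
        # a real session.post(...) would go here; this placeholder
        # string stands in for the response body (res.text)
        return f"<html>logged in as {username}</html>"

    def get_data(self):
        # parse self.res_text here instead of a passed-in argument
        return self.res_text

scraper = CoffeeGuideBot("username", "password")
page = scraper.get_data()
```

<p>The instance attribute set in <code>__init__</code> is what lets <code>scraper.get_data()</code> be called with no arguments from the main function.</p>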
python|python-3.x|class|web-scraping
3
1,902,798
69,704,468
PyQt5 Installation on Windows 10
<h2>What I Know</h2> <p>What I did was:</p> <pre><code>pip install pyqt5 pip install pyqt5-tools </code></pre> <p>The installation went successfully and there was no problem.</p> <h2>Designer</h2> <p>I expected that executing the commands in the above section, the PyQt5 Designer should be installed. But when I navigated to where PyQt5 was installed, I couldn’t find any <code>designer.exe</code> file.<br /> Am I supposed to run any command?</p> <p>I am using Python 3.9 on Windows.</p>
<h2>Found the Solution</h2> <p>It’s simple; you have to just run this command in your <code>cmd</code>/<code>powershell</code>/<code>terminal</code>:</p> <pre><code>pip install pyqtdesigner </code></pre>
python|python-3.x|windows|pip|pyqt5
0
1,902,799
17,718,827
matplotlib: draw major tick labels under minor labels
<p>This seems like it should be easy - but I can't see how to do it:</p> <p>I have a plot with time on the X-axis. I want to set two sets of ticks, minor ticks showing the hour of the day and major ticks showing the day/month. So I do this:</p> <pre><code># set date ticks to something sensible: xax = ax.get_xaxis() xax.set_major_locator(dates.DayLocator()) xax.set_major_formatter(dates.DateFormatter('%d/%b')) xax.set_minor_locator(dates.HourLocator(byhour=range(0,24,3))) xax.set_minor_formatter(dates.DateFormatter('%H')) </code></pre> <p>This labels the ticks ok, but the major tick labels (day/month) are drawn on top of the minor tick labels:</p> <p><img src="https://i.stack.imgur.com/rfbBW.png" alt="Sig. wave height ensemble time series"></p> <p>How do I force the major tick labels to get plotted below the minor ones? I tried putting newline escape characters (\n) in the DateFormatter, but it is a poor solution as the vertical spacing is not quite right.</p> <p>Any advice would be appreciated!</p>
<p>You can use the <code>axis</code> method <code>set_tick_params()</code> with the keyword <code>pad</code>. Compare the following example.</p> <pre><code>import datetime import random import matplotlib.pyplot as plt import matplotlib.dates as dates # make up some data x = [datetime.datetime.now() + datetime.timedelta(hours=i) for i in range(100)] y = [i+random.gauss(0,1) for i,_ in enumerate(x)] # plot plt.plot(x,y) # beautify the x-labels plt.gcf().autofmt_xdate() ax = plt.gca() # set date ticks to something sensible: xax = ax.get_xaxis() xax.set_major_locator(dates.DayLocator()) xax.set_major_formatter(dates.DateFormatter('%d/%b')) xax.set_minor_locator(dates.HourLocator(byhour=range(0,24,3))) xax.set_minor_formatter(dates.DateFormatter('%H')) xax.set_tick_params(which='major', pad=15) plt.show() </code></pre> <p><strong>PS</strong>: This example is borrowed from <a href="https://stackoverflow.com/questions/1574088/plotting-time-in-python-with-matplotlib/16428019#16428019">moooeeeep</a></p> <hr /> <p>Here's how the above snippet would render:</p> <p><a href="https://i.stack.imgur.com/rH8DA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rH8DA.png" alt="enter image description here" /></a></p>
python|matplotlib|plot|axis-labels
20