Columns: Unnamed: 0 (int64, 0 to 1.91M); id (int64, 337 to 73.8M); title (string, 10 to 150 chars); question (string, 21 to 64.2k chars); answer (string, 19 to 59.4k chars); tags (string, 5 to 112 chars); score (int64, -10 to 17.3k)
1,909,600
3,230,067
Numpy minimum in (row, column) format
<p>How can I know the (row, column) index of the minimum of a numpy array/matrix?</p> <p>For example, if <code>A = array([[1, 2], [3, 0]])</code>, I want to get <code>(1, 1)</code></p> <p>Thanks!</p>
<p>Use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.unravel_index.html" rel="noreferrer"><code>unravel_index</code></a>:</p> <pre><code>numpy.unravel_index(A.argmin(), A.shape) </code></pre>
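<p>For illustration, a minimal runnable sketch of the accepted approach, using the array from the question:</p> <pre><code>import numpy as np

A = np.array([[1, 2], [3, 0]])
# argmin() gives the flat index of the minimum (here 3, the value 0);
# unravel_index converts that flat index back to (row, column) form
print(np.unravel_index(A.argmin(), A.shape))  # (1, 1)
</code></pre>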
python|arrays|numpy|minimum
115
1,909,601
50,614,977
How to send to multiple recipients with the SendGrid v3 API in Python
<p>Anyone please help, I am using sendgrid v3 api. But I cannot find any way to send an email to multiple recipients. Thank in advance.</p> <pre><code> import sendgrid from sendgrid.helpers.mail import * sg = sendgrid.SendGridAPIClient(apikey="SG.xxxxxxxx") from_email = Email("FROM EMAIL ADDRESS") to_email = Email("TO EMAIL ADDRESS") subject = "Sending with SendGrid is Fun" content = Content("text/plain", "and easy to do anywhere, even with Python") mail = Mail(from_email, subject, to_email, content) response = sg.client.mail.send.post(request_body=mail.get()) print(response.status_code) print(response.body) print(response.headers) </code></pre> <p>I want to send email to multiple recipient. Like to_mail = " xxx@gmail.com, yyy@gmail.com".</p>
<p>Note that with the code of the other answers here, the recipients of the email will see each other's email addresses in the TO field. To avoid this, one has to use a separate <code>Personalization</code> object for every email address:</p> <pre><code>def SendEmail(): sg = sendgrid.SendGridAPIClient(api_key="YOUR KEY") from_email = Email ("FROM EMAIL ADDRESS") person1 = Personalization() person1.add_to(Email ("EMAIL ADDRESS 1")) person2 = Personalization() person2.add_to(Email ("EMAIL ADDRESS 2")) subject = "EMAIL SUBJECT" content = Content ("text/plain", "EMAIL BODY") mail = Mail (from_email, subject, None, content) mail.add_personalization(person1) mail.add_personalization(person2) response = sg.client.mail.send.post (request_body=mail.get()) return response.status_code == 202 </code></pre>
python-3.x|sendgrid
15
1,909,602
50,533,089
Django Celery periodic task example
<p>I need a minimum example to do periodic task (run some function after every 5 minutes, or run something at 12:00:00 etc.).</p> <p>In my <code>myapp/tasks.py</code>, I have,</p> <pre><code>from celery.task.schedules import crontab from celery.decorators import periodic_task from celery import task @periodic_task(run_every=(crontab(hour="*", minute=1)), name="run_every_1_minutes", ignore_result=True) def return_5(): return 5 @task def test(): return "test" </code></pre> <p>When I run celery workers it does show the tasks (given below) but <strong>does not return any values</strong> (in either terminal or flower).</p> <pre><code>[tasks] . mathematica.core.tasks.test . run_every_1_minutes </code></pre> <p><strong>Please provide a minimum example or hints to achieve the desired results.</strong></p> <p><strong>Background:</strong></p> <p>I have a <code>config/celery.py</code> which contains the following:</p> <pre><code>import os from celery import Celery os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local") app = Celery('config') app.config_from_object('django.conf:settings', namespace='CELERY') app.autodiscover_tasks() </code></pre> <p>And in my <code>config/__init__.py</code>, I have</p> <pre><code>from .celery import app as celery_app __all__ = ['celery_app'] </code></pre> <p>I added a function something like below in <code>myapp/tasks.py</code></p> <pre><code>from celery import task @task def test(): return "test" </code></pre> <p>When I run <code>test.delay()</code> from shell, it runs successfully and also shows the task information in flower</p>
<p>To run periodic tasks you also need to run celery <a href="http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html#starting-the-scheduler" rel="nofollow noreferrer">beat</a>. You can run it with this command:</p> <pre><code>celery -A proj beat </code></pre> <p>Or, if you are using a single worker:</p> <pre><code>celery -A proj worker -B </code></pre>
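<p>Also note that <code>crontab(hour="*", minute=1)</code> fires once per hour (at minute 1), not every minute. If the goal is "run every 5 minutes", a plain interval is simpler. A sketch using the celery 4.x beat schedule; the module paths are assumed from the question:</p> <pre><code># config/celery.py (celery &gt;= 4.0; task path 'myapp.tasks.test' is an assumption)
app.conf.beat_schedule = {
    'run-test-every-5-minutes': {
        'task': 'myapp.tasks.test',
        'schedule': 300.0,  # seconds; or crontab(minute='*/5')
    },
}
</code></pre>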
python|django|celery
4
1,909,603
50,328,304
ScrollBar with Text widget not showing in grid
<p>I am creating a grid view with different rows and columns, the main problem is that my scrollbar with text widget does not show up the entire GUI. </p> <p>My code is as:</p> <pre><code> # -----Zero Row---- lbl = Label(window, font=('Calibri',32), text='Title',bg = '#f0f0f0',bd =10,anchor='w').grid(row=0,columnspan=4) #---- First Row --- Label(window, text='Account Number').grid(row =1 , column = 0, sticky='nsew' ) Label(window, text='Balance').grid(row =1 , column = 1, sticky='nsew' ) btnLogOut = Button(window, text='Log Out', command = save_and_logout).grid(row= 1, column= 2, columnspan = 2, sticky='nsew') #----Second Row---- Label(window, text='ON').grid(row =2 , column = 0, sticky='nsew') Entry(window, textvariable = account_number_input).grid(row =2 , column = 1, sticky='nsew') Button(window, text='OF', command = a).grid(row= 2, column= 2, sticky='nsew') Button(window, text='SWITCH', command = a).grid(row= 2, column= 3, sticky='nsew') #---Third Row---- text_scrollbar = Scrollbar ( window ) text_scrollbar.pack( side = RIGHT, fill = Y ) transaction_text_widget = Text(window, wrap = NONE, yscrollcommand = text_scrollbar.set) # state = NORMAL for editing transaction_text_widget.config(state=NORMAL) transaction_text_widget.insert("1.0", "text") transaction_text_widget.insert("1232.30", "text") transaction_text_widget.insert("132223.0", "text") # state = DISABLED so that it cannot be edited once written transaction_text_widget.config(state=DISABLED) transaction_text_widget.pack(side="left") #Configure the scrollbars text_scrollbar.config(command=transaction_text_widget.yview) transaction_text_widget.grid(row = 3 , column = 1,sticky='nsw') #-----------setting the column and row weights----------- window.columnconfigure(0, weight=1) window.columnconfigure(1, weight=1) window.columnconfigure(2, weight=1) window.columnconfigure(3, weight=1) window.rowconfigure(0, weight =4) window.rowconfigure(1, weight =2) window.rowconfigure(2, weight =3) window.rowconfigure(3, weight =3) </code></pre> <p>What might be the reason for this? If I remove the Third Row section the GUI is seen for other rows. </p>
<p>I fixed it by just adding </p> <pre><code> text_scrollbar.grid(row = 3 , columnspan = 4,sticky='nsew') </code></pre> <p>Thank you for the answer though. </p>
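<p>The underlying conflict is that the text widget and scrollbar were managed with <code>pack()</code> while every other widget in the same window used <code>grid()</code>; Tkinter cannot mix the two geometry managers inside one container (it typically errors out or hangs). A minimal sketch that keeps both widgets on <code>grid()</code> - row and column numbers are assumed from the question:</p> <pre><code>import tkinter as tk

window = tk.Tk()
text = tk.Text(window, wrap='none')
scrollbar = tk.Scrollbar(window, command=text.yview)
text.configure(yscrollcommand=scrollbar.set)

# both widgets managed by grid(), side by side in the same row
text.grid(row=3, column=0, columnspan=3, sticky='nsew')
scrollbar.grid(row=3, column=3, sticky='ns')

window.mainloop()
</code></pre>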
python|python-3.x|text|tkinter|scrollbar
0
1,909,604
50,531,919
Why do I get ZeroDivisionError?
<p>I am trying to write python program that reads a .csv file. I have in the input file 4 columns/fields and I want to only have 2 in the output with one being a new one.</p> <p>This is the input I am using:</p> <pre><code>MID,REP,NAME,NEPTUN 0,,"people's front of judea",GM6MRT 17,,Steve Jobs,NC3J0K ,0,Brian,RQQCFE 19,9,Pontius Pilate,BQ6IAJ 1,,N. Jesus,QDMXVF 18,,Bill Gates,D1CXLO 0,,"knights who say NI",CZN5JA ,1,"Robin, the brave",BWQ5AU 17,19,"Gelehed, the pure",BY9B8G </code></pre> <p>then the output should be something like this(not full output):</p> <pre><code>NEPTUN,GROWTH BQ6IAJ,-0.5263157894736842 BWQ5AU,inf BY9B8G,0.11764705882352941 RQQCFE,0 </code></pre> <p>The new field called <code>GROWTH</code> is calculated by (REP-MID)/MID.</p> <p>So, I am using two lists to do that:</p> <pre><code>import csv L = [] s =[] with open('input.csv', 'rb') as R: reader = csv.DictReader(R) for x in reader: if x['MID'] != '' or '0' and x['REP'] == '': Growth = -float(x['MID'])/float(x['MID']) L.append(x) s.append(Growth) elif x['MID'] != '' or '0': Growth = (float(x['REP']))-float(x['MID'])/float(x['MID']) L.append(x) s.append(Growth) elif x['MID'] and x['REP'] == '' or '0' : Growth = 0 L.append(x) s.append(Growth) else: Growth = float("inf") L.append(x) s.append(Growth) for i in range(len(s)): L[i]['GROWTH'] = i R.close() with open('output.csv', 'wb') as output: fields = ['NEPTUN', 'GROWTH'] writer = csv.DictWriter(output, fieldnames=fields, extrasaction='ignore') writer.writeheader() writer.writerows(L) output.close() </code></pre> <p>Now, I am not even sure if the code is correct or does what I aim it for, because I am stuck at a <code>ZeroDivisionError: float division by zero</code> at the first <code>if</code>condition and I tried many ways to avoid it but I get the same error.</p> <p>I thought the problem is that when there are no values for <code>MID</code> field, I think the dictionary gives it <code>``</code> value and that can't be transformed to <code>0</code> by <code>float()</code>. But it seems that is not the problem, but honestly I have no idea now, so that is why I am asking here.</p> <p>the full error:</p> <pre><code>Growth = -float(x['MID'])/float(x['MID']) ZeroDivisionError: float division by zero </code></pre> <p>Any hints about this are greatly valued.</p>
<pre><code>if x['MID'] != '' or '0' and x['REP'] == '' </code></pre> <p>This does not mean what you think it means. It is interpreted as</p> <pre><code>if ((x['MID'] != '') or ('0')) and (x['REP'] == '') </code></pre> <p>which shortens to</p> <pre><code>if True and x['REP'] == '' </code></pre> <p>which in turn becomes</p> <pre><code>if x['REP'] == '' </code></pre> <hr> <p>What you mean is</p> <pre><code>if x['MID'] not in ('', '0') and (x['REP'] == ''): </code></pre> <p>You need to do the same for your other <code>if</code> statements.</p>
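<p>A quick demonstration of why the original test always passes, and the fixed one does not:</p> <pre><code>x = {'MID': '', 'REP': ''}
print(x['MID'] != '' or '0')        # evaluates to '0', a truthy string
print(bool(x['MID'] != '' or '0'))  # True, regardless of x['MID']
print(x['MID'] not in ('', '0'))    # False - the intended check
</code></pre>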
python|python-2.7|csv|dictionary
1
1,909,605
45,028,121
Single inheritance causes TypeError: metaclass conflict: the metaclass of a derived class must be
<p>I'm trying to make subclasses of a Command class but I keep getting the error: </p> <blockquote> <p>metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases</p> </blockquote> <pre><code>class Command(object): def __init__(self, sock, str): self.sock = sock self.str = str def execute(self): pass from src import Command class BroadcastCommand(object, Command): def __init__(self, sock, str): super(Command, self).__init__() def execute(self): self.broadcast() def broadcast(self): print(str) </code></pre> <p>My Command.py file and BroadcastCommand.py file are currently in the same package directory.</p>
<p>If <code>Command</code> inherits from <code>object</code>, there is no point in making <code>BroadcastCommand</code> inherit from both <code>object</code> and <code>Command</code> - it's enough for it to inherit from <code>Command</code> - and that base order is indeed what raises the <code>TypeError</code> you get. Solution: make <code>BroadcastCommand</code> inherit from <code>Command</code> only.</p> <p>As a side note, your <code>super</code> call should be</p> <pre><code>super(BroadcastCommand, self).__init__(sock, str) </code></pre> <p>and naming your param <code>str</code> shadows the builtin, which is possibly not a good idea.</p>
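<p>A sketch of the corrected subclass, with the parameter renamed to avoid shadowing the builtin as suggested:</p> <pre><code>class BroadcastCommand(Command):  # inherit from Command only
    def __init__(self, sock, text):
        super(BroadcastCommand, self).__init__(sock, text)

    def execute(self):
        self.broadcast()

    def broadcast(self):
        print(self.str)  # attribute name kept from the original Command
</code></pre>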
python|python-3.x|oop|inheritance
0
1,909,606
61,562,560
Django periodic task celery
<p>I'm a newbie in Django and Celery. Help me please, I can't understand, how it work. I want to see in my console "Hello world" every 1 min.</p> <p>tasks.py</p> <pre><code>from celery import Celery from celery.schedules import crontab from celery.task import periodic_task app = Celery('tasks', broker='pyamqp://guest@localhost//') @periodic_task(run_every=(crontab(hour="*", minute=1)), ignore_result=True) def hello_world(): return "Hello World" </code></pre> <p>celery.py</p> <pre><code>from __future__ import absolute_import import os from celery import Celery os.environ.setdefault("DJANGO_SETTINGS_MODULE", "test.settings.local") app = Celery('test') app.config_from_object('celeryconfig') app.autodiscover_tasks() @app.task(bind=True) def debug_task(self): print('Request: {0!r}'.format(self.request)) </code></pre> <p>init.py</p> <pre><code>from __future__ import absolute_import # This will make sure the app is always imported when # Django starts so that shared_task will use this app. from .celery import app as celery_app </code></pre> <p>celeryconfig.py</p> <pre><code>broker_url = 'redis://localhost:6379' result_backend = 'rpc://' task_serializer = 'json' result_serializer = 'json' accept_content = ['json'] timezone = 'Europe/Oslo' enable_utc = True </code></pre> <p>It's a simple celery settings and code, but doesn't worked =\</p> <pre><code>celery -A tasks worker -B </code></pre> <p>And nothing happens. Tell me what am I doing wrong? Thank you!</p>
<p>You need to set up <code>beat_schedule</code> in your celeryconfig.py:</p> <pre><code>from celery.schedules import crontab beat_schedule = { 'send_each_minute': { 'task': 'your.module.path.function', 'schedule': crontab(), 'args': (), }, } </code></pre>
python|django|celery
1
1,909,607
60,550,478
python blackjack aces break my code calculations
<p>I am coding a blackjack game. I append all cards including royals and aces and 2 through 10. What I'm having trouble with is the aces. My problem is being able to calculate all cards before calculating A, to clarify this, I mean I want to calculate ALL cards before ANY aces in a list so I can see if the total is greater then 11, if so then just add 1 for every ace if not, add 11. My code thus far:</p> <pre><code>import random def dealhand(): #This function appends 2 cards to the deck and converts royals and aces to letters. player_hand = [] for card in range(0,2): card = random.randint(1,13) if card == 1: card = "A" if card == 11: card = "J" if card == 12: card = "Q" if card == 13: card = "K" player_hand.append(card) return player_hand #this function sums the given cards to the user def sumHand(list): total = 0 for card in list: card = str(card) if card == "J" or card == "Q" or card== "K": total+=10 continue elif card == "2" or card == "3" or card == "4" or card == "5" or card == "6" or card == "7" or card == "8" or card == "9" or card == "10": total += int(card) elif card == "A": if total&lt;11: total+=11 else: total+=1 return total </code></pre>
<p>I would suggest <code>total += 1</code> and <code>aces += 1</code>, then at the end add 10 for each ace if it fits.</p> <p>A few pointers on asking a question: don't post the dealhand function, as it is irrelevant here. Post the input, output and expected output:</p> <pre><code>def sumHand(hand): ... hand = ['A', 'K', 'Q'] expected 21 actual 31 </code></pre> <p>Here is my suggested fix (a minimal change for this particular issue):</p> <pre><code>def sumHand(hand): total = 0 aces = 0 for card in hand: card = str(card) if card == "J" or card == "Q" or card== "K": total+=10 continue elif card == "2" or card == "3" or card == "4" or card == "5" or card == "6" or card == "7" or card == "8" or card == "9" or card == "10": total += int(card) elif card == "A": total += 1 aces += 1 for _ in range(aces): if total &lt;= 11: total += 10 return total </code></pre> <p>I changed "list" to "hand" because the former hides a builtin class's name, but otherwise didn't mess with it. I would suggest adding a (unit-tested) function to get a card's value - maybe a dict which serves as a name-value map (sketched below). You could simplify some of the conditions with the "in" operator. It's also odd to handle ints by converting them to strings and then back to int. But none of that directly relates to the issue of counting aces.</p>
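<p>A sketch of the name-value dict mentioned above; the helper names are illustrative, not from the original code:</p> <pre><code>CARD_VALUES = {'J': 10, 'Q': 10, 'K': 10, 'A': 1}  # count aces low first

def sum_hand(hand):
    total = 0
    aces = 0
    for card in map(str, hand):
        if card == 'A':
            aces += 1
        total += CARD_VALUES[card] if card in CARD_VALUES else int(card)
    # promote aces from 1 to 11 while the hand does not bust
    for _ in range(aces):
        if total &lt;= 11:
            total += 10
    return total

print(sum_hand(['A', 'K', 'Q']))  # 21
</code></pre>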
python|blackjack
2
1,909,608
57,882,479
subset df by masking between specific rows
<p>I'm trying to subset a pandas <code>df</code> by removing rows that fall between specific values. The problem is these values can be at different rows so I can't select fixed rows. </p> <p>Specifically, I want to remove rows that fall between <code>ABC xxx</code> and the integer <code>5</code>. These values could fall anywhere in the <code>df</code> and be of unequal length. </p> <p>Note: The string <code>ABC</code> will be followed by different values.</p> <p>I thought about returning all the indexes that contain these two values. </p> <p>But mask could work better if I could return all rows <em>between</em> these two values?</p> <pre><code>df = pd.DataFrame({ 'Val' : ['None','ABC','None',1,2,3,4,5,'X',1,2,'ABC',1,4,5,'Y',1,2], }) mask = (df['Val'].str.contains(r'ABC(?!$)')) &amp; (df['Val'] == 5) </code></pre> <p>Intended Output:</p> <pre><code> Val 0 None 8 X 9 1 10 2 15 Y 16 1 17 2 </code></pre>
<pre><code>a = df.index[df['Val'].str.contains('ABC')==True][0] b = df.index[df['Val']==5][0]+1 c = np.array(range (a,b)) bad_df = df.index.isin(c) df[~bad_df] </code></pre> <p><strong>Output</strong></p> <pre><code> Val 0 None 8 X 9 1 10 2 </code></pre> <p>If there is more than one 'ABC' and 5, then use the version below. With this you get the df excluding everything from the first <code>ABC</code> to the last <code>5</code>:</p> <pre><code>a = (df['Val'].str.contains('ABC')==True).idxmax() b = df['Val'].where(df['Val']==5).last_valid_index()+1 c = np.array(range (a,b)) bad_df = df.index.isin(c) df[~bad_df] </code></pre>
pandas|dataframe|subset|mask
1
1,909,609
56,333,127
unable to parse h1 element following execute_script()
<p>I'm trying to <strong>scrape</strong> the h1 element after the click of a JS link. As I'm new to python, selenium, and beautifulsoup, I'm not sure if what followed the JS execution changes the way parsing works, or if I'm just grabbing the new url improperly. Everything I've tried has returned something different, from Incompleteread, Nonetype object is not callable, [-1, None, -1, None], to a simple None. I'm just not sure where to go after the containers variable, which I left the way it is just to pull the html.</p> <p>All I'm wanting to pull from this is the name</p> <pre>&lt;div class=&quot;name&quot;&gt; &lt;h1 itemprop=&quot;name&quot;&gt; Nicolette Shea &lt;/h1&gt; </pre> <hr /> <pre><code>star_button = driver.find_element_by_css_selector(&quot;a[href*='/pornstar/']&quot;) click = driver.execute_script('arguments[0].click();', star_button) wait = WebDriverWait(driver, 5) try: wait.until(EC.url_contains('-')) except TimeOutException: print(&quot;Unable to load&quot;) new_url = driver.current_url page = pUrl(new_url) p_read = page.read() page.close() p_parse = soup(p_read, 'html.parser') containers = p_parse.find('div', {'class' : 'name'}) print(containers) </code></pre>
<p>Why not after your wait simply load driver.page_source into BeautifulSoup?</p> <pre><code>#try: #except: ....your code soup = BeautifulSoup(driver.page_source, 'lxml') names = [item.text for item in soup.select('div.name')] </code></pre>
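<p>Since the goal is the name inside the <code>&lt;h1&gt;</code> specifically, a slightly narrower selector could be used. A sketch, with the selector inferred from the HTML fragment in the question and <code>driver</code> being the Selenium webdriver already set up there:</p> <pre><code>from bs4 import BeautifulSoup

soup = BeautifulSoup(driver.page_source, 'lxml')
h1 = soup.select_one('div.name h1[itemprop="name"]')
if h1 is not None:
    print(h1.get_text(strip=True))  # e.g. 'Nicolette Shea'
</code></pre>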
python|python-3.x|selenium|web-scraping|beautifulsoup
0
1,909,610
56,332,336
Iterate through the list
<p>The transaction csv looks like this and I add them to list as shown below.</p> <pre><code>Bread Milk Bread Diapers Beer Eggs Beer [{'Bread': 1, 'Milk': 1, '': 7}, {'Bread': 1, 'Diapers': 1, 'Beer': 6, 'Eggs': 1}, {'Milk': 1, 'Diapers': 1, 'Beer': 6, 'Cola': 1}, {'Bread': 1, 'Milk': 1, 'Diapers': 1, 'Beer': 6}, {'Bread': 1, 'Milk': 1, 'Diapers': 2, 'Cola': 1, 'Chips': 2, 'Beer': 1, '': 1}, {'Bread': 1, 'Milk': 1, '': 7}, {'Bread': 1, 'Cola': 1, 'Beer': 3, 'Milk': 1, 'Chips': 1, 'Diapers': 3, '': 1}, {'Milk': 1, 'Bread': 1, 'Beer': 4, 'Cola': 1, 'Diapers': 1, 'Chips': 1}, {'Bread': 1, 'Milk': 2, 'Diapers': 2, 'Beer': 2, 'Chips': 2}, {'Bread': 2, 'Beer': 3, 'Diapers': 3, 'Milk': 1}] </code></pre> <p>I would like to consider only the list which contains the count 3 Diapers.</p> <p>I would expect the transactions to return only as shown below:</p> <pre><code>{'Bread': 2, 'Beer': 3, 'Diapers': 3, 'Milk': 1} {'Bread': 1, 'Cola': 1, 'Beer': 3, 'Milk': 1, 'Chips': 1, 'Diapers': 3, '': 1} {'Bread', 'Beer', 'Diapers', 'Milk'} {'Bread', 'Cola', 'Beer', 'Milk', 'Chips', 'Diapers', ''} </code></pre> <p>The code i have is:</p> <pre><code>def M(): li = [] # Open the csv file with open('transaction.csv') as fp: DataCaptured = csv.reader(fp, delimiter=',') # Iterate through each word in csv and add it's counter to the row for row in DataCaptured: li.append(dict(Counter(row))) if li['Diaper']==3: ---&gt; I am missing this logic not sure how to get it. # Return the list of counters return li print(M()) </code></pre>
<pre><code>li=[{'Bread': 1, 'Milk': 1, '': 7}, {'Bread': 1, 'Diapers': 1, 'Beer': 6, 'Eggs': 1}, {'Milk': 1, 'Diapers': 1, 'Beer': 6, 'Cola': 1}, {'Bread': 1, 'Milk': 1, 'Diapers': 1, 'Beer': 6}, {'Bread': 1, 'Milk': 1, 'Diapers': 2, 'Cola': 1, 'Chips': 2, 'Beer': 1, '': 1}, {'Bread': 1, 'Milk': 1, '': 7}, {'Bread': 1, 'Cola': 1, 'Beer': 3, 'Milk': 1, 'Chips': 1, 'Diapers': 3, '': 1}, {'Milk': 1, 'Bread': 1, 'Beer': 4, 'Cola': 1, 'Diapers': 1, 'Chips': 1}, {'Bread': 1, 'Milk': 2, 'Diapers': 2, 'Beer': 2, 'Chips': 2}, {'Bread': 2, 'Beer': 3, 'Diapers': 3, 'Milk': 1}] for d in li: if 'Diapers' in d and d['Diapers']==3: print(d) </code></pre> <p>OUTPUT:</p> <pre><code>{'Bread': 1, 'Cola': 1, 'Beer': 3, 'Milk': 1, 'Chips': 1, 'Diapers': 3, '': 1} {'Bread': 2, 'Beer': 3, 'Diapers': 3, 'Milk': 1} </code></pre>
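<p>Equivalent and slightly shorter, <code>dict.get</code> avoids the explicit membership test:</p> <pre><code>result = [d for d in li if d.get('Diapers') == 3]
for d in result:
    print(d)
</code></pre>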
python|list
0
1,909,611
18,501,039
how to add subject for sendmail
<p>I am using the following to send an email using smtp..however subject is missing in the email,how to add subject?</p> <pre><code>from email.mime.text import MIMEText from smtplib import SMTP def email (body,subject): msg = MIMEText("%s" % body) msg['Content-Type'] = "text/html; charset=UTF8" s = SMTP('localhost',25) s.sendmail('userid@company.com', ['userid@company.com'],msg=msg.as_string()) def main (): # open gerrit.txt and read the content into body with open('gerrit.txt', 'r') as f: body = f.read() subject = "test email" email(body) print "Done" if __name__ == '__main__': main() </code></pre>
<p>As always, the subject is just another header.</p> <pre><code>msg['Subject'] = subject </code></pre>
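<p>In the context of the question's code that becomes the sketch below; note that <code>main()</code> must also pass the subject along, since the original called <code>email(body)</code> and dropped it:</p> <pre><code>def email(body, subject):
    msg = MIMEText(body)
    msg['Content-Type'] = "text/html; charset=UTF8"
    msg['Subject'] = subject  # the missing header
    s = SMTP('localhost', 25)
    s.sendmail('userid@company.com', ['userid@company.com'], msg.as_string())

# in main():
subject = "test email"
email(body, subject)
</code></pre>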
python|smtp
0
1,909,612
18,665,032
How to avoid multiple queries in one execute call
<p>I've just realized that <code>psycopg2</code> allows multiple queries in one <code>execute</code> call.</p> <p>For instance, this code will actually insert two rows in <code>my_table</code>:</p> <pre><code>&gt;&gt;&gt; import psycopg2 &gt;&gt;&gt; connection = psycopg2.connection(database='testing') &gt;&gt;&gt; cursor = connection.cursor() &gt;&gt;&gt; sql = ('INSERT INTO my_table VALUES (1, 2);' ... 'INSERT INTO my_table VALUES (3, 4)') &gt;&gt;&gt; cursor.execute(sql) &gt;&gt;&gt; connection.commit() </code></pre> <p>Does <code>psycopg2</code> have some way of disabling this functionality? Or is there some other way to prevent this from happening?</p> <p>What I've come so far is to search if the query has any semicolon (<code>;</code>) on it:</p> <pre><code>if ';' in sql: # Multiple queries not allowed! </code></pre> <p>But this solution is not perfect, because it wouldn't allow some valid queries like:</p> <pre><code>SELECT * FROM my_table WHERE name LIKE '%;' </code></pre> <hr> <p><strong>EDIT:</strong> SQL injection attacks are not an issue here. I do want to give to the user full access of the database (he can even delete the whole database if he wants).</p>
<p>If you want a general solution to this kind of problem, the answer is always going to be "parse format X, or at least parse it well enough to handle your needs".</p> <p>In this case, it's probably pretty simple. PostgreSQL doesn't allow semicolons in the middle of column or table names, etc.; the only places they can appear are inside strings, or as statement terminators. So, you don't need a full parser, just one that can handle strings.</p> <p>Unfortunately, even that isn't completely trivial, because you have to know the rules for what counts as a string literal in PostgreSQL. For example, is <code>"abc\"def"</code> a string <code>abc"def</code>?</p> <p>But once you write or find a parser that can identify strings in PostgreSQL, it's easy: skip all the strings, then see if there are any semicolons left over.</p> <p>For example (this is probably <em>not</em> the correct logic,* and it's also written in a verbose and inefficient way, just to show you the idea):</p> <pre><code>def skip_quotes(sql): in_1, in_2 = False, False for c in sql: if in_1: if c == "'": in_1 = False elif in_2: if c == '"': in_2 = False else: if c == "'": in_1 = True elif c == '"': in_2 = True else: yield c </code></pre> <p>Then you can just write:</p> <pre><code>if ';' in skip_quotes(sql): # Multiple queries not allowed! </code></pre> <hr> <p>If you can't find a pre-made parser, the first things to consider are:</p> <ul> <li>If it's so trivial that simple string operations like <code>find</code> will work, do that.</li> <li>If it's a simple, regular language, use <code>re</code>.</li> <li>If the logic can be explained descriptively (e.g., via a BNF grammar), use a parsing library or parser-generator library like <a href="http://pyparsing.wikispaces.com" rel="nofollow">pyparsing</a> or <a href="http://freenet.mcnabhosting.com/python/pybison/" rel="nofollow">pybison</a>.</li> <li>Otherwise, you will probably need to write a state machine, or even explicit iterative code (like my example above). But this is very rarely the best answer for anything but teaching purposes.</li> </ul> <hr> <p>* This is correct for a dialect that accepts either single- or double-quoted strings, does not escape one quote type within the other, and escapes quotes by doubling them (we will incorrectly treat <code>'abc''def'</code> as two strings <code>abc</code> and <code>def</code>, rather than one string <code>abc'def</code>, but since all we're doing is skipping the strings anyway, we get the right result), but does not have C-style backslash escapes or anything else. I believe this matches sqlite3 as it actually works, although not sqlite3 as it's documented, and I have no idea whether it matches PostgreSQL.</p>
python|cursor|psycopg2
1
1,909,613
69,298,865
How to set my QTreeWidget expanded by default?
<p>I have this QTreeWidget that I would like to be expanded by default. I have read this same question many times but the solutions aren't working for me. I tried the commands for the root of my tree: <code>.ExpandAll()</code> and <code>.itemsExpandable()</code> and for the children <code>.setExpanded(True)</code> with no success.</p> <p>Here is the code of my test application:</p> <pre><code>import sys from PyQt5.QtCore import Qt from PyQt5.QtWidgets import ( QApplication, QMainWindow, QWidget, QTreeWidget, QTreeWidgetItem, QVBoxLayout ) # ---------------------------------------------------------------- unsorted_data = [ ['att_0', 'a', 2020], ['att_0', 'a', 2015], ['att_2', 'b', 5300], ['att_0', 'a', 2100], ['att_1', 'b', 5013], ['att_1', 'c', 6500], ] # Sort data list_att = [] for elem in range(len(unsorted_data)) : att_ = unsorted_data[elem][0] if att_ not in list_att: list_att.append(att_) list_att.sort() n_att = len(list_att) data = ['']*n_att tree = ['']*n_att list_a_number = [] list_b_number = [] list_c_number = [] class MainWindow(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle(&quot;My App&quot;) widget = QWidget() layout = QVBoxLayout() widget.setLayout(layout) # QTreeWidget main_tree = QTreeWidget() main_tree.setHeaderLabel('Test') # main_tree.itemsExpandable() # NOT WORKING # main_tree.expandAll() # NOT WORKING sublevel_1 = [] for i, att in enumerate(list_att) : list_a_number.clear() list_b_number.clear() list_c_number.clear() # Create a dictionary for elem in range(len(unsorted_data)) : if unsorted_data[elem][0] == att : if unsorted_data[elem][1]== 'a' : list_a_number.append(str(unsorted_data[elem][2])) if unsorted_data[elem][1] == 'b' : list_b_number.append(str(unsorted_data[elem][2])) if unsorted_data[elem][1] == 'c' : list_c_number.append(str(unsorted_data[elem][2])) data[i] = {'a' : list_a_number, 'b' : list_b_number, 'c' : list_c_number} # Fill the Tree items = [] att_id = list_att[i].split('\\')[-1] tree[i] = QTreeWidgetItem([att_id]) tree[i].setExpanded(True) # NOT WORKING sublevel_1.append(tree[i]) for key, values in data[i].items(): item = QTreeWidgetItem([key]) item.setCheckState(0, Qt.Checked) tree[i].addChild(item) for value in values : child = QTreeWidgetItem([value]) child.setExpanded(True) # NOT WORKING child.setCheckState(0, Qt.Checked) item.addChild(child) items.append(item) main_tree.insertTopLevelItems(0, sublevel_1) layout.addWidget(main_tree) self.setCentralWidget(widget) # ------------------------------------------------------------------ app = QApplication(sys.argv) window = MainWindow() window.show() sys.exit(app.exec()) </code></pre>
<p>You have to use expandAll <strong>after</strong> placing all the items in the QTreeWidget:</p> <pre class="lang-py prettyprint-override"><code>main_tree.insertTopLevelItems(0, sublevel_1) main_tree.expandAll() layout.addWidget(main_tree) </code></pre> <p><a href="https://i.stack.imgur.com/aUuvK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aUuvK.png" alt="enter image description here" /></a></p> <p>Note: one of the errors in your case is that you invoke <code>setExpanded</code> before the item is added to the QTreeWidget, so those <code>setExpanded</code> calls do nothing and can be removed.</p>
python|pyqt5|qtreewidget
0
1,909,614
57,368,957
beautifulsoup select method returns traceback
<p>I'm still learning the BeautifulSoup module and I'm replicating the "get Amazon price" script from the book Automate the Boring Stuff with Python, but I get a traceback on the .select() method: 'TypeError: 'NoneType' object is not callable'. I'm getting devastated by this error, as I couldn't find much about it.</p> <pre class="lang-py prettyprint-override"><code>import bs4 import requests header = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36"} def site(url): x = requests.get(url, headers=header) x.raise_for_status() soup = bs4.BeautifulSoup(x.text, "html.parser") p = soup.Select('#buyNewSection &gt; a &gt; h5 &gt; div &gt; div.a-column.a-span8.a-text-right.a-span-last &gt; div &gt; span.a-size-medium.a-color-price.offer-price.a-text-normal') abc = p[0].text.strip() return abc price = site('https://www.amazon.com/Automate-Boring-Stuff-Python-Programming/dp/1593275994') print('price is' + str(price)) </code></pre> <p>It should return a list containing the price, but I'm stuck with this error.</p>
<p>If you use <code>soup.select</code> as opposed to <code>soup.Select</code>, your code does work; it just returns an empty list. The reason can be seen if we inspect what you are actually calling:</p> <pre><code>help(soup.Select) Out[1]: Help on NoneType object: class NoneType(object) | Methods defined here: | | __bool__(self, /) | self != 0 | | __repr__(self, /) | Return repr(self). | | ---------------------------------------------------------------------- | Static methods defined here: | | __new__(*args, **kwargs) from builtins.type | Create and return a new object. See help(type) for accurate signature. </code></pre> <p>Compared to:</p> <pre><code>help(soup.select) Out[2]: Help on method select in module bs4.element: select(selector, namespaces=None, limit=None, **kwargs) method of bs4.BeautifulSoup instance Perform a CSS selection operation on the current element. This uses the SoupSieve library. :param selector: A string containing a CSS selector. :param namespaces: A dictionary mapping namespace prefixes used in the CSS selector to namespace URIs. By default, Beautiful Soup will use the prefixes it encountered while parsing the document. :param limit: After finding this number of results, stop looking. :param kwargs: Any extra arguments you'd like to pass in to soupsieve.select(). </code></pre> <p>That said, it seems that the page structure is actually different from the one you are trying to select - it is missing the <code>&lt;a&gt;</code> tag.</p> <pre class="lang-html prettyprint-override"><code>&lt;div id="buyNewSection" class="rbbHeader dp-accordion-row"&gt; &lt;h5&gt; &lt;div class="a-row"&gt; &lt;div class="a-column a-span4 a-text-left a-nowrap"&gt; &lt;span class="a-text-bold"&gt;Buy New&lt;/span&gt; &lt;/div&gt; &lt;div class="a-column a-span8 a-text-right a-span-last"&gt; &lt;div class="inlineBlock-display"&gt; &lt;span class="a-letter-space"&gt;&lt;/span&gt; &lt;span class="a-size-medium a-color-price offer-price a-text-normal"&gt;$16.83&lt;/span&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/h5&gt; &lt;/div&gt; </code></pre> <p>So this should work:</p> <pre><code>p = soup.select('#buyNewSection &gt; h5 &gt; div &gt; div.a-column.a-span8.a-text-right.a-span-last &gt; div.inlineBlock-display &gt; span.a-size-medium.a-color-price.offer-price.a-text-normal') abc = p[0].text.strip() abc Out[2]: '$16.83' </code></pre> <p>Additionally, you could consider using a more granular approach that lets you debug your code better. For instance:</p> <pre class="lang-py prettyprint-override"><code>buySection = soup.find('div', attrs={'id':'buyNewSection'}) buySpan = buySection.find('span', attrs={'class': 'a-size-medium a-color-price offer-price a-text-normal'}) print (buySpan) Out[1]: '$16.83' </code></pre>
python|beautifulsoup|python-requests
1
1,909,615
57,507,667
Creating a Search Bar with cascading functionality with Python
<p>I have been wondering if it is possible to create a search bar with cascading (autocomplete) functionality using an Entry widget in tkinter, or if there is another widget that can be used to achieve this. Throughout my time in desktop application development I've only been able to create one where you have to type in the full name of what you want to search, then write a query that gets the entry and fetches whatever information you want from the database. This limits me, especially when I want to create an application for a store with a lot of items: you should be able to just type the first letter of an item and have it automatically show the items starting with that letter. I'd really appreciate an answer to this.</p>
<p>All you need to do is bind a function on <code>&lt;Any-KeyRelease&gt;</code> to filter the data as the user types. When the bound function is called, get the value of the entry widget then use that to get a filtered list of values.</p> <p>Here's an example that uses a fixed set of data and a listbox to show the data, but of course you can just as easily do a database query and display the values however you wish.</p> <pre><code>import tkinter as tk # A list of all tkinter widget class names VALUES = [cls.__name__ for cls in tk.Widget.__subclasses__()] class Example(): def __init__(self): self.root = tk.Tk() self.entry = tk.Entry(self.root) self.listbox = tk.Listbox(self.root) self.vsb = tk.Scrollbar(self.root, command=self.listbox.yview) self.listbox.configure(yscrollcommand=self.vsb.set) self.entry.pack(side="top", fill="x") self.vsb.pack(side="right", fill="y") self.listbox.pack(side="bottom", fill="both", expand=True) self.entry.bind("&lt;Any-KeyRelease&gt;", self.filter) self.filter() def filter(self, event=None): pattern = self.entry.get().lower() self.listbox.delete(0, "end") filtered = [value for value in VALUES if value.lower().startswith(pattern)] self.listbox.insert("end", *filtered) example = Example() tk.mainloop() </code></pre>
python|tkinter|uisearchbar
0
1,909,616
57,348,638
Average values for same key rows excluding certain columns in pandas
<p>I have a table that has a certain subset of columns as a record-key. Record keys might have duplicates e.g several rows might have same key, but different values. I want to average values for such same-key row into one row. But some columns have numbers that represent categories and I want to exclude them from averaging and rather pick a random value. </p> <p>As an example consider this table with keys <code>k1</code> and <code>k2</code>, numerical value <code>v1</code> and categorical-int value <code>id</code></p> <pre><code>k1 | k2 | v1 | id 1 | 2 | 4 | 100 1 | 3 | 2 | 200 1 | 2 | 8 | 300 1 | 2 | 2 | 400 </code></pre> <p>I want the output to be </p> <pre><code>k1 | k2 | v1 | id 1 | 2 |14/3| 100 (or 300 or 400) 1 | 3 | 2 | 200 </code></pre> <p>Currently I have a code to average values accross same-key columns:</p> <pre><code>g = table.groupby(primary_keys) s = g.sum() table = s.div(g.count(), axis=0) </code></pre> <p>but I do not know to extend it to exclude categorical columns (say I know what they are) and pick random value for categoricals</p>
<p>Here is one way </p> <pre><code>df.groupby(['k1','k2']).agg({'v1':'mean','id':lambda x : x.sample(1)}) v1 id k1 k2 1 2 4.666667 100 3 2.000000 200 </code></pre>
pandas
1
1,909,617
42,545,779
How To Make a Generator, That's Composed of Another Generator
<p>I want to make a generator. And that generator should take an iterable. This is basically so that I can plug the generator into an existing framework. </p> <p>This is the code I've got so far.</p> <pre><code>class Iter1(object): def __init__(self, iterable=None): self.iterable = iterable def __iter__(self): if self.iterable is None: self.iterable = Iter2() return self.iterable def next(self): for thing in self.iterable: yield thing class Iter2(object): DEFAULT_PATH = r"/Users/Documents/stuff.txt" def __init__(self, path=None): self.path = path or self.DEFAULT_PATH def __iter__(self): return self def next(self): with open(self.path, 'r') as f: for line in f: yield line if __name__ == "__main__": iterable = Iter1() for thing in iterable: print(thing) </code></pre> <p>There are two problems that I have with this code. The first is that what gets returned (yielded) isn't one of the lines from the file, it's another generator object. The second is that it doesn't return the number of lines that are in the file, it just returns an infinite number of lines. I get that that's because each time I call <code>next</code> in <code>Iter2</code> I'm opening the file again, but then I don't understand how to yield each line without loading the whole file into memory.</p>
<p><a href="https://www.python.org/dev/peps/pep-0234/" rel="nofollow noreferrer">PEP 234 -- Iterators</a>:</p> <blockquote> <p>Iterator objects returned by either form of iter() have a next() method. This method either returns the next value in the iteration, or raises StopIteration (or a derived exception class) to signal the end of the iteration. Any other exception should be considered to signify an error and should be propagated normally, not taken to mean the end of the iteration.</p> </blockquote> <p>You are returning an iterator from <code>next()</code>, which is why it's not working as expected. Instead you should return a single value each time <code>next()</code> is invoked.</p> <p>Also, having <code>__iter__()</code> return <code>self</code> is a bit odd. It is generally assumed that invoking <code>iter(sequence)</code> multiple times will return multiple new iterators, each starting at the beginning of the sequence, but this isn't the case with your code.</p>
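<p>A sketch of how <code>Iter2</code> could be written so that <code>__iter__</code> returns a fresh generator each time - lines are still read lazily, so the whole file never sits in memory:</p> <pre><code>class Iter2(object):
    DEFAULT_PATH = r"/Users/Documents/stuff.txt"

    def __init__(self, path=None):
        self.path = path or self.DEFAULT_PATH

    def __iter__(self):
        # a generator function: every call to iter() produces a new
        # iterator that walks the file from the top
        with open(self.path, 'r') as f:
            for line in f:
                yield line

for line in Iter2():
    print(line)
</code></pre>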
python|python-2.7|iterator|generator
1
1,909,618
53,915,733
Use contextmanager inside init
<p>In the code below I don't understand why the <code>with super().__init__(*args, **kwargs):</code> line in MyFileIO2 is throwing an error about missing <code>__exit__</code> while everything works perfectly fine with the MyFileIO class. I don't really understand what exactly the difference between doing the with inside or outside of the init is. Can someone enlighten me what is going on here? </p> <pre><code>import io class MyFileIO(io.FileIO): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def __enter__(self, *args, **kwargs): f = super().__enter__(*args, **kwargs) print('first byte of file: ', f.read(1)) return f class MyFileIO2(io.FileIO): def __enter__(self, *args, **kwargs): f = super().__enter__(*args, **kwargs) print('first byte of file: ', f.read(1)) return f def __init__(self, *args, **kwargs): with super().__init__(*args, **kwargs): # AttributeError: __exit__ pass path = 'some_file.bin' with MyFileIO(path, 'rb'): pass MyFileIO2(path, 'rb') </code></pre>
<p>You will need to call the context manager on <code>self</code>, because <code>__init__</code> doesn't actually return anything.</p> <pre><code>class MyFileIO2(io.FileIO): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) with self: pass def __enter__(self, *args, **kwargs): f = super().__enter__(*args, **kwargs) print('First byte of file: ', f.read(1)) return f </code></pre> <p></p> <p>For testing, I created a binary file having the contents "hello world".</p> <pre><code>_ = MyFileIO2(path, 'rb') # First byte of file: b'h' </code></pre> <hr> <p>What happens is the return value of <code>super().__init__</code> is being passed through the context manager, so you effectively have this:</p> <pre><code>with None: pass AttributeError: __enter__ </code></pre> <p>The context manager tries calling the <code>__enter__</code> method on the <code>NoneType</code> object, but that is an invalid operation.</p>
python|python-3.x|class|python-3.5|contextmanager
1
1,909,619
54,209,782
Keras - Input arrays should have the same number of samples as target arrays
<p>I have the code below which run a Generative Adversarial Network (GAN) on <code>374</code> training images of size <code>32x32</code>.</p> <p>Why am I having the following error?</p> <pre><code>ValueError: Input arrays should have the same number of samples as target arrays. Found 7500 input samples and 40 target samples. </code></pre> <p>which occurs at the following statement:</p> <blockquote> <pre><code>discriminator_loss = discriminator.train_on_batch(combined_images,labels) </code></pre> </blockquote> <pre><code>import keras from keras import layers import numpy as np import cv2 import os from keras.preprocessing import image latent_dimension = 32 height = 32 width = 32 channels = 3 iterations = 100000 batch_size = 20 real_images = [] # paths to the training and results directories train_directory = '/training' results_directory = '/results' # GAN generator generator_input = keras.Input(shape=(latent_dimension,)) # transform the input into a 16x16 128-channel feature map x = layers.Dense(128*16*16)(generator_input) x = layers.LeakyReLU()(x) x = layers.Reshape((16,16,128))(x) x = layers.Conv2D(256,5,padding='same')(x) x = layers.LeakyReLU()(x) # upsample to 32x32 x = layers.Conv2DTranspose(256,4,strides=2,padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(256,5,padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(256,5,padding='same')(x) x = layers.LeakyReLU()(x) # a 32x32 1-channel feature map is generated (i.e. shape of image) x = layers.Conv2D(channels,7,activation='tanh',padding='same')(x) # instantiae the generator model, which maps the input of shape (latent dimension) into an image of shape (32,32,1) generator = keras.models.Model(generator_input,x) generator.summary() # GAN discriminator discriminator_input = layers.Input(shape=(height,width,channels)) x = layers.Conv2D(128,3)(discriminator_input) x = layers.LeakyReLU()(x) x = layers.Conv2D(128,4,strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(128,4,strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(128,4,strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Flatten()(x) # dropout layer x = layers.Dropout(0.4)(x) # classification layer x = layers.Dense(1,activation='sigmoid')(x) # instantiate the discriminator model, and turn a (32,32,1) input # into a binary classification decision (fake or real) discriminator = keras.models.Model(discriminator_input,x) discriminator.summary() discriminator_optimizer = keras.optimizers.RMSprop( lr=0.0008, clipvalue=1.0, decay=1e-8) discriminator.compile(optimizer=discriminator_optimizer, loss='binary_crossentropy') # adversarial network discriminator.trainable = False gan_input = keras.Input(shape=(latent_dimension,)) gan_output = discriminator(generator(gan_input)) gan = keras.models.Model(gan_input,gan_output) gan_optimizer = keras.optimizers.RMSprop( lr=0.0004, clipvalue=1.0, decay=1e-8) gan.compile(optimizer=gan_optimizer,loss='binary_crossentropy') start = 0 for step in range(iterations): # sample random points in the latent space random_latent_vectors = np.random.normal(size=(batch_size,latent_dimension)) # decode the random latent vectors into fake images generated_images = generator.predict(random_latent_vectors) stop = start + batch_size i = start for root, dirs, files in os.walk(train_directory): for file in files: for i in range(stop-start): img = cv2.imread(root + '/' + file) real_images.append(img) i = i+1 combined_images = np.concatenate([generated_images,real_images]) # assemble labels and discrminate between real and fake images labels = np.concatenate([np.ones((batch_size,1)),np.zeros(batch_size,1)]) # add random noise to the labels labels = labels + 0.05 * np.random.random(labels.shape) # train the discriminator discriminator_loss = discriminator.train_on_batch(combined_images,labels) random_latent_vectors = np.random.normal(size=(batch_size,latent_dimension)) # assemble labels that classify the images as "real", which is not true misleading_targets = np.zeros((batch_size,1)) # train the generator via the GAN model, where the discriminator weights are frozen adversarial_loss = gan.train_on_batch(random_latent_vectors,misleading_targets) start = start + batch_size if start &gt; len(train_directory)-batch_size: start = 0 # save the model weights if step % 100 == 0: gan.save_weights('gan.h5') print'discriminator loss: ' print discriminator_loss print 'adversarial loss: ' print adversarial_loss img = image.array_to_img(generated_images[0] * 255.) img.save(os.path.join(results_directory,'generated_melanoma_image' + str(step) + '.png')) img = image.array_to_img(real_images[0] * 255.) img.save(os.path.join(results_directory,'real_melanoma_image' + str(step) + '.png')) </code></pre> <p>Thanks.</p>
<p>The following step is causing this problem:</p> <pre><code>i = start for root, dirs, files in os.walk(train_directory): for file in files: for i in range(stop-start): img = cv2.imread(root + '/' + file) real_images.append(img) i = i+1 </code></pre> <p>You are trying to collect <code>20</code> samples of <code>real_images</code>, which is what the inner loop does. But the outer loops run once for every file, so <code>20</code> samples are collected per file - <code>7480</code> samples in total, where you planned to collect only <code>20</code>.</p>
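<p>A sketch of one way to rewrite that step so that exactly <code>batch_size</code> real images are read per iteration; the directory layout is assumed from the question. Note also that <code>np.zeros(batch_size,1)</code> in the question needs a shape tuple: <code>np.zeros((batch_size,1))</code>.</p> <pre><code># collect all training file paths once, before the training loop
paths = [os.path.join(root, name)
         for root, dirs, files in os.walk(train_directory)
         for name in files]

# inside the loop: read just this batch's slice of files
real_images = [cv2.imread(p) for p in paths[start:stop]]
combined_images = np.concatenate([generated_images, np.asarray(real_images)])
labels = np.concatenate([np.ones((batch_size, 1)),
                         np.zeros((batch_size, 1))])
</code></pre>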
python|keras
3
1,909,620
53,826,582
Pandas assign cumulative count for consecutive values in a column
<p>This is my data:</p> <pre><code>print(n0data) FULL_MPID DateTime EquipID count Index 1 5092761672035390000000000000 2018-11-28 00:36:00 1296 1 2 5092761672035390000000000000 2018-11-28 00:37:00 1634 2 3 5092761672035390000000000000 2018-11-28 13:36:00 1296 3 4 5092761672035390000000000000 2018-11-28 13:38:00 1634 4 5 5092761672035390000000000000 2018-11-29 17:37:00 1290 5 6 5092761672035390000000000000 2018-11-29 17:37:00 1634 6 7 5092761672035390000000000000 2018-11-30 21:23:00 1290 7 8 5092761672035390000000000000 2018-11-30 21:24:00 1634 8 9 5092761672035390000000000000 2018-12-02 09:37:00 1296 9 10 5092761672035390000000000000 2018-12-02 09:39:00 1634 10 11 5092761672035390000000000000 2018-12-02 09:39:00 1634 11 12 5092761672035390000000000000 2018-12-03 11:55:00 1290 12 13 5092761672035390000000000000 2018-12-03 12:02:00 1634 13 14 5092761672035390000000000000 2018-12-06 12:22:00 1290 14 15 5092761672035390000000000000 2018-12-06 12:22:00 1634 15 16 5092761672035390000000000000 2018-12-06 12:22:00 1634 16 17 5092761672035390000000000000 2018-12-06 12:23:00 1634 17 18 5092761672035390000000000000 2018-12-06 12:23:00 1634 18 19 5092761672035390000000000000 2018-12-06 12:23:00 1634 19 20 5092761672035390000000000000 2018-12-06 12:23:00 1634 20 21 5092761672035390000000000000 2018-12-06 12:23:00 1634 21 22 5092761672035390000000000000 2018-12-09 05:51:00 1290 22 </code></pre> <p></p> <p>So I have a groupBy function that makes the following ecount column with the command: </p> <pre><code>n0data['ecount'] = n0data.groupby(['EquipID','FULL_MPID']).cumcount() + 1 </code></pre> <p>The data is sorted by the time and looks to identify when the changeover of EquipID happens.</p> <p>Ecount is supposed to be:</p> <p><a href="https://i.stack.imgur.com/GE6Vk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GE6Vk.png" alt="ecount right"></a></p> <p>When the EquipID column values changes from one value to another, ecount should reset. However if EquipID does not change, like during index 15-21 rows, EquipID should continue counting. I thought this was what the groupBy delivered also... </p>
<p>You can use the <code>shift</code> and <code>cumsum</code> trick before <code>groupby</code>:</p> <pre><code>v = df.EquipID.ne(df.EquipID.shift()) v.groupby(v.cumsum()).cumcount() + 1 Index 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 1 9 1 10 1 11 2 12 1 13 1 14 1 15 1 16 2 17 3 18 4 19 5 20 6 21 7 22 1 dtype: int64 </code></pre>
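<p>To attach the result as the <code>ecount</code> column from the question:</p> <pre><code>v = df.EquipID.ne(df.EquipID.shift())             # True at each changeover
df['ecount'] = v.groupby(v.cumsum()).cumcount() + 1  # cumsum labels each run
</code></pre>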
python|pandas|dataframe|group-by|pandas-groupby
2
1,909,621
22,677,507
Is it possible to port Python GAE db.GeoPt to a Go type?
<p>I'm working on a porting an existing GAE app that was originally written in Python to Go. So far it's been pretty great and reasonably easy (though it's not been without its quirks).</p> <p>Since this port will be deployed to the same GAE app on a different version, the two versions will share the same datastore. The problem is that the original Python app makes extensive use of the db.GeoPt type.</p> <p>I implemented my own custom PropertyLoadSaver on one of my types so I could look at how I might represent a db.GeoPt in Go, via reflection. But apparently the memory layout of db.GeoPt is not compatible with anything in Go at all. Does anybody know how I might go about this? Has anybody done this before?</p> <p>Here's some code to give you guys a better idea of what I'm doing:</p> <pre class="lang-go prettyprint-override"><code>func (sS *SomeStruct) Load(c &lt;-chan datastore.Property) error { for p := range c { if p.Name == "location" { // "location" is the name of the original db.GeoPt property v := reflect.ValueOf(p.Value) // If I call v.Kind(), it returns reflect.Invalid // And yes, I know v is declared and never used :P } } return nil } </code></pre> <p>Thank you in advance!</p>
<p><code>appengine.GeoPoint</code> support in <code>appengine/datastore</code> was added in the 1.9.3 App Engine release.</p>
python|google-app-engine|go|google-cloud-datastore
2
1,909,622
45,513,520
Making parallel code work in python 2.7 and 3.6
<p>I have some code in python 3.6 which is like this:</p> <pre><code>from multiprocessing import Pool with Pool(processes=4) as p: p.starmap(parallel_function, list(dict_variables.items())) </code></pre> <p>Here dict_variables looks like this:</p> <pre><code>[('aa', ['ab', 'ab', 'ad']), ('aa1', ['a1b', 'a1b', 'a2d'])] </code></pre> <p>This code only works in python 3.6. How can I make it work in 2.7?</p>
<p><code>starmap</code> was <a href="https://docs.python.org/3.6/library/multiprocessing.html#multiprocessing.pool.Pool.starmap" rel="nofollow noreferrer">introduced in Python3.3</a>. In Python2, use <a href="https://docs.python.org/2.7/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.map" rel="nofollow noreferrer"><code>Pool.map</code></a> and unpack the argument yourself:</p> <p>In Python3:</p> <pre><code>import multiprocessing as mp def starmap_func(x, y): return x**y with mp.Pool(processes=4) as p: print(p.starmap(starmap_func, [(1,2), (3,4), (5,6)])) # [1, 81, 15625] </code></pre> <p>In Python2 or Python3:</p> <pre><code>import multiprocessing as mp def map_func(arg): x, y = arg return x**y p = mp.Pool(processes=4) print(p.map(map_func, [(1,2), (3,4), (5,6)])) # [1, 81, 15625] p.close() </code></pre>
python|python-2.7|python-3.x|multiprocessing|starmap
1
1,909,623
45,420,740
Pandas add dataframes side to side with different indexes
<p>I have dataframes like this:</p> <pre><code> Sender USD_Equivalent 725 ABC 5777527.31 330 CFE 4717812.90 12 CDE 3085838.19 Sender USD_Equivalent 707 AAP 1962412.94 149 EFF 1777705.37 189 EFG 1744705.37 </code></pre> <p>And I want them like this :</p> <pre><code>Sender USD_Equivalent Sender USD_Equivalent ABC 5777527.31 AAP 1962412.94 CFE 4717812.90 EFF 1777705.37 CDE 3085838.19 EFG 1744705.37 </code></pre> <p>Thanks</p>
<pre><code>pd.concat([d.reset_index(drop=True) for d in [df1, df2]], axis=1) Sender USD_Equivalent Sender USD_Equivalent 0 ABC 5777527.31 AAP 1962412.94 1 CFE 4717812.90 EFF 1777705.37 2 CDE 3085838.19 EFG 1744705.37 </code></pre>
pandas
5
1,909,624
44,781,818
Save related Images Django REST Framework
<p>I have this basic model layout:</p> <pre><code>class Listing(models.Model): name = models.TextField() class ListingImage(models.Model): listing = models.ForeignKey(Listing, related_name='images', on_delete=models.CASCADE) image = models.ImageField(upload_to=listing_image_path) </code></pre> <p>Im trying to write a serializer which lets me add an rest api endpoint for creating Listings including images.</p> <p>My idea would be this:</p> <pre><code>class ListingImageSerializer(serializers.ModelSerializer): class Meta: model = ListingImage fields = ('image',) class ListingSerializer(serializers.ModelSerializer): images = ListingImageSerializer(many=True) class Meta: model = Listing fields = ('name', 'images') def create(self, validated_data): images_data = validated_data.pop('images') listing = Listing.objects.create(**validated_data) for image_data in images_data: ListingImage.objects.create(listing=listing, **image_data) return listing </code></pre> <p>My Problems are:</p> <ol> <li><p>I'm not sure how and if I can send a list of images in a nested dictionary using a multipart POST request.</p></li> <li><p>If I just post an images list and try to convert it from a list to a list of dictionaries before calling the serializer, I get weird OS errors when parsing the actual image.</p> <pre><code>for key, item in request.data.items(): if key.startswith('images'): # images.append({'image': item}) request.data[key] = {'image': item} </code></pre></li> </ol> <p>My request code looks like this:</p> <pre><code>import requests from requests_toolbelt.multipart.encoder import MultipartEncoder api_token = 'xxxx' images_data = MultipartEncoder( fields={ 'name': 'test', 'images[0]': (open('lilo.png', 'rb'), 'image/png'), 'images[1]': (open('panda.jpg', 'rb'), 'image/jpeg') } ) response = requests.post('http://127.0.0.1:8000/api/listings/', data=images_data, headers={ 'Content-Type': images_data.content_type, 'Authorization': 'Token' + ' ' + api_token }) </code></pre> <p>I did find a very hacky solution which I will post in the answers but its not really robust and there needs to be a better way to do this.</p>
<p>So my solution is based on this post and works quite well, but seems very unrobust and hacky.</p> <p>I change the images field from a relation serializer requiring a dictionary to a ListField. Doing this I need to override the ListField method to actually create a list out of the RelatedModelManager when calling "to_representation".</p> <p>This basically behaves like a list on input, but like a model field on read.</p> <pre><code>class ModelListField(serializers.ListField): def to_representation(self, data): """ List of object instances -&gt; List of dicts of primitive datatypes. """ return [self.child.to_representation(item) if item is not None else None for item in data.all()] class ListingSerializer(serializers.ModelSerializer): images = ModelListField(child=serializers.FileField(max_length=100000, allow_empty_file=False, use_url=False)) class Meta: model = Listing fields = ('name', 'images') def create(self, validated_data): images_data = validated_data.pop('images') listing = Listing.objects.create(**validated_data) for image_data in images_data: ListingImage.objects.create(listing=listing, image=image_data) return listing </code></pre>
python|django|django-rest-framework
3
1,909,625
44,670,006
Posting Data to Server with volley library from android App
<p>Guys i need help am trying to post this data to my server here but it is returning an error, it seems to be working just fine in postman the problem comes in while trying to implement in android app using google's volley library.</p> <p><a href="http://yoyo.coderows.nl/rest/save_inventory.php" rel="nofollow noreferrer">Link</a> to server script. </p> <p>This is the screenshot of a successful post working in postman rest client:<a href="https://i.stack.imgur.com/RXsGf.png" rel="nofollow noreferrer">2</a></p> <pre><code>private void SaveDataToServer() { StringRequest serverPostRequest = new StringRequest(Request.Method.POST, Config.SAVE_INVENTORY_URL, new Response.Listener&lt;String&gt;() { @Override public void onResponse(String json) { try { Toast.makeText(SelectItemsActivity.this, json.toString(), Toast.LENGTH_SHORT).show(); Log.e("RESPONSE FROM SERVER",json); JSONObject dataJson=new JSONObject(json); JSONObject myJson=dataJson.getJSONObject("status"); String status=myJson.getString("status_text"); if (status.equalsIgnoreCase("Success.")){ Toast.makeText(SelectItemsActivity.this, "Data saved Successfully", Toast.LENGTH_SHORT).show(); proggressShow.dismiss(); }else { Toast.makeText(SelectItemsActivity.this, "An error occured while saving data", Toast.LENGTH_SHORT).show(); proggressShow.dismiss(); } } catch (JSONException e) { e.printStackTrace(); } } }, new Response.ErrorListener() { @Override public void onErrorResponse(VolleyError volleyError) { } }){ @Override protected Map&lt;String, String&gt; getPostParams() { HashMap&lt;String, String&gt; params = new HashMap&lt;String, String&gt;(); params.put("api_key",Config.API_KEY); params.put("move_id", "1"); params.put("room_name", "Attic room"); params.put("item_name", "Halloween Broom"); params.put("item_id", "6"); Log.e("datat to server",params.toString()); return params; } }; saveDataRequest.add(serverPostRequest); } </code></pre>
<p>After adding these lines to my request headers I was able to successfully commit the data to my db. Also, I made sure I am using a StringRequest; for a JsonRequest it does not work, for reasons I do not know.</p> <pre><code> @Override public Map&lt;String, String&gt; getHeaders() throws AuthFailureError { HashMap&lt;String, String&gt; headers = new HashMap&lt;&gt;(); headers.put("Content-Type","application/x-www-form-urlencoded"); return headers; } </code></pre>
java|android|networking|python-requests|android-volley
0
1,909,626
24,172,361
Is it possible to return an HttpResponse object in render function?
<p>Reasons for why someone would want to do this aside, is it possible? Something along the lines of</p> <pre><code>from cms.plugin_base import CMSPluginBase from data_viewer.models.data_view import DataPlugin from django.http import HttpResponse class CMSPlugin(CMSPluginBase): def render(self, context, instance): response = HttpResponse(content_type='text/csv') return response </code></pre> <p>Usually render functions require a context to be returned, so this code doesn't work as is. Once again, I know this isn't typical. I just want to know if it's possible.</p> <p>Thanks in advance, all help is appreciated!</p>
<p>In short: No.</p> <p>The <code>render</code> method is very unfortunately named and should really be called <code>get_context</code>. It <strong>must</strong> return a dictionary or <code>Context</code> instance, see the <a href="http://docs.django-cms.org/en/latest/extending_cms/custom_plugins.html#the-simplest-plugin" rel="nofollow">docs</a></p> <p>If you want to extend django CMS with something that returns <code>HttpResponse</code> objects, have a look at <a href="http://docs.django-cms.org/en/latest/extending_cms/app_integration.html#integration-apphooks" rel="nofollow">apphooks</a>.</p>
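<p>For illustration, here is a minimal sketch of the apphook route (the names are hypothetical, not from django CMS itself): a plain Django view returns the <code>HttpResponse</code>, and the apphook's URL conf points the CMS page at it.</p> <pre><code># views.py (a sketch; export_csv is a made-up name) from django.http import HttpResponse def export_csv(request): response = HttpResponse(content_type='text/csv') response['Content-Disposition'] = 'attachment; filename=&quot;export.csv&quot;' # ... write CSV rows to the response here ... return response </code></pre> <p>Because the view owns the full request/response cycle, you can return any <code>HttpResponse</code> subclass, which the plugin <code>render</code> method cannot do.</p>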
python|django|django-cms
1
1,909,627
24,181,704
How to restart an iterator?
<p>How to restart an iterator?</p> <p>I have a list of columns names like this:</p> <pre><code>my_column_names = ["A", "B", "C", "D", "F", "G", "H"] </code></pre> <p>And I take a csv file with rows like this:</p> <pre><code>A,500 B,3.0 C,87 A,200 A,300 B,3.5 D,CALL E,CLEAN F,MADRID G,28000 H,SPAIN A,150 B,1.75 C,103 D,PUT </code></pre> <p>I want to make a csv file with this format:</p> <pre><code>A,B,C,D,E,F,G,H 500,3.0,87,,,,, 200,,,,,,, 300,3.5,,CALL,CLEAN,MADRID,28000,SPAIN 150,1.75,103,PUT,,,, </code></pre> <p>My code:</p> <pre><code>iter_column_names = itertools.cycle(my_column_names) my_new_line = [] for old_line in new_file: column_name = iter_column_names.__next__() if old_line[0] == column_name: my_new_line.append(old_line[1]) else: my_new_line.append('') if column_name == "H": print(my_new_line) # to change by writeline() when it works fine my_new_line = [] </code></pre> <p>But it doesn't work like I need. I suppose that the problem is that it needs to restart the <code>iter_column_names</code> every time it reaches the "H" element. Or not?</p>
<p>I'd use a <code>csv.DictWriter()</code> and use a dictionary to handle the rows. That way you can detect if a column has been seen already, and start a new row:</p> <pre><code>import csv fields = ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H') with open('inputfile.csv', newline='') as infh, open('output.csv', 'w', newline='') as outfh: reader = csv.reader(infh) writer = csv.DictWriter(outfh, fields) writer.writeheader() row = {} for key, value in reader: if key in row: # new row found, write old writer.writerow(row) row = {} row[key] = value # write last row if row: writer.writerow(row) </code></pre> <p>Demo:</p> <pre><code>&gt;&gt;&gt; import csv &gt;&gt;&gt; import sys &gt;&gt;&gt; infh = '''\ ... A,500 ... B,3.0 ... C,87 ... A,200 ... A,300 ... B,3.5 ... D,CALL ... E,CLEAN ... F,MADRID ... G,28000 ... H,SPAIN ... A,150 ... B,1.75 ... C,103 ... D,PUT ... '''.splitlines() &gt;&gt;&gt; outfh = sys.stdout &gt;&gt;&gt; fields = ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H') &gt;&gt;&gt; if True: ... reader = csv.reader(infh) ... writer = csv.DictWriter(outfh, fields) ... writer.writeheader() ... row = {} ... for key, value in reader: ... if key in row: ... # new row found, write old ... writer.writerow(row) ... row = {} ... row[key] = value ... # write last row ... if row: ... writer.writerow(row) ... A,B,C,D,E,F,G,H 500,3.0,87,,,,, 17 200,,,,,,, 12 300,3.5,,CALL,CLEAN,MADRID,28000,SPAIN 40 150,1.75,103,PUT,,,, 22 </code></pre> <p>The numbers in between (<code>17</code>, <code>12</code>, <code>40</code>, <code>22</code>) are the <code>writer.writerow()</code> return values (bytes written).</p>
python|python-3.x|iterator
3
1,909,628
36,139,889
RNN model running out of memory in TensorFlow
<p>I implemented a Sequence to Sequence model using the rnn.rnn helper in TensorFlow.</p> <pre><code>with tf.variable_scope("rnn") as scope, tf.device("/gpu:0"): cell = tf.nn.rnn_cell.BasicLSTMCell(4096) lstm = tf.nn.rnn_cell.MultiRNNCell([cell] * 2) _, cell = rnn.rnn(lstm, input_vectors, dtype=tf.float32) tf.get_variable_scope().reuse_variables() lstm_outputs, _ = rnn.rnn(lstm, output_vectors, initial_state=cell) </code></pre> <p>The model is running out of memory on a Titan X with 16 GB of memory while allocating gradients for the LSTM cells:</p> <pre><code>W tensorflow/core/kernels/matmul_op.cc:158] Resource exhausted: OOM when allocating tensor with shape[8192,16384] W tensorflow/core/common_runtime/executor.cc:1102] 0x2b42f00 Compute status: Resource exhausted: OOM when allocating tensor with shape[8192,16384] [[Node: gradients/rnn/RNN/MultiRNNCell_1/Cell0/BasicLSTMCell/Linear/MatMul_grad/MatMul_1 = MatMul[T=DT_FLOAT, transpose_a=true, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](rnn/RNN/MultiRNNCell_1/Cell0/BasicLSTMCell/Linear/concat, gradients/rnn/RNN/MultiRNNCell_1/Cell0/BasicLSTMCell/add_grad/tuple/control_dependency)]] </code></pre> <p>If I reduce the length of the input and output sequences to 4 or less the model runs without a problem.</p> <p>This indicates to me that TF is trying to allocate the gradients for all time steps at the same time. Is there a way of avoiding this?</p>
<p>The function <code>tf.gradients</code>, as well as the <code>minimize</code> method of the optimizers, allows you to set a parameter called <code>aggregation_method</code>. The default value is <code>ADD_N</code>. This method constructs the graph in such a way that all gradients need to be computed at the same time. </p> <p>There are two other undocumented methods called <code>tf.AggregationMethod.EXPERIMENTAL_TREE</code> and <code>tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N</code>, which do not have this requirement.</p>
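<p>For example, a sketch against the TF 1.x API (the loss variable and learning rate are placeholders for your own):</p> <pre><code>optimizer = tf.train.AdamOptimizer(1e-3) train_op = optimizer.minimize( loss, aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N) </code></pre>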
tensorflow
5
1,909,629
35,973,964
Python - how to print entry text tkinter?
<p>I'm writing a basic Q&amp;A program as I learn Python, and I'm messing around with tkinter's functions. I'm trying to print user input, but it just prints a blank line. Here is my code:</p> <pre><code>from tkinter import * from tkinter import ttk def response(): value = str(var.get()) print(value) root = Tk() root.title("Bot") mainframe = ttk.Frame(root, padding = "5 5 15 15") mainframe.grid(column=0, row=0, sticky=(N, W, E, S)) mainframe.columnconfigure(0, weight=1) mainframe.rowconfigure(0, weight=1) var = StringVar() input_entry = ttk.Entry(mainframe, width=20, textvariable=var) input_entry.grid(column=5, row=5, sticky = (W, E)) input_entry.pack() ttk.Label(mainframe, textvariable=response).grid(column=2, row=2, sticky=(W, E)) ttk.Button(mainframe, text="Ask away!", command=response).grid(column=3, row=3, sticky=W) root.mainloop() </code></pre>
<p>To get an entry widget's text you can use <code>input_entry.get()</code></p> <p>You can see the documentation for the ttk entry widget <a href="https://www.tcl.tk/man/tcl8.5/TkCmd/ttk_entry.htm" rel="nofollow">here</a></p>
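<p>Applied to your code, the button callback could look like this (either the entry widget or its <code>StringVar</code> works):</p> <pre><code>def response(): print(input_entry.get()) # or: print(var.get()) </code></pre> <p>Note that in your snippet <code>input_entry</code> is created after <code>response</code> is defined; that is fine as long as it exists by the time the button is clicked.</p>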
python|tkinter|printing|tkinter-entry
1
1,909,630
29,402,137
remove the item in string
<p>How do I remove the blank lines and extra whitespace in the string and return a list made of the remaining strings? This is what I have written. Thanks in advance!!!</p> <pre><code> def get_poem_lines(poem): r&quot;&quot;&quot; (str) -&gt; list of str Return the non-blank, non-empty lines of poem, with whitespace removed from the beginning and end of each line. &gt;&gt;&gt; get_poem_lines('The first line leads off,\n\n\n' ... + 'With a gap before the next.\nThen the poem ends.\n') ['The first line leads off,', 'With a gap before the next.', 'Then the poem ends.'] &quot;&quot;&quot; list=[] for line in poem: if line == '\n' and line == '+': poem.remove(line) s = poem.remove(line) for a in s: list.append(a) return list </code></pre>
<p><code>split</code> and <code>strip</code> might be what you need:</p> <pre><code>s = 'The first line leads off,\n\n\n With a gap before the next.\nThen the poem ends.\n' print([line.strip() for line in s.split("\n") if line]) ['The first line leads off,', 'With a gap before the next.', 'Then the poem ends.'] </code></pre> <p>Not sure where the <code>+</code> fits in as it is; if it is involved somehow, either strip it or <code>str.replace</code> it. Also, avoid using <code>list</code> as a variable name, since it shadows the built-in <code>list</code>. </p> <p>Lastly, strings have no remove method. You can <code>.replace</code>, but since strings are <em>immutable</em> you will need to reassign <code>poem</code> to the return value of replace, i.e. <code>poem = poem.replace("+","")</code>.</p>
string|python-3.x
1
1,909,631
60,870,263
python random choice choosing nothing
<p>I have a method to generate a random string that always starts with a character and has a length of at minimum 1.</p> <pre><code>class Util: @staticmethod def get_random_name(): N = r.randint(0, 5) return "".join( r.choice( string.ascii_lowercase + string.ascii_uppercase ) ).join( r.choice( string.ascii_lowercase + string.ascii_uppercase + string.digits ) for _ in range(N) ) </code></pre> <p>Now when I do this:</p> <pre><code>for i in range(0,50): logging.debug(str(i)+" -- "+Util().get_random_name()) </code></pre> <p>Some of them give me an empty string, or sometimes the string starts with a number.</p> <p><strong>What am I missing?</strong></p> <p>Check the log: <a href="https://i.stack.imgur.com/JBlg7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JBlg7.png" alt="enter image description here"></a></p>
<p>I think you're looking for something like</p> <pre><code>import random import string def get_random_name(min_n=0, max_n=5): initial = random.choice( string.ascii_lowercase + string.ascii_uppercase ) return initial + "".join( random.choice( string.ascii_lowercase + string.ascii_uppercase + string.digits ) for _ in range( random.randint(min_n, max_n) ) ) for x in range(10): print(x, get_random_name(max_n=x)) </code></pre> <p>Output (e.g.):</p> <pre><code>0 z 1 cm 2 W 3 oku9 4 nh 5 Ul3 6 yNPH 7 Rw7hW0eW 8 qR 9 BYKaGyv </code></pre>
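<p>On Python 3.6+, a shorter alternative (a sketch, assuming uniform selection with repetition is acceptable) is <code>random.choices</code>, which draws k items in one call:</p> <pre><code>import random import string def get_random_name(min_n=0, max_n=5): letters = string.ascii_letters # lower + upper case initial = random.choice(letters) # always start with a letter tail = random.choices(letters + string.digits, k=random.randint(min_n, max_n)) return initial + "".join(tail) </code></pre>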
python|python-3.x
3
1,909,632
62,760,849
windows suddenly can't run python "this app can't run on your PC anymore"
<p>I was coding in VS Code and when I wanted to run the script I suddenly got thrown this error by Windows: <a href="https://i.stack.imgur.com/sRaLl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sRaLl.png" alt="enter image description here" /></a></p> <p>So now if I type python in cmd I get this error and Python returns &quot;Access is denied.&quot; This happened suddenly: one minute I could run Python and now I can't, so I don't think it's a Windows update causing the problem.</p> <p>I looked around and it seems most people that get this error downloaded the 32-bit version of an app on a 64-bit machine.</p> <p>Edit: it seems like python.exe is now 0 KB, i.e. corrupted, so I'll probably have to reinstall Python, but I'd love to know what caused it.</p>
<p>Ran into the same issue. Try downloading the &quot;Windows x86-64 MSI installer&quot; from this link <a href="https://www.python.org/downloads/release/python-2718/" rel="nofollow noreferrer">https://www.python.org/downloads/release/python-2718/</a> for Windows. This one worked for me.</p> <p>This is the latest 2.X version as of writing this answer. In the future, check for the latest 2.X release (if any) on the default Python download page.</p>
python|windows|windows-10
0
1,909,633
62,672,983
elasticsearch.exceptions.SSLError: ConnectionError hostname doesn't match
<p>I've been using the Elasticsearch Python API to do some basic operations on a cluster (like creating an index or listing them). Everything worked fine, but I decided to activate SSL authentication on the cluster and my scripts aren't working anymore.</p> <p>I have the following errors:</p> <pre><code>Certificate did not match expected hostname: X.X.X.X. Certificate: {'subject': ((('commonName', 'X.X.X.X'),),), 'subjectAltName': [('DNS', 'X.X.X.X')]} GET https://X.X.X.X:9201/ [status:N/A request:0.009s] Traceback (most recent call last): File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py&quot;, line 672, in urlopen chunked=chunked, File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py&quot;, line 376, in _make_request self._validate_conn(conn) File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py&quot;, line 994, in _validate_conn conn.connect() File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py&quot;, line 386, in connect _match_hostname(cert, self.assert_hostname or server_hostname) File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py&quot;, line 396, in _match_hostname match_hostname(cert, asserted_hostname) File &quot;/home/esadm/env/lib/python3.7/ssl.py&quot;, line 338, in match_hostname % (hostname, dnsnames[0])) ssl.SSLCertVerificationError: (&quot;hostname 'X.X.X.X' doesn't match 'X.X.X.X'&quot;,) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/esadm/env/lib/python3.7/site-packages/elasticsearch/connection/http_urllib3.py&quot;, line 233, in perform_request method, url, body, retries=Retry(False), headers=request_headers, **kw File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py&quot;, line 720, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/util/retry.py&quot;, line 376, in increment raise six.reraise(type(error), error, _stacktrace) File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/packages/six.py&quot;, line 734, in reraise raise value.with_traceback(tb) File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py&quot;, line 672, in urlopen chunked=chunked, File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py&quot;, line 376, in _make_request self._validate_conn(conn) File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py&quot;, line 994, in _validate_conn conn.connect() File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py&quot;, line 386, in connect _match_hostname(cert, self.assert_hostname or server_hostname) File &quot;/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py&quot;, line 396, in _match_hostname match_hostname(cert, asserted_hostname) File &quot;/home/esadm/env/lib/python3.7/ssl.py&quot;, line 338, in match_hostname % (hostname, dnsnames[0])) urllib3.exceptions.SSLError: (&quot;hostname 'X.X.X.X' doesn't match 'X.X.X.X'&quot;,) </code></pre> <p>The thing I don't understand is that this message doesn't make any sense:</p> <blockquote> <p><code>&quot;hostname 'X.X.X.X' doesn't match 'X.X.X.X'&quot;</code></p> </blockquote> <p>Because the two addresses match, they are exactly the same!</p> <p>I've followed the docs and my configuration of the Elasticsearch instance looks like this:</p> 
<pre><code>Elasticsearch([get_ip_address()], http_auth=('elastic', 'pass'), use_ssl=True, verify_certs=True, port=get_instance_port(), ca_certs='ca.crt', client_cert='pvln0047.crt', client_key='pvln0047.key' ) </code></pre> <p>Thanks for your help</p>
<p>Problem solved, the issue was in the constructor:</p> <pre><code>Elasticsearch([get_ip_address()], http_auth=('elastic', 'pass'), use_ssl=True, verify_certs=True, port=get_instance_port(), ca_certs='ca.crt', client_cert='pvln0047.crt', client_key='pvln0047.key' ) </code></pre> <p>Instead of mentioning the IP address, I needed to mention the <strong>DNS name</strong>. I also changed the arguments by using a context object, just to follow the original docs.</p> <pre><code>context = create_default_context(cafile=&quot;ca.crt&quot;) context.load_cert_chain(certfile=&quot;pvln0047.crt&quot;, keyfile=&quot;pvln0047.key&quot;) context.verify_mode = CERT_REQUIRED Elasticsearch(['dns_name'], http_auth=('elastic', 'pass'), scheme=&quot;https&quot;, port=get_instance_port(), ssl_context=context ) </code></pre> <p>This is how I generated the certificates:</p> <pre><code>bin/elasticsearch-certutil cert ca --pem --in /tmp/instance.yml --out /home/user/certs.zip </code></pre> <p>And this is my instance.yml file:</p> <pre><code>instances: - name: 'dns_name' dns: [ 'dns_name' ] </code></pre> <p>Hope it will help someone!</p>
python|elasticsearch|ssl
2
1,909,634
64,513,180
Error while importing tensorflow in jupyter environment
<p><strong>It's repeatedly showing this error while importing TensorFlow</strong></p> <p>I am using a separate environment in Anaconda with Jupyter installed. Can anyone help me solve this error?</p> <blockquote> <p>ImportError Traceback (most recent call last) E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in 63 try: ---&gt; 64 from tensorflow.python._pywrap_tensorflow_internal import * 65 # This try catch logic is because there is no bazel equivalent for py_extension.</p> <p>ImportError: DLL load failed: The specified module could not be found.</p> <p>During handling of the above exception, another exception occurred:</p> <p>ImportError Traceback (most recent call last) in 1 import os ----&gt; 2 import tensorflow as tf 3 import matplotlib.pyplot as plt 4 import numpy as np 5 import pandas as pd</p> <p>E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\__init__.py in 39 import sys as _sys 40 ---&gt; 41 from tensorflow.python.tools import module_util as _module_util 42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader 43</p> <p>E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python\__init__.py in 37 # go/tf-wildcard-import 38 # pylint: disable=wildcard-import,g-bad-import-order,g-import-not-at-top ---&gt; 39 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow 40 41 from tensorflow.python.eager import context</p> <p>E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in 81 for some common reasons and solutions. Include the entire stack trace 82 above this error message when asking for help.&quot;&quot;&quot; % traceback.format_exc() ---&gt; 83 raise ImportError(msg) 84 85 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long</p> <p>ImportError: Traceback (most recent call last): File &quot;E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py&quot;, line 64, in from tensorflow.python._pywrap_tensorflow_internal import * ImportError: DLL load failed: The specified module could not be found.</p> </blockquote>
<p>The DLL load failure happens because either you have not installed the <a href="https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0" rel="nofollow noreferrer">Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019</a> or your CPU does not support AVX2 instructions.</p> <p>As a workaround, you can either compile TensorFlow from source or use Google Colaboratory. Follow the instructions mentioned <a href="https://www.tensorflow.org/install/source_windows" rel="nofollow noreferrer">here</a> to build TensorFlow from source.</p>
tensorflow|import|jupyter
0
1,909,635
69,846,536
Vectorise nested vmap
<p>Here's some data I have:</p> <pre class="lang-py prettyprint-override"><code>import jax.numpy as jnp import numpyro.distributions as dist import jax xaxis = jnp.linspace(-3, 3, 5) yaxis = jnp.linspace(-3, 3, 5) </code></pre> <p>I'd like to run the function</p> <pre class="lang-py prettyprint-override"><code>def func(x, y): return dist.MultivariateNormal(jnp.zeros(2), jnp.array([[.5, .2], [.2, .1]])).log_prob(jnp.asarray([x, y])) </code></pre> <p>over each pair of values from <code>xaxis</code> and <code>yaxis</code>.</p> <p>Here's a &quot;slow&quot; way to do:</p> <pre class="lang-py prettyprint-override"><code>results = np.zeros((len(xaxis), len(yaxis))) for i in range(len(xaxis)): for j in range(len(yaxis)): results[i, j] = func(xaxis[i], yaxis[j]) </code></pre> <p>Works, but it's slow.</p> <p>So here's a vectorised way of doing it:</p> <pre class="lang-py prettyprint-override"><code>jax.vmap(lambda axis: jax.vmap(func, (None, 0))(axis, yaxis))(xaxis) </code></pre> <p>Much faster, but it's hard to read.</p> <p>Is there a clean way of writing the vectorised version? Can I do it with a single <code>vmap</code>, rather than having to nest one within another one?</p> <p>EDIT</p> <p>Another way would be</p> <pre class="lang-py prettyprint-override"><code>jax.vmap(func)(xmesh.flatten(), ymesh.flatten()).reshape(len(xaxis), len(yaxis)).T </code></pre> <p>but it's still messy.</p>
<p>I believe <a href="https://stackoverflow.com/questions/69772134/vectorization-guidelnes-for-jax">Vectorization guidelnes for jax</a> is quite similar to your question; to replicate the logic of nested for-loops with vmap requires nested vmaps.</p> <p>The cleanest approach using <code>jax.vmap</code> is probably something like this:</p> <pre class="lang-python prettyprint-override"><code>from functools import partial @partial(jax.vmap, in_axes=(0, None)) @partial(jax.vmap, in_axes=(None, 0)) def func(x, y): return dist.MultivariateNormal(jnp.zeros(2), jnp.array([[.5, .2], [.2, .1]])).log_prob(jnp.asarray([x, y])) func(xaxis, yaxis) </code></pre> <p>Another option here is to use the <code>jnp.vectorize</code> API (which is implemented via multiple vmaps), in which case you can do something like this:</p> <pre class="lang-python prettyprint-override"><code>print(jnp.vectorize(func)(xaxis[:, None], yaxis)) </code></pre>
python|numpy|jax
1
1,909,636
69,887,372
Split a column into multiple columns with condition
<p>I have a question about splitting a column into multiple columns in Pandas with conditions. For example, I tend to do something as follows, but it takes a very long time using a for loop:</p> <pre><code>| Index | Value | | ----- | ----- | | 0 | 1 | | 1 | 1,3 | | 2 | 4,6,8 | | 3 | 1,3 | | 4 | 2,7,9 | </code></pre> <p>into</p> <pre><code>| Index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | | ----- | - | - | - | - | - | - | - | - | - | | 0 | 1 | | | | | | | | | | 1 | 1 | | 3 | | | | | | | | 2 | | | | 4 | | 6 | | 8 | | | 3 | 1 | | 3 | | | | | | | | 4 | | 2 | | | | | 7 | | 9 | </code></pre> <p>I wonder if there are any packages that can help with this, rather than writing a for loop to map all indexes.</p>
<p>Assuming the &quot;Value&quot; column contains strings, you can use <code>str.split</code> and <code>pivot</code> like so:</p> <pre><code>value = df[&quot;Value&quot;].str.split(&quot;,&quot;).explode().astype(int).reset_index() output = value.pivot(index=&quot;index&quot;, columns=&quot;Value&quot;, values=&quot;Value&quot;) output = output.reindex(range(value[&quot;Value&quot;].min(), value[&quot;Value&quot;].max()+1), axis=1) &gt;&gt;&gt; output Value 1 2 3 4 5 6 7 8 9 index 0 1.0 NaN NaN NaN NaN NaN NaN NaN NaN 1 1.0 NaN 3.0 NaN NaN NaN NaN NaN NaN 2 NaN NaN NaN 4.0 NaN 6.0 NaN 8.0 NaN 3 1.0 NaN 3.0 NaN NaN NaN NaN NaN NaN 4 NaN 2.0 NaN NaN NaN NaN 7.0 NaN 9.0 </code></pre> <h6>Input <code>df</code>:</h6> <pre><code>df = pd.DataFrame({&quot;Value&quot;: [&quot;1&quot;, &quot;1,3&quot;, &quot;4,6,8&quot;, &quot;1,3&quot;, &quot;2,7,9&quot;]}) </code></pre>
python|split
2
1,909,637
69,942,305
Given an array arr of size n and an integer X. Find if there's a triplet in the array which sums up to the given integer X
<p>Given an array arr of size n and an integer X. Find if there's a triplet in the array which sums up to the given integer X.</p> <pre><code> Input: n = 5, X = 10 arr[] = [1 2 4 3 6] Output: Yes Explanation: The triplet {1, 3, 6} in the array sums up to 10. </code></pre>
<p>The line of reasoning is: get all the possible combinations of 3 numbers in the array arr, find which have sum == X, and print only those triplets.</p> <pre><code>import numpy as np import itertools arr=np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) X=10 combinations=np.array(list(itertools.combinations(arr, 3))) triplets=combinations[combinations.sum(axis=1)==X] print(f'Triplets with sum equal to {X} are:\n{triplets}') </code></pre> <p>output:</p> <pre><code>Triplets with sum equal to 10 are: [[0 1 9] [0 2 8] [0 3 7] [0 4 6] [1 2 7] [1 3 6] [1 4 5] [2 3 5]] </code></pre>
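<p>Enumerating all combinations is O(n³). If you only need a yes/no answer, a common O(n²) alternative (a sketch) sorts the array and uses two pointers:</p> <pre><code>def has_triplet(arr, X): arr = sorted(arr) n = len(arr) for i in range(n - 2): # fix the smallest element of the triplet lo, hi = i + 1, n - 1 # two pointers over the remaining sorted tail while lo &lt; hi: s = arr[i] + arr[lo] + arr[hi] if s == X: return True elif s &lt; X: lo += 1 # sum too small: move the low pointer up else: hi -= 1 # sum too large: move the high pointer down return False print(has_triplet([1, 2, 4, 3, 6], 10)) # True, e.g. {1, 3, 6} </code></pre>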
python|arrays
1
1,909,638
69,787,854
How do I sum up the numbers in a tuple that's in a list?
<p>I have the following list of tuples: <code>[(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]</code></p> <p>I would like to get the total sum (or another operation) of all the numbers in each tuple, then get the sum of the entire list.</p> <p>Desired outcome:</p> <ul> <li>addition: <code>1+2+3+(1+2)+(1+3)+(2+3)+(1+2+3) = 24</code></li> <li>multiplication: <code>1+2+3+(1×2)+(1×3)+(2×3)+(1×2×3)=23</code></li> <li>bit operator: <code>1+2+3+(1⊕2)+(1⊕3)+(2⊕3)+(1⊕2⊕3)=1+2+3+3+2+1+0 = 12.</code></li> </ul>
<p>I would solve this by iterating through your list, applying your desired operation, then taking the sum:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt; import math &gt;&gt; from operator import xor &gt;&gt; from functools import reduce &gt;&gt; my_values = [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)] # Addition &gt;&gt; addition_values = [sum(x) for x in my_values] [0, 1, 2, 3, 3, 4, 5, 6] &gt;&gt; sum(addition_values) 24 # Multiplication (skip the empty tuple, whose product would otherwise count as 1) &gt;&gt; multiplication_value = [math.prod(x) for x in my_values if x] [1, 2, 3, 2, 3, 6, 6] &gt;&gt; sum(multiplication_value) 23 # Bit operation &gt;&gt; xor_value = [reduce(xor, x) for x in my_values if x] [1, 2, 3, 3, 2, 1, 0] &gt;&gt; sum(xor_value) 12 </code></pre> <p><strong>You could put this together as a single function:</strong><br> <em>Especially helpful if you want to extend functionality to additional operators...</em></p> <pre class="lang-py prettyprint-override"><code>import math from operator import xor, mul, add from functools import reduce from typing import Callable, List, Tuple def operator_then_sum(my_list: List[Tuple], op: Callable) -&gt; int: &quot;&quot;&quot; Performs the operation on the tuples within the list, then returns the total sum of the list Args: my_list: list of tuples to perform the operator on op: a binary operator such as add, mul or xor &quot;&quot;&quot; operated_values = [reduce(op, x) for x in my_list if x] return sum(operated_values) # Test it out on your values my_values = [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)] &gt;&gt; operator_then_sum(my_values, add) 24 &gt;&gt; operator_then_sum(my_values, mul) 23 &gt;&gt; operator_then_sum(my_values, xor) 12 </code></pre>
python|tuples
1
1,909,639
55,731,835
Python: Convert list of dictionaries to list of lists
<p>I want to convert a list of dictionaries to a list of lists.</p> <p>From this:</p> <pre><code>d = [{'B': 0.65, 'E': 0.55, 'C': 0.31}, {'A': 0.87, 'D': 0.67, 'E': 0.41}, {'B': 0.88, 'D': 0.72, 'E': 0.69}, {'B': 0.84, 'E': 0.78, 'A': 0.64}, {'A': 0.71, 'B': 0.62, 'D': 0.32}] </code></pre> <p>To</p> <pre><code>[['B', 0.65, 'E', 0.55, 'C', 0.31], ['A', 0.87, 'D', 0.67, 'E', 0.41], ['B', 0.88, 'D', 0.72, 'E', 0.69], ['B', 0.84, 'E', 0.78, 'A', 0.64], ['A', 0.71, 'B', 0.62, 'D', 0.32]] </code></pre> <p>I can achieve this output with</p> <pre><code>l=[] for i in range(len(d)): temp=[] [temp.extend([k,v]) for k,v in d[i].items()] l.append(temp) </code></pre> <p><strong>My question is</strong>: </p> <ul> <li>Is there any better way to do this?</li> <li>Can I do this with a list comprehension?</li> </ul>
<p>Since you are using python 3.6.7 and <a href="https://stackoverflow.com/questions/39980323/are-dictionaries-ordered-in-python-3-6">python dictionaries are insertion ordered in python 3.6+</a>, you can achieve the desired result using <code>itertools.chain</code>:</p> <pre><code>from itertools import chain print([list(chain.from_iterable(x.items())) for x in d]) #[['B', 0.65, 'E', 0.55, 'C', 0.31], # ['A', 0.87, 'D', 0.67, 'E', 0.41], # ['B', 0.88, 'D', 0.72, 'E', 0.69], # ['B', 0.84, 'E', 0.78, 'A', 0.64], # ['A', 0.71, 'B', 0.62, 'D', 0.32]] </code></pre>
python|python-3.x|list|dictionary
3
1,909,640
55,980,045
How to perform two aggregate operations in one column of same pandas dataframe?
<p>I have a column in a pandas data frame where I want to compute two aggregates (the sum and the mean) of the same column in one result. But the problem is I am getting only one aggregated value in return.</p> <pre><code>import pandas as pd print(df) col1 col2 5 9 6 6 3 4 4 3 df.agg({'col1':'sum','col1':'mean'}) </code></pre> <p>The output of this aggregation gives only the mean:</p> <pre><code>col1 4.5 dtype: float64 </code></pre> <p>However, the output I need should have both the sum and the mean for col1, and I am only getting the mean.</p>
<p>Try the code below:</p> <pre><code>import pandas as pd from io import StringIO content= """col1 col2 5 9 6 6 3 4 4 3 """ df=pd.read_csv(StringIO(content),sep='\s+') df.agg({"col1":["sum","mean"],"col2":"std"}) </code></pre> <p>If you want to apply multiple functions to one column, you have to use a list; otherwise the later function for col1 will replace the former (that is what happens with your duplicate 'col1' dict keys). If you want to apply multiple functions to different columns, just use a dict inside the agg function. </p>
python|pandas|aggregate
2
1,909,641
50,079,895
Im not getting desired accuracy in logistic regression on MNIST
<p>I'm not getting the desired output on a logistic regression problem. I have used the MNIST dataset to predict the digit, with the Adam optimizer. The model reduces the cost but does not give good accuracy.</p> <p>My code looks like this.</p> <pre><code># In[1]: import tensorflow as tf import matplotlib.pyplot as plt import numpy as np import pandas as pd from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) get_ipython().run_line_magic('matplotlib', 'inline') # In[2]: train_x = mnist.train.images train_y = mnist.train.labels X = tf.placeholder(shape=[None,784],dtype=tf.float32,name="X") Y = tf.placeholder(shape=[None,10],dtype=tf.float32,name="Y") # In[3]: #hyperparameters training_epoches = 25 batch_size = 1000 total_batches = int(mnist.train.num_examples/batch_size) W = tf.Variable(tf.random_normal([784,10])) b = tf.Variable(tf.random_normal([10])) # In[6]: y_ = tf.nn.sigmoid(tf.matmul(X,W)+b) cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(y_), reduction_indices=1)) optimizer = tf.train.AdamOptimizer(0.01).minimize(cost) init = tf.global_variables_initializer() # In[7]: with tf.Session() as sess: sess.run(init) for epoches in range(training_epoches): for i in range(total_batches): xs_batch,ys_batch = mnist.train.next_batch(batch_size) sess.run(optimizer,feed_dict={X:train_x,Y:train_y}) print("cost after epoch %i : %f"%(epoches+1,sess.run(cost,feed_dict={X:train_x,Y:train_y}))) correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(Y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print("Accuracy:", accuracy.eval({X: mnist.test.images, Y: mnist.test.labels})) </code></pre> <p>The output of the code is:</p> <pre><code>cost after epoch 1 : 0.005403 cost after epoch 2 : 0.002935 cost after epoch 3 : 0.001866 cost after epoch 4 : 0.001245 cost after epoch 5 : 0.000877 cost after epoch 6 : 0.000652 cost after epoch 7 : 0.000507 cost after epoch 8 : 0.000407 cost after epoch 9 : 0.000334 cost after epoch 10 : 0.000279 cost after epoch 11 : 0.000237 cost after epoch 12 : 0.000204 cost after epoch 13 : 0.000178 cost after epoch 14 : 0.000156 cost after epoch 15 : 0.000138 cost after epoch 16 : 0.000123 cost after epoch 17 : 0.000111 cost after epoch 18 : 0.000100 cost after epoch 19 : 0.000091 cost after epoch 20 : 0.000083 cost after epoch 21 : 0.000076 cost after epoch 22 : 0.000070 cost after epoch 23 : 0.000065 cost after epoch 24 : 0.000060 cost after epoch 25 : 0.000056 Accuracy: 0.1859 </code></pre> <p>It is giving an accuracy of 0.1859, which is not expected.</p>
<p>You need to use - <code>y_ = tf.nn.softmax(tf.matmul(X,W)+b)</code></p> <p>instead of :</p> <p><code>y_ = tf.nn.sigmoid(tf.matmul(X,W)+b)</code></p> <p>as the MNIST data set has multi-class labels (sigmoid is used in case of 2 classes).</p> <br> <br> <p>You may also need to add a small number to</p> <p><code>cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(y_), reduction_indices=1)) </code></p> <p>like -</p> <p><code>cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(y_ + 1e-10), reduction_indices=1))</code></p> <p>in case the cost results in nan</p>
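<p>A more numerically stable alternative (a sketch against the TF 1.x API) is to let TensorFlow fuse the softmax and the cross-entropy, instead of adding an epsilon by hand:</p> <pre><code>logits = tf.matmul(X, W) + b y_ = tf.nn.softmax(logits) # keep this for predictions/accuracy cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=logits)) </code></pre>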
python-3.x|tensorflow|machine-learning|deep-learning|logistic-regression
1
1,909,642
49,822,047
Installing rpy2 on python 2.7 on macOS
<p>I have Python version 2.7.10 on macOS High Sierra and would like to install rpy2.</p> <p>When I do <i>sudo pip install rpy2</i></p> <p>I get the error message:</p> <pre><code>Command "python setup.py egg_info" failed with error code 1 in /private/tmp/pip-build-Nwbha3/rpy2/ </code></pre> <p>I have already upgraded setuptools (version 39.0.1). </p> <p>I also downloaded the older version rpy2-2.7.0.tar.gz and tried installing it with <i>sudo pip install rpy2-2.7.0.tar.gz</i>. I then get the following error message:</p> <pre><code>clang: error: unsupported option '-fopenmp' clang: error: unsupported option '-fopenmp' error: command 'cc' failed with exit status 1 ---------------------------------------- Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-O0cu4E-build/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-haDUA3-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-O0cu4E-build/ </code></pre> <p>If somebody has the answer to my installation problem, it would be greatly appreciated. Many thanks in advance!</p>
<p>The <code>clang</code> that ships with Mac does not support <code>openmp</code> which is what the <code>-fopenmp</code> flag is for. You'll likely need a version of clang that supports openmp.</p> <p>One possible solution would be to get the full llvm/clang build with openmp support. With homebrew you can do:</p> <pre><code>brew install llvm # clang/llvm brew install libomp # OpenMP support </code></pre> <p>And then try installing <code>rpy2</code> again with newly installed version of clang.</p> <p>As an example, the current version is <code>6.0.0</code> so you would run</p> <pre><code>CC=/usr/local/Cellar/llvm/6.0.0/bin/clang pip install rpy2 </code></pre>
macos|python-2.7|clang|openmp|rpy2
3
1,909,643
65,227,004
runtime error: dictionary changed size during iteration
<p>I have a social network graph 'G'.</p> <p>I'm trying to check that the keys of my graph are in the characteristics dataset as well. I ran this command:</p> <pre><code>for node in G.nodes(): if node in caste1.keys(): pass else: G = G.remove_node(node) </code></pre> <p>It shows an error: RuntimeError: dictionary changed size during iteration</p>
<p>The <code>RuntimeError</code> is self-explanatory. You're changing the size of an object you're iterating over within the iteration. It messes up the looping.</p> <p>What you can do is first iterate across the nodes to get the ones you would like to remove, and store them in a separate variable. Then you can iterate over this variable to remove the nodes:</p> <pre class="lang-py prettyprint-override"><code># Identify nodes for removal nodes2remove = [node for node in G.nodes() if node not in caste1.keys()] # Remove target-nodes (remove_node mutates G in place and returns None, # so do not reassign G) for node in nodes2remove: G.remove_node(node) </code></pre>
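<p>If G is a networkx graph, there is also a bulk removal helper that does this in one call (a sketch):</p> <pre><code># Materializing the list comprehension first keeps the removal # from interfering with the iteration over G.nodes() G.remove_nodes_from([node for node in G.nodes() if node not in caste1]) </code></pre>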
python
1
1,909,644
65,209,923
declaring a python variable in a list [data] = self.read()?
<p>While studying the open-source repo of Odoo, I found a line of code that I don't understand, like the following:</p> <pre class="lang-py prettyprint-override"><code>[data] = self.read() </code></pre> <p>Found here: <a href="https://github.com/odoo/odoo/blob/8f297c9d5f6d31370797d64fee5ca9d779f14b81/addons/hr_holidays/wizard/hr_holidays_summary_department.py#L25" rel="nofollow noreferrer">https://github.com/odoo/odoo/blob/8f297c9d5f6d31370797d64fee5ca9d779f14b81/addons/hr_holidays/wizard/hr_holidays_summary_department.py#L25</a></p> <p>I really would like to know why you would put the variable in a list.</p>
<p>It seems to ensure that <code>[data]</code> is an iterable of one item and therefore unpacks the first value from <code>self.read()</code></p> <p>It cannot be assigned to a non-iterable</p> <pre><code>&gt;&gt;&gt; [data] = 1 Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; TypeError: cannot unpack non-iterable int object </code></pre> <p>Works for iterable types, though must have a length equal to one</p> <pre><code>&gt;&gt;&gt; [data] = {'some':2} &gt;&gt;&gt; data 'some' &gt;&gt;&gt; [data] = {'foo':2, 'bar':3} Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ValueError: too many values to unpack (expected 1) &gt;&gt;&gt; [data] = [1] &gt;&gt;&gt; data 1 &gt;&gt;&gt; [data] = [[1]] &gt;&gt;&gt; data [1] &gt;&gt;&gt; [data] = [1, 2] Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ValueError: too many values to unpack (expected 1) &gt;&gt;&gt; [data] = [] Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ValueError: not enough values to unpack (expected 1, got 0) </code></pre>
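<p>For completeness, the same single-element unpacking can be written without the brackets; these forms are equivalent (a small demo):</p> <pre><code>&gt;&gt;&gt; data, = [41] # trailing comma makes the left side a 1-tuple &gt;&gt;&gt; data 41 &gt;&gt;&gt; (data,) = [42] &gt;&gt;&gt; data 42 </code></pre>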
python-3.x|odoo
3
1,909,645
68,664,116
Is there a way to change the color name's color in mplstyle file?
<p>I would like to assign colors of a matplotlib plot, e.g. by writing <code>plt.plot(x,y, color=&quot;red&quot;)</code>. But &quot;red&quot; should be a color I predefined in an mplstyle file.</p> <p>I know I can change the default colors in the mplstyle file that are used for drawing by</p> <pre><code> # colors: blue, red, green axes.prop_cycle: cycler('color', [(0,0.3,0.5),(0.9, 0.3,0.3),(0.7, 0.8, 0.3)]) </code></pre> <p>But apart from that I sometimes need to draw something with specific colors</p> <pre><code>plt.bar([0,1],[1,1],color=[&quot;green&quot;,&quot;red&quot;]) </code></pre> <p>Unfortunately, those colors always refer to the predefined &quot;green&quot; and &quot;red&quot; in matplotlib. I would now like to assign my green for &quot;green&quot; and my red for &quot;red&quot;.</p> <p>Of course, I could do a workaround and define a variable as, say, an RGB tuple</p> <pre><code>red = (0.9, 0.3,0.3) green = (0.7, 0.8, 0.3) plt.bar([0,1],[10,8],color=[red,green]) </code></pre> <p>but I think more natural would be to redefine the colors referred to by &quot;green&quot; and &quot;red&quot;.</p> <p>Do any of you know how I can change a color name's color in the mplstyle file?</p>
<p>The list of color names is stored in <code>matplotlib.colors.get_named_colors_mapping()</code> and you can edit it.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.colors import matplotlib.pyplot as plt color_map = matplotlib.colors.get_named_colors_mapping() color_map[&quot;red&quot;] = &quot;#ffaaaa&quot; color_map[&quot;green&quot;] = &quot;#aaffaa&quot; plt.bar([0, 1], [1, 1], color=[&quot;green&quot;, &quot;red&quot;]) plt.show() </code></pre>
python|matplotlib|colors
2
1,909,646
71,510,387
Returning function from another function
<p>I'm developing a package in which I want to modify incoming functions to add extra functionality. I'm not sure how I would go about doing this. A simple example of what I'm trying to do:</p> <pre class="lang-py prettyprint-override"><code>def function(argument): print(argument,end=&quot;,&quot;) # this will not work but an attempt at illustrating what I wish to accomplish def modify_function(original_function): return def new_function(arguments): for i in range(3): original_function(arguments) print(&quot;finished&quot;,end=&quot;&quot;) </code></pre> <p>and running the original function:</p> <pre class="lang-py prettyprint-override"><code>function(&quot;hello&quot;) </code></pre> <p>gives the output: <code>hello,</code></p> <p>and running the modified function:</p> <pre class="lang-py prettyprint-override"><code>modified_function = modify_function(function) modified_function(&quot;hello&quot;) </code></pre> <p>gives the output: <code>hello,hello,hello,finished</code></p>
<p>Credit to @Matiss for pointing out the way to do this. Adding this answer so it is documented.</p> <pre class="lang-py prettyprint-override"><code>def original_function(argument): print(argument, end=&quot;,&quot;) def modify_function(input_func): # Using *args and **kwargs allows handling of multiple arguments # and keyword arguments, in case something more general is required. def new_func(*args, **kwargs): for i in range(3): input_func(*args, **kwargs) print(&quot;finished&quot;, end=&quot;&quot;) return new_func modified_function = modify_function(original_function) modified_function(&quot;hello&quot;) </code></pre> <p>For other kinds of modification where you want to e.g. fix some of the arguments passed in, this way will still work, but you may also make use of <a href="https://docs.python.org/3/library/functools.html#functools.partial" rel="nofollow noreferrer">https://docs.python.org/3/library/functools.html#functools.partial</a>.</p>
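<p>One refinement worth considering: wrapping the inner function with <code>functools.wraps</code> so the modified function keeps the original's name and docstring (a sketch):</p> <pre class="lang-py prettyprint-override"><code>from functools import wraps def modify_function(input_func): @wraps(input_func) # preserves __name__, __doc__, etc. of input_func def new_func(*args, **kwargs): for i in range(3): input_func(*args, **kwargs) print(&quot;finished&quot;, end=&quot;&quot;) return new_func </code></pre>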
python
2
1,909,647
5,493,984
How to make something like a log box in wxPython
<p>I'm assuming this is possible with a multiline text box, but not sure how to do it. What I'm looking to do is make a log box in my wxPython program, where I can write messages to it when certain actions happen. Also, I need to write the messages not only when an event happens, but at certain points in the code. How would I get it to redraw the window so the messages appear at that instant?</p>
<p>I wrote an article on this sort of thing a couple years ago:</p> <p><a href="http://www.blog.pythonlibrary.org/2009/01/01/wxpython-redirecting-stdout-stderr/" rel="noreferrer">http://www.blog.pythonlibrary.org/2009/01/01/wxpython-redirecting-stdout-stderr/</a></p>
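<p>For the immediate-update part of the question, here is a minimal sketch (the class name is mine, not from the article): a read-only multiline <code>wx.TextCtrl</code> you append to, pumping pending events so the message shows up mid-computation:</p> <pre><code>import wx class LogBox(wx.TextCtrl): def __init__(self, parent): super(LogBox, self).__init__( parent, style=wx.TE_MULTILINE | wx.TE_READONLY) def log(self, message): self.AppendText(message + &quot;\n&quot;) wx.GetApp().Yield() # let the UI redraw right now </code></pre> <p>Calling <code>log()</code> anywhere in your code, not just in event handlers, should then show the message immediately.</p>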
python|logging|wxpython
6
1,909,648
67,305,550
How to select class from lists of classes based on key – "like" in dicts?
<p>I have a list of classes in my code and I want to access them based on a 'key' of a dictionary, in a loop, so I am trying to understand which could be the most appropriate way to do it. I am trying to think: if I associate an attribute to the <code>class</code>, like <code>Class.key</code>, then maybe I can iterate among my elements, but maybe I am just approaching the problem the wrong way.</p> <p>To be more clear, this is just a rough example. If my <code>dict</code> is:</p> <pre><code>dict = {'A': 'blue', 'B': 'apple', 'C': 'dog'} </code></pre> <p>and my list of classes:</p> <pre><code>listOfClasses = [Class_A, Class_B, Class_C] </code></pre> <p>I would like to operate in such a way that:</p> <pre><code>for key in dict: print(key, dict[key]) print(key, ---&quot;something like: Class.value if Class.key == key&quot; ---) </code></pre> <p>Maybe the question is confusing, or stupid, or maybe I am just approaching it the wrong way and should create a <code>dict</code> of classes (if that is possible?). Just need a piece of advice, I guess.</p>
<p>The classes themselves are python objects, and therefore valid dictionary values.</p> <pre><code>class ClassA: pass class ClassB: pass class ClassC: pass class_registry = { 'A': ClassA, 'B': ClassB, 'C': ClassC, } for name, cls in class_registry.items(): print (name, cls.__name__) </code></pre>
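<p>And since the dictionary values are the class objects themselves, you can also instantiate them from the loop; a minimal sketch:</p> <pre><code>for name, cls in class_registry.items(): obj = cls() # create an instance of the class mapped to this key print(name, obj) </code></pre>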
python|class|dictionary
2
1,909,649
60,480,715
What is the most pythonic way to iterate through a long list of strings and structure new lists from that original list?
<p>I have a large list of strings of song lyrics. Each element in the list is a song, and each song has multiple lines and some of those lines are headers such as '[Intro]', '[Chorus]' etc. I'm trying to iterate through the list and create new lists where each new list is comprised of all the lines in a certain section like '[Intro]' or '[Chorus]'. Once I achieve this I want to create a Pandas data frame where each row are all the song lyrics and each column is that section(Intro, Chorus, Verse 1, etc.) of the song. Am I thinking about this the right way? Here's an example of 1 element in the list and my current partial attempt to iterate and store:</p> <pre><code>song_index_number = 0 line_index_in_song = 0 intro = [] bridge = [] verse1 = [] prechorus = [] chorus = [] verse2 = [] verse3 = [] verse4 = [] verse5 = [] outro = [] lyrics_by_song[30] ['[Intro]', '(Just the two of us, just the two of us)', 'Baby, your dada loves you', "And I'ma always be here for you", '(Just the two of us, just the two of us)', 'No matter what happens', "You're all I got in this world", '(Just the two of us, just the two of us)', "I would never give you up for nothin'", '(Just the two of us, just the two of us)', 'Nobody in this world is ever gonna keep you from me', 'I love you', '', '[Verse 1]', "C'mon Hai-Hai, we goin' to the beach", 'Grab a couple of toys and let Dada strap you in the car seat', "Oh, where's Mama? She's takin' a little nap in the trunk", "Oh, that smell? Dada must've runned over a skunk", "Now, I know what you're thinkin', it's kind of late to go swimmin'", "But you know your Mama, she's one of those type of women", "That do crazy things, and if she don't get her way, she'll throw a fit", "Don't play with Dada's toy knife, honey, let go of it (No)", "And don't look so upset, why you actin' bashful?", "Don't you wanna help Dada build a sandcastle? (Yeah)", 'And Mama said she wants to show you how far she can float', "And don't worry about that little boo-boo on her throat", "It's just a little scratch, it don't hurt", "Her was eatin' dinner while you were sweepin'", 'And spilled ketchup on her shirt', "Mama's messy, ain't she? We'll let her wash off in the water", "And me and you can play by ourselves, can't we?", '', '[Chorus]', 'Just the two of us, just the two of us',.... for line in lyrics_by_song: if lyrics_by_song == '[Intro]': intro.append(line) </code></pre>
<p>Refer to Python's docs on list comprehensions: <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions</a></p> <p>You could also slice out one section of a single song by its headers, e.g.:</p> <pre><code>song = lyrics_by_song[30] intro = song[song.index('[Intro]') + 1 : song.index('[Verse 1]')] </code></pre> <p>See the top answer here for how the slice notation works: <a href="https://stackoverflow.com/questions/509211/understanding-slice-notation">Understanding slice notation</a>, and the sketch below for splitting every song into sections at once.</p>
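<p>A sketch of a more general split, building one dict per song keyed by section header (assuming headers look like <code>'[Intro]'</code>), which then drops straight into a DataFrame:</p> <pre><code>import pandas as pd def split_song(song_lines): sections = {} current = None for line in song_lines: if line.startswith('[') and line.endswith(']'): current = line.strip('[]') # e.g. 'Intro', 'Chorus' sections[current] = [] elif current is not None and line: sections[current].append(line) # join each section's lines back into one string per column return {name: '\n'.join(lines) for name, lines in sections.items()} df = pd.DataFrame([split_song(song) for song in lyrics_by_song]) </code></pre> <p>Each row is a song and each column a section name; songs missing a section get NaN there.</p>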
python|pandas|list|loops|logic
0
1,909,650
70,360,374
Create dataframe from existing one by counting the rows according to the values in a specific column
<p>I have this dataframe:</p> <pre><code>|order_id|customername|product_count| |1 |a |2 | |2 |b |-1 | |3 |Q |3 | |4 |a |-1 | |5 |c |-1 | |6 |Q |-1 | |7 |d |-1 | </code></pre> <p>What I want is another dataframe with the count of the rows where customername is 'Q' and the count of the rows with the rest of the items in customername, as given below, where test2 represents Q and test1 represents the rest of the items. The Percentage column is (Total request / total row count) * 100, which in this case is (5/7)*100 and (2/7)*100.</p> <pre><code>|users|Total request|Percentage| |test1 |5 | 71.4 | |test2 |2 | 28.5 | </code></pre>
<p>Compare column for <code>Q</code> and count by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a>, last rename values of index and create <code>DataFrame</code>:</p> <pre><code>df = pd.DataFrame({'order_id': [1, 2, 3, 4, 5, 6, 7], 'customername': ['a', 'b', 'Q', 'a', 'c', 'Q', 'd'], 'product_count': [2, -1, 3, -1, -1, -1, -1]}) print (df) order_id customername product_count 0 1 a 2 1 2 b -1 2 3 Q 3 3 4 a -1 4 5 c -1 5 6 Q -1 6 7 d -1 s = df['customername'].eq('Q').value_counts().rename({True:'test2', False:'test1'}) df1 = s.rename_axis('users').reset_index(name='Total request') df1['Percentage'] = df1['Total request'].div(df1['Total request'].sum()).mul(100).round(2) print (df1) users Total request Percentage 0 test1 5 71.43 1 test2 2 28.57 </code></pre>
python|pandas|dataframe
0
1,909,651
10,872,973
How can I remove values from a ranking system if they are a positive value k apart?
<p>Suppose I have the following code:</p> <pre><code>def compute_ranks(graph, k): d = .8 #dampening factor loops = 10 ranks = {} npages = len(graph) for page in graph: ranks[page] = 1.0 / npages for c in range(0, loops): newranks = {} for page in graph: newrank = (1-d) / npages for node in graph: if page in graph[node]: newrank = newrank + d * (ranks[node]/len(graph[node])) newranks[page] = newrank ranks = newranks return ranks </code></pre> <p>Alright so now suppose I want to not allow any items that can collude with each other. If I have an item dictionary </p> <pre><code>g = {'a': ['a', 'b', 'c'], 'b':['a'], 'c':['d'], 'd':['a']} </code></pre> <p>For any path A==>B, I don't want to allow paths from B==>A that are at a distance at or below my number k.</p> <p>For example if k = 0, then the only path I would not allow is A==>A.</p> <p>However if k = 2, then I would not allow the links A==>A as before but also links such as D==>A, B==>A, or A==>C.</p> <p>I know this is very confusing and a majority of my problem comes from not understanding exactly what this means. </p> <p>Here's a transcript of the question:</p> <pre><code># Question 2: Combatting Link Spam # One of the problems with our page ranking system is pages can # collude with each other to improve their page ranks. We consider # A-&gt;B a reciprocal link if there is a link path from B to A of length # equal to or below the collusion level, k. The length of a link path # is the number of links which are taken to travel from one page to the # other. # If k = 0, then a link from A to A is a reciprocal link for node A, # since no links needs to be taken to get from A to A. # If k=1, B-&gt;A would count as a reciprocal link if there is a link # A-&gt;B, which includes one link and so is of length 1. (it requires # two parties, A and B, to collude to increase each others page rank). # If k=2, B-&gt;A would count as a reciprocal link for node A if there is # a path A-&gt;C-&gt;B, for some page C, (link path of length 2), # or a direct link A-&gt; B (link path of length 1). # Modify the compute_ranks code to # - take an extra input k, which is a non-negative integer, and # - exclude reciprocal links of length up to and including k from # helping the page rank. </code></pre>
<p>A possible solution could be to introduce a recursive method which detects a collusion. Something like:</p> <pre><code>def Colluding(p1, p2, itemDict, k): if p1 == p2: return True # length-0 path, covers the k = 0 case A-&gt;A elif k == 0: return False elif p2 in itemDict[p1]: return True # direct link, path of length 1 for p in itemDict[p1]: # follow one link, then search with a shorter budget if Colluding(p, p2, itemDict, k - 1): return True return False </code></pre> <p>Then where it says <code>if item in itemDict[node]</code> you would have <code>if item in itemDict[node] and not Colluding(item,node,itemDict,k)</code> or something similar. </p> <p>That does a <a href="http://en.wikipedia.org/wiki/Depth-first_search" rel="nofollow">depth-first search</a> which might not be the best choice if there are a lot of colluding links at a small depth (say A->B->A) since they may only be found after several full depth searches. You may be able to find another way which does a <a href="http://en.wikipedia.org/wiki/Breadth-first_search" rel="nofollow">breadth-first search</a> in that case. Also, if you have a lot of links, it might be worth trying to convert to a loop instead because Python might have a stack overflow problem if you use recursion. The recursive algorithm is what came to my mind first because it seems natural to traverse trees using recursion. </p> <p>Note: Since this is help with homework, I have not tested it and some comparisons might not be quite right and need to be adjusted in some way. </p>
python
0
1,909,652
11,196,144
Running cython extension directly using python interpreter? i.e. "python test.so"
<p>I want to compile main.py into main.so and run it using the python interpreter in linux, like this: "/usr/bin/python main.so" </p> <p>How can i do this? </p> <p>So far running extensions compiled the official way give me this:</p> <pre><code>root@server:~/TEMP# python test.so File "test.so", line 1 SyntaxError: Non-ASCII character '\x8b' in file test.so on line 2,... </code></pre>
<p>You can't execute a .so directly. Because of its binary form, you have to import it with:</p> <pre><code>python -m test </code></pre> <p>If you want to make an executable out of the module, you could use the "--embed" option of cython:</p> <pre><code>cython --embed test.pyx gcc ...your flags... test.c -o test ./test </code></pre>
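<p>One common way to fill in the compiler flags (a sketch; the exact flags depend on your Python installation) is to ask <code>python-config</code>:</p> <pre><code>cython --embed test.pyx gcc $(python-config --cflags) test.c -o test $(python-config --ldflags) ./test </code></pre> <p>Note that on Python 3.8+ you may need <code>python3-config --ldflags --embed</code> to get the <code>-lpython</code> part of the link line.</p>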
python|cython
2
1,909,653
63,574,469
Transfer dependency to --dev in poetry
<p>If you accidentally install a dependency in poetry as a main dependency (i.e. <code>poetry add ...</code>), is there a way to quickly transfer it to dev dependencies (i.e. <code>poetry add --dev ...</code>), or do you have to uninstall it and reinstall with <code>poetry add --dev</code>?</p>
<p>You can move the corresponding line in the <code>pyproject.toml</code> from the <code>[tool.poetry.dependencies]</code> section to <code>[tool.poetry.dev-dependencies]</code> by hand and run <code>poetry lock --no-update</code> afterwards.</p>
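<p>Note for newer Poetry versions (1.2+): dev dependencies are declared as a dependency group, so the target section may instead be:</p> <pre><code>[tool.poetry.group.dev.dependencies] pytest = &quot;^7.0&quot; </code></pre> <p>(<code>pytest</code> here is just a placeholder for whatever package you are moving.)</p>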
python-3.x|python-poetry
30
1,909,654
56,494,062
get distinct columns dataframe based on 2 ids
<p>Hello, how can I keep only the rows where VAL is different between the 2 dataframes?</p> <p>The way I need to filter is the following:</p> <p>For each row of F1 (take each id1; if it is not null, search for that id1 in F2), compare the VAL and if it's different return it. Else look at id2 and do the same thing. </p> <p>Notice that I can have id1 or id2 or both, as shown below:</p> <pre><code>d2 = {'id1': ['X22', 'X13',np.nan,'X02','X14'],'id2': ['Y1','Y2','Y3','Y4',np.nan],'VAL1':[1,0,2,3,0]} F1 = pd.DataFrame(data=d2) d2 = {'id1': ['X02', 'X13',np.nan,'X22','X14'],'id2': ['Y4','Y2','Y3','Y1','Y22'],'VAL2':[1,0,4,3,1]} F2 = pd.DataFrame(data=d2) </code></pre> <p>Where F1 is:</p> <pre><code> id1 id2 VAL1 0 X22 Y1 1 1 X13 Y2 0 2 NaN Y3 2 3 X02 Y4 3 4 X14 NaN 0 </code></pre> <p>and F2 is:</p> <pre><code> id1 id2 VAL2 0 X02 Y4 1 1 X13 Y2 0 2 NaN Y3 4 3 X22 Y1 3 4 X14 Y22 1 </code></pre> <p>Expected output:</p> <pre><code>d2 = {'id1': ['X02',np.nan,'X22','X14'],'id2': ['Y4','Y3','Y1',np.nan],'VAL1':[3,2,1,0],'VAL2':[1,4,3,1]} F3 = pd.DataFrame(data=d2) id1 id2 VAL1 VAL2 0 X02 Y4 3 1 1 NaN Y3 2 4 2 X22 Y1 1 3 3 X14 NaN 0 1 </code></pre>
<p>OK, it is a rather complex merge, because you want to merge on 2 columns, and either of them can contain NaN, which should match anything (but not both).</p> <p>I would do 2 separate merges:</p> <ul> <li>first where <code>id1</code> is not NaN in F1, on <code>id1</code></li> <li>second where <code>id1</code> is NaN in F1, on <code>id2</code></li> </ul> <p>In both resulting dataframes, I would only keep rows where:</p> <ul> <li>VAL1 != VAL2</li> <li>AND (F1.id2 == F2.id2 or F1.id2 is NaN or F2.id2 is NaN)</li> </ul> <p>Then I would concat them. Code could be:</p> <pre><code>t = F1.loc[~F1['id1'].isna()].merge(F2, on=['id1']).query('VAL1!=VAL2') t = t[(t.id2_x==t.id2_y)|t.id2_x.isna()|t.id2_y.isna()] t2 = F1.loc[F1['id1'].isna()].merge(F2, on=['id2']).query('VAL1!=VAL2') t2 = t2[(t2.id1_x==t2.id1_y)|t2.id1_x.isna()|t2.id1_y.isna()] # build back lost columns t['id2'] = np.where(t['id2_x'].isna(), t['id2_y'], t['id2_x']) t2['id1'] = np.where(t2['id1_x'].isna(), t2['id1_y'], t2['id1_x']) # concat and reorder the columns resul = pd.concat([t.drop(columns=['id2_x', 'id2_y']), t2.drop(columns=['id1_x', 'id1_y'])], ignore_index=True, sort=True).reindex(columns= ['id1', 'id2', 'VAL1', 'VAL2']) </code></pre> <p>Result is:</p> <pre><code> id1 id2 VAL1 VAL2 0 X22 Y1 1 3 1 X02 Y4 3 1 2 X14 Y22 0 1 3 NaN Y3 2 4 </code></pre>
python|pandas|dataframe
1
1,909,655
17,692,375
Print item from list in chunks of 50 in Python
<p>I have a list with 2,500 items. I want to print the first 50 items on one line and the next 50 items on the next line, so there will be a total of 50 lines with 50 items each.</p> <pre><code>myList = ['item1', item2,..., 'item2500']

line1 = item1, item2, ..., item50
line2 = item51, item52,...., item100
.
.
line 50 = item2451, item2452,...., item 2500
</code></pre> <p>I tried some while loops but it didn't quite work out. Is there a built-in function or an easier way to do this? Thank you.</p>
<p>Same idea really, but this looks nicer and gives you a reusable chunks function as a generator, I think. Note that each chunk is joined, not the whole list:</p> <pre><code>def chunks_of_n(l,n):
    # yield successive n-sized slices of l
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

def show_my_list_in_chunks(l):
    for chunk in chunks_of_n(l,50):
        print ', '.join(chunk)
</code></pre>
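<p>If you're on Python 3, the same generator works with <code>range</code> and the <code>print()</code> function; a quick sketch:</p> <pre><code>def chunks_of_n(l, n):
    # yield successive n-sized slices of l
    for i in range(0, len(l), n):
        yield l[i:i + n]

def show_my_list_in_chunks(l):
    for chunk in chunks_of_n(l, 50):
        print(', '.join(chunk))
</code></pre>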
python|list
3
1,909,656
60,890,382
How to import zip image folder as data in a cnn model?
<p>I am trying to run a basic cnn model on cats vs dogs images. The images exist as a zip file. How can we extract the images in the zip file to a directory using the kaggle online notebook?</p> <p><a href="https://i.stack.imgur.com/rKOg3.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>Try unzipping it:</p> <pre class="lang-py prettyprint-override"><code>!unzip "filePath"
</code></pre>
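<p>If you need the extracted images in a particular directory, unzip's <code>-d</code> flag sets the destination (<code>-q</code> just silences the output). The paths below are assumptions; point them at your own dataset and working directory:</p> <pre class="lang-py prettyprint-override"><code>!unzip -q "../input/dogs-vs-cats/train.zip" -d "/kaggle/working/train"
</code></pre>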
python|deep-learning|kaggle|conv-neural-network
0
1,909,657
72,682,018
How to convert multiple yolo darknet format into .csv file
<p>First, I have around 321 Yolo darknet txt files. Each file contains 5 or 6 rows of text, as in the example below.</p> <pre><code>1 0.778906 0.614583 0.0828125 0.0958333
0 0.861719 0.584375 0.0984375 0.10625
0 0.654688 0.6 0.14375 0.125
0 0.254687 0.663542 0.146875 0.139583
0 0.457031 0.64375 0.120312 0.108333
0 0.960938 0.566667 0.078125 0.129167
</code></pre> <p>(The first column, 1 or 0, is the class; the other columns are the coordinates x, y, w, h.)</p> <p>I tried to convert them to a csv file and found the solution below.</p> <pre><code>os.chdir(r'C:\xxx\labels')
myFiles = glob.glob('*.txt')
width=1024
height=1024
image_id=0
final_df=[]

for item in myFiles:
    row=[]
    bbox_temp=[]
    with open(item, 'rt') as fd:
        first_line = fd.readline()
        splited = first_line.split();
        row.append(fd.readline(1))
        row.append(width)
        row.append(height)
        try:
            bbox_temp.append(float(splited[1])*width)
            bbox_temp.append(float(splited[2])*height)
            bbox_temp.append(float(splited[3])*width)
            bbox_temp.append(float(splited[4])*height)
            row.append(bbox_temp)
            final_df.append(row)
        except:
            print(&quot;file is not in YOLO format!&quot;)

df = pd.DataFrame(final_df,columns=['image_id', 'width', 'height','bbox'])
df.to_csv(&quot;saved.csv&quot;,index=False)
</code></pre> <p>and got this output.</p> <p><a href="https://i.stack.imgur.com/mWbkg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mWbkg.png" alt="enter image description here" /></a></p> <p>But this code writes only the first line of each Yolo Darknet txt to the CSV. I want to get all the rows (5 or 6 rows for each text file).</p> <p>If the code worked, the CSV should have 321 * 5 or 6 = 1,xxx rows x 4 columns.</p> <p>Please help me adjust this code.</p>
<p>Can you just replace <code>first_line = fd.readline()</code> with <code>for first_line in fd.readlines():</code> and indent the remainder of the code? You may also need to move <code>row=[]</code> and <code>bbox_temp=[]</code> into the new for loop.</p>
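<p>A rough, untested sketch of the adjusted loop with those changes applied, keeping the asker's variable names (the image_id handling is simplified here):</p> <pre><code>for item in myFiles:
    with open(item, 'rt') as fd:
        for first_line in fd.readlines():   # one iteration per annotation row
            splited = first_line.split()
            if not splited:
                continue                    # skip blank lines
            row = []                        # moved inside so each row starts fresh
            bbox_temp = []
            row.append(image_id)
            row.append(width)
            row.append(height)
            try:
                bbox_temp.append(float(splited[1])*width)
                bbox_temp.append(float(splited[2])*height)
                bbox_temp.append(float(splited[3])*width)
                bbox_temp.append(float(splited[4])*height)
                row.append(bbox_temp)
                final_df.append(row)
            except:
                print(&quot;file is not in YOLO format!&quot;)
</code></pre>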
python|yolov5
1
1,909,658
72,763,804
Why are these two strings not equal in Python?
<p>I have a simple Python code sample</p> <pre><code>import json hello = json.dumps(&quot;hello&quot;) print(type(hello)) if hello == &quot;hello&quot;: print(&quot;They are equal&quot;) else: print(&quot;They are not equal&quot;) </code></pre> <p>This is evaluating to &quot;They are not equal&quot;. I don't understand why these values are not equal.</p> <p>I'm re-familiarizing myself with Python but I read that this &quot;==&quot; can be used as an <a href="https://www.journaldev.com/23511/python-string-comparison" rel="nofollow noreferrer">operator</a> to compare strings in Python. I also printed the type of hello which evaluates to &quot;str&quot;</p> <p>Can someone clarify this?</p>
<p>The behavior becomes much more clear once you print out the result from <code>json.dumps()</code>:</p> <pre class="lang-py prettyprint-override"><code>print(&quot;hello&quot;, len(&quot;hello&quot;)) print(hello, len(hello)) </code></pre> <p>This outputs:</p> <pre class="lang-none prettyprint-override"><code>hello 5 &quot;hello&quot; 7 </code></pre> <p><code>json.dumps()</code> adds extra quotation marks -- you can see that the lengths of the two strings aren't the same. This is why your check fails.</p>
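<p>The fix depends on the direction you want: either compare against the JSON form, or decode it back with <code>json.loads</code>. A quick illustration:</p> <pre><code>import json

hello = json.dumps("hello")          # '"hello"' (a JSON document, quotes included)
print(json.loads(hello) == "hello")  # True
</code></pre>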
python
8
1,909,659
68,182,266
Issue using progress_recorder (celery-progress): extends time of task
<p>I want to use celery-progress to display a progress bar when downloading csv files.</p> <p>My task loops over a list of csv files, opens each file, filters the data and produces a zip folder with the filtered csv files (see code below).</p> <p>But depending on where set_progress is called, the task takes much more time.</p> <p>If I count (and call set_progress) per file processed, it is quite fast even for files with 100000 records.</p> <p>But if I count per record in each file, which would be more informative for the user, it extends the time roughly 20-fold.</p> <p>I do not understand why.</p> <p>How can I manage this issue?</p> <pre><code>for file in listOfFiles:
    # 1 - count for files processed
    i += 1
    progress_recorder.set_progress(i,numberOfFilesToProcess, description='Export in progress...')
    records = []
    with open(import_path + file, newline='', encoding=&quot;utf8&quot;) as csvfile:
        spamreader = csv.reader(csvfile, delimiter=',', quotechar='|')
        csv_headings = ','.join(next(spamreader))
        for row in spamreader:
            # 2 - count for records in each file processed (files with 100000 records)
            # i += 1
            # progress_recorder.set_progress(i,100000, description='Export in progress...')
            site = [row[0][positions[0]:positions[1]]]
            filtered_site = filter(lambda x: filter_records(x,sites),site)
            for site in filtered_site:
                records.append(','.join(row))
</code></pre>
<p>If there is a very high number of records then likely there's no need to update the progress on every one, and the overhead of updating the progress in the backend every time could become substantial. Instead you could do something like this in the inner loop:</p> <pre><code>if i % 100 == 0: # update progress for every 100th entry progress_recorder.set_progress(i,numberOfFilesToProcess, description='Export in progress...') </code></pre>
python|django|progress-bar|django-celery
1
1,909,660
63,035,639
how to remove axis label while keeping ticklabel and tick in matplotlib
<p>I would like to remove the axis labels while keeping the ticks and tick labels.</p> <p>This is the <code>seaborn heatmap</code> example.</p> <p>In this case, I'd like to remove only the y-axis label ('month') and the x-axis label ('year').</p> <p>I tried the following, but I couldn't remove only the labels.</p> <pre><code>ax.xaxis.set_visible(False)
ax.set_xticklabels([])
ax.set_xticks([])
</code></pre> <p><a href="https://i.stack.imgur.com/npqDf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/npqDf.png" alt="enter image description here" /></a></p> <p>The code follows.</p> <pre><code>import seaborn as sns
import matplotlib.pylab as plt

fig, ax = plt.subplots()

flights = sns.load_dataset(&quot;flights&quot;)
flights = flights.pivot(&quot;month&quot;, &quot;year&quot;, &quot;passengers&quot;)
ax = sns.heatmap(flights)

#ax.xaxis.set_visible(False)  # remove axis label &amp; xtick label
#ax.set_xticklabels([])       # remove xtick label
#ax.set_xticks([])            # remove xtick label &amp; tick

plt.show()
plt.savefig(&quot;out.png&quot;)
</code></pre>
<p>Setting the labels to empty strings removes them while leaving the ticks and tick labels intact:</p> <pre><code>ax.set_xlabel('')
ax.set_ylabel('')
</code></pre>
python|matplotlib
2
1,909,661
63,019,765
PySpark add column if date in range by quarter
<p>I have a df as follows:</p> <pre><code>name date x 2020-07-20 y 2020-02-13 z 2020-01-21 </code></pre> <p>I need a new column with the corresponding quarter as an integer, e.g.</p> <pre><code>name date quarter x 2020-07-20 3 y 2020-02-13 1 z 2020-01-21 1 </code></pre> <p>I have defined my quarters as a list of strings so I thought I could use .withColumn + when col('date') in quarter range but get an error saying I cannot convert column to boolean.</p>
<p>You can use <code>quarter</code> function to extract it as an integer.</p> <pre><code>from pyspark.sql.functions import * df1=spark.createDataFrame([(&quot;x&quot;,&quot;2020-07-20&quot;),(&quot;y&quot;,&quot;2020-02-13&quot;),(&quot;z&quot;,&quot;2020-01-21&quot;)], [&quot;name&quot;, &quot;date&quot;]) df1.show() +----+----------+ |name| date| +----+----------+ | x|2020-07-20| | y|2020-02-13| | z|2020-01-21| +----+----------+ df1.withColumn(&quot;quarter&quot;, quarter(col(&quot;date&quot;))).show() +----+----------+-------+ |name| date|quarter| +----+----------+-------+ | x|2020-07-20| 3| | y|2020-02-13| 1| | z|2020-01-21| 1| +----+----------+-------+ </code></pre>
python-3.x|apache-spark|apache-spark-sql
2
1,909,662
62,185,453
Unexpected valid syntax in annotated assignment expression
<p>Apparently (and surprisingly, to me at least), this is a perfectly valid expression in Python 3.6+:</p> <pre><code>x: 10 </code></pre> <p>What is up with this? I checked it out using the <code>ast</code> module and got the following:</p> <pre class="lang-py prettyprint-override"><code>[ins] In [1]: ast.parse('x: 10').body Out[1]: [&lt;_ast.AnnAssign at 0x110ff5be0&gt;] </code></pre> <p>Okay so it's an annotated assignment. I looked up the <a href="https://docs.python.org/3.6/reference/grammar.html" rel="nofollow noreferrer">grammar reference</a> and saw that it corresponds to this rule:</p> <pre><code>annassign: ':' test ['=' test] </code></pre> <p>This doesn't make a lot of sense to me. If it's an annotated <em>assignment</em>, then why is the assignment portion of the expression optional? What could it possibly mean if the assignment portion of the expression isn't present? Isn't that really strange? </p> <p>The <code>annasign</code> node is only referenced in one rule:</p> <pre><code>expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) | ('=' (yield_expr|testlist_star_expr))*) </code></pre> <p>In each of the other possible projections at that level, some kind of assignment expression is required (<code>augassign</code> is a token like <code>+=</code>). So why is it optional for <code>annassign</code>?</p> <p>I guess it's plausible that this is intended to be the annotated version of a bare name expression (i.e. just <code>x</code>), but it's really quite confusing. I'm not too familiar with the static type checkers out there but can they make use of an annotation like this?</p> <p>More than likely this is intentional, but it kind of seems like a bug. It's a little bit problematic, because it's possible to write syntactically valid but utterly nonsensical code like this:</p> <pre><code>a = 1 b = 2 c: 3 # see what I did there? oops! d = 4 </code></pre> <p>I recently made a similar mistake in my own code when I converted a <code>dict</code> representation into separate variables, and only got caught out when my test pipeline ran in a Python 3.5 environment and produced a <code>SyntaxError</code>.</p> <p>Anyway, I'm mostly just curious about the intent, but would also be really excited to find out that I discovered an actual grammar bug.</p>
<p>It's classified as an annotated assignment for parser reasons. If there were separate <code>annotation</code> and <code>annassign</code> rules, Python's LL(1) parser wouldn't be able to tell which one it's supposed to be parsing when it sees the colon.</p>
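<p>For illustration, a bare annotation is recorded without binding any value, which a quick check at module level shows:</p> <pre><code>x: int                   # annotation only; no assignment happens
print(__annotations__)   # {'x': &lt;class 'int'&gt;}
print('x' in dir())      # False, x was never bound
</code></pre>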
python|syntax|type-annotation
2
1,909,663
62,099,401
Downloading excel file from url in pandas (post authentication)
<p>I am facing a strange problem, which I don't fully understand given my lack of knowledge of HTML.<br><br> I want to download an excel file, post login, from a website. The file_url is:<br></p> <pre><code>file_url="https://xyz.xyz.com/portal/workspace/IN AWP ABRL/Reports &amp; Analysis Library/CDI Reports/CDI_SM_Mar'20.xlsx"
</code></pre> <p><br>There is a share button for the file which gives link 2 (for the same file):</p> <pre><code>file_url2='http://xyz.xyz.com/portal/traffic/4a8367bfd0fae3046d45cd83085072a0'
</code></pre> <p>When I use requests.get to read link 2 (post login to a session) I am able to read the excel into pandas. However, link 2 does not serve my purpose, as I can't schedule my report on it on a periodic basis (by changing Mar'20 to Apr'20 etc). Link 1 suits my purpose but gives the following in r.content when fetched with requests.get:</p> <pre><code>b'\n\n\n\n\n\n\n\n\n\n&lt;html&gt;\n\t&lt;head&gt;\n\t\t&lt;title&gt;&lt;/title&gt;\n\t&lt;/head&gt;\n\t\n\t&lt;body bgcolor="#FFFFFF"&gt;\n\t\n\n\t&lt;script language="javascript"&gt;\n\t\t&lt;!-- \n\t\t\ttop.location.href="https://xyz.xyz.com/portal/workspace/IN%20AWP%20ABRL/Reports%20&amp;%20Analysis%20Library/CDI%20Reports/CDI_SM_Mar\'20.xlsx";\t\n\t\t--&gt;\n\t&lt;/script&gt;\n\t&lt;/body&gt;\n&lt;/html&gt;'
</code></pre> <p>I have tried all sorts of encoding and decoding of the url but can't make sense of this alphanumeric url (link 2).</p> <p>My python code (working) is:</p> <pre><code>import requests
url = 'http://xyz.xyz.com/portal/site'
username=''
password=''
s = requests.Session()
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
r = s.get(url,auth=(username, password),verify=False,headers=headers)
r2 = s.get(file_url,verify=False,allow_redirects=True)
r2.content
# df=pd.read_excel(BytesIO(r2.content))
</code></pre>
<p>You get HTML with JavaScript which redirects the browser to a new url. But <code>requests</code> can't run <code>JavaScript</code>; this is a simple method sites use to block simple scripts/bots.</p> <p>But HTML is only a string, so you can use string functions to get the url out of it and then use this url with <code>requests</code> to get the file.</p> <pre><code>content = b'\n\n\n\n\n\n\n\n\n\n&lt;html&gt;\n\t&lt;head&gt;\n\t\t&lt;title&gt;&lt;/title&gt;\n\t&lt;/head&gt;\n\t\n\t&lt;body bgcolor="#FFFFFF"&gt;\n\t\n\n\t&lt;script language="javascript"&gt;\n\t\t&lt;!-- \n\t\t\ttop.location.href="https://xyz.xyz.com/portal/workspace/IN%20AWP%20ABRL/Reports%20&amp;%20Analysis%20Library/CDI%20Reports/CDI_SM_Mar\'20.xlsx";\t\n\t\t--&gt;\n\t&lt;/script&gt;\n\t&lt;/body&gt;\n&lt;/html&gt;'

text = content.decode()
print(text)
print('\n---\n')

start = text.find('href="') + len('href="')
end = text.find('";', start)

url = text[start:end]
print('url:', url)

response = s.get(url)
</code></pre> <p>Results:</p> <pre><code>&lt;html&gt;
	&lt;head&gt;
		&lt;title&gt;&lt;/title&gt;
	&lt;/head&gt;
	
	&lt;body bgcolor="#FFFFFF"&gt;
	

	&lt;script language="javascript"&gt;
		&lt;!-- 
			top.location.href="https://xyz.xyz.com/portal/workspace/IN%20AWP%20ABRL/Reports%20&amp;%20Analysis%20Library/CDI%20Reports/CDI_SM_Mar'20.xlsx";	
		--&gt;
	&lt;/script&gt;
	&lt;/body&gt;
&lt;/html&gt;

---

url: https://xyz.xyz.com/portal/workspace/IN%20AWP%20ABRL/Reports%20&amp;%20Analysis%20Library/CDI%20Reports/CDI_SM_Mar'20.xlsx
</code></pre>
python|python-requests|urllib|python-requests-html|python-responses
1
1,909,664
62,079,423
How to find all symlinks in directory and its subdirectories in python
<p>I need to list symlinks using python, broken ones as well.</p> <p>How do I do it? I have searched everywhere and tried a lot.</p> <p>The best result I found was:</p> <pre><code>import os,sys
print '\n'.join([os.path.join(sys.argv[1],i) for i in os.listdir(sys.argv[1]) if os.path.islink(os.path.join(sys.argv[1],i))])
</code></pre> <p>It does not show where each link points to, and it doesn't descend into subdirectories.</p>
<p>You can use code similar to this one to achieve what you need. Directories to search are passed as arguments, or the current directory is taken as the default. The <code>lll</code> function already recurses into subdirectories; you could also rewrite it with the <code>os.walk</code> method if you prefer.</p> <pre><code>import sys, os

def lll(dirname):
    for name in os.listdir(dirname):
        if name not in (os.curdir, os.pardir):
            full = os.path.join(dirname, name)
            if os.path.isdir(full) and not os.path.islink(full):
                lll(full)
            elif os.path.islink(full):
                print(name, '-&gt;', os.readlink(full))

def main(args):
    if not args: args = [os.curdir]
    first = 1
    for arg in args:
        if len(args) &gt; 1:
            if not first: print()
            first = 0
            print(arg + ':')
        lll(arg)

if __name__ == '__main__':
    main(sys.argv[1:])
</code></pre> <p>Ref: <a href="https://github.com/python/cpython/blob/master/Tools/scripts/lll.py" rel="nofollow noreferrer">https://github.com/python/cpython/blob/master/Tools/scripts/lll.py</a></p>
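<p>For reference, a minimal sketch of the same listing built on <code>os.walk</code>, which does not follow symlinked directories by default, so links (including broken ones) are only reported, never traversed:</p> <pre><code>import os

def find_symlinks(root):
    for dirpath, dirnames, filenames in os.walk(root):
        # symlinked directories show up in dirnames, broken links in filenames
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            if os.path.islink(full):
                print(full, '-&gt;', os.readlink(full))
</code></pre>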
python
1
1,909,665
35,681,902
openpyxl.load_workbook(file, data_only=True) doesn't work?
<p>Why does x = "None" instead of "500"? I have tried everything that I know and searched 1 hour for answer... Thank you for any help!</p> <pre><code>import openpyxl wb = openpyxl.Workbook() sheet = wb.active sheet["A1"] = 200 sheet["A2"] = 300 sheet["A3"] = "=SUM(A1+A2)" wb.save("writeFormula.xlsx") wbFormulas = openpyxl.load_workbook("writeFormula.xlsx") sheet = wbFormulas.active print(sheet["A3"].value) wbDataOnly = openpyxl.load_workbook("writeFormula.xlsx", data_only=True) sheet = wbDataOnly.active x = (sheet["A3"].value) print(x) # None? Should print 500? </code></pre>
<p>From the <a href="https://openpyxl.readthedocs.org/en/latest/usage.html#using-formulae" rel="nofollow">documentation</a>:</p> <blockquote> <p>openpyxl never evaluates formula</p> </blockquote> <p>With <code>data_only=True</code>, openpyxl returns the value that was cached the last time an application such as Excel evaluated the workbook. Your file was created by openpyxl and never opened and saved in Excel, so no cached result exists and the cell value is <code>None</code>.</p>
python|excel|openpyxl
4
1,909,666
58,984,413
How to programmatically pass the callable to gunicorn instead of arguments
<p>I have the following implementation to spin up a web app using gunicorn</p> <pre class="lang-py prettyprint-override"><code>@click.command("run_app", help="starts application in gunicorn")
def run_uwsgi():
    """
    Runs the project in gunicorn
    """
    import sys

    sys.argv = ["--gunicorn"]
    sys.argv.append("-b 0.0.0.0:5000")
    sys.argv.append("myapp.wsgi:application")
    WSGIApplication(usage="%(prog)s [OPTIONS] [APP_MODULE]").run()
</code></pre> <p>This spins up the app using gunicorn. As per the requirement, how can I spin this up without using arguments; is there a way to hand the values to gunicorn programmatically instead of assigning them to sys.argv?</p>
<p>I would like to post the solution I worked out:</p> <pre><code>@click.command(&quot;uwsgi&quot;, help=&quot;starts application in gunicorn&quot;)
def run_uwsgi():
    &quot;&quot;&quot;
    Runs the project in gunicorn
    &quot;&quot;&quot;
    from gunicorn.app.base import Application
    import sys

    class MyApplication(Application):
        &quot;&quot;&quot;
        Bypasses the class `WSGIApplication` and makes it independent from
        command line arguments
        &quot;&quot;&quot;

        def init(self, parser, opts, args):
            self.cfg.set(&quot;default_proc_name&quot;, args[0])
            # Added this to ensure the application integrity
            self.app_uri = &quot;myapp.wsgi:application&quot;

        def load_wsgiapp(self):
            # This does the trick: returns the application callable
            return application

        def load(self):
            return self.load_wsgiapp()

    sys.argv = [&quot;--gunicorn&quot;]
    sys.argv.append(f&quot;-b {os.environ['APP_HOST']}:{os.environ['APP_PORT']}&quot;)
    # Throws an error if this is missing.
    sys.argv.append(&quot;myapp.wsgi:application&quot;)
    MyApplication(usage=&quot;%(prog)s [OPTIONS] [APP_MODULE]&quot;).run()
</code></pre> <p>I am directly returning the callable from</p> <pre><code>def load_wsgiapp(self):
    # This does the trick: returns the application callable
    return application
</code></pre> <p><code>wsgi.py</code></p> <pre><code>application = main.create_app()
</code></pre> <p>But I still needed to pass a command line argument for the module, otherwise it throws an error. If you are using Nuitka to bundle your application, you can spin it up and use it with gunicorn.</p>
python|python-3.x|gunicorn
0
1,909,667
49,083,742
NLTK - Lemmatizing the tokens before being chunked
<p>I am currently stuck on this problem.</p> <p>NLTK's chunking function is like this:</p> <pre><code>tokens = nltk.word_tokenize(word)
tagged = nltk.pos_tag(tokens)
chunking = nltk.chunk.ne_chunk(tagged)
</code></pre> <p>Is there any way to lemmatize the tokens with their tags before they are chunked? Like</p> <p><code>lmtzr.lemmatize('tokens, pos=tagged)</code></p> <p>I have tried to lemmatize the chunk, but it is not working (the error says something about chunking being a list). I am new to python, so my knowledge of it isn't that great. Any help would be great!</p>
<p>You can <code>lemmatize</code> directly without <code>pos_tag</code> - </p> <pre><code>import nltk from nltk.corpus import wordnet lmtzr = nltk.WordNetLemmatizer() word = "Here are words and cars" tokens = nltk.word_tokenize(word) token_lemma = [ lmtzr.lemmatize(token) for token in tokens ] tagged = nltk.pos_tag(token_lemma) chunking = nltk.chunk.ne_chunk(tagged) </code></pre> <p><strong>Output</strong></p> <pre><code>['Here', 'are', 'word', 'and', 'car'] # lemmatize output [('Here', 'RB'), ('are', 'VBP'), ('word', 'NN'), ('and', 'CC'), ('car', 'NN')] (S Here/RB are/VBP word/NN and/CC car/NN) </code></pre>
python|nltk
1
1,909,668
49,106,322
Add a row to pandas dataframe based on dictionary
<p>Here is my example dataframe row:</p> <pre><code>A B C D E </code></pre> <p>I have a dictionary formatted like:</p> <pre><code>{'foo': ['A', 'B', 'C'], 'bar': ['D', 'E']} </code></pre> <p>I would like to add a row above my original dataframe so my new dataframe is:</p> <pre><code>foo foo foo bar bar A B C D E </code></pre> <p>I think maybe the df.map function should be able to do it, but I've tried it and can't seem to get the syntax right.</p>
<p>I believe you want to set the column names from a row of the <code>DataFrame</code> using a <code>dict</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a>:</p> <pre><code>d = {'foo': ['A', 'B', 'C'], 'bar': ['D', 'E']}
#swap keys with values
d1 = {k: oldk for oldk, oldv in d.items() for k in oldv}
print (d1)
{'E': 'bar', 'A': 'foo', 'D': 'bar', 'B': 'foo', 'C': 'foo'}

df = pd.DataFrame([list('ABCDE')])
df.columns = df.iloc[0].map(d1).values
print (df)
  foo foo foo bar bar
0   A   B   C   D   E
</code></pre> <p>If you instead need the mapped values added as a new first row of the one-row <code>DataFrame</code>:</p> <pre><code>df = pd.DataFrame([list('ABCDE')])
df.loc[-1] = df.iloc[0].map(d1)
df = df.sort_index().reset_index(drop=True)
print (df)
     0    1    2    3    4
0  foo  foo  foo  bar  bar
1    A    B    C    D    E
</code></pre>
python-3.x|pandas|dataframe
1
1,909,669
49,040,765
Python's print function in a class
<p>I can't execute <code>print</code> function in the class:</p> <pre><code>#!/usr/bin/python import sys class MyClass: def print(self): print 'MyClass' a = MyClass() a.print() </code></pre> <p>I'm getting the following error:</p> <pre><code>File "./start.py", line 9 a.print() ^ SyntaxError: invalid syntax </code></pre> <p>Why is it happening?</p>
<p>In Python 2, <code>print</code> is a <a href="https://docs.python.org/2/reference/lexical_analysis.html#keywords" rel="nofollow noreferrer">keyword</a>. It can only be used for its intended purpose. It can't be the name of a variable or a function.</p> <p>In Python 3, <code>print</code> is a <a href="https://docs.python.org/3/library/functions.html#print" rel="nofollow noreferrer">built-in function</a>, not a keyword. So methods, for example, can have the name <code>print</code>.</p> <p>If you are using Python 2 and want to override its default behavior, you can import Python 3's behavior from <a href="https://docs.python.org/2/reference/simple_stmts.html#future" rel="nofollow noreferrer"><code>__future__</code></a>:</p> <pre><code>from __future__ import print_function

class MyClass:
    def print(self):
        print ('MyClass')

a = MyClass()
a.print()
</code></pre>
python|python-2.x
10
1,909,670
70,895,609
creating event gives 400 bad request - Google calendar API using AUTHLIB (Python package)
<h3>This is my request to create an event (using AuthLib from PyPI)</h3> <pre><code>resp = google.post(
    'https://www.googleapis.com/calendar/v3/calendars/primary/events',
    data={
        'start': {'dateTime': today.strftime(&quot;%Y-%m-%dT%H:%M:%S+05:30&quot;), 'timeZone': 'Asia/Kolkata'},
        'end': {'dateTime': tomorrow.strftime(&quot;%Y-%m-%dT%H:%M:%S+05:30&quot;), 'timeZone': 'Asia/Kolkata'},
        'reminders': {'useDefault': True},
    },
    token=dict(session).get('token'))
</code></pre> <h2>The response I'm getting</h2> <p><code>{&quot;error&quot;:{&quot;code&quot;:400,&quot;errors&quot;:[{&quot;domain&quot;:&quot;global&quot;,&quot;message&quot;:&quot;Bad Request&quot;,&quot;reason&quot;:&quot;badRequest&quot;}],&quot;message&quot;:&quot;Bad Request&quot;}}</code></p> <h1>Notes:</h1> <ol> <li>I have done get requests with the same library and methods and they work</li> <li>Scopes included (as mentioned in the documentation) <ul> <li><a href="https://www.googleapis.com/auth/calendar" rel="nofollow noreferrer">https://www.googleapis.com/auth/calendar</a></li> <li><a href="https://www.googleapis.com/auth/calendar.events" rel="nofollow noreferrer">https://www.googleapis.com/auth/calendar.events</a></li> </ul> </li> </ol> <h1>Also note:</h1> <p>I have gone through all the existing stackoverflow answers regarding this specific error and api endpoint; most of them had an issue with their time formatting, which I have tried all of, and I am now sticking with the google api docs time format.</p>
<p>You can try the <a href="https://github.com/kuzmoyev/google-calendar-simple-api" rel="nofollow noreferrer">gcsa</a> library (<a href="https://google-calendar-simple-api.readthedocs.io/en/latest/?badge=latest" rel="nofollow noreferrer">documentation</a>). It handles the time formatting for you:</p> <pre><code>from gcsa.google_calendar import GoogleCalendar
from gcsa.event import Event

calendar = GoogleCalendar('your_calendar_id@gmail.com')  # placeholder calendar id

event = Event(
    'My event',
    start=today,
    end=tomorrow,
    timezone='Asia/Kolkata'
)

calendar.add_event(event)
</code></pre> <p>Install it with</p> <pre><code>pip install gcsa
</code></pre>
python|google-calendar-api|authlib
0
1,909,671
70,978,732
Top and right axes labels in matplotlib pyplot
<p>I have a matplotlib/pyplot plot that appears as I want, in that the axes show the required range of values from -1 to +1 on both the x and y axes. I have labelled the x and y axes. However I also wish to label the right-hand vertical axis with the text &quot;Thinking&quot; and the top axis with the text &quot;Extraversion&quot;.</p> <p>I have looked at the matplotlib documentation but can't get my code to execute using set_xlabel and set_ylabel. I have commented these lines out in my code so my code runs for now - but hopefully the comments will make it clear enough what I am trying to do.</p> <pre><code>import matplotlib.pyplot as plt w = 6 h = 6 d = 70 plt.figure(figsize=(w, h), dpi=d) x = [-0.34,-0.155,0.845,0.66,-0.34] y = [0.76,0.24,-0.265,0.735,0.76,] plt.plot(x, y) plt.xlim(-1,1) plt.ylim(-1,1) plt.xlabel(&quot;Intraverted&quot;) plt.ylabel(&quot;Feeling&quot;) #secax = plt.secondary_xaxis('top') #secax.set_xlabel('Extraverted') #secay = plt.secondary_xaxis('right') #secay.set_ylabel('Thinking') #plt.show() plt.savefig(&quot;out.png&quot;) </code></pre>
<p>As @Mr. T pointed out, there is no <code>plt.secondary_xaxis</code> method so you need the axes object</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt plt.figure(figsize=(6, 6), constrained_layout=True, dpi=70) x = [-0.34,-0.155,0.845,0.66,-0.34] y = [0.76,0.24,-0.265,0.735,0.76,] plt.plot(x, y) plt.xlim(-1,1) plt.ylim(-1,1) plt.xlabel(&quot;Intraverted&quot;) plt.ylabel(&quot;Feeling&quot;) secax = plt.gca().secondary_xaxis('top') secax.set_xlabel('Extraverted') secay = plt.gca().secondary_yaxis('right') secay.set_ylabel('Thinking') #plt.show() plt.savefig(&quot;out.png&quot;) </code></pre> <p><a href="https://i.stack.imgur.com/yKlay.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yKlay.png" alt="enter image description here" /></a></p> <p>Better, would be just to create the axes object from the start:</p> <pre class="lang-py prettyprint-override"><code>fig, ax = plt.subplots(figsize=(w, h), constrained_layout=True, dpi=d) ... ax.plot(x, y) ax.set_xlim(-1, 1) ... secax = ax.secondary_xaxis('top') ... fig.savefig(&quot;out.png&quot;) </code></pre> <p>Further note the use of <code>constrained_layout=True</code> to make the secondary yaxis label fit on the figure.</p>
python-3.x|matplotlib
2
1,909,672
67,734,136
How to create a groupby of two columns with all possible combinations and aggregated results
<p>I want to group a large dataframe over two or more columns and aggregate the other columns. I use groupby, but realised after some time that <code>groupby(label1, label2)</code> only creates rows for existing combinations of label1 and label2. Example:</p> <pre><code>lijst = [['a', 1, 3], ['b', 2, 6], ['a', 2, 7], ['b', 2, 2], ['a', 1, 8]]
data = pd.DataFrame(lijst, columns=['letter', 'cijfer', 'getal'])
data['Aantal'] = 0
label1 = 'letter'
label2 = 'cijfer'
df = data.groupby([label1, label2]).agg({'Aantal': 'count', 'getal': sum})
</code></pre> <p>Result:</p> <pre><code>               Aantal  getal
letter cijfer
a      1            2     11
       2            1      7
b      2            2      8
</code></pre> <p>And I wanted something like:</p> <pre><code>               Aantal  getal
letter cijfer
a      1            2     11
       2            1      7
b      1          NaN    NaN
       2            2      8
</code></pre> <p>I tried <a href="https://stackoverflow.com/questions/48485275/pandas-groupby-two-columns-include-all-possible-values-of-column-2-per-group">this link</a> and several others, but none of them handles the case of having to aggregate many columns (sorry if I have missed it).</p> <p>The only solution I can think of is making a template DataFrame from:</p> <pre><code>template = pd.DataFrame(index=pd.MultiIndex.from_product([data[label1].unique(), data[label2].unique()]),
                        columns=df.columns)
</code></pre> <p>and then copying all the data over from df. That seems to me a very tedious solution. Is there a better way to get what I want?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>DataFrame.unstack</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a>:</p> <pre><code>df = df.unstack().stack(dropna=False) print (df) Aantal getal letter cijfer a 1 2.0 11.0 2 1.0 7.0 b 1 NaN NaN 2 2.0 8.0 </code></pre> <p>Or another idea with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a>:</p> <pre><code>df = df.reindex(pd.MultiIndex.from_product(df.index.levels)) print (df) Aantal getal letter cijfer a 1 2.0 11.0 2 1.0 7.0 b 1 NaN NaN 2 2.0 8.0 </code></pre>
python|pandas
1
1,909,673
66,974,698
How to tell an asyncio.Protocol some information?
<p>After reading a post <a href="https://stackoverflow.com/questions/50678184/how-to-pass-additional-parameters-to-handle-client-coroutine">here</a>, I thought it was possible to send arguments to my protocol factory using a lambda function, but for some reason it just doesn't work (it doesn't recognize any connection). Since create_server doesn't accept extra arguments for the protocol factory, how could I tell my protocol some useful information? I start a bunch of them using a loop, one for every port in a list, but after that I can't tell which protocol is which.</p> <p>Any ideas?</p>
<p>Alright, I found the problem.</p> <p>Instead of using the lambda like in the <a href="https://stackoverflow.com/questions/50678184/how-to-pass-additional-parameters-to-handle-client-coroutine">example</a>:</p> <pre class="lang-py prettyprint-override"><code>await asyncio.start_server(lambda r, w: handle_client(r, w, session), '', 55555) </code></pre> <p>I should be using lambda like this:</p> <pre class="lang-py prettyprint-override"><code>await asyncio.start_server(lambda: handle_client(r, w, session), '', 55555) </code></pre> <p>I hope this may be helpful to someone else.</p>
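<p>For <code>loop.create_server</code> with an <code>asyncio.Protocol</code> subclass (as in the question title), the same idea applies: the protocol factory must take no arguments, so any extra information has to be captured with a lambda or <code>functools.partial</code>. An untested sketch (the class and attribute names are made up):</p> <pre><code>import asyncio
import functools

class MyProtocol(asyncio.Protocol):
    def __init__(self, port):
        self.port = port  # each instance now knows which listener it belongs to

    def connection_made(self, transport):
        print('connection on port', self.port)

async def main(ports):
    loop = asyncio.get_running_loop()
    servers = []
    for port in ports:
        # zero-argument factory; partial bakes the port in
        server = await loop.create_server(functools.partial(MyProtocol, port), '', port)
        servers.append(server)
</code></pre>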
python-3.x|python-asyncio
-1
1,909,674
66,761,079
Am I able to extend the range in a for loop?
<p>This is a problem from the Project Euler website and I cannot seem to get it right. I want to extend the range of the for loop if the number I am trying to divide is not evenly divisible by x. The question is at the top. Any ideas?</p> <pre><code># 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
# What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?


def main():
    num = 0
    to = 20
    for num in range(1, to):
        isDivisible = True
        for x in range(1, 20):
            if num % x != 0:
                isDivisible = False
                to += 1 #Here I try to extend the loop
                continue
        if isDivisible:
            print(num)
            break


main()
</code></pre>
<p>I'm not sure it's correct, but:</p> <pre><code>def f(x):
    z = 1
    for i in range(1, x + 1):
        for i_2 in range(1, i + 1):
            # Find the smallest multiplier i_2 that divides i and makes z divisible by i,
            # so factors z already contains (e.g. a 3 picked up from 9) are not repeated.
            if i % i_2 == 0 and (z * i_2) % i == 0:
                z *= i_2
                break
    print(z)

f(20)
</code></pre> <blockquote> <p>232792560</p> </blockquote>
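<p>For what it's worth, the same answer falls out of the standard least-common-multiple identity lcm(a, b) = a*b / gcd(a, b), folded over 1..20; a short sketch:</p> <pre><code>from functools import reduce
from math import gcd

def smallest_evenly_divisible(n):
    # lcm of 1..n, built up pairwise
    return reduce(lambda a, b: a * b // gcd(a, b), range(1, n + 1), 1)

print(smallest_evenly_divisible(20))  # 232792560
</code></pre>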
python|python-3.x|for-loop|range
1
1,909,675
67,112,024
1 px thick line cv2
<p>I need to draw a line in an image where no pixel is thicker than 1 pixel in the horizontal dimension.</p> <p>Even though I use thickness=1 in polylines,</p> <pre><code>cv2.polylines(img, np.int32([points]), isClosed=False, color=(255, 255, 255), thickness=1)
</code></pre> <p>in the resulting plot there may be 2 horizontally adjacent pixels set to 255, like in this pic:</p> <p><a href="https://i.stack.imgur.com/ok6Of.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ok6Of.png" alt="enter image description here" /></a></p> <p>How can I prevent adjacent pixels from being set to 255? Or equivalently: what is an efficient way to set one of the 2 to 0?</p> <p>I thought of erosion, but then, in those lines where there is only one 255 pixel, that pixel would be set to 0 as well.</p>
<p>It looks like we need to use <strong>for loops</strong>.</p> <p>Removing one pixel out of two horizontally adjacent pixels is an iterative operation.<br /> I can't see a way to vectorize it, or to use filtering or morphological operations.<br /> There could be something I am missing, but I think we need to use a loop.</p> <p>In case the image is large, you may use <a href="https://numba.pydata.org/" rel="nofollow noreferrer">Numba</a> (or <a href="https://cython.org/" rel="nofollow noreferrer">Cython</a>) for accelerating the execution time.</p> <pre><code>import cv2
import numpy as np
from numba import jit

@jit(nopython=True)  # Use Numba JIT for accelerating the code execution time
def horiz_skip_pix(im):
    for y in range(im.shape[0]):
        for x in range(1, im.shape[1]):
            # Use logical operation instead of using an &quot;if statement&quot;.
            # We could have used an if statement, like if im[y, x]==255 and im[y, x-1]==255: im[y, x] = 0 ...
            im[y, x] = im[y, x] &amp; (255-im[y, x-1])

# Build sample input image
src_img = np.zeros((10, 14), np.uint8)
points = np.array([[2,2], [5,8], [12,5], [12,2], [3, 2]], np.int32)
cv2.polylines(src_img, [points], isClosed=False, color=(255, 255, 255), thickness=1)

dst_img = src_img.copy()

# Remove horizontally adjacent pixels.
horiz_skip_pix(dst_img)

# Show result
cv2.imshow('src_img', cv2.resize(src_img, (14*20, 10*20), interpolation=cv2.INTER_NEAREST))
cv2.imshow('dst_img', cv2.resize(dst_img, (14*20, 10*20), interpolation=cv2.INTER_NEAREST))
cv2.waitKey()
cv2.destroyAllWindows()
</code></pre> <hr /> <p><code>src_img</code>:<br /> <a href="https://i.stack.imgur.com/LwvtM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LwvtM.png" alt="src_img" /></a></p> <p><code>dst_img</code>:<br /> <a href="https://i.stack.imgur.com/QA7uQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QA7uQ.png" alt="dst_img" /></a></p> <p>I wouldn't call the result a &quot;1 px thick line&quot;, but it meets the condition of &quot;preventing adjacent pixels&quot;.</p>
python|image-processing|line|cv2
2
1,909,676
67,100,665
Dataflow works with directrunner but not with dataflowrunner (PubSub to GCS)
<p>I'm doing a very simple pipeline with dataflow.</p> <p>It gets raw data from pubsub, adds a timestamp, then writes to a raw file (I tried parquet first).</p> <p>Code:</p> <pre><code>class GetTimestampFn(beam.DoFn):
    &quot;&quot;&quot;Prints element timestamp&quot;&quot;&quot;
    def process(self, element, timestamp=beam.DoFn.TimestampParam):
        timestamp_utc = float(timestamp)
        yield {'raw':str(element),&quot;inserted_at&quot;:timestamp_utc}

options = PipelineOptions(streaming=True)
p = beam.Pipeline(DirectRunner(), options=options)

parser = argparse.ArgumentParser()
parser.add_argument('--input_topic',required=True)
parser.add_argument('--output_parquet',required=True)
known_args, _ = parser.parse_known_args(argv)

raw_data = p | 'Read' &gt;&gt; beam.io.ReadFromPubSub(subscription=known_args.input_topic)

raw_with_timestamp = raw_data | 'Getting Timestamp' &gt;&gt; beam.ParDo(GetTimestampFn())

_ = raw_with_timestamp | 'Write' &gt;&gt; beam.io.textio.WriteToText(known_args.output_parquet,append_trailing_newlines=True ,file_name_suffix='.gzip' )

p.run().wait_until_finish()
</code></pre> <p>It works with the direct runner but fails on the dataflow runner with this message: &quot;Workflow failed.&quot;</p> <p>Job id: 2021-04-14_17_11_02-16453427249129279174</p> <p>How I'm running the job:</p> <pre><code>python real_time_events.py \
    --region us-central1 \
    --input_topic 'projects/{project}/subscriptions/{subscription}' \
    --output_parquet 'gs://{bucket}/stream/' \
    --project &quot;{project}&quot; \
    --temp_location &quot;gs://{bucket}/tmp&quot; \
    --staging_location &quot;gs://{bucket}/stage&quot;
</code></pre> <p>Any ideas on how to solve this?</p>
<p><code>WriteToText</code> does not support streaming pipelines. I recommend you try using <code>fileio.WriteToFiles</code> instead, to get a transform that supports streaming. Note that you may need to group before it.</p>
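<p>A rough, untested sketch of that direction; streaming writes generally need a windowing step first, and the exact parameters may differ by Beam version (the 60-second window is an arbitrary choice):</p> <pre><code>import apache_beam as beam
from apache_beam.io import fileio
from apache_beam.transforms import window

_ = (raw_with_timestamp
     | 'Window' &gt;&gt; beam.WindowInto(window.FixedWindows(60))  # group the unbounded stream
     | 'ToString' &gt;&gt; beam.Map(str)
     | 'Write' &gt;&gt; fileio.WriteToFiles(path=known_args.output_parquet,
                                      sink=lambda dest: fileio.TextSink()))
</code></pre>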
python|google-cloud-dataflow|dataflow
0
1,909,677
64,102,591
Why can't it read the xml data properly?
<p>Okay, first of all: in my experience as a beginner in programming, I have never encountered this kind of weirdness in my whole life.</p> <p>Hello, I have a very large xml file. I cannot show it all here, but I can show the first part as an image:</p> <p><a href="https://i.stack.imgur.com/TRikh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TRikh.png" alt="First data in XML" /></a></p> <p>As you can see, the arrows point at the very first tag along with its respective children. Now, here is the program that reads that <strong>LARGE</strong> xml file (only the first part of it is shown here):</p> <pre><code>def parse(word,translator,file):
    language_target = &quot;de&quot;
    if os.path.isfile(file):
        data = [] #[(a,b,c),(a,b,c),(a,b,c)]
        file_path = os.path.join(os.getcwd(),file)
        nsmap = {&quot;xml&quot;: &quot;http://www.w3.org/XML/1998/namespace&quot;}
        for event, elem in ET.iterparse(file_path,events=(&quot;start&quot;,&quot;end&quot;)):
            if event == &quot;start&quot; and elem.tag == &quot;tu&quot;:
                temp_list = []
                sentence = elem.find(f'.//tuv[@xml:lang=&quot;{language_target}&quot;]', nsmap).find('seg', nsmap).text
    else:
        print(&quot;\nNo such File&quot;)
        os.system(&quot;pause&quot;)
</code></pre> <p>There's no need to pay much attention to the parameters; those three just take a word, a translator and a filename. This part is where I read the <strong>LARGE</strong> xml file:</p> <pre><code>nsmap = {&quot;xml&quot;: &quot;http://www.w3.org/XML/1998/namespace&quot;}
for event, elem in ET.iterparse(file_path,events=(&quot;start&quot;,&quot;end&quot;)):
    if event == &quot;start&quot; and elem.tag == &quot;tu&quot;:
        temp_list = []
        sentence = elem.find(f'.//tuv[@xml:lang=&quot;{language_target}&quot;]', nsmap).find('seg', nsmap).text
</code></pre> <p>What happens there is that it visits all of the tags, because I want to get the text of their children. Now look what happens when I run the program:</p> <p><a href="https://i.stack.imgur.com/vQg0X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vQg0X.png" alt="Error without print" /></a></p> <p>It says that it is a <code>NoneType</code> object. Now I am wondering how it can be a <code>NoneType</code> object when <code>YOU CAN SEE</code> it is definitely <code>NOT a NoneType Object</code>: it has the corresponding data, and this is the first record. How can it say that it is <strong>NoneType</strong>?</p> <pre><code>&lt;tu&gt;
	&lt;tuv xml:lang=&quot;de&quot;&gt;&lt;seg&gt;- Gloucester? ! - Die sollten doch in Serfam sein.&lt;/seg&gt;&lt;/tuv&gt;
	&lt;tuv xml:lang=&quot;en&quot;&gt;&lt;seg&gt;They should have surrendered when they had the chance!&lt;/seg&gt;&lt;/tuv&gt;
&lt;/tu&gt;
</code></pre> <p>Now look what happens when I put a <code>print()</code> right below this line: <code>sentence = elem.find(f'.//tuv[@xml:lang=&quot;{language_target}&quot;]', nsmap).find('seg', nsmap).text</code></p> <p>So it would be like this:</p> <pre><code>sentence = elem.find(f'.//tuv[@xml:lang=&quot;{language_target}&quot;]', nsmap).find('seg', nsmap).text
print(sentence)
</code></pre> <p><a href="https://i.stack.imgur.com/j4KAV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j4KAV.png" alt="IT WORKS" /></a></p> <p>As you can see, now it works! However, it stopped again on a specific record. I checked it and it is not <code>NoneType</code>; there is <strong>DATA</strong> in that part, and I am wondering why it is said to be <code>NoneType</code>.
Also, I am mindblown by the fact that just putting a <code>print()</code> call below the <code>sentence</code> line made such a difference. Can someone help me with this? To be honest I am really baffled, and I feel there is some gap in my understanding of reading XML files with python. Can someone help me and guide me? Maybe there's a better way to do this.</p> <p>Thank you so much! I really need your help, stackoverflow community! Thank you!</p> <p>Also, I made another run and got this result:</p> <pre><code>Jeremiah! Jetzt bist du dran!
Jeremiah, this is a purge!
Traceback (most recent call last):
  File &quot;us24.py&quot;, line 76, in &lt;module&gt;
    parse(word,translator,file)
  File &quot;us24.py&quot;, line 35, in parse
    english_translation = elem.find('.//tuv[@xml:lang=&quot;en&quot;]', nsmap).find('seg', nsmap).text #Human Translation
AttributeError: 'NoneType' object has no attribute 'find'
</code></pre> <p>and it again said <strong>NoneType</strong>, where in fact I looked at my xml file and there is <strong>DATA</strong> in it!</p> <pre><code>&lt;tu&gt;
	&lt;tuv xml:lang=&quot;de&quot;&gt;&lt;seg&gt;Jeremiah! Jetzt bist du dran!&lt;/seg&gt;&lt;/tuv&gt;
	&lt;tuv xml:lang=&quot;en&quot;&gt;&lt;seg&gt;Jeremiah, this is a purge!&lt;/seg&gt;&lt;/tuv&gt;
&lt;/tu&gt;
&lt;tu&gt;
	&lt;tuv xml:lang=&quot;de&quot;&gt;&lt;seg&gt;Suzaku!&lt;/seg&gt;&lt;/tuv&gt;
	&lt;tuv xml:lang=&quot;en&quot;&gt;&lt;seg&gt;Suzaku-kun!&lt;/seg&gt;&lt;/tuv&gt;
&lt;/tu&gt;
&lt;tu&gt;
	&lt;tuv xml:lang=&quot;de&quot;&gt;&lt;seg&gt;- Cécile-san? !&lt;/seg&gt;&lt;/tuv&gt;
	&lt;tuv xml:lang=&quot;en&quot;&gt;&lt;seg&gt;Cecil-san!&lt;/seg&gt;&lt;/tuv&gt;
&lt;/tu&gt;
&lt;tu&gt;
	&lt;tuv xml:lang=&quot;de&quot;&gt;&lt;seg&gt;- Hier ist's zu gefahrlich!&lt;/seg&gt;&lt;/tuv&gt;
	&lt;tuv xml:lang=&quot;en&quot;&gt;&lt;seg&gt;It's dangerous here!&lt;/seg&gt;&lt;/tuv&gt;
&lt;/tu&gt;
</code></pre> <p>The next one it should be reading is this, but it says <strong>NoneType</strong>. How can it be NoneType when you can see it has the correct data, and why do all the others work besides this one? :(</p> <pre><code>&lt;tu&gt;
	&lt;tuv xml:lang=&quot;de&quot;&gt;&lt;seg&gt;Suzaku!&lt;/seg&gt;&lt;/tuv&gt;
	&lt;tuv xml:lang=&quot;en&quot;&gt;&lt;seg&gt;Suzaku-kun!&lt;/seg&gt;&lt;/tuv&gt;
&lt;/tu&gt;
</code></pre>
<p>I would simplify your parsing approach.</p> <p>Take advantage of the event-based parser and remove all <code>.find()</code> calls. Look at it this way: The parser presents to you all the elements in the XML, you only have to decide which ones you find interesting.</p> <p>In this case, the interesting elements have a certain tag name ('seg') and they need to be in a section with the right language. It's easy to have a Boolean flag (say, <code>is_correct_language</code>) that is toggled based on the <code>xml:lang</code> attribute of the previous <code>&lt;tuv&gt;</code> element.</p> <p>Since the <code>start</code> event is enough for checking attributes and text, we don't need the parser to notify us of <code>end</code> events at all:</p> <pre><code>import xml.etree.ElementTree as ET def parse(word, translator, file, language_target='de'): is_correct_language = False for event, elem in ET.iterparse(file, events=('start',)): if elem.tag == 'tuv': xml_lang = elem.attrib.get('{http://www.w3.org/XML/1998/namespace}lang', '') is_correct_language = xml_lang.startswith(language_target) elif is_correct_language and elem.tag == 'seg': yield elem.text </code></pre> <p>usage:</p> <pre><code>for segment in parse('', '', 'test.xml', 'de'): print(segment) </code></pre> <p>Other notes:</p> <ul> <li>I've used a generator function (<code>yield</code> instead of <code>return</code>) as those tend to be more versatile.</li> <li>I don't think that using <code>os.getcwd()</code> is a good idea in the function, as this needlessly restricts the function's usefulness.<br /> By default Python will look in the current working directory anyway, so in the best case prefixing the filename with <code>os.getcwd()</code> is superfluous. In the worst case you want to parse a file from a different directory and your function would needlessly break the path.</li> <li>&quot;File exists&quot; checks are useless. Just open the file (or call <code>ET.iterparse()</code> on it, same thing). Wrap the whole thing in <code>try</code>/<code>except</code> and catch the <code>FileNotFoundError</code>, if you want to handle this situation.</li> </ul>
python-3.x|xml
0
1,909,678
63,997,419
How to fix ValueError: too many values to unpack (expected 2)
<p>I am trying to use <code>sorted()</code> in <code>python</code> on the tree interpretation results, but I get this error:</p> <blockquote> <p>ValueError: too many values to unpack (expected 2)</p> </blockquote> <p>Here's my code:</p> <pre><code>from treeinterpreter import treeinterpreter as ti

X = processed_data[0]
y = prediction

rf = pickle.load(open(&quot;original/PermutationModelNew.sav&quot;, &quot;rb&quot;))
prediction, bias, contributions = ti.predict(rf, X)
print(&quot;Bias (trainset mean)&quot;, bias[0])
c, feature = sorted(zip(contributions[0],X.columns))
</code></pre> <p><code>X</code> is the test data and it looks like this:</p> <pre><code>Age  DailyRate  DistanceFromHome  ...  BusinessTravel_  OverTime_  Over18_
0  39  903  2  ...  2  1  1

[1 rows x 28 columns]
</code></pre> <p>and <code>y</code> looks like this:</p> <pre><code>[0]
</code></pre> <p>Can someone please help me fix this? I am using this <a href="http://blog.datadive.net/random-forest-interpretation-with-scikit-learn/" rel="nofollow noreferrer">Example</a></p>
<p>Here is an example of how I was doing the sorting for the contributions (I was creating a dictionary with the results and then converting it into a DataFrame, but you don't have to use the dictionary):</p> <pre><code>prediction, bias, contributions = ti.predict(model, chunk_data)

contr_dict = {}
for i in range(len(chunk_data)):
    contr_dict.setdefault('instance', []).append(i)
    contr_dict.setdefault('prediction', []).append(prediction[i][1])
    contr_dict.setdefault('bias', []).append(bias[i][1])
    contribution = contributions[i]
    # sort contributions in descending order
    ids = contribution[:, 1].argsort()[::-1]
    for j in ids:
        contr_dict.setdefault(chunk_data.columns[j], []).append(contribution[j][1])

pd.DataFrame(contr_dict)
</code></pre>
python|scikit-learn
0
1,909,679
64,016,632
Numpy Error: this is the wrong setup.py file to run while trying to install Numpy
<p>I tried to install the NumPy library with Visual Studio Code (VS Code), using the terminal and the official website for <a href="https://numpy.org/doc/1.19/user/building.html" rel="nofollow noreferrer">instructions</a>.</p> <p>Even though I followed each step, I keep getting the &quot;This is the wrong setup.py file to run&quot; error.</p> <p>I tried to update everything so as not to get an error, and deleted and reinstalled the NumPy files in the site-packages directories and in my anaconda files (I use jupyter as well, but I need this to work in my VS Code editor).</p> <p>I also tried going into the NumPy source folder and running</p> <pre><code>pip install .
python setup.py build_ext --inplace
</code></pre> <p>I used this site's instructions as well to <a href="https://scipy.org/install.html#pip-install" rel="nofollow noreferrer">install NumPy</a>; here I tried:</p> <pre><code>python -m pip install --user numpy
</code></pre> <p>but keep getting the same error. What am I doing wrong?</p>
<p>In the screenshot you provided, I noticed that the installed module &quot;<code>numpy</code>&quot; exists in the &quot;<code>python3.7</code>&quot; folder, not in the &quot;<code>python3.8</code>&quot; you are currently using.</p> <p>This is where my environment and <code>numpy</code> are located:</p> <p><a href="https://i.stack.imgur.com/oELk2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oELk2.png" alt="enter image description here" /></a></p> <ol> <li><p>It is recommended that you use the shortcut key Ctrl+Shift+` to open a new terminal, VSCode will automatically enter the current environment, and then you can use &quot;pip install numpy&quot; to install numpy into &quot;python3.8&quot;.</p> </li> <li><p>Or you can switch the environment directly to the python3.7 environment that includes <code>numpy</code>.</p> </li> <li><p>If it still doesn't work, you can uninstall <code>numpy</code> and reinstall it. (&quot;<code>pip uninstall numpy</code>&quot;, &quot;<code>pip install numpy</code>&quot;)</p> </li> </ol> <p>Since we are using <code>pip</code> to install the module <code>numpy</code>, we can use &quot;<code>pip --version</code>&quot; to check the currently used pip version, the module is installed in this environment:</p> <p><a href="https://i.stack.imgur.com/2755Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2755Y.png" alt="enter image description here" /></a></p>
python|numpy|visual-studio-code
0
1,909,680
42,932,315
Comparing items in an Excel file with Openpyxl in Python
<p>I am working with a big set of data, which has 9 columns per row (B3:J3 in row 3) and stretches down to B1325:J1325. Using Python and the Openpyxl library, I need to get the biggest and second biggest value of each row and print those to a new field in the same row. I already assigned values to single fields manually (headings), but cannot seem to even get the max value in my range automatically written to a new field. My code looks like the following:</p> <pre><code>for row in ws.rows['B3':'J3']:
    sumup = 0.0
    for cell in row:
        if cell.value != None:
            .........
</code></pre> <p>It throws the error:</p> <pre><code>for row in ws.rows['B3':'J3']:
TypeError: 'generator' object has no attribute '__getitem__'
</code></pre> <p>How could I get to my goal here?</p>
<p>You can use <code>iter_rows</code> to do what you want.</p> <p>Try this:</p> <pre><code>for row in ws.iter_rows('B3:J3'):
    sumup = 0.0
    for cell in row:
        if cell.value != None:
            ........
</code></pre> <p>Check out this answer for more info: <a href="https://stackoverflow.com/questions/29792134/how-we-can-use-iter-rows-in-python-openpyxl-package">How we can use iter_rows() in Python openpyxl package?</a> </p>
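<p>For the stated goal (the largest and second largest of each row written next to it), an untested sketch along the same lines; the output columns K and L are an assumption:</p> <pre><code>for row in ws.iter_rows('B3:J1325'):
    values = sorted((c.value for c in row if isinstance(c.value, (int, float))), reverse=True)
    if len(values) &gt;= 2:
        r = row[0].row
        ws.cell(row=r, column=11).value = values[0]  # column K: biggest
        ws.cell(row=r, column=12).value = values[1]  # column L: second biggest
</code></pre>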
excel|python-2.7|openpyxl
0
1,909,681
66,625,594
How to convert CSV data with str values to float values when reading it?
<p>This is what the CSV looks like:</p> <p><a href="https://i.stack.imgur.com/pEMxJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pEMxJ.jpg" alt="enter image description here" /></a></p> <p>and python, after running the code</p> <pre><code>def get_returns(file):
    return pd.read_csv(file + &quot;.csv&quot;, index_col = 0, parse_dates = True).pct_change()

#example
df= get_returns(&quot;SP500&quot;)
</code></pre> <p>shows up with the following error:</p> <pre><code>TypeError: unsupported operand type(s) for /: 'str' and 'float'
result[mask] = op(xrav[mask], yrav[mask])
</code></pre> <p>Does anyone have an idea how to solve this?</p> <p>With data formatted like this (other web source, other dataset) there is no problem.</p> <p>Of course I could format it first in Excel before reading it, but in the long term that could be annoying.</p> <p><a href="https://i.stack.imgur.com/yf3xA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yf3xA.jpg" alt="enter image description here" /></a></p>
<p>This got resolved completely. To avoid</p> <p>a)</p> <pre><code>TypeError: unsupported operand type(s) for /: 'str' and 'float'
result[mask] = op(xrav[mask], yrav[mask])
</code></pre> <p>and then b)</p> <pre><code>IndexError: list index out of range
</code></pre> <p>this solved it all:</p> <pre><code>def get_returns(file):
    dfcolumns = pd.read_csv(file + &quot;.csv&quot;,nrows=1)
    return pd.read_csv(file + &quot;.csv&quot;, index_col = 0, parse_dates = True,dtype={&quot;Open&quot;: float, &quot;High&quot;:float, &quot;Low&quot;:float, &quot;Close&quot;:float},usecols = list(range(len(dfcolumns.columns)))).pct_change()
</code></pre> <p>BUT! Beforehand, in Excel, I had to change the dates from xx/xx/xx to xx-xx-xx and replace the &quot; characters with nothing, using search and replace.</p>
python|csv|format
0
1,909,682
65,700,317
How can I avoid overwriting the csv file I am writing data into?
<pre><code>for page in range(start,end+1,1):
    url = &quot;http://wrf.meteo.kg/aws/index?AwsSearch%5Bid%5D=36927&amp;AwsSearch%5Bdate_range%5D=20.10.2020+-+13.01.2021&amp;page=&quot;+str(page)
    handle = requests.get(url)
    doc = lh.fromstring(handle.content)
    tr_elements = doc.xpath('//tr')
    col=[]
    i=0
    for t in tr_elements[0]:
        i+=1
        name=t.text_content()
        col.append((name,[]))
    for j in range(1,len(tr_elements)):
        T=tr_elements[j]
        if len(T)!=14:
            break
        i=0
        for t in T.iterchildren():
            data=t.text_content()
            if i&gt;0:
                try:
                    data=int(data)
                except:
                    pass
            col[i][1].append(data)
            i+=1
    Dict={title:column for (title,column) in col}
    df=pd.DataFrame(Dict)
    print(df)
</code></pre> <p>So I am scraping similarly structured tables from multiple pages. It works, but I cannot save the result as a csv file without overwriting it. How can I do this?</p>
<p>Use the built-in csv library with 'append' mode. Hope this helps.</p>
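<p>Since the scraper already builds a pandas DataFrame per page, the same idea works directly with <code>to_csv</code> in append mode; for illustration (the file name is arbitrary):</p> <pre><code>import os

out = 'aws_data.csv'
# inside the page loop, after df is built:
df.to_csv(out, mode='a', index=False, header=not os.path.exists(out))
</code></pre>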
python|pandas|dataframe|web-scraping
0
1,909,683
65,679,343
if xldate < 0.00: TypeError: '<' not supported between instances of 'str' and 'float' when read using xldate
<p>When I try to read the date in column 0 using the code below, I get the error shown after it. Can anyone help me? Thanks in advance.</p> <pre><code>wb = xlrd.open_workbook('call.xlsx')
sheet = wb.sheet_by_index(0)
for r in range(sheet.nrows):
    for c in range(sheet.ncols):
        if c==0:
            a1=sheet.cell_value(r,0)
            a1_as_date=datetime.datetime(*xlrd.xldate_as_tuple(a1,wb.datemode))
            print(a1_as_date)
        else:
            print(sheet.cell_value(r,c))
</code></pre> <p>Error:</p> <pre><code>a1_as_date=datetime.datetime(*xlrd.xldate_as_tuple(a1,wb.datemode))
  File &quot;C:\Users\user78\anaconda3\lib\site-packages\xlrd\xldate.py&quot;, line 95, in xldate_as_tuple
    if xldate &lt; 0.00:
TypeError: '&lt;' not supported between instances of 'str' and 'float'
</code></pre>
<p>I understand that you received an error saying that &lt; is not supported between the two types.</p> <p>I think you can try this out:</p> <pre><code>if float(xldate) &lt; 0.00:
</code></pre>
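<p>Applied to the asker's code, that means converting the cell value before handing it to <code>xldate_as_tuple</code>, for example:</p> <pre><code>a1 = sheet.cell_value(r, 0)
a1_as_date = datetime.datetime(*xlrd.xldate_as_tuple(float(a1), wb.datemode))
</code></pre> <p>Note this only works if the cell really holds a numeric date serial stored as text; a date string like &quot;2021-01-12&quot; would need parsing instead.</p>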
python
1
1,909,684
65,714,143
TypeError at /api/register/ 'module' object is not callable
<p>I am trying to register users using django rest framework, but this is the error I am getting. Please help identify the issue:</p> <pre><code>TypeError at /api/register/
'module' object is not callable
Request Method: POST
Request URL: http://127.0.0.1:8000/api/register/
Django Version: 3.1.5
Exception Type: TypeError
Exception Value: 'module' object is not callable
Exception Location: C:\Users\ben\PycharmProjects\buddyroo\lib\site-packages\rest_framework\generics.py, line 110, in get_serializer
Python Executable: C:\Users\ben\PycharmProjects\buddyroo\Scripts\python.exe
Python Version: 3.8.5
</code></pre> <p>Below is RegisterSerializer:</p> <pre><code>from django.contrib.auth.password_validation import validate_password
from rest_framework import serializers
from django.contrib.auth.models import User
from rest_framework.validators import UniqueValidator


class RegisterSerializer(serializers.ModelSerializer):
    email = serializers.EmailField(
        required=True,
        validators=[UniqueValidator(queryset=User.objects.all())]
    )

    password = serializers.CharField(write_only=True, required=True, validators=[validate_password])
    password2 = serializers.CharField(write_only=True, required=True)

    class Meta:
        model = User
        fields = ('username', 'password', 'password2', 'email', 'first_name', 'last_name')
        extra_kwargs = {
            'first_name': {'required': True},
            'last_name': {'required': True}
        }

    def validate(self, attrs):
        if attrs['password'] != attrs['password2']:
            raise serializers.ValidationError({&quot;password&quot;: &quot;Password fields didn't match.&quot;})

        return attrs

    def create(self, validated_data):
        user = User.objects.create(
            username=validated_data['username'],
            email=validated_data['email'],
            first_name=validated_data['first_name'],
            last_name=validated_data['last_name']
        )

        user.set_password(validated_data['password'])
        user.save()

        return user
</code></pre> <p>and RegisterView.py:</p> <pre><code>from django.contrib.auth.models import User
from rest_framework import generics
from rest_framework.permissions import IsAuthenticated, AllowAny  # &lt;-- Here
from rest_framework.response import Response
from rest_framework.views import APIView

from api import UsersSerializer, RegisterSerializer


class RegisterView(generics.CreateAPIView):
    queryset = User.objects.all()
    serializer_class = RegisterSerializer
    permission_classes = (AllowAny,)
</code></pre>
<p>I suppose that the name of the module (file) where RegisterSerializer is defined is RegisterSerializer.py.</p> <p>If this is the case, then in RegisterView.py you are importing the module RegisterSerializer, not the class.</p> <p>So it should be:</p> <pre><code>from api.RegisterSerializer import RegisterSerializer </code></pre> <p>In Python it is common to have more than one class in one module, so I would advise you to rename your modules to serializers.py and views.py and put all your serializers and views there.</p> <p>Of course, if there are many, you may split this and create serializers/views packages and put several serializers/views modules there: user_serializers.py, whatever_serializers.py...</p>
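<p>For illustration, the corrected view in full might look like this (a sketch; the <code>api.RegisterSerializer</code> module path is inferred from the file layout described above, not confirmed by the question):</p> <pre><code>from django.contrib.auth.models import User
from rest_framework import generics
from rest_framework.permissions import AllowAny

from api.RegisterSerializer import RegisterSerializer  # import the class, not the module


class RegisterView(generics.CreateAPIView):
    queryset = User.objects.all()
    serializer_class = RegisterSerializer
    permission_classes = (AllowAny,)
</code></pre>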
python|django|django-rest-framework|django-registration
1
1,909,685
65,507,264
pandas: groupby with multiple conditions
<p>Would you please help me group a pandas dataframe by multiple conditions?</p> <p>Here is how I do it in SQL:</p> <pre><code>with a as ( select high ,sum( case when qr = 1 and now = 1 then 1 else 0 end ) q1_bad ,sum( case when qr = 2 and now = 1 then 1 else 0 end ) q2_bad from #tmp2 group by high ) select a.high from a where q1_bad &gt;= 2 and q2_bad &gt;= 2 and a.high is not null </code></pre> <p>Here is part of the dataset:</p> <pre><code>import pandas as pd a = pd.DataFrame() a['client'] = range(35) a['high'] = ['02','47','47','47','79','01','43','56','46','47','17','58','42','90','47','86','41','56', '55','49','47','49','95','23','46','47','80','80','41','49','46','49','56','46','31'] a['qr'] = ['1','1','1','1','2','1','1','2','2','1','1','2','2', '2','1','1','1','2','1','2','1','2','2','1','1','1','2','2','1','1', '1','1','1','1','2'] a['now'] = ['0','0','0','0','0','0','0','0','0','0','0','0','1','0','0','0','0', '0','0','0','0','0','0','0','0','0','0','0','0','0','0','1','0','0','0'] </code></pre> <p>Thank you very much!</p>
<p>It's very similar: define your new columns ahead of the groupby, then apply your operation.</p> <p>This assumes <code>qr</code> and <code>now</code> hold actual integers, not strings.</p> <pre><code>import numpy as np import pandas as pd a.assign(q1_bad = np.where((a['qr'].eq(1) &amp; a['now'].eq(1)),1,0), q2_bad = np.where((a['qr'].eq(2) &amp; a['now'].eq(1)),1,0) ).groupby('high')[['q1_bad','q2_bad']].sum() </code></pre> <hr /> <pre><code> q1_bad q2_bad high 01 0 0 02 0 0 17 0 0 23 0 0 31 0 0 41 0 0 42 0 1 43 0 0 46 0 0 47 0 0 49 1 0 55 0 0 56 0 0 58 0 0 79 0 0 80 0 0 86 0 0 90 0 0 95 0 0 </code></pre> <p>For your extra where clause you can filter in one of many ways, but for ease we can add <code>query</code> at the end (note that <code>dropna</code> wants a list for <code>subset</code>):</p> <pre><code>a.dropna(subset=['high']).assign(q1_bad = np.where((a['qr'].eq(1) &amp; a['now'].eq(1)),1,0), q2_bad = np.where((a['qr'].eq(2) &amp; a['now'].eq(1)),1,0) ).groupby('high')[['q1_bad','q2_bad']].sum().query('q2_bad &gt;= 2 and q1_bad &gt;= 2') </code></pre>
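<p>One caveat worth adding (my note, not part of the original answer): the sample frame in the question stores <code>qr</code> and <code>now</code> as strings, so a quick conversion is needed before the integer comparisons above will match anything:</p> <pre><code># Convert the question's string columns to integers first.
a = a.astype({'qr': 'int64', 'now': 'int64'})
</code></pre>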
python-3.x|pandas|dataframe
2
1,909,686
50,939,006
How to run --upgrade with pipenv?
<p>Running (say for numpy) <code>pipenv install --upgrade numpy</code> tries to install <code>--upgrade</code> and <code>numpy</code> as two packages, instead of following the normal <code>pip</code> behavior for the <code>--upgrade</code> switch.</p> <p>Is anyone else having this problem?</p> <p><strong>Edit:</strong></p> <p>Everyone, stop using <code>pipenv</code>. It's terrible. Use <code>poetry</code> instead.</p>
<p>For pipenv, use the <code>update</code> command, not an <code>--upgrade</code> switch. You can update a package with:</p> <pre><code>pipenv update numpy </code></pre> <p>See comments in the <a href="https://pipenv.pypa.io/en/latest/basics/#example-pipenv-upgrade-workflow" rel="nofollow noreferrer">documentation</a>.</p> <p>This will also persist the new version of the package in <code>Pipfile</code>/<code>Pipfile.lock</code>; no manual editing is needed. There was a <a href="https://github.com/pypa/pipenv/issues/2165" rel="nofollow noreferrer">bug</a> with this command under certain scenarios, but hopefully it is fixed now.</p>
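<p>For completeness, a short usage sketch (behavior can vary a little between pipenv versions, so treat the no-argument form as documented rather than guaranteed):</p> <pre><code># Update a single package and persist it to Pipfile.lock
pipenv update numpy

# With no package name, re-lock and update everything in the Pipfile
pipenv update
</code></pre>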
python|upgrade|pipenv
77
1,909,687
3,679,638
How to use the cl command?
<p>All, I found a piece of information on how to call C files in Python. In these examples there is a C file which includes many other header files; the very beginning of this C file is <code>#include Python.h</code>, and I found that <code>#include Python.h</code> actually pulls in many other header files, such as <code>pystate.h</code>, <code>object.h</code>, etc., so I included all the required header files. In a C++ IDE environment, it did not show errors. <b>What I am trying to do is call this C code in Python</b>, so <code>from ctypes import *</code>; then it seems that a <code>dll</code> should be generated by a command such as: <code>cl -LD test.c -test.dll</code>, but how do I use <code>cl</code> in this case? I used cygwin's <code>gcc</code> and it worked fine. Could anyone help me with this, <b>i.e.: call the C in Python</b>? Do I make myself clear? Thank you in advance!!</p> <p>Well, now I feel it is important to tell you what I did: The ultimate goal I want to achieve is: I am lazy, I do not want to re-write those C codes in Python (which is very complicated for me in some cases), so I just want to generate <code>dll</code> files that Python can call. I followed an example found by googling "python call c"; there are two versions in this example: Linux and Windows. The example <code>test.c</code>:</p> <pre><code>#include &lt;windows.h&gt; BOOL APIENTRY DllMain(HANDLE hModule, DWORD dwReason, LPVOID lpReserved) { return TRUE; } __declspec(dllexport) int multiply(int num1, int num2) { return num1 * num2; } </code></pre> <p>Two versions: 1, Compile under Linux</p> <pre><code>gcc -c -fPIC test.c gcc -shared test.o -o test.so </code></pre> <p>I did this in cygwin on my Vista system, and it works fine; :)</p> <p>2, Compile under Windows:</p> <pre><code>cl -LD test.c -test.dll </code></pre> <p>I used cl in the Windows command line prompt, and it won't work!</p> <p>This is the Python code (note it loads the library as <code>libtest</code>, so that is the name used in the call):</p> <pre><code>from ctypes import * import os libtest = cdll.LoadLibrary(os.getcwd() + '/test.so') print libtest.multiply(2, 2) </code></pre> <p>Could anyone try this and tell me what you get? thank you!</p>
<p>You will find the command line options of Microsoft's C++ compiler <a href="http://msdn.microsoft.com/en-us/library/610ecb4h(VS.80).aspx" rel="nofollow noreferrer">here</a>.</p> <p>Consider the following switches for cl:</p> <pre><code>/nologo /GS /fp:precise /Zc:forScope /Gd </code></pre> <p>...and link your file using</p> <pre><code>/NOLOGO /OUT:"your.dll" /DLL &lt;your lib files&gt; /SUBSYSTEM:WINDOWS /MACHINE:X86 /DYNAMICBASE </code></pre> <p>Please have a look at what those options mean in detail, I just listed common ones. You should be aware of their effect nonetheless, so try to avoid copy&amp;paste and make sure it's really what you need - the documentation linked above will help you. This is just a setup I use more or less often.</p> <p>Be advised that you can always open Visual Studio, configure build options, and copy the command line invocations from the project configuration dialog.</p> <p><strong>Edit:</strong> Ok, here is some more advice, given the new information you've edited into your original question. I took the example code of your simple DLL and pasted it into a source file, and made two changes:</p> <pre><code>#include &lt;windows.h&gt; BOOL APIENTRY DllMain(HANDLE hModule, DWORD dwReason, LPVOID lpReserved) { return TRUE; } extern "C" __declspec(dllexport) int __stdcall multiply(int num1, int num2) { return num1 * num2; } </code></pre> <p>First of all, I usually expect functions exported from a DLL to use the stdcall calling convention, just because it's a common thing in Windows and there are languages that inherently cannot cope with cdecl, seeing as they only know stdcall. So that's one change I made.</p> <p>Second, to make exports more friendly, I specified extern "C" to get rid of <a href="http://en.wikipedia.org/wiki/Name_mangling" rel="nofollow noreferrer">name mangling</a>. I then proceeded to compile the code from the command line like this (note that <code>/DLL</code>, with two L's, is the linker switch that produces a DLL):</p> <pre><code>cl /nologo /GS /Zc:forScope /Gd c.cpp /link /OUT:"foobar.dll" /DLL kernel32.lib /SUBSYSTEM:WINDOWS /MACHINE:X86 </code></pre> <p>If you use the DUMPBIN tool from the Visual Studio toolset, you can check your DLL for exports:</p> <pre><code>dumpbin /EXPORTS foobar.dll </code></pre> <p>Seeing something like this...</p> <pre><code>ordinal hint RVA name 1 0 00001010 ?multiply@@YGHHH@Z </code></pre> <p>...you can notice the exported name got mangled. You'll usually want clear names for exports, so either use a DEF file to specify exports in more detail, or the shortcut from above.</p> <p>Afterwards, I end up with a DLL that I can load into Python like this:</p> <pre><code>In [1]: import ctypes In [2]: dll = ctypes.windll.LoadLibrary("foobar.dll") In [3]: dll.multiply Out[3]: &lt;_FuncPtr object at 0x0928BEF3&gt; In [4]: dll.multiply(5, 5) Out[4]: 25 </code></pre> <p>Note that I'm using ctypes.windll here, which implies stdcall.</p>
python|c|compiler-construction
2
1,909,688
50,487,431
Python-Arduino USB communication floats error
<p>I have made a project in which a Python script communicates with an Arduino, sending various data types. Everything works great except when the Arduino sends back floats in some cases.</p> <p>For example: when the Arduino sends the numbers 4112.5 and -7631.5, Python receives them correctly. In the case of 4112.112 and -7631.23, Python receives 4112.11181641 and -7631.22998047.</p> <p>What is causing this?</p> <p>Python code: <a href="https://drive.google.com/file/d/1uKBTUW319oTh6YyZU9tOQYa3jXrUvZVF/view" rel="nofollow noreferrer">https://drive.google.com/file/d/1uKBTUW319oTh6YyZU9tOQYa3jXrUvZVF/view</a></p> <pre><code>import os import struct import serial import time print('HELLO WORLD!!!!\nI AM PYTHON READY TO TALK WITH ARDUINO\nINSERT PASSWORD PLEASE.') ser=serial.Serial("COM5", 9600) #Serial port COM5, baudrate=9600 ser.close() ser.open() #open Serial Port a = int(raw_input("Enter number: ")) #integer object b = int(raw_input("Enter number: ")) #integer object c = float(raw_input("Enter number: ")) #float object d = float(raw_input("Enter number: ")) #float object time.sleep(2) #wait ser.write(struct.pack("2i2f",a,b,c,d)) #write to port all number bytes if a == 22 : if b == -22 : if c == 2212.113 : if d == -3131.111 : print("Congratulations!!! Check the ledpin should be ON!!!") receivedbytes=ser.read(16) #read from Serial port 16 bytes=2 int32_t + 2 floats from arduino (number1,number2,number3,number4,)=struct.unpack("2i2f",receivedbytes) #convert bytes to numbers print "Arduino also send me back ",str(number1),",",str(number2),",",str(number3),",",str(number4) else : print("WRONG PASSWORD") os.system("pause") #wait for user to press enter </code></pre> <p>Arduino code:<a href="https://drive.google.com/file/d/1ifZx-0PGtex-M4tu7KTsIjWSjLqxMvMz/view" rel="nofollow noreferrer">https://drive.google.com/file/d/1ifZx-0PGtex-M4tu7KTsIjWSjLqxMvMz/view</a></p> <pre><code>struct sendata { //data to send volatile int32_t a=53; volatile int32_t b=-2121; volatile float c=4112.5; volatile float d=-7631.5; }; struct receive { //data to receive volatile int32_t a; //it will not work with int volatile int32_t b; volatile float c; volatile float d; }; struct receive bytes; struct sendata values; const int total_bytes=16; //total bytes to send int i; byte buf[total_bytes]; //each received Serial byte saved into byte array void setup() { Serial.begin(9600); pinMode(13,OUTPUT); //Arduino Mega ledpin } void loop() { } void serialEvent() { //Called each time Serial data is received if (Serial.available()==total_bytes){ //Received data is first saved to Serial buffer; Serial.available returns how many bytes are saved. The Serial buffer space is limited. while(i&lt;=total_bytes-1){ buf[i] = Serial.read(); //Save each byte from Serial buffer to byte array i++; } memmove(&amp;bytes,buf,sizeof(bytes)); //Move each single byte memory location of array to memory field of the struct,the numbers are reconstructed from bytes. if (bytes.a==22){ //Access each struct number. if (bytes.b==-22){ if (bytes.c==2212.113){ if (bytes.d==-3131.111){ //If the password is right Serial.write((const uint8_t*)&amp;values,sizeof(values)); //Write struct to Serial port. delay(100); digitalWrite(13,HIGH);//Turn ON LED. } } } } } } </code></pre> <p>For further information you can also check my video:</p> <p><a href="https://www.youtube.com/watch?v=yjfHwO3qSgY&amp;t=7s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=yjfHwO3qSgY&amp;t=7s</a></p>
<p>Well, after a few tests I found that I can only send float numbers between an 8-bit Arduino and Python accurately with a maximum of 3 decimal places. I also wrote a non-struct version:</p> <p><strong>Edit: added code</strong></p> <p>NON_STRUCT</p> <p>Arduino side:<a href="https://drive.google.com/file/d/1lvgT-LqQa7DxDorFF0MTe7UMpBfHn6LA/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1lvgT-LqQa7DxDorFF0MTe7UMpBfHn6LA/view?usp=sharing</a></p> <pre><code>//values to send// int32_t aa=53; int32_t bb=-2121; float cc=4112.3; //each float must have max 3 decimal places else it will #rounded to 3!! float dd=-7631.23; ////***///// ///values to receive//// int32_t a; //it will not work with int int32_t b; float c; float d; int i,e; /////****//// void setup() { Serial.begin(9600); pinMode(13,OUTPUT); //Arduino Mega ledpin } void loop() { } void serialEvent() { //Called each time Serial data is received a=Serial.parseInt(); b=Serial.parseInt(); c=Serial.parseFloat(); d=Serial.parseFloat(); if (a==22){ //Access each struct number. if (b==-22){ if (c==2212.113){ if (d==-3131.111){ //If the password is right Serial.println(aa); Serial.println(bb); Serial.println(cc,3); //must be &lt;=3 decimal places else it //will //rounded Serial.println(dd,3); //must be &lt;=3 decimal places else it //will //rounded delay(100); digitalWrite(13,HIGH);//Turn ON LED. } } } } } </code></pre> <p>Python side:<a href="https://drive.google.com/file/d/1gPKfhTvbd4vp4L4VrZuns95yQoekg-vn/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1gPKfhTvbd4vp4L4VrZuns95yQoekg-vn/view?usp=sharing</a></p> <pre><code>import os import struct import serial import time print('HELLO WORLD!!!!\nI AM PYTHON READY TO TALK WITH ARDUINO\nINSERT PASSWORD PLEASE.') ser=serial.Serial("COM5", 9600) #Serial port COM5, baudrate=9600 ser.close() ser.open() #open Serial Port a = int(raw_input("Enter number: ")) #integer object b = int(raw_input("Enter number: ")) #integer object c = float(format(float(raw_input("Enter number: ")), '.3f'))#float object # #&lt;=3 #decimal places d = float(format(float(raw_input("Enter number: ")), '.3f')) time.sleep(2) #wait ser.write(str(a).encode()) #convert int to string and write it to port ser.write('\n'.encode()) ser.write(str(b).encode()) ser.write('\n'.encode()) ser.write(str(c).encode()) ser.write('\n'.encode()) ser.write(str(d).encode()) ser.write('\n'.encode()) if str(a) == "22" : if str(b) == "-22" : if str(c) == "2212.113" : if str(d) == "-3131.111" : print("Congratulations!!! Check the ledpin should be ON!!!") number1=int(ser.readline()) #read from Serial port convert to int number2=int(ser.readline()) number3=float(ser.readline()) ##read from Serial port convert to float #(3 #decimal places from arduino) number4=float(ser.readline()) print "Arduino also send me back ",str(number1),",",str(number2),",",str(number3),",",str(number4) else : print("WRONG PASSWORD") os.system("pause") #wait for user to press enter </code></pre> <p>WITH_STRUCT better performance</p> <p>Arduino side:<a href="https://drive.google.com/file/d/153fuSVeMz2apI-JbDNjdkw9PQKHfGDGI/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/153fuSVeMz2apI-JbDNjdkw9PQKHfGDGI/view?usp=sharing</a></p> <pre><code>struct sendata { //data to send volatile int32_t a=53; volatile int32_t b=-2121; volatile float c=4112.3; volatile float d=-7631.4; }; struct receive { //data to receive volatile int32_t a; //it will not work with int volatile int32_t b; volatile float c; volatile float d; }; struct receive bytes; struct sendata values; const int total_bytes=16; //total bytes to send int i; byte buf[total_bytes]; //each received Serial byte saved into byte array void setup() { Serial.begin(9600); pinMode(13,OUTPUT); //Arduino Mega ledpin } void loop() { } void serialEvent() { //Called each time Serial data is received if (Serial.available()==total_bytes){ //Receive data first saved to Serial //buffer,Serial.available return how many bytes are saved.The Serial buffer //space //is limited. while(i&lt;=total_bytes-1){ buf[i] = Serial.read(); //Save each byte from Serial buffer to //byte //array i++; } memmove(&amp;bytes,buf,sizeof(bytes)); //Move each single byte memory //location of array to memory field of the struct,the numbers are //reconstructed //from bytes. if (bytes.a==22){ //Access each struct number. if (bytes.b==-22){ if (bytes.c==2212.113){ if (bytes.d==-3131.111){ //If the password is right Serial.write((const uint8_t*)&amp;values,sizeof(values)); //Write //struct to Serial port. delay(100); digitalWrite(13,HIGH);//Turn ON LED. } } } } } } </code></pre> <p>Python side:<a href="https://drive.google.com/file/d/1M6iWnluXdNzTKO1hfcsk3qi9omzMiYeh/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1M6iWnluXdNzTKO1hfcsk3qi9omzMiYeh/view?usp=sharing</a></p> <pre><code>import os import struct import serial import time print('HELLO WORLD!!!!\nI AM PYTHON READY TO TALK WITH ARDUINO\nINSERT PASSWORD PLEASE.') ser=serial.Serial("COM5", 9600) #Serial port COM5, baudrate=9600 ser.close() ser.open() #open Serial Port a = int(raw_input("Enter number: ")) #integer object b = int(raw_input("Enter number: ")) #integer object c = float(format(float(raw_input("Enter number: ")), '.3f'))#float object #&lt;=3 #decimal places d = float(format(float(raw_input("Enter number: ")), '.3f')) time.sleep(2) #wait ser.write(struct.pack("2i2f",a,b,c,d)) #write to port all number bytes if a == 22 : if b == -22 : if c == 2212.113 : if d == -3131.111 : print("Congratulations!!! Check the ledpin should be ON!!!") receivedbytes=ser.read(16) #read from Serial port 16 bytes=2 int32_t + 2 #floats from arduino (number1,number2,number3,number4,)=struct.unpack("2i2f",receivedbytes) #convert bytes to numbers number3=float(format(number3, '.3f')) #floats must be under 3 decimal #points else will be rounded number4=float(format(number4, '.3f')) print "Arduino also send me back ",str(number1),",",str(number2),",",str(number3),",",str(number4) else : print("WRONG PASSWORD") os.system("pause") #wait for user to press enter </code></pre> <p>Youtube video: <a href="https://www.youtube.com/watch?v=yjfHwO3qSgY&amp;t=170s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=yjfHwO3qSgY&amp;t=170s</a></p>
python|c++|arduino|serial-communication
0
1,909,689
64,743,944
python: Using class variable as counter in a multiply run method
<p>Let's say I have the following simplified Python code:</p> <pre><code>class GUI: def __init__(self): self.counter = 0 self.f1c = 0 self.f2c = 0 def update(self): self.counter = self.counter + self.f1() self.counter = self.counter + self.f4() self.counter = self.counter + self.f2() self.counter = self.counter + self.f3() print(self.counter) if (self.counter &gt; 4): print(&quot;doing update now&quot;) # do_update() self.counter = 0 def f1(self): if self.f1c &lt; 2: self.f1c = self.f1c + 1 self.update() return 1 def f2(self): if self.f2c &lt; 4: self.f2c = self.f2c + 1 self.update() return 0 def f3(self): return 1 def f4(self): return 0 g = GUI() g.update() </code></pre> <p>A sample output from a test run in my case is:</p> <pre><code>6 doing update now 5 doing update now 4 3 2 2 2 </code></pre> <p>While I would expect it to be:</p> <pre><code>1 2 3 4 5 doing update now 0 1 </code></pre> <p>In my understanding, it should not even be possible in my code sample for <code>self.counter</code> to go from <code>2</code> to <code>1</code> without doing <code>do_update()</code>.</p> <p>What would be the correct way to do this?</p> <p>edit: Basically, what I want is that <code>do_update</code> will only run after all other functions have come to an end <code>AND</code> <code>counter &gt; 0</code>.</p>
<p>You have some very complicated recursion going on here, and I honestly think you just got lucky in not creating an infinite loop.</p> <p>To see what's going on with what you have, I opened up your code with a debugger, and have annotated the calls just to trace the order of flow through the first couple print statements. Increasing indentation indicates a deeper nested function call, and the line number with the line itself is then listed. The comments indicate the current value of certain variables at each stage, and indicate when print statements happen:</p> <pre><code>--Call-- 39| g.update() --Call-- 08| self.counter = self.counter + self.f1() #f1c is 0 --Call-- 22| self.update() #f1c is 1 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 1 --Call-- 22| self.update() #f1c is 2 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 2 --Return-- 23| return 1 #self.counter now == 1 --Call-- 09| self.counter = self.counter + self.f4() --Return-- 35| return 0 #self.counter now == 1 --Call-- 10| self.counter = self.counter + self.f2() #f2c is 0 --Call-- 28| self.update() #f2c is 1 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 2 --Return-- 23| return 1 #self.counter now == 2 --Call-- 09| self.counter = self.counter + self.f4() --Return-- 35| return 0 #self.counter now == 2 --Call-- 10| self.counter = self.counter + self.f2() #f2c is 1 --Call-- 28| self.update() #f2c is 2 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 2 --Return-- 23| return 1 #self.counter now == 3 --Call-- 09| self.counter = self.counter + self.f4() --Return-- 35| return 0 #self.counter now == 3 --Call-- 10| self.counter = self.counter + self.f2() #f2c is 2 --Call-- 28| self.update() #f2c is 3 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 2 --Return-- 23| return 1 #self.counter now == 4 --Call-- 09| self.counter = self.counter + self.f4() --Return-- 35| return 0 #self.counter now == 4 --Call-- 10| self.counter = self.counter + self.f2() #f2c is 3 --Call-- 28| self.update() #f2c is 4 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 2 --Return-- 23| return 1 #self.counter now == 5 --Call-- 09| self.counter = self.counter + self.f4() --Return-- 35| return 0 #self.counter now == 5 --Call-- 10| self.counter = self.counter + self.f2() #f2c is 4 --Return-- 29| return 0 #self.counter now == 5 --Call-- 11| self.counter = self.counter + self.f3() --Return-- 32| return 1 #self.counter now == 6 ### print(self.counter) ### #self.counter now == 6 ### print('doing update') #self.counter now == 0 --Return-- 18| #self.counter in prior stack frame was only 4 --Return-- 29| return 0 #self.counter now == 4 --Call-- 11| self.counter = self.counter + self.f3() --Return-- 32| return 1 #self.counter now == 5 ### print(self.counter) ### #self.counter now == 5 ### print('doing update') #self.counter now == 0 --Return-- 18| #self.counter in prior stack frame was only 3 </code></pre>
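<p>To actually get the behaviour described in the question's edit (run <code>do_update()</code> only after all the other functions have finished), one option is to drop the recursive <code>self.update()</code> calls so that <code>update()</code> is driven by a plain loop. This is a sketch of my own, not from the original answer:</p> <pre><code>class GUI:
    def __init__(self):
        self.counter = 0
        self.f1c = 0
        self.f2c = 0

    def update(self):
        # No function re-enters update(), so the threshold check always
        # runs after every contribution for this pass has been added.
        self.counter += self.f1() + self.f4() + self.f2() + self.f3()
        print(self.counter)
        if self.counter &gt; 4:
            print("doing update now")
            self.counter = 0

    def f1(self):
        if self.f1c &lt; 2:
            self.f1c += 1
        return 1

    def f2(self):
        if self.f2c &lt; 4:
            self.f2c += 1
        return 0

    def f3(self):
        return 1

    def f4(self):
        return 0


g = GUI()
for _ in range(4):
    g.update()  # prints 2, 4, 6 ("doing update now" fires), then 2
</code></pre> <p>Note this prints steps of 2 rather than the 1..5 sequence hoped for in the question, because <code>f1()</code> and <code>f3()</code> each contribute 1 per pass; the point is that the reset can now only happen after all functions have returned.</p>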
python|recursion
0
1,909,690
61,455,055
GPU optimization with Keras
<p>I am running Keras on a Windows 10 computer with a GPU. I have gone from Tensorflow 1 to Tensorflow 2 and I now feel that fitting is much slower and hope for your advice.</p> <p>I am testing whether Tensorflow sees the GPU with the following statements:</p> <pre><code>from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) K._get_available_gpus() </code></pre> <p>giving the response</p> <pre><code>[name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 17171012743200670970 , name: "/device:GPU:0" device_type: "GPU" memory_limit: 6682068255 locality { bus_id: 1 links { } } incarnation: 5711519511292622685 physical_device_desc: "device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1" </code></pre> <p>So, that seems to indicate that the GPU is working?</p> <p>I am training a modified version of ResNet50 with up to 10 images (257x257x2) as input. It has 4.3M trainable parameters. Training is very slow (could be several days). Part of the code is shown here:</p> <pre><code>import os,cv2,sys import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import scipy.io import h5py import time from tensorflow.python.keras import backend as K from tensorflow.python.keras.models import load_model from tensorflow.python.keras import optimizers from buildModelReduced_test import buildModelReduced from tensorflow.keras.utils import plot_model K.set_image_data_format('channels_last') #set_image_dim_ordering('tf') sys.setrecursionlimit(10000) # Check that gpu is running from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) K._get_available_gpus() # Generator to read one batch at a time for large datasets def imageLoaderLargeFiles(data_path, batch_size, nStars, nDatasets=0): --- --- --- yield(train_in,train_target) # Repository for parameters nStars = 10 img_rows = 257 img_cols = 257 bit_depth = 16 channels = 2 num_epochs = 1 batch_size = 8 data_path_train = 'E:/TomoA/large/train2' data_path_validate = 'E:/TomoA/large/validate2' nDatasets_train = 33000 nDatasets_validate = 8000 nBatches_train = nDatasets_train//(batch_size) validation_steps = nDatasets_validate//(batch_size) output_width = 35; runSize = 'large' restartFile = '' #%% Train model if restartFile == '': model = buildModelReduced(nStars,img_rows, img_cols, output_width,\ batch_size=batch_size,channels=channels, use_l2_regularizer=True) model.summary() plot_model(model, to_file='model.png', show_shapes=True) all_mae = [] adam=optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False) model.compile(optimizer='adam',loss='MSE',metrics=['mae']) history = model.fit_generator(imageLoaderLargeFiles(data_path_train,batch_size,nStars,nDatasets_train), steps_per_epoch=nBatches_train,epochs=num_epochs, validation_data=imageLoaderLargeFiles(data_path_validate,batch_size,nStars,nDatasets_validate), validation_steps=validation_steps,verbose=1,workers=0, use_multiprocessing=False, shuffle=False) print('\nSaving model...\n') if runSize == 'large': model.save(runID + '_' + runSize + '.h5') </code></pre> <p>When I open Windows task manager and look at the GPU, I see that the memory allocation is 6.5GB, copying activity less than 1%, and CUDA about 4%. Disk activity is low; I read a cache of 1000 data sets at a time from an SSD. See the screen clip below. I think that shows that the GPU is not working well. The CPU load is 19%. I use a batch size of 8; if I go higher, I get a resource exhausted error.</p> <p>Any ideas on how to proceed or where to find more information? Any rules of thumb on how to tune your run to exploit the GPU well?</p> <p><a href="https://i.stack.imgur.com/UvUqC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UvUqC.png" alt="Task manager screen"></a></p>
<p>It seems there is a bottleneck somewhere in the training pipeline that we cannot detect by looking at Task Manager; it could be caused by I/O, the CPU, or the GPU. You should find out which part of the processing is slow using a more advanced tool.</p> <p>You can use the <a href="https://www.tensorflow.org/guide/profiler" rel="nofollow noreferrer">TensorFlow Profiler</a> to inspect all stages of a TensorFlow job. It also gives suggestions on how you can speed up your processes. <a href="https://www.youtube.com/watch?v=pXHAQIhhMhI" rel="nofollow noreferrer">Here</a> is a simple video tutorial about it.</p> <p><a href="https://i.stack.imgur.com/8vuTV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8vuTV.png" alt="TensorFlow Profiler"></a></p>
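<p>As a concrete starting point, here is a minimal sketch of turning the profiler on through the Keras TensorBoard callback (the toy model and data are placeholders, not your real training code; a <code>profile_batch</code> range like <code>'10,20'</code> needs TF 2.2+, while older versions accept a single batch number):</p> <pre><code>import numpy as np
import tensorflow as tf

# Toy stand-in for the real model/data, just to show the callback wiring.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
x = np.random.rand(256, 4).astype('float32')
y = np.random.rand(256, 1).astype('float32')

# Profile batches 10-20 and write traces to ./logs.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir='logs', profile_batch='10,20')
model.fit(x, y, batch_size=8, epochs=1, callbacks=[tb_callback])

# Then run:  tensorboard --logdir logs   and open the "Profile" tab.
</code></pre>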
tensorflow|keras|windows-10|gpu
2
1,909,691
60,588,421
Extract images using xpath
<p>I have been trying to get information from this website <a href="https://www.leadhome.co.za/property/poortview-ah/roodepoort/lh-95810/magnificent-masterpiece-in-poortview-" rel="nofollow noreferrer">https://www.leadhome.co.za/property/poortview-ah/roodepoort/lh-95810/magnificent-masterpiece-in-poortview-</a> and I am having issues getting all the images of the property; more specifically their URLs.</p> <p>This is what the markup looks like:</p> <pre><code>&lt;div class="lazy-image listing-slider-carousel-item lazy-image-loaded"&gt; &lt;div class="lazy-image-background" style="background-image: url(&amp;quot;https://s3-eu-west-1.amazonaws.com/leadhome-listing-photos/025c90ab-9c87-47d5-b11c-1cfbce3f67f2-md.jpg&amp;quot;);"&gt;&lt;/div&gt; &lt;/div&gt; </code></pre> <p>What I have so far:</p> <pre><code> for item in response.xpath('//div[@class="lazy-image-background"]/*[starts-with(@style,"background-image")]/@style').getall(): yield {"image_link":item} </code></pre> <p>But unfortunately this is empty. Any tips on what I'm doing wrong? Thank you!</p>
<p>If you inspect the <strong>original html source</strong> of this webpage (CTRL + U in the Google Chrome web browser, not the html from Chrome developer tools / Elements section)<br> you will see 2 important things:</p> <ol> <li>Images in tags like <code>&lt;div class="lazy-image listing-slider-carousel-item lazy-image-loaded"&gt;</code>, as well as other data, don't exist inside these html tags.</li> <li>All data is stored inside a <code>script</code> tag, inside the <code>window.REDUX_INITIAL_STATE</code> javascript variable:<br> <br> <a href="https://i.stack.imgur.com/gIfXh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gIfXh.png" alt="original html source"></a></li> </ol> <p>In this case we can convert the data from the javascript variable into a basic python <code>dict</code> using python's built-in <code>json</code> module.<br> The most complicated part of this task is to fit the content of that <code>script</code> tag into the <code>json.loads</code> function correctly. It should be strictly the text after <code>window.REDUX_INITIAL_STATE =</code> and before the next javascript operation (in this case, before the last <code>;</code> symbol). As a result we get this code:</p> <pre><code>def parse(self, response): script_tag = [script for script in response.css("script::text").extract() if "window.REDUX_INITIAL_STATE = {" in script] script_data = json.loads(script_tag[0].split("window.REDUX_INITIAL_STATE = ")[-1][:-1], encoding="utf-8") </code></pre> <p>As you can see in the following debugger screenshot, all data was successfully converted: <a href="https://i.stack.imgur.com/e2gyu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e2gyu.png" alt="debugger_converted_data"></a> Images are stored in <code>script_data['app']['listing']['listing']['entity']['lh-95810']['images']</code> as a list of dictionaries: <a href="https://i.stack.imgur.com/LdgHn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LdgHn.png" alt="debugger_images"></a> <code>lh-95810</code> is the entity id, so in the updated code this entity id is selected separately in order to be usable on other pages:</p> <pre><code>def parse(self, response): script_tag = [script for script in response.css("script::text").extract() if "window.REDUX_INITIAL_STATE = {" in script] script_data = json.loads(script_tag[0].split("window.REDUX_INITIAL_STATE = ")[-1][:-1], encoding="utf-8") entity_key = [k for k in script_data['app']['listing']['listing']['entity'].keys()] images = [image["medium"] for image in script_data['app']['listing']['listing']['entity'][entity_key[0]]['images']] </code></pre> <p>This website uses javascript to render data on the webpage. However, any javascript-formed content has its roots in the original html code. This approach uses only the built-in <code>json</code> module and doesn't require css or Xpath selectors.</p>
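<p>To actually emit the links as items, the end of <code>parse</code> could continue like this (a sketch continuing the code above):</p> <pre><code>    # ...at the end of parse(), after 'images' has been built:
    for image_url in images:
        yield {"image_link": image_url}
</code></pre>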
python|html|xpath|web-scraping|scrapy
2
1,909,692
60,543,207
How to solve tensor flow cpu dll not found error
<p>I have installed <code>tensorflow v2.1.0</code> with <code>python version 3.6.6</code> and <code>pip version 20.0.2</code>. When I try to import tensorflow I get the error below.</p> <pre><code>C:\Users\Dexter&gt;python Python 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import tensorflow Traceback (most recent call last): File "C:\Users\Dexter\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in &lt;module&gt; from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Users\Dexter\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in &lt;module&gt; _pywrap_tensorflow_internal = swig_import_helper() File "C:\Users\Dexter\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "C:\Users\Dexter\AppData\Local\Programs\Python\Python36\lib\imp.py", line 243, in load_module return load_dynamic(name, filename, file) File "C:\Users\Dexter\AppData\Local\Programs\Python\Python36\lib\imp.py", line 343, in load_dynamic return _load(spec) ImportError: DLL load failed: The specified module could not be found. </code></pre> <p>When I search on Google I always get tensorflow-gpu solutions, but I don't have any graphics card in my system. Below is the info for my display driver. Please help me with this; I am stuck. <a href="https://i.stack.imgur.com/SHIRP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SHIRP.jpg" alt="enter image description here"></a></p> <p>I have the C++ Redistributable for Visual Studio 2017.</p> <p><a href="https://i.stack.imgur.com/AwE8W.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AwE8W.jpg" alt="enter image description here"></a></p>
<p>As per <a href="https://www.tensorflow.org/install/pip" rel="nofollow noreferrer">installation instructions for Windows</a>, Tensorflow 2.1.0 requires <em>Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017, and 2019</em> - which is what you are (partially) missing. Moreover, <em>starting with the TensorFlow 2.1.0 version, the msvcp140_1.dll file is required from this package (which may not be provided from older redistributable packages).</em> </p> <p>That's why you're getting the error. Install the missing packages following <a href="https://www.tensorflow.org/install/pip" rel="nofollow noreferrer">these instructions</a>. In essence, grab the 2015, 2017 and 2019 Redistributable, all in single package, available from <a href="https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads" rel="nofollow noreferrer">here</a>.</p>
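<p>After installing the redistributable, a quick hypothetical sanity check from Python (Windows only) confirms that the specific DLL TensorFlow 2.1 needs is now loadable:</p> <pre><code>import ctypes

# Raises OSError if the runtime from the 2015-2019 redistributable is still missing.
ctypes.WinDLL('msvcp140_1.dll')

import tensorflow as tf
print(tf.__version__)
</code></pre>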
python|python-3.x|tensorflow|tensorflow2.0
3
1,909,693
58,066,908
match and replace string items in list with string items from another list
<p>I have three lists: <code>base, match, and replac</code>; <code>match</code> and <code>replac</code> are the same length.</p> <pre><code>base = ['abc', 'def', 'hjk'] match = ['abc', 'hjk'] replac = ['abcde', 'hjklm'] </code></pre> <p>I would like to modify the <code>base</code> list by matching string items in <code>match</code> and replacing them with the item at the same index in <code>replac</code>.</p> <p>Expected output: <code>base = ['abcde', 'def', 'hjklm']</code></p>
<p>Here is how I'd do it:</p> <pre><code>mapp = dict(zip(match,replac)) res = [mapp[e] if e in mapp else e for e in base] </code></pre>
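<p>Equivalently, <code>dict.get</code> with a default avoids the explicit membership test. A quick check against the sample data from the question:</p> <pre><code>base = ['abc', 'def', 'hjk']
match = ['abc', 'hjk']
replac = ['abcde', 'hjklm']

mapp = dict(zip(match, replac))
res = [mapp.get(e, e) for e in base]  # fall back to the original item
print(res)  # ['abcde', 'def', 'hjklm']
</code></pre>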
python|list|string-matching
2
1,909,694
56,238,588
Remove certain word using regular expression
<p>I have a string which is shown below:</p> <pre><code>a = 'steven (0.00030s ). prince (0.00040s ). kavin (0.000330s ). 23.24.21' </code></pre> <p>I want to remove the numbers inside <code>()</code> and the brackets and want to have it like this:</p> <pre><code>a = 'steven prince kavin 23.24.21' </code></pre>
<p>Use <code>re.sub</code>. Including a leading <code>\s</code> in the pattern also removes the space before each parenthesised group, so the remaining words stay single-spaced.</p> <p><strong>Ex:</strong></p> <pre><code>import re a = 'steven (0.00030s ). prince (0.00040s ). kavin (0.000330s ). 23.24.21' print(re.sub(r"\s\(.*?\)\.", "", a)) </code></pre> <p><strong>Output:</strong></p> <pre><code>steven prince kavin 23.24.21 </code></pre>
python|python-3.x|regex|string
3
1,909,695
18,454,307
mod_wsgi fails when it is asked to read a file
<p>I'm having a puzzling issue where when my Flask application requires reading a file from disk, it fails with the following error:</p> <pre><code>[Mon Aug 26 22:29:48 2013] [error] [client 67.170.62.218] (2)No such file or directory: mod_wsgi (pid=15678): Unable to connect to WSGI daemon process 'flaskapp' on '/var/run/apache2/wsgi.14164.5.1.sock' after multiple attempts. </code></pre> <p>When using the Flask development server, or running an application that does not read files, it works fine.</p> <p><strong>directory structure:</strong></p> <pre><code>/flaskapp /static style.css /templates index.html flaskapp.py flaskapp.wsgi config.json </code></pre> <p><strong>flaskapp.py:</strong></p> <pre><code>import flask import json app = flask.Flask(__name__) #config = json.loads(open('config.json', 'r').read()) @app.route('/') def index(): return "Hello World" #return flask.render_template('index.html') if __name__ == '__main__': app.run(host='0.0.0.0') </code></pre> <p><strong>flaskapp.wsgi</strong></p> <pre><code>import sys sys.path.append('/root/flaskapp') from flaskapp import app as application </code></pre> <p><strong>sites-available:</strong></p> <pre><code>&lt;VirtualHost *:80&gt; ServerName localhost WSGIDaemonProcess flaskapp user=www-data group=www-data threads=5 WSGIScriptAlias / /root/flaskapp/flaskapp.wsgi WSGIScriptReloading On &lt;Directory /root/flaskapp&gt; WSGIProcessGroup flaskapp WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all &lt;/Directory&gt; &lt;/VirtualHost&gt; </code></pre>
<p>For the socket error see:</p> <ul> <li><a href="http://code.google.com/p/modwsgi/wiki/ConfigurationIssues#Location_Of_UNIX_Sockets" rel="nofollow">http://code.google.com/p/modwsgi/wiki/ConfigurationIssues#Location_Of_UNIX_Sockets</a></li> </ul> <p>BTW, don't use relative path names for files you want to load either:</p> <ul> <li><a href="http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Application_Working_Directory" rel="nofollow">http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Application_Working_Directory</a></li> </ul> <p>Although commented out right now, loading config.json in your code as you are would also usually fail.</p>
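<p>For the relative-path point, a minimal sketch of resolving <code>config.json</code> next to the module instead of against the daemon's working directory (which is typically <code>/</code> under mod_wsgi):</p> <pre><code>import json
import os

# Build an absolute path anchored at this source file's directory.
HERE = os.path.dirname(os.path.abspath(__file__))
with open(os.path.join(HERE, 'config.json')) as f:
    config = json.load(f)
</code></pre>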
python|apache|flask|mod-wsgi
0
1,909,696
57,485,604
How to make a python webserver that gives a requested csv file
<p>I want to create a webserver on Linux in Python so that I can use it to get csv files from the Linux machine onto a Windows machine. I am quite unfamiliar with networking terminology, so I would greatly appreciate it if the answer is a little detailed. I don't want to create a website, just the webserver to get the csv file requested.</p>
<p>If you have the files on disk and just want to serve them over HTTP, you can use Python's built-in modules:</p> <pre><code>python3 -m http.server </code></pre> <p>for Python 3.x, or</p> <pre><code>python -m SimpleHTTPServer </code></pre> <p>for Python 2.x.</p> <p>If you need something more dynamic, check out <a href="https://flask.palletsprojects.com/en/1.1.x/quickstart/" rel="nofollow noreferrer">Flask</a>.</p>
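<p>If you do reach for Flask, a minimal hypothetical app for serving CSVs could look like this (the directory path and route name are placeholders, not taken from the question):</p> <pre><code>from flask import Flask, send_from_directory

app = Flask(__name__)
DATA_DIR = '/home/user/csv_files'  # wherever the CSV files live

@app.route('/csv/&lt;path:filename&gt;')
def get_csv(filename):
    # as_attachment=True makes clients download rather than display the file.
    return send_from_directory(DATA_DIR, filename, as_attachment=True)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
</code></pre> <p>From the Windows machine you could then fetch <code>http://&lt;linux-host&gt;:8000/csv/data.csv</code> in a browser or with any HTTP client.</p>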
python|webserver
1
1,909,697
57,344,590
Count of most 'two words combination' popular Hebrew words in a pandas Dataframe with nltk
<p>I have a csv data file containing a column 'notes' with satisfaction answers in Hebrew.</p> <p>I want to find the most popular words and the most popular two-word combinations, count the number of times they show up, and plot them in a bar chart.</p> <p>My code so far:</p> <pre><code>PYTHONIOENCODING="UTF-8" df= pd.read_csv('keep.csv', encoding='utf-8' , usecols=['notes']) words= df.notes.str.split(expand=True).stack().value_counts() </code></pre> <p>This produces a list of the words with a counter, but it takes into account all the Hebrew stopwords and doesn't produce two-word combination frequencies. I also tried this code, and it's not what I'm looking for:</p> <pre><code> top_N = 30 txt = df.notes.str.lower().str.replace(r'\|', ' ').str.cat(sep=' ') words = nltk.tokenize.word_tokenize(txt) word_dist = nltk.FreqDist(words) rslt = pd.DataFrame(word_dist.most_common(top_N), columns=['Word', 'Frequency']) print(rslt) print('=' * 60) </code></pre> <p>How can I use nltk to do that?</p>
<p>In addition to what jezrael posted, I would like to introduce another way of achieving this. Since you want both the individual and the two-word frequencies, you can take advantage of the <a href="https://tedboy.github.io/nlps/generated/generated/nltk.everygrams.html" rel="nofollow noreferrer">everygrams</a> function.</p> <p>Given a dataframe:</p> <pre><code>import pandas as pd df = pd.DataFrame() df['notes'] = ['this is sentence one', 'is sentence two this one', 'sentence one was good'] </code></pre> <p>Get the one-word and two-word forms using <code>everygrams(word_tokenize(x), 1, 2)</code>; to include three-word combinations as well, change the 2 to 3, and so on. So in your case it should be:</p> <pre><code>from nltk import everygrams, word_tokenize x = df['notes'].apply(lambda x: [' '.join(ng) for ng in everygrams(word_tokenize(x), 1, 2)]).to_frame() </code></pre> <p>At this point you should see:</p> <pre><code> notes 0 [this, is, sentence, one, this is, is sentence... 1 [is, sentence, two, this, one, is sentence, se... 2 [sentence, one, was, good, sentence one, one w... </code></pre> <p>You can now get the counts by flattening the list and using value_counts:</p> <pre><code>import numpy as np flattenList = pd.Series(np.concatenate(x.notes)) freqDf = flattenList.value_counts().sort_index().rename_axis('notes').reset_index(name = 'frequency') </code></pre> <p>Final output:</p> <pre><code> notes frequency 0 good 1 1 is 2 2 is sentence 2 3 one 3 4 one was 1 5 sentence 3 6 sentence one 2 7 sentence two 1 8 this 2 9 this is 1 10 this one 1 11 two 1 12 two this 1 13 was 1 14 was good 1 </code></pre> <p>And now plotting the graph is easy:</p> <pre><code>import matplotlib.pyplot as plt plt.figure() flattenList.value_counts().plot(kind = 'bar', title = 'Count of 1-word and 2-word frequencies') plt.xlabel('Words') plt.ylabel('Count') plt.show() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/k27VQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k27VQ.png" alt="enter image description here"></a></p>
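<p>One extra note on the stopword complaint in the question (my addition, with a made-up stopword set purely for illustration): NLTK's bundled stopword corpus has no Hebrew list, so you would filter against your own set before counting:</p> <pre><code># Hypothetical, hand-maintained Hebrew stopword set; extend as needed.
hebrew_stopwords = {'של', 'את', 'על', 'עם', 'זה'}

x = df['notes'].apply(
    lambda s: [' '.join(ng)
               for ng in everygrams(word_tokenize(s), 1, 2)
               if not any(w in hebrew_stopwords for w in ng)]).to_frame()
</code></pre>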
python|pandas|utf-8|nltk|hebrew
2
1,909,698
42,239,634
Using ConfigParser in Python - tests OK but wipes file once deployed
<p>I am running my own pool control system and I wanted to implement copying certain system parameters to a flat file for processing by a web interface I am working on. Since there are just a few entries I liked ConfigParser for the job. </p> <p>I built a test setup and it worked great. Here is that code:</p> <pre><code>import ConfigParser config = ConfigParser.ConfigParser() get_pool_level_resistance_value = 870 def read_system_status_values(file, section, system): config.read(file) current_status = config.get(section, system) print("Our current {} is {}.".format(system, current_status)) def update_system_status_values(file, section, system, value): cfgfile = open(file, 'w') config.set(section, system, value) config.write(cfgfile) cfgfile.close() print("{} updated to {}".format(system, value)) def read_test(): read_system_status_values("current_system_status", "system_status", "pool_level_resistance_value") def write_test(): update_system_status_values("current_system_status", "system_status", "pool_level_resistance_value", get_pool_level_resistance_value) read_test() write_test() read_test() </code></pre> <p>This is my config file "current_system_status":</p> <pre><code>[system_status] running_status = True fill_control_manual_disable = False pump_running = True pump_watts = 865 pool_level_resistance_value = 680 pool_level = MIDWAY pool_is_filling = False pool_is_filling_auto = False pool_is_filling_manual = False pool_current_temp = 61 pool_current_ph = 7.2 pool_current_orp = 400 sprinklers_running = False pool_level_sensor_battery_voltage = 3.2 pool_temp_sensor_battery_voltage = 3.2 pool_level_sensor_time_delta = 32 pool_temp_sensor_time_delta = 18 </code></pre> <p>When I run my test file I get this output:</p> <pre><code>ssh://root@scruffy:22/usr/bin/python -u /root/pool_control/V3.2/system_status.py Our current pool_level_resistance_value is 350. pool_level_resistance_value updated to 870 Our current pool_level_resistance_value is 870. Process finished with exit code 0 </code></pre> <p>This is exactly as expected. However when I move it to my main pool_sensors.py module, anytime I run it I get the following error:</p> <pre><code>Traceback (most recent call last): File "/root/pool_control/V3.2/pool_sensors.py", line 58, in update_system_status_values config.set(section, system, value) File "/usr/lib/python2.7/ConfigParser.py", line 396, in set raise NoSectionError(section) ConfigParser.NoSectionError: No section: 'system_status' Process finished with exit code 1 </code></pre> <p>I then debugged (using PyCharm), and as I was walking through the code, as soon as it gets to this line:</p> <pre><code>cfgfile = open(file, 'w') </code></pre> <p>it wipes out my file completely, and hence I get the NoSectionError. When I debug my test file and it gets to that exact same line of code, it opens the file and updates it as expected.</p> <p>Both the test file and the actual file are in the same directory on the same machine using the same version of everything. The code that opens and writes the files is an exact duplicate of the test code with the exception of a debug print statement in the "production" code. </p> <p>I tried various methods including:</p> <pre><code>cfgfile = open(file, 'w') cfgfile = open(file, 'r') cfgfile = open(file, 'wb') </code></pre> <p>but no matter which one I use, once I include the test code in my production file, as soon as it hits that line it completely wipes out the file as opposed to updating it like my test file does.</p> <p>Here are the pertinent lines of the code where I call it:</p> <pre><code>import pooldb # Database information import mysql.connector from mysql.connector import errorcode import time import notifications import logging import ConfigParser DEBUG = pooldb.DEBUG config = ConfigParser.ConfigParser() def read_system_status_values(file, section, system): config.read(file) current_status = config.get(section, system) if DEBUG: print("Our current {} is {}.".format(system, current_status)) def update_system_status_values(file, section, system, value): cfgfile = open(file, 'w') config.set(section, system, value) config.write(cfgfile) cfgfile.close() if DEBUG: print("{} updated to {}".format(system,value)) def get_pool_level_resistance(): """ Function to get the current level of our pool from our MySQL DB. """ global get_pool_level try: cnx = mysql.connector.connect(user=pooldb.username, password=pooldb.password, host=pooldb.servername, database=pooldb.emoncms_db) except mysql.connector.Error as err: if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: logger.error( 'Database connection failure: Check your username and password') if DEBUG: print( "Database connection failure: Check your username and " "password") elif err.errno == errorcode.ER_BAD_DB_ERROR: logger.error('Database does not exist. Please check your settings.') if DEBUG: print("Database does not exist. Please check your settings.") else: logger.error( 'Unknown database error, please check all of your settings.') if DEBUG: print( "Unknown database error, please check all of your " "settings.") else: cursor = cnx.cursor(buffered=True) cursor.execute(("SELECT data FROM `%s` ORDER by time DESC LIMIT 1") % ( pooldb.pool_resistance_table)) for data in cursor: get_pool_level_resistance_value = int("%1.0f" % data) cursor.close() logger.info("Pool Resistance is: %s", get_pool_level_resistance_value) if DEBUG: print( "pool_sensors: Pool Resistance is: %s " % get_pool_level_resistance_value) print( "pooldb: Static critical pool level resistance set at (" "%s)." % pooldb.pool_resistance_critical_level) print( "pooldb: Static normal pool level resistance set at (%s)." % pooldb.pool_resistance_ok_level) cnx.close() print("We made it here with a resistance of (%s)" % get_pool_level_resistance_value) update_system_status_values("current_system_status", "system_status", "pool_level_resistance_value", get_pool_level_resistance_value) if get_pool_level_resistance_value &gt;= pooldb.pool_resistance_critical_level: get_pool_level = "LOW" update_system_status_values("current_system_status", "system_status", "pool_level", get_pool_level) if DEBUG: print("get_pool_level_resistance() returned pool_level = LOW") else: if get_pool_level_resistance_value &lt;= pooldb.pool_resistance_ok_level: get_pool_level = "OK" update_system_status_values("current_system_status", "system_status", "pool_level", get_pool_level) if DEBUG: print("get_pool_level_resistance() returned pool_level = OK") if DEBUG: print("Our Pool Level is %s." % get_pool_level) return get_pool_level </code></pre> <p>I suspect that it might have something to do with another import maybe conflicting with the open(file,'w').</p> <p>My main module is pool_fill_control.py and it has these imports:</p> <pre><code>import pooldb # Configuration information import datetime import logging import os import socket import subprocess import threading import time import RPi.GPIO as GPIO # Import GPIO Library import mysql.connector import requests import serial from mysql.connector import errorcode import notifications import pool_sensors import ConfigParser </code></pre> <p>Within a function in that module, it then calls my pool_sensors.py module shown above using this line of code:</p> <pre><code>get_pool_level = pool_sensors.get_pool_level_resistance() </code></pre> <p>Any information or help as to why it works one way and not the other would be greatly appreciated.</p>
<p>Well, interestingly enough, after doing more research and looking line by line through the code, I realized that the only thing I was doing differently in my test file was reading the file before writing to it. </p> <p>So I changed my code as follows:</p> <p>Old Code:</p> <pre><code>def update_system_status_values(file, section, system, value): cfgfile = open(file, 'w') config.set(section, system, value) config.write(cfgfile) cfgfile.close() if DEBUG: print("{} updated to {}".format(system,value)) </code></pre> <p>New Code (thanks @ShadowRanger):</p> <pre><code>def update_system_status_values(file, section, system, value): config.read(file) cfgfile = open('tempfile', 'w') config.set(section, system, value) config.write(cfgfile) cfgfile.close() os.rename('tempfile', file) if DEBUG: print("{} updated to {}".format(system, value)) </code></pre> <p>and now it works like a charm!!</p> <p>These are the steps now:</p> <p>1) Read it</p> <p>2) Open a temp file</p> <p>3) Update it</p> <p>4) Write temp file</p> <p>5) Close temp file</p> <p>6) Rename temp file over main file</p>
python|configparser
0
1,909,699
53,927,971
How to call function once in testcases in pytest
<p>If I execute the test case below I get the following output:</p> <p>sample</p> <p>test_a</p> <p>sample </p> <p>test_b</p> <p>Here sample() executes before every method in the test case; I want to execute the function once at the start of the test case, not before every method. I want output like below:</p> <p>sample</p> <p>test_a</p> <p>test_b</p> <p>Ex:</p> <pre><code>def sample(): print("sample") class Test_example(APITestCase): def setUp(self): sample() def test_a(self): print("test_a") def test_b(self): print("test_b") </code></pre>
<p>You want a class scoped fixture:</p> <pre><code>@pytest.fixture(scope="class") def sample(): print("sample") </code></pre> <p>But you need to explicitly use the fixture in your tests:</p> <pre><code>@pytest.mark.usefixtures("sample") class Test_example(APITestCase): def test_x(self): pass </code></pre> <p>Note that you don't need to call the fixture, either. It's a feature of your testing suite and is automatically called by pytest.</p>
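<p>Alternatively, since <code>Test_example</code> subclasses <code>APITestCase</code> (a unittest-style class), a plain <code>setUpClass</code> also runs the helper exactly once per class, with no pytest fixture involved (a sketch):</p> <pre><code>class Test_example(APITestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()  # keep APITestCase's own class-level setup
        sample()

    def test_a(self):
        print("test_a")

    def test_b(self):
        print("test_b")
</code></pre>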
python|pytest
0