Column      Type             Min    Max
Unnamed: 0  int64            0      1.91M
id          int64            337    73.8M
title       string (length)  10     150
question    string (length)  21     64.2k
answer      string (length)  19     59.4k
tags        string (length)  5      112
score       int64            -10    17.3k
1,909,300
30,677,288
Reading file contents line-by-line while using each iteration in another script/function for manipulation [Python/Threading]
<p>I'm using two .py scripts. read.py reads the content of a file line-by-line, and process.py needs to access the values from read.py for manipulation. How do I access each read-line iteration from read.py, and use the value in process.py for manipulation?</p> <p><strong>read.py</strong></p> <pre><code>def read_file_line():
    with open('data_file.txt') as file:
        for line in file:
            lineValue = line
            lineStripValue = lineValue.rstrip("\n")
            formattedLine = lineStripValue
            print formattedLine
</code></pre> <p><strong>process.py</strong></p> <pre><code>def process_data():
    for line in read_file_line()
        lineValue = lineData
        newLineValue = str('text') + lineValue
        print newLineValue
</code></pre> <p>First, I'm wondering whether read.py has the functionality required, and whether a loop is needed for the script-to-script function communication. It does read the file contents correctly, and without loading the entire file into memory (the files are 50GB).</p> <p>Secondly, what are the general patterns for making one script's functions and return values accessible in another script, or for using functions as parameters in other functions?</p> <p><strong>Example</strong></p> <p>My data_file has keys:</p> <pre><code>0004672
00054356-346436
7437865663-7363
23562-3735-9994
</code></pre> <p>I want to read the contents of data_file line-by-line while having each line stored in a variable that can be accessed from another script's function. If the manipulation was to factor each key by X and return the result, read.py would iterate over the data_file while simultaneously passing each iterative value into another script's function for processing. I'm doing a lot of work that requires different functions and scripts to be used in other scripts and functions, and want to follow any standard techniques for this kind of communication. Would threading be needed if the scripts are running simultaneously?</p> <p><strong>Output</strong> (factorkey = string value)</p> <pre><code>factorkey0004672
factorkey00054356-346436
factorkey7437865663-7363
factorkey23562-3735-9994
</code></pre>
<p>Your problem is that you are printing the lines to <code>stdout</code> in <code>read_file_line()</code>, but you actually want to return the lines so you can keep working with them. So, why not use a generator to get the lines one by one?</p> <p>E.g.,</p> <h2>read.py</h2> <pre><code>def read_file_line(path):
    with open(path, 'r') as file:
        for line in file:
            lineValue = line
            lineStripValue = lineValue.rstrip("\n")
            formattedLine = lineStripValue
            yield formattedLine
</code></pre> <h2>process.py</h2> <pre><code>from read import read_file_line

def process_data(path):
    for line in read_file_line(path):
        newLineValue = 'text' + line
        print newLineValue

process_data('./data_file.txt')
</code></pre>
python|multithreading|function|stdout|race-condition
0
1,909,301
42,598,630
Why can't I use preprocessing module in Keras?
<p>I'm trying to use the function pad_sequences() but the same error keeps rising: 'AttributeError: 'module' object has no attribute 'sequence''</p> <p>I have followed the Keras documentation and I can't figure out why it does not work. Here are the lines of code:</p> <pre><code>from keras import preprocessing
import keras

X_test = sequence.pad_sequences(X_test, maxlen=500)
X_test = preprocessing.sequence.pad_sequences(X_test, maxlen=500)
X_test = keras.preprocessing.sequence.pad_sequences(X_test, maxlen=500)
</code></pre> <p>None of the above lines seem to work.</p>
<p>The first line cannot work as written: with your imports there is no standalone name <code>sequence</code>, so go through the <code>preprocessing</code> module instead:</p> <pre><code>X_test = preprocessing.sequence.pad_sequences(X_test, maxlen=500)
</code></pre> <p>Alternatively, you can simply import <code>pad_sequences</code> directly</p> <pre><code>from keras.preprocessing.sequence import pad_sequences
</code></pre> <p>and replace <code>preprocessing.sequence.pad_sequences</code> with just <code>pad_sequences</code>.</p>
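<p>Putting the direct-import fix together (a minimal sketch; <code>X_test</code> is assumed to be a list of integer sequences as in the question):</p> <pre><code>from keras.preprocessing.sequence import pad_sequences

# X_test: list of integer sequences of varying length
X_test = pad_sequences(X_test, maxlen=500)
</code></pre>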
python|keras
6
1,909,302
50,779,637
Detecting frequency of a training set
<p>I am trying to detect the frequency of the x-axis, which refers to the (m) training set in my LSTM model:</p> <pre><code>r,time, x, y, z, m, s,l = np.loadtxt('FINALkneeTRAIN.txt', delimiter = ',', unpack = True)

spectrum = fft.fft(m)
freq = fft.fftfreq(len(spectrum))
plt(freq, abs(spectrum))
</code></pre> <p>but it gives me the following error:</p> <pre><code>plt(freq, abs(spectrum))
TypeError: 'module' object is not callable
</code></pre>
<p>You should provide some more information on your code. But I assume that this line is written somewhere:</p> <pre><code>import matplotlib.pyplot as plt </code></pre> <p>In this case, when you write <code>plt(freq, abs(spectrum))</code> you are referring to the module <code>plt</code> instead of a plotting function. If you do have the above line you probably want</p> <pre><code>plt.plot(freq, abs(spectrum)) </code></pre> <p>In addition, you may find this numpy docpage useful</p> <p><a href="https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.fft.fft.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.fft.fft.html</a></p>
python|python-3.x|numpy|frequency
1
1,909,303
57,771,110
Checking row-by-row if a value in one column is present as a substring of a value in another column, appending the string if the boolean value = False
<p>I'm looking to improve the quality of the title descriptions of some of the items listed in a product feed by creating a function which loops through existing fields and checks whether these items are present.</p> <p>If the value in the column is not present, I wish to append the item into the existing title at the start of the title.</p> <p>So far, I have tried multiple methods including using boolean values to see if the value is true or false. However, beyond this point I'm unable to use this to loop through each row and append the string if FALSE.</p> <p>Here is some sample data:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

feed = pd.read_csv(r'...feed.csv')
cols = ['title', 'color', 'brand']
df = feed.loc[:,cols]
</code></pre> <h1>Output</h1> <pre class="lang-py prettyprint-override"><code>                title  color      brand
0  Testy Black Jumper  black      Testy
1       White T-Shirt  white  Testy_New
2    Testy Red Jacket    red      Testy
3            Trousers  green      Testy
</code></pre> <h1>Attempt 1 (Does not work)</h1> <pre class="lang-py prettyprint-override"><code>def brand_checker(df):
    for row in df:
        if row in df[~df['title'].isin(df['brand']):
            m = df.filter(like='title').apply(lambda x: x.str.contains(str(df['brand'])), axis=1).all(axis=1)
            df['new_title'] = np.where(m, df['title'], df['brand'] + &quot; &quot; + df['title'])
        else:
            pass
    return df

df2 = brand_checker(df)
df.head(3)
</code></pre> <p>At the moment I am getting the following error message:</p> <p>&quot;SyntaxError: invalid syntax&quot;</p> <h1>Expected Output:</h1> <pre class="lang-py prettyprint-override"><code>                  title  color      brand
0    Testy Black Jumper  black      Testy
1   Testy White T-Shirt  white      Testy
2  Testy_New Red Jacket    red  Testy_New
3        Testy Trousers  green      Testy
</code></pre> <p>How am I able to check row-by-row if the brand is currently present in the title (order does not matter) and then append to the start if not?</p> <p>Ideally, I would like to replicate the process for color and/or any other columns which may be added into the dataframe in the future.</p>
<p>You will probably have better luck with something like this. Note that your expected output prepends the brand to the <em>title</em> column, so that is the column to assign back to. I noticed you had some str conversion going on in there, so if your data types aren't str already, you may have to add some conversion to this.</p> <pre class="lang-py prettyprint-override"><code>def brand_checker(df):
    for x in range(len(df)):
        if df.iloc[x, 2] not in df.iloc[x, 0]:  # brand missing from title
            df.iloc[x, 0] = df.iloc[x, 2] + " " + df.iloc[x, 0]
    return df

df2 = brand_checker(df)
df2.head(3)
</code></pre>
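<p>For larger frames, a sketch of the same logic done with a single boolean-masked assignment instead of row-by-row <code>.iloc</code> writes (this assumes both columns already hold plain strings):</p> <pre class="lang-py prettyprint-override"><code># True where the brand does not appear in the title
mask = [b not in t for t, b in zip(df['title'], df['brand'])]

# prepend the brand only on those rows
df.loc[mask, 'title'] = df.loc[mask, 'brand'] + ' ' + df.loc[mask, 'title']
</code></pre>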
python|pandas
0
1,909,304
55,576,632
I opened VScode from Anaconda and running it in conda based python environment, but numpy is not activated
<p>I installed Anaconda and VS Code and am running a Python 2.7.15 conda-based environment, but numpy cannot be used. I checked Anaconda Navigator and all numpy packages are installed.</p> <p><a href="https://i.stack.imgur.com/L04Lj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L04Lj.png" alt="enter image description here"></a></p>
<p>Try creating a conda environment that specifies numpy as a dependency instead of running directly out of the 'base' environment.</p>
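<p>For example, something like this (a sketch; <code>myenv</code> is a placeholder name, and Python 2.7 matches the setup in the question):</p> <pre><code>conda create -n myenv python=2.7 numpy
conda activate myenv
</code></pre> <p>Then in VS Code, open the Command Palette and run &quot;Python: Select Interpreter&quot; to point the editor at the new environment.</p>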
python|python-2.7|numpy|visual-studio-code
0
1,909,305
28,813,670
Selenium can't find element
<p>I am having trouble accessing elements:</p> <pre><code>&lt;fieldset&gt;
  &lt;legend&gt; Legend1 &lt;/legend&gt;
  &lt;table width=100%" cellspacing="3" bgcolor="white"&gt;
    &lt;tbody&gt;
      &lt;tr&gt;...&lt;/tr&gt;
      &lt;tr&gt;...&lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;
  &lt;fieldset&gt;
    &lt;legend&gt; Legend2 &lt;/legend&gt;
    &lt;table width="100%" cellspacing="3" bgcolor="white" align="center"&gt;
      &lt;tbody&gt;
        &lt;tr&gt;
          &lt;td&gt;&lt;/td&gt;
          &lt;td class="reportLabel" nowrap=""&gt;Label1&lt;/td&gt;
          &lt;td class="reportField&gt;Field1&lt;/td&gt;
          &lt;td&gt;&lt;/td&gt;
        &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;fieldset&gt;
  ...
</code></pre> <p>I can access everything in the first table (before entering a sub-fieldset). However, I can't access anything from the fieldset on. The error I get is:</p> <pre><code>Message: Unable to find element with xpath == ...
</code></pre> <p>Is there something special I have to do when there are new fieldsets? Similar to having to switch frames?</p> <p>The command I'm using is:</p> <pre><code>ret = self.driver.find_element_by_xpath("//fieldset/legend[text()='Legend2']/following::table/tbody/tr/td[@class='reportlabel'][text()='Label1']")
</code></pre> <p>The reason I'm including the legend and following it with 'following' is that there are a lot of different sections and legends within a previous one, and I'd like to ensure that the field is indeed in the proper section.</p> <p>I have also tried simpler things, though, such as:</p> <pre><code>ret = self.driver.find_element_by_xpath("//fieldset/table/tbody/tr/td[@class='reportLabel][text()='Label1']")
</code></pre> <p>I am using:</p> <pre><code>IE11 (same issue on Firefox, though)
Selenium 2.44.0
Python 2.7
Windows 7 32 bit
IEDriverServer.exe
</code></pre> <p>Does anyone know why I can't access these elements?</p>
<p>Your second <code>XPATH</code> looks correct except that you are missing a <code>'</code> after <code>reportLabel</code>. Corrected:</p> <pre><code>//fieldset/table/tbody/tr/td[@class='reportLabel'][text()='Label1']
</code></pre> <p><strong>Working <code>xpath</code> as per OP's comment</strong></p> <pre><code>//legend[contains(.,'Legend2')]//..//td[contains(text(),'Label1')]
</code></pre>
python|selenium|xpath
3
1,909,306
56,977,820
Better way to use SpaCy to parse sentences?
<p>I'm using SpaCy to find sentences that contain 'is' or 'was' that have pronouns as their subjects and return the object of the sentence. My code works, but I feel like there must be a much better way to do this.</p> <pre><code>import spacy
nlp = spacy.load('en_core_web_sm')

ex_phrase = nlp("He was a genius. I really liked working with him. He is a dog owner. She is very kind to animals.")

#create an empty list to hold any instance of this particular construction
list_of_responses = []

#split into sentences
for sent in ex_phrase.sents:
    for token in sent:
        #check to see if the word 'was' or 'is' is in each sentence, if so, make a list of the verb's constituents
        if token.text == 'was' or token.text == 'is':
            dependency = [child for child in token.children]

            #if the first constituent is a pronoun, make sent_object equal to the item at index 1 in the list of constituents
            if dependency[0].pos_ == 'PRON':
                sent_object = dependency[1]

                #create a string of the entire object of the verb. For instance, if sent_object = 'genius', this would create a string 'a genius'
                for token in sent:
                    if token == sent_object:
                        whole_constituent = [t.text for t in token.subtree]
                        whole_constituent = " ".join(whole_constituent)

                #check to see what the pronoun was, and depending on if it was 'he' or 'she', construct a coherent followup sentence
                if dependency[0].text.lower() == 'he':
                    returning_phrase = f"Why do you think him being {whole_constituent} helped the two of you get along?"
                elif dependency[0].text.lower() == 'she':
                    returning_phrase = f"Why do you think her being {whole_constituent} helped the two of you get along?"

                #add each followup sentence to the list. For some reason it creates a lot of duplicates, so I have to use set
                list_of_responses.append(returning_phrase)

list_of_responses = list(set(list_of_responses))
</code></pre>
<p>It seems like your code is trying to do something more complicated than what you describe in your question. I have tried to do what it looks like you want to do with your code. Getting the object/attribute of a verb "is" or "was" is just part of this.</p> <pre class="lang-py prettyprint-override"><code>import spacy
from pprint import pprint

nlp = spacy.load('en')

text = "He was a genius. I really liked working with him. He is a dog owner. She is very kind to animals."

def get_pro_nsubj(token):
    # get the (lowercased) subject pronoun if there is one
    return [child.lower_ for child in token.children if child.dep_ == 'nsubj'][0]

list_of_responses = []

# a mapping of subject to object pronouns
subj_obj_pro_map = {'he': 'him',
                    'she': 'her'
                    }

for token in nlp(text):
    if token.pos_ in ['NOUN', 'ADJ']:
        if token.dep_ in ['attr', 'acomp'] and token.head.lower_ in ['is', 'was']:
            # to test for lemma 'be' use token.head.lemma_ == 'be'
            nsubj = get_pro_nsubj(token.head)
            if nsubj in ['he', 'she']:
                # get the text of each token in the constituent and join it all together
                whole_constituent = ' '.join([t.text for t in token.subtree])
                obj_pro = subj_obj_pro_map[nsubj]  # convert subject to object pronoun
                returning_phrase = 'Why do you think {} being {} helped the two of you get along?'.format(obj_pro, whole_constituent)
                list_of_responses.append(returning_phrase)

pprint(list_of_responses)
</code></pre> <p>Which outputs this:</p> <pre><code>['Why do you think him being a genius helped the two of you get along?',
 'Why do you think him being a dog owner helped the two of you get along?',
 'Why do you think her being very kind to animals helped the two of you get '
 'along?']
</code></pre>
python|nlp|spacy
0
1,909,307
56,911,536
How do I make it so my code loops through all items on a page?
<p>I'm trying to scrape an IMDB list, but currently all I that prints out in the table is the first movie (Toy Story).</p> <p>I've tried to initialize count = 0 and then I've tried to update first_movie = movie_containers[count+1] at the end of the for loop, but it doesn't work. Whatever I try, I get various errors such as 'Arrays Must Be the Same Length'. When it does work, like I said, only the first movie on the page is printed into the table 50 times.</p> <pre class="lang-py prettyprint-override"><code>from bs4 import BeautifulSoup
from requests import get
import pandas as pd

url = 'https://www.imdb.com/search/title/?genres=comedy&amp;explore=title_type,genres&amp;pf_rd_m=A2FGELUUNOQJNL&amp;pf_rd_p=3396781f-d87f-4fac-8694-c56ce6f490fe&amp;pf_rd_r=3PWY0EZBAKM22YP2F114&amp;pf_rd_s=center-1&amp;pf_rd_t=15051&amp;pf_rd_i=genre&amp;ref_=ft_gnr_pr1_i_1'
response = get(url)

html = BeautifulSoup(response.text, 'lxml')

movie_containers = html.find_all('div', class_='lister-item mode-advanced')
first_movie = movie_containers[0]

name = first_movie.h3.a.text
year = first_movie.find('span', class_='lister-item-year text-muted unbold').text
rating = float(first_movie.find('div', class_='inline-block ratings-imdb-rating').text.strip())
metascore = int(first_movie.find('span', class_='metascore favorable').text)
vote = first_movie.find('span', attrs={'name':'nv'})
vote = vote['data-value']
gross = first_movie.find('span', attrs={'data-value':'272,257,544'})
gross = '$' + gross['data-value']
info_container = first_movie.findAll('p', class_='text-muted')[0]
certificate = info_container.find('span', class_='certificate').text
runtime = info_container.find('span', class_='runtime').text
genre = info_container.find('span', class_='genre').text.strip()
description = first_movie.findAll('p', class_='text-muted')[1].text.strip()

#second_movie_metascore = movie_containers[1].find('div', class_='ratings-metascore')

names = []
years = []
ratings = []
metascores = []
votes = []
grossing = []
certificates = []
runtimes = []
genres = []
descriptions = []

for container in movie_containers:
    try:
        name = first_movie.h3.a.text
        names.append(name)
    except:
        continue
    try:
        year = first_movie.find('span', class_='lister-item-year text-muted unbold').text
        years.append(year)
    except:
        continue
    try:
        rating = float(first_movie.find('div', class_='inline-block ratings-imdb-rating').text.strip())
        ratings.append(rating)
    except:
        continue
    try:
        metascore = int(first_movie.find('span', class_='metascore favorable').text)
        metascores.append(metascore)
    except:
        continue
    try:
        vote = first_movie.find('span', attrs={'name':'nv'})
        vote = vote['data-value']
        votes.append(vote)
    except:
        continue
    try:
        gross = first_movie.find('span', attrs={'data-value':'272,257,544'})
        gross = '$' + gross['data-value']
        grossing.append(gross)
    except:
        continue
    try:
        certificate = info_container.find('span', class_='certificate').text
        certificates.append(certificate)
    except:
        continue
    try:
        runtime = info_container.find('span', class_='runtime').text
        runtimes.append(runtime)
    except:
        continue
    try:
        genre = info_container.find('span', class_='genre').text.strip()
        genres.append(genre)
    except:
        continue
    try:
        description = first_movie.findAll('p', class_='text-muted')[1].text.strip()
        descriptions.append(description)
    except:
        continue

test_df = pd.DataFrame({'Movie': names,
                        'Year': years,
                        'IMDB': ratings,
                        'Metascore': metascores,
                        'Votes': votes,
                        'Gross': grossing,
                        'Certificate': certificates,
                        'Runtime': runtimes,
                        'Genres': genres,
                        'Descriptions': descriptions})

#print(test_df.info())
print(test_df)
</code></pre> <p>Also, how do I start the pd list at 1, not 0 when it prints out a table?</p>
<p>You can try this code to scrape the data. I'm printing it to the screen for now, but you can put the data into the pandas dataframe instead. (The immediate bug in your own version, by the way, is that the loop iterates <code>for container in movie_containers:</code> but keeps reading from <code>first_movie</code> inside the body, so the first movie is appended every time; use <code>container</code> there.)</p> <pre><code>from bs4 import BeautifulSoup
import requests
import textwrap

url = 'https://www.imdb.com/search/title/?genres=comedy&amp;explore=title_type,genres&amp;pf_rd_m=A2FGELUUNOQJNL&amp;pf_rd_p=3396781f-d87f-4fac-8694-c56ce6f490fe&amp;pf_rd_r=3PWY0EZBAKM22YP2F114&amp;pf_rd_s=center-1&amp;pf_rd_t=15051&amp;pf_rd_i=genre&amp;ref_=ft_gnr_pr1_i_1'
soup = BeautifulSoup(requests.get(url).text, 'lxml')

names = []
years = []
ratings = []
metascores = []
votes = []
grossing = []
certificates = []
runtimes = []
genres = []
descriptions = []

for i in soup.select('.lister-item-content'):
    for t in i.select('h3 a'):
        names.append(t.text)
        break
    else:
        names.append('-')
    for t in i.select('.lister-item-year'):
        years.append(t.text)
        break
    else:
        years.append('-')
    for t in i.select('.ratings-imdb-rating'):
        ratings.append(t.text.strip())
        break
    else:
        ratings.append('-')
    for t in i.select('.metascore'):
        metascores.append(t.text.strip())
        break
    else:
        metascores.append('-')
    for t in i.select('.sort-num_votes-visible span:contains("Votes:") + span[data-value]'):
        votes.append(t['data-value'])
        break
    else:
        votes.append('-')
    for t in i.select('.sort-num_votes-visible span:contains("Gross:") + span[data-value]'):
        grossing.append(t['data-value'])
        break
    else:
        grossing.append('-')
    for t in i.select('.certificate'):
        certificates.append(t.text.strip())
        break
    else:
        certificates.append('-')
    for t in i.select('.runtime'):
        runtimes.append(t.text.strip())
        break
    else:
        runtimes.append('-')
    for t in i.select('.genre'):
        genres.append(t.text.strip().split(','))
        break
    else:
        genres.append('-')
    for t in i.select('p.text-muted')[1:2]:
        descriptions.append(t.text.strip())
        break
    else:
        descriptions.append('-')

for row in zip(names, years, ratings, metascores, votes, grossing, certificates, runtimes, genres, descriptions):
    for col_num, data in enumerate(row):
        if col_num == 0:
            t = textwrap.shorten(str(data), 35)
            print('{: ^35}'.format(t), end='|')
        elif col_num in (1, 2, 3, 4, 5, 6, 7):
            t = textwrap.shorten(str(data), 12)
            print('{: ^12}'.format(t), end='|')
        else:
            t = textwrap.shorten(str(data), 35)
            print('{: ^35}'.format(t), end='|')
    print()
</code></pre> <p>Prints:</p> <pre><code> Toy Story 4 | (2019) | 8.3 | 84 | 50496 |272,257,544 | G | 100 min |['Animation', ' Adventure', ' [...]|When a new toy called "Forky" [...]|
 Charlie's Angels | (2019) | - | - | - | - | - | - |['Action', ' Adventure', ' Comedy']| Reboot of the 2000 action [...] |
 Murder Mystery | (2019) | 6.0 | 38 | 46255 | - | PG-13 | 97 min | ['Action', ' Comedy', ' Crime'] | A New York cop and his wife [...] |
 Eile veel |(III) (2019)| 7.1 | 56 | 10539 | 26,132,740 | PG-13 | 116 min | ['Comedy', ' Fantasy', ' Music'] | A struggling musician [...] |
 Mehed mustas: globaalne oht | (2019) | 5.7 | 38 | 24338 | 66,894,949 | PG-13 | 114 min |['Action', ' Adventure', ' Comedy']|The Men in Black have always [...] |
 Good Omens | (2019) | 8.3 | - | 24804 | - | - | 60 min | ['Comedy', ' Fantasy'] | A tale of the bungling of [...] |
 Ükskord Hollywoodis | (2019) | 9.6 | 88 | 6936 | - | - | 159 min | ['Comedy', ' Drama'] |A faded television actor and [...] |
 Aladdin | (2019) | 7.4 | 53 | 77230 |313,189,616 | PG | 128 min |['Adventure', ' Comedy', ' Family']|A kind-hearted street urchin [...] |
 Mr. Iglesias | (2019– ) | 7.2 | - | 2266 | - | - | 30 min | ['Comedy'] | A good-natured high school [...] |
 Shazam! | (2019) | 7.3 | 70 | 129241 |140,105,000 | PG-13 | 132 min |['Action', ' Adventure', ' Comedy']| We all have a superhero [...] |
 Shaft | (2019) | 6.4 | 40 | 12016 | 19,019,975 | R | 111 min | ['Action', ' Comedy', ' Crime'] | John Shaft Jr., a cyber [...] |
 Kontor |(2005–2013) | 8.8 | - | 301620 | - | - | 22 min | ['Comedy'] |A mockumentary on a group of [...] |
 Sõbrad |(1994–2004) | 8.9 | - | 683205 | - | - | 22 min | ['Comedy', ' Romance'] | Follows the personal and [...] |
 Lelulugu | (1995) | 8.3 | 95 | 800957 |191,796,233 | - | 81 min |['Animation', ' Adventure', ' [...]| A cowboy doll is profoundly [...] |
 Lelulugu 3 | (2010) | 8.3 | 92 | 689098 |415,004,880 | - | 103 min |['Animation', ' Adventure', ' [...]| The toys are mistakenly [...] |
 Orange Is the New Black | (2013– ) | 8.1 | - | 256417 | - | - | 59 min | ['Comedy', ' Crime', ' Drama'] | Convicted of a decade old [...] |
 Brooklyn Nine-Nine | (2013– ) | 8.4 | - | 154342 | - | - | 22 min | ['Comedy', ' Crime'] | Jake Peralta, an immature, [...] |
 Always Be My Maybe | (2019) | 6.9 | 64 | 26210 | - | PG-13 | 101 min | ['Comedy', ' Romance'] | A pair of childhood friends [...] |
 The Dead Don't Die | (2019) | 6.0 | 54 | 6841 | 6,116,830 | R | 104 min | ['Comedy', ' Fantasy', ' Horror'] | The peaceful town of [...] |
 Suure Paugu teooria |(2007–2019) | 8.2 | - | 653122 | - | - | 22 min | ['Comedy', ' Romance'] | A woman who moves into an [...] |
 Lelulugu 2 | (1999) | 7.9 | 88 | 476104 |245,852,179 | - | 92 min |['Animation', ' Adventure', ' [...]|When Woody is stolen by a toy [...]|
 Fast &amp; Furious Presents: [...] | (2019) | - | - | - | - | PG-13 | - |['Action', ' Adventure', ' Comedy']|Lawman Luke Hobbs and outcast [...]|
 Dead to Me | (2019– ) | 8.2 | - | 23149 | - | - | 30 min | ['Comedy', ' Drama'] | A series about a powerful [...] |
 Pintsaklipslased | (2011– ) | 8.5 | - | 328568 | - | - | 44 min | ['Comedy', ' Drama'] | On the run from a drug deal [...] |
 The Secret Life of Pets 2 | (2019) | 6.6 | 55 | 8613 |135,983,335 | PG | 86 min |['Animation', ' Adventure', ' [...]| Continuing the story of Max [...] |
 Good Girls | (2018– ) | 7.9 | - | 18518 | - | - | 43 min | ['Comedy', ' Crime', ' Drama'] | Three suburban mothers [...] |
 Ralph Breaks the Internet | (2018) | 7.1 | 71 | 91165 |201,091,711 | PG | 112 min |['Animation', ' Adventure', ' [...]|Six years after the events of [...]|
 Trolls 2 | (2020) | - | - | - | - | - | - |['Animation', ' Adventure', ' [...]| Sequel to the 2016 animated hit. |
 Booksmart | (2019) | 7.4 | 84 | 24935 | 21,474,121 | R | 102 min | ['Comedy'] | On the eve of their high [...] |
 The Old Man &amp; the Gun | (2018) | 6.8 | 80 | 27337 | 11,277,120 | PG-13 | 93 min |['Biography', ' Comedy', ' Crime'] | Based on the true story of [...] |
 Fleabag | (2016– ) | 8.6 | - | 25041 | - | - | 27 min | ['Comedy', ' Drama'] |A comedy series adapted from [...] |
 Schitt's Creek | (2015– ) | 8.2 | - | 18112 | - | - | 22 min | ['Comedy'] |When rich video-store magnate [...]|
 Catch-22 | (2019– ) | 7.9 | - | 6829 | - | - | 45 min | ['Comedy', ' Crime', ' Drama'] |Limited series adaptation of [...] |
 Häbitu | (2011– ) | 8.7 | - | 171782 | - | - | 46 min | ['Comedy', ' Drama'] | A scrappy, fiercely loyal [...] |
 Jane the Virgin | (2014– ) | 7.8 | - | 30106 | - | - | 60 min | ['Comedy'] | A young, devout Catholic [...] |
 Parks and Recreation |(2009–2015) | 8.6 | - | 178220 | - | - | 22 min | ['Comedy'] | The absurd antics of an [...] |
 One Punch Man: Wanpanman | (2015– ) | 8.9 | - | 87166 | - | - | 24 min | ['Animation', ' Action', ' [...] |The story of Saitama, a hero [...] |
 The Boys | (2019– ) | - | - | - | - | - | 60 min | ['Action', ' Comedy', ' Crime'] |A group of vigilantes set out [...]|
 Pokémon Detective Pikachu | (2019) | 6.8 | 53 | 65217 |142,692,000 | PG | 104 min |['Action', ' Adventure', ' Comedy']| In a world where people [...] |
 Kuidas ma kohtasin teie ema |(2005–2014) | 8.3 | - | 544472 | - | - | 22 min | ['Comedy', ' Romance'] | A father recounts to his [...] |
 It's Always Sunny in Philadelphia | (2005– ) | 8.7 | - | 171517 | - | - | 22 min | ['Comedy'] | Five friends with big egos [...] |
 Stuber | (2019) | 5.9 | 53 | 794 | - | R | 93 min | ['Action', ' Comedy'] |A detective recruits his Uber [...]|
 Moodne perekond | (2009– ) | 8.4 | - | 314178 | - | - | 22 min | ['Comedy', ' Romance'] | Three different but related [...] |
 The Umbrella Academy | (2019– ) | 8.1 | - | 73654 | - | - | 60 min |['Action', ' Adventure', ' Comedy']| A disbanded group of [...] |
 Happy! |(2017–2019) | 8.3 | - | 25284 | - | - | 60 min | ['Action', ' Comedy', ' Crime'] | An injured hitman befriends [...] |
 Rick and Morty | (2013– ) | 9.3 | - | 279411 | - | - | 23 min |['Animation', ' Adventure', ' [...]| An animated series that [...] |
 Cobra Kai | (2018– ) | 8.8 | - | 34069 | - | - | 30 min | ['Action', ' Comedy', ' Drama'] |Decades after their 1984 All [...] |
 Roheline raamat | (2018) | 8.2 | 69 | 215069 | 85,080,171 | PG-13 | 130 min |['Biography', ' Comedy', ' Drama'] | A working-class Italian- [...] |
 Kondid |(2005–2017) | 7.9 | - | 130232 | - | - | 40 min | ['Comedy', ' Crime', ' Drama'] | Forensic anthropologist Dr. [...] |
 Sex Education | (2019– ) | 8.4 | - | 68509 | - | - | 45 min | ['Comedy', ' Drama'] | A teenage boy with a sex [...] |
</code></pre>
python-3.x|web-scraping|beautifulsoup
1
1,909,308
44,638,575
Pytest - tmpdir_factory in pytest_generate_tests
<p>So I have two main portions of code:</p> <ol> <li>Generates an extensive collection of config files in a directory.</li> <li>Runs a single config file generated by the previous.</li> </ol> <p>I want to run a test where first I execute code1 and generate all files, and then for each config file run code2 and verify that the result is good. My attempt so far was:</p> <pre><code>@pytest.fixture(scope='session')
def pytest_generate_tests(metafunc, tmpdir_factory):
    path = os.path.join(tmpdir_factory, "configs")
    gc.main(path, gc.VARIANTS, gc.MODELS, default_curvature_avg=0.0,
            curvature_avg_variation=0.9, default_gradient_avg=0.0,
            gradient_avg_variation=0.9, default_inversion="approximate",
            vary_inversion=False, vary_projections=True)
    params = []
    for model in os.listdir(path):
        model_path = os.path.join(path, model)
        for dataset in os.listdir(model_path):
            dataset_path = os.path.join(model_path, dataset)
            for file_name in os.listdir(dataset_path):
                config_file = os.path.join(dataset_path, file_name)
                folder = os.path.join(dataset_path, file_name[:-5])
                tmpdir_factory.mktemp(folder)
                params.append(dict(config_file=config_file, output_folder=folder))
                metafunc.addcall(funcargs=dict(config_file=config_file, output_folder=folder))

def test_compile_and_error(config_file, output_folder):
    final_error = main(config_file, output_folder)
    assert final_error &lt; 0.9
</code></pre> <p>However, the <code>tmpdir_factory</code> fixture does not work for the <code>pytest_generate_tests</code> method. My question is how to achieve my goal by generating all of the tests?</p>
<p><strong>First and most importantly,</strong> <code>pytest_generate_tests</code> is meant to be a hook in pytest and not a name for a fixture function. Get rid of the <code>@pytest.fixture</code> before it and have another look at <a href="https://docs.pytest.org/en/latest/parametrize.html#pytest-generate-tests" rel="nofollow noreferrer">its docs</a>. Hooks should be written in the <code>conftest.py</code> file or a plugin file, and are collected automatically according to the <code>pytest_</code> prefix.</p> <p>Now for your matter: Just use a temporary directory manually using:</p> <pre><code>import tempfile
import shutil

dirpath = tempfile.mkdtemp()
</code></pre> <p>inside <code>pytest_generate_tests</code>. Save <code>dirpath</code> in a global in the conftest, and delete it in <code>pytest_sessionfinish</code> using</p> <pre><code># ... do stuff with dirpath
shutil.rmtree(dirpath)
</code></pre> <p>Source: <a href="https://stackoverflow.com/a/3223615/3858507">https://stackoverflow.com/a/3223615/3858507</a></p> <p>Remember that if you have more than one test case, <code>pytest_generate_tests</code> will be called for each one. So you had better save all your tempdirs in some list and delete all of them in the end. In contrast, if you only need one tempdir, think about using the hook <code>pytest_sessionstart</code> to create it there and use it later.</p>
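<p>A minimal sketch of how the pieces fit together in <code>conftest.py</code> (the parameter values and the config-generation step are placeholders for your own logic, and <code>metafunc.parametrize</code> is used for the parametrization):</p> <pre><code>import shutil
import tempfile

_tmpdirs = []  # registry of temp dirs created during collection

def pytest_generate_tests(metafunc):
    if "config_file" in metafunc.fixturenames and "output_folder" in metafunc.fixturenames:
        dirpath = tempfile.mkdtemp()
        _tmpdirs.append(dirpath)
        # generate the config files into dirpath here, e.g. gc.main(dirpath, ...),
        # then walk dirpath to build (config_file, output_folder) pairs:
        params = [(dirpath + "/a.json", dirpath + "/a"),  # placeholder values
                  (dirpath + "/b.json", dirpath + "/b")]
        metafunc.parametrize("config_file,output_folder", params)

def pytest_sessionfinish(session, exitstatus):
    # clean up every temp dir once the whole session is done
    for d in _tmpdirs:
        shutil.rmtree(d, ignore_errors=True)
</code></pre>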
python|pytest|fixtures
4
1,909,309
20,555,269
Can I run a loop in the background of another loop? Python
<p>Seen a few q's like these, nothing helped ): Is there a way of running a loop to check if a number is more than say, 100, and if so, do something? For example:</p> <p><code>while x == 1: #DoStuff</code></p> <p>and have this: running simultaneously in the background without waiting upon a user for their input?</p> <pre><code>if moneyLoop &gt; 100:
    nCoins = nCoins + 1?
</code></pre>
<p>In order to do this you have a few options:</p> <ul> <li><a href="http://docs.python.org/2/library/threading.html" rel="nofollow">Python's threading module</a></li> <li><a href="http://www.stackless.com/" rel="nofollow">Stackless Python</a></li> <li><a href="https://pypi.python.org/pypi/greenlet" rel="nofollow">The Greenlet Library</a></li> </ul> <p>There are others if you google python parallelism libraries. </p>
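<p>If you go with the standard <code>threading</code> module, here is a minimal sketch (the variable names are placeholders for your own shared state):</p> <pre><code>import threading
import time

money_loop = 0   # shared counter updated by the main loop
n_coins = 0

def background_check():
    global n_coins
    while True:
        if money_loop &gt; 100:
            n_coins = n_coins + 1
        time.sleep(0.1)  # don't burn CPU while polling

checker = threading.Thread(target=background_check)
checker.daemon = True   # thread exits when the main program does
checker.start()

while True:             # main loop stays free, e.g. to wait on user input
    money_loop = money_loop + 1
    time.sleep(0.01)
</code></pre> <p>Note that because of the GIL, threads like this help with waiting (I/O, timers), not with CPU-bound speedups.</p>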
python
2
1,909,310
51,660,865
Docker image command (python) returning non-zero code
<p>So I'm trying to build a new docker image with Python2.7 and pip for python 2.7 however I'm getting a "The command '/bin/sh -c pip2 install -r requirements.txt' returned a non-zero code: 1" error when trying to build the image.</p> <pre><code>FROM colstrom/python:legacy
MAINTAINER **REDACTED**

RUN pip2 install -r requirements.txt

CMD ["python2.7", "parser.py"]
</code></pre> <p>Any ideas?</p>
<p>You have to <code>COPY</code>/<code>ADD</code> or mount your data (at least <code>requirements.txt</code> and <code>parser.py</code>) into the container.</p> <p>Assuming your Dockerfile resides at the root directory of your project:</p> <pre><code>FROM colstrom/python:legacy
MAINTAINER **REDACTED**

COPY . .

RUN pip2 install -r requirements.txt

CMD ["python2.7", "parser.py"]
</code></pre>
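<p>A typical build-and-run sequence for the corrected Dockerfile (the image tag <code>parser</code> is just a placeholder):</p> <pre><code>docker build -t parser .
docker run --rm parser
</code></pre>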
python-2.7|docker
0
1,909,311
39,070,135
boxplot (from seaborn) would not plot as expected
<p><strong>The boxplot would not plot as expected. This is what it actually plotted:</strong> <a href="https://i.stack.imgur.com/rAdwz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rAdwz.png" alt="enter image description here"></a></p> <p><strong>This is what it is supposed to plot:</strong> <a href="https://i.stack.imgur.com/dhZrP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dhZrP.png" alt="enter image description here"></a></p> <p><strong>This is the code and data:</strong></p> <pre><code>from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import cross_val_score

scores = []
for ne in range(1,41):   ## ne is the number of trees
    clf = RandomForestClassifier(n_estimators = ne)
    score_list = cross_val_score(clf, X, Y, cv=10)
    scores.append(score_list)

sns.boxplot(scores)   # scores are list of arrays
plt.xlabel('Number of trees')
plt.ylabel('Classification score')
plt.title('Classification score as a function of the number of trees')
plt.show()

scores =
[array([ 0.8757764 ,  0.86335404,  0.75625   ,  0.85      ,  0.86875   ,
         0.81875   ,  0.79375   ,  0.79245283,  0.8490566 ,  0.85534591]),
 array([ 0.89440994,  0.8447205 ,  0.79375   ,  0.85      ,  0.8625    ,
         0.85625   ,  0.86875   ,  0.88050314,  0.86792453,  0.8427673 ]),
 array([ 0.91304348,  0.9068323 ,  0.83125   ,  0.84375   ,  0.8875    ,
         0.875     ,  0.825     ,  0.83647799,  0.83647799,  0.87421384]),
 array([ 0.86956522,  0.86956522,  0.85      ,  0.875     ,  0.88125   ,
         0.86875   ,  0.8625    ,  0.8490566 ,  0.86792453,  0.89308176]),
</code></pre> <p>....]</p>
<p>I would first create a pandas DF out of <code>scores</code>:</p> <pre><code>import pandas as pd

In [15]: scores
Out[15]:
[array([ 0.8757764 ,  0.86335404,  0.75625   ,  0.85      ,  0.86875   ,
         0.81875   ,  0.79375   ,  0.79245283,  0.8490566 ,  0.85534591]),
 array([ 0.89440994,  0.8447205 ,  0.79375   ,  0.85      ,  0.8625    ,
         0.85625   ,  0.86875   ,  0.88050314,  0.86792453,  0.8427673 ]),
 array([ 0.91304348,  0.9068323 ,  0.83125   ,  0.84375   ,  0.8875    ,
         0.875     ,  0.825     ,  0.83647799,  0.83647799,  0.87421384]),
 array([ 0.86956522,  0.86956522,  0.85      ,  0.875     ,  0.88125   ,
         0.86875   ,  0.8625    ,  0.8490566 ,  0.86792453,  0.89308176])]

In [16]: df = pd.DataFrame(scores)

In [17]: df
Out[17]:
          0         1        2        3        4        5        6         7         8         9
0  0.875776  0.863354  0.75625  0.85000  0.86875  0.81875  0.79375  0.792453  0.849057  0.855346
1  0.894410  0.844720  0.79375  0.85000  0.86250  0.85625  0.86875  0.880503  0.867925  0.842767
2  0.913043  0.906832  0.83125  0.84375  0.88750  0.87500  0.82500  0.836478  0.836478  0.874214
3  0.869565  0.869565  0.85000  0.87500  0.88125  0.86875  0.86250  0.849057  0.867925  0.893082
</code></pre> <p>now we can easily plot boxplots:</p> <pre><code>In [18]: sns.boxplot(data=df)
Out[18]: &lt;matplotlib.axes._subplots.AxesSubplot at 0xd121128&gt;
</code></pre> <p><a href="https://i.stack.imgur.com/6JdeA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6JdeA.png" alt="enter image description here"></a></p>
python|arrays|python-2.7|scikit-learn|seaborn
2
1,909,312
47,949,871
Accessing each entries of DataFrame and replacing them in a better way?
<p>This question is an additional question of my previous question that I posted. What I would like to do is to replace DataFrame's string value to its first initial string. For example,</p> <pre><code>s = pd.DataFrame({'A':['S12','S1','E53',np.NaN], 'B':[1,2,3,4]})
s.A.fillna('P', inplace=True)
</code></pre> <p>This will give me a Dataframe</p> <pre><code>     A  B
0  S12  1
1   S1  2
2  E53  3
3    P  4
</code></pre> <p>But then, I would like to change the string values of column 'A' to ['S', 'S', 'E', 'P'], which is their first character. What I did is following,</p> <pre><code>for i, row in s.iterrows():
    if len(row['A']) &gt; 1:
        s['A'][i] = row['A'][0]
</code></pre> <p>and I got this warning.</p> <pre><code>/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  app.launch_new_instance()
/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:7: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
</code></pre> <p>I understand that this is a non-preferred way, but what exactly I am doing inefficiently and what would be the preferred way? Is it possible to do it without converting them to numpy array?</p> <p>Thank you!</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>fillna</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/text.html#indexing-with-str" rel="nofollow noreferrer">indexing with str</a> by <code>str[0]</code>:</p> <pre><code>s['A'] = s['A'].fillna('P').str[0]
print (s)

   A  B
0  S  1
1  S  2
2  E  3
3  P  4
</code></pre>
python|pandas|dataframe
1
1,909,313
40,330,645
Django - import excel to database
<p>I have a model and an Excel table that represents data that should be imported to the database. How should I do that?</p> <p>Django 1.7</p> <p>Thank you!</p>
<p>Have you seen this utility? <a href="https://pypi.python.org/pypi/django-import-export/" rel="nofollow">https://pypi.python.org/pypi/django-import-export/</a>?</p>
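<p>A minimal sketch of how that utility is wired up (<code>Book</code>, <code>myapp</code>, and the file name are placeholders for your own model, app, and spreadsheet; xlsx support needs tablib's xlsx extra installed):</p> <pre><code>from import_export import resources
from tablib import Dataset
from myapp.models import Book  # placeholder model

class BookResource(resources.ModelResource):
    class Meta:
        model = Book

dataset = Dataset()
with open('books.xlsx', 'rb') as f:
    dataset.load(f.read(), format='xlsx')

result = BookResource().import_data(dataset, dry_run=True)  # validate first
if not result.has_errors():
    BookResource().import_data(dataset, dry_run=False)      # then import for real
</code></pre>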
python|django|django-models
1
1,909,314
47,485,399
Python POST Request iteratively
<p>I'm trying to make POST requests in a loop with the following code</p> <pre><code>description = fake.catch_phrase()
group_id = ''
invite_only = 1 if fake.boolean(chance_of_getting_true=50) == True else 0
is_public = 1 if fake.boolean(chance_of_getting_true=50) == True else 0
title = fake.company()

payload = {description, group_id, invite_only, is_public, title}
response = requests.post(createGroup, data=(payload), headers=headers)
</code></pre> <p>I get <code>SequelizeValidationError: notNull Violation: v_title cannot be null</code> on the server. However, if I try sending the same payload like this</p> <pre><code>payload = {'description': 'abc', 'group_id': '1', 'invite_only': '1', 'is_public': '1', 'title': 'someTitle'}
</code></pre> <p>It works perfectly fine.</p> <p>The question, therefore, is: can I send randomly generated data in the post call? If I can, how would that be possible?</p>
<p>The <code>data</code> parameter should be a dictionary, a string or a file. This, however, is a <em>set</em>:</p> <pre><code>payload = {description, group_id, invite_only, is_public, title}
</code></pre> <p>So pass a dictionary like this:</p> <pre><code>payload = {'description': description,
           'group_id': group_id,
           'invite_only': invite_only,
           'is_public': is_public,
           'title': title}
</code></pre> <p>and initialise the random variables like this:</p> <pre><code>description = fake.catch_phrase()
group_id = ''
invite_only = int(fake.boolean(chance_of_getting_true=50))
is_public = int(fake.boolean(chance_of_getting_true=50))
title = fake.company()
</code></pre> <p>(note the use of <code>int()</code> to convert the boolean to 1 or 0)</p>
python|post|request|python-requests|faker
0
1,909,315
46,792,424
How to use Python Seaborn Visualizations in PowerPoint?
<p>I created some figures with Seaborn in a Jupyter Notebook. I would now like to present those figures in a PowerPoint presentation. </p> <p>I know that it is possible to export the figures as png and include them in the presentation. But then they would be static, and if something changes in the dataframe, the picture would be the same. Is there an option to have a dynamic figure in PowerPoint? Something like a small Jupyter Notebook you could Display in the slides?</p>
<p>You could try <a href="https://docs.anaconda.com/anaconda/fusion/" rel="nofollow noreferrer">Anaconda Fusion</a> (also the <a href="https://www.youtube.com/watch?v=wBHf1PLzLvg" rel="nofollow noreferrer">video here</a>), which lets you use Python inside of Excel. This could possibly work since you can link figures/data elements between Excel and PowerPoint (but special restrictions might apply when the figure is created via Python rather than standard Excel). Anaconda Fusion is free to try for a couple of months.</p> <p>Another solution would be to use the <a href="https://medium.com/@mjspeck/presenting-code-using-jupyter-notebook-slides-a8a3c3b59d67" rel="nofollow noreferrer">Jupyter Notebook to create your presentation instead of PowerPoint</a>. Go to <code>View -&gt; Cell Toolbar -&gt; Slideshow</code> and you can choose which code cells should become slides.</p> <p>A third approach would be to create an animation of the figure as the data frame changes and then include the animation (GIF or video) in PowerPoint.</p>
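<p>If you go the Jupyter-slides route, the notebook can then be rendered and served as a slideshow from the command line, e.g.:</p> <pre><code>jupyter nbconvert my_presentation.ipynb --to slides --post serve
</code></pre> <p>(<code>my_presentation.ipynb</code> is a placeholder for your own notebook file.)</p>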
python|dynamic|powerpoint|seaborn|figure
4
1,909,316
48,224,916
Join Single level table to a multi level table in Python
<p>I have df1, which is a multi-level table with expenses. It looks like this:</p> <pre><code>a          60
   EUR     10
   AUD     20
   USD     30
b          65
   EUR     40
   GBP     10
   HKD     15
</code></pre> <p>Both of them are done using this script:</p> <pre><code>t_sub=pd.concat([
    t.assign(
        **{x: '' for x in ['Client'][i:]}
    ).groupby(list(['Client'])).sum()
    for i in range(1,2)
]).sort_index()
</code></pre> <p>And then I have another table, df2, with the money of each person:</p> <pre><code>a - 100
b - 200
</code></pre> <p>I want to append the second table to the first one, but it has to match at the level of the client total only, e.g.:</p> <pre><code>a          60   100
   EUR     10   -
   AUD     20   -
   USD     30   -
b          65   200
   EUR     40   -
   GBP     10   -
   HKD     15   -
</code></pre>
<p>It's hard to know exactly what needs to be done without an example of the data, but here is a possible solution. It is not too elegant, but hopefully it will get you started and make it possible to get what you want.</p> <p>First let's create the two dataframes:</p> <pre><code>import pandas as pd

df1 = pd.DataFrame({'Expenses': [60, 10, 20, 30, 65, 40, 10, 15]},
                   index=pd.MultiIndex(levels=[['a', 'b'],
                                               ['', 'EUR', 'AUD', 'USD', 'GBP', 'HKD']],
                                       labels=[[0, 0, 0, 0, 1, 1, 1, 1],
                                               [0, 1, 2, 3, 0, 1, 4, 5]],
                                       names=['Person', 'Currency']))
df1
#                  Expenses
# Person Currency
# a                      60
#        EUR             10
#        AUD             20
#        USD             30
# b                      65
#        EUR             40
#        GBP             10
#        HKD             15

df2 = pd.DataFrame({'Money': [100, 200]},
                   index=pd.Index(['a', 'b'], name='Person'))
df2
#         Money
# Person
# a         100
# b         200
</code></pre> <p>Now we can merge the dataframes by their indices. Notice that I gave the same name to the <code>Person</code> indices in both dataframes. If you don't have that, you might have to provide a <code>name</code> to the index in <code>df2</code>:</p> <pre><code>new_df = pd.merge(df1, df2, left_index=True, right_index=True)
</code></pre> <p>This doesn't get exactly what you want because the <code>Money</code> value in <code>df2</code> is copied to all rows for the same person:</p> <pre><code>new_df
#                  Expenses  Money
# Person Currency
# a                      60    100
#        EUR             10    100
#        AUD             20    100
#        USD             30    100
# b                      65    200
#        EUR             40    200
#        GBP             10    200
#        HKD             15    200
</code></pre> <p>So, even if it is hacky, you can just find the rows that shouldn't have the value (those in which there is no <code>Currency</code> value), and replace <code>Money</code> by whatever you want (I put a dash to match what you said in your question).</p> <p>I do this in two steps. First, I select all the rows with no value in <code>Currency</code>:</p> <pre><code>no_change = new_df.loc[(slice(None), slice('')), :]
</code></pre> <p>And then, in the <code>new_df</code> dataframe, I select all the other rows (the ones that need to be changed), and I modify the value in <code>Money</code>:</p> <pre><code>new_df.loc[~new_df.index.isin(no_change.index), 'Money'] = '-'
</code></pre> <p>This gives you what you seem to be looking for:</p> <pre><code>new_df
#                  Expenses Money
# Person Currency
# a                      60   100
#        EUR             10     -
#        AUD             20     -
#        USD             30     -
# b                      65   200
#        EUR             40     -
#        GBP             10     -
#        HKD             15     -
</code></pre>
python|pandas|merge|append
0
1,909,317
51,260,504
Unable to write data to Vertica Database using Python SqlAlchemy - Type "TEXT" does not exist
<p>I am trying to upload a pandas dataframe into a Vertica database. I was able to set up the engine and query the database using sqlalchemy.</p> <p>But when I try to upload data from the pandas dataframe, I get an error message saying Type "TEXT" does not exist. I am using Windows 10, and created an ODBC connection.</p> <pre><code>import sqlalchemy as sa

engine = sa.create_engine('vertica+pyodbc:///?odbc_connect=%s' %(urllib.parse.quote('DSN=TESTDB'),))

sql_query = "select * from sample_table"
df = pd.read_sql_query(sql_query, con=engine)
# this works, get the data as required in the dataframe

# [Do various data transformations as required]

# Write back to the database
df.to_sql(name='sample_table_cleaned', con = engine, schema = "Dev", if_exists = 'append', index = True)
</code></pre> <p><strong>the above code (df.to_sql) snippet comes up with an error as: ProgrammingError: (pyodbc.ProgrammingError) ('42704', '[42704] ERROR 5108: Type "TEXT" does not exist\n (5108) (SQLExecDirectW)')</strong></p> <p>Can anyone help on this?</p> <p>Thanks in Advance !!</p>
<p>I have faced a similar thing at work, and changed the types to VARCHAR for the columns that hold string (object) data, then passed the resulting mapping to <code>to_sql</code> via its <code>dtype</code> argument:</p> <pre class="lang-py prettyprint-override"><code>def updateType(df_para):
    dtypedict = {}  # create an empty dictionary
    for i, j in zip(df_para.columns, df_para.dtypes):
        if "object" in str(j):
            dtypedict.update({i: sa.types.VARCHAR})
    return dtypedict

updatedict = updateType(df)  # build the column-type mapping for the dataframe

df.to_sql(name='sample_table_cleaned', con=engine, schema="Dev",
          if_exists='append', index=True, dtype=updatedict)
</code></pre>
python|pandas|sqlalchemy|vertica
2
1,909,318
70,646,720
How to expand an angular custom element which is shadow DOM in python selenium?
<p>I have 4 years of experience with Selenium, but this is the first time I have faced this kind of problem. I think only python selenium masters can solve this issue. I used this function to get an Angular element, but it didn't work.</p> <pre><code>def expand_shadow_element(element):
    shadow_root = driver.execute_script('return arguments[0].shadowRoot', element)
    return shadow_root
</code></pre>
<p>Chrome / chromedriver 96 and newer changed how interacting with Shadow DOM works. In order to interact with a Shadow DOM element with Selenium Python, you'll need to use a minimum of <code>selenium</code> version <code>4.1.0</code> or newer, and the method call you'll need to use is: <code>element.shadow_root</code>.</p> <p>Additionally, there's a Selenium Python framework called SeleniumBase that can help simplify interactions with Shadow DOM elements. Here's an example test of that: <a href="https://github.com/seleniumbase/SeleniumBase/blob/master/examples/shadow_root_test.py" rel="nofollow noreferrer">https://github.com/seleniumbase/SeleniumBase/blob/master/examples/shadow_root_test.py</a>, which uses the SeleniumBase-specific <code>::shadow</code> selector to pierce through an element that contains one or more shadow root segments.</p>
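<p>A minimal sketch of the Selenium 4.1+ call (the URL and selectors are placeholders; note that a <code>ShadowRoot</code> only supports CSS selectors for lookups):</p> <pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

host = driver.find_element(By.CSS_SELECTOR, "my-angular-element")  # the shadow host
shadow = host.shadow_root                                          # ShadowRoot object
inner = shadow.find_element(By.CSS_SELECTOR, ".inner-button")      # search inside the shadow DOM
inner.click()
</code></pre>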
python|angular|selenium|selenium-webdriver
0
1,909,319
73,258,095
How to append the second column data below the value of first column data?
<p>I have a dataframe as follows:</p> <p>df</p> <pre><code>Name  Sequence
abc   ghijklmkhf
bhf   uyhbnfkkkkkk
dmf   hjjfkkd
</code></pre> <p>I want to append the second column's data below the value of the first column's data, in a specific format, as follows:</p> <pre><code>Name  Sequence      Merged
abc   ghijklmkhf    &gt;abc ghijklmkhf
bhf   uyhbnfkkkkkk  &gt;bhf uyhbnfkkkkkk
dmf   hjjfkkd       &gt;dmf hjjfkkd
</code></pre> <p>I tried <code>df['Name'] = '&gt;' + df['Name'].astype(str)</code> to get the name in the format with the <code>&gt;</code> symbol. How do I append the second column's data below the value of the first column's data?</p>
<p>You can use vectorial concatenation:</p> <pre><code>df['Merged'] = '&gt;'+df['Name']+'\n'+df['Sequence']
</code></pre> <p>output:</p> <pre><code>  Name      Sequence              Merged
0  abc    ghijklmkhf    &gt;abc\nghijklmkhf
1  bhf  uyhbnfkkkkkk  &gt;bhf\nuyhbnfkkkkkk
2  dmf       hjjfkkd       &gt;dmf\nhjjfkkd
</code></pre> <p>Checking that there are two lines:</p> <pre><code>print(df.loc[0, 'Merged'])
&gt;abc
ghijklmkhf
</code></pre>
python|pandas
2
1,909,320
73,279,782
Tensorboard profiling a predict call using Cloud TPU Node
<p>I've been trying to profile a predict call of a custom NN model using a Cloud TPU v2-8 Node.</p> <p>It is important to say that my prediction call takes about 2 minutes to finish and I do it using data divided in TFRecord batches.</p> <p>I followed the official documentation &quot;<a href="https://cloud.google.com/tpu/docs/cloud-tpu-tools" rel="nofollow noreferrer">Profile your model with Cloud TPU Tools</a>&quot; and I tried to capture a profile:</p> <ol> <li>Using the <a href="https://cloud.google.com/tpu/docs/cloud-tpu-tools#capture_a_profile_using_tensorboard" rel="nofollow noreferrer">Tensorboard UI</a> and</li> <li>The &quot;<a href="https://cloud.google.com/tpu/docs/cloud-tpu-tools#capture_a_profile_programmatically" rel="nofollow noreferrer">programmatic way</a>&quot; with tf.profiler.experimental.start() and tf.profiler.experimental.stop() wrapping the predict call, but I had no success in either case.</li> </ol> <pre><code># TPU Node connection is done before...
# TPU at this point is already running
logdir_path = &quot;logs/predict&quot;
tf.profiler.experimental.start(logdir_path)

# Tensorflow predict call here

tf.profiler.experimental.stop()
</code></pre> <p>I could generate some data in both cases (Tensorboard UI and profiler call), but when I try to open it in Tensorboard, pointing at the logdir path, I receive a &quot;No dashboards are active for the current data set&quot; message.</p> <p><strong>Is there any way to profile a Tensorflow/Keras prediction call with a model running in a Cloud TPU Node?</strong></p> <p><strong>Curious fact</strong> - There seems to be an inconsistency between the Tensorflow docs and the Cloud TPU docs: in the <a href="https://www.tensorflow.org/guide/profiler#profiling_use_cases" rel="nofollow noreferrer">Tensorflow Optimization Docs</a> we can see that tf.profiler.experimental.start/stop calls are not supported by TPU hardware, but in the <a href="https://cloud.google.com/tpu/docs/cloud-tpu-tools#capture_a_profile_programmatically" rel="nofollow noreferrer">Google Cloud docs</a> this is the recommended method to capture a profile on a TPU.</p> <p>Config:</p> <ul> <li>Tensorflow 2.6.1</li> <li>Tensorboard 2.9.1</li> <li>Python 3.8</li> <li>Cloud TPU Node v2-8</li> </ul>
<ol> <li>Please check the trace files in your logdir. If they are too small, it's likely that you got some issues during tracing.</li> <li>Just be sure that you typed the right command: <code>$ tensorboard --logdir logs/predict</code></li> <li>Try another profiling method by using <code>tf.profiler.experimental.client.trace(...)</code>, as indicated by the <a href="https://www.tensorflow.org/guide/profiler#profiling_use_cases" rel="nofollow noreferrer">TF profiler Docs</a>. Below is the code snippet.</li> </ol> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from threading import Thread

def call_trace(tpu_resolver):  # This should be called asynchronously
    # a profiler service has been started in the TPU worker at port 8466
    service_addr = &quot;:&quot;.join(tpu_resolver.get_master().split(&quot;:&quot;)[:-1] +
                            [&quot;8466&quot;])  # need to change for TPU pod
    tf.profiler.experimental.client.trace(service_addr=service_addr,
                                          logdir=&quot;gs://your_logdir&quot;,
                                          duration_ms=5000)

tpu_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(...)
# Other initialization codes

thr = Thread(target=call_trace, args=(tpu_resolver,))
thr.start()

# Codes you want to execute on the cloud TPU node

thr.join()
</code></pre> <p>Then open tensorboard for visualization.</p> <pre class="lang-bash prettyprint-override"><code>$ tensorboard --logdir gs://your_logdir
</code></pre>
python|tensorflow|machine-learning|tensorboard|google-cloud-tpu
1
1,909,321
50,135,309
Can't decrypt blowfish CTR file with pycryptodome
<p>I'm trying to recover a file encrypted with an old pure-python implementation of blowfish.</p> <p>The old code relied on a single blowfish.py file (Copyright (C) 2002 Michael Gilfix).</p> <p>The old data are decrypted by performing the following operations:</p> <pre><code>cipher = Blowfish(self.masterKey)
cipher.initCTR()
cleanData = cipher.decryptCTR(encData)
</code></pre> <p>That code doesn't initialize the nonce that is required in modern implementations of blowfish, so I was unable to port it to the pycryptodome equivalent</p> <pre><code>cipher = Blowfish.new(self.masterKey, Blowfish.MODE_CTR, nonce = ?????)
cleanData = cipher.decrypt(encData)
</code></pre> <p>The only hint that I can find is inside the initCTR function, where iv is set to 0 (even though CTR mode doesn't have an IV)</p> <pre><code>def initCTR(self, iv=0):
    """Initializes CTR mode of the cypher"""
    assert struct.calcsize("Q") == self.blocksize()
    self.ctr_iv = iv
    self._calcCTRBUF()

def _calcCTRBUF(self):
    """Calculates one block of CTR keystream"""
    self.ctr_cks = self.encrypt(struct.pack("Q", self.ctr_iv))  # keystream block
    self.ctr_iv += 1
    self.ctr_pos = 0
</code></pre> <p>can someone help me?</p>
<p>First, a few warnings:</p> <ol> <li>Blowfish is not a secure cipher by today's standards. Use AES.</li> <li>Counter mode (CTR) is not secure because it does not detect malicious modification of the encrypted data. Use other modes like GCM, CCM or EAX.</li> <li>Counter mode really requires a random IV for every message. However, you are using a fixed IV, which is <strong>very wrong</strong>.</li> </ol> <p>To answer your question, you should initialize the cipher as:</p> <pre><code>from Crypto.Util import Counter

ctr = Counter.new(64, initial_value=0, little_endian=True)
cipher = Blowfish.new(self.masterKey, Blowfish.MODE_CTR, counter=ctr)
</code></pre> <p>The Counter object is documented <a href="https://www.pycryptodome.org/en/latest/src/util/util.html#crypto-util-counter-module" rel="nofollow noreferrer">here</a>. It allows the definition of a little-endian counter (typically CTR is big-endian).</p> <p>NOTE: <a href="https://github.com/MyNameIsMeerkat/EncryptedYAML/blob/master/blowfish.py" rel="nofollow noreferrer"><code>blowfish.py</code></a> encrypts differently on big-endian machines than on little-endian ones.</p>
python-2.7|pycrypto|blowfish|pycryptodome
2
1,909,322
50,123,928
How does the GIL handle chunked I/O read/write?
<p>Say I had a <code>io.BytesIO()</code> I wanted to write a response to sitting on a thread:</p> <pre><code>f = io.BytesIO()
with requests.Session() as s:
    r = s.get(url, stream = True)
    for chunk in r.iter_content(chunk_size = 1024):
        f.write(chunk)
</code></pre> <p>Now this is not to harddisk but rather in memory (got plenty of it for my purpose), so I don't have to worry about the needle being a bottleneck. I know for blocking I/O (file read/write) the GIL is released from the <a href="https://docs.python.org/3/c-api/init.html#thread-state-and-the-global-interpreter-lock" rel="nofollow noreferrer">docs</a> and this SO <a href="https://stackoverflow.com/a/29270846/6741482">post</a> by Alex Martelli, but I wonder, does the GIL just release on <code>f.write()</code> and then reacquire on the <code>__next__()</code> call of the loop?</p> <p>So what I end up with are a bunch of fast GIL acquisitions and releases. Obviously I would have to time this to determine anything worth note, <em>but does writing to in memory file objects on a multithreaded web scraper in general support GIL bypass</em>?</p> <p>If not, I'll just handle the large responses and dump them into a queue and process on <code>__main__</code>.</p>
<p>From what I can see in the <a href="https://github.com/python/cpython/blob/master/Modules/_io/bytesio.c" rel="nofollow noreferrer"><code>BytesIO</code> type's source code</a>, the GIL is not released during a call to <code>BytesIO.write</code>, since it's just doing a quick memory copy. It's only for system calls that may block that it makes sense for the GIL to be released.</p> <p>There probably is such a syscall in the <code>__next__</code> method of the <code>r.iter_content</code> generator (when data is read from a socket), but there's none on the writing side.</p> <p>But I think your question reflects an incorrect understanding of what it means for a builtin function to release the GIL when doing a blocking operation. It will release the GIL just before it does the potentially blocking syscall, but it will reacquire the GIL before it returns to Python code. So it doesn't matter how many such GIL-releasing operations you have in a loop; all the Python code involved will be run with the GIL held. The GIL is never released by one operation and reclaimed by a different one. It's both released and reclaimed for each operation, as a single self-contained step.</p> <p>As an example, you can look at <a href="https://github.com/python/cpython/blob/61f82e0e337f971da57f8f513abfe693edf95aa5/Python/fileutils.c#L1440" rel="nofollow noreferrer">the C code that implements writing to a file descriptor</a>. The macro <code>Py_BEGIN_ALLOW_THREADS</code> releases the GIL. A few lines later, <code>Py_END_ALLOW_THREADS</code> reacquires the GIL. No Python-level code runs in between those steps, only a few low-level C assignments regarding <code>errno</code>, and the <code>write</code> syscall that might block, waiting on the disk.</p>
python|python-3.x
1
1,909,323
64,986,434
How to move folder in a path to another path in windows using python3?
<p>I need a script to move all contents from source folder and create/replace into destination folder without removing any existing files/folders in destination. The contents of test1 folder will be a html file along with some images in png format.</p> <p><strong>source path</strong>: C:\Users\USERNAME\Documents\LocalFolder\reports\v2<br /> <strong>source folder Name</strong>: <em>test1</em></p> <p><strong>Destination path</strong>: T:\remoteReports\myTests <strong>Path to be created while copying:</strong> \LocalFolder\reports\v2<br /> <strong>Folder to copy from source into destination</strong>: <em>test1</em></p> <p><strong>final path in destinaton:</strong> T:\remoteReports\myTests\LocalFolder\reports\v2\test1</p> <p>I have taken a piece of code from stack overflow and using it for my application but running the script is not creating the path and folder in the destination.</p> <pre><code>import time import os import random import shutil import pathlib def move_files(root_src_dir,root_dst_dir): print(root_src_dir) print(root_dst_dir) for src_dir, dirs, files in os.walk(root_src_dir): dst_dir = src_dir.replace(root_src_dir,root_dst_dir, 1) if not os.path.exists(dst_dir): os.makedirs(dst_dir) for file_ in files: src_file = os.path.join(src_dir, file_) dst_file = os.path.join(dst_dir, file_) if os.path.exists(dst_file): # in case of the src and dst are the same file if os.path.samefile(src_file, dst_file): continue os.remove(dst_file) shutil.move(src_file, dst_dir) srcPath = os.path.join(&quot;C:&quot;, &quot;Users&quot;, os.getlogin(),&quot;Documents&quot;, &quot;LocalFolder&quot;, &quot;v2&quot;, &quot;test1&quot;) dstPath = os.path.join(&quot;T:&quot;, &quot;remoteReports&quot;, &quot;myTests&quot;, &quot;LocalFolder&quot;, &quot;v2&quot;, &quot;test1&quot;) move_files(srcPath,dstPath ) </code></pre> <p>It will be helpful if someone can guide me with this !</p>
<p>You can use <a href="https://docs.python.org/3/library/pathlib.html" rel="nofollow noreferrer"><code>pathlib</code></a> to copy all files from one folder to another:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path from shutil import copy src = Path(r&quot;C:\Users\USERNAME\Documents\LocalFolder\reports\v2&quot;) dst = Path(r&quot;T:\remoteReports\myTests\LocalFolder\reports\v2&quot;) for file in src.iterdir(): if file.is_file() and not (dst / file.name).is_file(): copy(file, dst) </code></pre> <p>If you want to copy the whole tree, including subdirectories and their content, you can apply <a href="https://docs.python.org/3/library/os.html#os.walk" rel="nofollow noreferrer"><code>os.walk()</code></a>:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path from os import walk from shutil import copy src = Path(r&quot;C:\Users\USERNAME\Documents\LocalFolder\reports\v2&quot;) dst = Path(r&quot;T:\remoteReports\myTests\LocalFolder\reports\v2&quot;) for dir_name, _, filenames in walk(src): src_path = Path(dir_name) dst_path = dst / src_path.relative_to(src) if not dst_path.is_dir(): dst_path.mkdir() for file in filenames: dst_file = dst_path / file if not dst_file.is_file(): copy(src_path / file, dst_path) </code></pre> <p>Or you can turn the first snippet into a function and use it recursively:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path from shutil import copy def copy_folder(src_path, dst_path): if not dst_path.is_dir(): dst_path.mkdir() for file in src_path.iterdir(): if file.is_dir(): copy_folder(file, dst_path / file.relative_to(src_path)) # recursion elif file.is_file() and not (dst_path / file.name).is_file(): copy(file, dst_path) src = Path(r&quot;C:\Users\USERNAME\Documents\LocalFolder\reports\v2&quot;) dst = Path(r&quot;T:\remoteReports\myTests\LocalFolder\reports\v2&quot;) copy_folder(src, dst) </code></pre> <hr /> <p><strong>BUT</strong> you don't really need to implement all this manually, because <a href="https://docs.python.org/3/library/shutil.html#shutil.copytree" rel="nofollow noreferrer"><code>shutil.copytree()</code></a> already exists and does exactly the same as all the options I've provided above. <em>Important notice: you should pass the <code>dirs_exist_ok</code> argument (available since Python 3.8) so it doesn't throw an exception if a directory in the destination path already exists.</em></p> <pre class="lang-py prettyprint-override"><code>from shutil import copy, copytree from os.path import exists src = r&quot;C:\Users\USERNAME\Documents\LocalFolder\reports\v2&quot; dst = r&quot;T:\remoteReports\myTests\LocalFolder\reports\v2&quot; copytree(src, dst, copy_function=lambda s, d: not exists(d) and copy(s, d), dirs_exist_ok=True) </code></pre> <p>Here I've used a <code>lambda</code> that checks whether the file already exists so it won't be overwritten; you can omit this parameter if you don't mind files being overwritten.</p> <hr /> <p>If you want to move all files, you can pass <a href="https://docs.python.org/3/library/shutil.html#shutil.move" rel="nofollow noreferrer"><code>shutil.move()</code></a> to <code>copy_function</code>:</p> <pre class="lang-py prettyprint-override"><code>from shutil import move, copytree ... copytree(src, dst, copy_function=move, dirs_exist_ok=True) </code></pre> <p>It'll move all files but leave the directory tree behind. 
If you want to copy the whole directory tree and then delete it from the source, you can call <a href="https://docs.python.org/3/library/shutil.html#shutil.copytree" rel="nofollow noreferrer"><code>shutil.copytree()</code></a> followed by <a href="https://docs.python.org/3/library/shutil.html#shutil.rmtree" rel="nofollow noreferrer"><code>shutil.rmtree()</code></a>:</p> <pre class="lang-py prettyprint-override"><code>from shutil import copytree, rmtree ... copytree(src, dst, dirs_exist_ok=True) rmtree(src) </code></pre>
python|python-3.x
0
1,909,324
64,981,343
Regular Expression Multiple Match Negative
<p><em>foo, bar, tag</em></p> <p>I want my regex to match if the sentence doesn't contain all three words together above.</p> <pre><code>foo bar goal </code></pre> <p>But if the words are all together, it shouldn't match.</p> <pre><code>foo bar tag </code></pre> <p>I have tried this regex in Python but couldn't make it work.</p> <pre><code>^(?!.*(foo)).*(?!.*(bar)).*(?!.*(tag)).*$ </code></pre> <p>Any ideas? Thanks.</p>
<p>You can nest positive lookaheads inside a negative lookahead like this:</p> <pre><code>^(?!(?=.*foo)(?=.*bar)(?=.*tag)).*$ </code></pre> <p>Here are the <a href="https://regex101.com/r/rJMO2H/3" rel="nofollow noreferrer">test cases</a>.</p>
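<p>A quick check in Python, using the two sentences from the question:</p> <pre><code>import re

pattern = re.compile(r'^(?!(?=.*foo)(?=.*bar)(?=.*tag)).*$')
print(bool(pattern.match('foo bar goal')))  # True: not all three words present
print(bool(pattern.match('foo bar tag')))   # False: all three words present
</code></pre>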
python|regex|regex-lookarounds
0
1,909,325
64,839,650
Python program keeps returning an empty list
<p>For this program I am trying to get a new list that displays those students who got a grade of 95 or higher. No matter what I try I keep getting an empty list as a result. What am I doing wrong?</p> <p>Here is my code:</p> <pre><code>students = [&quot;Robin&quot;,&quot;Emily&quot;,&quot;Mary&quot;,&quot;Joe&quot;,&quot;Dean&quot;,&quot;Claire&quot;,&quot;Anne&quot;,&quot;Yingzhu&quot;,&quot;James&quot;, &quot;Monica&quot;,&quot;Tess&quot;,&quot;Anaya&quot;,&quot;Cheng&quot;,&quot;Tammy&quot;,&quot;Fatima&quot;] scores = [87, 72, 98, 93, 96, 65, 78, 83, 85, 97, 89, 65, 96, 82, 98] def wise_guys(students): wise_guys = [] for i in range(len(students)): if score in scores &gt;= 95: wise_guys.append(students[i]) return wise_guys </code></pre> <p>wise_guys(students)</p>
<p>Firstly, <code>wise_guys.append(students[i])</code> needs to be indented once more, as it should only be executed if the <code>if</code> statement is true, and <code>return wise_guys</code> belongs at the function-body level, after the loop. Secondly, to compare an item of a list of integers against a number, index into the list: <code>if list[index] comparison_operator integer</code>.</p> <p>This script seems to work fine:</p> <pre><code>students = [&quot;Robin&quot;,&quot;Emily&quot;,&quot;Mary&quot;,&quot;Joe&quot;,&quot;Dean&quot;,&quot;Claire&quot;,&quot;Anne&quot;,&quot;Yingzhu&quot;,&quot;James&quot;, &quot;Monica&quot;,&quot;Tess&quot;,&quot;Anaya&quot;,&quot;Cheng&quot;,&quot;Tammy&quot;,&quot;Fatima&quot;] scores = [87, 72, 98, 93, 96, 65, 78, 83, 85, 97, 89, 65, 96, 82, 98] def wise_guys(): wise_guys = [] for i in range(len(students)): if scores[i] &gt;= 95: wise_guys.append(students[i]) return wise_guys print(wise_guys()) </code></pre> <p>Good luck!</p>
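<p>As a side note, pairing the two lists with <code>zip</code> avoids the index bookkeeping entirely; a small sketch of the same logic:</p> <pre><code># equivalent one-liner: walk names and scores in lockstep
wise_guys = [name for name, score in zip(students, scores) if score &gt;= 95]
print(wise_guys)  # ['Mary', 'Dean', 'Monica', 'Cheng', 'Fatima']
</code></pre>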
python|python-3.x|python-requests
2
1,909,326
64,898,643
Using Python to access DirectShow to create and use Virtual Camera(Software Only Camera)
<p>Generally to create a Virtual Camera we need to create a C++ application and include DirectShow API to achieve this. But with the modules such as win32 modules and other modules we can use win32 api which lets us use these apis in python.</p> <p>Can anyone Help sharing a good documentation or some Sample codes for doing this?</p>
<p>There is no reliable way to emulate a webcam on Windows other than supplying a driver. Many applications take a simpler path with DirectShow and emulate a webcam for a subset of DirectShow-based applications (in particular, modern apps will be excluded since they don't use DirectShow), but even in this case you have to develop C++ camera enumeration code and connect your Python code to it.</p>
python|windows|directshow|pywin32|win32com
1
1,909,327
63,760,996
add_widget missing 1 required positional argument: 'screen' also parameter 'screen' unfulfilled
<p>edit: it states that it wants <code>SM.add_widget(self, screen)</code> but if i add that it becomes an unresolved reference... I'll keep trying.</p> <p>I apologise that I am a complete noob at this, this is my first more complex project without using a tutorial. I can't for the life of me work out a way to correct this error. :/ I do not want to use a .kv file for this project.</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\danti\PycharmProjects\IoU\main.py&quot;, line 188, in &lt;module&gt; SM.add_widget(screen) TypeError: add_widget() missing 1 required positional argument: 'screen' </code></pre> <p>I've tried a few formats to try solve it by myself but after 3 hours I'm lost.</p> <p>when i hover my mouse over the</p> <blockquote> <p>(screen)</p> </blockquote> <p>on the same line #188 it shows</p> <blockquote> <p>parameter 'screen' unfulfilled.</p> </blockquote> <p>I'm lost as to what to look for anymore.</p> <p>Here is my main.py file (Error line is 10th up from bottom of main.py)</p> <pre><code>`from kivy.uix.widget import Widget from kivy.uix.button import Button from kivy.graphics import Rectangle, Ellipse from kivy.uix.screenmanager import Screen, ScreenManager from kivy.uix.floatlayout import FloatLayout from kivy.uix.gridlayout import GridLayout from kivy.uix.label import Label from kivy.uix.textinput import TextInput from kivy.app import App from kivy.uix.popup import Popup from kivy.properties import ObjectProperty from database import DataBase def invalidlogin(): pop = Popup(title='Invalid Login', content=Label(text='Invalid username or password.'), size_hint=(None, None), size=(400, 400)) pop.open() def invalidform(): pop = Popup(title='Invalid Form', content=Label(text='Please fill in all inputs\n with valid information.'), size_hint=(None, None), size=(400, 400)) pop.open() db = DataBase('users.txt') class SM(ScreenManager): pass def dbvalidatelgn(self, *args): if db.validate(self.email, self.password): Wlcm.current = self.email.text self.reset() SM.current = '_wlcm' else: invalidlogin() class MyLogin(Screen): email = ObjectProperty(None) password = ObjectProperty(None) def __init__(self, **kwargs): super(MyLogin, self).__init__(**kwargs) self.inner = GridLayout() self.inner.cols = 2 self.inner.rows = 2 self.inner.size_hint = (1, .25) self.inner.pos_hint = {'x': 0, 'top': .8} self.inner.add_widget(Label(text='Email:')) self.email = TextInput(multiline=False) self.inner.add_widget(self.email) self.inner.add_widget(Label(text='Password:')) self.password = TextInput(multiline=False) self.inner.add_widget(self.password) self.submit = Button(text='Login') self.submit.font_size = 20 self.add_widget(self.submit) self.submit.size_hint = (.8, .15) self.submit.pos_hint = {'top': .4, 'right': .9} self.submit.bind(on_release=dbvalidatelgn) self.noacct = Label(text='Don\'t have an account?') self.noacct.font_size = 14 self.add_widget(self.noacct) self.noacct.size_hint = (.8, .15) self.noacct.pos_hint = {'top': .28, 'right': .9} self.newacct = Button(text='Create New Account', ) self.newacct.font_size = 13 self.add_widget(self.newacct) self.newacct.size_hint = (.6, .07) self.newacct.pos_hint = {'top': .15, 'right': .8} self.newacct.bind(on_release=self.new_acct) self.add_widget(self.inner) def new_acct(self, *args): self.manager.current = '_newacct' def Welcome(self, *args): self.manager.current = '_wlcm' class NewAcct(Screen): name = ObjectProperty(None) email = ObjectProperty(None) password = ObjectProperty(None) def __init__(self, **kwargs): super(NewAcct, 
self).__init__(**kwargs) self.inner = GridLayout() self.inner.cols = 2 self.inner.rows = 3 self.inner.size_hint = (1, .25) self.inner.pos_hint = {'x': 0, 'top': .8} self.inner.add_widget(Label(text='Name:')) self.name = TextInput(multiline=False) self.inner.add_widget(self.name) self.inner.add_widget(Label(text='Email:')) self.email = TextInput(multiline=False) self.inner.add_widget(self.email) self.inner.add_widget(Label(text='Password:')) self.password = TextInput(multiline=False) self.inner.add_widget(self.password) self.create = Button(text='Submit') self.create.font_size = 20 self.add_widget(self.create) self.create.size_hint = (.8, .15) self.create.pos_hint = {'top': .4, 'right': .9} self.hasacct = Label(text='Already have an account?') self.hasacct.font_size = 14 self.add_widget(self.hasacct) self.hasacct.size_hint = (.8, .15) self.hasacct.pos_hint = {'top': .28, 'right': .9} self.retlog = Button(text='Return to Login', ) self.retlog.font_size = 13 self.add_widget(self.retlog) self.retlog.size_hint = (.6, .07) self.retlog.pos_hint = {'top': .15, 'right': .8} self.retlog.bind(on_release=self.has_acct) self.add_widget(self.inner) def has_acct(self, *args): self.manager.current = '_mylgn' def create_new(self): if self.name.text != '' and self.email.text != '' and self.email.text.count('@') == 1 and self.email.text.count( '.') &gt; 0: if self.password != '': db.add_user(self.email.text, self.password.text, self.name.text) self.reset() else: invalidform() def login(self): self.reset() SM.current = '_mylgn' def reset(self): self.email.text = '' self.password.text = '' self.name.text = '' SM.current = '_mylgn' class Wlcm(Screen): def __init__(self, **kwargs): super(Wlcm, self).__init__(**kwargs) self.wlcmtxt = Label(text='Login Successful!\nWelcome!') self.wlcmtxt.font_size = 30 self.add_widget(self.wlcmtxt) self.retlog = Button(text='Log Out', ) self.retlog.font_size = 16 self.add_widget(self.retlog) self.retlog.size_hint = (.6, .07) self.retlog.pos_hint = {'top': .15, 'right': .8} self.retlog.bind(on_release=self.have_acct) def have_acct(self, *args): self.manager.current = '_mylgn' screen = Screen() screens = [MyLogin(name='_mylgn'), NewAcct(name='_newacct'), Wlcm(name='_wlcm')] for screen in screens: SM.add_widget(screen) #SM.add_widget(MyLogin(name='_mylgn'), screen) #SM.add_widget(NewAcct(name='_newacct'), screen) #SM.add_widget(Wlcm(name='_wlcm'), screen) SM.current = '_mylgn' class MyApp(App): def build(self): return MyLogin() if __name__ == '__main__': MyApp().run() ` </code></pre> <p>Here is my database.py file incase that helps</p> <pre><code>import datetime from kivy.uix.popup import Popup from kivy.uix.label import Label class DataBase: def __init__(self, filenaem): self.filename = filenaem self.users = None self.file = None self.load() def load(self): self.file = open(self.filename, 'r') self.users = {} for line in self.file: email, password, name, created = line.strip().split(';') self.users[email] = (password, name, created) self.file.close() def get_user(self, email): if email in self.users: return self.users[email] else: return -1 def add_user(self, email, password, name): if email.strip() not in self.users: self.users[email.strip()] = (password.strip(), name.strip(), DataBase.get_date()) self.save() else: print('This email already exists!') return -1 def validate(self, email, password): if self.get_user(email) != -1: return self.users[email][0] == password else: return False def save(self): with open(self.filename, 'w') as f: for user in self.users: f.write(user + ';' + 
self.users[user][0] + ';' + self.users[user][1] + ';' + self.users[user][2] + '\n') @staticmethod def get_date(): return str(datetime.datetime.now()).split(' ')[0] </code></pre>
<p>The <code>add_widget()</code> method of <code>ScreenManager</code> is an instance method and is expected to be called on an instance of <code>ScreenManager</code>. In your code, <code>SM</code> is a <code>ScreenManager</code> class, it is not an instance of <code>ScreenManager</code>. So you need to create an instance by doing something like:</p> <pre><code>sm = SM() screens = [MyLogin(name='_mylgn'), NewAcct(name='_newacct'), Wlcm(name='_wlcm')] for screen in screens: sm.add_widget(screen) </code></pre> <p>Your <code>build()</code> method probably should look like:</p> <pre><code>class MyApp(App): def build(self): return sm </code></pre>
python|kivy
0
1,909,328
63,920,172
Python code for solving a system from LU decomposition with forward and backward substitution
<p>I keep receiving an error in my code and I can't figure it out. I need to solve a system from an LU decomposition using backward and forward substitution. This is what I have so far:</p> <pre><code>import numpy as np L=np.array([(1,0,0,0),(-1,1,0,0),(2,-1,1,0),(-3,2,-2,1)]) U=np.array([(2,1,-1,3),(0,1,-1,3),(0,0,-1,3),(0,0,0,3)]) b=np.array([(12,-8,21,-26)]) def forward_subs(L,b): y=[] for i in range(len(b)): y.append(b[i]) for j in range(i): y[i]=y[i]-(L[i,j]*y[j]) y[i]=y[i]/L[i,i] return y def back_subs(U,y): x=np.zeros_like(y) for i in range(len(x),0,-1): x[i-1]=(y[i-1]-np.dot(U[i-1,i:],x[i:]))/U[i-1,i-1] return x def solve_system_LU(L,U,b): y=forward_subs(L,b) x=back_subs(U,y) return x print(solve_system_LU(L,U,b)) </code></pre>
<p>Very small mistake: while defining <code>b</code> you are adding an extra dimension (the nested tuple makes it a 2-D array). So just define <code>b</code> as:</p> <p><code>b = np.array([12,-8,21,-26])</code></p> <p>And leave the remaining code as it is. Code runs without any errors and outputs</p> <p><code>[4. 3. 3. 1.33333333]</code></p>
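<p>To sanity-check the result, you can verify that multiplying back through L and U reproduces <code>b</code>:</p> <pre><code>x = solve_system_LU(L, U, b)
print(x)                          # [4. 3. 3. 1.33333333]
print(np.allclose(L @ U @ x, b))  # True if both substitutions are correct
</code></pre>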
python|numpy
0
1,909,329
53,245,974
Creating python virtual environment in Visual Studio fails: returned non-zero exit status 1
<p>In Visual Studio 2015 and 2017, I always get an error when trying to create a virtual environment with certain base interpreters. </p> <p><strong>Base interpreters that work:</strong></p> <ul> <li>Python 3.6 32-bit</li> <li>Python 2.7, 64-bit</li> <li>Anaconda 5.0.1 (2.7, 64-bit)</li> </ul> <p><strong>Base interpreters that give error:</strong></p> <ul> <li>Anaconda 5.0.1 (3.6, 64-bit)</li> <li>Python 3.6 64-bit</li> </ul> <p><strong>The error:</strong></p> <blockquote> <p>Error: Command '['F:\OneDrive\Visual Studio 2017 Projects\Web Test\DjangoWebProject1\DjangoWebProject1\env4\Scripts\python.exe', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. Virtual environment was not created at 'F:\OneDrive\Visual Studio 2017 Projects\Web Test\DjangoWebProject1\DjangoWebProject1\env4'. Exit code: 1 Virtual environment was not created at 'F:\OneDrive\Visual Studio 2017 Projects\Web Test\DjangoWebProject1\DjangoWebProject1\env4'</p> </blockquote>
<p>Turns out this was a known issue with previous versions of Anaconda. The Python 3.6 64-bit interpreter that was listed as giving an error was actually an Anaconda version of Python as well. </p> <p>Simply using the package manager to upgrade to the most recent versions of Anaconda fixed the issues.</p>
python|visual-studio|virtual-environment
0
1,909,330
53,074,691
Calculation Year of Age in a .CSV file by Python
<p>I have a <code>Customer_Profile.csv</code> file that contains a column <code>Birthday</code> and values are like <code>19460620</code> (YearMonthDay) format. </p> <p>I want to calculate only the year of age from the present / now day. In addition, after calculating the age, I also want to categorize / group the age in a new column named <code>Age_Group</code>. </p> <p>For example, the age group should be as follows:</p> <p>Age between 10 to 20 is group 1<br> Age between 21 to 30 is group 2<br> Age between 31 to 40 is group 3 </p> <p>and so on. Any idea to write python script for the above tasks.</p>
<p>you can easily parse the birth date using <code>datetime.datetime.strptime</code> like this:</p> <pre><code>birth_date = datetime.datetime.strptime("19460620", "%Y%m%d") </code></pre> <p>and the current time:</p> <pre><code>now = datetime.datetime.now() </code></pre> <p>then you can get the age using the following (subtract a year only when this year's birthday hasn't happened yet):</p> <pre><code>birthday_passed = (now.month &gt; birth_date.month) or (now.month == birth_date.month and now.day &gt;= birth_date.day) age = now.year - birth_date.year if not birthday_passed: age -= 1 </code></pre> <p>to group your ages you can use integer division:</p> <pre><code>group = (age - 1) // 10 </code></pre> <p>The csv reading and writing is easy enough to do using pandas; just look up <code>pandas.read_csv</code> and <code>pandas.to_csv</code>.</p>
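<p>A minimal pandas sketch tying it together (the file and column names come from your question; the output filename is made up):</p> <pre><code>import datetime
import pandas as pd

df = pd.read_csv('Customer_Profile.csv')

def age_from_birthday(value):
    birth_date = datetime.datetime.strptime(str(value), '%Y%m%d')
    now = datetime.datetime.now()
    # tuple comparison: has this year's birthday happened yet?
    passed = (now.month, now.day) &gt;= (birth_date.month, birth_date.day)
    return now.year - birth_date.year - (0 if passed else 1)

df['Age'] = df['Birthday'].apply(age_from_birthday)
df['Age_Group'] = (df['Age'] - 1) // 10
df.to_csv('Customer_Profile_with_age.csv', index=False)
</code></pre>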
python|pandas|csv|datetime|calculation
0
1,909,331
68,528,179
Complete QtQuick Localization on PySide
<p>I know there is nearly the same question posted on <a href="https://stackoverflow.com/questions/44578113/localization-in-qtquick-from-top-to-bottom">Localization in QtQuick from top to bottom</a> but the guy there already knows how to start off, further on it's based on C++.</p> <p>In my problem, I also have to translate all strings on QML side in real-time, but Python (PySide) is used in the backend instead of C++. And since I am quite a newbie in this section, I don't know how to achieve this with minimal Python use.</p> <p>Based on the linked question, I am so far able to:</p> <ul> <li>Appended QT_TR_NOOP() to all of my translatable strings for translation at runtime.</li> </ul> <p>But the further steps described there are not clear to me. The documentation of QML for Python is very minimalistic.</p> <p>I would be very thankful for some detailed descriptions or examples.</p>
<p>The logic is similar to what is done in C++ since QML does not change anything, you only have to adapt the logic in python.</p> <p>The step is as follows:</p> <ol> <li><p>Generate the .ts from qml using lupdate.</p> <pre><code>lupdate main.qml -ts i18n_es.ts </code></pre> </li> <li><p>Use Qt Linguist to add the translations.</p> </li> <li><p>Compile the .ts to .qm using lrelease.</p> <pre><code>lrelease i18n_es.ts i18n_es.qm </code></pre> </li> <li><p>Load the .qm using python.</p> </li> </ol> <pre><code>├── main.py ├── qml │ └── main.qml └── translations ├── i18n_es.qm ├── i18n_es.ts ├── i18n_fr.qm └── i18n_fr.ts </code></pre> <p><strong>main.py</strong></p> <pre class="lang-py prettyprint-override"><code>import os import re import sys from pathlib import Path from PySide2.QtCore import ( Property, QCoreApplication, QDir, QObject, Qt, QTranslator, QUrl, Signal, Slot, ) from PySide2.QtGui import QGuiApplication, QStandardItem, QStandardItemModel from PySide2.QtQml import QQmlApplicationEngine CURRENT_DIRECTORY = Path(__file__).resolve().parent QML_DIRECTORY = CURRENT_DIRECTORY / &quot;qml&quot; TRANSLATIONS_DIR = CURRENT_DIRECTORY / &quot;translations&quot; class Translator(QObject): language_changed = Signal(name=&quot;languageChanged&quot;) def __init__(self, engine, parent=None): super().__init__(parent) self._engine = engine self._languages_model = QStandardItemModel() self.load_translations() self._translator = QTranslator() @Slot(str) def set_language(self, language): if language != &quot;Default&quot;: trans_dir = QDir(os.fspath(TRANSLATIONS_DIR)) filename = trans_dir.filePath(f&quot;i18n_{language}.qm&quot;) if not self._translator.load(filename): print(&quot;Failed&quot;) QGuiApplication.installTranslator(self._translator) else: QGuiApplication.removeTranslator(self._translator) self._engine.retranslate() def languages_model(self): return self._languages_model languages = Property(QObject, fget=languages_model, constant=True) def load_translations(self): self._languages_model.clear() item = QStandardItem(&quot;Default&quot;) self._languages_model.appendRow(item) trans_dir = QDir(os.fspath(TRANSLATIONS_DIR)) for filename in trans_dir.entryList([&quot;*.qm&quot;], QDir.Files, QDir.Name): language = re.search(r&quot;i18n_(.*?)\.qm&quot;, filename).group(1) item = QStandardItem(language) self._languages_model.appendRow(item) def main(): app = QGuiApplication(sys.argv) engine = QQmlApplicationEngine() translator = Translator(engine, app) engine.rootContext().setContextProperty(&quot;translator&quot;, translator) filename = os.fspath(QML_DIRECTORY / &quot;main.qml&quot;) url = QUrl.fromLocalFile(filename) def handle_object_created(obj, obj_url): if obj is None and url == obj_url: QCoreApplication.exit(-1) engine.objectCreated.connect(handle_object_created, Qt.QueuedConnection) engine.load(url) sys.exit(app.exec_()) if __name__ == &quot;__main__&quot;: main() </code></pre> <p><strong>main.qml</strong></p> <pre><code>import QtQuick 2.15 import QtQuick.Controls 2.15 import QtQuick.Layouts 1.15 ApplicationWindow { width: 640 height: 480 visible: true title: qsTr(&quot;Title&quot;) ListModel { id: list_model ListElement { name: QT_TR_NOOP(&quot;house&quot;) } ListElement { name: QT_TR_NOOP(&quot;table&quot;) } ListElement { name: QT_TR_NOOP(&quot;chair&quot;) } } ColumnLayout { anchors.fill: parent ComboBox { model: translator ? 
translator.languages : null textRole: &quot;display&quot; Layout.fillWidth: true onActivated: function(index) { translator.set_language(currentText); } } Button { text: qsTr(&quot;name&quot;) Layout.fillWidth: true } ListView { model: list_model Layout.fillWidth: true Layout.fillHeight: true delegate: Text { text: qsTr(name) } } } } </code></pre> <p><sub>The full example is <a href="https://github.com/eyllanesc/stackoverflow/tree/master/questions/68528179" rel="nofollow noreferrer">here</a>.</sub></p>
python|qt|qml|pyside2|qt-quick
0
1,909,332
68,870,171
Using pandas .shift on multiple columns with different shift lengths
<p>I have created a function that parses through each column of a dataframe, shifts up the data in that respective column to the first observation (shifting past '-'), and stores that column in a dictionary. I then convert the dictionary back to a dataframe to have the appropriately shifted columns. The function is operational and takes about 10 seconds on a 12x3000 dataframe. However, when applying it to 12x25000 it is extremely extremely slow. I feel like there is a much better way to approach this to increase the speed - perhaps even an argument of the shift function that I am missing. Appreciate any help.</p> <pre><code>def create_seasoned_df(df_orig): &quot;&quot;&quot; Creates a seasoned dataframe with only the first 12 periods of a loan &quot;&quot;&quot; df_seasoned = df_orig.reset_index().copy() temp_dic = {} for col in cols: to_shift = -len(df_seasoned[df_seasoned[col] == '-']) temp_dic[col] = df_seasoned[col].shift(periods=to_shift) df_seasoned = pd.DataFrame.from_dict(temp_dic, orient='index').T[:12] return df_seasoned </code></pre>
<p>Try using this code with <code>apply</code> instead; it shifts each column up by its own count of <code>'-'</code> placeholders, so no explicit loop or dictionary is needed:</p> <pre><code>def create_seasoned_df(df_orig): df_seasoned = df_orig.reset_index().apply(lambda x: x.shift(-x.eq('-').sum()), axis=0) return df_seasoned[:12] </code></pre>
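<p>A quick sanity check on a toy frame (made-up data, just to show the per-column shift):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'a': ['-', '-', 1, 2], 'b': ['-', 3, 4, 5]})
print(create_seasoned_df(df))
#    index    a    b
# 0      0    1    3
# 1      1    2    4
# 2      2  NaN    5
# 3      3  NaN  NaN
</code></pre>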
python|pandas|shift
0
1,909,333
71,747,634
Analysis of text using Gunning Fog index
<p>While doing an analysis of readability using the Gunning fog index, I have to calculate the following values:</p> <ol> <li>Average Sentence Length = the number of words / the number of sentences</li> <li>Percentage of Complex words = the number of complex words / the number of words</li> <li>Fog Index = 0.4 * (Average Sentence Length + Percentage of Complex words)</li> </ol> <p>I want to know whether the number of words should be calculated after removing duplicates and stop words, i.e. after cleaning, or whether it is just the total number of words in the text without removing any words or cleaning?</p> <p>Thanks for the help!</p>
<p>No, you don't do any cleaning or 'stop-word' removal.</p> <p>You are trying to calculate how easy it is to read the text. Stop words are only relevant for old-style information retrieval. Also, do not remove duplicates. Process the text as-is, otherwise the result will be wrong.</p> <p>If you were to remove stopwords, the text would appear more difficult to read, since effectively a lot of short (i.e. &quot;easy&quot;) words would have been removed.</p>
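<p>For illustration, a rough sketch of the computation on the raw, uncleaned text (the sentence splitter and the vowel-group syllable counter here are crude approximations, and note the classic formula multiplies the complex-word fraction by 100):</p> <pre><code>import re

def fog_index(text):
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)  # no stop words removed, duplicates kept
    def syllables(word):
        # crude estimate: count groups of consecutive vowels
        return max(1, len(re.findall(r'[aeiouy]+', word.lower())))
    complex_words = [w for w in words if syllables(w) &gt;= 3]
    asl = len(words) / len(sentences)
    pcw = 100 * len(complex_words) / len(words)
    return 0.4 * (asl + pcw)

print(fog_index('The cat sat. It contemplated extraordinary possibilities.'))
</code></pre>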
python|nlp|stop-words
0
1,909,334
71,568,556
How can I create account come with mnemonic use web3.py?
<p>account = web3.eth.account.create() this account only have private_key and address,I want the mnemonic also,How can I do it?</p> <p>For create a acount I want the result with private_key,address,mnemonic ... How can I do that?</p>
<pre><code> &gt;&gt;&gt; from eth_account import Account &gt;&gt;&gt; Account.enable_unaudited_hdwallet_features() &gt;&gt;&gt; acct, mnemonic = Account.create_with_mnemonic() &gt;&gt;&gt; acct.address # doctest: +SKIP '0x5ce9454909639D2D17A3F753ce7d93fa0b9aB12E' &gt;&gt;&gt; acct == Account.from_mnemonic(mnemonic) True </code></pre>
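<p>As a plain script, assuming a recent <code>eth-account</code> where the private key is exposed as <code>.key</code>, this prints all three values the question asks for:</p> <pre><code>from eth_account import Account

Account.enable_unaudited_hdwallet_features()
acct, mnemonic = Account.create_with_mnemonic()
print(acct.address)    # checksummed address
print(acct.key.hex())  # private key
print(mnemonic)        # the recovery phrase; keep it secret
</code></pre>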
python|ethereum|web3py
0
1,909,335
10,320,900
Recording and asserting an empty mock list in python unittest
<p>I'm using <code>unittest</code> and <code>mock</code> (Michael Foord's module) to test some python code.</p> <p>I have something like this (this is a proof of concept that could be rewritten more cleanly, but my real code needs to behave like that <code>foo</code> function):</p> <pre><code>import unittest from mock import patch def foo(): my_list = [] class Test(unittest.TestCase): def test_foo(self): with patch('__main__.my_list', new=[], create=True) as baz: baz.extend(['foo', 'bar']) self.assertEqual(foo(), None) self.assertListEqual([], baz) if __name__ == '__main__': unittest.main() </code></pre> <p>So the problem is that my <code>baz</code> mock object doesn't change accordingly after the <code>foo()</code> call and the last assertion fails.</p> <p>If I use <code>my_list.remove(x)</code> in <code>foo()</code> then I can see the changes in my test case, but I just want to empty that list, I don't want to pass through every element of the list then remove it, no, I want a fast empty operation.</p> <p>How can I check if my mock object is emptied without using <code>.remove(x)</code>, but using the current implementation of function <code>foo</code>?</p>
<p>So, I end up answering my own question...</p> <p>The solution was to use <code>my_list[:] = []</code> in <code>foo</code>.</p> <p>But I also realized that passing <code>create=True</code> is bad because if that list doesn't exist (exactly the case of this POC) it will be created and I'll possibly test broken code that passes the tests.</p>
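<p>For completeness, a minimal sketch of what the fixed <code>foo</code> looks like; slice assignment mutates the existing list object, so the patched mock sees the change:</p> <pre><code>my_list = ['foo', 'bar']

def foo():
    my_list[:] = []  # clears the list in place instead of rebinding the name
</code></pre>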
python|unit-testing|mocking
1
1,909,336
67,418,961
Getting an error trying to play an audio file in python
<p>I am trying to play a mp3 file in the same directory using pygame.mixer. Here is the code</p> <pre><code>from pygame import mixer mixer.init() mixer.music.load('tom.mp3') mixer.music.set_volume(0.7) mixer.music.play() </code></pre> <p>But I am getting this error and i am not able to rectify it. I have tried changing it to a .wav file, that did not work either.</p> <p>ERROR:</p> <pre><code> mixer.music.play() pygame.error: mpg123_seek: Invalid RVA mode. (code 12) </code></pre> <p>I have no idea about audio in python. Any help is appreciated.</p> <p>Thank you</p>
<p>Try:</p> <pre><code>from pygame import mixer mixer.init() sound = mixer.Sound('tom.mp3') sound.play() </code></pre>
python
0
1,909,337
67,380,660
How to select rows in pandas dataframe based on condition
<p>I have a huge data and my python pandas dataframe looks like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">HR</th> <th style="text-align: center;">SBP</th> <th style="text-align: right;">DBP</th> <th style="text-align: right;">SepsisLabel</th> <th style="text-align: right;">PatientID</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">92</td> <td style="text-align: center;">120</td> <td style="text-align: right;">80</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">98</td> <td style="text-align: center;">115</td> <td style="text-align: right;">85</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">93</td> <td style="text-align: center;">125</td> <td style="text-align: right;">75</td> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">95</td> <td style="text-align: center;">130</td> <td style="text-align: right;">90</td> <td style="text-align: right;">0</td> <td style="text-align: right;">1</td> </tr> <tr> <td style="text-align: left;">102</td> <td style="text-align: center;">120</td> <td style="text-align: right;">80</td> <td style="text-align: right;">1</td> <td style="text-align: right;">1</td> </tr> <tr> <td style="text-align: left;">109</td> <td style="text-align: center;">115</td> <td style="text-align: right;">75</td> <td style="text-align: right;">1</td> <td style="text-align: right;">1</td> </tr> <tr> <td style="text-align: left;">94</td> <td style="text-align: center;">135</td> <td style="text-align: right;">100</td> <td style="text-align: right;">0</td> <td style="text-align: right;">2</td> </tr> <tr> <td style="text-align: left;">97</td> <td style="text-align: center;">100</td> <td style="text-align: right;">70</td> <td style="text-align: right;">0</td> <td style="text-align: right;">2</td> </tr> <tr> <td style="text-align: left;">85</td> <td style="text-align: center;">120</td> <td style="text-align: right;">80</td> <td style="text-align: right;">0</td> <td style="text-align: right;">2</td> </tr> <tr> <td style="text-align: left;">88</td> <td style="text-align: center;">115</td> <td style="text-align: right;">75</td> <td style="text-align: right;">0</td> <td style="text-align: right;">3</td> </tr> <tr> <td style="text-align: left;">93</td> <td style="text-align: center;">125</td> <td style="text-align: right;">85</td> <td style="text-align: right;">1</td> <td style="text-align: right;">3</td> </tr> <tr> <td style="text-align: left;">78</td> <td style="text-align: center;">130</td> <td style="text-align: right;">90</td> <td style="text-align: right;">1</td> <td style="text-align: right;">3</td> </tr> <tr> <td style="text-align: left;">115</td> <td style="text-align: center;">140</td> <td style="text-align: right;">110</td> <td style="text-align: right;">0</td> <td style="text-align: right;">4</td> </tr> <tr> <td style="text-align: left;">102</td> <td style="text-align: center;">120</td> <td style="text-align: right;">80</td> <td style="text-align: right;">0</td> <td style="text-align: right;">4</td> </tr> <tr> <td style="text-align: left;">98</td> <td style="text-align: center;">140</td> <td style="text-align: right;">110</td> <td style="text-align: right;">0</td> <td style="text-align: right;">4</td> </tr> </tbody> </table> </div> <p>I want to select only those rows based on PatientID which have 
SepsisLabel = 1. Like PatientID 0, 2, and 4 don't have sepsis label 1. So, I don't want them in new dataframe. I want PatientID 1 and 3, which have SepsisLabel = 1 in them.</p> <p>I hope you can understand what I want to say. If so, please help me with a python code. I am sure it needs some condition along with iloc() function (I might be wrong).</p> <p>Regards.</p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.any.html" rel="nofollow noreferrer"><code>GroupBy.any</code></a> for test if at least one <code>True</code> per groups and filtering by <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p> <pre><code>df1 = df[df['SepsisLabel'].eq(1).groupby(df['PatientID']).transform('any')] </code></pre> <p>Or filter all groups with <code>1</code> and filter them in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a>:</p> <pre><code>df1 = df[df['PatientID'].isin(df.loc[df['SepsisLabel'].eq(1), 'PatientID'])] </code></pre> <p>If small data or performance not important is possible use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.filter.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.filter</code></a>:</p> <pre><code>df1 = df.groupby('PatientID').filter(lambda x: x['SepsisLabel'].eq(1).any()) </code></pre> <hr /> <pre><code>print (df1) HR SBP DBP SepsisLabel PatientID 3 95 130 90 0 1 4 102 120 80 1 1 5 109 115 75 1 1 9 88 115 75 0 3 10 93 125 85 1 3 11 78 130 90 1 3 </code></pre>
python|pandas|dataframe|conditional-statements
1
1,909,338
60,369,494
What is col[:2] and col[4:] doing when iterating through column headers in pandas DataFrame
<p>I'm new to pandas. For a DataFrame with Olympic results, the question was to change the column names through iterating the column headers and then use rename to rename the column names. In the given answer, I don't understand what col[:2] is doing, and in the rename parameters, what is col[4:] doing? Can you please help me understand what the code is doing? Thanks. (Sorry I'm not allowed to embed pictures so the dataframes before and after are in the links.) <a href="https://i.stack.imgur.com/Vhfl6.png" rel="nofollow noreferrer">dataframe_before</a> <a href="https://i.stack.imgur.com/tkcpZ.png" rel="nofollow noreferrer">dataframe_after</a></p> <p>The given code is: </p> <pre><code>for col in df.columns: if col[:2]=='01': df.rename(columns={col:'Gold' + col[4:]}, inplace=True) if col[:2]=='02': df.rename(columns={col:'Silver' + col[4:]}, inplace=True) if col[:2]=='03': df.rename(columns={col:'Bronze' + col[4:]}, inplace=True) if col[:1]=='№': df.rename(columns={col:'#' + col[1:]}, inplace=True) </code></pre> <p>df.head()</p>
<p>In the dataframe before the change, the column headers have a specific structure. Each header will begin with <code>01</code> for Gold, <code>02</code> for Silver and <code>03</code> for Bronze. After the first two characters, it might have other identifying information. As far as I can see from the earlier dataframe, the identifying information is separated from the first two characters by a <code>newline</code> character and a <code>!</code> character (example: <code>01\n!.123</code>, <code>03\n!.456</code>). The aim of the code is to replace the first two digits by their respective medal types without changing the following information.</p> <p><code>col[:2]</code> indexes all characters before position <code>2</code>, that is, the first two, and the code replaces them with the appropriate medal type. We do not want the remaining information to change, but we also want to discard the intervening <code>newline</code> and <code>!</code> characters. So we just take everything from index <code>4</code> onward in the original name, which skips over the first two (already replaced) characters and the intervening characters we do not want, and append this to the medal name we just created. <code>col[4:]</code> does precisely that.</p> <p>I have assumed certain things about the names of the original dataframe. Please let me know if any of those assumptions is incorrect.</p>
python-3.x|pandas|dataframe|jupyter-notebook
0
1,909,339
60,433,056
How to create dropdown dependencies
<p>I have a dropdown taking distinct values from a database. Depending on the selected value in this dropdown, I want a second dropdown to be updated with other distinct values from the same table. As an example, let's say that I have a table with 'Continents' and 'Countries' fields, if I select 'Europe' for instance, I want the second dropdown to show the countries in my db which belong to this continent.</p> <p>I managed to create the first dropdown and connect it to the db, but I'm struggling with the second step, which is to retrieve the first dropdown's selection and use it to update the second one.</p> <p>Here is my views.py:</p> <pre><code>def MyView(request): query_results = data_world.objects.all() continents_list = ContinentsChoiceField() countries_list = CountriesChoiceField() query_results_dict = { 'query_results': query_results, 'continents_list': continents_list , 'countries_list': countries_list , } return render(request,'home.html', query_results_dict) </code></pre> <p>models.py:</p> <pre><code>class data_world(models.Model): id = models.AutoField(primary_key=True) continent_name= models.TextField(db_column='CONTINENTS', blank=True, null=True) country_name = models.TextField(db_column='COUNTRIES', blank=True, null=True) city_name = models.TextField(db_column='CITIES', blank=True, null=True) class Meta: managed = True db_table = 'world' </code></pre> <p>forms.py:</p> <pre><code> class ContinentsChoiceField(forms.Form): continents = forms.ModelChoiceField( queryset=data_world.objects.values_list("continent_name", flat=True).distinct(), empty_label=None ) class CountriesChoiceField(forms.Form): countries = forms.ModelChoiceField( queryset=data_world.objects.values_list("country_name", flat=True).distinct(), empty_label=None ) </code></pre> <p>home.html:</p> <pre><code>&lt;select id="continents"&gt; {% for item in continents_list %} &lt;option val="{{ item.continent_name }}"&gt; {{ item }} &lt;/option&gt; {% endfor %} &lt;/select&gt; &lt;script type="text/javascript"&gt; function GetSelectedText(){ var ct = document.getElementById("continents"); var result_ct = ct.options[ct.selectedIndex].text; document.getElementById("result_ct").innerHTML = result_ct; } &lt;/script&gt; &lt;button type="button" onclick="GetSelectedText()"&gt;Get Selected Value&lt;/button&gt; &lt;select id="countries"&gt; {% for item in countries_list %} &lt;option val="{{ item.country_name }}"&gt; {{ item }} &lt;/option&gt; {% endfor %} &lt;/select&gt; </code></pre> <p>Edit 1: updated my html page with some Js code which helps me to retrieve the chosen continent. Now I would need to use it to display the corresponding countries in the second dropdown.</p>
<p>The django-select2 package does exactly this.</p> <p>Take a look at <a href="https://django-select2.readthedocs.io/en/latest/extra.html" rel="nofollow noreferrer">https://django-select2.readthedocs.io/en/latest/extra.html</a></p> <p>In models.py</p> <pre><code>class Country(models.Model): name = models.CharField(max_length=255) class City(models.Model): name = models.CharField(max_length=255) country = models.ForeignKey('Country', related_name="cities") </code></pre> <p>Then customise a form:</p> <pre><code>... from django_select2.forms import ModelSelect2Widget class AddressForm(forms.Form): country = forms.ModelChoiceField( queryset=Country.objects.all(), label=u"Country", widget=ModelSelect2Widget( model=Country, search_fields=['name__icontains'], ) ) city = forms.ModelChoiceField( queryset=City.objects.all(), label=u"City", widget=ModelSelect2Widget( model=City, search_fields=['name__icontains'], dependent_fields={'country': 'country'}, max_results=500, ) ) </code></pre>
python|html|django|dropdown
0
1,909,340
60,673,961
How to find minimum longitude of coordinate system?
<p>I have to write a function in Python which takes list of coordinates and a coordinate reference system (as EPSG code). It should return True if the coordinates are valid in the coordinate system, or False if not. How can I do that? </p> <p>My idea was to get min and max latitude and longitude, create a bonding polygon and check if point falls inside. Problem is I don't know how to get min and max lat and lon from EPSG code. My only idea is to write them manually into function which is rather pointless. </p> <p>Is this the right approach or I'm overthinking it and there's an easier way?</p> <p>So far I have: </p> <pre><code>def valid_coordinates(EPSG): print "Coordinate System : ", arcpy.SpatialReference(EPSG).name array = arcpy.Array([ arcpy.Point(-180, -90), arcpy.Point(180, -90), arcpy.Point(-180, 90), arcpy.Point(180, 90), arcpy.Point(-180, -90), ])# build a polygon based on the array polygon = arcpy.Polygon(array, EPSG) point_1 = Point(-0.4, 30.3) point_2 = Point(-1000,-5000) print "Point 1: ", polygon.contains(point_1) #returns True which is correct print "Point 2: ", polygon.contains(point_2) #returns False is correct </code></pre> <p>but this is pointless as function should take any coordinates and any EPSG </p>
<p>Achieve this with <code>len()</code>:</p> <pre><code>latitude = 'your_extension' len(latitude) </code></pre>
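<p>If the missing piece is just the valid extent for an arbitrary EPSG code, one possible sketch, assuming <code>pyproj</code> 2+ is available alongside arcpy, reads the CRS's declared area of use:</p> <pre><code>from pyproj import CRS

def valid_extent(epsg):
    aou = CRS.from_epsg(epsg).area_of_use
    # west, south, east, north are given in lon/lat degrees
    return aou.west, aou.south, aou.east, aou.north

print(valid_extent(4326))
</code></pre>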
python|coordinates|gis|arcmap|epsg
0
1,909,341
71,190,507
Getting "None" for BS4 web scraping
<p>So I am trying to create a script that will get the price of bitcoin. For some reason running this code results in the output of <code>None</code>; however, I would like the output of the current bitcoin price. How do I fix this?</p> <pre><code>import requests from bs4 import BeautifulSoup url = 'https://www.google.com/search?q=bitcoin+price' r = requests.get(url) soup = BeautifulSoup(r.text, 'html.parser') text = soup.find('span', {'class':'vpclqee'}) print(text) </code></pre>
<p>If you have no restriction on using Google's Bitcoin price, some other sites have easier access to this value, like CoinMarketCap:</p> <pre><code>from bs4 import BeautifulSoup import requests url = 'https://coinmarketcap.com/currencies/bitcoin/' r = requests.get(url) soup = BeautifulSoup(r.text, 'html.parser') text = soup.find_all('div', {'class':&quot;priceValue&quot;}) for elem in text: print(elem.get_text()) </code></pre> <p>But note that this is not suitable for any real-time updating as I believe it updates much too slowly.</p> <p>Output:</p> <pre><code>$39,878.01 </code></pre>
python|web-scraping|beautifulsoup
1
1,909,342
63,721,094
how to handle NULL values in DATE type column on where statement
<p>I run the SQL below in Python to Google BigQuery but I got error <code>BadRequest: 400 Could not cast literal &quot;&quot; to type DATE </code> I want to check the data in cv_date column that is same row of luid exists or not and if it exists , it return True. cv_date column is DATE type. I tried use <code>cv_date IS NOT NULL</code> but it didn't work . anyone has idea ?? or I need to change the logic of this SQL ??</p> <pre><code>from flask import Flask, request from google.cloud import bigquery app = Flask(__name__) @app.route('/') def get_request(): request_luid = request.args.get('luid') or '' client = bigquery.Client() query = &quot;&quot;&quot;SELECT EXISTS( SELECT 1 FROM `test-266778.conversion_log.conversion_log_2020*` as p WHERE p.luid = '{}' AND p.cv_date != '' limit 1000)&quot;&quot;&quot;.format(request_luid) job_config = bigquery.QueryJobConfig( query_parameters=[ bigquery.ScalarQueryParameter(&quot;request_luid&quot;, &quot;STRING&quot;, request_luid) ] ) query_job = client.query(query) query_res = query_job.result() for row in query_res: return str(row[0]) if __name__ == &quot;__main__&quot;: app.run() </code></pre>
<p>If I am not mistaken, you want to check, for every row with a certain luid, whether the <code>cv_date</code> value is present (non-NULL) or not.</p> <p>Continuing with my public BigQuery data example, I believe you want something similar to:</p> <pre><code>SELECT load_id,report_date IS NOT NULL as date FROM `bigquery-public-data.austin_waste.waste_and_diversion` as p limit 1000 </code></pre> <p>for a specific luid, which in your case would be something like</p> <pre><code>&quot;&quot;&quot;SELECT luid,cv_date IS NOT NULL as date FROM `&lt;project-id&gt;.conversion_log.conversion_log_2020*` as p where p.luid = '{}' limit 1000&quot;&quot;&quot;.format(request_luid) </code></pre>
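<p>Applied to the original <code>EXISTS</code> query, the same idea would look roughly like the sketch below: test the DATE column with <code>IS NOT NULL</code> instead of comparing it to an empty string (the comparison is what triggers the cast error), and actually pass the query parameter that was already configured:</p> <pre><code>query = """SELECT EXISTS(
    SELECT 1
    FROM `test-266778.conversion_log.conversion_log_2020*` AS p
    WHERE p.luid = @request_luid
      AND p.cv_date IS NOT NULL)"""
# pass the job_config so @request_luid is bound
query_job = client.query(query, job_config=job_config)
</code></pre>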
python-3.x|google-app-engine|flask|google-bigquery
1
1,909,343
61,036,991
Python Regular Expression Why Quantifier (+) is not greedy
<p>Input: <code>asjkd http://www.as.com/as/g/ff askl</code></p> <p>Expected output: <code>http://www.as.com/as/g/ff</code></p> <p>When I try below I am getting expected output</p> <pre><code>pattern=re.compile(r'http[\w./:]+') print(pattern.search("asjkd http://www.as.com/as/g/ff askl")) </code></pre> <p>Why isn't the <code>+</code> quantifier greedy here? I was expecting it to be greedy. Here actually not being greedy is helping to find the right answer.</p>
<p>It <em>is</em> greedy. It stops matching when it hits the space because <code>[\w./:]</code> doesn't match a space. A space isn't a <a href="https://docs.python.org/3/library/re.html#index-32" rel="nofollow noreferrer">word character</a> (alphanumeric or underscore), dot, slash, or colon.</p> <p>Change <code>+</code> to <code>+?</code> and you can see what happens when it's non-greedy.</p> <p><strong>Greedy</strong></p> <pre><code>&gt;&gt;&gt; pattern=re.compile(r'http[\w./:]+') &gt;&gt;&gt; print(pattern.search("asjkd http://www.as.com/as/g/ff askl")) &lt;re.Match object; span=(6, 31), match='http://www.as.com/as/g/ff'&gt; </code></pre> <p><strong>Non-greedy</strong></p> <pre><code>&gt;&gt;&gt; pattern=re.compile(r'http[\w./:]+?') &gt;&gt;&gt; print(pattern.search("asjkd http://www.as.com/as/g/ff askl")) &lt;re.Match object; span=(6, 11), match='http:'&gt; </code></pre> <p>It matches a single character <code>:</code>!</p>
python|regex
0
1,909,344
60,857,221
Python script to output CSV of only duplicate records?
<p>I have a bunch of records like so:</p> <pre><code>uniqueIdHere1,2020-02-21T21:29:31Z,2020-03-25T20:44:29.810951Z2020-02-21 21:29:31.996,1582320571996 uniqueIdHere1,2020-02-21T21:29:31Z,2020-03-25T20:44:29.810951Z2020-02-21 21:29:31.996,1582320571996 uniqueIdHere1,2020-02-21T21:29:31Z,2020-03-25T20:44:29.810951Z2020-02-21 21:29:31.996,1582320571996 uniqueIdHere2,2020-02-21T21:29:31Z,2020-03-25T20:44:29.810951Z2020-02-21 21:29:31.996,1582320571996 uniqueIdHere3,2020-02-21T21:29:31Z,2020-03-25T20:44:29.810951Z2020-02-21 21:29:31.996,1582320571996 </code></pre> <p>My goal is to have an <code>duplicatedRecords.csv</code> of only records that are duplicated by the ID column. Expected output:</p> <pre><code>uniqueIdHere1,2020-02-21T21:29:31Z,2020-03-25T20:44:29.810951Z2020-02-21 21:29:31.996,1582320571996 uniqueIdHere1,2020-02-21T21:29:31Z,2020-03-25T20:44:29.810951Z2020-02-21 21:29:31.996,1582320571996 uniqueIdHere1,2020-02-21T21:29:31Z,2020-03-25T20:44:29.810951Z2020-02-21 21:29:31.996,1582320571996 </code></pre> <p>I don't really know Python, but was hoping to just have a little one off script. Attempted code a little bit:</p> <pre><code>with open('1.csv','r') as in_file, open('2.csv','w') as out_file: seen = set() # set for fast O(1) amortized lookup dupeSet = set() # dupe check for filtering serialized output data for line in in_file: if line not in seen and line not in dupeSet: seen.add(line) if line in seen and line in dupeSet: out_file.write(line) </code></pre> <p>Something like this, but it got messy and was hoping for a little help. </p>
<p>Here's one approach, which uses the pandas library to import the csv and select repeated id rows:</p> <pre><code>import pandas as pd df = pd.read_csv('1.csv', names = ['id', 'col1', 'col2', 'col3'] ) counts = df['id'].value_counts() df_output = df.loc[df['id'].isin(counts.index[counts &gt; 1])] df_output.to_csv('newfile.csv',index = False) </code></pre>
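<p>pandas also has a built-in for exactly this: <code>duplicated</code> with <code>keep=False</code> flags every row of a repeated id, so the value-counts step can be replaced by one line:</p> <pre><code>df_output = df[df.duplicated(subset='id', keep=False)]
</code></pre>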
python|csv
3
1,909,345
66,082,439
Getting a 'NoneType' Object Not Subscriptable on a Python script for Digit Recognition
<p>So I decided to try out a digit recognition system, and it works when reading from thr <code>MNIST</code> database but when I try to import my own images I get an error saying <code>'NoneType' Object Is Not Subscriptable</code> on the second line of the following code snippet. I have imported <code>cv2</code> as <code>cv</code> and all the correct imports are there and have been given appropriate aliases.</p> <pre><code>import cv2 as cv import numpy as np import tensorflow as tf import matplotlib.pyplot as plt #the neural network setup code for x in range(1, 6): img = cv.imread(f'{x}.png')[:, :, 0] # where x is the name of the file e.g '5.png' img = np.invert(np.array([img])) prediction = model.predict(img) print(f' The result is probably: {np.argmax(prediction)}') plt.imshow(img[0], cmap=plt.cm.binary) plt.show() </code></pre>
<p>The file can't be opened, so <code>cv.imread</code> returns <code>None</code>, and subscripting that <code>None</code> raises the error. Instead use the full path to the file (here as a raw f-string so the backslashes survive). After this you may encounter an error concerning Qt, in which case uninstall and reinstall Qt and PyQt.</p> <pre><code>for x in range(1, 6): img = cv.imread(fr'[file path]\{x}.png')[:, :, 0] # where x is the name of the file e.g '5.png' img = np.invert(np.array([img])) prediction = model.predict(img) print(f' The result is probably: {np.argmax(prediction)}') plt.imshow(img[0], cmap=plt.cm.binary) plt.show() </code></pre>
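<p>Inside the loop, a defensive check also makes the failure explicit instead of surfacing later as a subscript error:</p> <pre><code>img = cv.imread(fr'[file path]\{x}.png')
if img is None:  # imread returns None instead of raising on an unreadable path
    raise FileNotFoundError(f'could not read {x}.png')
img = img[:, :, 0]
</code></pre>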
python|opencv|artificial-intelligence
0
1,909,346
72,586,331
How to make `tqdm.set_postfix_str` faster?
<p>When I use <code>tqdm</code>, I want to track a score calculated during the iterations and thus I used the <code>set_postfix_str</code> method. But I found it could dramatically slow down the program:</p> <pre><code>from tqdm import tqdm import time N = 10000 pbar = tqdm(range(N), total=N, desc=&quot;N&quot;) start = time.time() for i in pbar: pass print(&quot;w/o set_postfix_str: {:.2f}&quot;.format(time.time() - start)) pbar = tqdm(range(N), total=N, desc=&quot;N&quot;) start = time.time() for i in pbar: pass pbar.set_postfix_str(s=&quot;{}&quot;.format(i)) print(&quot;w set_postfix_str: {:.2f}&quot;.format(time.time() - start)) </code></pre> <p>Output:</p> <p>N: 100%|██████████| 10000/10000 [00:00&lt;00:00, 1239195.20it/s]<br /> w/o set_postfix_str: 0.01<br /> N: 100%|██████████| 10000/10000 [00:12&lt;00:00, 774.67it/s, 9999]<br /> w set_postfix_str: 12.91</p> <p>I tried to set the <code>miniters</code> in tqdm but it didn't help. How can I make it faster?</p>
<p>How about setting up the <code>bar_format</code> parameter to your liking?</p> <pre><code>import time from tqdm import tqdm N = 10000000 # I increased the number to see something on my machine pbar = tqdm(range(N), total=N, desc=&quot;N&quot;) start = time.time() for i in pbar: pass print(&quot;w/o set_postfix_str: {:.2f}&quot;.format(time.time() - start)) pbar = tqdm(range(N), total=N, desc=&quot;N&quot;,bar_format='{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}&lt;{remaining}, {rate_fmt} {n_fmt}]') start = time.time() for i in pbar: pass print(&quot;w custom bar: {:.2f}&quot;.format(time.time() - start)) </code></pre> <p>The <a href="https://tqdm.github.io/docs/tqdm/" rel="nofollow noreferrer">bar_format</a> defaults to <code>'{l_bar}{bar}{r_bar}'</code>; in this case we want to change the <code>{r_bar}</code> part, which in turn defaults to <code>'| {n_fmt}/{total_fmt} [{elapsed}&lt;{remaining}, ' '{rate_fmt}{postfix}]'</code>. I tried some combinations just passing the postfix arguments, but with no success, so explicitly putting it all in the bar_format was the solution I found. We change the postfix part to what we want, which in your example is equivalent to <code>n_fmt</code>, and thus the new bar_format is</p> <pre><code>'{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}&lt;{remaining}, {rate_fmt} {n_fmt}]' </code></pre> <p>and a quick test</p> <pre><code>C:\Users\copperfield\Desktop&gt;test.py N: 100%|███████████████████████████| 10000000/10000000 [00:01&lt;00:00, 5168526.86it/s] w/o set_postfix_str: 1.93 N: 100%|██████████████████| 10000000/10000000 [00:01&lt;00:00, 5320703.94it/s 10000000] w custom bar: 1.88 C:\Users\copperfield\Desktop&gt; </code></pre> <p>which turns out to be a little bit faster, funnily enough.</p> <hr /> <p>With further tinkering, and after looking at the source code of tqdm, there is a way to avoid the custom-bar workaround (given that we want something calculated in the loop rather than something the progress bar keeps track of, as in the previous example): <code>set_postfix_str</code> also takes a second argument, <code>refresh</code>, which is True by default; just change it to False. It will still be slow compared to the simple case, but not by that extreme amount; in my test it was only ~3 times slower.</p> <pre><code>pbar = tqdm(range(N),total=N, desc=&quot;N&quot;) start = time.time() for i in pbar: pbar.set_postfix_str(f&quot;{i}&quot;,refresh=False) print(&quot;w set_postfix_str refresh=False: {:.2f}&quot;.format(time.time() - start)) </code></pre> <p>And after taking a peek at the source code, the only thing this does is set an attribute, so let's cut out the middleman and do it directly:</p> <pre><code>pbar = tqdm(range(N),total=N, desc=&quot;N&quot;) start = time.time() for i in pbar: pbar.postfix=f&quot;{i}&quot; print(&quot;w .postfix: {:.2f}&quot;.format(time.time() - start)) </code></pre> <p>This improves the time a little, and now it is just ~1.7 times slower.</p> <p>And a quick test:</p> <pre><code>C:\Users\copperfield\Desktop&gt;test.py N: 100%|██████████████████████████████████████████████████████████████| 10000000/10000000 [00:01&lt;00:00, 5467326.75it/s] w/o set_postfix_str: 1.83 N: 100%|█████████████████████████████████████████████████████| 10000000/10000000 [00:01&lt;00:00, 5497884.24it/s 10000000] w custom bar: 1.82 N: 100%|█████████████████████████████████████████████████████| 10000000/10000000 [00:05&lt;00:00, 1874176.95it/s, 9999999] w set_postfix_str refresh=False: 5.34 N: 100%|█████████████████████████████████████████████████████| 10000000/10000000
[00:03&lt;00:00, 3088319.84it/s, 9999999] w .postfix: 3.24 C:\Users\copperfield\Desktop&gt; </code></pre>
python|tqdm
2
1,909,347
72,715,762
Issues with concatenation in multi-dimensional arrays in Python
<p>I am trying to concatenate <code>A</code> with <code>C1</code> and <code>C2</code>. For <code>C1=[]</code>, I am not sure why there is an extra <code>[0]</code> in <code>B1</code>. For <code>C2=[1,2]</code>, there is a shape mismatch. The current and the desired outputs are attached. I am interested in the following conditions:</p> <p>(1) If <code>C1=[]</code>, no need to insert <code>A1</code> in <code>B1</code>. (2) If <code>C1=[1]</code>, insert <code>A1</code> for the specific position in <code>B1</code>. (3) If <code>C1=[1,2]</code>, insert <code>A1</code> for all the specific positions in <code>B1</code>.</p> <pre><code>import numpy as np A=np.array([[[1], [2], [3], [4], [5], [6], [7]]]) C1=[] C2=[1,2] D=[7] A1=np.array([0]) A2=np.array([0]) B1=np.insert(A,C1+D,[A1,A2],axis=1) print(&quot;B1 =&quot;,[B1]) B2=np.insert(A,C2+D,[A1,A2],axis=1) print(&quot;B1 =&quot;,[B2]) </code></pre> <p>The current output is</p> <pre><code>B1 = [array([[[1], [2], [3], [4], [5], [6], [7], [0], [0]]])] in &lt;module&gt; B2=np.insert(A,C2+D,[A1,A2],axis=1) File &quot;&lt;__array_function__ internals&gt;&quot;, line 5, in insert File &quot;C:\Users\USER\anaconda3\lib\site-packages\numpy\lib\function_base.py&quot;, line 4678, in insert new[tuple(slobj)] = values ValueError: shape mismatch: value array of shape (2,1) could not be broadcast to indexing result of shape (3,1,1) </code></pre> <p>The desired output is</p> <pre><code>B1 = [array([[[1], [2], [3], [4], [5], [6], [7], [0]]])] B2 = [array([[[1], [0], [0], [2], [3], [4], [5], [6], [7], [0]]])] </code></pre>
<p><code>insert</code> is a Python function. Its arguments are evaluated in full before being passed to it. Look at what you are passing:</p> <pre><code>In [20]: C1+D, C2+D
Out[20]: ([7], [1, 2, 7])

In [21]: np.array([A1,A2])
Out[21]:
array([[0],
       [0]])
</code></pre> <p>You have lost the imagined <code>[],[7]</code> and <code>[[1,2],[7]]</code> structure.</p> <p>Your <code>B1</code> successfully puts the <code>[[0],[0]]</code> at slot 7. <code>B2</code> fails because there are 3 slots, but only 2 values.</p> <p>Here's what your <code>A1</code> inserts are doing (the <code>axis=1</code> isn't important here):</p> <pre><code>In [22]: np.insert(A,C1,A1)
Out[22]: array([1, 2, 3, 4, 5, 6, 7])

In [23]: np.insert(A,C2,A1)
Out[23]: array([1, 0, 2, 0, 3, 4, 5, 6, 7])
</code></pre> <p>Since <code>A1</code> is a (1,) shape, it can <code>broadcast</code> to match the shape of the (0,) <code>C1</code> and (2,) <code>C2</code>.</p> <p>If you want two 0's together you'll need one of:</p> <pre><code>In [25]: np.insert(A,[1],[0,0])
Out[25]: array([1, 0, 0, 2, 3, 4, 5, 6, 7])

In [26]: np.insert(A,[1,1],[0])
Out[26]: array([1, 0, 0, 2, 3, 4, 5, 6, 7])
</code></pre>
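<p>Putting that together, here's a minimal sketch of one way to get your desired outputs. The key assumption (taken from your desired <code>B2</code>) is that each index should be repeated once per zero you want inserted at that slot:</p> <pre><code>import numpy as np

A = np.array([[[1], [2], [3], [4], [5], [6], [7]]])
D = [7]

# C1 = [] : nothing extra, just the zero at slot 7
B1 = np.insert(A, [] + D, 0, axis=1)
# -&gt; [[[1],[2],[3],[4],[5],[6],[7],[0]]]

# Two zeros before original index 1, plus one at slot 7:
# repeat the index once per inserted zero
B2 = np.insert(A, [1, 1] + D, 0, axis=1)
# -&gt; [[[1],[0],[0],[2],[3],[4],[5],[6],[7],[0]]]
</code></pre>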
python|numpy
0
1,909,348
59,058,652
Is there a way to quickly access all annotations and sub-annotations from an OWL (RDF/XML) file?
<p>So I have an ontology I've built in Protege which has annotations and sub-annotations. What I mean by that is that a concept might have a definition and that definition might have a comment.</p> <p>So you might have something like (s,p,o):</p> <pre><code>'http://purl.fakeiri.org/ONTO/1111' --&gt; 'label' --&gt; 'Term' 'Term' --&gt; 'comment' --&gt; 'Comment about term.' </code></pre> <p>I am trying to make the ontology easily explorable using a Flask app (I'm using Python to parse the ontology file), and I can't seem to quickly get all of the annotations and sub-annotations. </p> <p>I started using the <code>owlready2</code> package but it requires you to self-define each individual annotation property (you can't just get a list of all of them, so if you add a property like <code>random_identifier</code> you have to go back into the code and add <code>entity.random_identifier</code> or it won't be picked up). This works okay, it's pretty fast, but subannotations require loading the IRI, then searching for it as:</p> <pre><code>random_prop = IRIS['http://schema.org/fillerName'] sub_annotation = x[entity, random_prop, annotation_label] </code></pre> <p>This is extremely slow, taking 5-10 minutes to load to search through around 140 sub-annotation types, compared to about 3-5 seconds for just the annotations.</p> <p>From there I decided to scrap <code>owlready2</code> and try <code>rdflib</code>. However, it looks like sub-annotations are just attached as BNodes and I can't figure out how to access them through their "parent" annotation or if that's even possible.</p> <p>TL;DR: Does anybody know how to access an entry and gather all of its annotations and sub-annotations quickly in an XML/RDF ontology file?</p> <p>EDIT 1:</p> <p>As suggested, here is a snippet of the ontology:</p> <pre><code> &lt;!-- http://ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.owl#C42610 --&gt; &lt;owl:Class rdf:about="http://ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.owl#C42610"&gt; &lt;rdfs:subClassOf rdf:resource="http://ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.owl#C42698"/&gt; &lt;obo:IAO_0000115 xml:lang="en"&gt;A shortened form of a word or phrase.&lt;/obo:IAO_0000115&gt; &lt;oboInOwl:hasDbXref rdf:datatype="http://www.w3.org/2001/XMLSchema#anyURI"&gt;https://en.wikipedia.org/wiki/Abbreviation&lt;/oboInOwl:hasDbXref&gt; &lt;rdfs:label xml:lang="en"&gt;abbreviation&lt;/rdfs:label&gt; &lt;schema:alternateName xml:lang="en"&gt;abbreviations&lt;/schema:alternateName&gt; &lt;Property:P1036 rdf:datatype="http://www.w3.org/2001/XMLSchema#integer"&gt;411&lt;/Property:P1036&gt; &lt;/owl:Class&gt; &lt;owl:Axiom&gt; &lt;owl:annotatedSource rdf:resource="http://ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.owl#C42610"/&gt; &lt;owl:annotatedProperty rdf:resource="https://www.wikidata.org/wiki/Property:P1036"/&gt; &lt;owl:annotatedTarget rdf:datatype="http://www.w3.org/2001/XMLSchema#integer"&gt;411&lt;/owl:annotatedTarget&gt; &lt;schema:bookEdition rdf:datatype="http://www.w3.org/2001/XMLSchema#integer"&gt;20&lt;/schema:bookEdition&gt; &lt;/owl:Axiom&gt; </code></pre> <p>Thank you all so much!</p>
<p>From your question I gather that the 'sub-annotation' level is only ever one deep. If that is the case, you could do a SPARQL query as follows:</p> <pre><code>SELECT ?annProp ?annValue ?subAnn ?subValue WHERE { ?annProp a owl:AnnotationProperty . &lt;the:concept&gt; ?annProp ?annValue . OPTIONAL { ?annValue ?subAnn ?subValue . } } </code></pre> <p>This will retrieve all annotation properties and their values for the given concept <code>the:concept</code>, and optionally, if that annotation has a "sub-annotation", it also retrieves that sub-annotation.</p>
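<p>For completeness, a minimal sketch of running that query from Python with <code>rdflib</code>; the file name and the concept IRI are placeholders you'd swap for your own:</p> <pre><code>from rdflib import Graph
from rdflib.namespace import OWL

g = Graph()
g.parse("thesaurus.owl", format="xml")  # placeholder: your RDF/XML file

query = """
SELECT ?annProp ?annValue ?subAnn ?subValue
WHERE {
  ?annProp a owl:AnnotationProperty .
  &lt;http://ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.owl#C42610&gt; ?annProp ?annValue .
  OPTIONAL { ?annValue ?subAnn ?subValue . }
}
"""

for row in g.query(query, initNs={"owl": OWL}):
    print(row.annProp, row.annValue, row.subAnn, row.subValue)
</code></pre>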
xml|python-3.6|rdf|ontology|rdflib
1
1,909,349
59,094,734
python guessing random number game not working, my names arent defined?
<pre><code>import sys, random def rand(): number = random.randint(0, 100) def start(): print("Entrez un nombre et essayez de faire correspondre le nombre aléatoire") guess= int(input()) def check(): print (guess, number) if guess == number: print ("Les nombres sont le même!") print ("Recomence?") reawn=str(input()) if reawn == "oui": rand() start() check() elif guess &lt; number: print ("Ton nombre est plus grands que le nombre aléatoire!") print ("Essaye encore?") reawn=str(input()) if reawn == "oui": start() check() elif guess &gt; number: print ("Ton nombre est plus petit que le nombre aléatoire!") print ("Essaye encore?") reawn=str(input()) if reawn == "oui": start() check() rand() start() check() </code></pre> <blockquote> <p>Traceback (most recent call last): File "F:\Dominic\Python\rando.py", line 36, in check() File "F:\Dominic\Python\rando.py", line 10, in check print (guess, number) NameError: name 'guess' is not defined</p> </blockquote>
<p>Your problem has to do with the difference between <strong>local</strong> and <strong>global</strong> variables.</p> <p>Here, in your function <code>check()</code>, you're referring to the local variable <code>guess</code>, which was only defined inside the other function <code>start()</code> and has not been defined in the context of the function <code>check()</code>. The function <code>check()</code> <em>does not know</em> the variable <code>guess</code> unless you specify what it's equal to inside the function.</p> <p>What you could do in this case is:</p> <pre><code>import sys, random

def rand():
    number = random.randint(0, 100)
    return number

def start():
    print("Entrez un nombre et essayez de faire correspondre le nombre aléatoire")
    guess = int(input())
    return guess

def check():
    number = rand()
    guess = start()
    print (guess, number)
    if guess == number:
        print ("Les nombres sont le même!")
        print ("Recomence?")
        reawn=str(input())
        if reawn == "oui":
            rand()
            start()
            check()
    elif guess &lt; number:
        print ("Ton nombre est plus grands que le nombre aléatoire!")
        print ("Essaye encore?")
        reawn=str(input())
        if reawn == "oui":
            start()
            check()
    elif guess &gt; number:
        print ("Ton nombre est plus petit que le nombre aléatoire!")
        print ("Essaye encore?")
        reawn=str(input())
        if reawn == "oui":
            start()
            check()

rand()
start()
check()
</code></pre> <p>Here's <a href="https://docs.python.org/3/faq/programming.html#why-am-i-getting-an-unboundlocalerror-when-the-variable-has-a-value" rel="nofollow noreferrer">more information</a> on global and local variables from the Python documentation.</p>
python|random|python-3.6
0
1,909,350
59,033,557
Kernel error in spyder and jupyter notebook
<p>I tried to use the Anaconda Navigator, but I'm getting this kernel error.</p> <p>I had previously installed python 3.8 without Anaconda and had installed jupyter in it using pip, then I uninstalled it and installed Anaconda again.</p> <p>Since then this error arises whenever I try to compile a file and even in the jupyter notebook.</p> <p>I tried : 1.) Reinstalling Anaconda 2.) Updating the setup tools</p> <p>None of these worked.</p> <pre><code>Traceback (most recent call last): File "C:\Users\DELLL\Anaconda3\lib\site-packages\tornado\web.py", line 1699, in _execute result = await result File "C:\Users\DELLL\Anaconda3\lib\site-packages\tornado\gen.py", line 742, in run yielded = self.gen.throw(*exc_info) # type: ignore File "C:\Users\DELLL\Anaconda3\lib\site-packages\notebook\services\sessions\handlers.py", line 72, in post type=mtype)) File "C:\Users\DELLL\Anaconda3\lib\site-packages\tornado\gen.py", line 735, in run value = future.result() File "C:\Users\DELLL\Anaconda3\lib\site-packages\tornado\gen.py", line 742, in run yielded = self.gen.throw(*exc_info) # type: ignore File "C:\Users\DELLL\Anaconda3\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 88, in create_session kernel_id = yield self.start_kernel_for_session(session_id, path, name, type, kernel_name) File "C:\Users\DELLL\Anaconda3\lib\site-packages\tornado\gen.py", line 735, in run value = future.result() File "C:\Users\DELLL\Anaconda3\lib\site-packages\tornado\gen.py", line 742, in run yielded = self.gen.throw(*exc_info) # type: ignore File "C:\Users\DELLL\Anaconda3\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 101, in start_kernel_for_session self.kernel_manager.start_kernel(path=kernel_path, kernel_name=kernel_name) File "C:\Users\DELLL\Anaconda3\lib\site-packages\tornado\gen.py", line 735, in run value = future.result() File "C:\Users\DELLL\Anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper yielded = next(result) File "C:\Users\DELLL\Anaconda3\lib\site-packages\notebook\services\kernels\kernelmanager.py", line 168, in start_kernel super(MappingKernelManager, self).start_kernel(**kwargs) File "C:\Users\DELLL\Anaconda3\lib\site-packages\jupyter_client\multikernelmanager.py", line 110, in start_kernel km.start_kernel(**kwargs) File "C:\Users\DELLL\Anaconda3\lib\site-packages\jupyter_client\manager.py", line 240, in start_kernel self.write_connection_file() File "C:\Users\DELLL\Anaconda3\lib\site-packages\jupyter_client\connect.py", line 547, in write_connection_file kernel_name=self.kernel_name File "C:\Users\DELLL\Anaconda3\lib\site-packages\jupyter_client\connect.py", line 212, in write_connection_file with secure_write(fname) as f: File "C:\Users\DELLL\Anaconda3\lib\contextlib.py", line 112, in __enter__ return next(self.gen) File "C:\Users\DELLL\Anaconda3\lib\site-packages\jupyter_client\connect.py", line 102, in secure_write with os.fdopen(os.open(fname, open_flag, 0o600), mode) as f: PermissionError: [Errno 13] Permission denied: 'C:\\Users\\DELLL\\AppData\\Roaming\\jupyter\\runtime\\kernel-4058ed7c-98d6-40c9-8b3b-2cbb069740d8.json' </code></pre>
<p>This is a permission issue with that particular folder, <code>C:\Users\DELLL\AppData\Roaming\jupyter\runtime</code> (the path shown at the bottom of the traceback). Give your user write permission on it and it will work.</p>
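<p>If you want to verify from Python where Jupyter is trying to write and whether your user can, a small check using <code>jupyter_core</code> (installed alongside Jupyter); the exact path may differ on your machine:</p> <pre><code>import os
from jupyter_core.paths import jupyter_runtime_dir

runtime = jupyter_runtime_dir()
print("runtime dir:", runtime)
print("exists:", os.path.isdir(runtime))
print("writable:", os.access(runtime, os.W_OK))
</code></pre>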
python-3.x|jupyter-notebook|anaconda|kernel
0
1,909,351
59,758,815
Apply rank in permutation of a list
<p>I have two lists:</p> <pre><code>l1 = [0, 1, 12, 33, 41, 52, 69, 7.2, 8.9, 9.91] l2 = [45, 51] </code></pre> <p>I need to get all possible combinations (without repetition) from <code>l1</code> with size equals the length of <code>l2</code>. Then apply a ranking metric into <code>l2</code> and <code>l1</code> (for each combination). Finally I need to get the closest metric wrt. <code>l1</code> and <code>lx</code> (<code>lx</code> being the permuted list).</p> <p>What I tried so far (it's more like a pseudo-code so far):</p> <pre><code>import numpy as np def apply_metric(predictions, targets): return np.sqrt(((predictions - targets) ** 2).mean()) l1 = [0, 1, 12, 33, 41, 52, 69, 7.2, 8.9, 9.91] l2 = [45, 51] for item in l1: #do the possible combinations temp_result = apply_metric(np.array(l2), np.array(permuted_items)) </code></pre> <p>output:</p> <pre><code>best metric = 0 (identical) best list = [45, 51] </code></pre>
<p>You can use <a href="https://docs.python.org/2/library/itertools.html#itertools.permutations" rel="nofollow noreferrer">itertools.permutations</a> to obtain the permuted lists and then apply the metric to each one:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import itertools as it

def apply_metric(predictions, targets):
    return np.sqrt(((predictions - targets) ** 2).mean())

l1 = [0, 1, 12, 33, 41, 52, 69, 7.2, 8.9, 9.91]
l2 = [45, 51]

temp_dict = {}
for elements in it.permutations(l1, len(l2)):
    temp_result = apply_metric(np.array(l2), np.array(elements))
    temp_dict.update({temp_result : list(elements)})

print(f"Best metric: {min(temp_dict)}")
print(f"Best list: {temp_dict[min(temp_dict)]}")
</code></pre> <p>Which yields:</p> <pre class="lang-py prettyprint-override"><code>Best metric: 2.9154759474226504
Best list: [41, 52]
</code></pre>
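<p>Note that using the metric as a dict key silently overwrites permutations that tie on the same score. If you only need the single best permutation, a shorter sketch of the same idea avoids that:</p> <pre class="lang-py prettyprint-override"><code>best = min(it.permutations(l1, len(l2)),
           key=lambda p: apply_metric(np.array(l2), np.array(p)))

print(f"Best metric: {apply_metric(np.array(l2), np.array(best))}")
print(f"Best list: {list(best)}")
</code></pre>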
python|python-3.x|numpy
0
1,909,352
59,481,174
Amazon AWS Kinesis Video Boto GetMedia/PutMedia
<p>Does anybody know of a complete sample as to how to send video to a kinesis video stream, using boto3 sdk? </p> <p>This question was asked initially for both both GetMedia and PutMedia. Now I have got this sample code for the GetMedia part: </p> <pre><code>client = boto3.client('kinesisvideo') response = client.get_data_endpoint( StreamName='my-test-stream', APIName='GET_MEDIA' ) print(response) endpoint = response.get('DataEndpoint', None) print("endpoint %s" % endpoint) if endpoint is not None: client2 = boto3.client('kinesis-video-media', endpoint_url=endpoint) response = client2.get_media( StreamName='my-test-stream', StartSelector={ 'StartSelectorType': 'EARLIEST', } ) print(response) stream = response['Payload'] # botocore.response.StreamingBody object while True: ret, frame = stream.read() print(" stream type ret %s frame %s" % (type(ret), type(frame))) </code></pre> <p>How do you to the PutMedia? From a previous post "<a href="https://stackoverflow.com/questions/49713393/amazon-kinesis-video-getmedia-putmedia">Amazon Kinesis Video GetMedia/PutMedia</a>", it looks like you have to construct your own url request for a PutMedia. Is that the case? If it is, can someone share a complete sample?</p>
<p>The code below works.</p> <p>The code is largely derived from a <a href="https://stackoverflow.com/questions/51991401/how-to-implement-amazon-kinesis-putmedia-method-using-python">previous post How to Implement Amazon Kinesis PutMedia Method using PYTHON</a> and <a href="https://stackoverflow.com/questions/52033203/amazon-kinesis-video-putmedia-using-python">Amazon Kinesis Video PutMedia Using Python</a>. Some of the changes are from monitoring how the kinesis video stream c-producer 2.0.2 works.</p> <p>The same pair of keys is set twice: once the way boto expects it, and again through the two special environment variables at the beginning of the code.</p> <pre><code>import datetime
import hashlib
import hmac
import os
import sys
import time
import pprint as pp

import requests

your_env_access_key_var = 'AWS_KVS_USER_ACCESS_KEY'
your_env_secret_key_var = 'AWS_KVS_USER_SECRET_KEY'
your_stream_name = 'my-video-stream-test'

def get_endpoint_boto():
    import boto3
    client = boto3.client('kinesisvideo')
    response = client.get_data_endpoint(
        StreamName=your_stream_name,
        APIName='PUT_MEDIA'
    )
    pp.pprint(response)
    endpoint = response.get('DataEndpoint', None)
    print("endpoint %s" % endpoint)
    if endpoint is None:
        raise Exception("endpoint none")
    return endpoint


def sign(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def get_signature_key(key, date_stamp, regionName, serviceName):
    kDate = sign(('AWS4' + key).encode('utf-8'), date_stamp)
    kRegion = sign(kDate, regionName)
    kService = sign(kRegion, serviceName)
    kSigning = sign(kService, 'aws4_request')
    return kSigning


def get_host_from_endpoint(endpoint):
    # u'https://s-123abc78.kinesisvideo.us-east-2.amazonaws.com'
    if not endpoint.startswith('https://'):
        return None
    retv = endpoint[len('https://'):]
    return str(retv)


def get_region_from_endpoint(endpoint):
    # u'https://s-123abc78.kinesisvideo.us-east-2.amazonaws.com'
    if not endpoint.startswith('https://'):
        return None
    retv = endpoint[len('https://'):].split('.')[2]
    return str(retv)


class gen_request_parameters:
    def __init__(self):
        self._data = ''
        if True:
            localfile = '6-step_example.webm.360p.webm' # upload ok
            #localfile = 'big-buck-bunny_trailer.webm' # error fragment duration over limit
            with open(localfile, 'rb') as image:
                request_parameters = image.read()
                self._data = request_parameters
        self._pointer = 0
        self._size = len(self._data)
    def __iter__(self):
        return self
    def __next__(self):  # Python 3 iterator protocol
        if self._pointer &gt;= self._size:
            raise StopIteration  # signals "the end"
        left = self._size - self._pointer
        chunksz = 16000
        if left &lt; 16000:
            chunksz = left
        pointer_start = self._pointer
        self._pointer += chunksz
        print("Data: chunk size %d" % chunksz)
        return self._data[pointer_start:self._pointer]
    next = __next__  # Python 2 compatibility

# ************* REQUEST VALUES *************
endpoint = get_endpoint_boto()

method = 'POST'
service = 'kinesisvideo'
host = get_host_from_endpoint(endpoint)
region = get_region_from_endpoint(endpoint)
##endpoint = 'https://**&lt;the endpoint you get with get_data_endpoint&gt;**/PutMedia'
endpoint += '/putMedia'

# POST requests use a content type header. For DynamoDB,
# the content is JSON.
content_type = 'application/json'
start_tmstp = repr(time.time())

# Read AWS access key from env. variables or configuration file. Best practice is NOT
# to embed credentials in code.
access_key = None # '*************************' secret_key = None # '*************************' while True: # scope k = os.getenv(your_env_access_key_var) if k is not None and type(k) is str and k.startswith('AKIA5'): access_key = k k = os.getenv(your_env_secret_key_var) if k is not None and type(k) is str and len(k) &gt; 4: secret_key = k break # scope if access_key is None or secret_key is None: print('No access key is available.') sys.exit() # Create a date for headers and the credential string t = datetime.datetime.utcnow() amz_date = t.strftime('%Y%m%dT%H%M%SZ') date_stamp = t.strftime('%Y%m%d') # Date w/o time, used in credential scope # ************* TASK 1: CREATE A CANONICAL REQUEST ************* # http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html # Step 1 is to define the verb (GET, POST, etc.)--already done. # Step 2: Create canonical URI--the part of the URI from domain to query # string (use '/' if no path) ##canonical_uri = '/' canonical_uri = '/putMedia' #endpoint[len('https://'):] ## Step 3: Create the canonical query string. In this example, request # parameters are passed in the body of the request and the query string # is blank. canonical_querystring = '' # Step 4: Create the canonical headers. Header names must be trimmed # and lowercase, and sorted in code point order from low to high. # Note that there is a trailing \n. #'host:' + host + '\n' + canonical_headers = '' #canonical_headers += 'Accept: */*\r\n' canonical_headers += 'connection:keep-alive\n' canonical_headers += 'content-type:application/json\n' canonical_headers += 'host:' + host + '\n' canonical_headers += 'transfer-encoding:chunked\n' #canonical_headers += 'x-amz-content-sha256: ' + 'UNSIGNED-PAYLOAD' + '\r\n' canonical_headers += 'user-agent:AWS-SDK-KVS/2.0.2 GCC/7.4.0 Linux/4.15.0-46-generic x86_64\n' canonical_headers += 'x-amz-date:' + amz_date + '\n' canonical_headers += 'x-amzn-fragment-acknowledgment-required:1\n' canonical_headers += 'x-amzn-fragment-timecode-type:ABSOLUTE\n' canonical_headers += 'x-amzn-producer-start-timestamp:' + start_tmstp + '\n' canonical_headers += 'x-amzn-stream-name:' + your_stream_name + '\n' # Step 5: Create the list of signed headers. This lists the headers # in the canonical_headers list, delimited with ";" and in alpha order. # Note: The request can include any headers; canonical_headers and # signed_headers include those that you want to be included in the # hash of the request. "Host" and "x-amz-date" are always required. # For DynamoDB, content-type and x-amz-target are also required. # #in original sample after x-amz-date : + 'x-amz-target;' signed_headers = 'connection;content-type;host;transfer-encoding;user-agent;' signed_headers += 'x-amz-date;x-amzn-fragment-acknowledgment-required;' signed_headers += 'x-amzn-fragment-timecode-type;x-amzn-producer-start-timestamp;x-amzn-stream-name' # Step 6: Create payload hash. In this example, the payload (body of # the request) contains the request parameters. 
# Step 7: Combine elements to create canonical request canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers canonical_request += '\n' canonical_request += hashlib.sha256(''.encode('utf-8')).hexdigest() # ************* TASK 2: CREATE THE STRING TO SIGN************* # Match the algorithm to the hashing algorithm you use, either SHA-1 or # SHA-256 (recommended) algorithm = 'AWS4-HMAC-SHA256' credential_scope = date_stamp + '/' + region + '/' + service + '/' + 'aws4_request' string_to_sign = algorithm + '\n' + amz_date + '\n' + credential_scope + '\n' + hashlib.sha256( canonical_request.encode('utf-8')).hexdigest() # ************* TASK 3: CALCULATE THE SIGNATURE ************* # Create the signing key using the function defined above. signing_key = get_signature_key(secret_key, date_stamp, region, service) # Sign the string_to_sign using the signing_key signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest() # ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST ************* # Put the signature information in a header named Authorization. authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' authorization_header += 'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature # # Python note: The 'host' header is added automatically by the Python 'requests' library. headers = { 'Accept': '*/*', 'Authorization': authorization_header, 'connection': 'keep-alive', 'content-type': content_type, #'host': host, 'transfer-encoding': 'chunked', # 'x-amz-content-sha256': 'UNSIGNED-PAYLOAD', 'user-agent': 'AWS-SDK-KVS/2.0.2 GCC/7.4.0 Linux/4.15.0-46-generic x86_64', 'x-amz-date': amz_date, 'x-amzn-fragment-acknowledgment-required': '1', 'x-amzn-fragment-timecode-type': 'ABSOLUTE', 'x-amzn-producer-start-timestamp': start_tmstp, 'x-amzn-stream-name': your_stream_name, 'Expect': '100-continue' } # ************* SEND THE REQUEST ************* print('\nBEGIN REQUEST++++++++++++++++++++++++++++++++++++') print('Request URL = ' + endpoint) r = requests.post(endpoint, data=gen_request_parameters(), headers=headers) print('\nRESPONSE++++++++++++++++++++++++++++++++++++') print('Response code: %d\n' % r.status_code) print(r.text) </code></pre>
python|amazon-web-services|boto3|amazon-kinesis
6
1,909,353
48,937,399
python: converting List Comprehension to lambda function?
<p>I have a list below, and I'm trying to have a lambda function to retrieve 'cat' value from the list when 'prop' equals to a given string.</p> <pre><code>a=[{'prop':'ABC','cat':'ABC Dir'}, {'prop':'DEF','cat':'DEF Dir'}, ...] </code></pre> <p>I have successfully got a List Comprehension, which gives me expected 'ABC Dir' if I feed in 'ABC', but I failed to convert it to a lambda function if possible. Advise is appreciated.</p> <pre><code>&gt;&gt;&gt; aa=[x['cat'] for x in a if x['prop'] == 'ABC'] &gt;&gt;&gt; aa ['ABC Dir'] </code></pre> <p>expected result:</p> <pre><code>&gt;&gt;&gt;bb('ABC') 'ABC Dir' &gt;&gt;&gt;bb('DEF') 'DEF Dir' </code></pre>
<p>I take it you're not quite familiar with the terminology in Python. <code>lambda</code> is a keyword in Python. If you want to define a function, just define a function; in fact, you shouldn't use <code>lambda</code> to define a named function at all.</p> <p>The following code does what you are asking for (note that the <code>return None</code> must sit outside the loop, otherwise the function gives up on the first non-matching item):</p> <pre><code>def bb(x):
    for i in a:
        if i['prop'] == x:
            return i['cat']
    return None
</code></pre> <hr /> <p>It's in the style guide, PEP8, that you shouldn't define a named function using lambda: <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow noreferrer">https://www.python.org/dev/peps/pep-0008/</a></p> <blockquote> <p>Always use a def statement instead of an assignment statement that binds a lambda expression directly to an identifier.</p> <p>Yes: <code>def f(x): return 2*x</code></p> <p>No: <code>f = lambda x: 2*x</code></p> </blockquote>
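<p>That said, if you really want the one-liner form you asked about, the same lookup can be compressed into a generator expression with <code>next()</code> (shown with <code>lambda</code> only to mirror your question; <code>def</code> is still the preferred style):</p> <pre><code>bb = lambda x: next((i['cat'] for i in a if i['prop'] == x), None)

print(bb('ABC'))  # 'ABC Dir'
print(bb('DEF'))  # 'DEF Dir'
</code></pre>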
python
2
1,909,354
70,992,208
Undersampling a multi-label DataFrame using pandas
<p>I have a DataFrame like this:</p> <pre><code>file_name label ../input/image-classification-screening/train/... 1 ../input/image-classification-screening/train/... 7 ../input/image-classification-screening/train/... 9 ../input/image-classification-screening/train/... 9 ../input/image-classification-screening/train/... 6 </code></pre> <p>And it has 11 classes (0 to 10) and has high class imbalance. Below is the output of <code>train['label'].value_counts()</code>:</p> <pre><code>6 6285 3 4139 9 3933 7 3664 2 2778 5 2433 8 2338 0 2166 4 2052 10 1039 1 922 </code></pre> <p>How do I under-sample this data in pandas so that each class will have below 2500 examples? I want to remove data points randomly from majority classes like 6, 3, 9, 7 and 2.</p>
<p>You could create a mask that identifies which &quot;label&quot;s have more than 2500 items and then use <code>groupby</code>+<code>sample</code> (by setting <code>n=n</code> to sample the required number of items) on the ones with more than 2500 items and select all of the labels with less than 2500 items. This creates two DataFrames, one sampled to 2500, and the other selected in whole. Then concatenate the two groups using <code>pd.concat</code>:</p> <pre><code>n = 2500 msk = df.groupby('label')['label'].transform('size') &gt;= n df = pd.concat((df[msk].groupby('label').sample(n=n), df[~msk]), ignore_index=True) </code></pre> <p>For example, if you had a DataFrame like:</p> <pre><code>df = pd.DataFrame({'ID': range(30), 'label': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'C', 'C', 'D', 'F', 'F', 'G']}) </code></pre> <p>and</p> <pre><code>&gt;&gt;&gt; df['label'].value_counts() A 13 B 11 C 2 F 2 D 1 G 1 Name: label, dtype: int64 </code></pre> <p>Then the above code with <code>n=3</code>, yields:</p> <pre><code> ID label 0 7 A 1 0 A 2 10 A 3 20 B 4 18 B 5 21 B 6 24 C 7 25 C 8 26 D 9 27 F 10 28 F 11 29 G </code></pre> <p>with</p> <pre><code>&gt;&gt;&gt; df['label'].value_counts() A 3 B 3 C 2 F 2 D 1 G 1 Name: label, dtype: int64 </code></pre>
python|pandas|dataframe|pandas-groupby|sample
2
1,909,355
70,994,980
How to create a django `models.Textchoices` programmatically?
<p>How to create a django <code>models.Textchoices</code> programmatically?</p> <p>In the <a href="https://docs.djangoproject.com/en/4.0/ref/models/fields/#enumeration-types" rel="nofollow noreferrer">django doc</a>, it shows you can define a <code>model.TextChoices</code> with:</p> <pre class="lang-py prettyprint-override"><code>class YearInSchool(models.TextChoices):
    FRESHMAN = 'FR', _('Freshman')
    SOPHOMORE = 'SO', _('Sophomore')
    JUNIOR = 'JR', _('Junior')
    SENIOR = 'SR', _('Senior')
    GRADUATE = 'GR', _('Graduate')
</code></pre> <p>How can I programmatically create the same choices class from a list of keys and values?</p> <pre class="lang-py prettyprint-override"><code>mapping = {
    'FRESHMAN': 'FR',
    'SOPHOMORE': 'SO',
    'JUNIOR': 'JR',
    'SENIOR': 'SR',
    'GRADUATE': 'GR'
}

# ???
YearInSchool = build_model_text_choices(mapping)
</code></pre>
<p>From the <a href="https://docs.djangoproject.com/en/4.0/ref/models/fields/#enumeration-types" rel="nofollow noreferrer">docs</a>, you can generate dynamic choices like this:</p> <pre><code>YearInSchool = models.TextChoices('YearInSchool', mapping)
</code></pre> <p>and then you can get the choices like this:</p> <pre><code>YearInSchool.choices
</code></pre>
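<p>A quick sketch of what the functional API should produce from your mapping; the labels are derived automatically from the member names:</p> <pre><code>mapping = {
    'FRESHMAN': 'FR',
    'SOPHOMORE': 'SO',
    'JUNIOR': 'JR',
    'SENIOR': 'SR',
    'GRADUATE': 'GR',
}

YearInSchool = models.TextChoices('YearInSchool', mapping)

print(YearInSchool.choices)
# [('FR', 'Freshman'), ('SO', 'Sophomore'), ('JR', 'Junior'),
#  ('SR', 'Senior'), ('GR', 'Graduate')]
print(YearInSchool.FRESHMAN.label)  # 'Freshman'
</code></pre>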
python|django|django-models
1
1,909,356
60,269,012
how to randomly select from a list in a column of lists in a pandas dataframe
<p>I have the following dataframe:</p> <pre><code>MyAge Ages Names 7 [3,10,15] ['Tom','Jack','Sara'] 6 [12,6,5,13] ['Nora','Betsy','John','Jill'] 15 [24,3,65,15]['Tala','Jane','Bill','Mark'] </code></pre> <p>I want to generate a new column that produces a randomly selected name for each row from the list of <code>Names</code>, so that the age of the person with that randomly selected name is less than or equal to <code>MyAge</code>. The column <code>Ages</code> reflects the ages of each person in the <code>Names</code> column.</p> <p>So one possible outcome is the following:</p> <pre><code>MyAge Ages Names RandomName RandomPersonAge 7 [3,10,15] ['Tom','Jack','Sara'] 'Tom' 3 6 [12,6,5,13] ['Nora','Betsy','John','Jill'] 'Betsy' 6 15 [24,3,65,15]['Tala','Jane','Bill','Mark'] 'Jane' 3 </code></pre>
<p>Given that the number of ages and names can be different for each row, first create a random index based on the number of ages/names per row using a list comprehension. Then use more list comprehensions to index the names and ages. Finally, assign the results back to the original dataframe.</p> <pre><code># Sample data.
df = pd.DataFrame({
    "MyAge": [7, 6, 15],
    "Ages": [[3, 10, 15], [12, 6, 5, 13], [24, 3, 65, 15]],
    "Names": [['Tom', 'Jack', 'Sara'],
              ['Nora', 'Betsy', 'John', 'Jill'],
              ['Tala', 'Jane', 'Bill', 'Mark']]
})

# Solution.
np.random.seed(0)
random_index = [np.random.randint(len(ages)) for ages in df['Ages']]
names = [names[idx] for idx, names in zip(random_index, df['Names'])]
ages = [ages[idx] for idx, ages in zip(random_index, df['Ages'])]

&gt;&gt;&gt; df.assign(RandomName=names, RandomPersonAge=ages)
   MyAge             Ages                      Names RandomName  RandomPersonAge
0      7      [3, 10, 15]          [Tom, Jack, Sara]        Tom                3
1      6   [12, 6, 5, 13]  [Nora, Betsy, John, Jill]       Jill               13
2     15  [24, 3, 65, 15]   [Tala, Jane, Bill, Mark]       Jane                3
</code></pre> <p>To choose the random ages such that they are less than or equal to the value in <code>MyAge</code>, we should first flatten the data. We'll use a conditional, nested list comprehension to filter the data such that each row contains the index together with a name and its equivalent age, keeping only pairs where the age is less than or equal to <code>MyAge</code>. We'll then create a dataframe from this filtered data and set its index from the first column, which carries the original dataframe's index values. The rows in the dataframe are randomly shuffled via <code>sample(frac=1)</code>. We then group on the index and take the first random row. We then join the result back to the original dataframe (the join is done based on the index by default).</p> <pre><code>filtered_data = (
    [(idx, name, age)
     for idx, (my_age, ages, names) in df.iterrows()
     for age, name in zip(ages, names)
     if age &lt;= my_age]
)

random_names_and_ages = (
    pd.DataFrame(filtered_data, columns=[df.index.name, 'RandomName', 'RandomPersonAge'])
    .set_index(df.index.name)
    .sample(frac=1)  # Randomly shuffle the rows in the dataframe.
    .groupby(level=0)[['RandomName', 'RandomPersonAge']]  # Groupby 'ID' and take the first random row.
    .first()
)

&gt;&gt;&gt; df.join(random_names_and_ages)
   MyAge             Ages                      Names RandomName  \
0      7      [3, 10, 15]          [Tom, Jack, Sara]        Tom
1      6   [12, 6, 5, 13]  [Nora, Betsy, John, Jill]       John
2     15  [24, 3, 65, 15]   [Tala, Jane, Bill, Mark]       Jane

   RandomPersonAge
0                3
1                5
2                3
</code></pre>
python|pandas
1
1,909,357
3,207,883
Using ExecuteBatch from Python on Google Calendars API
<p>I'm trying to figure out how to add a series of events to a non-default calendar (and remove some) as a batch, but there's no hint of how to do it in Google's frankly awful documentation.</p> <p>Has anyone cracked this nut or does anyone know where there is actually useful documentation on using the Google Calendar API?</p>
<p>Figured it out in the end. The key is using the right batch URL in ExecuteBatch:</p> <pre><code>uri = self.calendar.GetAlternateLink().href batch_uri = uri + u'/batch' calendar_service.ExecuteBatch(request_feed, batch_uri) </code></pre>
python|batch-file|google-calendar-api
0
1,909,358
67,646,696
How Do I Fix My Ball Animation In Pygame?
<p>So my ball animation for my multiplayer pong game is broken. Instead of moving and bouncing normally, the ball draws itself again after moving.</p> <p><a href="https://i.stack.imgur.com/pQlaw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pQlaw.png" alt="" /></a></p> <p>How do I stop the ball from leaving these clones behind every time it moves 5 pixels? This is the animation code:</p> <pre><code>import pygame
pygame.init()

#Creating the window
screen_width, screen_height = 1000, 600
screen = pygame.display.set_mode((screen_width, screen_height))
pygame.display.set_caption(&quot;Pong&quot;)

#FPS
FPS = 60
clock = pygame.time.Clock()

#Game Variables
light_grey = (170,170,170)

#Paddles
paddle_1 = pygame.Rect(screen_width - 30, screen_height/2 - 70, 20,100)
paddle_2 = pygame.Rect(10, screen_height/2 - 70, 20, 100)

#Ball
ball = pygame.Rect(screen_width/2, screen_height/2, 30, 30)
ball_xVel = 5
ball_yVel = 5

def ball_animation():
    global ball_xVel, ball_yVel
    ball.x += ball_xVel
    ball.y += ball_yVel

    if ball.x &gt; screen_width or ball.x &lt; 0:
        ball_xVel *= -1

    if ball.y &gt; screen_height or ball.y &lt; 0:
        ball_yVel *= -1

def draw():
    pygame.draw.ellipse(screen, light_grey, ball)
    pygame.draw.rect(screen, light_grey, paddle_1)
    pygame.draw.rect(screen, light_grey, paddle_2)

def main():
    #main game loop
    run = True
    while run:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                run = False

        draw()
        ball_animation()
        pygame.display.update()
        clock.tick(FPS)

    pygame.quit()

if __name__ == &quot;__main__&quot;:
    main()
</code></pre>
<p>You have to clear the display in every frame with <a href="https://www.pygame.org/docs/ref/surface.html#pygame.Surface.fill" rel="nofollow noreferrer"><code>pygame.Surface.fill</code></a>:</p> <pre class="lang-py prettyprint-override"><code>screen.fill(0) </code></pre> <p>The typical PyGame application loop has to:</p> <ul> <li>handle the events by either <a href="https://www.pygame.org/docs/ref/event.html#pygame.event.pump" rel="nofollow noreferrer"><code>pygame.event.pump()</code></a> or <a href="https://www.pygame.org/docs/ref/event.html#pygame.event.get" rel="nofollow noreferrer"><code>pygame.event.get()</code></a>.</li> <li>update the game states and positions of objects dependent on the input events and time (respectively frames)</li> <li>clear the entire display or draw the background</li> <li>draw the entire scene (<code>blit</code> all the objects)</li> <li>update the display by either <a href="https://www.pygame.org/docs/ref/display.html#pygame.display.update" rel="nofollow noreferrer"><code>pygame.display.update()</code></a> or <a href="https://www.pygame.org/docs/ref/display.html#pygame.display.flip" rel="nofollow noreferrer"><code>pygame.display.flip()</code></a></li> <li>limit frames per second to limit CPU usage with <a href="https://www.pygame.org/docs/ref/time.html#pygame.time.Clock" rel="nofollow noreferrer"><code>pygame.time.Clock.tick</code></a></li> </ul> <pre class="lang-py prettyprint-override"><code>def main(): # main application loop run = True while run: # event loop for event in pygame.event.get(): if event.type == pygame.QUIT: run = False # update and move objects ball_animation() # clear the display screen.fill(0) # draw the scene draw() # update the display pygame.display.update() # limit frames per second clock.tick(FPS) pygame.quit() exit() </code></pre>
python|pygame
0
1,909,359
66,800,827
Identifying the correct line of a text file using re.compile
<p>I'm new to processing text files using Python and I got stuck on finding the correct line in a text file.</p> <p>Looking at the image, I need a way for the block to return the line which contains the words:</p> <p>'<em><strong>New packet was received</strong></em>' + <em>something in between</em> + <em>the string stored in the variable <strong>stcrouter_line</strong></em>. (See picture.)</p> <p>With just the variable it returns 3 lines, which was correct previously. Now I just need to get the second line (the one matching the pattern above).</p> <p>I believe I need the correct expression on the first line of the block:</p> <p><code>line_regex = re.compile(stcrouter_line)</code>.</p> <p>I'm not sure how to formulate this. Can anyone help me out, please?</p> <p><a href="https://i.stack.imgur.com/22CSw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/22CSw.jpg" alt="enter image description here" /></a></p>
<p>Try to modify your code to:</p> <pre><code>line_regex = re.compile('New packet was received' + r'.*?' + stcrouter_line)
</code></pre> <p>This is to match:</p> <ul> <li><code>New packet was received</code> matches the text literally</li> <li><code>r'.*?'</code> is a raw string containing the regex <code>.*?</code>, which will match any text in between the literal string above and your existing pattern. The <code>?</code> after <code>*</code> makes it the shortest match, so the match will not overshoot into your existing pattern.</li> <li><code>stcrouter_line</code> is your existing pattern</li> </ul>
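<p>Putting it together, a minimal sketch; <code>stcrouter_line</code> below is just a placeholder for whatever pattern you already have in that variable:</p> <pre><code>import re

stcrouter_line = r'stcrouter_\d+'  # placeholder for your existing pattern
line_regex = re.compile('New packet was received' + r'.*?' + stcrouter_line)

with open('logfile.txt') as f:
    matching_lines = [line for line in f if line_regex.search(line)]

print(matching_lines)
</code></pre>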
python|regex|text|jupyter-notebook|data-science
1
1,909,360
72,179,696
Web-scraping using python
<p>I am trying to extract data from this <a href="https://portal.themlc.com/search" rel="nofollow noreferrer">website</a>. It is almost impossible to scrape because the URL does not change after a search.</p> <p>I want to search based on <em>PUBLISHER IPI</em> <em>'00144443097'</em> and extract all the data inside <code>class=&quot;items-container&quot;</code>.</p> <p>My code:</p> <pre><code>quote_page = 'https://portal.themlc.com/search'
page = urllib.request.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
name_box = soup.find('section', attrs={'class': 'items-container'})
name = name_box.text
print(name)
</code></pre> <p>Since the URL doesn't change after a search, this gives me no value.</p> <p>After extracting the values I want to sort them in pandas.</p>
<p>To find the publisher IPI data, you need to open some of the works under the author and look for the <code>work</code> endpoint in the browser's developer tools. Hopefully this image loads correctly:</p> <p><a href="https://i.stack.imgur.com/EJiFm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EJiFm.png" alt="enter image description here" /></a></p>
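<p>Once you have spotted the endpoint in the network tab, the usual pattern is to call it directly with <code>requests</code> instead of scraping the rendered page. The URL and parameter name below are placeholders; copy the real ones from the request your browser makes:</p> <pre><code>import requests
import pandas as pd

url = 'https://api.example.com/works'       # placeholder endpoint
params = {'publisherIpi': '00144443097'}    # placeholder parameter name

data = requests.get(url, params=params).json()
df = pd.DataFrame(data)   # then sort/filter in pandas
print(df.head())
</code></pre>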
pandas|web-scraping|beautifulsoup|request|urllib
0
1,909,361
65,660,031
Transform final output in scrapy?
<p>I have a scrapy process which successfully parses items and sub-items, but I can't see whether there's a final hook which would allow me to transform the final data result after everything has been parsed, but before it is formatted as output.</p> <p>My spider is doing something like this:</p> <pre><code>class MySpider(scrapy.Spider): def parse(self, response, **kwargs): for part in [1,2,3]: url = f'{response.request.url}?part={part}' yield scrapy.Request(url=url, callback=self.parse_part, meta={'part': part}) def parse_part(self, response, **kwargs) # ... for subpart in part: yield { 'title': self.get_title(subpart), 'tag': self.get_tag(subpart) } } </code></pre> <p>This works well, but I haven't been able to figure out where I can take the complete resulting structure and transform it before outputting it to json (or whatever). I thought maybe I could do this in the <code>process_spider_output</code> call of Middleware, but this only seems to give me the single items, not the final structure.</p>
<p>You can use this method to do something after the spider has closed:</p> <pre><code>def spider_closed(self):
</code></pre> <p>However, you won't be able to modify items in that method. To modify items you need to write a custom pipeline. In the pipeline you write a method which gets called every time your spider yields an item. So in that method you could save all items to a list, and then transform the whole list in the pipeline's <code>close_spider</code> method.</p> <p><a href="https://docs.scrapy.org/en/latest/topics/item-pipeline.html" rel="nofollow noreferrer">Read here on how to write your own pipeline</a></p> <p><strong>Example</strong>: Let's say you want to have all your items as JSON, to maybe send a request to an API. You have to activate the pipeline in <code>settings.py</code> for it to be used.</p> <pre><code>import json

class MyPipeline:
    def __init__(self, *args, **kwargs):
        self.items = []

    def process_item(self, item, spider):
        self.items.append(item)
        return item

    def close_spider(self, spider):
        # In this method you can iterate self.items and transform them to your preference.
        json_data = json.dumps(self.items)
        print(json_data)
</code></pre>
python|scrapy
1
1,909,362
65,694,392
Python: My MacBook suddenly fails to perform an cv2.imshow
<p>My code worked well until this morning, but suddenly <code>cv2.imshow</code> doesn't work (no error!!).</p> <p>I didn't change the code.</p> <p>I just updated my MacBook to Big Sur 11.1 and deleted Parallels and Office.</p> <p>This is my code:</p> <pre><code>img = cv2.imread(&quot;test.jpg&quot;)
cv2.imshow(&quot;test&quot;, img)
cv2.waitKey()
cv2.destroyAllWindows()
</code></pre> <p>I am using opencv 4.4.0.</p> <p>Please help me.</p>
<p>This is a common problem when upgrading to Big Sur. The solution is to uninstall OpenCV and reinstall it so that the binaries are built for Big Sur. Simply doing <code>pip uninstall opencv-python</code> or <code>pip uninstall opencv-contrib-python</code> depending on what flavour you're using for OpenCV followed by a fresh install by <code>pip install opencv-python</code> or <code>pip install opencv-contrib-python</code> should work.</p>
python|opencv
2
1,909,363
65,506,543
can i raise exception from inside a function in the 'try:' block when client disconnects from the server?
<p>I'm trying to build a simple server-client chat room with Python sockets.</p> <p>I have the following code:</p> <pre><code>def handle_connection(client):
    while(1):
        try:
            message = receive_message(client)
            broadcast(message[&quot;data&quot;])
        except: # for now i don't mind which exception
            print(&quot;client disconnected&quot;)

def receive_message(client_socket):
    try:
        message_header = client_socket.recv(HEADER)
        if len(message_header) == 0:
            return False
        message_length = int(message_header.decode(&quot;utf-8&quot;))
        message = client_socket.recv(message_length).decode(&quot;utf-8&quot;)
        return {&quot;header&quot;: message_header, &quot;data&quot;: message}
    except: # most likely will trigger when a client disconnects
        return False
</code></pre> <p>where <code>receive_message()</code> calls <code>client.recv(HEADER)</code> inside of it, and returns either <code>False</code> when there is no message, or <code>{&quot;header&quot;: msg_header, &quot;data&quot;: msg}</code> when everything is ok.</p> <p>My question is: if <code>client.recv()</code> fails inside of <code>receive_message()</code> because the client CLI closed, will it raise the exception and print <code>&quot;client disconnected&quot;</code>, or not?</p> <p>I did come up with the following solution, which I think works: I defined a function called <code>handle_disconnection()</code> that handles all the content inside of the <code>except</code> in the code above.</p> <pre><code>def handle_connection(client_socket):
    while 1:
        try:
            message = receive_message(client_socket)
            if not message:
                handle_disconnection(client_socket)
                break
            broadcast(message[&quot;data&quot;])
        except: # client disconnected
            handle_disconnection(client_socket)
            break
</code></pre> <p>Is this a valid and/or right programming approach to the problem? If this approach is wrong, how can I handle it correctly?</p>
<p>If <code>client.recv()</code> raises an exception, you will handle it inside of <code>receive_message()</code>, so <code>handle_connection()</code> will not receive the exception.</p> <p>I suggest you identify the situations where you want to control the flow with exceptions and where with if-else. I think <code>receive_message()</code> should return the value of a message, or raise <code>ConnectionError</code> when there are connection issues. In the case when there are no messages from the socket you can return <code>None</code> or raise <code>NoMessagesAvailableError</code>.</p> <p>There is also a rule that says you should catch specific exceptions, not all of them. As written, your code will print <code>client disconnected</code> even when you are out of memory.</p>
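<p>A minimal sketch of that separation, so <code>handle_connection()</code> can tell connection problems apart from ordinary control flow (the <code>broadcast</code> and <code>handle_disconnection</code> helpers are assumed to exist in your code):</p> <pre><code>def receive_message(client_socket):
    try:
        message_header = client_socket.recv(HEADER)
    except OSError as exc:          # socket errors derive from OSError
        raise ConnectionError('client disconnected') from exc
    if len(message_header) == 0:    # peer closed the connection cleanly
        raise ConnectionError('client disconnected')
    message_length = int(message_header.decode('utf-8'))
    message = client_socket.recv(message_length).decode('utf-8')
    return {'header': message_header, 'data': message}


def handle_connection(client_socket):
    while True:
        try:
            message = receive_message(client_socket)
        except ConnectionError:
            handle_disconnection(client_socket)
            break
        broadcast(message['data'])
</code></pre>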
python|python-3.x|sockets|return-value|try-except
1
1,909,364
50,868,261
Calling a function for a rematch?
<pre><code>def restart():
    import random
    import time
    opp = 0
    count = 0
    right = 0
    time.sleep (1.5)
    print ("E for easy, M for medium, H for hard!")
    time.sleep (1.5)
    choice = input("What do you choose?: ")
    print ("")
    if choice == 'E':
        print("You have chosen easy difficulty!")
    elif choice == 'M':
        print("You have chosen medium difficulty!")
    elif choice == 'H':
        print("You have chosen hard difficulty!")
    else:
        print("Invalid input")
        print ("")
        restart()
    time.sleep (2)
    print ("")
    print ("Are you ready?")
    print ("")
    time.sleep (2)
    print ("Lets go!")
    print ("")
    time.sleep (2)

def rematch():
    if choice == "E":
        while (count &lt;= 9):
            num1 = random.randint(1,5)
            num2 = random.randint(1,5)
            opp = random.randint(1,2)
            if opp == 1:
                opp = ("+")
            elif opp == 2:
                opp = ("-")
            print("what is " + str(num1) + str(opp) + str(num2)+ "")
            answer = (input("It is... "))
            if opp == "+":
                qanswer = str(num1+num2)
            elif opp == "-":
                qanswer = str(num1-num2)
            if answer == qanswer:
                right = right + 1
                print ("You got it right!")
                time.sleep (1)
                print ("")
            else:
                print ("You got it wrong...")
                time.sleep (0.5)
                if opp == "+":
                    print ("The correct answer was " + str(num1+num2) + "")
                    print ("")
                elif opp == "-":
                    print ("The correct answer was " + str(num1-num2) + "")
                    print ("")
                time.sleep (1)
            count = count + 1
        print ("you got " + str(right) + " out of 10!")
        yeet = (input("Would you like to rematch, restart, or quit?: "))
        if yeet == ("restart"):
            print ("")
            restart()
        elif yeet == ("rematch"):
            print ("")
            rematch()
        elif yeet == ("quit"):
            print ("See you next time!")
            print ("")
            time.sleep (2)
            exit()
        else:
            print ("Invalid input,")
            time.sleep (1)
            print ("Please input either rematch, restart or quit")
</code></pre> <p>At the end of this code, I call the <code>restart()</code> function to start it. The <code>restart</code> function definition also works for the "restart" selection at the end of the questionnaire. But however many times I try, I can't get the definition of "rematch" to run by itself when the code starts; the program stops at the "Are you ready?" print. On top of this, I am trying to make it so that when someone types "rematch" the code restarts just from that definition. How could that be done?</p>
<p>Why use so many calls to <code>sleep</code>? this is not necessary.</p> <p>I've cleaned up things a bit for you and made a minimal example that you can use to build on.<br> I've not implemented the difficulty levels; once you understand the basics of the program, you can try to implement it; my approach would be to link the difficulty level with the range of numbers randomly generated.</p> <p>Have fun.</p> <pre><code>import random import time def start(): global opp, count, right, choice opp = 0 count = 0 right = 0 while True: choice = input("What difficulty do you choose? \n") try: print(f"You have chosen {difficulty[choice]} difficulty!") break except KeyError: print("please enter 'E', 'M', or 'H': ") print("Are you ready?") time.sleep (2) print("Lets go!") def play(): global opp, count, right, choice # if choice == "E": # implement that later while count &lt; 10: num1 = random.randint(1, 5) num2 = random.randint(1, 5) opp = random.choice(('+', '-')) print(f"what is {num1} {opp} {num2}") answer = input("It is... ") qanswer = str(num1 + num2) if opp == "+" else str(num1 - num2) if answer == qanswer: right = right + 1 print("You got it right!") else: print("You got it wrong...") print(f"The correct answer was: {qanswer}") count = count + 1 print(f"you got {right} out of 10!") yeet = input("Would you like to rematch (Y, N)?: ") if yeet == "Y": start() else: return if __name__ == '__main__': difficulty = {'E': 'easy', 'M': 'medium', 'H': 'hard'} opp = 0 count = 0 right = 0 choice = None start() play() </code></pre>
python|python-3.x
0
1,909,365
51,085,919
Pandas 0.23 groupby and pct change not returning expected value
<p>For each <code>Name</code> in the following dataframe I'm trying to find the percentage change from one <code>Time</code> to the next of the <code>Amount</code> column:</p> <p><a href="https://i.stack.imgur.com/J6imC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J6imC.png" alt="enter image description here"></a></p> <p>Code to create the dataframe:</p> <pre><code>import pandas as pd
df = pd.DataFrame({'Name': ['Ali', 'Ali', 'Ali', 'Cala', 'Cala', 'Cala', 'Elena', 'Elena', 'Elena'],
                   'Time': [1, 2, 3, 1, 2, 3, 1, 2, 3],
                   'Amount': [24, 52, 34, 95, 98, 54, 32, 20, 16]})
df.sort_values(['Name', 'Time'], inplace = True)
</code></pre> <p>The first approach I tried (based on <a href="https://stackoverflow.com/questions/40273251/pandas-groupby-with-pct-change">this question and answer</a>) used <code>groupby</code> and <code>pct_change</code>:</p> <pre><code>df['pct_change'] = df.groupby(['Name'])['Amount'].pct_change()
</code></pre> <p>With the result:</p> <p><a href="https://i.stack.imgur.com/Zwtoz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zwtoz.png" alt="enter image description here"></a></p> <p>This doesn't seem to be grouping by the name because it is the same result as if I had used no <code>groupby</code> and called <code>df['Amount'].pct_change()</code>. According to the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.pct_change.html" rel="nofollow noreferrer">Pandas Documentation</a> for <code>pandas.core.groupby.DataFrameGroupBy.pct_change</code>, the above approach should work to calculate the percentage change of each value to the previous value within a group.</p> <p>For a second approach I used <code>groupby</code> with <code>apply</code> and <code>pct_change</code>:</p> <pre><code>df['pct_change_with_apply'] = df.groupby('Name')['Amount'].apply(lambda x: x.pct_change())
</code></pre> <p>With the result:</p> <p><a href="https://i.stack.imgur.com/AK2OH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AK2OH.png" alt="enter image description here"></a></p> <p>This time all the percentage changes are correct.</p> <p>Why does the <code>groupby</code> and <code>pct_change</code> approach not return the correct values, but using <code>groupby</code> with <code>apply</code> does?</p> <p><strong>Edit January 28, 2019</strong>: This behavior has been corrected in the latest version of Pandas, 0.24.0. To install run <code>pip install -U pandas</code>.</p>
<p>As already noted by @piRSquared in the comments; this is due to a <a href="https://github.com/pandas-dev/pandas/issues/21621" rel="nofollow noreferrer">bug filed on Github under issue #21621</a>. It already looks to be solved in milestone <code>0.24.0</code> (due 2018-12-31). My version (<code>0.23.4</code>) still displayed this bugged behaviour.</p>
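<p>If you want to confirm which behaviour your environment has, a quick sanity check (the expected output shown is for a fixed version, 0.24.0 or later):</p> <pre><code>import pandas as pd
print(pd.__version__)

df = pd.DataFrame({'Name': ['Ali', 'Ali', 'Cala', 'Cala'],
                   'Amount': [24, 52, 95, 98]})
print(df.groupby('Name')['Amount'].pct_change())
# On a fixed version the first row of each group is NaN:
# 0         NaN
# 1    1.166667
# 2         NaN
# 3    0.031579
</code></pre>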
python|pandas|dataframe
3
1,909,366
35,050,938
How do I read values from a text file and store them to be used in an equation in python
<p>I have a text file that looks like the text below:</p> <pre><code>a,1234
b,34322
c,9439
d,132431
</code></pre> <p>I want to write code that reads just the numbers after the comma. For example, in the first line, only 1234 needs to be read by the code.</p> <p>Then, that number (1234) needs to be used in the equation <em>y = 10 + n</em>, where <em>n</em> is that number 1234. I want to read every number from every line, compute the result for each, and feed those results into a chart, which also needs to be generated within the same program and saved as an image.</p> <p>I have no clue how to approach this problem. All I did was type the following code, which reads the lines from the text file and stores them in a list. I don't like that because it just puts the raw lines into one list. I seriously need help on this.</p> <pre><code>a1 = []
with open('project.txt') as f:
    for line in f:
        a1.append(line)
    print (a1)
</code></pre>
<p>Try this: <code>partition(',')</code> splits each line at the comma, and the part after it has to be converted to a number before it can be used in the equation:</p> <pre><code>a1 = []

with open('project.txt') as f:
    for line in f:
        before, sep, after = line.partition(',')
        n = int(after)   # the text after the comma, converted to a number
        y = 10 + n
        a1.append(y)

print(a1)
</code></pre>
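<p>Since you also want to feed the results into a chart saved as an image, here is a rough sketch using matplotlib (assuming it is installed; the bar chart and file name are just examples):</p> <pre><code>import matplotlib.pyplot as plt

labels, values = [], []
with open('project.txt') as f:
    for line in f:
        name, _, num = line.partition(',')
        labels.append(name)
        values.append(10 + int(num))   # y = 10 + n

plt.bar(labels, values)
plt.ylabel('y = 10 + n')
plt.savefig('chart.png')
</code></pre>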
python|text-files
0
1,909,367
56,794,808
Import CSV with unknown columns to PostgreSQL using Python
<p>I'm trying to import a CSV file which contains 74 columns into a PostgreSQL table. I tried to do it via PostgreSQL directly and couldn't (see the post linked below); from that post I came to know that I need a client-side programming language to accomplish this, so I thought of doing it through Python, as our project uses Python for other backend operations.</p> <p>I'm new to Python, and I have searched a lot, but in every example I found the table column names were predefined. In my case, since the CSV contains 74 columns, it will not be practical to create a table by hardcoding every column.</p> <p><strong>So can anyone suggest or recommend a generalised solution for this? It would be of great help.</strong></p> <p><a href="https://stackoverflow.com/questions/56787570/cannot-copy-a-csv-file-from-local-machine-to-remote-server">Cannot COPY a CSV file from local machine to remote server</a></p>
<p>Depending on whether this is for production or just casual use, you need to:</p> <ol> <li>Figure out how many columns there are in the CSV.</li> <li>Determine the data types. As CSV doesn't have data types, you will need to examine the columns individually to determine which fit your criteria as text. (Alternatively, if the data are available in Excel format, you could use the openpyxl library to read them, and it will provide you with some data type information.)</li> <li>If you want to actually have a table with a variable number of columns, then you would need to execute a <code>CREATE TABLE</code> query to do so. Depending on what you will end up doing with the data in your application, it might be better to use a general structure where the column number is in a field. Let me explain that.</li> </ol> <p>Say your CSV is like this (note there are six columns of data):</p> <pre><code>'Hdg 1', 'Hdg 2', 'Hdg 3', 'Hdg 4', 'Hdg 5', 'Hdg 6'
'Some text', 23, 47, 'More text', 'Even more text', 21
'A string', 66, 22, 'Another string', 'Last string', 42
</code></pre> <p>For step 1, counting the columns should be straightforward, and the csv reader package suggested would help.</p> <p>To satisfy step 2, you would need to decide which are strings (maybe by the quote character, maybe because a column is all digits, maybe it can be determined from the heading, etc.).</p> <p>For step 3, let's assume that we use a single table to store our data (rather than creating a custom table for each new import, which I think should be avoided unless absolutely necessary for some reason).</p> <p>If we take the first data row and extract the text [<code>Some text</code>, <code>More text</code>, <code>Even more text</code>] from columns 1, 4 and 5 respectively, we can use any one of the column number, the heading, or the position in the list of text strings (0, 1 and 2) to identify the original column. If using a single table then you also might need to identify the origin of the data; say it came from 'employees.csv', then you would use 'employees' as the identifier. So each row of the CSV file would result in the <code>INSERT</code> of 3 rows in the new table. I will not show the primary key, but there should also be one.</p> <pre><code>'employees', 'Hdg 1', 'Some text'
'employees', 'Hdg 4', 'More text'
'employees', 'Hdg 5', 'Even more text'
</code></pre> <p>or</p> <pre><code>'employees', 0, 'Some text'
'employees', 1, 'More text'
'employees', 2, 'Even more text'
</code></pre> <p>You could also put the first column in a related table which lists all the data sources. These are all design decisions you will need to make depending on how you want to access the data, the volume of data, etc.</p> <p>If you need to present the data in tabular format, then you could write a view that uses the RDBMS' pivot query to retrieve the data that way.</p>
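<p>As a rough sketch of steps 1 to 3 in Python: <code>csv</code> for reading, a naive "every value parses as a float" check for typing, and <code>psycopg2</code> writing into the single generic table described above. The table name and connection string are placeholders:</p> <pre><code>import csv
import psycopg2

def load_csv(path, source_name, conn):
    with open(path, newline='') as f:
        reader = csv.reader(f)
        headers = next(reader)       # step 1: column names / count
        rows = list(reader)

    def is_text(col):                # step 2: naive type sniffing
        for row in rows:
            try:
                float(row[col])
            except ValueError:
                return True          # at least one non-numeric value
        return False

    text_cols = [i for i in range(len(headers)) if is_text(i)]

    with conn.cursor() as cur:       # step 3: one generic (source, heading, value) table
        cur.execute("""CREATE TABLE IF NOT EXISTS imported_text (
                           source text, heading text, value text)""")
        for row in rows:
            for i in text_cols:
                cur.execute("INSERT INTO imported_text VALUES (%s, %s, %s)",
                            (source_name, headers[i], row[i]))
    conn.commit()

conn = psycopg2.connect("dbname=mydb user=me")   # placeholder DSN
load_csv('employees.csv', 'employees', conn)
</code></pre>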
python|python-3.x|postgresql
0
1,909,368
45,083,908
When I use Django Celery apply_async with eta, it does the job immediately
<p>I looked at the Celery documentation and tried something from it, but it does not work like the example. Maybe I'm wrong at some point; please give me some pointers if I'm wrong about the following code.</p> <p>In views.py I have something like this:</p> <pre><code>class Something(CreateView): model = something def form_valid(self, form): obj = form.save(commit=False) number = 5 test_limit = datetime.now() + timedelta(minutes=5) testing_something.apply_async((obj, number), eta=test_limit) obj.save() </code></pre> <p>And in my Celery tasks I wrote something like this:</p> <pre><code>@shared_task() def add_number(obj, number): base = Base.objects.get(id=1) base.add = base.number + number base.save() return obj </code></pre> <p>The problem with this code is that Celery runs the task immediately after the CreateView runs; my goal is to run the add_number task once, 5 minutes after the Something CreateView runs. Thank you so much.</p> <p><strong>Edit:</strong> </p> <ol> <li>I've tried changing the <code>eta</code> into <code>countdown=180</code>, but it still runs the function <code>add_number</code> immediately. I also tried a longer countdown, but it still runs immediately.</li> <li>I've tried @johnmoustafis' answer, but it is still the same: the task runs immediately.</li> <li>I've also tried @dana's answer, but it is still the same: the task runs immediately.</li> </ol>
<p>Celery by default uses UTC time.<br> If your timezone is "behind" UTC (UTC - HH:MM), the <code>datetime.now()</code> call will return a timestamp which is "behind" UTC, thus causing your task to be executed immediately.</p> <p>You can use <a href="https://docs.python.org/3/library/datetime.html#datetime.datetime.utcnow" rel="noreferrer"><code>datetime.utcnow()</code></a> instead:</p> <pre><code>test_limit = datetime.utcnow() + timedelta(minutes=5) </code></pre> <hr> <p>Since you are using Django, there is another option:</p> <p>If you have set <code>USE_TZ = True</code> in your <code>settings.py</code>, you have enabled the <a href="https://docs.djangoproject.com/en/1.11/topics/i18n/timezones/" rel="noreferrer">django timezone settings</a> and you can use <code>timezone.now()</code> instead of <code>datetime.utcnow()</code>:</p> <pre><code>from django.utils import timezone ... test_limit = timezone.now() + timedelta(minutes=5) </code></pre>
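<p>Applied to the view from the question, the call becomes:</p> <pre><code>from datetime import timedelta
from django.utils import timezone

test_limit = timezone.now() + timedelta(minutes=5)
testing_something.apply_async((obj, number), eta=test_limit)
</code></pre>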
python|django|celery|django-celery|celery-task
8
1,909,369
61,278,110
Excluding modules when importing everything in __init__.py
<p><strong>Problem</strong></p> <p>Consider the following layout:</p> <pre><code>package/ main.py math_helpers/ mymath.py __init__.py </code></pre> <p><code>mymath.py</code> contains:</p> <pre><code>import math def foo(): pass </code></pre> <p>In <code>main.py</code> I want to be able to use code from <code>mymath.py</code> like so:</p> <pre><code>import math_helpers math_helpers.foo() </code></pre> <p>In order to do so, <code>__init__.py</code> contains:</p> <pre><code>from .mymath import * </code></pre> <p>However, modules imported in <code>mymath.py</code> are now in the <code>math_helpers</code> namespace, e.g. <code>math_helpers.math</code> is accessible.</p> <hr> <p><strong>Current approach</strong></p> <p>I'm adding the following at the end of <code>mymath.py</code>.</p> <pre><code>import types __all__ = [name for name, thing in globals().items() if not (name.startswith('_') or isinstance(thing, types.ModuleType))] </code></pre> <p>This seems to work, but is it the correct approach?</p>
<p>On the one hand there are many good reasons not to do star imports, but on the other hand, Python is for consenting adults.</p> <p><code>__all__</code> is the recommended approach to determining what shows up in a star import. Your approach is correct, and you can further sanitize the namespace when finished:</p> <pre><code>import types __all__ = [name for name, thing in globals().items() if not (name.startswith('_') or isinstance(thing, types.ModuleType))] del types </code></pre> <p>While less recommended, you can also sanitize elements directly out of the module, so that they don't show up at all. This will be a problem if you need to use them in a function defined in the module, since every function object has a <code>__globals__</code> reference that is bound to its parent module's <code>__dict__</code>. But if you only import <code>math_helpers</code> to call <code>math_helpers.foo()</code>, and don't require a persistent reference to it elsewhere in the module, you can simply unlink it at the end:</p> <pre><code>del math_helpers </code></pre> <p><strong>Long Version</strong></p> <p>A module import runs the code of the module in the namespace of the module's <code>__dict__</code>. Any names that are bound at the top level, whether by class definition, function definition, direct assignment, or other means, live in that dictionary. Sometimes, it is desirable to clean up intermediate variables, as I suggested doing with <code>types</code>.</p> <p>Let's say your module looks like this:</p> <p><strong>test_module.py</strong></p> <pre><code>import math import numpy as np def x(n): return math.sqrt(n) class A(np.ndarray): pass import types __all__ = [name for name, thing in globals().items() if not (name.startswith('_') or isinstance(thing, types.ModuleType))] </code></pre> <p>In this case, <code>__all__</code> will be <code>['x', 'A']</code>. However, the module itself will contain the following names: <code>'math', 'np', 'x', 'A', 'types', '__all__'</code>.</p> <p>If you run <code>del types</code> at the end, it will remove that name from the namespace. Clearly this is safe because <code>types</code> is not referenced anywhere once <code>__all__</code> has been constructed.</p> <p>Similarly, if you wanted to remove <code>np</code> by adding <code>del np</code>, that would be OK. The class <code>A</code> is fully constructed by the end of the module code, so it does not require the global name <code>np</code> to reference its parent class.</p> <p>Not so with <code>math</code>. If you were to do <code>del math</code> at the end of the module code, the function <code>x</code> would not work. If you import your module, you can see that <code>x.__globals__</code> is the module's <code>__dict__</code>:</p> <pre><code>import test_module test_module.__dict__ is test_module.x.__globals__ </code></pre> <p>If you delete <code>math</code> from the module dictionary and call <code>test_module.x</code>, you will get</p> <pre><code>NameError: name 'math' is not defined </code></pre> <p>So under some very special circumstances you may be able to sanitize the namespace of <code>mymath.py</code>, but that is not the recommended approach as it only applies to certain cases.</p> <p>In conclusion, stick to using <code>__all__</code>.</p> <p><strong>A Story That's Sort of Relevant</strong></p> <p>One time, I had two modules that implemented similar functionality, but for different types of end users.
There were a couple of functions that I wanted to copy out of module <code>a</code> into module <code>b</code>. The problem was that I wanted the functions to work as if they had been defined in module <code>b</code>. Unfortunately, they depended on a constant that was defined in <code>a</code>. <code>b</code> defined its own version of the constant. For example:</p> <p><strong>a.py</strong></p> <pre><code>value = 1 def x(): return value </code></pre> <p><strong>b.py</strong></p> <pre><code>from a import x value = 2 </code></pre> <p>I wanted <code>b.x</code> to access <code>b.value</code> instead of <code>a.value</code>. I pulled that off by adding the following to <code>b.py</code> (based on <a href="https://stackoverflow.com/a/13503277/2988730">https://stackoverflow.com/a/13503277/2988730</a>):</p> <pre><code>import functools, types x = functools.update_wrapper(types.FunctionType(x.__code__, globals(), x.__name__, x.__defaults__, x.__closure__), x) x.__kwdefaults__ = x.__wrapped__.__kwdefaults__ x.__module__ = __name__ del functools, types </code></pre> <p>Why am I telling you all this? Well, you can make a version of your module that does not have any stray names in your namespace. You won't be able to see changes to global variables in your functions though. This is just an exercise in pushing Python beyond its normal usage. I highly recommend not doing this, but here is a sample module that effectively freezes its <code>__dict__</code> as far as the functions are concerned. This has the same members as <code>test_module</code> above, but with no modules in the global namespace:</p> <pre><code>import math import numpy as np def x(n): return math.sqrt(n) class A(np.ndarray): pass import functools, types, sys def wrap(obj): """ Written this way to be able to handle classes """ for name in dir(obj): if name.startswith('_'): continue thing = getattr(obj, name) if isinstance(thing, types.FunctionType) and thing.__module__ == __name__: setattr(obj, name, functools.update_wrapper(types.FunctionType(thing.__code__, d, thing.__name__, thing.__defaults__, thing.__closure__), thing)) getattr(obj, name).__kwdefaults__ = thing.__kwdefaults__ elif isinstance(thing, type) and thing.__module__ == __name__: wrap(thing) d = globals().copy() wrap(sys.modules[__name__]) del d, wrap, sys, math, np, functools, types </code></pre> <p>So yeah, please don't ever do this! But if you do, stick it in a utility class somewhere.</p>
python|python-import
5
1,909,370
60,529,588
How can I cast a Pandas string column to the new nullable Int64 type?
<p>I am trying to cast a string column in a Pandas DataFrame into numeric columns.</p> <p>I use the following DataFrame:</p> <pre><code>import pandas as pd import numpy as np d = {'col1': ['1', '2'], 'col2': ['5', str(np.nan)], 'col3': [99, str(pd.NA)]} df = pd.DataFrame(d) print(df) </code></pre> <blockquote> <pre><code> col1 col2 col3 0 1 5 99 1 2 nan &lt;NA&gt; </code></pre> </blockquote> <p>Now, when I cast <code>col1</code> to <code>int</code>, and <code>col2</code> to <code>float</code>, it works fine:</p> <pre><code>print(df.col1.astype(int)) print(df.col2.astype(float)) </code></pre> <blockquote> <pre><code>0 1 1 2 Name: col1, dtype: int64 0 5.0 1 NaN Name: col2, dtype: float64 </code></pre> </blockquote> <p>But when I try to cast <code>col3</code> from <code>str</code> to <code>Int64</code> I get the following error:</p> <pre><code>df.col3.astype(pd.Int64Dtype()) </code></pre> <blockquote> <p><code>TypeError: object cannot be converted to an IntegerDtype</code></p> </blockquote> <p>Is this intended?</p> <p>How can I overcome this limitation?</p> <p><strong>EDIT:</strong> I edited the example data to make the intention clearer.</p>
<p><strong>Update</strong>:</p> <p>Your sample data has column <code>col3</code> holding an integer <code>99</code> and a string representation of <code>pd.NA</code>, but your question title asks about a string column. So, just in case you meant that <code>col3</code> has a string <code>'99'</code> and a string representation of <code>pd.NA</code>, such as</p> <pre><code>In [124]: s1 = pd.Series(['99', str(pd.NA)]) In [125]: s1 Out[125]: 0 99 1 &lt;NA&gt; dtype: object In [126]: s1.map(type) Out[126]: 0 &lt;class 'str'&gt; 1 &lt;class 'str'&gt; dtype: object </code></pre> <p>In this case, pandas doesn't allow using <code>astype</code> to directly convert it to <code>Int64</code>. You need to use <code>pd.to_numeric</code> with <code>'coerce'</code> and cast to <code>Int64</code>:</p> <pre><code>In [130]: s = pd.to_numeric(s1, errors='coerce').astype('Int64') In [131]: s Out[131]: 0 99 1 &lt;NA&gt; dtype: Int64 In [132]: s.map(type) Out[132]: 0 &lt;class 'int'&gt; 1 &lt;class 'pandas._libs.missing.NAType'&gt; dtype: object </code></pre> <hr> <p><strong>Original</strong>: </p> <p>In pandas 1.0.0+, <code>pd.NA</code> is introduced to represent missing values for the nullable integer and boolean data types and the new string data type. When you call <code>str</code> on <code>pd.NA</code> (i.e. you call <code>str(pd.NA)</code> in the dataframe constructor for <code>col3</code>), it returns its string representation. Its string representation is the string <code>&lt;NA&gt;</code>. </p> <pre><code>In [84]: pd.NA.__str__() Out[84]: '&lt;NA&gt;' </code></pre> <p>It is the same as when you call <code>str</code> on <code>np.nan</code>: its string representation is the string <code>nan</code>.</p> <pre><code>In [86]: np.nan.__str__() Out[86]: 'nan' </code></pre> <p>Therefore, <code>col3</code> actually has <strong>NO</strong> <code>pd.NA</code>. It just contains an integer <code>99</code> and a string representation of <code>pd.NA</code> (i.e. it is just a plain string <code>&lt;NA&gt;</code>). You want to cast the string <code>&lt;NA&gt;</code> to the nullable integer type <code>Int64</code> (an alias of <code>pd.Int64Dtype()</code>), so it errors out.</p> <p><strong>The solution</strong>: </p> <p>You need to replace this plain string <code>&lt;NA&gt;</code> with the true <code>pd.NA</code> and cast to <code>Int64</code>:</p> <pre><code>s = df.col3.replace('&lt;NA&gt;', pd.NA).astype('Int64') Out[57]: 0 99 1 &lt;NA&gt; Name: col3, dtype: Int64 </code></pre> <hr> <p><strong>Detail:</strong></p> <p>The <code>&lt;NA&gt;</code> in <code>col3</code> is clearly just a plain string:</p> <pre><code>In [64]: df.loc[1, 'col3'] Out[64]: '&lt;NA&gt;' In [65]: type(df.loc[1, 'col3']) Out[65]: str </code></pre> <p>After replacing it with <code>pd.NA</code> and casting to <code>Int64</code>, it is the true <code>pd.NA</code>:</p> <pre><code>In [66]: s = df.col3.replace('&lt;NA&gt;', pd.NA).astype('Int64') In [68]: s[1] Out[68]: &lt;NA&gt; In [69]: type(s[1]) Out[69]: pandas._libs.missing.NAType </code></pre>
python|python-3.x|pandas|dataframe|types
3
1,909,371
57,751,042
How to search for only emails containing attachments in poplib
<p>I am trying to use poplib to search through emails and get only the ones with attachments. I have some current code, but it's very slow because it downloads all the email messages. Is there any way to just search the server for emails that have attachments and then download only those?</p> <pre><code>def fetch_mail(delete_after=False): pop_conn = mail_connection() print("connected") messages = [pop_conn.retr(i) for i in range(1, len(pop_conn.list()[1]) + 1)] messages = ["\n".join(mssg[1]) for mssg in messages] messages = [parser.Parser().parsestr(mssg) for mssg in messages] if delete_after == True: delete_messages = [pop_conn.dele(i) for i in range(1, len(pop_conn.list()[1]) + 1)] pop_conn.quit() return messages allowed_mimetypes = ["text/plain"] def get_attachments(): messages = fetch_mail() attachments = [] for msg in messages: for part in msg.walk(): if part.get_content_type() in allowed_mimetypes: name = part.get_filename() data = part.get_payload(decode=True) if name != None: ranint = random.randint(100000,999999) f = open(str(ranint) + name, 'wb') f.write(data) f.close() attachments.append(name) else: continue return attachments </code></pre>
<p>The POP3 protocol does not support searching or any other feature which might allow you to determine which messages have attachments. If you are stuck with POP3, you are out of luck.</p>
python|python-2.7|email|pop3
0
1,909,372
56,066,190
Kivy - updating label text periodically
<p>I am new to Python and Kivy. I am working on a dashboard to display the time and other parameters. The dashboard is currently set up and displays all values perfectly, but I still can't figure out how to update the time dynamically in the labels used for the time. I found similar posts but am still struggling. Below is a summarised portion of my code.</p> <p>I worked with the Clock object to trigger a method every second that needs to update the label text in the kv file, but I was unable to get this logic working.</p> <p><strong>sample.py</strong></p> <pre><code>import time import datetime import kivy kivy.require('1.11.0') from kivy.app import App from kivy.uix.boxlayout import BoxLayout from kivy.core.text import LabelBase from kivy.clock import Clock class MySec(BoxLayout): seconds_string = time.strftime("%S") class MyApp(App): def build(self): #Clock.schedule_interval('periodic_method', 1) return MySec() if __name__ == '__main__': MyApp().run() </code></pre> <p><strong>my.kv file</strong></p> <pre><code>&lt;mysec&gt;: orientation: 'vertical' Label: id: kv_sec text: root.seconds_string font_size: 200 </code></pre> <p>In short, how should I modify the .py file so that my .kv label text gets updated every second with the new value? Thanks a lot in advance.</p>
<p>Use a Kivy <a href="https://kivy.org/doc/stable/api-kivy.properties.html#kivy.properties.StringProperty" rel="nofollow noreferrer"><code>StringProperty</code></a> to automatically update the <code>Label</code>'s text, and use Kivy <a href="https://kivy.org/doc/stable/api-kivy.clock.html" rel="nofollow noreferrer"><code>Clock</code></a> object e.g. <code>Clock.schedule_interval()</code> to update the <code>StringProperty</code> at every time interval.</p> <ul> <li>Replace <code>seconds_string = time.strftime("%S")</code> with class attribute of type <code>StringProperty</code>, <code>seconds_string = StringProperty('')</code></li> <li>Implement a method, <code>update_time()</code> to update class attribute, <code>seconds_string</code></li> <li>Use Kivy <code>Clock.schedule_interval()</code> to invoke method <code>update_time()</code></li> </ul> <h1><a href="https://kivy.org/doc/stable/guide/events.html#introduction-to-properties" rel="nofollow noreferrer">Kivy » Introduction to Properties</a></h1> <blockquote> <p>Properties are an awesome way to define events and bind to them. Essentially, they produce events such that when an attribute of your object changes, all properties that reference that attribute are automatically updated.</p> </blockquote> <h1>Example</h1> <p>The following example uses <code>time()</code> function to extract time. It can be replaced with <code>datetime.now()</code> e.g. replace <code>time.strftime("%S")</code> with <code>datetime.now().strftime("%S")</code>, and add import statement, <code>from datetime import datetime</code></p> <h2>main.py</h2> <pre><code>import time import kivy kivy.require('1.11.0') from kivy.app import App from kivy.uix.boxlayout import BoxLayout from kivy.clock import Clock from kivy.lang import Builder from kivy.properties import StringProperty Builder.load_string(""" &lt;MySec&gt;: orientation: 'vertical' Label: id: kv_sec text: root.seconds_string font_size: 200 """) class MySec(BoxLayout): seconds_string = StringProperty('') class MyApp(App): def build(self): Clock.schedule_interval(lambda dt: self.update_time(), 1) return MySec() def update_time(self): self.root.seconds_string = time.strftime("%S") if __name__ == '__main__': MyApp().run() </code></pre> <h1>Output</h1> <p><a href="https://i.stack.imgur.com/yH77Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yH77Q.png" alt="Result"></a></p>
python|kivy
1
1,909,373
71,516,246
os.environ.setdefault('DJANGO_SETTINGS_MODULE') doesn't work
<p>I'm trying to run a standalone script that uses the Django models for accessing the database.</p> <p>The script is very simple, see below:</p> <pre><code>import sys from manager.models import Playlist from manager.utils import clean_up_playlist, add_record_to_playlist def main(playlist_id, username): playlist = Playlist.objects.get(playlists=playlist_id) # the script does other stuff if __name__ == &quot;__main__&quot;: playlist_id = sys.argv[1] username = sys.argv[2] import os os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'SpotifyPlaylistManager.settings') import django django.setup() main(playlist_id, username) </code></pre> <p>The script is in the top folder of the Django project:</p> <pre><code>SpotifyPlaylistManager/ |-SpotifyPlaylistManager/ |-settings.py |-venv |-manage.py |-my_script.py </code></pre> <p>For some reason, when I try to run it with the command below, I get the error</p> <pre><code>raise ImproperlyConfigured( django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings. </code></pre> <p>The actual command I need to launch is</p> <pre><code>source /home/nicola/PycharmProjects/SpotifyPlaylistManager/venv/bin/activate &amp;&amp; python /home/nicola/PycharmProjects/SpotifyPlaylistManager/scheduler.py 6tIMeXF1Q9bB7KDywBhG2P nicoc &amp;&amp; deactivate </code></pre> <p>I can't find the issue.</p>
<p>Moving the Django setup and the model imports inside the <code>__main__</code> block worked. The imports from <code>manager.models</code> and <code>manager.utils</code> load Django models, so they must run after <code>django.setup()</code>; with them at the top of the file, the models were being loaded before the settings were configured, which is what raised <code>ImproperlyConfigured</code>.</p> <pre><code>if __name__ == &quot;__main__&quot;: playlist_id = sys.argv[1] username = sys.argv[2] import os os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'SpotifyPlaylistManager.settings') import django django.setup() from manager.models import Playlist from manager.utils import clean_up_playlist, add_record_to_playlist main(playlist_id, username) </code></pre>
python|django|python-venv
0
1,909,374
55,341,028
Getting two inputs in one line, and how to let the code run even if the user gives only one input?
<p>I want to use one line to get multiple inputs, and if the user only gives one input, the algorithm would decide whether the input is a negative number. If it is, then the algorithm stops. Otherwise, the algorithm loops to get a correct input.</p> <p>My code:</p> <pre class="lang-py prettyprint-override"><code>integer, string = input(&quot;Enter an integer and a word: &quot;) </code></pre> <p>When I try the code, Python returns</p> <pre><code>ValueError: not enough values to unpack (expected 2, got 1) </code></pre> <p>I tried &quot;try&quot; and &quot;except&quot;, but I couldn't get the &quot;integer&quot; input. How can I fix that?</p>
<p>In order to get two inputs at a time, you can use <a href="https://docs.python.org/3/library/stdtypes.html#str.split" rel="nofollow noreferrer"><code>split()</code></a>, as in the following example (the <code>continue</code> ensures <code>x</code> and <code>y</code> are not used before both values have been given):</p> <pre><code>x = 0 while int(x) &gt;= 0 : try : x, y = input(&quot;Enter two values: &quot;).split() except ValueError: print(&quot;You missed one&quot;) continue print(&quot;This is x : &quot;, x) print(&quot;This is y : &quot;, y) </code></pre>
python|python-3.x
2
1,909,375
57,571,342
datetime format in python to append to a filename
<p>Following is the code that I am using to get the current date &amp; time:</p> <pre><code>import datetime date_object = datetime.datetime.now() print(date_object) </code></pre> <p>Output is: 2019-08-20 15:24:46.670533</p> <p>I need the output in the format 20190820152446 so that I can append it to the filename, like abc.txt_20190820152446.</p> <p>I used the following:</p> <pre><code> date_object = datetime.datetime.now() print(date_object) stamp=str(date_object.year)+str(date_object.month) +str(date_object.day)+str(date_object.hour)+str(date_object.minute) +str(date_object.second) print(stamp) </code></pre> <p>Output is : 2019-08-20 15:24:46.670533</p> <p>2019820152446</p> <p>Is there a better way to do it in Python? I am new to Python. Any help is appreciated.</p>
<p>Use <code>.strftime()</code></p> <p><strong>Ex:</strong></p> <pre><code>import datetime date_object = datetime.datetime.now() print(date_object.strftime("%Y%m%d%H%M%S")) # --&gt;20190820154309 </code></pre>
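<p>To build the filename from the question, the formatted timestamp can be appended directly. Unlike the manual field-by-field concatenation in the question, <code>strftime</code> zero-pads every field, so August comes out as <code>08</code> rather than <code>8</code>:</p> <pre><code>import datetime

stamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
filename = "abc.txt_" + stamp  # e.g. abc.txt_20190820152446
</code></pre>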
python-3.x
1
1,909,376
57,650,834
Gaps in matplotlib's histogram `hist`
<p>I am following an online course on Python. This is the code, verbatim. It conducts a Monte Carlo repetition of 100 random walks, 10 steps each.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt np.random.seed(123) final_tails = [] for x in range(100) : tails = [0] for x in range(10) : coin = np.random.randint(0, 2) tails.append(tails[x] + coin) final_tails.append(tails[-1]) plt.hist(final_tails, bins = 10) plt.show() </code></pre> <p>The course says that I should get the plot without gaps. I get exactly the same bar heights, in exactly the same order, but with some odd spacing gaps between them.</p> <p>Can anyone corroborate this result or explain it?</p> <p><a href="https://i.stack.imgur.com/hqh3L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hqh3L.png" alt="enter image description here"></a><a href="https://i.stack.imgur.com/vgg7x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vgg7x.png" alt="enter image description here"></a></p> <p>I am using:</p> <ul> <li>Python 3.7.1 64-bit</li> <li>Windows 7 64-bit</li> <li>Spyder 3.3.2</li> </ul> <p>Thanks.</p> <p><strong>AFTERNOTE</strong></p> <p>I noticed that, unlike the course's abutted bars, my bin edges align with integers. This is not good, as the data will be integers, but whether the integers fall into the left or right side of the bin edges should be consistent. Hence, it doesn't seem to explain the gap. It could mean, however, that the auto-generation of bin edges changed somewhere in the evolution of matplotlib. I don't know what version the course uses.</p> <p><strong>P.S.</strong> The following indicates the problem is that the bin edges don't straddle all the integers in the data value range:</p> <pre><code>print( np.unique( np.array( final_tails ) ) ) print( np.unique( final_tails ) ) # data values hist, bin_edges = np.histogram( final_tails ) print(bin_edges) # bin edges print(hist) # bar heights </code></pre> <ul> <li><p>The data values are: [2 3 4 5 6 7 8 9]</p></li> <li><p>The bin edges are: [2. 2.7 3.4 4.1 4.8 5.5 6.2 6.9 7.6 8.3 9. ]</p></li> <li><p>The bar heights are: [ 2 10 23 0 21 27 0 10 6 1]</p></li> </ul> <p>I got the course's nice abutted bars using:</p> <pre><code>plt.hist( final_tails , bins = np.arange( min( final_tails ) - 0.5 , max( final_tails ) + 1.5 , 1.0 ) , edgecolor="k" ) plt.show() </code></pre> <p>I have not posted this as the answer, as the credit goes to saibhaskar and ImportanceOfBeingErnest, who provided the details.</p> <p>But I do wonder whether this need to customize the bin edges might be because the scheme for automatic bin edges has changed between the creation of the course material and now.</p>
<p>The minimum and maximum of your data are 2 and 9, respectively. Dividing this range into 10 bins means each bin is 0.7 wide. We can compute the edges, which are 2, 2.7, 3.4, 4.1, 4.8, etc.</p> <pre><code>print(min(final_tails), max(final_tails)) # 2 9 step = (max(final_tails)-min(final_tails))/10 print(step) # 0.7 edges = np.linspace(min(final_tails), max(final_tails), 10+1) print(edges) # [2.0 2.7 3.4 4.1 4.8 5.5 6.2 6.9 7.6 8.3 9.0 ] </code></pre> <p>Since your data contains only integers, some bins, e.g. the one between 4.1 and 4.8, contain no data, hence those bins' bars are missing in the plot. </p> <p>I suspect that the image you show from the course has been produced by different code than the one you show here.</p>
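<p>For integer-valued data like this, a common remedy (the same one the question's afternote arrives at) is to pass unit-wide bins centered on the integers, so every integer falls inside exactly one bin:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

# edges at half-integers: one bin per integer value, no empty bins between occupied ones
bins = np.arange(min(final_tails) - 0.5, max(final_tails) + 1.5)
plt.hist(final_tails, bins=bins, edgecolor='k')
plt.show()
</code></pre>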
python|matplotlib
1
1,909,377
42,355,551
Displaying line graph using data frame grouping
<p>I am learning data frames and trying out different graphs. I have a data set of video games and am trying to plot a graph which shows years on the x axis, net sales on the y axis, and one line per video game genre. I have grouped the data but am facing issues displaying it. Below is what I have tried:</p> <pre><code>import pandas as pd %matplotlib inline from matplotlib.pyplot import hist df = pd.read_csv('VideoGames.csv') s = df.groupby(['Genre','Year_of_Release']).agg(sum)['Global_Sales'] print(s) </code></pre> <p>The data is grouped properly as shown below:</p> <pre><code>Genre Year_of_Release Action 1980.0 0.34 1981.0 14.84 1982.0 6.52 1983.0 2.86 1984.0 1.85 1985.0 3.52 1986.0 13.74 1987.0 1.12 1988.0 1.75 1989.0 4.64 1990.0 6.39 1991.0 6.76 1992.0 3.83 1993.0 1.81 1994.0 1.55 1995.0 3.57 1996.0 20.58 1997.0 27.58 1998.0 39.44 1999.0 27.77 2000.0 34.04 2001.0 59.39 2002.0 86.76 2003.0 67.93 2004.0 76.25 2005.0 85.53 2006.0 66.13 2007.0 104.97 2008.0 135.01 2009.0 137.66 ... Sports 2013.0 41.23 2014.0 45.10 2015.0 40.90 2016.0 23.53 Strategy 1991.0 0.94 1992.0 0.37 1993.0 0.81 1994.0 3.56 1995.0 6.51 1996.0 5.61 1997.0 7.71 1998.0 13.46 1999.0 18.45 2000.0 8.52 2001.0 7.55 2002.0 5.56 2003.0 7.99 2004.0 7.16 2005.0 5.31 2006.0 4.22 2007.0 9.26 2008.0 11.55 2009.0 12.36 2010.0 13.77 2011.0 8.84 2012.0 3.27 2013.0 6.09 2014.0 0.99 2015.0 1.84 2016.0 1.15 Name: Global_Sales, dtype: float64 </code></pre> <p>Please advise how I can plot the graphs for all the genres in one diagram. Thank you.</p>
<p>In a pandas plot, the index is plotted as the x axis and every column is plotted separately, so you just need to transform the series into a data frame with <em>Genre</em> as the columns:</p> <pre><code>ax = s.unstack('Genre').plot(kind = "line") </code></pre> <p><a href="https://i.stack.imgur.com/ZTYrQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZTYrQ.png" alt="enter image description here"></a></p>
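<p>Putting it together with the grouping from the question (the axis labels are optional additions):</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('VideoGames.csv')
s = df.groupby(['Genre', 'Year_of_Release'])['Global_Sales'].sum()

# one line per genre, with Year_of_Release on the x axis
ax = s.unstack('Genre').plot(kind='line')
ax.set_xlabel('Year of release')
ax.set_ylabel('Global sales')
plt.show()
</code></pre>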
python|python-2.7|pandas|dataframe|graph
2
1,909,378
42,427,487
Using alembic.config.main redirects log output
<p>I have a script that performs database operations alongside an alembic API call to upgrade a newly created database to head. I am having an issue with a Python logger instance where logs are written to a file using a module-level logger.</p> <p>Then the script invokes <code>alembic.config.main(argv=alembic_args)</code> to run a migration. However, every log statement after the alembic call, using the original logger instance, isn't written to the expected log file.</p> <p>Here is an example script that reproduces the behavior.</p> <pre><code>#!/usr/bin/env python3 import logging import os import alembic.config from .utilities import get_migration_dir logging.basicConfig(filename='test.log', level=logging.DEBUG) CUR_DIR = os.path.dirname(__file__) LOG = logging.getLogger('so_log') LOG.info('Some stuff') LOG.info('More stuff') alembic_config = ( '--raiseerr', 'upgrade', 'head' ) os.chdir(get_migration_dir()) alembic.config.main(argv=alembic_config) os.chdir(CUR_DIR) LOG.debug('logging after alembic call.') LOG.debug('more logging after alembic call.') print('Code still running after alembic') </code></pre> <p>Log file output</p> <pre><code>INFO:so_log:Some stuff INFO:so_log:More stuff </code></pre> <p>stdout</p> <pre><code>INFO [alembic.runtime.migration] Context impl PostgresqlImpl. INFO [alembic.runtime.migration] Will assume transactional DDL. print statement before alembic Code still running after alembic </code></pre> <p>It seems as though the logger instance, <code>LOG</code>, is losing context or being directed elsewhere after calling the alembic API.</p> <p>Also, I've tried running the alembic call in a separate thread, which yielded the same result. What I expect is that log statements continue to be written to the specified file after using alembic for migrations, but that is not happening. Furthermore, it's actually breaking the <code>LOG</code> instance for any code that's called afterward, unless I'm just missing something here.</p>
<p>This is because alembic sets up logging using <code>fileConfig</code> from <code>alembic.ini</code>; you can see it in your <code>env.py</code> script:</p> <pre><code># Interpret the config file for Python logging. # This line sets up loggers basically. fileConfig(config.config_file_name) </code></pre> <p>This effectively overrides your original logger config.</p> <p>To avoid this, you can simply remove this line from <code>env.py</code>; however, this will result in no logs being produced when running <code>alembic</code> from the console.</p> <p>A more robust option is to run alembic commands via <code>alembic.command</code> instead of <code>alembic.config.main</code>. This way you can override the alembic config at runtime:</p> <pre><code>from alembic.config import Config import alembic.command config = Config('alembic.ini') config.attributes['configure_logger'] = False alembic.command.upgrade(config, 'head') </code></pre> <p>Then in <code>env.py</code>:</p> <pre><code>if config.attributes.get('configure_logger', True): fileConfig(config.config_file_name) </code></pre>
python|logging|python-3.5|alembic
18
1,909,379
42,469,114
scatterplot python double edge line
<p>I have a scatter plot like the one below and would like my markers to have a double edge, without having to create the same scatter with the same coordinates on top of this one. I could not find how to have a double line as an edge. </p> <pre><code>import numpy as np import matplotlib.pyplot as plt N = 50 x = np.random.rand(N) y = np.random.rand(N) colors = np.random.rand(N) plt.scatter(x, y, s=400, c=colors, marker='h', alpha=0.5,edgecolors='black',linewidth=1) plt.show() </code></pre> <p>The main reason for this question comes from a bug I have: when I superimpose scatter plots with the same coordinates, the new plots I'm creating tend to shift their positions slightly and do not fit perfectly over each other<a href="https://i.stack.imgur.com/7zpJ1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7zpJ1.png" alt="bad fit"></a></p> <p>This bug does not show when the background marker has <code>facecolors=''</code> but only when it has <code>facecolors='w'</code>, which is a problem for me.</p>
<p>This does indeed seem to be a bug. </p> <p>A possible solution is to use the <code>colors</code> argument to plot white scatter points.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt N = 50 x = np.random.rand(N) y = np.random.rand(N) colors = np.random.rand(N) whites = [[1,1,1]]*N plt.scatter(x, y, s=400, c=whites, marker='h', alpha=0.5,edgecolors='black',linewidth=1) plt.scatter(x, y, s=260, c=colors, marker='h', alpha=0.5,edgecolors='black',linewidth=1) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/HG4E6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HG4E6.png" alt="enter image description here"></a></p> <p><hr> Depending on the application, using special symbols as markers may also be an option. See <a href="https://stackoverflow.com/questions/41108055/matplotlib-plot-dashed-circles-using-plt-plot-instead-of-plt-scatter">this question</a> or the <a href="http://ctan.space-pro.be/tex-archive/fonts/stix/doc/stix.pdf" rel="nofollow noreferrer">complete list</a>.</p> <pre><code>import matplotlib.pyplot as plt N = 4 x = [1,1,2,2] y = [1,2,1,2] symbols = [ur"$\u27C1$", ur"$\u25C8$", ur"$\u229A$", ur"$\u29C8$"] for i in range(N): plt.scatter(x[i], y[i], s=400, c=(i/float(N), 0, 1-i/float(N)), marker=symbols[i], alpha=0.5,linewidth=1) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/9TDLk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9TDLk.png" alt="enter image description here"></a></p>
python|matplotlib|scatter-plot
1
1,909,380
54,056,711
PySpark: Uncaught exception in thread stdout writer for python.exe
<p>I'm working on an ETL application using pyspark. I've finished the implementation and when running it on pieces of my dataset it works fine. However I try using the entire dataset (2.5 GB of text) I get an error like this:</p> <pre><code>[Stage 137:============&gt;(793 + 7) / 800][Stage 139:&gt; (0 + 1) / 800]Traceback (most recent call last): File "c:\spark\python\lib\pyspark.zip\pyspark\java_gateway.py", line 169, in local_connect_and_auth File "c:\spark\python\lib\pyspark.zip\pyspark\java_gateway.py", line 144, in _do_server_auth File "c:\spark\python\lib\pyspark.zip\pyspark\serializers.py", line 653, in loads File "c:\spark\python\lib\pyspark.zip\pyspark\serializers.py", line 690, in read_int File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\socket.py", line 586, in readinto return self._sock.recv_into(b) socket.timeout: timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "c:\spark\python\lib\pyspark.zip\pyspark\worker.py", line 290, in &lt;module&gt; File "c:\spark\python\lib\pyspark.zip\pyspark\java_gateway.py", line 172, in local_connect_and_auth NameError: name '_exception_message' is not defined 19/01/05 10:53:28 ERROR Utils: Uncaught exception in thread stdout writer for C:\Users\username\AppData\Local\Continuum\miniconda3\python.exe java.net.SocketException: socket already closed at java.net.TwoStacksPlainSocketImpl.socketShutdown(Native Method) at java.net.AbstractPlainSocketImpl.shutdownOutput(AbstractPlainSocketImpl.java:580) at java.net.PlainSocketImpl.shutdownOutput(PlainSocketImpl.java:258) at java.net.Socket.shutdownOutput(Socket.java:1556) at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1$$anonfun$apply$2.apply$mcV$sp(PythonRunner.scala:263) at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1$$anonfun$apply$2.apply(PythonRunner.scala:263) at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1$$anonfun$apply$2.apply(PythonRunner.scala:263) at org.apache.spark.util.Utils$.tryLog(Utils.scala:2005) at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:263) at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992) at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170) 19/01/05 10:53:28 ERROR Executor: Exception in task 797.0 in stage 137.0 (TID 24032) java.net.SocketException: Connection reset by peer: socket write error at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111) at java.net.SocketOutputStream.write(SocketOutputStream.java:155) at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:211) at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223) at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223) at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:223) at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:439) at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:247) at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992) at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170) 19/01/05 10:53:28 ERROR Executor: Exception in task 796.0 in stage 137.0 (TID 24031) org.apache.spark.SparkException: Python worker failed to connect back. at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:148) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:76) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:86) at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:67) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53) at org.apache.spark.scheduler.Task.run(Task.scala:109) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.net.SocketException: Software caused connection abort: socket write error at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111) at java.net.SocketOutputStream.write(SocketOutputStream.java:134) at java.io.DataOutputStream.writeInt(DataOutputStream.java:198) at org.apache.spark.security.SocketAuthHelper.writeUtf8(SocketAuthHelper.scala:96) at org.apache.spark.security.SocketAuthHelper.authClient(SocketAuthHelper.scala:57) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:143) ... 
31 more 19/01/05 10:53:29 ERROR TaskSetManager: Task 797 in stage 137.0 failed 1 times; aborting job Traceback (most recent call last): File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 476, in &lt;module&gt; Main(sys.argv[1:]) File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 471, in __init__ for reportName, report in dataObj.generateReports(sqlContext): File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 443, in generateReports report = reportGenerator(sqlContext, commonSchema) File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 378, in generateByCycleReport **self.generateStats(contributionsByCycle[cycle])}) File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 424, in generateStats stats[columnName] = aggregator(self.dataFrames['demographics'][demographicId]) File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 282, in totalContributed return df.agg({"amount": "sum"}).collect()[0]['sum(amount)'] or 0 File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\pyspark\sql\dataframe.py", line 466, in collect sock_info = self._jdf.collectToPython() File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\py4j\java_gateway.py", line 1257, in __call__ answer, self.gateway_client, self.target_id, self.name) File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\pyspark\sql\utils.py", line 63, in deco return f(*a, **kw) File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\py4j\protocol.py", line 328, in get_return_value format(target_id, ".", name), value) py4j.protocol.Py4JJavaError: An error occurred while calling o273.collectToPython. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 797 in stage 137.0 failed 1 times, most recent failure: Lost task 797.0 in stage 137.0 (TID 24032, localhost, executor driver): java.net.SocketException: Connection reset by peer: socket write error at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111) at java.net.SocketOutputStream.write(SocketOutputStream.java:155) at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:211) at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223) at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223) at scala.collection.Iterator$class.foreach(Iterator.scala:893) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:223) at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:439) at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:247) at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992) at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639) at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099) at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:363) at org.apache.spark.rdd.RDD.collect(RDD.scala:944) at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297) at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3200) at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3197) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3197) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:748) Caused by: java.net.SocketException: Connection reset by peer: socket write error at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111) at java.net.SocketOutputStream.write(SocketOutputStream.java:155) at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:211) at 
org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223) at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223) at scala.collection.Iterator$class.foreach(Iterator.scala:893) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:223) at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:439) at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:247) at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992) at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170) [Stage 137:============&gt;(793 + 5) / 800][Stage 139:&gt; (0 + 2) / 800] </code></pre> <p>Note that this is only one instance of the error; the errors themselves, and the location and time of failure, have not been consistent. I believe this has something to do with the setup of my project rather than the implementation itself. The only part that the errors seem to have in common is the <code>ERROR Utils: Uncaught exception in thread stdout writer for C:\Users\username\AppData\Local\Continuum\miniconda3\python.exe</code>. </p> <p>I'm not sure why this is happening since there's barely any reference to my implementation; the one stack trace back to my code gives the message <code>java.net.SocketException: Connection reset by peer: socket write error</code>, which isn't something I understand. </p> <p>I've looked over other StackOverflow questions regarding PySpark, and while I haven't found one that matches my problem, it seems that the scalability issues go back to the configuration. This is the config I've been using for every run:</p> <pre><code>spark.driver.memory: 12g spark.driver.port: 51126 spark.executor.id: driver spark.driver.maxResultSize: 12g spark.memory.offHeap.size: 12g spark.memory.offHeap.enabled: true spark.executor.memory: 12g spark.executor.heartbeatInterval: 36000000s spark.executor.cores: 4 spark.driver.host: &lt;redacted&gt; spark.rdd.compress: True spark.network.timeout: 60000000s spark.serializer.objectStreamReset: 100 spark.app.name: &lt;redacted&gt; spark.master: local[*] spark.submit.deployMode: client spark.app.id: local-1546685579638 spark.memory.fraction: 0 spark.ui.showConsoleProgress: true </code></pre> <p>Any help with this issue is appreciated. Details of my system:</p> <ul> <li>Python 3.6 (via Anaconda)</li> <li>PySpark 2.3.2 (using builtin-java classes, no native Hadoop)</li> <li>PyCharm CE 2018.3.1</li> <li>Windows 10 (16GB Memory, 8 cores)</li> </ul>
<p>I see a primary socket timeout error. Try increasing <code>spark.executor.heartbeatInterval</code> to 3600s.</p> <p>Include this in your code, on the line after the <code>conf</code> variable is defined, and give it a try. It should work.</p> <pre><code>conf.set("spark.executor.heartbeatInterval","3600s") </code></pre>
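<p>A minimal sketch of where that setting would sit, assuming the configuration is built through a <code>SparkConf</code> object named <code>conf</code> (the app name below is a placeholder):</p> <pre><code>from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = SparkConf().setAppName('etl-job')  # hypothetical app name
conf.set('spark.executor.heartbeatInterval', '3600s')
# spark.network.timeout must stay larger than the heartbeat interval
conf.set('spark.network.timeout', '7200s')

spark = SparkSession.builder.config(conf=conf).getOrCreate()
</code></pre>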
python|apache-spark|pyspark|pyspark-sql
0
1,909,381
53,893,864
Using custom images for Tensorflow.js in node.js or browser
<p>The Tensorflow.js CNN example is nice, and I decided to train it with my custom character images (local images like this <img src="https://i.imgur.com/ZJHTjfo.png" alt="imgur">, also available as browser img elements). However, I can't replicate the test because the example's code uses preprocessed data images.</p> <p>I copied the example from here (<a href="https://github.com/tensorflow/tfjs-examples/blob/master/mnist-node/README.md" rel="nofollow noreferrer">https://github.com/tensorflow/tfjs-examples/blob/master/mnist-node/README.md</a>) and added the required Node.js packages. The example ran successfully. But I realized that I can't change the data the example is using, because it loads preprocessed data like below.</p> <pre><code>const BASE_URL = 'https://storage.googleapis.com/cvdf-datasets/mnist/'; const TRAIN_IMAGES_FILE = 'train-images-idx3-ubyte'; const TRAIN_LABELS_FILE = 'train-labels-idx1-ubyte'; const TEST_IMAGES_FILE = 't10k-images-idx3-ubyte'; const TEST_LABELS_FILE = 't10k-labels-idx1-ubyte'; </code></pre> <p>I made images in the same format as MNIST (28*28), so I thought I could just swap the train and test data, but I failed because I don't know what the <code>idx3-ubyte</code> format is. The <code>data.js</code> file's URL is <a href="https://github.com/tensorflow/tfjs-examples/blob/master/mnist-node/data.js" rel="nofollow noreferrer">here</a>.</p> <p>How can I generate the same <code>ubyte</code> files? Or how can I use local images or img elements directly?</p> <p><strong>update</strong> I examined the <code>data.js</code> file's reading part and managed to generate the same file format. It also has header values.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="false" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code> async function loadImages(filename) { const buffer = await fetchOnceAndSaveToDiskWithBuffer(filename); const headerBytes = IMAGE_HEADER_BYTES; const recordBytes = IMAGE_HEIGHT * IMAGE_WIDTH; const headerValues = loadHeaderValues(buffer, headerBytes); assert.equal(headerValues[0], IMAGE_HEADER_MAGIC_NUM); assert.equal(headerValues[2], IMAGE_HEIGHT); assert.equal(headerValues[3], IMAGE_WIDTH); const images = []; let index = headerBytes; while (index &lt; buffer.byteLength) { const array = new Float32Array(recordBytes); for (let i = 0; i &lt; recordBytes; i++) { // Normalize the pixel values into the 0-1 interval, from // the original 0-255 interval. 
array[i] = buffer.readUInt8(index++) / 255; } images.push(array); } assert.equal(images.length, headerValues[1]); return images; } async function loadLabels(filename) { const buffer = await fetchOnceAndSaveToDiskWithBuffer(filename); const headerBytes = LABEL_HEADER_BYTES; const recordBytes = LABEL_RECORD_BYTE; const headerValues = loadHeaderValues(buffer, headerBytes); assert.equal(headerValues[0], LABEL_HEADER_MAGIC_NUM); const labels = []; let index = headerBytes; while (index &lt; buffer.byteLength) { const array = new Int32Array(recordBytes); for (let i = 0; i &lt; recordBytes; i++) { array[i] = buffer.readUInt8(index++); } labels.push(array); } assert.equal(labels.length, headerValues[1]); return labels; } getData_(isTrainingData) { let imagesIndex; let labelsIndex; if (isTrainingData) { imagesIndex = 0; labelsIndex = 1; } else { imagesIndex = 2; labelsIndex = 3; } const size = this.dataset[imagesIndex].length; tf.util.assert( this.dataset[labelsIndex].length === size, `Mismatch in the number of images (${size}) and ` + `the number of labels (${this.dataset[labelsIndex].length})`); // Only create one big array to hold batch of images. const imagesShape = [size, IMAGE_HEIGHT, IMAGE_WIDTH, 1]; const images = new Float32Array(tf.util.sizeFromShape(imagesShape)); const labels = new Int32Array(tf.util.sizeFromShape([size, 1])); let imageOffset = 0; let labelOffset = 0; for (let i = 0; i &lt; size; ++i) { images.set(this.dataset[imagesIndex][i], imageOffset); labels.set(this.dataset[labelsIndex][i], labelOffset); imageOffset += IMAGE_FLAT_SIZE; labelOffset += 1; } return { images: tf.tensor4d(images, imagesShape), labels: tf.oneHot(tf.tensor1d(labels, 'int32'), LABEL_FLAT_SIZE).toFloat() }; } }</code></pre> </div> </div> </p> <p>Below is generator code.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="false" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>const {createCanvas, loadImage} = require('canvas'); const tf = require('@tensorflow/tfjs'); require('@tensorflow/tfjs-node'); const fs = require('fs'); const util = require('util'); // const writeFile = util.promisify(fs.writeFile); // const readFile = util.promisify(fs.readFile); (async()=&gt;{ const canvas = createCanvas(28,28); const ctx = canvas.getContext('2d'); const ch1 = await loadImage('./u.png'); const ch2 = await loadImage('./q.png'); const ch3 = await loadImage('./r.png'); const ch4 = await loadImage('./c.png'); const ch5 = await loadImage('./z.png'); console.log(ch1); ctx.drawImage(ch1, 0, 0); const ch1Data = tf.fromPixels(canvas, 1); ctx.drawImage(ch2, 0, 0); const ch2Data = tf.fromPixels(canvas, 1); ctx.drawImage(ch3, 0, 0); const ch3Data = tf.fromPixels(canvas, 1); ctx.drawImage(ch4, 0, 0); const ch4Data = tf.fromPixels(canvas, 1); ctx.drawImage(ch5, 0, 0); const ch5Data = tf.fromPixels(canvas, 1); // console.log(await ch1Data.data()); const b1 = Buffer.from(await ch1Data.data()); const b2 = Buffer.from(await ch2Data.data()); const b3 = Buffer.from(await ch3Data.data()); const b4 = Buffer.from(await ch4Data.data()); const b5 = Buffer.from(await ch5Data.data()); const buffers = [b1,b2,b3,b4,b5]; const labels = [0,1,3,2,4,0,1,2,1,0,3,0,2,3,4,0,]; const Images = []; const size = labels.length; for(var i = 0; i &lt; size;i++){ Images.push(buffers[labels[i]]); } const imageHeaderBytes = 16; const imageRecordBytes = 28 * 28; const labelHeaderBytes = 8; const labelRecordBytes = 1; let imageBuffer = Buffer.alloc(imageHeaderBytes + size * 
imageRecordBytes); let labelBuffer = Buffer.alloc(labelHeaderBytes + size * labelRecordBytes); const imageHeaderValues = [2051, size, 28, 28]; const labelHeaderValues = [2049, size]; for (let i = 0; i &lt; 4; i++) { // Header data is stored in-order (aka big-endian) imageBuffer.writeUInt32BE(imageHeaderValues[i], i * 4); } for (let i = 0; i &lt; 2; i++) { // Header data is stored in-order (aka big-endian) labelBuffer.writeUInt32BE(labelHeaderValues[i], i * 4); } let imageindex = imageHeaderBytes; let labelindex = labelHeaderBytes; for(let i = 0; i &lt; size; i++){ // imageBuffer = Buffer.concat([imageBuffer, Images[i]]); // labelBuffer= Buffer.concat([labelBuffer, Buffer.from([labels[i]])]); // labelBuffer= Buffer.concat([labelBuffer, Buffer.from([labels[i]])]); const image = Images[i]; let index = 0; while(index &lt; image.byteLength){ imageBuffer.writeUInt8(image[index], imageindex); index++; imageindex++; } labelBuffer.writeUInt8(labels[i], labelindex++); } fs.writeFileSync('./testGeneratedImageBuffer', imageBuffer); fs.writeFileSync('./testGeneratedLabelBuffer', labelBuffer); })();</code></pre> </div> </div> </p>
<p>"ubyte" stands for "unsigned byte". It refers to an unsigned 8-bit integer. Each of the two <em>images</em>-ubyte* files contains a series of unsigned 8-bit integers. Every integer is a pixel in an MNIST image and has a value >=0 and &lt;=255.</p> <p>That's how the images are represented at the pixel level. Now let's take a look at the level of a whole image, consisting of 28 rows and 28 columns. It takes 28 * 28 = 784 such integers to represent an image. In the file, they are organized in a way such that the first 28 integers correspond to the first row, the next 28 integers correspond to the second row and so forth.</p> <p>All the images in the dataset are represented this way and their integers are concatenated to form the content of an <em>image</em>-ubyte file. Why are there two such files? This is because train-images-idx3-ubyte is the training dataset and t10k-images-idx3-ubyte is the test dataset. </p> <p>The other two files (<em>labels</em>-ubyte) are the labels for the MNIST images. Like the <em>image</em>-ubyte files, they contain uint8 (i.e., unsigned 8-bit integers). But instead of holding values fro 0-255, the label files have values >=0 and &lt;=9, because there are only 10 image classes in the MNIST dataset.</p> <p>Hope this is clear.</p>
node.js|image-processing|machine-learning|deep-learning|tensorflow.js
2
1,909,382
58,333,994
python PIL using Photoshop to open Image
<p>I am trying to open an image with the normal photo viewer, but the PIL module always opens up Photoshop to show the image.</p> <pre><code>from PIL import Image

image1 = Image.open("images/dog_1.jpg")
image1.show()
</code></pre>
<p>It looks like your default image viewer is set to Photoshop. Try changing it to another app in your system settings. If you are on Windows 10, navigate to Settings > Apps > Default apps and change the 'Photo viewer' app to the one you want.</p>
python|python-imaging-library
0
1,909,383
22,498,237
ValueError raised while converting linked list to number
<p>I want to convert a linked list to a single number, e.g.:</p> <pre><code>assert 120 == list_to_number([1,2,0])
assert -120 == list_to_number([-1,-2,0])
assert 0 == list_to_number([0])
</code></pre> <p>Here is the code I wrote for this, but it raises an error:</p> <pre><code>def list_to_number(head):
    p = True
    num = ''
    while (head!=None):
        val = str(head)
        if (val.find('-') == 0):
            p = False
        num = num + val.replace('-','')
        head = head.next
    if (p == False):
        return -1*int(num)
    else:
        return int(num)
</code></pre> <p>The error is:</p> <pre><code>ValueError: invalid literal for int() with base 10
</code></pre>
<p>Weird little function lol, but here you go</p> <pre><code>def list_to_number(head):
    negative = False
    final_number = ''
    for number in head:
        if number &lt; 0:
            negative = True
        final_number = final_number + str(abs(number))
    final_number = int(final_number)
    return -final_number if negative else final_number
</code></pre>
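<p>A quick check against the cases from the question (treating the input as a plain Python list, as the asserts there do):</p> <pre><code>assert list_to_number([1, 2, 0]) == 120
assert list_to_number([-1, -2, 0]) == -120
assert list_to_number([0]) == 0
</code></pre>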
python|list|while-loop|linked-list|numbers
1
1,909,384
22,896,229
Difficulty with writing a merge function with conditions
<p>I'm trying to write a function but simply can't get it right. This is supposed to be a merge function that merges as follows: the function receives as input a list of lists (m lists, all ints). The function creates a list that contains the indexes of the minimum values in each list of the input (one index per inner list, m indexes overall). Example: <code>lst_of_lsts = [[3,4,5],[2,0,7]]</code> gives <code>min_lst = [0,1]</code>.</p> <p>At each stage, the function chooses the minimum value among those indexed and adds it to a new list called merged. Then it erases it from the list of indexes (min_lst) and adds the next index, which is now the new minimum. At the end it returns merged, which is a list sorted from small ints to big ints. Example: <code>merged = [0,2,3,4,5,7]</code>.</p> <p>Another thing is that I'm not allowed to change the original input.</p>
<pre><code>def min_index(lst): return min(range(len(lst)), key=lambda n: lst[n]) def min_lists(lstlst): return [min_index(lst) for lst in lstlst] </code></pre> <p>then</p> <pre><code>min_lists([[3,4,5],[2,0,7]]) # =&gt; [0, 1] </code></pre> <p><strong>Edit:</strong></p> <p>This site doesn't exist to solve your homework for you. If you work at it and your solution doesn't do what you expect, show us what you've done and we'll try to point out your mistake.</p> <p>I figure my solution is OK because it's correct, but in such a way that your teacher will never believe you wrote it; however if you can understand this solution it should help you solve it yourself, and along the way teach you some Python-fu.</p>
python|python-3.x|merge
1
1,909,385
22,516,474
import error in pytest
<p>I am trying to import a file in the current directory, but I am getting something like this</p> <pre><code> module_not_in_the_pythonpath__ </code></pre> <p>for all the variables that I have defined in that file.</p> <p>I have added the path</p> <pre><code> C:/Project/main_test_folder </code></pre> <p>to the Python path.</p> <p>My directory structure is like this:</p> <pre><code>main_test_folder
    test_folder1                //working fine here
        __init__.py
        Constants.py
        Some test file          Constant.py will be imported.
    test_folder2                //working fine here
        __init__.py
        Constants.py
        //Some test file        Constant.py will be imported.
    test_folder3                //for this folder alone its not working
        __init__.py
        Constants.py
        Common.py               //Constant.py is imported in this (not working)
        Testcase.py
        //Some more test file, Constant.py, Common.py will be imported in each file.
</code></pre> <p>Common.py</p> <pre><code>from constants import *

class Payload(object):
    def __init__(self, params):
        '''
        Constructor
        '''

    @staticmethod
    def createPayLoad(Name, model):
        # NAME and MODEL are defined in constants.py
        PAYLOAD_PROFILE = {
            NAME: Name,
            MODEL: model,
        }
        return PAYLOAD
</code></pre> <p>Testcase.py</p> <pre><code>from common import *     # this line is not working
from constants import *
</code></pre> <p>Error:</p> <pre><code>test_folder3\test_01_rename.py:15:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    @staticmethod
    def createPayLoad(Name, model):
        PAYLOAD={
&gt;           NAME: Name,MODEL: model}
E           NameError: global name 'NAME' is not defined

test_folder3\common .py:34: NameError&lt;/failure&gt;
</code></pre>
<p>It's bad practice to use <code>import *</code> (<a href="https://stackoverflow.com/questions/2386714/why-is-import-bad">see here</a>).<br> If you do use it, you can control which names get exported by setting the <code>__all__</code> variable in the module (or in a package's <code>__init__.py</code>) (<a href="https://stackoverflow.com/questions/44834/can-someone-explain-all-in-python">see here</a>).</p>
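<p>For example, a minimal sketch (the constant names follow the question's <code>Common.py</code>; the values are just placeholders):</p> <pre><code># constants.py
NAME = 'name'      # hypothetical value, for illustration only
MODEL = 'model'

# Only these names are exported by `from constants import *`
__all__ = ['NAME', 'MODEL']
</code></pre>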
python|python-2.7|python-3.x|pytest
1
1,909,386
22,540,537
Tool for multiplying really big numbers
<p><strong>I wrote a program for multiplying really big numbers.</strong><br> Let's say I have two numbers: the first with 532 digits, the second with 526 digits. My program gives me a number that looks quite right: it has 1058 digits.</p> <p>I tried to compare the result with some tool to check if my program calculates it right. I used Python: for these input numbers, it looks correct. The first digits are the same, the last digits are the same. I can't compare every digit because there are over a thousand now.<br>Also, I want to check my program for bigger input numbers.</p> <p>So, to finally verify the result, I type in Python:</p> <pre><code>first_number * second_number - my_program_result </code></pre> <p>For numbers with ~ &lt;200 digits the result is 0. For bigger numbers I got:</p> <pre><code>-1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000L </code></pre> <p><strong>I don't know if it's my program's fault or Python's.<br>Can Python handle such big numbers?<br> Is there any other tool I can use to verify my results?</strong></p> <p><strong>EDIT</strong> As asked, here are the numbers that gave me that output:</p> <pre><code>3475674888888888888888888888888888888888888888888888888888888888888888888666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666699999999999999999999999999999999999999999999999999999999999999999999999999999999999999999933333333333333333333333333333333333333333333333333333333333333366666666666666666666666666666666666666666666666666666666666666662222222222222222222222222222222222222222222222222222222222222222288888888888888888888888888888888888888888888888888888888888882222222222222222222222222
3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333377777777777777777777777777777777777777777777777777777777777777777777777777777777777777778888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222299999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
</code></pre>
<p>Try using Python's built-in function <code>cmp()</code> to compare the product of your multiplied numbers to your expected result.</p> <pre><code>&gt;&gt;&gt; print cmp(product_of_multiplication, expected_result) </code></pre> <p>If <code>product_of_multiplication == expected_result</code>, then <code>0</code> will be returned.</p> <p>If <code>product_of_multiplication &gt; expected_result</code>, then <code>1</code> will be returned.</p> <p>If <code>product_of_multiplication &lt; expected_result</code>, then <code>-1</code> will be returned.</p>
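<p>Note that <code>cmp()</code> only exists in Python 2; it was removed in Python 3. There you can compare the (arbitrarily large) integers directly:</p> <pre><code># Python 3: no cmp(), but comparison operators work on big ints
print(product_of_multiplication == expected_result)
</code></pre>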
python
0
1,909,387
14,854,659
Python: Why can't I unpack a tuple into a dictionary?
<p>Why doesn't this work?:</p> <pre><code>d["a"], d["b"] = *("foo","bar") </code></pre> <p>Is there a better way to achieve what I'm trying to achieve?</p>
<p>It would work if you define a dictionary <code>d</code> before hand, and remove the <code>*</code> from there:</p> <pre><code>&gt;&gt;&gt; d = {} &gt;&gt;&gt; d["a"], d["b"] = ("foo","bar") </code></pre> <p>In fact, you don't need those parenthesis on the RHS, so this will also work:</p> <pre><code>&gt;&gt;&gt; d['a'], d['b'] = 'foo', 'bar' </code></pre>
python|dictionary|iterable-unpacking
17
1,909,388
73,074,404
How to add arrows between the shapes in python plotly?
<p>I created a plotly figure with three rectangles having space between them. I want to add arrows between the rectangle shapes pointing towards the right as passed by the user. How can I create arrows between the rectangle shapes?</p> <p>Here is an example of how I want the figure:</p> <p><a href="https://i.stack.imgur.com/d6FI8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d6FI8.png" alt="IMAGE" /></a></p> <p>Many thanks in advance!</p>
<p>Finally, I found the solution. Posting it hoping it will help people in the future:</p> <pre><code>fig.add_annotation(
    ax=x_axis_start_point_of_arrow, axref='x',   # arrow tail
    ay=y_axis_start_point_of_arrow, ayref='y',
    x=x_axis_end_point_of_arrow, xref='x',       # arrow head
    y=y_axis_end_point_of_arrow, yref='y',
    arrowcolor='red', arrowwidth=2.5,
    arrowside='end', arrowsize=1, arrowhead=4)
</code></pre>
python|python-3.x|plotly
0
1,909,389
55,841,594
Extract underlined text from pdf
<p>I am trying to extract data from a PDF, where the data is in a table. I am able to extract the table data using pandas and read it.</p> <p>Recently the data changed, and now I am supposed to extract only those values that are underlined in the table in the PDF. The table structure is the same, but the data to extract is now underlined. I tried OCR (Tesseract) to extract the data, but with no luck, as it extracted all the data; I only need the underlined data.</p> <p>If it helps, the underline is always in red.</p> <p>I am using Python as the programming language.</p>
<pre class="lang-py prettyprint-override"><code>for run in para.runs: if run.font.underline : underline.append(run.text) </code></pre>
python|pandas|pdf|ocr
-1
1,909,390
50,103,666
Remove non-int elements from a list
<p>Hi, I have this task in Python where I should remove all non-int elements. The result of the code below is <code>[2, 3, 1, [1, 2, 3]]</code>, and I have no idea why the inner list is not removed. Only tested suggestions please, I mean working ones.</p> <pre><code># Great! Now use .remove() and/or del to remove the string,
# the boolean, and the list from inside of messy_list.
# When you're done, messy_list should have only integers in it
messy_list = ["a", 2, 3, 1, False, [1, 2, 3]]

for ele in messy_list:
    print('the type of {} is {} '.format(ele,type(ele)))
    if type(ele) is not int:
        messy_list.remove(ele)

print(messy_list)
</code></pre>
<p>Your loop misses the inner list because you remove items from <code>messy_list</code> while iterating over it, which shifts the remaining elements and makes the iterator skip some of them. Building a new list avoids that:</p> <pre><code>&gt;&gt;&gt; messy_list = ["a", 2, 3, 1, False, [1, 2, 3]]
&gt;&gt;&gt; [elem for elem in messy_list if type(elem) == int]
[2, 3, 1]
</code></pre>
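<p>One detail worth noting: <code>type(elem) == int</code> is used deliberately, because <code>bool</code> is a subclass of <code>int</code>, so a plain <code>isinstance</code> check would keep <code>False</code>. If you prefer <code>isinstance</code>, exclude bools explicitly:</p> <pre><code>messy_list = ["a", 2, 3, 1, False, [1, 2, 3]]
clean = [e for e in messy_list if isinstance(e, int) and not isinstance(e, bool)]
print(clean)  # [2, 3, 1]
</code></pre>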
python
3
1,909,391
66,524,953
Paramiko TypeError: '<' not supported between instances of 'int' and 'str'
<p>I am currently trying to move a file from SFTP to an S3 bucket using the Paramiko library in a Lambda function in Python, but I am facing a type error.</p> <p>My code:</p> <pre><code>def open_ftp_connection(ftp_host, ftp_port, ftp_username, ftp_password):
    '''
    Opens ftp connection and returns connection object
    '''
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        transport = paramiko.Transport(ftp_host, ftp_port)
    except Exception as e:
        return 'conn_error'
    try:
        transport.connect(username=ftp_username, password=ftp_password)
    except Exception as identifier:
        return 'auth_error'
    ftp_connection = paramiko.SFTPClient.from_transport(transport)
    return ftp_connection
</code></pre> <p><code>ftp_host</code> is <code>str</code> - <code>13.xxx.1xx.xx</code><br /> <code>ftp_port</code> is <code>str</code> - <code>22</code><br /> <code>ftp_username</code> is <code>str</code></p> <p>Error:</p> <pre class="lang-none prettyprint-override"><code>[ERROR] TypeError: '&lt;' not supported between instances of 'int' and 'str'
Traceback (most recent call last):
  File &quot;/var/task/transfer_data.py&quot;, line 159, in transfer_handler
    ftp_connection = paramiko.SFTPClient.from_transport(transport)
  File &quot;/var/task/paramiko/sftp_client.py&quot;, line 165, in from_transport
    window_size=window_size, max_packet_size=max_packet_size
  File &quot;/var/task/paramiko/transport.py&quot;, line 879, in open_session
    timeout=timeout,
  File &quot;/var/task/paramiko/transport.py&quot;, line 973, in open_channel
    window_size = self._sanitize_window_size(window_size)
  File &quot;/var/task/paramiko/transport.py&quot;, line 1970, in _sanitize_window_size
    return clamp_value(MIN_WINDOW_SIZE, window_size, MAX_WINDOW_SIZE)
  File &quot;/var/task/paramiko/util.py&quot;, line 308, in clamp_value
    return max(minimum, min(val, maximum))
</code></pre>
<p>As @Vishnudev answered, the port should be given as an integer.</p>
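<p>A minimal sketch of the fix inside the question's <code>open_ftp_connection</code> (assuming <code>ftp_port</code> arrives as a string such as <code>&quot;22&quot;</code>; note that Paramiko also accepts the address as a <code>(host, port)</code> tuple):</p> <pre><code>try:
    # Convert the port to int before handing it to Paramiko
    transport = paramiko.Transport((ftp_host, int(ftp_port)))
except Exception as e:
    return 'conn_error'
</code></pre>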
python|ssh|aws-lambda|sftp|paramiko
2
1,909,392
64,744,860
how to import django admin inlines to CreateView?
<p>This is my CreateView:</p> <pre><code>class PizzaCreateView(PermissionRequiredMixin, SuccessMessageMixin, CreateView):
    model = Pizza
    fields = ['name','price','pizza_description','toppings','Admin.PizzaImageInline']
    action = 'Add pizza'
    success_url = reverse_lazy('pages:home')
    permission_required = 'pizzas.add_pizza'
    success_message = '&quot;%(name)s&quot; was created successfully'
</code></pre> <p>This is my admin.py:</p> <pre><code>class PizzaImageInline(admin.TabularInline):
    model = PizzaImage
    extra = 3

class PizzaAdmin(admin.ModelAdmin):
    inlines = [
        PizzaImageInline,
    ]

admin.site.register(Pizza, PizzaAdmin)
</code></pre> <p>How can I transfer PizzaAdmin's inline to my view form?</p>
<p>Maybe use <code>django-extra-views</code>: <a href="https://github.com/AndrewIngram/django-extra-views/" rel="nofollow noreferrer">https://github.com/AndrewIngram/django-extra-views/</a></p> <p>See also: <a href="https://stackoverflow.com/questions/35560758/django-createview-with-multiple-models">Django - CreateView with multiple models</a></p>
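<p>A rough sketch of what that could look like for the question's models (treat the class names as assumptions about the current <code>django-extra-views</code> API and check its docs; the <code>image</code> field name is hypothetical):</p> <pre><code>from extra_views import CreateWithInlinesView, InlineFormSetFactory

class PizzaImageInline(InlineFormSetFactory):
    model = PizzaImage
    fields = ['image']  # hypothetical field on PizzaImage

class PizzaCreateView(CreateWithInlinesView):
    model = Pizza
    inlines = [PizzaImageInline]
    fields = ['name', 'price', 'pizza_description', 'toppings']
</code></pre>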
python|django
0
1,909,393
71,941,810
How to merge partial data into a given df
<p>How can I merge partial data into a given df, without changing unknown values? Here is a minimal example:</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; import numpy as np &gt;&gt;&gt; data = { ... &quot;Feat_A&quot;: [&quot;INVALID&quot;, &quot;INVALID&quot;, &quot;INVALID&quot;], ... &quot;Feat_B&quot;: [&quot;INVALID&quot;, &quot;INVALID&quot;, &quot;INVALID&quot;], ... &quot;Key&quot;: [12, 25, 99], ... } &gt;&gt;&gt; &gt;&gt;&gt; origin = pd.DataFrame(data=data) &gt;&gt;&gt; data = {&quot;Feat_A&quot;: [1, np.nan], &quot;Feat_B&quot;: [np.nan, 2], &quot;Key&quot;: [12, 99]} &gt;&gt;&gt; new = pd.DataFrame(data=data) &gt;&gt;&gt; origin = origin.merge( ... new[[&quot;Key&quot;, &quot;Feat_A&quot;, &quot;Feat_B&quot;]], ... on=&quot;Key&quot;, ... how=&quot;left&quot;, ... ) &gt;&gt;&gt; &gt;&gt;&gt; origin Feat_A_x Feat_B_x Key Feat_A_y Feat_B_y 0 INVALID INVALID 12 1.0 NaN 1 INVALID INVALID 25 NaN NaN 2 INVALID INVALID 99 NaN 2.0 </code></pre> <p>This is what I am looking for:</p> <pre><code># Feat_A Feat_B Key # 0 1.0 INVALID 12 # 1 INVALID INVALID 25 # 2 INVALID 2.0 99 </code></pre>
<p>First set the &quot;INVALID&quot; cells to NaN, then set <code>Key</code> as the index and use the <code>fillna</code> method to fill the values from <code>new</code> into <code>origin</code>; finally put &quot;INVALID&quot; back where nothing was merged:</p> <pre><code>import numpy as np

origin = origin.replace('INVALID', np.nan)
originNew = (origin.set_index('Key')
                   .fillna(new.set_index('Key'))
                   .fillna('INVALID')
                   .reset_index())
</code></pre>
python-3.x|pandas
0
1,909,394
68,763,560
pyspark local[*] vs spark.executor.cores"
<p>I am running a Spark cluster in local mode using Python/PySpark. Some of the Spark configuration options are set to: <code>&quot;spark.executor.cores&quot;: &quot;8&quot;</code> <code>&quot;spark.cores.max&quot;: &quot;8&quot;</code></p> <p>After setting all options:</p> <pre><code>SparkSession.builder.config(conf=spark_configuration) </code></pre> <p>I build the Spark context: <code>SparkSession.builder.master(&quot;local[*]&quot;).appName(application_name).getOrCreate()</code></p> <p>My machine has 16 cores, and I see that the application consumes all available resources. My question is, how do the option <code>&quot;local[*]&quot;</code> and <code>&quot;spark.executor.cores&quot;: &quot;8&quot;</code> influence the Spark driver (how many cores will the local executor consume)?</p>
<p>This is what I observed on a system with 12 cores:</p> <p>When I set executor cores to 4, a total of 3 executors are created with 4 cores each in standalone mode.</p> <p><a href="https://i.stack.imgur.com/yRPi7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yRPi7.jpg" alt="spark-ui standalone mode" /></a></p> <p>But this is not the case in local mode. Even if I pass the flag <code>--num-executors 4</code> or change <code>spark.driver.cores/spark.executor.cores/spark.executor.instances</code>, nothing changes the number of executors. There is always just one executor, with id <code>driver</code>, and its cores equal whatever we pass in the master URL. <a href="https://i.stack.imgur.com/rEdAe.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rEdAe.jpg" alt="spark-ui local mode" /></a></p>
python|apache-spark|pyspark
1
1,909,395
10,730,836
Merge two lists, one as keys, one as values, into a dict in Python
<p>Is there any <strong>built-in</strong> function in Python that merges two lists into a dict? Like:</p> <pre><code>combined_dict = {}
keys = ["key1","key2","key3"]
values = ["val1","val2","val3"]
for k,v in zip(keys,values):
    combined_dict[k] = v
</code></pre> <p>Where:</p> <p><code>keys</code> acts as the list that contains the keys.</p> <p><code>values</code> acts as the list that contains the values.</p> <p>There is a PHP function called <a href="http://php.net/manual/en/function.array-combine.php" rel="noreferrer">array_combine</a> that achieves this effect.</p>
<p>Seems like this should work, though I guess it's not <em>one</em> single function:</p> <pre><code>dict(zip(["key1","key2","key3"], ["val1","val2","val3"])) </code></pre> <p>from here: <a href="https://stackoverflow.com/questions/7271385/how-do-i-combine-two-lists-into-a-dictionary-in-python">How do I combine two lists into a dictionary in Python?</a></p>
python|list|dictionary
8
1,909,396
62,614,370
How can I take input from something the Console has printed in Python?
<p>I am building an <code>Encrypter/Decrypter</code> and I already got the backend working. To <code>encrypt or decrypt a file</code> you need to <code>input the name and the extension</code> (<code>Example: Image.png</code>).</p> <p>If the file name you input doesn't exist or has a typo, the console prints:</p> <blockquote> <pre><code>FileNotFoundError: [Errno 2] No such file or directory: </code></pre> </blockquote> <p>followed by your input text.</p> <p>What I want is: if the console prints that error message, then display a text that says <code>&quot;No file found. Please check your directory or your spelling&quot;</code> using <code>Tkinter</code>.</p> <p>Please help me so I can finish the project.</p>
<p>You just have to catch the exception in your code instead of watching the console output:</p> <pre><code>try:
    f = open(filename)  # `filename` is the name the user typed in
except FileNotFoundError:
    print(&quot;No file found. Please check your directory or your spelling&quot;)
</code></pre> <p>and that should work.</p>
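<p>Since you mentioned Tkinter, here is a minimal sketch of showing the same message in a dialog instead of the console (it assumes a Tk root window already exists in your app):</p> <pre><code>from tkinter import messagebox

try:
    f = open(filename)
except FileNotFoundError:
    messagebox.showerror(
        &quot;Error&quot;,
        &quot;No file found. Please check your directory or your spelling&quot;)
</code></pre>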
python|debugging|tkinter|console
0
1,909,397
61,886,303
Accuracy/train and Loss/train graph by tensorboard
<p>I used TensorBoard for my PyTorch project and got this result for accuracy/train and loss/train, but I don't understand what it means.<br> <img src="https://i.stack.imgur.com/OrCyO.jpg" alt="enter image description here"></p>
<p>Your loss does not decrease and your accuracy does not increase during training. Not significantly.</p> <p>First thing to try is adjusting the learning rate:<br> - One possibility is that the learning rate is too small, and therefore the weight updates are tiny and insignificant. Try increasing the learning rate by factor of x10 or even x100.<br> - On the other hand, your loss/accuracy do seem to oscillate, which may suggest update steps are too large. Try decreasing the learning rate by x10 and see if this oscillation subsides.</p>
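<p>For reference, a minimal sketch of where the learning rate lives in a typical PyTorch setup (the optimizer choice here is just an example):</p> <pre><code>import torch.optim as optim

optimizer = optim.Adam(model.parameters(), lr=1e-3)
# Oscillating loss? try lr=1e-4. Flat loss? try lr=1e-2 or 1e-1.
</code></pre>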
deep-learning|pytorch|tensorboard|gradient-descent|loss
1
1,909,398
61,726,567
Python - Accept list of strings (names) as parameter and store it in array
<p>This is Python code. I found multiple examples of storing integer values in an array variable, but I don't see any working example of storing strings (passed as input) in an array.</p> <pre><code>try:
    my_list = []
    while True:
        **my_list.append(int(input()))**
except:
    print(my_list)
</code></pre> <p>The above code gives the output below, where I provided 4 integer input values.</p> <pre><code>./test-00.py
3
54
7
90
[3, 54, 7, 90]
</code></pre> <p>If I change the code to accept strings instead of integers, as shown in the line below, the result breaks and I am thrown out as soon as I hit Return.</p> <pre><code>my_list.append(str(input())) </code></pre> <p>Can anyone tell me how I can take multiple strings as values of one array variable and print them?</p>
<p>You are exiting your infinite loop in the first code snippet because you encountered an exception. Pressing Enter is the same as reading an empty string, and an empty string cannot be converted to an integer. Hence an exception is raised and you exit the <code>while True</code> loop.</p> <p><strong>Demo</strong>:</p> <pre><code>&gt;&gt;&gt; int(input())

Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
ValueError: invalid literal for int() with base 10: ''
</code></pre> <p>Here you can implement the same approach, considering that you need to break out of the loop on empty input:</p> <pre><code>my_list = []
while True:
    inp = input()
    if inp == '':
        break
    my_list.append(inp)

print(my_list)
</code></pre>
python|arrays|string|parameters
1
1,909,399
61,856,697
How to have all the dataframe columns included
<p>I have a concern here. What I have programmed can do this:</p> <pre><code>df = pd.DataFrame({'STREAM':['EAGLE','HAWK','NORTH','HAWK','EAGLE','HAWK','NORTH'],'MAT':['A','D','F','D','C','C','E'],'KIS':['B','D','E','D','A','C','D']})
columns = ["A","B","C","D","E", "F"]

a = (pd.crosstab(df.STREAM,df.MAT, margins=True, margins_name='TOTAL').iloc[:,:-1].reindex(columns, axis=1, fill_value=0).rename_axis(None))
saved = a.to_csv(index=False)
a['TOT'] = a.sum(axis=1)
a['MEAN'] = a.mean(axis=1).round(2)

def x(i):
    if i &gt;5:
        grade='A'
    else:
        grade='E'
    return grade

a['GRD'] = a.MEAN.apply(x)
print(a)
</code></pre> <p>This gets me a result of:</p> <pre><code>MAT     A  B  C  D  E  F  TOT  MEAN GRD
EAGLE   1  0  1  0  0  0    2  0.57   E
HAWK    0  0  1  2  0  0    3  0.86   E
NORTH   0  0  0  0  1  1    2  0.57   E
TOTAL   1  0  2  2  1  1    7  2.00   E
</code></pre> <p>This is near what I want, but there's one problem: only MAT is included. Is there a way of including the total observations for both 'MAT' and 'KIS' in the table, and also having the name 'MAT' in the top left corner of my table removed (blank)?</p> <p>EDITS Expected output:</p> <pre><code>        A  B  C  D  E  F  TOT  MEAN GRD
EAGLE   2  1  1  0  0  0    4  0.??   E
HAWK    0  0  2  4  0  0    6  0.??   E
NORTH   0  0  0  1  2  1    4  0.??   E
TOTAL   2  1  3  5  2  1   14    ??   E
</code></pre>
<p>I think you need to change the shape of you dataframe before doing the <code>crosstab</code>. Here is one way with <code>melt</code> and <code>pivot_table</code> instead of <code>crosstab</code> (mostly because I'm unsure how to use this method yet):</p> <pre><code>a = df.melt(id_vars=['STREAM'], value_vars=['MAT','KIS'])\ .pivot_table(index='STREAM', columns='value', values='variable', aggfunc='count', fill_value=0, margins=True, margins_name='TOTAL')\ .rename_axis(columns=None)\ .rename(columns={'TOTAL':'TOT'}) a['MEAN'] = a.iloc[:,:-1].mean(axis=1).round(2) a["GRADE"] = np.where(a['MEAN']&gt; 5, 'A', 'E') print (a) A B C D E F TOT MEAN GRADE STREAM EAGLE 2 1 1 0 0 0 4 0.67 E HAWK 0 0 2 4 0 0 6 1.00 E NORTH 0 0 0 1 2 1 4 0.67 E TOTAL 2 1 3 5 2 1 14 2.33 E </code></pre>
python|pandas
3