Dataset schema (column, dtype, observed min/max):
Unnamed: 0 (int64): 0 to 1.91M
id (int64): 337 to 73.8M
title (string): lengths 10 to 150
question (string): lengths 21 to 64.2k
answer (string): lengths 19 to 59.4k
tags (string): lengths 5 to 112
score (int64): -10 to 17.3k
1,905,300
71,854,149
Pandasql conditional join with one result returned from right table
<p>I'm doing an analysis in Pandas which requires a few conditional joins for which it is very practical to switch over to SQL with pandasql. Unfortunately I'm having a problem with one of the joins where there are dates involved and the join requires only one result to be returned.</p> <p>Having table1 where the ID, REGION require an exact match on table2 and DATE requires an approximate match where the DATE in table1 needs to be &lt;= then the DATE in table2 and the closest one (smallest day difference).</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>REGION</th> <th>DATE</th> </tr> </thead> <tbody> <tr> <td>111</td> <td>ABC</td> <td>12.05.2021</td> </tr> <tr> <td>111</td> <td>ZDF</td> <td>14.02.2021</td> </tr> <tr> <td>222</td> <td>DEF</td> <td>31.12.2021</td> </tr> </tbody> </table> </div> <p>And table2 with the additional column INDIC which needs to be returned in the join</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID</th> <th>REGION</th> <th>DATE</th> <th>INDIC</th> </tr> </thead> <tbody> <tr> <td>111</td> <td>ABC</td> <td>30.06.2021</td> <td>Y</td> </tr> <tr> <td>111</td> <td>ABC</td> <td>12.08.2021</td> <td>X</td> </tr> <tr> <td>111</td> <td>ABC</td> <td>15.10.2021</td> <td>Z</td> </tr> <tr> <td>222</td> <td>DEF</td> <td>08.10.2021</td> <td>A</td> </tr> <tr> <td>222</td> <td>DEF</td> <td>05.01.2022</td> <td>B</td> </tr> <tr> <td>222</td> <td>DEF</td> <td>13.04.2022</td> <td>C</td> </tr> </tbody> </table> </div> <p>The result I would expect of the join should look like:</p> <p><a href="https://i.stack.imgur.com/s3av9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s3av9.png" alt="enter image description here" /></a></p> <p>The code that I have for the moment is:</p> <pre><code>SELECT df_left.*, df_right.[INDIC] FROM tbl1_df AS df_left LEFT JOIN tbl2_df AS df_right ON df_left.[ID] = df_right.[ID] AND df_left.[REGION] = df_right.[REGION] WHERE (df_left.[DATE] &lt;= df_right.[DATE]) </code></pre> <p>This will at the moment present an unwanted but expected result: <a href="https://i.stack.imgur.com/8G5w4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8G5w4.png" alt="enter image description here" /></a></p> <p>Can anyone suggest how the SQL part should look like?</p>
<p>I think I found the best solution so far with the code below:</p> <pre><code>SELECT tbl1_df.*, tbl2_df.[INDIC]
FROM tbl1_df
LEFT JOIN tbl2_df
    ON tbl1_df.[ID] = tbl2_df.[ID]
    AND tbl1_df.[REGION] = tbl2_df.[REGION]
    AND tbl2_df.[DATE] = (
        SELECT MIN(DATE)
        FROM tbl2_df
        WHERE (tbl2_df.[ID] = tbl1_df.[ID])
        AND (tbl1_df.[DATE] &lt;= tbl2_df.[DATE])
        AND (tbl1_df.[REGION] = tbl2_df.[REGION])
    )
</code></pre> <p>Note the REGION condition in the outer ON clause as well; without it, a row from another region that happens to share the minimum date would also match. This gives the following result, and I can easily deal with the NONE values later on.</p> <p><a href="https://i.stack.imgur.com/UvzNA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UvzNA.png" alt="enter image description here" /></a></p>
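<p>For completeness, the same join can stay entirely in pandas: <code>pd.merge_asof</code> does exactly this kind of nearest-date left join. A minimal sketch, assuming both DATE columns have already been parsed to datetimes:</p> <pre><code>import pandas as pd

# merge_asof requires both frames to be sorted by the join key
tbl1_df = tbl1_df.sort_values('DATE')
tbl2_df = tbl2_df.sort_values('DATE')

# direction='forward' picks the first tbl2 DATE &gt;= the tbl1 DATE,
# matching exactly on ID and REGION; unmatched rows get NaN for INDIC
result = pd.merge_asof(tbl1_df, tbl2_df,
                       on='DATE', by=['ID', 'REGION'],
                       direction='forward')
</code></pre>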
sql|pandas
0
1,905,301
68,637,235
Pandas pct_change on only one column in the data frame to create a new column
<p>I'm attempting to add a new column in my pandas data frame for the daily % change of a specific column in my dataframe. Whenever I try to use the pct_change() method, it creates a new dataframe and applies pct_change() to all columns in the df. Below is the table I currently have:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: right;">new_date</th> <th style="text-align: right;">Name</th> <th style="text-align: right;">sentiment_polarity</th> <th style="text-align: right;">sentiment_score</th> <th style="text-align: right;">engagement</th> <th style="text-align: right;">engagement_polarity_score</th> <th style="text-align: right;">engagement_sentiment_score</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: right;">2020-01-01</td> <td style="text-align: right;">Bitcoin</td> <td style="text-align: right;">0.342000</td> <td style="text-align: right;">0.107069</td> <td style="text-align: right;">6.142000</td> <td style="text-align: right;">-0.325000</td> <td style="text-align: right;">0.589380</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: right;">2020-01-01</td> <td style="text-align: right;">Cardano</td> <td style="text-align: right;">0.334572</td> <td style="text-align: right;">0.133310</td> <td style="text-align: right;">11.256506</td> <td style="text-align: right;">8.866171</td> <td style="text-align: right;">2.509937</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: right;">2020-01-01</td> <td style="text-align: right;">Dogecoin</td> <td style="text-align: right;">0.434783</td> <td style="text-align: right;">0.155303</td> <td style="text-align: right;">13.173913</td> <td style="text-align: right;">11.121739</td> <td style="text-align: right;">2.742231</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: right;">2020-01-01</td> <td style="text-align: right;">Ethereum</td> <td style="text-align: right;">0.389000</td> <td style="text-align: right;">0.133417</td> <td style="text-align: right;">6.121000</td> <td style="text-align: right;">4.652000</td> <td style="text-align: right;">1.480854</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: right;">2020-01-01</td> <td style="text-align: right;">Stellar</td> <td style="text-align: right;">0.759000</td> <td style="text-align: right;">0.216281</td> <td style="text-align: right;">7.437000</td> <td style="text-align: right;">6.385000</td> <td style="text-align: right;">1.851542</td> </tr> <tr> <td style="text-align: right;">5</td> <td style="text-align: right;">2020-01-02</td> <td style="text-align: right;">Bitcoin</td> <td style="text-align: right;">0.202000</td> <td style="text-align: right;">0.067189</td> <td style="text-align: right;">4.512000</td> <td style="text-align: right;">1.536000</td> <td style="text-align: right;">0.568809</td> </tr> <tr> <td style="text-align: right;">6</td> <td style="text-align: right;">2020-01-02</td> <td style="text-align: right;">Cardano</td> <td style="text-align: right;">0.307971</td> <td style="text-align: right;">0.120505</td> <td style="text-align: right;">17.282609</td> <td style="text-align: right;">5.355072</td> <td style="text-align: right;">1.606946</td> </tr> <tr> <td style="text-align: right;">7</td> <td style="text-align: right;">2020-01-02</td> <td style="text-align: right;">Dogecoin</td> <td style="text-align: right;">0.266667</td> <td style="text-align: 
right;">0.095962</td> <td style="text-align: right;">2.266667</td> <td style="text-align: right;">1.276190</td> <td style="text-align: right;">0.553433</td> </tr> <tr> <td style="text-align: right;">8</td> <td style="text-align: right;">2020-01-02</td> <td style="text-align: right;">Ethereum</td> <td style="text-align: right;">0.244000</td> <td style="text-align: right;">0.098055</td> <td style="text-align: right;">9.670000</td> <td style="text-align: right;">4.583000</td> <td style="text-align: right;">1.637720</td> </tr> <tr> <td style="text-align: right;">9</td> <td style="text-align: right;">2020-01-02</td> <td style="text-align: right;">Stellar</td> <td style="text-align: right;">0.729000</td> <td style="text-align: right;">0.206842</td> <td style="text-align: right;">5.765000</td> <td style="text-align: right;">4.617000</td> <td style="text-align: right;">1.093504</td> </tr> </tbody> </table> </div> <p>I'd like for there to be another column at the end that captures the daily % change of the engagement_sentiment_score column.</p> <p>I tried using the below snippet but get an error:</p> <pre><code>Bit['Daily % Sentiment Change'] = Bit.pct_change(axis=1)['engagement_sentiment_score'] </code></pre> <p>Error message: TypeError: unsupported operand type(s) for /: 'str' and 'str'</p> <p>I then checked the data type for the values in the engagement_sentiment_score column and it says that they are floats, so I'm unsure why I'm getting this error.</p> <p>Thanks for the help!</p>
<p>The error comes from running <code>pct_change()</code> over the whole frame, which includes your string columns. Note that <code>axis=1</code> is also not what you want here: it computes the change <em>between columns</em> on each row, not the day-over-day change. Select the one column and use the default <code>axis=0</code>:</p> <pre><code>Bit['Daily % Sentiment Change'] = Bit['engagement_sentiment_score'].pct_change()
</code></pre>
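<p>One caveat, sketched under the assumption that the sample data is representative: the rows interleave several coins per date, so a plain <code>pct_change()</code> would compare, say, Stellar's last row of one day against Bitcoin's first row of the next. Grouping by <code>Name</code> keeps each coin's series separate:</p> <pre><code># per-coin daily % change of engagement_sentiment_score
Bit['Daily % Sentiment Change'] = (
    Bit.groupby('Name')['engagement_sentiment_score'].pct_change()
)
</code></pre>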
python|pandas|dataframe|numpy|methods
0
1,905,302
10,471,726
Add Custom Art to ToolBar
<p>This source code</p> <pre><code>import wx

class MyToolBar(wx.ToolBar):
    def AddTool2(self, id, shortHelp='', longHelp=''):
        global TB_SIZE
        try:
            ArtId = ArtMap.get(id)
            Ico = wx.ArtProvider.GetBitmap(ArtId, wx.ART_TOOLBAR, TB_SIZE)
            self.AddSimpleTool(id, Ico, shortHelp, longHelp)
        except StandardError:
            print('Something wrong, maybe wrong id')

class MyFrame(wx.Frame):
    def __init__(self, parent, *args, **kwargs):
        wx.Frame.__init__(self, parent, *args, **kwargs)
        ToolBar = MyToolBar(self)
        ToolBar.AddTool2(wx.ID_NEW, 'New', 'Creates new file')
        self.SetToolBar(ToolBar)
        self.GetToolBar().Realize()

ArtMap = {
    wx.ID_NEW: wx.ART_NEW,
}
ID_BOUNCE = wx.NewId()
TB_SIZE = wx.Size(16, 16)

app = wx.App()
frame = MyFrame(None, -1, 'MyFrame', (0, 0))
frame.Show()
app.MainLoop()
</code></pre> <p>works well for adding tools to the toolbar when the tool has a wx.ART constant. But how do you add a new tool that has no wx.ART, or no wx.ART that represents it well, like ID_BOUNCE, where the Bounce tool is supposed to make a ball bounce in the frame?</p> <p>Thanks in advance.</p>
<p><code>wx.ToolBar</code> has <a href="http://www.wxpython.org/docs/api/wx.ToolBarBase-class.html#AddLabelTool" rel="nofollow"><code>AddLabelTool</code></a> method with a bitmap parameter.<br> Find an example over at <a href="http://zetcode.com/wxpython/menustoolbars/" rel="nofollow">zetcode</a>.</p>
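<p>A minimal sketch of what that looks like for the bounce tool; the icon path is a placeholder for whatever 16x16 image you supply yourself:</p> <pre><code>def AddTool3(self, id, label, bitmapPath, shortHelp=''):
    # any wx.Bitmap works here, no wx.ArtProvider needed
    bmp = wx.Bitmap(bitmapPath, wx.BITMAP_TYPE_PNG)
    self.AddLabelTool(id, label, bmp, shortHelp=shortHelp)

# inside MyFrame.__init__:
# ToolBar.AddTool3(ID_BOUNCE, 'Bounce', 'bounce.png', 'Bounce the ball')
</code></pre>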
python|wxpython|toolbar|wxwidgets
1
1,905,303
10,852,345
Most efficient way to create a dictionary of empty lists in Python?
<p>I am initializing a dictionary of empty lists. Is there a more efficient method of initializing the dictionary than the following?</p> <pre><code>dictLists = {} dictLists['xcg'] = [] dictLists['bsd'] = [] dictLists['ghf'] = [] dictLists['cda'] = [] ... </code></pre> <p>Is there a way I do not have to write dictLists each time, or is this unavoidable?</p>
<p>You can use <a href="http://docs.python.org/library/collections.html#collections.defaultdict" rel="nofollow noreferrer">collections.defaultdict</a>. It lets you set a factory that is called to build the value for any missing key, so the empty lists are created on first access:</p> <pre><code>a = collections.defaultdict(list) </code></pre> <p>Edit:</p> <p>Here are my keys:</p> <pre><code>b = ['a', 'b', 'c', 'd', 'e'] </code></pre> <p>And here is me using the &quot;predefined keys&quot;:</p> <pre><code>for key in b:
    a[key].append(33)
</code></pre>
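<p>If you'd rather keep a plain <code>dict</code> with all keys created up front, a dict comprehension does it in one line. One pitfall worth sketching: <code>dict.fromkeys(keys, [])</code> looks equivalent but makes every key share the <em>same</em> list object.</p> <pre><code>keys = ['xcg', 'bsd', 'ghf', 'cda']

dictLists = {key: [] for key in keys}   # a fresh list per key

bad = dict.fromkeys(keys, [])           # one shared list!
bad['xcg'].append(1)
print(bad['bsd'])                       # [1] -- probably not what you want
</code></pre>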
python
8
1,905,304
62,729,933
Creating a LSTM Neural Network outputting two Y variables
<p>I have a classification task based on Tweets. The Tweets are my only input (X) variable. I have two target (y) variables however, one target variable should output 1 or 0 for either positive or negative, while the other target variable should output 1 or 0 for either political or non-political.</p> <p>I created a LSTM neural network which is running, but I can't seem to get it to output two target variables. Can somebody please advise?</p> <p>My input shapes are:</p> <pre><code>X_train y_train (15203, 250) (15203, 2) X_val y_val (3801, 250) (3801, 2) </code></pre> <p>My model is:</p> <pre><code> model = Sequential() model.add(Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=X.shape[1])) model.add(SpatialDropout1D(0.2)) model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2)) # For two label classification model.add(Dense(2, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) </code></pre> <p>I noticed I was not getting two target variables when I later ran this piece of code on unseen data</p> <pre><code> new_text = [&quot;Tony Gonzales (@TonyGonzales4TX) will be a GREAT Congressman for Texas! A Navy veteran, he is Strong on the Economy, Life and the Second Amendment. We need him to defeat the Radical Left in November. Tony has my Complete and Total Endorsement! #TX23&quot;] seq = tokenizer.texts_to_sequences(new_text) padded = pad_sequences(seq, maxlen = 250) pred = model.predict(padded) labels = [0,1] print(pred, labels[np.argmax(pred)]) </code></pre> <p>On this and all other tests, I noticed my predictions were only returning a binary classification. e.g. on the above I got 0.13 0.87 (these figures add up to 100 so are tied together)</p> <p>However, in the above case, a positive political Tweet, I would have expected a dual result of circa 0.88 0.91</p> <p>Any help would on how to get two output y variables would be much appreciated.</p>
<p>You can add 4 nodes in your output layer representing (negative, positive, political, non-political) and map your y_train in that manner, or you can give the model two separate heads with the functional API:</p> <pre><code>import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, SpatialDropout1D, LSTM, Dense
from tensorflow.keras.models import Model

# x = your input numpy array
# y1 = your sentiment one-hot encoded output numpy array
# y2 = your political one-hot encoded output numpy array
# x, y1, y2 should have the same length (shape[0])

# use a tuple for the labels: tf.data converts a *list* into a single
# stacked tensor, which is not what a two-output model expects
data = tf.data.Dataset.from_tensor_slices((x, (y1, y2)))
data = data.shuffle(len(x)).batch(32)   # fit() needs a batched dataset

inputs = Input(shape=(X.shape[1],))
h = Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=X.shape[1])(inputs)
h = SpatialDropout1D(0.2)(h)
h = LSTM(100, dropout=0.2, recurrent_dropout=0.2)(h)
out1 = Dense(2, activation='softmax', name='sentiment')(h)
out2 = Dense(2, activation='softmax', name='political')(h)

model = Model(inputs=inputs, outputs=[out1, out2])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(data, epochs=10)  # supposing your outputs are one-hot encoded
</code></pre> <p><a href="https://i.stack.imgur.com/J1T1n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J1T1n.png" alt="enter image description here" /></a></p> <p>See if it works or not.</p>
python|keras|neural-network|lstm
3
1,905,305
62,772,470
Accessing individual columns in a numpy ndarray
<p>I have an nd-numpy array of shape <code>(m, 1,100,4)</code> for which I would like to access the individual columns of the inner array (shape: <code>(1,100,4)</code>).</p> <p>MWE: As example, say I have this:</p> <pre><code>import numpy as np X = np.random.randn(2, 1, 5, 4) X array([[[[-0.40867508, 0.09331783, 1.26134307, -1.18900601], [-0.79177772, 0.96738931, -0.33332772, 0.53130287], [ 3.67290383, 0.30954936, 0.63221306, -0.64003826], [-1.20878773, 1.21499506, 1.84995811, 0.15663168], [-0.60648072, -0.30464852, -0.44044224, -4.46482868]]], [[[-1.90531392, -0.47108517, 1.21177166, 0.09561669], [ 3.21803694, 0.30611821, 1.71334417, 0.73383279], [-1.12869017, -0.1497266 , -0.54913676, 0.36704922], [ 0.5652546 , -0.75012341, -0.72496611, 1.12428097], [-1.19727408, -0.13813127, 2.63948821, -0.37661527]]]]) </code></pre> <p>where nested arrays are shaped <code>(1,5,4)</code>. Then accessing the first columns of each nested array returns the entire array instead:</p> <pre><code>X[ :, 0] array([[[-0.40867508, 0.09331783, 1.26134307, -1.18900601], [-0.79177772, 0.96738931, -0.33332772, 0.53130287], [ 3.67290383, 0.30954936, 0.63221306, -0.64003826], [-1.20878773, 1.21499506, 1.84995811, 0.15663168], [-0.60648072, -0.30464852, -0.44044224, -4.46482868]], [[-1.90531392, -0.47108517, 1.21177166, 0.09561669], [ 3.21803694, 0.30611821, 1.71334417, 0.73383279], [-1.12869017, -0.1497266 , -0.54913676, 0.36704922], [ 0.5652546 , -0.75012341, -0.72496611, 1.12428097], [-1.19727408, -0.13813127, 2.63948821, -0.37661527]]]) </code></pre> <p>My intention is to get a tuple, such that:</p> <pre><code>s,t,u,v = X[first_columns], X[second_columns], X[third_columns], X[fouth_columns] </code></pre> <p>such that:</p> <pre><code>s =[-0.40867508, -0.79177772, 3.67290383, -1.20878773, -0.60648072, -1.90531392, 3.21803694, -1.12869017, 0.5652546, -1.19727408] </code></pre>
<p>What you are looking for is</p> <pre><code>s = X[:, 0, :, 0].ravel()
</code></pre> <p>Note that with this shape of <code>X</code>, the indexing <code>X[:, 0, :, 0]</code> returns a 2-D array (one row per outer block), so we flatten it with <code>ravel()</code> to get the 1-D result.</p> <p>The others correspond to:</p> <pre><code>t = X[:, 0, :, 1].ravel()
u = X[:, 0, :, 2].ravel()
v = X[:, 0, :, 3].ravel()
</code></pre>
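<p>If you want all four at once, a sketch of a one-liner that does the same thing: collapse the inner blocks into rows of four, then unpack the transpose.</p> <pre><code># (m, 1, 5, 4) -&gt; (m*5, 4) -&gt; transpose to (4, m*5), then unpack
s, t, u, v = X[:, 0].reshape(-1, 4).T
</code></pre>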
python|numpy|multidimensional-array|numpy-ndarray
1
1,905,306
62,026,946
Changing a folder in a path for writing using re and glob libraries
<p>I have two directories: <code>path/to/folder</code> and <code>path/to/otherfolder</code> which both have several sub-directories inside them: <code>path/to/folder/TEST1</code>, <code>path/to/folder/TEST2</code>, <code>path/to/otherfolder/TEST1</code>, <code>path/to/otherfolder/TEST2</code>, etc.</p> <p>I'm getting all the subdirectories in the root folder using <code>folder_path = glob.glob('path/to/folder/*')</code></p> <p>I then loop over each sub-directory to get all the files in them:</p> <pre><code>for folder in folder_path: file_path = glob.glob(folder + '\*') for files in file_path: new_path = files.replace('folder', 'otherfolder') with open(files, r) as f: with open(new_path, 'wb') as wf: do stuff </code></pre> <p>This isn't working though as no files are being written to. I thought about simply changing this line to <code>files.replace('\\folder\\', '\\otherfolder\\')</code> but I don't think this will work. </p> <p>I'd like to use Python's <code>re</code> library if anyone has any ideas?</p>
<p>It looks like the problem is the glob pattern. Instead of:</p> <pre><code> file_path = glob.glob(folder + '\*') </code></pre> <p>can you try</p> <pre><code> file_path = glob.glob(os.path.join(folder, '*')) </code></pre> <p>?</p> <p>This will require you to <code>import os</code> at the top of your file.</p> <hr> <p>There is also a syntax error here:</p> <pre><code> with open(files, r) as f: </code></pre> <p>Should be:</p> <pre><code> with open(files, 'r') as f: </code></pre>
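<p>One more thing that will bite you when writing to the mirrored path: <code>open(new_path, 'wb')</code> fails if the destination sub-directory doesn't exist yet. A small sketch of the guard, reusing the variable names from the question (this form is Python 3; on older interpreters, check <code>os.path.isdir</code> first instead of passing <code>exist_ok</code>):</p> <pre><code>import os

new_path = files.replace('folder', 'otherfolder')
# make sure path/to/otherfolder/TESTn exists before opening for writing
os.makedirs(os.path.dirname(new_path), exist_ok=True)
</code></pre>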
python|jython|glob|python-re|jython-2.5
0
1,905,307
61,729,361
How to pass additional parameter in firebase transaction method in Python
<p>I am trying to create a transaction like this for real time firebase database:</p> <pre><code>tranRef = db.reference('all_items') new_transRef = tranRef.transaction(updateDatabase) </code></pre> <p>My updateDatabase function looks like this:</p> <pre><code>def updateDatabase(current_value): print(type(current_value)) return current_value </code></pre> <p>Here the current_value is data containing all the child nodes at root "all_items". This works fine.<br> What I want is to pass an additional argument to the updateDatabase say cartList which is a list of dictionaries. How should I do that? What I essentially want is a function that looks something like this:</p> <pre><code>def updateDatabase(current_value, cartList): print(type(current_value)) return current_value </code></pre> <p>How should I pass the list in calling the function:</p> <pre><code>tranRef = db.reference('all_items') new_transRef = tranRef.transaction( ## what should I write here ##) </code></pre>
<p>I was also dealing with the same problem and I solved it by using a <code>lambda</code>:</p> <pre><code>extra_param = ['your cart list here']
tranRef = db.reference('all_items')
new_transRef = tranRef.transaction(lambda current_value: updateDatabase(current_value, extra_param))
</code></pre> <p>Your update function can then accept the extra argument:</p> <pre><code>def updateDatabase(current_value, cartList):
    print(type(current_value))
    return current_value
</code></pre>
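<p>An equivalent spelling, if you prefer it to a <code>lambda</code>, is <code>functools.partial</code>, which freezes the extra argument onto the callback:</p> <pre><code>from functools import partial

new_transRef = tranRef.transaction(partial(updateDatabase, cartList=extra_param))
</code></pre>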
python|firebase-realtime-database
0
1,905,308
67,424,254
How do I find this HTML element with Python and Selenium?
<p>This is an excerpt from the HTML source:</p> <pre><code>&lt;div class=&quot;flex items-center mt-4&quot;&gt; &lt;svg style=&quot;fill: var(--color-reptile);&quot; viewbox=&quot;0 0 16 16&quot; width=&quot;24&quot;&gt; </code></pre> <p>I want to find the svg element. This works:</p> <pre><code>e = driver.find_element_by_css_selector('div.flex.items-center.mt-4 [style]') print(e.get_attribute('style')) # prints 'fill: var(--color-reptile);' </code></pre> <p>But how would I find this element directly without addressing the parent? I tried <code>driver.find_element_by_css_selector('svg.fill\:.var\(--color-reptile\)\;')</code> or <code>driver.find_element_by_css_selector('.var\(--color-reptile\)\;')</code>and all kind of different variations but every attempt just raises a &quot;no such element&quot; error.</p>
<p>In case this is the only <code>svg</code> element on the page, the following <code>xpath</code> should work:<br /> <code>//*[name()='svg']</code><br /> (The explicit <code>name()</code> check is needed because <code>svg</code> lives in a separate XML namespace, so a plain <code>//svg</code> won't match.) Since it's probably not the only <code>svg</code> on the page, you should add some more details, like:<br /> <code>//*[name()='svg' and @width='24']</code></p>
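<p>There is a CSS option too, sketched below. Your original attempts failed because class selectors like <code>.var\(--color-reptile\)</code> match the <code>class</code> attribute, not <code>style</code>; an attribute substring selector matches against the <code>style</code> text directly:</p> <pre><code># matches any svg whose style attribute contains the custom property
e = driver.find_element_by_css_selector('svg[style*=&quot;--color-reptile&quot;]')
</code></pre>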
python|html|selenium|selenium-webdriver|selector
0
1,905,309
60,368,088
How do I sort variables into other variables by ranking in python
<p>So I am trying to make a program to improve my Python skills, which is basically a lucky wheel: you get several items which are all ranked by numbers. I have made the items generate randomly, but how would I make them print in order? I assume that the sort() method won't be any use in this situation.</p> <pre><code># to sort: itemrating1, itemrating2, itemrating3
print(toprateditem)
print(meditem)
print(lowitem)
</code></pre> <p>That is basically what I want to do; I hope I explained it well.</p>
<p>You can store them in a list:</p> <pre><code>allitems = [item1, item2, item3] </code></pre> <p>You can call <code>sort()</code> on the list or just return <code>max(allitems)</code> and <code>min(allitems)</code></p>
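<p>Putting that together, a short sketch with made-up ratings:</p> <pre><code>allitems = [3, 7, 5]

for rating in sorted(allitems, reverse=True):  # highest first
    print(rating)

print(max(allitems))  # top-rated item
print(min(allitems))  # lowest item
</code></pre>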
python|sorting
0
1,905,310
71,435,096
How to use a loop to display highest mark and lowest
<pre><code>#List of students displayed in a SET (J for Java, C For c# , P for Python) BernhardtJ, BernhardtC, BernhardtP = [93 , 75 , 83] AshleyJ, AshleyC, AshleyP = [55 , 84, 69] ChristiaanJ, ChristiaanC, ChristiaanP = [63 , 74, 89] StevenJ, StevenC, StevenP = [81 , 74, 64] NicholasJ, NicholasC, NicholasP = [58 , 46, 74] PeterJ, PeterC, PeterP = [78 , 41, 57] MosesJ, MosesC, MosesP = [63 , 42, 21] Students = [BernhardtJ, BernhardtC, BernhardtP , AshleyJ, AshleyC, AshleyP, ChristiaanJ, ChristiaanC, ChristiaanP ,StevenJ, StevenC, StevenP ,NicholasJ, NicholasC, NicholasP, PeterJ, PeterC, PeterP, MosesJ, MosesC, MosesP] print (int(BernhardtJ + BernhardtC + BernhardtP)/3, &quot;Bernhardts Average mark for this semester&quot;) print (int(AshleyJ +AshleyC+ AshleyP)/3, &quot;Ashley Average mark for this semester&quot;) print (int(ChristiaanJ + ChristiaanC + ChristiaanP)/3, &quot;Christiaan Average mark for this semester&quot;) </code></pre> <p>I want to display the highest average and lowest from the printed integers.</p>
<p>The RIGHT way to do this is to put the data in a dictionary to begin with, NOT into individual variables.</p> <pre><code>scores = { 'Bernhardt': [93 , 75 , 83], 'Ashley': [55 , 84, 69], 'Christiaan': [63 , 74, 89], 'Steven': [81 , 74, 64], 'Nicholas': [58 , 46, 74], 'Peter': [78 , 41, 57], 'Moses': [63 , 42, 21] } averages = [] for k,v in scores.items(): print( &quot;Average for&quot;, k, &quot;is&quot;, sum(v)/3 ) averages.append( sum(v)/3 ) print(&quot;High average is&quot;, max(averages) ) print(&quot;Low average is&quot;, min(averages) ) </code></pre> <p>Output:</p> <pre><code>Average for Bernhardt is 83.66666666666667 Average for Ashley is 69.33333333333333 Average for Christiaan is 75.33333333333333 Average for Steven is 73.0 Average for Nicholas is 59.333333333333336 Average for Peter is 58.666666666666664 Average for Moses is 42.0 High average is 83.66666666666667 Low average is 42.0 </code></pre>
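<p>If you also want <em>whose</em> average is highest or lowest, not just the value, <code>max()</code> and <code>min()</code> take a <code>key</code> argument:</p> <pre><code>top_name, top_marks = max(scores.items(), key=lambda kv: sum(kv[1]))
low_name, low_marks = min(scores.items(), key=lambda kv: sum(kv[1]))
print(top_name, &quot;has the high average&quot;, sum(top_marks) / 3)
print(low_name, &quot;has the low average&quot;, sum(low_marks) / 3)
</code></pre>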
python
0
1,905,311
64,210,587
Sum index wise lists based on index comparison
<p>I have multiple lists, let's say:</p> <pre><code>t1 = ['ABC', 100, 20]
t2 = ['XXX', 200, 35]
t3 = ['ABC', 500, 90]
t4 = ['XXX', 100, 15]
</code></pre> <p>I want to sum the second and third elements from all lists only if the first ones are equal, resulting in:</p> <pre><code>list = [['ABC', 600, 110], ['XXX', 300, 50]] </code></pre> <p>I tried the <code>map()</code> function but couldn't get it done.</p> <p>Can anyone help?</p>
<p>You can work with an intermediate <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow noreferrer"><code>defaultdict</code></a> of <a href="https://numpy.org/doc/stable/reference/generated/numpy.array.html" rel="nofollow noreferrer"><code>numpy</code> arrays</a> to create a kind of custom counter:</p> <pre class="lang-py prettyprint-override"><code>from collections import defaultdict
import numpy as np

lists = [['ABC', 100, 20], ['XXX', 200, 35], ['ABC', 500, 90], ['XXX', 100, 15]]

res = defaultdict(lambda: np.array([0, 0]))
for l in lists:
    res[l[0]] += l[1:]

print(res)
print([[key] + list(vals) for key, vals in res.items()])
</code></pre> <p>Gives:</p> <pre><code>{'ABC': array([600, 110]), 'XXX': array([300, 50])}
[['ABC', 600, 110], ['XXX', 300, 50]]
</code></pre>
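<p>The same idea works without the <code>numpy</code> dependency; a plain-list sketch using the <code>t1</code>..<code>t4</code> names from the question:</p> <pre><code>from collections import defaultdict

res = defaultdict(lambda: [0, 0])
for name, a, b in [t1, t2, t3, t4]:
    res[name][0] += a
    res[name][1] += b

result = [[name] + totals for name, totals in res.items()]
# [['ABC', 600, 110], ['XXX', 300, 50]]
</code></pre>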
python
1
1,905,312
63,639,250
Only update the value given and ignore other values in dynamodb
<p>Hi I am writing a lambda function that will update the DynamoDb using boto3. In this code <code>employee_id</code> is auto-generated but you have to provide <code>last_name</code> or <code>first_name</code>. I am doing it with <code>if-else</code>. If the attribute tends to increase so does the checks. I can't keep on going with if condition. How can I tackle this what changes should I make</p> <pre><code>import boto3 import json dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('Employee') def lambda_handler(event, context): employee_id = event['employee_id'] if 'first_name' in event and 'last_name' not in event: first_name = event['first_name'] UpdateExpression = 'SET first_name = :val1' ExpressionAttributeValues = {':val1': first_name } elif 'last_name' in event and 'first_name' not in event: last_name = event['last_name'] UpdateExpression = 'SET last_name = :val1' ExpressionAttributeValues = {':val1': last_name} elif 'first_name' in event and 'last_name' in event: last_name = event['last_name'] first_name= event['first_name'] UpdateExpression = 'SET last_name = :val1, first_name = :val2' ExpressionAttributeValues = { ':val1': last_name, ':val2': first_name } else: raise ValueError(&quot;first_name and last_name not given&quot;) update = table.update_item( Key={ 'employee_id': employee_id }, ConditionExpression= 'attribute_exists(employee_id)', UpdateExpression=UpdateExpression, ExpressionAttributeValues=ExpressionAttributeValues ) </code></pre> <p>The code that I came up with but is not working</p> <pre><code>import boto3 import json dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('Employee') def lambda_handler(event, context): employee_id = event['employee_id'] last_name= event['last_name'] first_name= event['first_name'] column = [first_name,last_name] for i in range(0,len(column): query = 'SET {} = :val1,:val2'.format(column[i]) response = table.update_item( Key={ 'employee_id': employee_id }, ConditionExpression= 'attribute_exists(employee_id)', UpdateExpression = query, ExpressionAttributeValues={ ':val1': first_name, ':val2': last_name }, ReturnValues=&quot;UPDATED_NEW&quot; ) </code></pre>
<p>You should look at storing the update expression and expression values separately, then passing the complete set into the Lambda function.</p> <p>This would also allow you to validate against each parameter (perhaps breaking this into a validate function to avoid excessive size of function). This way you support both required and optional parameters, then at the end validate that the update expression has valid parameters.</p> <p>Perhaps something like the below?</p> <pre><code>import boto3 import json dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('Employee') def lambda_handler(event, context): update_expression_values = [] expression_attribute_values = {} if 'employee_id' in event: employee_id = event['employee_id'] else: raise ValueError(&quot;employee_id not given&quot;) if 'first_name' in event: update_expression_values.append('first_name = :val_first_name') expression_attribute_values[':val_first_name'] = event['first_name'] if 'last_name' in event: update_expression_values.append('last_name = :val_last_name') expression_attribute_values[':val_last_name'] = event['last_name'] if len(update_expression_values) &lt; 1: raise ValueError(&quot;first_name and last_name not given&quot;) seperator = ',' update = table.update_item( Key={ 'employee_id': employee_id }, ConditionExpression= 'attribute_exists(employee_id)', UpdateExpression='SET ' + seperator.join(update_expression_values), ExpressionAttributeValues=expression_attribute_values ) </code></pre> <p>This could be broken down further to reuse the logic through a function that can perform these checks such as the below.</p> <pre><code>import boto3 import json dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('Employee') update_expression_values = [] expression_attribute_values = {} def lambda_handler(event, context): global update_expression_values global expression_attribute_values update_expression_values = [] expression_attribute_values = {} if 'employee_id' in event: employee_id = event['employee_id'] else: raise ValueError(&quot;employee_id not given&quot;) process_event_key(event, 'first_name') process_event_key(event, 'last_name') process_event_key(event, 'new_value') if len(update_expression_values) &lt; 1: raise ValueError(&quot;first_name and last_name not given&quot;) seperator = ',' update = table.update_item( Key={ 'employee_id': employee_id }, ConditionExpression= 'attribute_exists(employee_id)', UpdateExpression='SET ' + seperator.join(update_expression_values), ExpressionAttributeValues=expression_attribute_values ) def process_event_key(event, key): global update_expression_values global expression_attribute_values if key in event: update_expression_values.append(key + ' = :val_' + key) expression_attribute_values[':val_' + key] = event[key] </code></pre> <p>Test Event</p> <pre><code>{ &quot;new_value&quot;: &quot;test&quot;, &quot;employee_id&quot;: &quot;value2&quot;, &quot;last_name&quot;: &quot;value3&quot;, &quot;first_name&quot;: &quot;value4&quot; } </code></pre>
python|amazon-web-services|aws-lambda|amazon-dynamodb|boto3
2
1,905,313
56,585,362
Whiten black contours around a skewed image opencv
<p>I have this image: </p> <p><a href="https://i.stack.imgur.com/t4ApL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t4ApL.jpg" alt="enter image description here"></a></p> <p>I want to whiten the black contours (borders) around it without affecting the image content. Here is the code I used: </p> <pre><code>import cv2 image = cv2.imread('filename.jpg') height, width, channels = image.shape white = [255, 255, 255] black = [0, 0, 0] for x in range(0,width): for y in range(0, height): channels_xy = image[y, x] if all(channels_xy == black): image[y, x] = white cv2.imwrite('result.jpg', image) </code></pre> <p>The black borders are whitened (well not 100%), but the writing in the image has been affected too<br> <a href="https://i.stack.imgur.com/nLmCi.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nLmCi.jpg" alt="enter image description here"></a> </p> <p>Is there any suggestion to better whiten to black borders without affecting the image content?</p>
<p>This code can help, but it is very slow, and you need to install the shapely package for Python.</p> <pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
import shapely.geometry as shageo
from tqdm import trange

img = cv2.imread('test.jpg')
cv2.imshow('src', img)

# get the gray image and binarize it
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
gray[gray &lt; 100] = 0
gray[gray &gt; 0] = 255

# get the largest boundary of the binary image to locate the target
contours, _ = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rect = cv2.minAreaRect(contours[0])
box = cv2.boxPoints(rect)
box = np.int0(box)

poly = shageo.Polygon(box)
minx = min(box[:, 0])
maxx = max(box[:, 0])
miny = min(box[:, 1])
maxy = max(box[:, 1])

h, w = img.shape[:2]
ind = np.zeros((h, w), bool)

# check whether each point is inside the target or not
for i in trange(h):
    for j in range(w):
        if j &lt; minx or j &gt; maxx or i &lt; miny or i &gt; maxy:
            ind[i, j] = True
        else:
            p = shageo.Point(j, i)
            if not p.within(poly):
                ind[i, j] = True

# make the outside points white
img[ind] = (255, 255, 255)
cv2.imshow('res', img)
cv2.waitKey(0)
</code></pre> <p>The result looks like below. <a href="https://i.stack.imgur.com/8SSJK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8SSJK.jpg" alt="result.jpg"></a></p>
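<p>For what it's worth, the per-pixel shapely test can be replaced by letting OpenCV rasterize the same box into a mask, which runs in milliseconds instead of minutes. A sketch of the idea, reusing <code>box</code> from above:</p> <pre><code>mask = np.zeros(img.shape[:2], np.uint8)
cv2.fillPoly(mask, [box.astype(np.int32)], 255)  # 255 inside the rotated rectangle
img[mask == 0] = (255, 255, 255)                 # whiten everything outside it
</code></pre>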
python|opencv
2
1,905,314
70,011,884
Flip order of characters within a string using Python
<p>I wish to flip the order of characters within the 'date' column using Python</p> <p><strong>Data</strong></p> <pre><code>id type date aa hi 2022 Q1 aa hi 2022 Q2 </code></pre> <p><strong>Desired</strong></p> <pre><code>id type date aa hi Q1 2022 aa hi Q2 2022 </code></pre> <p><strong>Doing</strong></p> <p>I believe I can separate and then reverse them?</p> <pre><code>a = df.split() </code></pre> <p>Any suggestion is helpful</p>
<p>We can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>str.replace</code></a> with capture groups if wanting to be explicit on pattern:</p> <pre><code>df['date'] = df['date'].str.replace(r'^(\d{4}) (Q\d)$', r'\2 \1', regex=True) </code></pre> <p>Or with several <code>str</code> calls (<a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>str.split</code></a>, <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.html" rel="nofollow noreferrer"><code>str</code></a>, <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.join.html" rel="nofollow noreferrer"><code>str.join</code></a>), but this can be slow as it requires several copies of data:</p> <pre><code>df['date'] = df['date'].str.split().str[::-1].str.join(' ') </code></pre> <p><code>df</code>:</p> <pre><code> id type date 0 aa hi Q1 2022 1 aa hi Q2 2022 </code></pre> <hr /> <p>Setup:</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'id': ['aa', 'aa'], 'type': ['hi', 'hi'], 'date': ['2022 Q1', '2022 Q2'] }) </code></pre>
python|pandas
3
1,905,315
69,713,492
How to include keywords within fstring in python3
<p>I am new to python and fstring, Can you please help me how to write multiple lines to the file with fstring, it is to create a config file with all the contents within the fstring is to be written to a file, ignoring the content what it has</p> <pre><code> def create_testfile(self) -&gt; None: &quot;&quot;&quot;writes the config file &quot;&quot;&quot; value = f&quot;&quot;&quot; #================================== ## NAT network DHCP params configuration bridge=$(ifconfig | grep &quot;flags&quot; | awk -F : '{print $1}' | sed '/lo/d' | sed '/vif/d') echo &quot;${command}&quot; case &quot;${command}&quot; in online) ... ... ... port_num_check esac &quot;&quot;&quot; with open(&quot;network.txt&quot;, &quot;w&quot;) as network: network.write(value) </code></pre> <p>Gives an error message File &quot;&quot;, line 1 (print $1) ^ SyntaxError: invalid syntax</p> <p>I tried changing the triple quoted string to single quoted string but there is another problem with that, adding \n will result in error message. SyntaxError: f-string expression part cannot include a backslash</p>
<p>You are using an f-string where one is not necessary; a plain multiline string would do. The error you are getting comes from how f-strings work.</p> <p>Everything inside curly braces <code>{</code> and <code>}</code> is treated as a Python expression to evaluate, rather than as the plain shell text you intended.</p> <p>One solution is simply not to use an f-string, as I said earlier.</p> <p>The other solution is to escape the curly braces so they are not treated as expression delimiters. You do that by doubling them: every literal <code>{...}</code> is written as <code>{{...}}</code>.</p> <p>Do this wherever the braces are meant for the shell script itself rather than for Python.</p> <p>To be precise, modify your program as follows:</p> <pre class="lang-py prettyprint-override"><code>def create_testfile(self) -&gt; None:
    &quot;&quot;&quot;writes the config file &quot;&quot;&quot;
    value = f&quot;&quot;&quot;
#==================================
## NAT network DHCP params configuration
bridge=$(ifconfig | grep &quot;flags&quot; | awk -F : '{{print $1}}' | sed '/lo/d' | sed '/vif/d')
echo &quot;${{command}}&quot;
case &quot;${{command}}&quot; in
online)
...
...
...
port_num_check
esac
&quot;&quot;&quot;
    with open(&quot;network.txt&quot;, &quot;w&quot;) as network:
        network.write(value)
</code></pre> <p>Notice how in your question the syntax highlighting treats the text inside braces as variables, but in the code above it is treated as a plain string.</p> <p>Some things to read on f-strings:</p> <ol> <li><a href="https://realpython.com/python-f-strings/#braces" rel="nofollow noreferrer">Real Python's Article</a></li> <li><a href="https://www.python.org/dev/peps/pep-0498/#triple-quoted-f-strings" rel="nofollow noreferrer">PEP 498 -- Literal String Interpolation</a></li> </ol>
python|python-3.x|pycharm|python-3.6|f-string
2
1,905,316
17,869,727
XML-RPC Python Slow before first request
<p>I am running a simulation and transmitting data through XML-RPC to a remote client. I'm using a thread to run the XML-RPC part. But for some reason, the program runs really slowly until I make a request from any of the clients that connect. After the very first request, the program runs fine. I have a class that inherits from threading.Thread, which I use to start the XML-RPC server.</p> <p>I cannot really show you the code, but do you have any suggestions as to why this is happening?</p> <p>Thanks, and I hope my question is clear enough.</p>
<p>In Python, due to the GIL, threads don't really execute in parallel. If the RPC part is waiting in an active way (a loop polling for connections instead of blocking), you will most probably get the behavior you are describing. However, without seeing any code, this is just a wild guess.</p>
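<p>To illustrate the difference, a minimal sketch using the stdlib <code>SimpleXMLRPCServer</code> (an assumption, since the question doesn't show its server): <code>serve_forever()</code> blocks in <code>select()</code> and costs nothing while idle, whereas a hand-rolled loop with a short timeout burns CPU between requests.</p> <pre><code>import threading
from SimpleXMLRPCServer import SimpleXMLRPCServer  # Python 2; xmlrpc.server in Python 3

server = SimpleXMLRPCServer(('localhost', 8000), logRequests=False)
server.register_function(pow)

# blocking wait -- the thread sleeps until a request arrives
t = threading.Thread(target=server.serve_forever)
t.daemon = True
t.start()
</code></pre>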
python|multithreading|performance|request|xml-rpc
0
1,905,317
61,039,357
Python How to find a keyword inside a column of data table after transforming the field texts into lower cases?
<p>I need to find out if a keyword is in a field of an Excel file.</p> <p>The first thing I want to do is transform this field into lowercase. I load the data like this:</p> <pre><code>import pandas as pd

data = pd.read_excel('data.xlsx', sheet_name=1)
</code></pre> <p>So I used the following:</p> <pre><code>data['Notes'] = (map(str.upper, data['Notes']))
</code></pre> <p>Where <code>Notes</code> is the field I want to use. But the function is returning something like this for each cell:</p> <blockquote> <p>&lt;map object at 0x...&gt;</p> </blockquote> <p>I tried it using <code>list()</code>:</p> <pre><code>data['Notes'] = list(map(str.upper, data['Notes']))
</code></pre> <p>But I received an error:</p> <blockquote> <p>descriptor 'lower' requires a 'str' object but received a 'map'</p> </blockquote> <p>For the search, I made the following:</p> <pre><code>keywords = ['reception', 'warehouse', 'under construction', 'construction']
data['new field'] = ''
for note in data['Notes']:
    for keyword in keywords:
        if keyword in note:
            data['new field'] = True
        else:
            data['new field'] = False
</code></pre> <p>But <code>new field</code> always contains <code>False</code>.</p>
<p>If you want to lowercase a column in a pandas dataframe, it should be:</p> <pre><code>data['Notes'] = data['Notes'].str.lower()
# NOT THIS: data['Notes']=(map(str.upper, data['Notes']))
</code></pre> <p>For the search, there are two problems in your loop: the test has to ask whether a keyword is contained <em>in</em> the note (not whether the note equals a keyword), and assigning to <code>data['new field']</code> inside the loop overwrites the whole column each time, so only the last note decides the result. Build one value per row instead:</p> <pre><code>keywords = ['reception', 'warehouse', 'under construction', 'construction']
data['new field'] = [any(keyword in note for keyword in keywords)
                     for note in data['Notes']]
</code></pre>
python|data-science
2
1,905,318
61,176,601
using Levenshtein Distance to sort search query in sqlalchemy?
<p>I've been trying to get Sqlalchemy to use a function ( Levenshtein Distance ) to order the results of a query , I've tried some of the ideas here and there , but none use a parameter that is not in the Sqlalchemy model it self , here is what i am trying to do : </p> <pre><code> communs=baladiya.query.filter_by(wilaya=wilaya).filter(baladiya.name.like('%{}%'.format(name))).order_by(jf.Levenshtein_Distance(searchstring,baladiya.name)).paginate(per_page=4,page=page) </code></pre> <p>i am using this method to keep the app well structured and can use paginate object , if this is not possible at all , is there a way to create an empty query , and then keep appending using my own sorting .</p> <p>this is the baladiya model : </p> <pre><code>class baladiya(db.Model): id = db.Column(db.Integer , primary_key=True) name = db.Column(db.String(1000),unique=False , nullable=False) postalcode = db.Column(db.String(10),unique=True, nullable=False) posX = db.Column(db.String(20), nullable=True, default='16000') posY = db.Column(db.String(20), nullable=True, default='16000') wilaya= db.Column(db.Integer,unique=False, nullable=False) </code></pre>
<p>I did some experimenting myself. When you pass a filter query, SQLAlchemy builds a SQL query and sends it to the database; it does not pull the data and apply the filter in Python. So unless your database exposes a Levenshtein function you can call, you are better off pulling the rows you want and then applying whatever sorting you like to that list.</p>
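<p>A sketch of that idea, assuming the <code>python-Levenshtein</code> package is installed (the pagination then has to be done by hand, since <code>paginate()</code> works on a query, not a list):</p> <pre><code>import Levenshtein

rows = (baladiya.query
        .filter_by(wilaya=wilaya)
        .filter(baladiya.name.like('%{}%'.format(name)))
        .all())

rows.sort(key=lambda b: Levenshtein.distance(searchstring, b.name))
page_items = rows[(page - 1) * 4 : page * 4]  # 4 per page, like paginate(per_page=4)
</code></pre>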
python|flask|sqlalchemy|flask-sqlalchemy
0
1,905,319
60,938,763
Regex function to validate ' MM/DD/YYYY hh:mm' in python
<p>I have used the regex function </p> <pre><code>**r'\d{4}-\d?\d-\d?\d (?:2[0-3]|[01]?[0-9]):[0-5]?[0-9]:[0-5]?[0-9]'** </code></pre> <p>but it doesn't work on the date data in the format : 01/02/2020 05:25 AM</p> <p>Where am I going wrong</p>
<p>Your regex is trying to match YYYY-MM-DD for the date. It is also expecting a 24-hour time of the format HH:MM:SS, when your data only has hours and minutes and uses AM/PM (so hours will run from 01 to 12, not 00 to 23).</p> <p>This regex will do what you want: <code>\d?\d/\d?\d/\d{4} (0[1-9]|1[0-2]):[0-5][0-9] [AP]M</code></p> <p>But you should not use a regex for this, because the validation it performs will fall short of the mark. This regex will match 31 February as if it were correct, and it isn't.</p> <p>Use <code>datetime.datetime.strptime()</code> instead, with format <code>%m/%d/%Y %I:%M %p</code>.</p> <pre><code>&gt;&gt;&gt; datetime.datetime.strptime("01/02/2020 05:25 AM", "%m/%d/%Y %I:%M %p")
datetime.datetime(2020, 1, 2, 5, 25)
</code></pre> <p>This is better because <code>strptime()</code> knows about months with less than 31 days, and leap years.</p> <pre><code>&gt;&gt;&gt; datetime.datetime.strptime("02/29/2020 05:25 AM", "%m/%d/%Y %I:%M %p")
datetime.datetime(2020, 2, 29, 5, 25)
&gt;&gt;&gt; datetime.datetime.strptime("02/29/2021 05:25 AM", "%m/%d/%Y %I:%M %p")
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
  File "E:\Python27\lib\_strptime.py", line 458, in _strptime
    datetime_date(year, 1, 1).toordinal() + 1
ValueError: day is out of range for month
</code></pre> <p>This shows you that you need to put a <code>try...except ValueError</code> block around your <code>strptime()</code> call to trap invalid dates.</p> <p>And, besides validation, why else should you convert your string to a datetime? Well, because the next thing you will want to do is store the date in a variable for processing, or in a database for storage. You're <em>not</em> planning to store it as a string, are you?</p>
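<p>The <code>try...except</code> wrapper mentioned above might look like this, as a small sketch:</p> <pre><code>import datetime

def parse_timestamp(text):
    """Return a datetime for 'MM/DD/YYYY hh:mm AM' strings, or None if invalid."""
    try:
        return datetime.datetime.strptime(text, "%m/%d/%Y %I:%M %p")
    except ValueError:
        return None

print(parse_timestamp("01/02/2020 05:25 AM"))  # 2020-01-02 05:25:00
print(parse_timestamp("02/29/2021 05:25 AM"))  # None -- not a leap year
</code></pre>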
python
1
1,905,320
66,095,316
Sorting a list of files to be uploaded in order
<p>My goal is to have python upload files from a set directory to telegram, using telegram-upload, in ascending order. The script that I have cannot seem to upload in order. It uploads the files in random order. I've used the <code>sorted</code> function to no avail. Looking at my script you can see some things I've tried (commented). I had a setup with <code>sorted</code> that would list the files in order, but when attempting to upload, I couldn't re-convert the list I had created and sorted back to a string so <code>subprocess</code> could read the arg.</p> <p>Here's the script:</p> <pre><code>import os import subprocess import time #import shutil Number_Of_Files = 0 #PATH = r'C:\Users\myuser\Downloads' PATH = '/home/pi/Public/' for root, subFolder, files in os.walk(PATH): for item in files: #Number_Of_Files=Number_Of_Files+1 fileNamePath = os.path.join(root, item) #sorted = sorted(fileNamePath) #subprocess.run(['telegram-upload', '-f', 'my_channel', str(sorted)]) subprocess.run(['telegram-upload', '-f', 'my_channel', str(fileNamePath)]) #os.remove(fileNamePath) print(fileNamePath) #time.sleep(60) #else: #print(Number_Of_Files) </code></pre>
<p>Your whole loop can be greatly simplified by using <a href="https://docs.python.org/3/library/pathlib.html" rel="nofollow noreferrer">pathlib</a> and <a href="https://docs.python.org/3/library/functions.html#sorted" rel="nofollow noreferrer">sorted</a>:</p> <pre><code>import subprocess from pathlib import Path p=Path('/home/pi/Public/') for fn in sorted((x for x in p.glob('**/*') if x.is_file())): print(fn) # subprocess.run(['telegram-upload', '-f', 'my_channel', str(fn)]) </code></pre> <p>The <code>glob('**/*')</code> is a recursive glob equivalent to using <code>os.walk</code> but a little easier to manage. The <code>(x for x in p.glob('**/*') if x.is_file())</code> is a comprehension that only returns files, not directories and files. The result of that is sorted and away you go...</p> <p>Given this folder structure:</p> <pre><code>. ├── A │   ├── b.txt │   ├── d.txt │   └── y.doc └── B ├── a.txt ├── c.txt └── x.doc </code></pre> <p><code>sorted((x for x in p.glob('**/*') if x.is_file())</code> returns files in this order:</p> <pre><code>./A/b.txt ./A/d.txt ./A/y.doc ./B/a.txt ./B/c.txt ./B/x.doc </code></pre> <p>If you change the sorted comprehension to <code>sorted((x for x in p.glob('**/*') if x.is_file()), key=lambda x: x.name)</code> then you would sort that same tree only by filename:</p> <pre><code>./B/a.txt ./A/b.txt ./B/c.txt ./A/d.txt ./B/x.doc ./A/y.doc </code></pre> <p>Or sort by suffix first then name with <code>sorted((x for x in p.glob('**/*') if x.is_file()), key=lambda x: (x.suffix, x.name))</code>:</p> <pre><code>./B/x.doc ./A/y.doc ./B/a.txt ./A/b.txt ./B/c.txt ./A/d.txt </code></pre> <p>With the same method, you can sort by time created, directory name, extension, whatever...</p>
python|python-3.x|sorting|os.walk
1
1,905,321
68,175,694
How to use Wave file as input in VOSK speech recognition?
<p>I have a project that needs to get a recorded file and then process by the code and extract the text from file and match the extracted file with the other text and verify it. my problem is: I can't use recorded file in code and it does'nt read the file</p> <p>init function is the fundamental of code.</p> <p>verify functtion confirm the matched speech and text.</p> <pre><code>import argparse import json import os import queue import random import sys from difflib import SequenceMatcher import numpy as np import sounddevice as sd import vosk q = queue.Queue() def int_or_str(text): &quot;&quot;&quot;Helper function for argument parsing.&quot;&quot;&quot; try: return int(text) except ValueError: return text def callback(indata, frames, time, status): &quot;&quot;&quot;This is called (from a separate thread) for each audio block.&quot;&quot;&quot; if status: print(status, file=sys.stderr) q.put(bytes(indata)) def init(): parser = argparse.ArgumentParser(add_help=False) parser.add_argument( '-l', '--list-devices', action='store_true', help='show list of audio devices and exit') args, remaining = parser.parse_known_args() if args.list_devices: print(sd.query_devices()) parser.exit(0) parser = argparse.ArgumentParser( description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter, parents=[parser]) parser.add_argument( '-f', '--filename', type=str, metavar='FILENAME', help='audio file to store recording to') parser.add_argument( '-m', '--model', type=str, metavar='MODEL_PATH', help='Path to the model') parser.add_argument( '-d', '--device', type=int_or_str, help='input device (numeric ID or substring)') parser.add_argument( '-r', '--samplerate', type=int, help='sampling rate') args = parser.parse_args(remaining) try: if args.model is None: args.model = &quot;model&quot; if not os.path.exists(args.model): print(&quot;Please download a model for your language from https://alphacephei.com/vosk/models&quot;) print(&quot;and unpack as 'model' in the current folder.&quot;) parser.exit(0) if args.samplerate is None: device_info = sd.query_devices(args.device, 'input') # soundfile expects an int, sounddevice provides a float: args.samplerate = int(device_info['default_samplerate']) model = vosk.Model(args.model) if args.filename: dump_fn = open(args.filename, &quot;wb&quot;) else: dump_fn = None except KeyboardInterrupt: print('\nDone') parser.exit(0) except Exception as e: parser.exit(type(e).__name__ + ': ' + str(e)) return model, args def verify(random_sentence, model, args): num, T_num, F_num, num_word = 0, 0, 0, 1 with sd.RawInputStream(samplerate=args.samplerate, blocksize=8000, device=args.device, dtype='int16', channels=1, callback=callback): rec = vosk.KaldiRecognizer(model, args.samplerate) print(&quot;{}) &quot;.format(num_word), random_sentence, end='\n') print('=' * 30, end='\n') run = True while run: data = q.get() if rec.AcceptWaveform(data): res = json.loads(rec.FinalResult()) res['text'] = res['text'].replace('ي', 'ی') if SequenceMatcher(None, random_sentence, res['text']).ratio() &gt; 0.65: T_num, num, num_word += 1 else: F_num, num, num_word += 1 run = False print('=' * 30) print('True Cases : {}\n False Cases : {}'.format(T_num, F_num)) if __name__ == &quot;__main__&quot;: model, args = init() verify(random_sentences, model, args) </code></pre>
<p>I have been working on a similar project. I modified <a href="https://github.com/alphacep/vosk-api/blob/master/python/example/test_ffmpeg.py" rel="nofollow noreferrer">the code from VOSK Git repo</a> and wrote the following function that takes file name / path as the input and outputs the captured text. Sometimes, when there is a long pause (~seconds) in the audio file, the returned text would be an empty string. To remedy this problem, I had to write additional code that picks out the longest string that was captured. I could make do with this fix.</p> <pre><code>def get_text_from_voice(filename): if not os.path.exists(&quot;model&quot;): print (&quot;Please download the model from https://alphacephei.com/vosk/models and unpack as 'model' in the current folder.&quot;) exit (1) wf = wave.open(filename, &quot;rb&quot;) if wf.getnchannels() != 1 or wf.getsampwidth() != 2 or wf.getcomptype() != &quot;NONE&quot;: print (&quot;Audio file must be WAV format mono PCM.&quot;) exit (1) model = Model(&quot;model&quot;) rec = KaldiRecognizer(model, wf.getframerate()) rec.SetWords(True) text_lst =[] p_text_lst = [] p_str = [] len_p_str = [] while True: data = wf.readframes(4000) if len(data) == 0: break if rec.AcceptWaveform(data): text_lst.append(rec.Result()) print(rec.Result()) else: p_text_lst.append(rec.PartialResult()) print(rec.PartialResult()) if len(text_lst) !=0: jd = json.loads(text_lst[0]) txt_str = jd[&quot;text&quot;] elif len(p_text_lst) !=0: for i in range(0,len(p_text_lst)): temp_txt_dict = json.loads(p_text_lst[i]) p_str.append(temp_txt_dict['partial']) len_p_str = [len(p_str[j]) for j in range(0,len(p_str))] max_val = max(len_p_str) indx = len_p_str.index(max_val) txt_str = p_str[indx] else: txt_str ='' return txt_str </code></pre> <p>Make sure that the correct model is present in the same directory or put in the path to the model. Also, note that VOSK accepts audio files only in wav mono PCM format.</p>
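<p>A quick usage sketch tying this back to the verification part of the question (the file name and reference sentence are placeholders):</p> <pre><code>from difflib import SequenceMatcher

expected = 'your reference sentence here'
spoken = get_text_from_voice('recording.wav')
ratio = SequenceMatcher(None, expected, spoken).ratio()
print(spoken, '-&gt; match' if ratio &gt; 0.65 else '-&gt; no match')
</code></pre>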
python|speech-recognition|vosk
0
1,905,322
58,984,833
Adding Horizontal Line to Dash-Plotly Python Dashboard
<p>I am creating a Dash app in Python3. Trying to add in a horizontal line to a bar graph. The examples in the documentation are for line graphs, which have numeric x and y axis, whereas I have a categorical X-axis. The below code creates the graph successfully, but does not show the shape object. How can I add a horizontal line to this graph?</p> <pre><code> html.Div( [ dcc.Graph( id='example-graph-23', figure={ 'data': [ {'x': ['Overall', 'NBA', 'WWC', 'NFL'], 'y': [3,2,2.5], 'type': 'bar', 'name': 'Instagram'}, ], 'layout': { 'yaxis' : dict( range=[0, 4] ), 'plot_bgcolor': colors['background'], 'paper_bgcolor': colors['background'], 'font': { 'color': colors['text'] }, 'shapes' : dict(type="line", x0=0, y0=2, x1=5, y1=2, line=dict( color="Red", width=4, dash="dashdot", )) } } ) ] , className="four columns" ), </code></pre>
<p>You can draw extra lines on a categorical axis by adding their <code>x</code> and <code>y</code> coordinates as additional traces in <code>figure.data</code>, like this (shown here with vertical lines at each category):</p> <pre><code>import dash
import dash_html_components as html
import dash_core_components as dcc

app = dash.Dash()
colors = {'background': 'white', 'text': 'black'}

app.layout = html.Div(
    [
        dcc.Graph(
            id='example-graph-23',
            figure={
                'data': [
                    {'x': ['Overall', 'NBA', 'WWC', 'NFL'], 'y': [3, 2, 2.5], 'type': 'bar', 'name': 'Instagram'},
                    {'x': ['Overall', 'Overall'], 'y': [0, 4], 'type': 'line', 'name': 'v_line_1'},
                    {'x': ['NBA', 'NBA'], 'y': [0, 4], 'type': 'line', 'name': 'v_line_2'},
                    {'x': ['WWC', 'WWC'], 'y': [0, 4], 'type': 'line', 'name': 'v_line_3'},
                ],
                'layout': {
                    'yaxis': dict(range=[0, 4]),
                    'plot_bgcolor': colors['background'],
                    'paper_bgcolor': colors['background'],
                    'font': {'color': colors['text']},
                    'shapes': dict(type="line", x0=0, y0=2, x1=5, y1=2,
                                   line=dict(color="Red", width=4, dash="dashdot"))
                }
            }
        )],
    className="four columns"
)

if __name__ == '__main__':
    app.run_server(debug=True)
</code></pre> <p><a href="https://i.stack.imgur.com/zE4lF.png" rel="nofollow noreferrer">enter image description here</a></p>
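<p>The same trick answers the horizontal-line question directly: anchor the trace on the first and last category at a fixed height so it spans from the first to the last bar. A sketch of the extra trace to drop into <code>'data'</code>:</p> <pre><code>{'x': ['Overall', 'NFL'], 'y': [2, 2], 'type': 'line', 'name': 'h_line'},
</code></pre> <p>The shapes-based approach in your layout can also work on a categorical axis if you set <code>xref</code> so the x0/x1 values are interpreted as paper coordinates, e.g. <code>dict(type="line", xref="paper", x0=0, x1=1, y0=2, y1=2, ...)</code>.</p>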
python|plotly|plotly-dash|plotly-python
1
1,905,323
31,590,184
Plot Multicolored line based on conditional in python
<p>I have a pandas dataframe with three columns and a datetime index</p> <pre><code>date px_last 200dma 50dma 2014-12-24 2081.88 1953.16760 2019.2726 2014-12-26 2088.77 1954.37975 2023.7982 2014-12-29 2090.57 1955.62695 2028.3544 2014-12-30 2080.35 1956.73455 2032.2262 2014-12-31 2058.90 1957.66780 2035.3240 </code></pre> <p>I would like to make a time series plot of the 'px_last' column that is colored green if on the given day the 50dma is above the 200dma value and colored red if the 50dma value is below the 200dma value. I have seen this example, but can't seem to make it work for my case <a href="http://matplotlib.org/examples/pylab_examples/multicolored_line.html" rel="noreferrer">http://matplotlib.org/examples/pylab_examples/multicolored_line.html</a></p>
<p>Here is an example that does it without <code>matplotlib.collections.LineCollection</code>. The idea is to first identify the cross-over points and then plot each run of same-colored points via <code>groupby</code>.</p> <pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# simulate data
# =============================
np.random.seed(1234)
df = pd.DataFrame({'px_last': 100 + np.random.randn(1000).cumsum()},
                  index=pd.date_range('2010-01-01', periods=1000, freq='B'))
df['50dma'] = df['px_last'].rolling(window=50).mean()
df['200dma'] = df['px_last'].rolling(window=200).mean()
df['label'] = np.where(df['50dma'] &gt; df['200dma'], 1, -1)

# plot
# =============================
df = df.dropna(axis=0, how='any')
fig, ax = plt.subplots()

def plot_func(group):
    global ax
    color = 'r' if (group['label'] &lt; 0).all() else 'g'
    lw = 2.0
    ax.plot(group.index, group.px_last, c=color, linewidth=lw)

# a new group starts wherever the label flips sign
df.groupby((df['label'].shift() * df['label'] &lt; 0).cumsum()).apply(plot_func)

# add ma lines
ax.plot(df.index, df['50dma'], 'k--', label='MA-50')
ax.plot(df.index, df['200dma'], 'b--', label='MA-200')
ax.legend(loc='best')
</code></pre> <p><a href="https://i.stack.imgur.com/o93QZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/o93QZ.png" alt="enter image description here"></a></p>
python|pandas|matplotlib|plot
10
1,905,324
16,004,289
Python Boolean error?
<p>Ok so I'm creating a loop:</p> <pre><code>def equ(par1,par2):
    con1=4/par1
    ready=False
    add=False
    if ready==True:
        if add==True:
            par2+=con1
            add=False
            print("true")
        elif add==False:
            par2-=con1
            add=True
            print("False")
    elif ready==False:
        par2=con1
        ready=True
    input()
    return par2
</code></pre> <p>Every time I run the program it doesn't do what it's supposed to. I notice that it will NOT change ready to true. Could any one give me some help? THANKS! :)</p>
<p>First, you have no looping construct. You only have a linear flow of logic.</p> <p>Second, <code>ready==True</code> will never be true, since it is explicitly set to <code>False</code> before that code block is ever hit.</p> <p>If you're intending to reuse the boolean value <code>ready</code>, then you'd either want to preserve its state somewhere outside of the scope of the method - once you leave the method, it goes right back through and sets it to <code>False</code> again.</p>
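<p>For illustration, here is one hedged sketch of what the code may have been aiming for: the flags live outside the function (in a dict owned by the caller) so their values survive between calls, and the caller supplies the missing loop:</p> <pre><code>def equ(par1, par2, state):
    # state is a dict like {'ready': False, 'add': False}, owned by the caller
    con1 = 4.0 / par1
    if state['ready']:
        if state['add']:
            par2 += con1
            print("true")
        else:
            par2 -= con1
            print("False")
        state['add'] = not state['add']   # alternate add/subtract each call
    else:
        par2 = con1                       # first call just seeds the value
        state['ready'] = True
    return par2

state = {'ready': False, 'add': False}
par2 = 0
for _ in range(4):   # an actual loop; the original had none
    par2 = equ(2.0, par2, state)
</code></pre>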
python
1
1,905,325
15,741,357
adding '+' to all the numbers as a prefix (numbers are stored in a csv file) using a python script
<p><strong>goal</strong></p> <p>All the numbers in the csv file that I exported from hotmail are stored as <code>91123456789</code>, whereas to complete a call I need to dial <code>+91123456789</code>. These contacts will be converted to a batch of vcf files and exported to my phone. I want to add the + to all my contacts at the beginning.</p> <p><strong>approach</strong></p> <p>write a python script that can do this for an indefinite number of contacts.</p> <p><strong>pre-conditions</strong></p> <p>none of the numbers in the csv file will have a + in them.</p> <p><strong>problem</strong></p> <p>(a) there is a possibility that the number itself may have a <code>91</code> in it like: <code>+919658912365</code>. This makes adding a plus very difficult. </p> <p>explanation: I am adding this as a problem because if the 91 were only ever at the beginning of a number, we could add the + simply by checking two consecutive digits: if they match <code>91</code> we add the +, else we don't and move on to the next pair of digits.</p> <p>(b) the fields are separated by commas. I want to add the <code>+</code> as a prefix only in front of the field which has the header <code>mobile</code> and not in any other field where a set of digits <code>91</code> may appear (like in landline numbers or fax numbers)</p> <p><strong>research</strong></p> <p>I tried this with excel, but the process would take an unreasonable amount of time (like 2 hours!)</p> <p><strong>specs</strong></p> <p>I have 400 contacts. Windows XP SP 3</p> <p><strong>please</strong> help me solve this problem.</p>
<p>Something like below??</p> <pre><code>import csv

for row in csv.reader(['num1, 123456789', 'num2, 987654321', 'num3, +23456789']):
    phoneNumber = row[1].strip()
    if not phoneNumber.startswith('+'):
        phoneNumber = '+' + phoneNumber
    print phoneNumber
</code></pre>
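<p>To touch only the column headed <code>mobile</code> and leave landline or fax fields alone, here is a sketch using <code>csv.DictReader</code>; the exact header string is an assumption, so adjust <code>'mobile'</code> to whatever the exported CSV actually uses:</p> <pre><code>import csv

with open('contacts.csv') as src, open('contacts_plus.csv', 'w') as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        number = (row.get('mobile') or '').strip()   # 'mobile' is a hypothetical header name
        if number and not number.startswith('+'):
            row['mobile'] = '+' + number             # prefix once, regardless of any 91 inside
        writer.writerow(row)
</code></pre>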
python|file-io
1
1,905,326
15,580,488
Python Nested List Grouping
<p>I have a nested list in this format:</p> <p>finalValues = [ [x,123] , [x,23542] , [y,56] , [y,765] , [y,54] , [z,98] ]</p> <p>I am writing to a text file like this currently (using a loop for the index):</p> <pre><code>outputFile.write("\n--------------------------------------------------")
outputFile.write("\nVariable: " + finalValues[index][0])
outputFile.write("\nNumber: " + finalValues[index][1])
outputFile.write("\n--------------------------------------------------")
outputFile.write("\n")
</code></pre> <p>For this specific example that means I am printing out 6 unique outputs to the text file.</p> <p>What is the easiest way to group the second value by the first value? So my output would be (EDIT --- I cannot format this perfectly like my output due to the forum formatting features, but you can get the general idea):</p> <pre>
'--------------------------------------------------
Variable: x
Number: 123
Number: 23542
'--------------------------------------------------

'--------------------------------------------------
Variable: y
Number: 56
Number: 765
Number: 54
'--------------------------------------------------

'--------------------------------------------------
Variable: z
Number: 98
'--------------------------------------------------
</pre>
<p>One way to do it is to group the elements with <code>itertools.groupby</code> using <code>operator.itemgetter</code> to get the key value you're interested in. The list needs to be sorted by the key first.</p> <pre><code>import operator
import itertools

get_key = operator.itemgetter(0)
finalValues.sort(key = get_key)

for key, group in itertools.groupby(finalValues, get_key):
    outputFile.write("\n--------------------------------------------------")
    outputFile.write("\nVariable: " + key)
    for pair in group:
        outputFile.write("\nNumber: " + pair[1])
    outputFile.write("\n--------------------------------------------------")
    outputFile.write("\n")
</code></pre>
python|python-2.7
2
1,905,327
59,707,625
Chatterbot failing to install
<p>I´ve been trying to install ChatterBot, its a new machine so its pretty much a fresh installation of python, i'am running python 3.8 64bits</p> <p>Complete log</p> <pre><code>C:\Users\Marcos&gt;pip install chatterbot Collecting chatterbot Using cached https://files.pythonhosted.org/packages/6c/0e/dac0d82f34f86bf509cf5ef3e2dfc5aa7d444bd843a2330ceb7d854f84f2/ChatterBot-1.0.5-py2.py3-none-any.whl Collecting pint&gt;=0.8.1 Using cached https://files.pythonhosted.org/packages/90/f9/2bdadf95328c02e57a79e5370d1e911a9c6fdb9952b6c4de44d6c7052978/Pint-0.10.1-py2.py3-none-any.whl Collecting sqlalchemy&lt;1.3,&gt;=1.2 Downloading https://files.pythonhosted.org/packages/f9/67/d07cf7ac7e6dd0bc55ba62816753f86d7c558107104ca915e730c9ec2512/SQLAlchemy-1.2.19.tar.gz (5.7MB) |████████████████████████████████| 5.7MB 6.8MB/s Requirement already satisfied: pytz in c:\users\marcos\appdata\local\programs\python\python38\lib\site-packages (from chatterbot) (2019.3) Collecting mathparse&lt;0.2,&gt;=0.1 Using cached https://files.pythonhosted.org/packages/c3/e5/4910fb85950cb960fcf3f5aabe1c8e55f5c9201788a1c1302b570a7e1f84/mathparse-0.1.2-py3-none-any.whl Collecting nltk&lt;4.0,&gt;=3.2 Using cached https://files.pythonhosted.org/packages/f6/1d/d925cfb4f324ede997f6d47bea4d9babba51b49e87a767c170b77005889d/nltk-3.4.5.zip Collecting python-dateutil&lt;2.8,&gt;=2.7 Using cached https://files.pythonhosted.org/packages/74/68/d87d9b36af36f44254a8d512cbfc48369103a3b9e474be9bdfe536abfc45/python_dateutil-2.7.5-py2.py3-none-any.whl Collecting pymongo&lt;4.0,&gt;=3.3 Downloading https://files.pythonhosted.org/packages/77/5e/f30374f2a997710913c7616eb087e6473ccfd8a46eacee956d7fb8c7dd27/pymongo-3.10.1-cp38-cp38-win_amd64.whl (355kB) |████████████████████████████████| 358kB 6.4MB/s Collecting pyyaml&lt;5.2,&gt;=5.1 Using cached https://files.pythonhosted.org/packages/e3/e8/b3212641ee2718d556df0f23f78de8303f068fe29cdaa7a91018849582fe/PyYAML-5.1.2.tar.gz Collecting spacy&lt;2.2,&gt;=2.1 Using cached https://files.pythonhosted.org/packages/1f/e2/46650d03c7ff2b57ed7af211d41c3f606540f7adea92b5af65fcf9f605c0/spacy-2.1.9.tar.gz Installing build dependencies ... 
error ERROR: Command errored out with exit status 1: command: 'c:\users\marcos\appdata\local\programs\python\python38\python.exe' 'c:\users\marcos\appdata\local\programs\python\python38\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\Marcos\AppData\Local\Temp\pip-build-env-nntoyoz6\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'wheel&gt;0.32.0,&lt;0.33.0' Cython 'cymem&gt;=2.0.2,&lt;2.1.0' 'preshed&gt;=2.0.1,&lt;2.1.0' 'murmurhash&gt;=0.28.0,&lt;1.1.0' 'thinc&gt;=7.0.8,&lt;7.1.0' cwd: None Complete output (62 lines): Collecting setuptools Using cached https://files.pythonhosted.org/packages/af/e7/02db816dc88c598281bacebbb7ccf2c9f1a6164942e88f1a0fded8643659/setuptools-45.0.0-py2.py3-none-any.whl Collecting wheel&lt;0.33.0,&gt;0.32.0 Using cached https://files.pythonhosted.org/packages/ff/47/1dfa4795e24fd6f93d5d58602dd716c3f101cfd5a77cd9acbe519b44a0a9/wheel-0.32.3-py2.py3-none-any.whl Collecting Cython Using cached https://files.pythonhosted.org/packages/41/2c/9d873fc8d1be29af12a1d41707461399327396da10e50e183754aa4136b9/Cython-0.29.14-cp38-cp38-win_amd64.whl Collecting cymem&lt;2.1.0,&gt;=2.0.2 Using cached https://files.pythonhosted.org/packages/8c/1f/43be34e4decc602fae2bda73b05525bc49deff44baeb95611b23a2929195/cymem-2.0.3-cp38-cp38-win_amd64.whl Collecting preshed&lt;2.1.0,&gt;=2.0.1 Using cached https://files.pythonhosted.org/packages/0b/14/c9aa735cb9c131545fc9e23031baccb87041ac9215b3d75f99e3cf18f6a3/preshed-2.0.1.tar.gz Collecting murmurhash&lt;1.1.0,&gt;=0.28.0 Using cached https://files.pythonhosted.org/packages/5b/73/129c1aed56c88a446c70e4eda186fe014bfb8330478e5e257cc923bd3e15/murmurhash-1.0.2-cp38-cp38-win_amd64.whl Collecting thinc&lt;7.1.0,&gt;=7.0.8 Using cached https://files.pythonhosted.org/packages/92/39/ea2a3d5b87fd52fc865fd1ceb7b91dca1f85e227d53e7a086d260f6bcb93/thinc-7.0.8.tar.gz Collecting blis&lt;0.3.0,&gt;=0.2.1 Using cached https://files.pythonhosted.org/packages/59/9e/84a83616cbe5daa94909da38b780e93bf566dc2113c3dc35d7b4cad52f63/blis-0.2.4.tar.gz Collecting wasabi&lt;1.1.0,&gt;=0.0.9 Using cached https://files.pythonhosted.org/packages/21/e1/e4e7b754e6be3a79c400eb766fb34924a6d278c43bb828f94233e0124a21/wasabi-0.6.0-py3-none-any.whl Collecting srsly&lt;1.1.0,&gt;=0.0.6 Using cached https://files.pythonhosted.org/packages/a1/bb/0982e39b1a6dd652d7605f199cc5209746145f3a9e677c0014302cc22f66/srsly-1.0.1-cp38-cp38-win_amd64.whl Collecting numpy&gt;=1.7.0 Using cached https://files.pythonhosted.org/packages/95/47/ea0ae5a778aae07ede486f3dc7cd4b788dc53e11b01a17251b020f76a01d/numpy-1.18.1-cp38-cp38-win_amd64.whl Collecting plac&lt;1.0.0,&gt;=0.9.6 Using cached https://files.pythonhosted.org/packages/9e/9b/62c60d2f5bc135d2aa1d8c8a86aaf84edb719a59c7f11a4316259e61a298/plac-0.9.6-py2.py3-none-any.whl Collecting tqdm&lt;5.0.0,&gt;=4.10.0 Using cached https://files.pythonhosted.org/packages/72/c9/7fc20feac72e79032a7c8138fd0d395dc6d8812b5b9edf53c3afd0b31017/tqdm-4.41.1-py2.py3-none-any.whl Installing collected packages: setuptools, wheel, Cython, cymem, preshed, murmurhash, numpy, blis, wasabi, srsly, plac, tqdm, thinc Running setup.py install for preshed: started Running setup.py install for preshed: finished with status 'done' Running setup.py install for blis: started Running setup.py install for blis: finished with status 'error' ERROR: Command errored out with exit status 1: command: 'c:\users\marcos\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, 
tokenize; sys.argv[0] = '"'"'C:\\Users\\Marcos\\AppData\\Local\\Temp\\pip-install-dsthm_0n\\blis\\setup.py'"'"'; __file__='"'"'C:\\Users\\Marcos\\AppData\\Local\\Temp\\pip-install-dsthm_0n\\blis\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Marcos\AppData\Local\Temp\pip-record-md34dflk\install-record.txt' --single-version-externally-managed --prefix 'C:\Users\Marcos\AppData\Local\Temp\pip-build-env-nntoyoz6\overlay' --compile cwd: C:\Users\Marcos\AppData\Local\Temp\pip-install-dsthm_0n\blis\ Complete output (25 lines): BLIS_COMPILER? None running install running build running build_py creating build creating build\lib.win-amd64-3.8 creating build\lib.win-amd64-3.8\blis copying blis\about.py -&gt; build\lib.win-amd64-3.8\blis copying blis\benchmark.py -&gt; build\lib.win-amd64-3.8\blis copying blis\__init__.py -&gt; build\lib.win-amd64-3.8\blis creating build\lib.win-amd64-3.8\blis\tests copying blis\tests\common.py -&gt; build\lib.win-amd64-3.8\blis\tests copying blis\tests\test_dotv.py -&gt; build\lib.win-amd64-3.8\blis\tests copying blis\tests\test_gemm.py -&gt; build\lib.win-amd64-3.8\blis\tests copying blis\tests\__init__.py -&gt; build\lib.win-amd64-3.8\blis\tests copying blis\cy.pyx -&gt; build\lib.win-amd64-3.8\blis copying blis\py.pyx -&gt; build\lib.win-amd64-3.8\blis copying blis\cy.pxd -&gt; build\lib.win-amd64-3.8\blis copying blis\__init__.pxd -&gt; build\lib.win-amd64-3.8\blis running build_ext error: [WinError 2] O sistema não pode encontrar o arquivo especificado msvc py_compiler msvc {'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'HOSTTYPE': 'x86_64', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'LANG': 'C.UTF-8', 'OLDPWD': '/home/matt/repos/flame-blis', 'VIRTUAL_ENV': '/home/matt/repos/cython-blis/env3.6', 'USER': 'matt', 'PWD': '/home/matt/repos/cython-blis', 'HOME': '/home/matt', 'NAME': 'LAPTOP-OMKOB3VM', 'XDG_DATA_DIRS': 
'/usr/local/share:/usr/share:/var/lib/snapd/desktop', 'SHELL': '/bin/bash', 'TERM': 'xterm-256color', 'SHLVL': '1', 'LOGNAME': 'matt', 'PATH': '/home/matt/repos/cython-blis/env3.6/bin:/tmp/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Users/matt/Documents/cmder/vendor/conemu-maximus5/ConEmu/Scripts:/mnt/c/Users/matt/Documents/cmder/vendor/conemu-maximus5:/mnt/c/Users/matt/Documents/cmder/vendor/conemu-maximus5/ConEmu:/mnt/c/Python37/Scripts:/mnt/c/Python37:/mnt/c/Program Files (x86)/Intel/Intel(R) Management Engine Components/iCLS:/mnt/c/Program Files/Intel/Intel(R) Management Engine Components/iCLS:/mnt/c/Windows/System32:/mnt/c/Windows:/mnt/c/Windows/System32/wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0:/mnt/c/Program Files (x86)/Intel/Intel(R) Management Engine Components/DAL:/mnt/c/Program Files/Intel/Intel(R) Management Engine Components/DAL:/mnt/c/Program Files (x86)/Intel/Intel(R) Management Engine Components/IPT:/mnt/c/Program Files/Intel/Intel(R) Management Engine Components/IPT:/mnt/c/Program Files/Intel/WiFi/bin:/mnt/c/Program Files/Common Files/Intel/WirelessCommon:/mnt/c/Program Files (x86)/NVIDIA Corporation/PhysX/Common:/mnt/c/ProgramData/chocolatey/bin:/mnt/c/Program Files/Git/cmd:/mnt/c/Program Files/LLVM/bin:/mnt/c/Windows/System32:/mnt/c/Windows:/mnt/c/Windows/System32/wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0:/mnt/c/Windows/System32/OpenSSH:/mnt/c/Program Files/nodejs:/mnt/c/Users/matt/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/matt/AppData/Local/Programs/Microsoft VS Code/bin:/mnt/c/Users/matt/AppData/Roaming/npm:/snap/bin:/mnt/c/Program Files/Oracle/VirtualBox', 'PS1': '(env3.6) \\[\\e]0;\\u@\\h: \\w\\a\\]${debian_chroot:+($debian_chroot)}\\[\\033[01;32m\\]\\u@\\h\\[\\033[00m\\]:\\[\\033[01;34m\\]\\w\\[\\033[00m\\]\\$ ', 'VAGRANT_HOME': '/home/matt/.vagrant.d/', 'LESSOPEN': '| /usr/bin/lesspipe %s', '_': '/home/matt/repos/cython-blis/env3.6/bin/python'} clang -c C:\Users\Marcos\AppData\Local\Temp\pip-install-dsthm_0n\blis\blis\_src\config\bulldozer\bli_cntx_init_bulldozer.c -o C:\Users\Marcos\AppData\Local\Temp\tmpweh55tja\bli_cntx_init_bulldozer.o -O2 -funroll-all-loops -std=c99 -D_POSIX_C_SOURCE=200112L -DBLIS_VERSION_STRING="0.5.0-6" -DBLIS_IS_BUILDING_LIBRARY -Iinclude\windows-x86_64 -I.\frame\3\ -I.\frame\ind\ukernels\ -I.\frame\1m\ -I.\frame\1f\ -I.\frame\1\ -I.\frame\include -IC:\Users\Marcos\AppData\Local\Temp\pip-install-dsthm_0n\blis\blis\_src\include\windows-x86_64 ---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\users\marcos\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Marcos\\AppData\\Local\\Temp\\pip-install-dsthm_0n\\blis\\setup.py'"'"'; __file__='"'"'C:\\Users\\Marcos\\AppData\\Local\\Temp\\pip-install-dsthm_0n\\blis\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Marcos\AppData\Local\Temp\pip-record-md34dflk\install-record.txt' --single-version-externally-managed --prefix 'C:\Users\Marcos\AppData\Local\Temp\pip-build-env-nntoyoz6\overlay' --compile Check the logs for full command output. 
---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\users\marcos\appdata\local\programs\python\python38\python.exe' 'c:\users\marcos\appdata\local\programs\python\python38\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\Marcos\AppData\Local\Temp\pip-build-env-nntoyoz6\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'wheel&gt;0.32.0,&lt;0.33.0' Cython 'cymem&gt;=2.0.2,&lt;2.1.0' 'preshed&gt;=2.0.1,&lt;2.1.0' 'murmurhash&gt;=0.28.0,&lt;1.1.0' 'thinc&gt;=7.0.8,&lt;7.1.0' Check the logs for full command output. </code></pre>
<p><code>ChatterBot</code> indirectly requires <code>blis==0.2.4</code>, and that version <a href="https://pypi.org/project/blis/0.2.4/#files" rel="nofollow noreferrer">doesn't provide</a> precompiled wheels for Python 3.8. My advice is to downgrade to Python 3.7.</p> <p>If you want to compile <code>blis</code> for Python 3.8 see the compilation instructions at <a href="https://github.com/explosion/cython-blis#installation" rel="nofollow noreferrer">https://github.com/explosion/cython-blis#installation</a>:</p> <blockquote> <p>If you want to install from source and you're on Windows, you'll need to install LLVM.</p> </blockquote>
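<p>As a concrete sketch of the downgrade route (assuming a Python 3.7 interpreter is installed alongside 3.8 and the Windows <code>py</code> launcher is available):</p> <pre><code>&gt; py -3.7 -m venv chatbot-env
&gt; chatbot-env\Scripts\activate
&gt; pip install chatterbot
</code></pre>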
python|pip|installation|chatterbot
0
1,905,328
60,331,531
jGRASP wedge: could not execute python3
<p>I am trying to use jGrasp to run python3 for debugging purposes, but it is throwing the error below when I try to run my program <a href="https://i.stack.imgur.com/qX9U4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qX9U4.png" alt="jGrasp RUN I/O output"></a>I have downloaded python3 and I made sure it works in the terminal, but for some reason it is not working in jGrasp. It might have something to do with the PATH but I don't know what location I should add for jGrasp to be able to execute it properly.</p>
<p>The Python path should end with "bin", not "bin/python3". Paths need to be directories that contain executable files, not the executable files themselves. Whether you have added this path in jGRASP only, or at the OS level, you need to change it.</p>
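<p>For example (the exact install location below is an assumption; use whichever directory actually contains your <code>python3</code> executable):</p> <pre><code>Wrong: /Library/Frameworks/Python.framework/Versions/3.8/bin/python3
Right: /Library/Frameworks/Python.framework/Versions/3.8/bin
</code></pre>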
python-3.x|jgrasp
0
1,905,329
60,147,788
Import issue for setproctitle on Mac OS,
<p>In Python, if I try to <code>import setproctitle</code> I get the following import error:</p> <pre><code> ImportError: dlopen(/Users/xxx/.local/share/virtualenvs/airflow_gg-F_Vv1Po_/lib/python3.7/site-packages/setproctitle.cpython-37m-darwin.so, 2): Symbol not found: _Py_GetArgcArgv
  Referenced from: /Users/xxx/.local/share/virtualenvs/airflow_gg-F_Vv1Po_/lib/python3.7/site-packages/setproctitle.cpython-37m-darwin.so
  Expected in: flat namespace
 in /Users/xxx/.local/share/virtualenvs/airflow_gg-F_Vv1Po_/lib/python3.7/site-packages/setproctitle.cpython-37m-darwin.so
</code></pre> <p>What I have tried so far:</p> <ul> <li>Try to reinstall it (with different flags such as --upgrade and --no-cache)</li> <li>Try to use both venv and Pipenv</li> </ul> <p>Info on my system: System version: macOS 10.15.2 (19C57), Kernel version: Darwin 19.2.0</p> <p>I did not manage to find any information online for this specific import error. Any ideas?</p> <p>--- Edit</p> <p>I installed Python 3.8 from the official website and, indeed, it works (with that interpreter as base for venv). I previously had Python 3.7 installed with brew (brew install python3). I do not know why it did not work. </p>
<p>Works fine for <code>Python 3.8</code> installed directly from Python page.</p> <pre><code>&gt; python3.8 -m pip install virtualenv
&gt; python3.8 -m virtualenv -p \
    /Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8 proctest
&gt; source proctest/bin/activate
&gt; python3.8 -m pip install setproctitle
&gt; python3.8
...
...
&gt;&gt;&gt; import setproctitle
&gt;&gt;&gt;
</code></pre>
python|python-import|importerror
3
1,905,330
60,118,449
How to find the smallest four digit number, whose digits don't repeat, but add up to a randomized number
<p>I need to take a number, let's say 6, and then find the smallest 4-digit number whose digits do not repeat and add up to 6.</p> <p>For example (these will not all add up to the same number):</p> <pre><code>1023
3045
2345
</code></pre> <p>These numbers are all ok because their digits do not repeat and are four digits</p> <p>While:</p> <pre><code>1122
3344
123
</code></pre> <p>These are not ok, because they either are not four digits or their digits repeat</p> <p>I'm currently at a roadblock: I can find said four-digit number, but (1) it is in a list, which the program I need to plug this into won't accept, and (2) the digits aren't in the same order as the answers in the program (i.e. the smallest four-digit number with no repeated digits that adds up to six is 1023, but my program returns 0123, which is incorrect).</p> <p>Here is my current code:</p> <pre><code>x = 6
Sum = 0

#Find the four digit number
for i in range (999, 10000):
    #Find the sum of those numbers
    Sum = sum(map(int, str(i)))
    #Check if sum is = to x
    if Sum == x:
        num = i
        #Get rid of any identical numbers
        result = list(set(map(int, str(num))))
        #Make sure the length is 4
        if len(result) == 4:
            print(result)
            #Output [0,1,2,3]
</code></pre> <p>Any help on how I could change this to work for what I want would be great.</p>
<p>Changed your code a little. The extra length check skips numbers with repeated digits, sorting the digit set makes the group keys stable, and the last line reduces the groups to the single overall answer:</p> <pre><code>x = 6
result = {}

#Find the four digit numbers whose digits sum to x
for i in range(1000, 10000):
    if sum(map(int, str(i))) == x:
        #The set of digits, as a stable key
        aux = ''.join(sorted(set(str(i))))
        #Keep only numbers whose four digits are all different
        if len(aux) == 4:
            if not aux in result:
                result[aux] = []
            result[aux].append(i)

for k in result:
    print(k, min(result[k]))

#The single smallest valid number overall
print(min(min(v) for v in result.values()))
</code></pre>
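<p>Since only the single smallest match is wanted, a more direct sketch is to scan upward and stop at the first hit; the first qualifying number is automatically the smallest, and <code>len(set(digits)) == 4</code> enforces "four digits, none repeated" in one check:</p> <pre><code>x = 6
for i in range(1000, 10000):
    digits = str(i)
    if len(set(digits)) == 4 and sum(map(int, digits)) == x:
        print(i)   # 1023 for x = 6
        break
</code></pre>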
python|python-3.x
2
1,905,331
2,601,047
Import a python module without the .py extension
<p>I have a file called foobar (without .py extension). In the same directory I have another python file that tries to import it:</p> <pre><code>import foobar </code></pre> <p>But this only works if I rename the file to foobar.py. Is it possible to import a python module that doesn't have the .py extension?</p> <p>Update: the file has no extension because I also use it as a standalone script, and I don't want to type the .py extension to run it. </p> <p>Update2: I will go for the symlink solution mentioned below.</p>
<p>You can use the <code>imp.load_source</code> function (from the <code>imp</code> module), to load a module dynamically from a given file-system path.</p> <pre><code>import imp
foobar = imp.load_source('foobar', '/path/to/foobar')
</code></pre> <p>This <a href="https://stackoverflow.com/questions/301134/dynamic-module-import-in-python">SO discussion</a> also shows some interesting options.</p>
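<p>On Python 3, where the <code>imp</code> module is deprecated (and eventually removed), the same thing can be done with <code>importlib</code>; passing the loader explicitly matters because the file has no <code>.py</code> extension:</p> <pre><code>import importlib.util
from importlib.machinery import SourceFileLoader

loader = SourceFileLoader('foobar', '/path/to/foobar')
spec = importlib.util.spec_from_loader('foobar', loader)
foobar = importlib.util.module_from_spec(spec)
loader.exec_module(foobar)   # executes the file and populates the module
</code></pre>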
python|import
60
1,905,332
67,881,794
Fetching TFLite Version information from TFLite model
<p>I have a TFLite model. How can I fetch the version of TFLite used to create the model?</p> <p>During automation, I was fetching TFLite models and running inference over them. Currently, I am using the TFLite 2.4.1 library. Models created with a newer version, which may contain unsupported operations, need to error out.</p> <p>What is the best way of handling this? How can I get the TFLite version from the model?</p>
<p>The &quot;min_runtime_version&quot; model metadata in the TFLite model file contains the information that describes the minimal runtime version that is capable of running the given model.</p> <p>The above value in the TFLite flatbuffer schema can be read by the existing C++ and Python schema libraries. For example,</p> <pre><code>from tensorflow.lite.python import schema_py_generated as schema_fb

tflite_model = schema_fb.Model.GetRootAsModel(model_buf, 0)

# Gets metadata from the model file.
for i in range(tflite_model.MetadataLength()):
  meta = tflite_model.Metadata(i)
  if meta.Name().decode(&quot;utf-8&quot;) == &quot;min_runtime_version&quot;:
    buffer_index = meta.Buffer()
    metadata = tflite_model.Buffers(buffer_index)
    min_runtime_version_bytes = metadata.DataAsNumpy().tobytes()
</code></pre> <h2>References:</h2> <p><a href="https://github.com/tensorflow/tensorflow/blob/8ad56264b31e0ae8c984f3c2f2ef0dc18cd6540b/tensorflow/lite/schema/schema.fbs#L1156" rel="nofollow noreferrer">Model metadata table in TFLite flatbuffer schema</a></p>
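<p>The buffer typically holds a short null-padded version string, so (assuming that layout) it can be turned into something comparable against your runtime version like this:</p> <pre><code>min_runtime_version = min_runtime_version_bytes.decode('utf-8').rstrip('\x00')
print(min_runtime_version)   # e.g. '1.14.0'; reject the model if it exceeds your runtime
</code></pre>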
tensorflow|tensorflow2.0|tensorflow-lite
0
1,905,333
66,814,076
Predicting with NaN in input
<p>I trained a (0,1) model with tensorflow but without NaNs in it. Is there any way to predict some values with NaN in the input? I use 'adam' as optimizer.</p> <blockquote> <p>Making model:</p> </blockquote> <pre><code>input_size = 16
output_size = 2
hidden_layer_size = 50

model = tf.keras.Sequential([
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 1st hidden layer
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 2nd hidden layer
    tf.keras.layers.Dense(output_size, activation='softmax') # output layer
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

batch_size = 100
max_epochs = 20
early_stopping=tf.keras.callbacks.EarlyStopping()

model.fit(train_inputs, # train inputs
          train_targets, # train targets
          batch_size=batch_size, # batch size
          epochs=max_epochs, # epochs that we will train for (assuming early stopping doesn't kick in)
          callbacks=[early_stopping],
          validation_data=(validation_inputs, validation_targets), # validation data
          verbose = 1 # making sure we get enough information about the training process
          )
</code></pre> <blockquote> <p>Potential input I'd like to add:</p> </blockquote> <pre><code>x=np.array([[ 0.8048038 ,  2.22810658,  0.7184345 , -0.59266753,  1.73062328,
              0.69392477, -1.35764524, -0.55833263,  0.10620523,  1.31206921,
             -1.07966389,  1.04462389, -0.99787875,  0.797905  , -0.35954954,
              np.NaN]])
</code></pre> <blockquote> <p>The return I get:</p> </blockquote> <pre><code>array([[nan, nan]], dtype=float32)
</code></pre> <p>So is there any way to achieve it?</p>
<p>The network needs to be able to do numeric computations with the input, and NaN propagates through every layer, which is why the prediction comes back as <code>[nan, nan]</code>. You therefore have to either replace these NaNs with meaningful numbers, or you will be unable to use this data point and will have to drop it, keeping only the rows that are fully finite:</p> <pre><code>x = x[np.isfinite(x).all(axis=1)]   # keep only samples with no NaN/inf
</code></pre>
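<p>If dropping the sample is not acceptable, a common alternative is to impute the missing entries before calling <code>predict</code>. A minimal sketch, assuming zero (or a per-feature mean computed from the training data) is a sensible placeholder for your features:</p> <pre><code>import numpy as np

x_imputed = np.nan_to_num(x, nan=0.0)        # replace NaN with 0.0
# or, using per-feature training means instead of zeros:
# means = train_inputs.mean(axis=0)
# x_imputed = np.where(np.isnan(x), means, x)

model.predict(x_imputed)
</code></pre>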
python|tensorflow|keras
0
1,905,334
63,972,050
How to create a single figure from subplots in for loop matplotlib
<p>I have 4 images in numpy array format where each is a 4D (61, 73, 61, 11) and the last dimension coresponds to image channels (11 in my case). I use a for loop to iterate to the channels and at each iteration I create a subplot with 4 plots for each image. In the jupyter notebook I am able to see all the subplots but I want to create a single figure with all the subplots so I can create a single png and not 11. This is the code in matplotlib.</p> <pre><code>import maplotlib.pyplot as plt center_slices = [s//2 for s in concat_img.shape[:1]] # take the middle slice print(np.squeeze(concat_img[center_slices[0], :, :, 5]).shape) for i in range(10): f, axarr = plt.subplots(1, 4, figsize=(20,5), sharex=True); f.suptitle('Different intensity normalisation methods on brain fMRI image dual_regression + ALFF derivatives') img = axarr[0].imshow(np.squeeze(concat_img[:, :, center_slices[0], i]), cmap='gray'); axarr[0].axis('off') axarr[0].set_title('Original image') f.colorbar(img, ax=axarr[0]) img = axarr[1].imshow(np.squeeze(concat_img_white[:, :, center_slices[0], i]), cmap='gray'); axarr[1].axis('off') axarr[1].set_title('Zero mean/unit stdev') f.colorbar(img, ax=axarr[1]) img = axarr[2].imshow(np.squeeze(concat_img_zero_one[:, :, center_slices[0], i]), cmap='gray'); axarr[2].axis('off') axarr[2].set_title('[0,1] rescaling') f.colorbar(img, ax=axarr[2]) img = axarr[3].imshow(np.squeeze(concat_img_one_one[:, :, center_slices[0], i]), cmap='gray'); axarr[3].axis('off') axarr[3].set_title('[-1,1] rescaling') f.colorbar(img, ax=axarr[3]) f.subplots_adjust(wspace=0.05, hspace=0, top=0.8) # plt.savefig('./TTT.{0:07d}.png'.format(i)) # save each subplot in png plt.show(); </code></pre> <p><a href="https://i.stack.imgur.com/KlePo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KlePo.png" alt="enter image description here" /></a></p> <p>Also a print screen with the output from jupyter for the first 5 rows.</p> <p><strong>UPDATE</strong> I tried to adjust the code according to @Timo answer in the comments using the following code :</p> <pre><code>center_slices = [s//2 for s in concat_img.shape[:1]] print(np.squeeze(concat_img[center_slices[0], :, :, 5]).shape) nrows , ncols = (11, 4) fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(140, 120)) fig.suptitle('Different intensity normalisation methods on brain fMRI image dual_regression + ALFF derivatives') # f.subplots_adjust(wspace=0.05, hspace=0, top=0.8) zdata = [concat_img, concat_img_white, concat_img_zero_one, concat_img_one_one] titles =['Original image', 'Zero mean/unit stdev', '[0,1] rescaling', '[-1,1] rescaling'] for j in range(nrows): for i in range(ncols): img = zdata[i] cbar = ax[j, i].imshow(np.squeeze(img[:, :, center_slices[0], i]), cmap='gray', interpolation='nearest'); ax[j, i].axis('off') ax[j, i].set_title(f'{titles[i]},channel :{j}') fig.colorbar(cbar, ax=ax[j, i]) fig.tight_layout() </code></pre> <p>Although the images are very small and have a lot of space between despite using tight layout</p> <p><a href="https://i.stack.imgur.com/dmNwd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dmNwd.png" alt="enter image description here" /></a></p> <p><strong>Solution</strong></p> <p>I manage to produce the plot and made this helper function</p> <pre><code># Helper function def myplot(nrows, ncols, zdata, global_title, title, savefig, name=None): center_slices = [s//2 for s in zdata[0].shape[:1]] print(np.squeeze(zdata[0][center_slices[0], :, :, 5]).shape) fig, ax = plt.subplots(nrows=nrows, 
ncols=ncols, figsize=(5 * ncols, 4 * nrows)) for j in range(nrows): for i in range(ncols): img = zdata[i] img = img[:, :, center_slices[0], j] cbar = ax[j, i].imshow(np.squeeze(img), cmap='gray', interpolation='nearest', aspect='auto'); ax[j, i].axis('off') ax[j, i].set_title(f'{titles[i]},channel :{j}') fig.colorbar(cbar, ax=ax[j, i]) fig.tight_layout() fig.suptitle(global_title, fontsize=16, y=1.005) plt.show() st = fig.suptitle(global_title, fontsize=16, y= 1.005) if savefig : fig.savefig(name, bbox_extra_artists=[st], bbox_inches='tight') nrows = 11 ncols = 4 global_title ='Different intensity normalisation methods on brain fMRI image ' zdata = [concat_img, concat_img_white, concat_img_zero_one , concat_img_one_one] titles =['Original image', 'Zero mean/unit stdev', '[0,1] rescaling', '[-1,1] rescaling'] myplot(nrows, ncols, zdata, global_title, titles, False) </code></pre>
<p>This can be done by creating an axes instance with <code>nrows != 1</code>. I have attached an example below.</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np

nrows = 5
ncols = 4

xdata = np.linspace(-np.pi, np.pi)
ydata = 1 * xdata
X, Y = np.meshgrid(xdata, ydata)
zdata = np.sin(X + Y)

fig, ax = plt.subplots(nrows=nrows, ncols=ncols, sharex=True, figsize=(nrows * 2.2, 2 * ncols))

for j in range(nrows):
    for i in range(ncols):
        cbar = ax[j, i].contourf(zdata)
        fig.colorbar(cbar, ax=ax[j, i])

fig.tight_layout()
</code></pre>
python|matplotlib|visualization
0
1,905,335
42,685,265
Beautifulsoup can not get content from a tag with hidden attributes
<pre><code>&lt;a id="ember1601" role="button" href="/carsearch/book?piid=AQAQAQRRg2INmYAyjZmAMwmKOGATj2qoYBQANIAVCeAZgB6fUEsAED&amp;amp;totalPriceShown=71.66&amp;amp;searchKey=-575257062&amp;amp;offerQualifiers=GreatDeal" data-book-button="book-EY-EC-Car" target="_self" class="ember-view btn btn-secondary btn-action"&gt;&lt;span class="btn-label"&gt; &lt;span aria-hidden="true"&gt; &lt;span class="visuallyhidden"&gt; Reserve Item 1, Economy from Economy Rent a Car Rental Company at $72 total &lt;/span&gt;Reserve &lt;/span&gt; &lt;/span&gt; &lt;/a&gt; </code></pre> <p>Hi, I am new to python I can not get the price &amp;72 under the <code>&lt;span class="visuallyhidden"&gt;</code>,also how can I get the href links in tag <code>&lt;a&gt;</code> on the first line, please help, thanks by the way, i am using beautifulsoup lib, if other lib can help, please let me know. thanks</p>
<pre><code>In [9]: soup = BeautifulSoup(html, 'lxml')  # html is the code you posted

In [10]: soup.find("span", class_="visuallyhidden").text
Out[10]: '\n Reserve Item 1, Economy from Economy Rent a Car Rental Company at $72 total\n '

In [11]: soup.a["href"]
Out[11]: '/carsearch/book?piid=AQAQAQRRg2INmYAyjZmAMwmKOGATj2qoYBQANIAVCeAZgB6fUEsAED&amp;totalPriceShown=71.66&amp;searchKey=-575257062&amp;offerQualifiers=GreatDeal'
</code></pre> <p>If you need to extract part of the text from a string, you need to use a regex:</p> <pre><code>In [12]: text = soup.find("span", class_="visuallyhidden").text

In [15]: re.search(r'\$\d+', text).group()
Out[15]: '$72'
</code></pre>
python|beautifulsoup
1
1,905,336
42,793,539
How to remove the duplicate values from only one element of the dictionary at a time?
<p>In this given dictionary <code>defaultdict(dict)</code> type data:</p> <pre><code>{726: {'X': [3.5, 3.5, 2.0], 'Y': [2.0, 0.0, 0.0], 'chr': [2, 2, 2]},
 128: {'X': [0.5, 4.0, 4.0], 'Y': [4.0, 3.5, 3.5], 'chr': [3, 3, 3]}}
</code></pre> <p>the numeric values <code>726</code> and <code>128</code> are the keys and are unique. The other elements are the values tagged with a <code>unique identifier</code> and are also unique.</p> <p>I want to remove the duplicates only from the <code>list values</code> in <code>chr</code> <strong>without affecting the data or order of the values</strong> in any other parts of the dictionary.</p> <p>How may I accomplish that?</p> <p>Thanks,</p>
<p>You can use a nested dict comprehension and convert the list to <code>set</code> in order to get a unique set of items. Since all the items within <code>chr</code>'s value are the same, the set will generate 1 item and thus the order doesn't matter in this case. Otherwise you can use <code>OrderedDict.fromkeys()</code> to get a unique set of your items by preserving the order.</p> <pre><code>In [4]: {k: {k2: set(v2) if k2=='chr' else v2 for k2, v2 in v.items()} for k, v in d.items()}
Out[4]:
{128: {'Y': [4.0, 3.5, 3.5], 'X': [0.5, 4.0, 4.0], 'chr': {3}},
 726: {'Y': [2.0, 0.0, 0.0], 'X': [3.5, 3.5, 2.0], 'chr': {2}}}
</code></pre>
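<p>If the order within <code>chr</code> ever matters (e.g. mixed values), <code>dict.fromkeys</code> gives an order-preserving de-duplication on Python 3.7+ while keeping the result a list:</p> <pre><code>In [5]: {k: {k2: list(dict.fromkeys(v2)) if k2=='chr' else v2 for k2, v2 in v.items()} for k, v in d.items()}
Out[5]:
{128: {'Y': [4.0, 3.5, 3.5], 'X': [0.5, 4.0, 4.0], 'chr': [3]},
 726: {'Y': [2.0, 0.0, 0.0], 'X': [3.5, 3.5, 2.0], 'chr': [2]}}
</code></pre>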
python|list|dictionary|duplicates|defaultdict
1
1,905,337
42,794,933
How to count and compare in Django
<p>Currently have a database with 10 questions which posts a number from 1-4 i just want to add that number up to create a total then only show the closest match for that total number <strong>Models.py</strong></p> <pre><code>class Question(models.Model): name = models.CharField(max_length=10, primary_key=True) question1 = models.CharField(max_length=50, choices=Question1_CHOICES) question2 = models.CharField(max_length=50, choices=Question2_CHOICES) question3 = models.CharField(max_length=50, choices=Question3_CHOICES) question4 = models.CharField(max_length=50, choices=Question4_CHOICES) question5 = models.CharField(max_length=50, choices=Question5_CHOICES) question6 = models.CharField(max_length=50, choices=Question6_CHOICES) question7 = models.CharField(max_length=50, choices=Question7_CHOICES) question8 = models.CharField(max_length=50, choices=Question8_CHOICES) question9 = models.CharField(max_length=50, choices=Question9_CHOICES) question10 = models.CharField(max_length=50, choices=Question10_CHOICES) </code></pre> <p>Views.py</p> <pre><code>def comparison(request): return render(request, 'music/compare.html', dict(rows=Question.objects.all(), total=Question.objects.count())) </code></pre> <p>I tried using total with count but I don't think its correct. Copy of database layout attached <a href="https://i.stack.imgur.com/LbMD3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LbMD3.png" alt="here"></a></p> <p>Edit - Addded compare.html</p> <pre><code> {% extends 'music/index.html' %} {% block body %} &lt;body&gt; &lt;table&gt; &lt;table&gt; &lt;th&gt; &lt;tr&gt; &lt;table border="3"&gt; {% for row in rows %} &lt;td&gt;&lt;strong&gt;Name&lt;/strong&gt;&lt;/td&gt; &lt;td&gt;Question 1 &lt;/td&gt; &lt;td&gt;Question 2 &lt;/td&gt; &lt;td&gt;Question 3 &lt;/td&gt; &lt;td&gt;Question 4 &lt;/td&gt; &lt;td&gt;Question 5 &lt;/td&gt; &lt;td&gt;Question 6 &lt;/td&gt; &lt;td&gt;Question 7 &lt;/td&gt; &lt;td&gt;Question 8 &lt;/td&gt; &lt;td&gt;Question 9 &lt;/td&gt; &lt;td&gt;Question 10 &lt;/td&gt; &lt;td&gt;Total &lt;/td&gt; &lt;/tr&gt; &lt;/th&gt; &lt;tr&gt; &lt;td&gt;{{row.name}} &lt;/td&gt; &lt;td&gt;{{row.question1}} &lt;/td&gt; &lt;td&gt;{{row.question2}} &lt;/td&gt; &lt;td&gt;{{row.question3}} &lt;/td&gt; &lt;td&gt;{{row.question4}} &lt;/td&gt; &lt;td&gt;{{row.question5}} &lt;/td&gt; &lt;td&gt;{{row.question6}} &lt;/td&gt; &lt;td&gt;{{row.question7}} &lt;/td&gt; &lt;td&gt;{{row.question8}} &lt;/td&gt; &lt;td&gt;{{row.question9}} &lt;/td&gt; &lt;td&gt;{{row.question10}} &lt;/td&gt; &lt;td&gt;{{ question.s }} &lt;/td&gt; &lt;/tr&gt; &lt;/tr&gt; {% endfor %} &lt;/table&gt; &lt;/body&gt; {%endblock% </code></pre> <p>}</p>
<p>If you want the sum of <code>Question1</code> through <code>Question10</code> for each row then do this:</p> <pre><code>from django.db.models import F questions = Question.objects.annotate(s=F('question1') + F('question2') + F('question3') + F('question4') + F('question5') + F('question6') + F('question7') + F('question8') + F('question9') + F('question10')) </code></pre> <p>This will produce <code>x</code> number of results equal to the number of rows. Then you can do:</p> <pre><code>for question in questions: print(question.s) # prints the sum of Question1 - Question10 </code></pre> <p>Or, if you just want the values (not <code>Question</code> objects) then:</p> <pre><code>sums = Question.objects.annotate(s=F('question1') + F('question2') + F('question3') + F('question4') + F('question5') + F('question6') + F('question7') + F('question8') + F('question9') + F('question10')).values('s') </code></pre> <p>[UPDATE]: It seems you are not looping correctly the <code>questions</code> <code>QuerySet</code>.</p> <p>Here is what you have to do:</p> <p>in your <code>views.py</code> have it like this:</p> <pre><code>questions = Question.objects.annotate(s=F('question1') + F('question2') + F('question3') + F('question4') + F('question5') + F('question6') + F('question7') + F('question8') + F('question9') + F('question10')) ... return render(request, 'music/compare.html', {'questions': questions}) </code></pre> <p>And then then in your HTML have it like this:</p> <pre><code>{% for question in questions %} &lt;td&gt;Question 1 &lt;/td&gt; ... &lt;td&gt;{{question.name}} &lt;/td&gt; ... &lt;td&gt;{{ question.s }} &lt;/td&gt; </code></pre> <p><strong>Edit 2 - changed views.py</strong> </p> <pre><code>def compare(): questions = Question.objects.annotate( s=F('question1') + F('question2') + F('question3') + F('question4') + F('question5') + F('question6') + F( 'question7') + F('question8') + F('question9') + F('question10')) ... return render(request, 'music/compare.html', {'questions': questions}) </code></pre> <p><strong>compare.html</strong> </p> <pre><code>{% for question in questions %} &lt;td&gt;Question 1 &lt;/td&gt; ... &lt;td&gt;{{question.name}} &lt;/td&gt; ... &lt;td&gt;{{ question.s }} &lt;/td&gt; {%endfor%} &lt;/body&gt; </code></pre>
python|django
1
1,905,338
50,845,037
How to reverse the virtual address of string from a core dump?
<p>I'm trying to find a specific string in a process's memory. Specifically I want to find the virtual address where it's stored. I wrote a python script to call <code>gcore</code> on the process and scan the resulting file for all matches. Then I call <code>pmap</code> and iterate through the entries there. My idea is to find the section of memory each index corresponds to, then subtract the sum of the sizes of previous sections to get the offset in the correct section, add it to the base, and get the virtual address. However, when I search for strings at the virtual addresses I'm computing using gdb, I don't find the right strings. Why doesn't my method work? Does <code>gcore</code> not dump the entire contents of virtual memory in order?</p> <pre><code>#!/usr/bin/python3 import sys import ctypes import ctypes.util import subprocess import os import ptrace import re if(len(sys.argv) != 2): print("Usage: search_and_replace.py target_pid") sys.exit(-1) pid = sys.argv[1] if not pid.isdigit(): print("Invalid PID specified. Make sure PID is an integer") sys.exit(-1) bash_cmd = "sudo gcore -a {}".format(pid) os.system(bash_cmd) with open("core." + sys.argv[1], 'rb') as f: s = f.read() # with open("all.dump", 'rb') as f: # s = f.read() str_query = b'a random string in program\'s memory' str_replc = b'This is an inserted string, replacing the original.' indices = [] for match in re.finditer(str_query, s): indices.append(match.start()) print("number of indices is " + str(len(indices))) #index = s.find(str_query) # print("offset is " + str(index)) # if(index == 0): # print("error: String not found") # sys.exit(-1) bash_cmd = "sudo pmap -x {} &gt; maps".format(pid) print(bash_cmd) subprocess.call(bash_cmd, shell=True) with open("maps") as m: lines = m.readlines() #calculate the virtual address of the targeted string the running process via parsing the pmap output pages = [] v_addrs = [] for index in indices: sum = 0 offset = 0 v_addr = 0 #print(index) for i in range(2, len(lines) - 2): line = lines[i] items = line.split() v_addr = int(items[0], 16) old_sum = sum sum += int(items[1]) * 1024 if sum &gt; index: offset = index - old_sum print("max is " + hex(v_addr + int(items[1]) * 1024)) print("offset is " + str(offset) + " hex " + hex(offset)) print("final va is " + hex(v_addr + offset)) pages.append(hex(v_addr) + ", " + hex(v_addr + int(items[1]) * 1024)) v_addrs.append(hex(v_addr + offset)) break print("base va is " + hex(v_addr)) v_addr += offset for page in set(pages): print(page) for va in v_addrs: print(va) </code></pre> <p>On a related note, I also tried to use gdb to scan the file manually--it doesn't seem to find nearly as many matches when I use its <code>find</code> command to scan for the string in the region of memory in question (exact numbers vary greatly). Why is that?</p>
<p>You can use python code to locate various things in core files. The <a href="https://github.com/wackrat/structer" rel="nofollow noreferrer">structer</a> package includes an <code>elf</code> module whose <code>Elf</code> class provides methods for that. The following output from a <code>gdb</code> session has examples of how to use that code.</p> <p>The first excerpt of that session shows <code>gdb</code> opening a core file which was generated by <code>gcore</code>, and providing some data for the subsequent searches.</p> <pre><code>18:33:00 $ gdb -q /home/efuller/gnu/bin/gdb core.17856 Reading symbols from /home/efuller/gnu/bin/gdb...done. [New LWP 17856] [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Core was generated by `/home/efuller/gnu/bin/gdb /home/efuller/gnu/bin/gdb'. Program terminated with signal SIGINT, Interrupt. #0 0x00007ffff62c5660 in __poll_nocancel () at ../sysdeps/unix/syscall-template.S:84 84 ../sysdeps/unix/syscall-template.S: No such file or directory. (gdb) backtrace #0 0x00007ffff62c5660 in __poll_nocancel () at ../sysdeps/unix/syscall-template.S:84 #1 0x00005555557f7ea6 in gdb_wait_for_event (block=1) at event-loop.c:772 #2 0x00005555557f7185 in gdb_do_one_event () at event-loop.c:347 #3 0x00005555557f71bd in start_event_loop () at event-loop.c:371 #4 0x00005555557f003a in captured_command_loop (data=0x0) at main.c:324 #5 0x00005555557eb2e9 in catch_errors (func=0x5555557efff8 &lt;captured_command_loop(void*)&gt;, func_args=0x0, errstring=0x555555b4f733 "", mask=RETURN_MASK_ALL) at exceptions.c:236 #6 0x00005555557f16e2 in captured_main (data=0x7fffffffea10) at main.c:1149 #7 0x00005555557f170b in gdb_main (args=0x7fffffffea10) at main.c:1159 #8 0x00005555555f2daa in main (argc=2, argv=0x7fffffffeb18) at gdb.c:32 (gdb) frame 6 #6 0x00005555557f16e2 in captured_main (data=0x7fffffffea10) at main.c:1149 1149 catch_errors (captured_command_loop, 0, "", RETURN_MASK_ALL); (gdb) info locals context = 0x7fffffffea10 argc = 2 argv = 0x7fffffffeb18 quiet = 0 set_args = 0 inhibit_home_gdbinit = 0 symarg = 0x7fffffffed8e "/home/efuller/gnu/bin/gdb" execarg = 0x7fffffffed8e "/home/efuller/gnu/bin/gdb" pidarg = 0x0 corearg = 0x0 pid_or_core_arg = 0x0 cdarg = 0x0 ttyarg = 0x0 print_help = 0 print_version = 0 print_configuration = 0 cmdarg_vec = 0x0 cmdarg_p = 0x0 dirarg = 0x555555fdeb80 dirsize = 1 ndir = 0 system_gdbinit = 0x0 home_gdbinit = 0x555556174960 "/home/efuller/.gdbinit" local_gdbinit = 0x0 i = 0 save_auto_load = 1 objfile = 0x0 pre_stat_chain = 0x555555b2c000 &lt;sentinel_cleanup&gt; (gdb) </code></pre> <p>The next excerpt shows <code>gdb</code> importing the python code, and performing two searches based on the value of a local variable. The first search shows multiple addresses at which that value occurs (the value of <code>symarg</code> and <code>execarg</code> is among them). The <code>findbytes</code> method requires a <code>bytes</code> object, not a <code>str</code> object. 
The second search shows just one address which contains the address of the first match from the first search, which happens to have a name in the symbol table.</p> <pre><code>(gdb) pi &gt;&gt;&gt; from structer import memmap, elf &gt;&gt;&gt; core = elf.Elf(memmap('core.17856')) &gt;&gt;&gt; from pprint import pprint &gt;&gt;&gt; (gdb) python pprint(tuple(hex(a) for a in core.findbytes(b"/home/efuller/gnu/bin/gdb"))) ('0x555555fdef30', '0x55555606fce0', '0x55555614ff72', '0x5555562496a0', '0x55555624b915', '0x55555625f250', '0x5555562c6c4b', '0x55555689f2b5', '0x7ffff5f2d490', '0x7fffffffed74', '0x7fffffffed8e', '0x7fffffffedf0', '0x7fffffffefde') (gdb) python pprint(tuple(hex(a) for a in core.findwords(0x555555fdef30))) ('0x555555faea38',) (gdb) x/a 0x555555faea38 0x555555faea38 &lt;_ZL16gdb_program_name&gt;: 0x555555fdef30 (gdb) </code></pre> <p>The next excerpt shows other variations on the search. Searching for the <code>dirname</code> of the first search pattern turns up multiple hits, which include all of the hits from the first search. The subsequent search filters out the common hits by requiring a null terminator, and the one after that filters out hits which do not begin with a null terminator. Those last two searches report the same results, although the addresses differ by one, because the searches which require a leading null point at that leading null. </p> <pre><code>(gdb) python pprint(tuple(hex(a) for a in core.findbytes(b"/home/efuller/gnu/bin"))) ('0x555555b4f701', '0x555555bd33f0', '0x555555fdef30', '0x55555606fce0', '0x55555614ff72', '0x5555562496a0', '0x55555624b915', '0x55555625f250', '0x5555562c6c4b', '0x55555689f2b5', '0x7ffff5f2d490', '0x7fffffffed74', '0x7fffffffed8e', '0x7fffffffedf0', '0x7fffffffefde') (gdb) python pprint(tuple(hex(a) for a in core.findbytes(b"/home/efuller/gnu/bin\x00"))) ('0x555555b4f701', '0x555555bd33f0') (gdb) python pprint(tuple(hex(a) for a in core.findbytes(b"\x00/home/efuller/gnu/bin\x00"))) ('0x555555b4f700', '0x555555bd33ef') (gdb) </code></pre> <p>The final excerpt separates the hits from the first search into two cases, those with leading nulls and those without leading nulls. 
The latter uses the most general type of search (the one that both <code>findbytes</code> and <code>findwords</code> rely on) so that it can include the non-null characters preceding the fixed part of the search pattern.</p> <pre><code>(gdb) python pprint(tuple(hex(a) for a in core.findbytes(b"\x00/home/efuller/gnu/bin/gdb"))) ('0x555555fdef2f', '0x55555606fcdf', '0x55555624969f', '0x55555625f24f', '0x7fffffffed73', '0x7fffffffed8d', '0x7fffffffefdd') (gdb) python import re (gdb) python pprint(tuple(hex(a) for a in core.find(re.compile(rb"\x00[^\x00]+/home/efuller/gnu/bin/gdb")))) ('0x55555614ff6f', '0x55555624b8ff', '0x5555562c6c37', '0x55555689f297', '0x7ffff5f2d487', '0x7fffffffeded') (gdb) x/s 0x55555614ff6f + 1 0x55555614ff70: "_=/home/efuller/gnu/bin/gdb" (gdb) </code></pre> <p>The <code>+ 1</code> in the last command skips the leading null in that search hit, although that could also be incorporated into the search code, as follows.</p> <pre><code>(gdb) python pprint(tuple(hex(a+1) for a in core.find(re.compile(rb"\x00[^\x00]+/home/efuller/gnu/bin/gdb")))) ('0x55555614ff70', '0x55555624b900', '0x5555562c6c38', '0x55555689f298', '0x7ffff5f2d488', '0x7fffffffedee') (gdb) </code></pre> <p>The <a href="https://github.com/wackrat/structer" rel="nofollow noreferrer">structer</a> code does not require <code>gdb</code>; it can run in a python interpreter outside of gdb. It is not compatible with python2, so running it within <code>gdb</code> requires a <code>gdb</code> binary linked against python3.5.</p> <p>Searching for patterns in a core file can report results which are not reported by the search methods in the <a href="https://github.com/wackrat/structer" rel="nofollow noreferrer">structer</a> code. There are two reasons for that. The <a href="https://github.com/wackrat/structer" rel="nofollow noreferrer">structer</a> code only searches the load segments, so it will not find the contents of note segments, which contains various things which do not correspond to virtual addresses in the core. The <a href="https://github.com/wackrat/structer" rel="nofollow noreferrer">structer</a> code does not find results which span multiple load segments, if two adjacent segments have a gap (an unmapped region between the segments). The code combines adjacent segments which are contiguous in the virtual address space, so a search result need not be confined to a single segment.</p>
python|gdb|gcore
1
1,905,339
50,906,372
word2Vec and abbreviations
<p>I am working on a text classification task where my dataset contains a lot of abbreviations and proper nouns. For instance: <strong>Milka choc. bar</strong>.<br> My idea is to use a bidirectional LSTM model with word2vec embeddings.<br> And here is my problem: how do I encode words that do not appear in the dictionary? I partially solved this problem by merging pre-trained vectors with randomly initialized ones. Here is my implementation:</p> <pre><code>import gensim
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess
from gensim.models.keyedvectors import KeyedVectors

word_vectors = KeyedVectors.load_word2vec_format('ru.vec', binary=False, unicode_errors='ignore')

EMBEDDING_DIM=300
vocabulary_size=min(len(word_index)+1,num_words)
embedding_matrix = np.zeros((vocabulary_size, EMBEDDING_DIM))

for word, i in word_index.items():
    if i&gt;=num_words:
        continue
    try:
        embedding_vector = word_vectors[word]
        embedding_matrix[i] = embedding_vector
    except KeyError:
        embedding_matrix[i]=np.random.normal(0,np.sqrt(0.25),EMBEDDING_DIM)

def LSTMModel(X,words_nb, embed_dim, num_classes):
    _input = Input(shape=(X.shape[1],))
    X = embedding_layer = Embedding(words_nb, embed_dim, weights=[embedding_matrix], trainable=True)(_input)
    X = The_rest_of__the_LSTM_model()(X)
</code></pre> <p>Do you think that allowing the model to adjust the embedding weights is a good idea? Could you please tell me how I can encode words like <strong>choc</strong>? Obviously, this abbreviation stands for <strong>chocolate</strong>. </p>
<p>It is often not a good idea to adjust word2vec embeddings if you do not have a sufficiently large corpus in your training. To clarify that, take an example where your corpus has <em>television</em> but not <em>TV</em>. Even though they might both have word2vec embeddings, after training only <em>television</em> will be adjusted and not <em>TV</em>. So you disrupt the information from word2vec.</p> <p>To solve this problem you have 3 options:</p> <ol> <li>You let the LSTM in the upper layer figure out what the word might mean based on its context. For example, given <em>I like choc.</em>, the LSTM can figure out that it refers to an object. This was demonstrated by <a href="https://arxiv.org/abs/1410.3916" rel="nofollow noreferrer">Memory Networks</a>.</li> <li>Easy option: pre-process and canonicalise as much as you can before passing to the model (a small sketch follows below). Spell checkers often capture these very well and are really fast.</li> <li>You can use character encoding alongside word2vec. This is employed in many of the question answering models such as <a href="https://arxiv.org/abs/1611.01603" rel="nofollow noreferrer">BiDAF</a> where the character representation is merged with word2vec so you have some information relating characters to words. In this case, <em>choc</em> might be similar to <em>chocolate</em>.</li> </ol>
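<p>As a sketch of option 2, a small hand-built canonicalisation table applied to the tokens before the embedding lookup; the abbreviation list itself is domain-specific and would need to be curated from your own corpus:</p> <pre><code># hypothetical abbreviation table; extend it from your own data
CANONICAL = {'choc': 'chocolate', 'choc.': 'chocolate', 'tv': 'television'}

def canonicalise(tokens):
    return [CANONICAL.get(tok.lower(), tok) for tok in tokens]

canonicalise(['Milka', 'choc.', 'bar'])   # ['Milka', 'chocolate', 'bar']
</code></pre>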
python|keras|nlp|word2vec
1
1,905,340
3,955,196
Python string to integer value
<p>I'd like to know how to convert strings in Python to their corresponding integer values, like so:</p> <p><code>&gt;&gt;&gt;print WhateverFunctionDoesThis('\x41\x42')</code> </p> <p><code>&gt;&gt;&gt;16706</code></p> <p>I've searched around but haven't been able to find an easy way to do this. </p> <p>Thank you.</p>
<pre><code>&gt;&gt;&gt; import struct
&gt;&gt;&gt; struct.unpack("&gt;h",'\x41\x42')
(16706,)
&gt;&gt;&gt; struct.unpack("&gt;h",'\x41\x42')[0]
16706
</code></pre> <p>For other format chars see <a href="http://docs.python.org/library/struct.html#format-characters" rel="nofollow">the documentation</a></p>
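<p>On Python 3, where the literal would be a <code>bytes</code> object, <code>int.from_bytes</code> does the same without <code>struct</code>:</p> <pre><code>&gt;&gt;&gt; int.from_bytes(b'\x41\x42', byteorder='big')
16706
</code></pre>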
python
7
1,905,341
3,539,107
Python: rewinding one line in file when iterating with f.next()
<p>Python's f.tell doesn't work as I expected when you iterate over a file with f.next():</p> <pre><code>&gt;&gt;&gt; f=open(".bash_profile", "r")
&gt;&gt;&gt; f.tell()
0
&gt;&gt;&gt; f.next()
"alias rm='rm -i'\n"
&gt;&gt;&gt; f.tell()
397
&gt;&gt;&gt; f.next()
"alias cp='cp -i'\n"
&gt;&gt;&gt; f.tell()
397
&gt;&gt;&gt; f.next()
"alias mv='mv -i'\n"
&gt;&gt;&gt; f.tell()
397
</code></pre> <p>Looks like it gives you the position of the buffer rather than the position of what you just got with next().</p> <p>I've previously used the seek/tell <a href="https://stackoverflow.com/questions/3505479/python-undo-a-python-file-readline-operation-so-file-pointer-is-back-in-origin">trick</a> to rewind one line when iterating over a file with readline(). Is there a way to rewind one line when using next()?</p>
<p>No. I would make an adapter that largely forwarded all calls, but kept a copy of the last line when you did <code>next</code> and then let you call a different method to make that line pop out again.</p> <p>I would actually make the adapter be an adapter that could wrap any iterable instead of a wrapper for file because that sounds like it would be frequently useful in other contexts.</p> <p>Alex's suggestion of using the <code>itertools.tee</code> adapter also works, but I think writing your own iterator adapter to handle this case in general would be cleaner.</p> <p>Here is an example:</p> <pre><code>class rewindable_iterator(object):
    not_started = object()

    def __init__(self, iterator):
        self._iter = iter(iterator)
        self._use_save = False
        self._save = self.not_started

    def __iter__(self):
        return self

    def next(self):
        if self._use_save:
            self._use_save = False
        else:
            self._save = self._iter.next()
        return self._save

    def backup(self):
        if self._use_save:
            raise RuntimeError("Tried to backup more than one step.")
        elif self._save is self.not_started:
            raise RuntimeError("Can't backup past the beginning.")
        self._use_save = True


fiter = rewindable_iterator(file('file.txt', 'r'))
for line in fiter:
    result = process_line(line)
    if result is DoOver:
        fiter.backup()
</code></pre> <p>This wouldn't be too hard to extend into something that allowed you to backup by more than just one value.</p>
python|next|seek
12
1,905,342
56,484,263
Distinct SQLite query-result to Python List or Tuple
<p>I have a query that returns e.g. the following result:</p> <pre><code>Row1: "Schmidt"
Row2: "Schmidt, Meier"
Row3: "Mustermann, Schmidt"
</code></pre> <p>Question: how do I get the results into a Python tuple or list, etc.? I would like the following list:</p> <pre><code>"Meier, Mustermann, Schmidt"
</code></pre> <p>Each name appears only once.</p> <p>Python code which executes the query (used to populate entries for a comboBox):</p> <pre><code>class DatabaseManager(object):
    def __init__(self, db):
        self.conn = sqlite3.connect(db)
        self.cur = self.conn.cursor()

    def get_names(self):
        sSql = "SELECT DISTINCT name "\
               " FROM patient "\
               " ORDER BY 1"
        return self.cur.execute(sSql)
</code></pre> <p>And this is called from an instance which populates the results into a comboBox:</p> <pre><code>def populate_names(self, combobox):
    rows = self.db.get_names()
    for row in rows:
        combobox.addItem(row[0])
</code></pre>
<p>Create an empty list to hold the names.</p> <p>Loop through each row in the query result, split the comma-separated field into individual names, and add each name that is not already in the list.</p> <p>Something like this:</p> <pre><code>names = []
for row in rows:                      # rows come back from cur.execute(...)
    for name in row[0].split(', '):   # a row is a tuple; split "Schmidt, Meier"
        if name not in names:
            names.append(name)
names.sort()
</code></pre>
python|sqlite
0
1,905,343
56,775,729
how can i extract images from the last decoder layer "logits" after training my neural network?
<p>I'm training a neural network model using TensorFlow for image segmentation, and I want to be able to extract the images after training from the final logits layer.</p> <p>Here is the decoder part of my model:</p> <p>DECODER</p> <p>Upsampling layer 1:</p> <pre><code>upsample1 = tf.image.resize_images(pool5, size=(200, 200), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
</code></pre> <p>Deconvolutional layer 1:</p> <pre><code>deconv1 = tf.layers.conv2d_transpose(inputs=upsample1, filters=512, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.nn.relu)
deconv1bis = tf.layers.conv2d_transpose(inputs=deconv1, filters=512, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.nn.relu)
deconv1bisbis = tf.layers.conv2d_transpose(inputs=deconv1bis, filters=512, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.nn.relu)
</code></pre> <p>Upsampling layer 2:</p> <pre><code>upsample2 = tf.image.resize_images(deconv1bisbis, size=(200, 200), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
</code></pre> <p>Deconvolutional layer 2:</p> <pre><code>deconv2 = tf.layers.conv2d_transpose(inputs=upsample2, filters=512, strides=(1, 1), kernel_size=(3, 3), padding='same', activation=tf.nn.relu)
deconv2bis = tf.layers.conv2d_transpose(inputs=deconv2, filters=512, strides=(1, 1), kernel_size=(3, 3), padding='same', activation=tf.nn.relu)
deconv2bisbis = tf.layers.conv2d_transpose(inputs=deconv2bis, filters=512, strides=(1, 1), kernel_size=(3, 3), padding='same', activation=tf.nn.relu)
</code></pre> <p>Upsampling layer 3:</p> <pre><code>upsample3 = tf.image.resize_images(deconv2bisbis, size=(200, 200), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
</code></pre> <p>Deconvolutional layer 3:</p> <pre><code>deconv3 = tf.layers.conv2d_transpose(inputs=upsample3, filters=256, strides=(1, 1), kernel_size=(3, 3), padding='same', activation=tf.nn.relu)
deconv3bis = tf.layers.conv2d_transpose(inputs=deconv3, filters=256, strides=(1, 1), kernel_size=(3, 3), padding='same', activation=tf.nn.relu)
deconv3bisbis = tf.layers.conv2d_transpose(inputs=deconv3bis, filters=512, strides=(1, 1), kernel_size=(3, 3), padding='same', activation=tf.nn.relu)
</code></pre> <p>Upsampling layer 4:</p> <pre><code>upsample4 = tf.image.resize_images(deconv3bisbis, size=(200, 200), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
</code></pre> <p>Deconvolutional layer 4:</p> <pre><code>deconv4 = tf.layers.conv2d_transpose(inputs=upsample4, filters=128, strides=(1, 1), kernel_size=(3, 3), padding='same', activation=tf.nn.relu)
deconv4bis = tf.layers.conv2d_transpose(inputs=deconv4, filters=128, strides=(1, 1), kernel_size=(3, 3), padding='same', activation=tf.nn.relu)
</code></pre> <p>Upsampling layer 5:</p> <pre><code>upsample5 = tf.image.resize_images(deconv4bis, size=(200, 200), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
</code></pre> <p>Deconvolutional layer 5:</p> <pre><code>deconv5 = tf.layers.conv2d_transpose(inputs=upsample5, filters=64, strides=(1, 1), kernel_size=(3, 3), padding='same', activation=tf.nn.relu)
deconv5bis = tf.layers.conv2d_transpose(inputs=deconv5, filters=64, strides=(1, 1), kernel_size=(3, 3), padding='same', activation=tf.nn.relu)
</code></pre> <p>Logits layer:</p> <pre><code>logits = tf.layers.dense(inputs=deconv5bis, units=1, activation=tf.nn.relu)
</code></pre> <p>Does anyone have an idea how I can do that?</p>
<p>I guess what you mean by:</p> <blockquote> <p>I want to be able to extract the images after training, from the final logits layer</p> </blockquote> <p>Well, it doesn't actually work that way. But don't worry: I guess you meant that you want to see the segmentation or the masks on the images after you finish training your neural network.</p> <p>Here we are talking about the prediction part, after the training is done.</p> <p>Once training is done, you can take any image you want and apply the model to output the segmentation results on it.</p> <p>The output will be in the form of a number (probability) in the case of a classification problem, or a sequence of integers that locate the mask on the image, etc.</p> <p>You may find the answer here: <a href="https://stackoverflow.com/questions/51718341/tensorflow-how-to-run-prediction-using-image-as-input-for-a-saved-model">Tensorflow: how to run prediction (using an image as input) for a trained model?</a></p> <p>Just as a helpful note, I highly recommend starting with <code>Keras</code> if you are new to deep learning. <code>Tensorflow</code> is a great tool, but it's low level and requires more complicated details that you don't need to know yet.</p> <p>I hope this helps.</p>
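<p>As a minimal sketch of that prediction step (assuming the TF1-style graph from the question, with <code>inputs</code> as the model's input placeholder and an open <code>Session</code> holding the trained weights):</p> <pre><code>import numpy as np

# forward pass on a batch of new images
pred = sess.run(logits, feed_dict={inputs: image_batch})
# threshold the per-pixel output to a binary segmentation mask
# (assuming the output values lie in [0, 1])
mask = (pred &gt; 0.5).astype(np.uint8)
</code></pre>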
python|tensorflow
0
1,905,344
56,471,211
If I take a user supplied value as a function parameter, how do I make it a global variable?
<p>I'm writing a program that can riffle shuffle a given sequence (list l), m times. My function takes in the list l and the number m as inputs, but I've defined the shuffle itself for one shuffle and then used a for loop to do it m times. However, the for loop does not take the user-assigned value of m.</p> <p>I'm a Python noob, so it's likely I'm missing a simple thing. I've tried using global m to (re)define m within my function, but either I don't know how to do it, or it doesn't seem to work.</p> <pre><code>def riffle_shuffle(l, global m): #global m goes here?
    r = np.random.binomial(len(l),0.5)
    sd1 = l[:r]
    sd2 = l[r:]
    fl = []
    c = [sd2,sd1]
    l2 = sd2+sd1
    for i in range(1,len(l) + 1):
        x = [sd2,sd1]
        y = [(len(sd2))/(len(l) - i+1),(len(sd1))/(len(l) - i+1)]
        a = choices(x,y)
        a1 = a[0][0]
        fl.append(a1)
        #Deck Split is c
        #Sub decks are',c
        #Probabilities are',y
        #Deck chosen is',a
        #fl
        if a1 in sd1:
            sd1.remove(a1)
        elif a1 in sd2:
            sd2.remove(a1)
    return fl,m

for j in range(1,m+1):
    fl = riffle_shuffle(fl)
return fl
</code></pre> <p>I've gotten errors that say m is not defined, invalid syntax, and the following error message, which I don't know the meaning of:</p> <p>'maximum recursion depth exceeded in comparison'</p> <p>Any help is much appreciated, thanks!</p> <p>EDIT: I missed the for loop I'd mentioned in the description. It's up now, sorry.</p>
<p>So... you want a method that does a riffle shuffle m times, right?</p> <p>There are some problems with your code:</p> <p>First, the final <code>return</code> is outside of the function.</p> <p>Second, you call your function inside your function without a stopping condition, so the function calls itself again and again until an error occurs. That is the <code>maximum recursion depth exceeded in comparison</code> error.</p> <p>Third, you have to use <code>np.random.choice</code> like this: <code>np.random.choice(x, p=y)</code>. Otherwise Python doesn't know that <code>y</code> holds probabilities; it will interpret it as the second positional argument, the size of the output, and an error occurs.</p> <p>This might be the code you want to write:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

def riffle_shuffle(l, m):
    if m == 0:
        return l
    else:
        fl = []
        r = np.random.binomial(len(l), 0.5)
        sd1 = l[:r]
        sd2 = l[r:]
        c = [sd2, sd1]
        l2 = sd2 + sd1
        for i in range(1, len(l) + 1):
            x = [sd2, sd1]
            y = [(len(sd2))/(len(l) - i+1), (len(sd1))/(len(l) - i+1)]
            a = np.random.choice(2, p=y)
            a = x[a]
            a1 = a[0]
            fl.append(a1)
            #Deck Split is c
            #Sub decks are',c
            #Probabilities are',y
            #Deck chosen is',a
            #fl
            if a1 in sd1:
                sd1.remove(a1)
            elif a1 in sd2:
                sd2.remove(a1)
        fl = riffle_shuffle(fl, m - 1)
        return fl

a = riffle_shuffle([1, 2, 3, 4, 5, 6, 7, 8], 3)
print(a)
#output : [5, 6, 1, 7, 4, 8, 2, 3] (can be changed)
</code></pre> <p>As you did, I call the function recursively (the function calls itself), but with a stopping condition.</p> <p>This way you don't have to use a global variable; using global variables is not a good idea in most situations.</p> <p>And about your original question (how to make a user-supplied value a global variable), you can do something like this:</p> <pre class="lang-py prettyprint-override"><code>a = 0

def foo(m):
    global a
    a = m
    # and your code here...
</code></pre>
python|global-variables
0
1,905,345
45,058,847
How to Execute Python Code on Server from an HTML Form Button
<p>I want to execute Python code when the user clicks an HTML form button. How can this be done? Also, can I ensure that users are not able to view the Python code on the server? The form's inputs are going to be variables in the Python code. I am using the Flask framework and Python 2.7. Security is not a concern yet.</p> <p>my route.py file (I've added the missing Flask import here for completeness):</p> <pre><code>from flask import Flask, render_template
from pytube import YouTube
from moviepy.editor import *
import os

app = Flask(__name__)

@app.route("/",methods=['GET','POST'])
def landing():
    return render_template("index.html",ytdir=ytdir,ytlink=ytlink)

if __name__ == "__main__":
    app.run(debug=True)
</code></pre> <p>The index.html in the Template folder:</p> <pre><code>&lt;form method="POST" action="/"&gt;
    &lt;input type="text" class="form-control" name="ytdir" placeholder="Example: C:\Downloads"&gt;
    &lt;h4&gt;The YouTube Video that you want to download&lt;/h4&gt;
    &lt;textarea class="form-control" rows="3" name="ytlink" placeholder="Example: https://www.youtube.com/watch?v=HipaBLsGB_Y"&gt;&lt;/textarea&gt;
&lt;/form&gt;
&lt;input class="btn btn-primary" type="submit" value="Download"&gt;
</code></pre> <p>Basically, I have Python code that downloads the YouTube video to the local drive using <code>ytdir</code> and <code>ytlink</code>.</p> <p>Where should I place the Python code so that it is executed when the button is clicked?</p>
<p>I recommend doing an "API call":</p> <pre><code>@app.route("/ytdl", methods=["GET"])
def download_video():
    # call your download code here
    # and return the video as an input stream
    # read the Flask tutorial on how to do that
</code></pre> <p>In the JavaScript, use axios or something similar to trigger the download when the button is clicked, e.g. <code>$button.click(() =&gt; axios.get("http://&lt;localhost url&gt;:&lt;port&gt;/ytdl"))</code>. Read up on how to make an AJAX call.</p>
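<p>A slightly fuller sketch of such an endpoint, assuming your pytube logic lives in a hypothetical helper <code>download_to_path</code> (the form field names come from the question's HTML):</p> <pre><code>from flask import Flask, request, send_file

@app.route("/ytdl", methods=["POST"])
def ytdl():
    ytlink = request.form["ytlink"]
    ytdir = request.form["ytdir"]
    # hypothetical: your existing pytube code, returning the saved file's path
    path = download_to_path(ytlink, ytdir)
    return send_file(path, as_attachment=True)
</code></pre>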
python|ajax|button|flask|forms|http-post
0
1,905,346
64,994,207
Memory complexity of reassign in python
<p>I'm wondering what is the memory complexity of reassigning a linear variable in python to a new linear type variable. For instance, consider a function with one list parameter, which converts it to set.</p> <pre><code>def func(list_var): list_var = set(list_var) return list_var </code></pre> <p>Is it O(n) memory complexity or O(1)?</p>
<p>The assignment itself isn't necessary; the following has exactly the same semantics from the view of the caller:</p> <pre><code>def func(list_var): return set(list_var) </code></pre> <p>The important part is the call to <code>set</code>, which has to allocate a data structure with <code>n</code> new references, one per element in <code>list_var</code>, so the space complexity is O(n).</p>
python|memory|complexity-theory
1
1,905,347
61,524,872
Python: Run length encoding?
<p>I am trying to understand run length encoding and I understand the idea but I'm not sure how to write it so that the output is as follows:</p> <p>Input: </p> <pre><code>data = [5, 5, 5, 10, 10] </code></pre> <p>Output: </p> <pre><code>[(5, 3), (10, 2)] </code></pre> <p>Question: A list is run-length encoded by representing it as a list of pairs (2-tuples), where each pair is a number and the length of the "run" of that number, where the length is 1 if a number occurs once, 2 if it occurs twice in a row, etc. Write a function run_length_encode(nums) that returns the run-length encoded representation of the list of integers, nums.</p> <p>Can someone explain to me how to do this and explain what each step is doing? Unfortunately I'm struggling to grasp some things in Python but I'm slowly getting it.</p> <p>Thank you!</p>
<p>The following code will do the trick, although it's not necessarily the most "Pythonic" way to do it:</p> <pre><code>def rle_encode(in_list): # Handle empty list first. if not in_list: return [] # Init output list so that first element reflect first input item. out_list = [(in_list[0], 1)] # Then process all other items in sequence. for item in in_list[1:]: # If same as last, up count, otherwise new element with count 1. if item == out_list[-1][0]: out_list[-1] = (item, out_list[-1][1] + 1) else: out_list.append((item, 1)) return out_list print(rle_encode([5, 5, 5, 10, 10])) print(rle_encode([5, 5, 5, 10, 10, 7, 7, 7, 5, 10, 7])) print(rle_encode([])) </code></pre> <p>As expected, the output is:</p> <pre><code>[(5, 3), (10, 2)] [(5, 3), (10, 2), (7, 3), (5, 1), (10, 1), (7, 1)] [] </code></pre> <hr> <p>In a bit more detail, it sets up an output list containing a tuple representing the first input item. So, for <code>5</code>, it makes a list <code>[(5, 1)]</code> (value <code>5</code> with count <code>1</code>).</p> <p>Then it processes every other input item. If the item has the <em>same</em> value as the last one processed, it simply increases the count.</p> <p>If it's a <em>different</em> value to the last one processed, it creates a new output value in the output list with the new value and count of one, similar to what was done for the <em>initial</em> input value.</p> <p>So, as you run through the items for your example, you'll see how the list changes:</p> <pre><code>Input Output Description ----- ------ ----------- 5 [(5, 1)] First value, init with count 1. 5 [(5, 2)] Same as last, increase count. 5 [(5, 3)] Same as last, increase count. 10 [(5, 3), (10, 1)] New value, append with count 1. 10 [(5, 3), (10, 2)] Same as last, increase count. </code></pre> <p>The only other bit is detecting an empty input list <em>before</em> starting that process, so that you don't try to use a non-existent first value.</p>
python
0
1,905,348
61,545,468
How do I re-create pysftp.Connection with a proxy parameter?
<p>I am connecting to an SFTP server using <code>pysftp</code> but need to reconfigure it to go through a proxy. Since pysftp doesn't support that, I'm thinking of using <code>Paramiko</code>.</p> <p>I do rely on the conveniences of pysftp.Connection, since my code uses recursive file transfers.</p> <p>What steps would I need to take to re-create <code>pysftp.Connection</code>, but with the option to use a proxy? Looking through <a href="https://bitbucket.org/dundeemt/pysftp/src/master/pysftp/__init__.py" rel="nofollow noreferrer">the codebase</a> is a little frightening since I'm not sure what to edit...</p>
<p>You can do:</p> <pre><code>import pysftp
import paramiko

hostname, port = 'some.host.name', 22
proxy = paramiko.proxy.ProxyCommand('/usr/bin/nc --proxy proxy.foobar:8080 %s %d' % (hostname, port))
t = paramiko.Transport(sock=proxy)
t.connect(username='abc', password='123')
sftp = paramiko.SFTPClient.from_transport(t)

# paramiko's SFTPClient supports the usual file operations
sftp.listdir('.')
</code></pre> <p><a href="https://stackoverflow.com/a/55670436/13448727">Here's</a> the origin of the code, with some discussion.</p>
python|proxy|paramiko|pysftp
2
1,905,349
60,750,727
Load and run test a .trt model
<p>I need to run my model on an NVIDIA Jetson TX2, so I converted my working YOLOv3 model into TensorRT (.trt format). <strong><a href="https://towardsdatascience.com/have-you-optimized-your-deep-learning-model-before-deployment-cdc3aa7f413d" rel="nofollow noreferrer">This link</a></strong> helped me convert the YOLO model to .trt. But after converting the model, I need to test whether it still works fine (i.e. whether the detection is good enough). I couldn't find any sample code for loading and testing a .trt model. If anybody can help me, please post sample code in the answer section or any link for reference.</p>
<p>You can load your <strong>TRT model</strong> and run inference with this snippet of code. This was executed with <strong>Tensorflow 2.1.0</strong> in a <strong>Google Colab</strong> environment.</p> <pre><code>from tensorflow.python.compiler.tensorrt import trt_convert as trt
from tensorflow.python.saved_model import tag_constants

saved_model_loaded = tf.saved_model.load(output_saved_model_dir, tags=[tag_constants.SERVING])
signature_keys = list(saved_model_loaded.signatures.keys())
print(signature_keys) # Outputs : ['serving_default']

graph_func = saved_model_loaded.signatures[signature_keys[0]]
graph_func(x_test) # Use this to perform inference
</code></pre> <p><code>output_saved_model_dir</code> is the location of your <strong>TensorRT-optimized model</strong> in <strong>SavedModel</strong> format.</p> <p>From here, you can add your <strong>testing</strong> methods to compare the performance of the model before and after optimization.</p> <p><strong>EDIT:</strong></p> <pre><code>import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
import numpy as np

conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(max_workspace_size_bytes=(1&lt;&lt;32))
conversion_params = conversion_params._replace(precision_mode="FP16")
conversion_params = conversion_params._replace(maximum_cached_engines=100)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir,
    conversion_params=conversion_params)
converter.convert()
converter.save(output_saved_model_dir)
</code></pre> <p>Here is the code used for <strong>converting</strong> and <strong>saving</strong> the <strong>TensorRT-optimized</strong> model.</p>
tensorflow|yolo|tensorrt|nvidia-jetson
0
1,905,350
57,750,703
How can i update one PostgreSQL database and sync changes/updates to another PostgreSQL database on another server
<p>I have a Django website with a PostgreSQL database hosted on a server with one company, and a mirror of that Django website hosted on another server with a different company, which also has an exact copy of the PostgreSQL database. How can I sync or update the second database from the first, either in real time or at an interval?</p>
<p>PostgreSQL has built-in master-slave (streaming) replication. Try that!</p>
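<p>A minimal sketch of the relevant settings for streaming replication (assuming PostgreSQL 9.6 or later; details vary by version):</p> <pre><code># on the primary, postgresql.conf
wal_level = replica
max_wal_senders = 3

# on the replica (recovery.conf on 9.x, postgresql.conf on 12+)
primary_conninfo = 'host=primary.example.com user=replicator password=...'
</code></pre>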
django|python-3.x|postgresql
0
1,905,351
57,982,600
How do QCompleter and QAbstractItemModel work in PySide?
<p>I'm seeing a weird behavior in PySide when I use my own <a href="https://srinikom.github.io/pyside-docs/PySide/QtGui/QCompleter.html" rel="nofollow noreferrer">QCompleter</a> and <a href="https://srinikom.github.io/pyside-docs/PySide/QtCore/QAbstractItemModel.html" rel="nofollow noreferrer">QAbstractItemModel</a> together, and I can't find documentation in PySide (or Qt for that matter) which explains what it's doing.</p> <p>When I have an edit box using the QCompleter, and I type in additional characters, there is a huge number of calls to <a href="https://srinikom.github.io/pyside-docs/PySide/QtCore/QAbstractItemModel.html#PySide.QtCore.PySide.QtCore.QAbstractItemModel.data" rel="nofollow noreferrer"><code>QAbstractItemModel.data()</code></a> to get the completion column content of a whole lot of items. This call occurs for the items that are relevant, several times for each item, but also for each of the top-level items immediately below the root.</p> <p>Since my data model has hundreds (and potentially thousands) of top-level items in it, I am a little concerned I'm doing something wrong. I just want to make sure it doesn't bog down my computer doing irrelevant computations.</p> <p>I created an example here: <a href="https://gist.github.com/jason-s/6c9495e29a4caac7ddf5cd739550a310" rel="nofollow noreferrer">https://gist.github.com/jason-s/6c9495e29a4caac7ddf5cd739550a310</a> which I based off of <a href="https://stackoverflow.com/a/24718460/44330">my earlier example in another question</a></p> <p>If I run it as <code>python qtcompleter5.py -e 25</code> and I type <code>United States/Arizona/P</code> into the edit box, what I see in my console is shown below. (The <code>Miragi012stan</code> entries are intentional, to easily vary the top-level item count by varying the <code>-e</code> argument.)</p> <pre><code>splitPath: [u'United States', u'Arizona', u'P'] Canada France Germany United States Mexico Miragi000stan Miragi001stan Miragi002stan Miragi003stan Miragi004stan Miragi005stan Miragi006stan Miragi007stan Miragi008stan Miragi009stan Miragi010stan Miragi011stan Miragi012stan Miragi013stan Miragi014stan Miragi015stan Miragi016stan Miragi017stan Miragi018stan Miragi019stan Miragi020stan Miragi021stan Miragi022stan Miragi023stan Miragi024stan Peoria Peoria Phoenix Peoria Peoria Peoria Peoria Phoenix Phoenix </code></pre> <p>which seems to me like <code>.data()</code> is being called for all the top level items, and then 3-6 times for the actual items that might match the text in the completion window.</p> <p>The only time it doesn't do this is when I'm typing in the top-level item, e.g. 
<code>Unite</code> which produces these calls:</p> <pre><code>splitPath: [u'Unite'] United States United States United States splitPath: [u'Unite'] United States United States United States United States United States </code></pre> <p>(By the way, I can comment out the TreeView stuff, leaving only the QLineEdit, so the TreeView isn't the part causing the problem.)</p> <p>The same behavior (querying of all top-level items) happens even if I try to help by telling PySide I'm using sorted models (see <a href="https://gist.github.com/jason-s/6c9495e29a4caac7ddf5cd739550a310#file-qtcompleter5a-py" rel="nofollow noreferrer">qtcompleter5a.py</a>) by using <a href="https://srinikom.github.io/pyside-docs/PySide/QtGui/QCompleter.html#PySide.QtGui.PySide.QtGui.QCompleter.setModelSorting" rel="nofollow noreferrer"><code>completer.setModelSorting(QtGui.QCompleter.CaseInsensitivelySortedModel)</code></a></p> <p>What's going on here?</p>
<p>Hmm... I don't know if this is a good idea, but if I put all the top-level nodes under a dummy node, then only the dummy node gets a "useless" query during completion.</p> <p>See <a href="https://gist.github.com/jason-s/6c9495e29a4caac7ddf5cd739550a310#file-qtcompleter5b-py" rel="nofollow noreferrer">https://gist.github.com/jason-s/6c9495e29a4caac7ddf5cd739550a310#file-qtcompleter5b-py</a></p> <p>Changes I made from qtcompleter5.py:</p> <ul> <li>Header item is a separate item, not used as the root</li> <li>Dummy node returns <code>""</code> for the contents of the completion column in the <code>DisplayRole</code></li> <li>Paths returned by QCompleter.splitPath have an extra dummy component in front</li> <li>The first component of <code>pathFromIndex()</code> is ignored if it's a root node</li> <li>In the AbstractItemModel: <ul> <li><code>rowCount()</code> returns 1 for an invalid parent</li> <li><code>parent()</code> returns the invalid <code>QtCore.QModelIndex()</code> when the provided index is at the root (previously it was when the index's parent was at the root)</li> <li><code>index()</code> returns the root item when the provided parent is invalid (previously it looked up an appropriate child item of the root item)</li> </ul></li> </ul> <p>This makes a TreeView of this model a bit funny-looking, but for applications that don't need the TreeView, it seems to work ok; if I enter <code>United States/Arizona/P</code> then it prints:</p> <pre><code>splitPath: ['', u'United States', u'Arizona', u'P'] Peoria Peoria Phoenix Peoria Peoria Peoria Peoria Phoenix Phoenix </code></pre>
python|python-2.7|pyside|qcompleter
0
1,905,352
57,899,663
How to relocate/shade python packages?
<p>I'm looking for a python package manager that has the same 'package relocation' feature as:</p> <ul> <li>maven-shade-plugin:</li> </ul> <p><a href="https://maven.apache.org/plugins/maven-shade-plugin/examples/class-relocation.html" rel="nofollow noreferrer">https://maven.apache.org/plugins/maven-shade-plugin/examples/class-relocation.html</a></p> <p>OR</p> <ul> <li>gradle-shadow-plugin:</li> </ul> <p><a href="https://imperceptiblethoughts.com/shadow/configuration/relocation/" rel="nofollow noreferrer">https://imperceptiblethoughts.com/shadow/configuration/relocation/</a></p> <p>it is important for any diamond package dependencies that has version conflict. Where can I find a package manager that supports it?</p> <p><strong>UPDATE</strong>: If it is still missing, then how much effort does it take to implement one?</p>
<p>According to the accepted solution in <a href="https://discourse.julialang.org/t/how-does-the-julia-1-0-pkg-handle-diamond-dependencies/19558/4" rel="nofollow noreferrer">this</a>, a diamond dependency conflict will not happen.</p> <p>Think about this: A (your project) requires B and C. Both B and C require different versions of D.</p> <p>With A's setup.py properly listing the required versions of B and C, pip will auto-deduce the version of D (or fail with an error).</p> <p>For example, you need version 4 of B and version 3 of C. Version 4 of B requires >= 4 of D and version 3 of C requires >= 3 of D. If that's the case, a proper version (4) of D will be installed.</p> <p>If version 4 of B requires >= 4 of D and version 3 of C requires &lt; 4 of D, pip install will already fail. As a developer, you will know this in advance and can try, say, version 3.5 of B instead.</p> <p>Also, <a href="https://medium.com/@jimjh/managing-dependencies-in-python-applications-b9c93dda98c2" rel="nofollow noreferrer">the document here</a> describes an internal process inspired by Maven's shading. It may be of use to you (unfortunately, their method is manual and does not come with a plugin/tool like you requested).</p>
python|python-3.x|dependency-management|package-managers|python-packaging
0
1,905,353
56,438,374
Keras CNN Autoencoder input shape is wrong
<p>I have built a CNN autoencoder using Keras, and it worked fine for the MNIST test data set. I am now trying it with a different data set collected from another source. These are plain images, and I have to read them in using cv2, which works fine. I then convert the images into a numpy array, which again I think works fine. But when I call the <code>.fit</code> method, it gives me this error:</p> <pre><code>Error when checking target: expected conv2d_39 to have shape (100, 100, 1) but got array with shape (100, 100, 3)
</code></pre> <p>I tried converting the images to grayscale, but then they get the shape (100,100) and not (100,100,1), which is what the model wants. What am I doing wrong here?</p> <p>Here is the code that I am using:</p> <pre><code>def read_in_images(path):
    images = []
    for files in os.listdir(path):
        img = cv2.imread(os.path.join(path, files))
        if img is not None:
            images.append(img)
    return images

train_images = read_in_images(train_path)
test_images = read_in_images(test_path)
x_train = np.array(train_images)
x_test = np.array(test_images) # (36, 100, 100, 3)

input_img = Input(shape=(100,100,3))

x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)

x = Conv2D(16, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(168, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

autoencoder.fit(x_train, x_train,
                epochs=25,
                batch_size=128,
                shuffle=True,
                validation_data=(x_test, x_test),
                callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
</code></pre> <p>The model works fine with the MNIST data set but not with my own data set. Any help will be appreciated.</p>
<p>Your input and output shapes are different, and that is what triggers the error: the decoder's output has 1 channel while your target images have 3.</p> <pre><code>decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
</code></pre> <p>should be</p> <pre><code>decoded = Conv2D(num_channels, (3, 3), activation='sigmoid', padding='same')(x)
</code></pre> <p>where <code>num_channels</code> is 3 for the RGB images that cv2 loads by default.</p>
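<p>Alternatively, if you want to keep the single-channel decoder, a small sketch of converting the cv2 images to grayscale with an explicit channel axis:</p> <pre><code>import cv2
import numpy as np

img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # (100, 100, 3) -&gt; (100, 100)
img = np.expand_dims(img, axis=-1)           # (100, 100)    -&gt; (100, 100, 1)
</code></pre>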
python|opencv|keras|deep-learning|conv-neural-network
2
1,905,354
71,703,993
How to use itertools for getting unique row items using pandas?
<p>I have a dataframe like the one shown below:</p> <pre><code>ID,Region,Supplier,year,output
1,ANZ,AB,2021,1
2,ANZ,ABC,2022,1
3,ANZ,ABC,2022,1
4,ANZ,ABE,2021,0
5,ANZ,ABE,2021,1
6,ANZ,ABQ,2021,1
7,ANZ,ABW,2021,1
8,AUS,ABO,2020,1
9,KOREA,ABR,2019,0
</code></pre> <p>I am trying to generate the unique combinations of <code>region</code> and <code>supplier</code> values. Instead of a groupby, I was thinking of doing this via <code>zip_longest</code>.</p> <p>So, I tried the below:</p> <pre><code>for i,j in itertools.zip_longest(region_values,supplier_values,fillvalue="ANZ"):
    print(i,j)
</code></pre> <p>But the above results in incorrect entries for <code>i and j</code>.</p> <p>I want to get each unique combination from a specific row. I don't wish to multiply/generate new combinations that are not in the data.</p> <p><strong>Currently, this results in incorrect output</strong> as shown below:</p> <pre><code>ANZ AB
AUS ABC    #incorrect to generate new combinations like this
KOREA ABE  #incorrect to generate new combinations like this
ANZ ABQ
ANZ ABW
ANZ ABO
ANZ ABR
</code></pre> <p><strong>I expect my output to be like as shown below</strong></p> <pre><code>ANZ AB
ANZ ABC
ANZ ABE
ANZ ABQ
ANZ ABW
AUS ABO
KOREA ABR
</code></pre> <p>I use zip_longest because, after this, I want to use the output from the zip object to filter the dataframe using the 2 columns.</p>
<p>If ordering is important need remove duplicates by both columns together, so instead <code>unique</code> need <code>drop_duplicates</code>:</p> <pre><code>column_name = &quot;Region&quot; col_name = &quot;Supplier&quot; df = data.drop_duplicates([column_name, col_name]) for i,j in zip(df[column_name],df[col_name]): print(i,j) ANZ AB ANZ ABC ANZ ABE ANZ ABQ ANZ ABW AUS ABO KOREA ABR </code></pre>
python|pandas|dataframe|numpy|pandas-groupby
1
1,905,355
69,413,682
Rotate python logs at 12am midnight every day?
<p>I'm using the <code>daiquiri</code> logging library in a simple Python script (love it so far!). Out of the box it offers a timed rotating file option that I'd like to set to rotate every 24 hrs at midnight, based on the machine's time. Under the hood it takes direction from <code>datetime</code>, so I guess really my question is: how do I use datetime to specify 12am each night?</p> <p>This is an example from the daiquiri docs; I'll need to adjust <code>interval=datetime.timedelta(1)</code> accordingly, but I don't know where to start.</p> <pre><code>daiquiri.output.TimedRotatingFile(
    filename="logs.log",
    program_name=None,
    formatter=daiquiri.formatter.JSON_FORMATTER,
    level=logging.DEBUG,
    interval=datetime.timedelta(1),
    backup_count=0
),
</code></pre>
<p>Looking at the code for this library, it seems they hard-code it so that you can only roll over your logs at a fixed interval starting when the program is run, not at midnight.</p> <p>Why not use the standard Python logging package? Most of the stuff in <code>daiquiri</code> just wraps the logging module, including the <code>TimedRotatingFileHandler</code>. If you switch to that handler, you just set the <code>when</code> argument to <code>'midnight'</code> and it will work: <a href="https://docs.python.org/3/library/logging.handlers.html#logging.handlers.TimedRotatingFileHandler" rel="nofollow noreferrer">https://docs.python.org/3/library/logging.handlers.html#logging.handlers.TimedRotatingFileHandler</a></p>
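<p>A minimal sketch with the standard library handler (the daily-at-midnight behaviour comes from <code>when="midnight"</code>):</p> <pre><code>import logging
from logging.handlers import TimedRotatingFileHandler

# rotate at midnight, keeping the last 7 days of logs
handler = TimedRotatingFileHandler("logs.log", when="midnight", backupCount=7)
logging.getLogger().addHandler(handler)
</code></pre>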
python|datetime|logging|cron
0
1,905,356
55,255,627
python:How to extract a word before and after the match using regex
<p>Consider the following data as a sample:</p> <blockquote> <p>input_corpus = "this is an example.\n I am trying to extract it.\n"</p> </blockquote> <p>I am trying to extract exactly 2 words before and after <code>.\n</code> with the following code:</p> <pre><code>for m in re.finditer('(?:\S+\s+){2,}[\.][\n]\s*(?:\S+\b\s*){0,2}',input_corpus):
    print(m)
</code></pre> <p>Expected output:</p> <pre><code>an example.
I am
extract it.
</code></pre> <p>Actual output: nothing gets captured.</p> <p>Can someone point out what is wrong with the regex?</p>
<p>You may use this regex:</p> <pre><code>r'(?:^|\S+\s+\S+)\n(?:\s*\S+\s+\S+|$)' </code></pre> <p><a href="https://regex101.com/r/NzGOiZ/1" rel="nofollow noreferrer">RegEx Demo</a></p> <p><strong>Code:</strong></p> <pre><code>&gt;&gt;&gt; input_corpus = &quot;this is an example.\n I am trying to extract it.\n&quot; &gt;&gt;&gt; print re.findall(r'(?:^|\S+\s+\S+)\n(?:\s*\S+\s+\S+|$)', input_corpus) ['an example.\n I am', 'extract it.\n'] </code></pre> <p><strong>Details:</strong></p> <ul> <li><code>(?:^|\S+\s+\S+)</code>: Match preceding 2 words or line start</li> <li><code>\n</code>: Match a new line</li> <li><code>(?:\s*\S+\s+\S+|$)</code>: Match next 2 words or line end</li> </ul>
regex|python-3.x
3
1,905,357
57,490,628
confusion matrix just takes class 0 and 1
<p>I built the following LSTM network and it works, although it reaches just 60% accuracy. I think this is because it only predicts labels 0 and 1, never 2 or 3, since the confusion matrix has zeros for classes 2 and 3.</p> <pre class="lang-py prettyprint-override"><code>import keras
import numpy as np
from keras.preprocessing.text import Tokenizer
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Input, Dense, Dropout, Embedding, LSTM, Flatten
from keras.models import Model
from keras.utils import to_categorical
from keras.callbacks import ModelCheckpoint
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
plt.style.use('ggplot')
%matplotlib inline
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix

data = pd.read_csv("dataset/train_set.csv", sep="\t")
data['num_words'] = data.Text.apply(lambda x : len(x.split()))
num_class = len(np.unique(data.Label.values)) # 4
y = data['Label'].values

MAX_LEN = 300
tokenizer = Tokenizer()
tokenizer.fit_on_texts(data.Text.values)
post_seq = tokenizer.texts_to_sequences(data.Text.values)
post_seq_padded = pad_sequences(post_seq, maxlen=MAX_LEN)

X_train, X_test, y_train, y_test = train_test_split(post_seq_padded, y, test_size=0.25)

vocab_size = len(tokenizer.word_index) +1

inputs = Input(shape=(MAX_LEN, ))
embedding_layer = Embedding(vocab_size, 128, input_length=MAX_LEN)(inputs)

x = LSTM(64)(embedding_layer)
x = Dense(32, activation='relu')(x)
predictions = Dense(num_class, activation='softmax')(x)

model = Model(inputs=[inputs], outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.summary()

filepath="weights.hdf5"
checkpointer = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')

history = model.fit([X_train], batch_size=64, y=to_categorical(y_train), verbose=1, validation_split=0.25, shuffle=True, epochs=10, callbacks=[checkpointer])

df = pd.DataFrame({'epochs':history.epoch, 'accuracy': history.history['acc'], 'validation_accuracy': history.history['val_acc']})
g = sns.pointplot(x="epochs", y="accuracy", data=df, fit_reg=False)
g = sns.pointplot(x="epochs", y="validation_accuracy", data=df, fit_reg=False, color='green')

model.load_weights('weights.hdf5')
predicted = model.predict(X_test)
predicted = np.argmax(predicted, axis=1)
accuracy_score(y_test, predicted)
print(accuracy_score)

y_pred1 = model.predict(X_test, verbose=0)
yhat_classes = np.argmax(y_pred1,axis=1)

# predict probabilities for test set
yhat_probs = model.predict(X_test, verbose=0)
# reduce to 1d array
yhat_probs = yhat_probs[:, 0]
yhat_classes = yhat_classes[:, ]

# accuracy: (tp + tn) / (p + n)
accuracy = accuracy_score(y_test, yhat_classes)
print('Accuracy: %f' % accuracy)
# precision tp / (tp + fp)
precision = precision_score(y_test, yhat_classes, average='micro')
print('Precision: %f' % precision)
# recall: tp / (tp + fn)
recall = recall_score(y_test, yhat_classes, average='micro')
print('Recall: %f' % recall)
# f1: 2 tp / (2 tp + fp + fn)
f1 = f1_score(y_test, yhat_classes, average='micro')
print('F1 score: %f' % f1)

matrix = confusion_matrix(y_test, yhat_classes)
print(matrix)
</code></pre> <p>confusion matrix:</p> <pre class="lang-py prettyprint-override"><code>[[324 146   0   0]
 [109 221   0   0]
 [ 55  34   0   0]
 [ 50  16   0   0]]
</code></pre> <p>The average is set to 'micro', and the output layer has four nodes for the four classes. The accuracy, precision and recall on the training set alone are as follows (class 2 is sometimes predicted, but class 3 not once):</p> <pre class="lang-py prettyprint-override"><code>Accuracy: 0.888539
Precision: 0.888539
Recall: 0.888539
</code></pre> <p>Does anyone know why this happens?</p>
<p>It may be that the model is stuck in a suboptimal solution. In your problem, classes 0 and 1 represent about 85% of the instances, so the data is quite imbalanced. The model predicts only classes 0 and 1 because it didn't fully converge, and this is a classic failure mode for this kind of model. Informally, you can think of it as the model being lazy... What I would recommend:</p> <ul> <li>Train longer.</li> <li>Try to see if your model can overfit your training data. For that, train longer and measure the training error. You will see that, if there is no major problem in your model or your data, the model will eventually predict classes 2 and 3 at least on your training set. From that point you can rule out a problem in your data/model.</li> <li>Use batch normalization; in practice I have seen it help get rid of this failure mode (see the sketch below).</li> <li>Always use a bit of dropout; it helps regularize the model.</li> </ul>
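<p>A minimal sketch of those last two suggestions, assuming the functional API model from the question:</p> <pre><code>from keras.layers import BatchNormalization, Dropout

x = LSTM(64)(embedding_layer)
x = BatchNormalization()(x)         # normalize activations between layers
x = Dense(32, activation='relu')(x)
x = Dropout(0.3)(x)                 # a bit of regularization
predictions = Dense(num_class, activation='softmax')(x)
</code></pre>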
python|classification|confusion-matrix
1
1,905,358
57,477,063
How to turn a python file to an exe file, so that the code can be masked
<p>I have a Python cryptography program that I created, and I want to mask the code so it cannot be read by anyone. What is the best way to do that on Linux, and maybe on Windows? I suspect I should turn it into an exe file, and if so, how? Thanks in advance.</p>
<p>You can convert Python scripts into standalone executables using:</p> <p><a href="http://www.pyinstaller.org/" rel="nofollow noreferrer">PyInstaller</a>, which can convert Python programs into stand-alone executables under Windows, Linux, Mac OS X, FreeBSD, Solaris and AIX. It is one of the recommended converters.</p> <p><a href="http://www.py2exe.org/" rel="nofollow noreferrer">py2exe</a>, which converts Python scripts into executables on the Windows platform only.</p> <p><a href="http://cython.org/" rel="nofollow noreferrer">Cython</a>, a static compiler for both the Python programming language and the extended Cython programming language.</p>
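<p>For example, a typical PyInstaller invocation to produce a single-file executable:</p> <pre><code>pip install pyinstaller
pyinstaller --onefile your_script.py   # the executable lands in the dist/ directory
</code></pre>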
python|linux|executable
2
1,905,359
42,411,293
Possible to vectorize this array operation in python?
<p>Beginner in Python here, I have a hard time wrapping my head around vectorizing my 'for' loops. I have a 2D numpy array, containing only two values -1 and 1. For each column and row I want to do the following operation: set all -1 values encountered before the first time a 1 is encountered to 0. Can this be vectorized? Even without crashing if there's no 1 in a row/column and thus the whole row/column is to be set to 0?</p>
<p>Here's one vectorized approach -</p> <pre><code>mask = a==1
a[~np.maximum.accumulate(mask,axis=0)] = 0
a[~np.maximum.accumulate(mask,axis=1)] = 0
</code></pre> <p>Sample run -</p> <pre><code>In [39]: a
Out[39]:
array([[ 1, -1,  1, -1, -1],
       [ 1,  1, -1,  1, -1],
       [-1,  1, -1,  1, -1],
       [ 1, -1, -1, -1, -1]])

In [40]: mask = a==1

In [41]: a[~np.maximum.accumulate(mask,axis=0)] = 0

In [42]: a[~np.maximum.accumulate(mask,axis=1)] = 0

In [43]: a
Out[43]:
array([[ 1,  0,  1,  0,  0],
       [ 1,  1, -1,  1,  0],
       [ 0,  1, -1,  1,  0],
       [ 1, -1, -1, -1,  0]])
</code></pre> <p>The idea: <code>np.maximum.accumulate</code> on the boolean mask propagates <code>True</code> along each axis from the first <code>1</code> onwards, so its negation marks exactly the positions before the first <code>1</code> in each column/row, and those get set to <code>0</code>. A row or column with no <code>1</code> at all stays <code>False</code> throughout and is therefore zeroed entirely, so there is no crash in that case.</p>
python|numpy|vectorization
3
1,905,360
42,330,839
Batch normalization layer in Tensorflow is not updating its moving mean and moving variance
<p>Batch Normalization is not saving its moving mean and moving variance</p> <p>When I train I get perfect overfitting on my training data (as expected). With batch normalization the training is much faster, also as expected. However, when, immediately after a training step, I run the <strong>same model</strong> on the <strong>same data</strong> with "is_training" = False it gives a vastly inferior result. Furthermore, every time I look at moving_mean and moving_variance they are their default values. They never update. </p> <pre><code>(u'main/y/y/moving_mean:0', array([ 0., 0.], dtype=float32)) (u'main/y/y/moving_variance:0', array([ 1., 1.], dtype=float32)) \ (u'main/y/y/moving_mean:0', array([ 0., 0.], dtype=float32)) (u'main/y/y/moving_variance:0', array([ 1., 1.], dtype=float32)) 700 with generated means (training = true} 1.0 with saved means {training = false} 0.4911 </code></pre> <p>I have the update_ops code in place, but it doesn't seem to be doing the trick. update_collections = None makes it function, but I've been told that's a sub-optimal solution for performance reasons.</p> <pre><code>update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) if update_ops: updates = tf.group(*update_ops) cost = with_dependencies([updates], cost) </code></pre> <p>My code is below</p> <pre><code>import numpy as np import tensorflow as tf from tensorflow.contrib.layers import fully_connected, softmax, batch_norm from tensorflow.python.ops.control_flow_ops import with_dependencies from tensorflow.python.training.adam import AdamOptimizer batch_size = 100 input_size = 10 noise_strength = 4 class Data(object): def __init__(self,obs,gold): self.obs=obs self.gold=gold def generate_data(batch_size,input_size,noise_strength): input = np.random.rand(batch_size, input_size) * noise_strength gold = np.random.randint(0, 2, (input_size,1)) input = input + gold return Data(input,gold) def ffnn_model(inputs,num_classes,batch_size,is_training,reuse=False): output = fully_connected(inputs, num_classes * 2, activation_fn=None, normalizer_fn=batch_norm, normalizer_params={'is_training': is_training, 'reuse': reuse, 'scope': 'y'}, reuse=reuse, scope='y' ) y = softmax(tf.reshape(output, [batch_size, num_classes, 2])) return y #objective function def objective_function(y,gold): indices = tf.stack([tf.range(tf.size(gold)),tf.reshape(gold,[-1])],axis=1) scores = tf.gather_nd(tf.reshape(y,[-1,2]),indices=indices) # return tf.cast(indices,tf.float32),-tf.reduce_mean(tf.log(scores+1e-6)) return -tf.reduce_mean(tf.log(scores+1e-6)) def train_op(y,gold): cost = objective_function(y,gold) update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) if update_ops: print "yes to update_ops" print update_ops updates = tf.group(*update_ops) cost = with_dependencies([updates], cost) train_step = AdamOptimizer().minimize(cost) return train_step def predictions_op(y): return tf.cast(tf.argmax(y, axis=len(y.get_shape()) - 1), dtype=tf.int32) def accuracy_op(y,gold): return tf.reduce_mean(tf.cast(tf.equal(predictions_op(y), gold),tf.float32)) def model(batch_size, num_classes, input_size, scope, reuse): with tf.variable_scope(scope) as m: if reuse: m.reuse_variables() is_training = tf.placeholder(tf.bool) x = tf.placeholder(tf.float32, shape=[batch_size, input_size]) y = ffnn_model(x, num_classes=1, batch_size=batch_size, is_training=is_training, reuse=reuse) g = tf.placeholder(tf.int32, shape=[batch_size, num_classes]) return g, x, y, is_training def train(batch_size=100,input_size = 100): scope = "main" g, x, y, is_training = 
model(batch_size, 1, input_size, scope,reuse=None ) with tf.Session() as sess: train_step, accuracy,predictions = train_op(y, g), accuracy_op(y, g), predictions_op(y) cost_op = objective_function(y,g) init_op = tf.group(tf.local_variables_initializer(), tf.global_variables_initializer()) sess.run(init_op) accs = [] accs2 = [] costs = [] for i in range(10000): data = generate_data(batch_size, input_size, noise_strength) _,acc,cost = sess.run([train_step,accuracy,cost_op],feed_dict={x:data.obs,g:data.gold,is_training:True}) acc2 = sess.run(accuracy, feed_dict={x: data.obs, g: data.gold, is_training: False}) accs.append(acc) accs2.append(acc2) costs.append(cost) if i%100 == 0: # print scurrs print i,"with generated means (training = true}",np.mean(accs[-100:]),"with saved means {training = false}",np.mean(accs2[-100:]) # print sess.run(predictions, feed_dict={x: data.obs, g: data.gold, is_training: False}) vars = [var for var in tf.global_variables() if 'moving' in var.name] rv = sess.run(vars, {is_training: False}) rt = sess.run(vars, {is_training: True}) print"\t".join([str((v.name, a)) for a, v in zip(rv, vars)]), \ "\n", \ "\t".join([str((v.name, a)) for a, v in zip(rt, vars)]) if __name__ == "__main__": train() </code></pre>
<p>Batch normalization creates update operations that you must run in order to refresh the moving statistics. They are also added to a particular collection, and if you use the <code>tf.contrib.layers.optimize_loss</code> function, it collects these for you and runs them whenever this op is run.</p> <p>So to resolve, replace:</p> <pre><code> train_step = AdamOptimizer().minimize(cost)
</code></pre> <p>with</p> <pre><code> train_step = optimize_loss(loss, step, learning_rate, optimizer='ADAM')
</code></pre>
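<p>Alternatively, a sketch of the standard pattern with the plain optimizer (assuming the rest of the question's code): fetch the update ops after the model has been built and make the train step depend on them:</p> <pre><code># collect the moving-mean/variance update ops created by batch_norm
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_step = AdamOptimizer().minimize(cost)
</code></pre>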
python|tensorflow
0
1,905,361
53,953,579
Tree graph with python AnyTree package
<p>I need to generate a tree from a dictionary with the Python AnyTree package, so I have a dictionary with the structure below.</p> <pre><code>data = {'name': 'xyz', 'children': [{'name': 'node1', 'children': [{'name': 'node2'}]}]}
</code></pre> <p>This dictionary can grow as the program executes. The issue that I'm facing right now is that when I try to export the tree as a PNG with <code>DotExporter(root).to_picture("data.png")</code>, it throws a file-not-found error like below:</p> <pre><code>Traceback (most recent call last):File "C:/Users/.../data_modeling.py", line 88, in&lt;module&gt;creating_tree(main)
File "C:/Users/.../data_modeling.py", line 66, in creating_tree
DotExporter(root).to_picture("data.png")
File "C:\Users\...\AppData\Local\Programs\Python\Python37-32\lib\site-packages\anytree\exporter\dotexporter.py", line 229, in to_picture
check_call(cmd)
File "C:\Users\...\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 323, in check_call
retcode = call(*popenargs, **kwargs)
File "C:\Users\...\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 304, in call
with Popen(*popenargs, **kwargs) as p:
File "C:\Users\...\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 756, in __init__
restore_signals, start_new_session)
File "C:\Users\...\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 1155, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre> <p>I have the graphviz package installed and I am on Python 3.7 on Windows. However, using <code>DotExporter(root).to_dotfile('root.dot')</code> I can export the tree as a dot file and, using an online converter, convert the dot file to an image. But I need to export this as a PNG from my program.</p> <p>I have already googled similar issues and tried all the suggestions and solutions there. Any help or suggestion would be great; any other tree-graphing tool would also be okay.</p>
<p>Found a solution.</p> <p>The problem is with the graphviz Python package: when you install graphviz with pip, the Python wrapper doesn't include the Graphviz binaries.</p> <p>To solve this, manually download Graphviz from their website and add it to your PATH, or install graphviz using conda.</p> <p>Alternatively, you can use pydot as a PNG exporter: generate the dot file and convert it to a PNG with pydot, which can also be installed with pip.</p>
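<p>If you go the pydot route, a small sketch (note that pydot itself still shells out to the Graphviz <code>dot</code> binary, so Graphviz must be on PATH):</p> <pre><code>import pydot

graphs = pydot.graph_from_dot_file('root.dot')  # returns a list of graphs
graphs[0].write_png('data.png')
</code></pre>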
python-3.x|anytree
0
1,905,362
58,234,785
How to append text to a 'column' value
<p>I'm trying to run a number of cleanups on a DataFrame. In order to keep track of what happened to the data, I added a column called <code>applied_rules</code> to the DataFrame.</p> <p>At each step, I want to add a line to the <code>applied_rules</code> column if the record has been updated.</p> <p>Typically, it looks like:</p> <pre><code>mask = df['type'] == "test"
df.loc[mask, 'value'] = "updated"
df.loc[mask].assign(applied_rules=lambda x: x.applied_rules + "Rule 1 - ...")
</code></pre> <p>With this, <code>applied_rules</code> comes back empty.</p> <p>If I use:</p> <pre><code>mask = df['type'] == "test"
df.loc[mask, 'value'] = "updated"
df[mask]['applied_rules'] += "GR001a - updated position because it was not corresponding to a standard one\n"
</code></pre> <p>Only the last value gets stored.</p> <p>What is the correct way to append text to a value?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> with <code>mask</code>:</p> <pre><code>df = pd.DataFrame({ 'type':['text',"test",'text',"test","test",'text'], 'applied_rules':list('aaabbb') }) mask = df['type'] == "test" df.loc[mask, 'value'] = "updated" df.loc[mask, 'applied_rules'] += " GR001a... " #alternative #df.loc[mask, 'applied_rules'] = df.loc[mask, 'applied_rules'] + " GR001a... " print (df) type applied_rules value 0 text a NaN 1 test a GR001a... updated 2 text a NaN 3 test b GR001a... updated 4 test b GR001a... updated 5 text b NaN </code></pre>
python|python-3.x|pandas|dataframe
0
1,905,363
14,839,659
Python join date and filename
<p>I am learning Python and trying to write a simple script, but I got stuck here. I need a backup file name like <code>full-backup-ucs-2013-02-12</code>.</p> <pre><code>#!/usr/bin/python
import os
from time import strftime

DATE=`strftime("%Y-%m-%d")`
backupfile = "full-backup-ucs-" + DATE
print backupfile
</code></pre> <p>When I run it I get the following output. Notice it prints two single quotes <code>' '</code> in the date; I want to remove them. I am sure there is an elegant way to do that; please suggest one:</p> <pre><code>[spatel@tux work]$ ./backup.py
full-backup-ucs-'2013-02-12'
</code></pre>
<p>You can use the <code>datetime</code> module to get this information.</p> <pre><code>import datetime DATE = datetime.datetime.now().strftime('%Y-%m-%d') </code></pre> <p>As I'm sure you've noticed, <a href="https://stackoverflow.com/questions/1673071/what-do-backticks-mean-to-the-python-interpreter-num">backtic substitution doesn't work in python as it does in the shell</a>. It implicitly calls <code>repr</code> (in python2.x) which is where your additional quotes are coming from.</p> <p><strong>EDIT</strong> -- Apparently you could just use remove the backtics and your code should more or less work as <a href="http://docs.python.org/2/library/time.html#time.strftime" rel="nofollow noreferrer"><code>time.strftime</code></a> uses the current localtime if you omit the second argument. </p>
python|linux
4
1,905,364
14,802,945
file transfer code python
<p>I found the code here: <a href="https://stackoverflow.com/questions/9382045/send-a-file-through-sockets-in-python">Send a file through sockets in Python</a> (the selected answer)</p> <p>But I will just post it here again.</p> <p>server.py:</p> <pre><code>import socket
import sys

s = socket.socket()
s.bind(("localhost",9999))
s.listen(10)
while True:
    sc, address = s.accept()
    print address
    i=1
    f = open('file_'+ str(i)+".txt",'wb') #open in binary
    i=i+1
    while (True):
        l = sc.recv(1024)
        while (l):
            print l #&lt;--- i can see the data here
            f.write(l) #&lt;--- here is the issue.. the file is blank
            l = sc.recv(1024)
    f.close()
    sc.close()
s.close()
</code></pre> <p>client.py:</p> <pre><code>import socket
import sys

s = socket.socket()
s.connect(("localhost",9999))
f=open ("test.txt", "rb")
l = f.read(1024)
while (l):
    print l
    s.send(l)
    l = f.read(1024)
s.close()
</code></pre> <p>In the server code, the <code>print l</code> line prints the file contents, so the content is being transferred... but then the file is empty?</p> <p>What am I missing? Thanks</p>
<p>You are probably trying to inspect the file while the program is running. The file is being buffered, so you likely won't see any output in it until the <code>f.close()</code> line is executed, or until a large amount of data is written. Add a call to <code>f.flush()</code> after the <code>f.write(l)</code> line to see output in real time. Note that it will hurt performance somewhat.</p>
python
4
1,905,365
14,590,638
Pandas dataframe resample at every nth row
<p>I have a script that reads system log files into pandas dataframes and produces charts from those. The charts are fine for small data sets, but when I face larger data sets, due to a longer timeframe of data gathering, the charts become too crowded to discern.</p> <p>I am planning to resample the dataframe so that if the dataset passes a certain size, there are ultimately only SIZE_LIMIT rows. This means I need to filter the dataframe so that every n = actual_size/SIZE_LIMIT rows are aggregated into a single row in the new dataframe. The aggregation can be either the average value or just the nth row taken as is.</p> <p>I am not fully versed in pandas, so I may have missed some obvious means.</p>
<p>Actually, I think you should not modify the data itself, but rather take a view of the data in the desired interval to plot. This view holds the actual datapoints to be plotted.</p> <p>A naive approach, for a computer screen for example, is to calculate how many points are in your interval and how many pixels you have available. Thus, for plotting a dataframe with 10000 points in a window 1000 pixels wide, you take a slice with a STEP of 10, using this syntax (whole_data would be a 1D array just for the example):</p> <pre><code>data_to_plot = whole_data[::10]
</code></pre> <p>This might have undesired effects, specifically masking short peaks that might "escape invisible" from the slicing operation. An alternative would be to split your data into bins, then calculate one datapoint (the maximum value, for example) for each bin. I feel that these operations might actually be fast thanks to numpy/pandas efficient array operations.</p> <p>Hope this helps!</p>
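<p>With a pandas DataFrame specifically, a minimal sketch of both ideas (using the question's SIZE_LIMIT):</p> <pre><code>import numpy as np

step = max(1, len(df) // SIZE_LIMIT)
view = df.iloc[::step]                                  # take every nth row
binned = df.groupby(np.arange(len(df)) // step).max()   # or .mean(): one row per bin
</code></pre>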
pandas
13
1,905,366
44,791,540
Store how many times a certain value repeats in multiple lists inside of a list to a dict
<p>I'm trying to grab the first value of multiple lists inside a list and store, in a dictionary/hash, how many times it repeats if it's more than once.</p> <pre><code>coordinates = [
    ['bg1955', '47.6740° N', '122.1215° W'],
    ['bg1955', '47.6101° N', '122.2015° W'],
    ['bg1955', '47.6062° N', '122.3321° W'],
    ['sj1955', '37.3318° N', '122.0312° W']
]
</code></pre> <p>When I try the following:</p> <pre><code>my_dict = {row[0]:coordinates.count(row[0]) for row in coordinates}
</code></pre> <p>The value of <code>my_dict</code> becomes:</p> <pre><code>{'sj1955': 0, 'bg1955': 0}
</code></pre> <p>instead of:</p> <pre><code>{'bg1955': 3}
</code></pre> <p>How would I obtain the above in Python 3? The original data sample would have over 20,000 lists inside one list instead of only the 4 listed above.</p> <p>EDIT: When I mention <code>certain</code>, I mean the particular place in each row, which is row[0], not just returning a single result in the dictionary. If multiple different values repeated, I'm looking to store every repeated value; say sw1950 was in 20 lists and jb1994 was in 393 lists, the result would be:</p> <pre><code>{'bg1955': 3, 'sw1950': 20, 'jb1994': 393}
</code></pre>
<p>The reason your existing approach doesn't work is because you're trying to do this:</p> <pre><code>&gt;&gt;&gt; x = [[1, 1, 1]]
&gt;&gt;&gt; x.count(1)
</code></pre> <p>Now, you think this will return <code>3</code> because 1 is present 3 times. However, this is what it returns:</p> <pre><code>0
</code></pre> <p>The reason is because those elements are in a nested list, and <code>.count()</code> does not count nested elements.</p> <p>Contrast the above with this:</p> <pre><code>&gt;&gt;&gt; x = [1, 1, 1]
&gt;&gt;&gt; x.count(1)
3
</code></pre> <p>This makes sense, because those <code>1</code>s aren't in a nested list.</p> <p>One workaround is to use the <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow noreferrer"><code>collections.Counter</code></a>:</p> <pre><code>from collections import Counter

coordinates = [
    ['bg1955', '47.6740° N', '122.1215° W'],
    ['bg1955', '47.6101° N', '122.2015° W'],
    ['bg1955', '47.6062° N', '122.3321° W'],
    ['sj1955', '37.3318° N', '122.0312° W']
]

count = Counter()
for coord in coordinates:
    count[coord[0]] += 1

print(count)
</code></pre> <p>Output:</p> <pre><code>Counter({'bg1955': 3, 'sj1955': 1})
</code></pre> <p>Now, you're free to poll this dict for counts of whichever item you like. If you want to extract duplicates, you can do this:</p> <pre><code>print({ k : count[k] for k in count if count[k] &gt; 1})
</code></pre> <p>This prints <code>{'bg1955': 3}</code>. </p>
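<p>As a side note (my addition, not part of the original answer): <code>Counter</code> can also consume a generator directly, which collapses the counting loop to a single line:</p> <pre><code>count = Counter(coord[0] for coord in coordinates)
</code></pre>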
python|python-3.x|dictionary
7
1,905,367
61,784,016
How to format numbers from 1000 to 1k in django
<p>I am working on a project in Django, and I have created a template_tags.py file in my project. How do I format numbers from 1000 to 1k, 2000 to 2k, 1000000 to 1m and so on? I am having an issue with my code: instead of getting 1000 to 1k, I got 1000 to 1.0k. What am I missing in my code?</p> <pre><code>from django import template

register = template.Library()

@register.filter
def shrink_num(value):
    """
    Shrinks number rounding
    123456 &gt; 123,5K
    123579 &gt; 123,6K
    1234567 &gt; 1,2M
    """
    value = str(value)
    if value.isdigit():
        value_int = int(value)
        if value_int &gt; 1000000:
            value = "%.1f%s" % (value_int/1000000.00, 'M')
        else:
            if value_int &gt; 1000:
                value = "%.1f%s" % (value_int/1000.0, 'k')
    return value
</code></pre>
<p>You appear to be formatting with 1 decimal. If you don't want the decimal or numbers after it, change the 1 to a 0. You also need to have <code>value_int &gt;= &lt;number&gt;</code>, otherwise 1000000 and 1000 won't be converted:</p> <pre><code>[...]
if value_int &gt;= 1000000:
    value = "%.0f%s" % (value_int/1000000.00, 'M')
else:
    if value_int &gt;= 1000:
        value = "%.0f%s" % (value_int/1000.0, 'k')
</code></pre>
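<p>Putting both fixes back into the question's filter, the whole thing might look like this (a sketch based on the code above, not a stock Django filter):</p> <pre><code>@register.filter
def shrink_num(value):
    """Shrinks a number: 123456 -&gt; 123k, 1234567 -&gt; 1M."""
    value = str(value)
    if value.isdigit():
        value_int = int(value)
        if value_int &gt;= 1000000:
            value = "%.0f%s" % (value_int / 1000000.0, 'M')
        elif value_int &gt;= 1000:
            value = "%.0f%s" % (value_int / 1000.0, 'k')
    return value
</code></pre>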
python|django
2
1,905,368
23,706,566
Call/Define Open Dataset to Run Python Calc in SPSS
<p>I have this ArcGIS python code (using the arcpy module) that I need to import and run in SPSS.</p> <p>The python code works in ArcGIS, and I have been able to successfully set the python library to the ArcGIS x64 python directory.</p> <p><strong>My question is this:</strong> How can I call/define the open (or closed) data set that I want to run the calculations on? (My current code defines this as the table "CURRENT_DATABASE_MEMORY")</p> <p>Here is my code that works in ArcGIS/Python. I have not been able to find a solution to this problem.</p> <pre><code>import arcpy
import collections

table = "CURRENT_DATABASE_MEMORY"

valueList = [r[0] for r in arcpy.da.SearchCursor(table, ["FULL_ADDRESS"])]
valueDict = collections.Counter(valueList)
uniqueList = valueDict.keys()
uniqueList.sort()

updateRows = arcpy.da.UpdateCursor(table, ["FULL_ADDRESS", "ALL_LIVE"])
for updateRow in updateRows:
    updateRow[1] = valueDict[updateRow[0]]
    updateRows.updateRow(updateRow)
del updateRow, updateRows

valueList = [r[0] for r in arcpy.da.SearchCursor(table, ["FULL_ADDRESS_NAME"])]
valueDict = collections.Counter(valueList)
uniqueList = valueDict.keys()
uniqueList.sort()

updateRows = arcpy.da.UpdateCursor(table, ["FULL_ADDRESS_NAME", "ALL_LIVE"])
for updateRow in updateRows:
    updateRow[1] = valueDict[updateRow[0]]
    updateRows.updateRow(updateRow)
del updateRow, updateRows

uniqueValues = {}
values = []
newID = 0
with arcpy.da.UpdateCursor(table, ["FULL_ADDRESS_NAME", "FEAT_SEQ"]) as updateRows:
    for row in updateRows:
        nameValue = row[0]
        if nameValue in uniqueValues:
            row[1] = uniqueValues[nameValue]
        else:
            newID += 1
            uniqueValues[nameValue] = newID
            row[1] = newID
        updateRows.updateRow(row)
    del row, updateRows
</code></pre>
<p>Adding to what Andy wrote, you might want to consider issuing a command to SPSS using the Submit api to read the dataset via ODBC (assuming that you have a driver for that source), or you could read the data source directly in the Python code, manipulate it, and then write it to SPSS using the various apis provided for that in the Python plugin. The best choice will depend on the role that you want SPSS to play in processing this data.</p>
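<p>Purely as an illustration (the DSN and SQL below are placeholders, and exact syntax varies by SPSS version and driver), reading the source over ODBC with the Submit api might look roughly like this:</p> <pre><code>import spss

# hypothetical connection string and query; substitute your own
spss.Submit(r"""
GET DATA
  /TYPE=ODBC
  /CONNECT='DSN=MyGISSource'
  /SQL='SELECT FULL_ADDRESS, ALL_LIVE FROM CURRENT_DATABASE_MEMORY'.
EXECUTE.
""")
</code></pre>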
python|arcgis|spss|arcpy
0
1,905,369
24,429,764
PyQt4 enable button on text entry, connect windows
<p>I am attempting to write a gui program in python but have virtually no experience in gui programming. I started learning tkinter and picked up most of what I need to know for the purposes of my program, but I recently discovered PyQt and Qt Designer, and the output looks a lot nicer.</p> <p>In my program, I want to first open a small window that prompts the user for some information (that it will use to load a file or create a new file). Because this information is crucial, I don't want the user to be able to progress from the first small window without entering it, so I want the 'OK' button to be disabled initially and enabled when the user enters information into the field. I have made an attempt at this (mostly created in Qt Designer and then edited) which is shown below.</p> <pre><code># -*- coding: utf-8 -*-

# Form implementation generated from reading ui file 'open.ui'
#
# Created: Wed Jun 25 17:51:25 2014
#      by: PyQt4 UI code generator 4.10.3
#
# WARNING! All changes made in this file will be lost!

from PyQt4 import QtCore, QtGui

try:
    _fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
    def _fromUtf8(s):
        return s

try:
    _encoding = QtGui.QApplication.UnicodeUTF8
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig)

class Ui_Form(object):
    def setupUi(self, Form):
        Form.setObjectName(_fromUtf8("Form"))
        Form.setEnabled(True)
        Form.resize(308, 143)
        self.horizontalLayout = QtGui.QHBoxLayout(Form)
        self.horizontalLayout.setObjectName(_fromUtf8("horizontalLayout"))
        self.verticalLayout = QtGui.QVBoxLayout()
        self.verticalLayout.setObjectName(_fromUtf8("verticalLayout"))
        self.horizontalLayout_3 = QtGui.QHBoxLayout()
        self.horizontalLayout_3.setObjectName(_fromUtf8("horizontalLayout_3"))
        spacerItem = QtGui.QSpacerItem(40, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Minimum)
        self.horizontalLayout_3.addItem(spacerItem)
        self.label = QtGui.QLabel(Form)
        self.label.setObjectName(_fromUtf8("label"))
        self.horizontalLayout_3.addWidget(self.label)
        spacerItem1 = QtGui.QSpacerItem(40, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Minimum)
        self.horizontalLayout_3.addItem(spacerItem1)
        self.verticalLayout.addLayout(self.horizontalLayout_3)
        self.horizontalLayout_7 = QtGui.QHBoxLayout()
        self.horizontalLayout_7.setObjectName(_fromUtf8("horizontalLayout_7"))
        self.label_2 = QtGui.QLabel(Form)
        self.label_2.setObjectName(_fromUtf8("label_2"))
        self.horizontalLayout_7.addWidget(self.label_2)
        self.lineEdit = QtGui.QLineEdit(Form)
        self.lineEdit.setObjectName(_fromUtf8("lineEdit"))
        self.horizontalLayout_7.addWidget(self.lineEdit)
        self.verticalLayout.addLayout(self.horizontalLayout_7)
        self.horizontalLayout_8 = QtGui.QHBoxLayout()
        self.horizontalLayout_8.setObjectName(_fromUtf8("horizontalLayout_8"))
        self.label_3 = QtGui.QLabel(Form)
        self.label_3.setObjectName(_fromUtf8("label_3"))
        self.horizontalLayout_8.addWidget(self.label_3)
        self.lineEdit_2 = QtGui.QLineEdit(Form)
        self.lineEdit_2.setObjectName(_fromUtf8("lineEdit_2"))
        self.horizontalLayout_8.addWidget(self.lineEdit_2)
        self.verticalLayout.addLayout(self.horizontalLayout_8)
        self.horizontalLayout_9 = QtGui.QHBoxLayout()
        self.horizontalLayout_9.setObjectName(_fromUtf8("horizontalLayout_9"))
        self.pushButton_2 = QtGui.QPushButton(Form)
        self.pushButton_2.setEnabled(False)
        self.pushButton_2.setObjectName(_fromUtf8("pushButton_2"))
        self.horizontalLayout_9.addWidget(self.pushButton_2)
        self.pushButton = QtGui.QPushButton(Form)
        self.pushButton.setObjectName(_fromUtf8("pushButton"))
        self.horizontalLayout_9.addWidget(self.pushButton)
        self.verticalLayout.addLayout(self.horizontalLayout_9)
        self.horizontalLayout.addLayout(self.verticalLayout)

        self.retranslateUi(Form)
        QtCore.QObject.connect(self.lineEdit_2, QtCore.SIGNAL(_fromUtf8("textEdited(QString)")), self.pushButton_2.setEnabled)
        QtCore.QMetaObject.connectSlotsByName(Form)

    def retranslateUi(self, Form):
        Form.setWindowTitle(_translate("Form", "Form", None))
        self.label.setText(_translate("Form", "Please enter your name and student number:", None))
        self.label_2.setText(_translate("Form", "Name: ", None))
        self.label_3.setText(_translate("Form", "Student number: ", None))
        self.pushButton_2.setText(_translate("Form", "OK", None))
        self.pushButton.setText(_translate("Form", "Cancel", None))

if __name__ == "__main__":
    import sys
    app = QtGui.QApplication(sys.argv)
    Form = QtGui.QWidget()
    ui = Ui_Form()
    ui.setupUi(Form)
    Form.show()
    sys.exit(app.exec_())
</code></pre> <p>When I run the program and type something into the student number field, it gives the following error: <code>TypeError: QWidget.setEnabled(bool): argument 1 has unexpected type 'str'</code>. I realise that the problem is that it is taking a string as an input rather than a boolean, but I don't know how to fix it.</p> <p>The second part of my problem is that I want a new, bigger window to open when the user clicks 'OK' on the small window. This window will have a 'next' option at the bottom where it progresses to another similar window, but I have no idea how to do this (link windows, that is).</p> <p>How do I do the things mentioned above? Or should I just stick to tkinter, even though it seems to me to be aesthetically inferior? Thanks.</p>
<p>Firstly, you should never write code into the generated UI file from Qt Designer, as this gets overwritten by the pyuic tool the next time you write to it. Instead import it into a separate file. </p> <p>This line needs to be removed from your Ui_Form file:</p> <pre><code>QtCore.QObject.connect(self.lineEdit_2, QtCore.SIGNAL(_fromUtf8("textEdited(QString)")), self.pushButton_2.setEnabled)
</code></pre> <p>Example:</p> <pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys
from PyQt4.QtCore import pyqtSlot
from PyQt4.QtGui import QWidget, QApplication

#assumes your file is called Ui_Main_Form.py from the pyuic tool
from Ui_Main_Form import Ui_Form

class MainForm(QWidget):
    def __init__(self, parent=None):
        #Initialise
        super(MainForm, self).__init__(parent)

        #Setup the UI
        self.ui = Ui_Form()
        self.ui.setupUi(self)

        #Now can do any modifications that cant be done in Qt Designer

        #Handle the textChanged signal for QLineEdit
        self.ui.lineEdit_2.textChanged.connect(self.line_edit_text_changed)

    @pyqtSlot(str)
    def line_edit_text_changed(self, text):
        if text:  # Check to see if text is filled in
            self.ui.pushButton_2.setEnabled(True)
        else:
            self.ui.pushButton_2.setEnabled(False)

if __name__ == '__main__':
    app = QApplication(sys.argv)
    my_form = MainForm()
    my_form.show()
    sys.exit(app.exec_())
</code></pre> <p>There is some useful information on the signal and slots mechanism for PyQt <a href="http://pyqt.sourceforge.net/Docs/PyQt4/new_style_signals_slots.html" rel="nofollow noreferrer">here</a>.</p> <p>For how to launch windows from this one, see the accepted answer of <a href="https://stackoverflow.com/questions/13517568/how-to-create-new-pyqt4-windows-from-an-existing-window">How to create new PyQt4 windows from an existing window?</a></p> <p>Hope this gets you started.</p>
python|qt|user-interface|pyqt
3
1,905,370
64,439,323
Why does Beautiful Soup view '<' as invalid character?
<p>I am attempting to use Beautiful Soup to pull css elements for the first time, and I am consistently getting the following error regardless of which css element I attempt to select:</p> <blockquote> <p>soupsieve.util.SelectorSyntaxError: Invalid character '&lt;' position 0 line 1:</p> </blockquote> <p><code>soup.select(&quot;&lt;span class=&quot;regular-price&quot; data-ui=&quot;size-color-price&quot;&gt;$230.00&lt;/span&gt;)</code></p> <p>I feel like I am missing something fundamental regarding the use of the 'less than' symbol, so I have tried manually typing in the CSS element as well (assuming there might be some formatting I couldn't see), but the issue persists.</p>
<p><code>soup.select</code> accepts a <code>css selector</code> string as its argument, not the HTML markup of the element itself. So you have to pass a <code>css selector</code> to <code>soup.select</code> instead of typing out the entire element.</p> <p>Right click on the element in your browser's developer tools and click <code>Copy CSS selector</code>. Then paste that <code>css selector</code> into <code>soup.select</code>. Your code should look like this:</p> <pre><code>soup.select('css selector of the element')
</code></pre>
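<p>For the specific span in the question, a selector built by hand from its class and data attribute (written here from the snippet shown, so adjust if the page's markup differs) would be:</p> <pre><code>price = soup.select('span.regular-price[data-ui="size-color-price"]')
if price:
    print(price[0].get_text())  # e.g. $230.00
</code></pre>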
python|beautifulsoup
4
1,905,371
70,731,308
Debug a Python script in VSCODE while calling it from terminal with argparse
<p>Suppose I have the following script:</p> <pre><code>import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-t', '--text', help=&quot;Input a text&quot;)
args = parser.parse_args()

def test_function(x):
    y = x
    print(y)

if __name__ == '__main__':
    test_function(args.text)
</code></pre> <p>which I call from the console with</p> <pre><code>python newtest.py -t hello
</code></pre> <p><strong>Question:</strong> In Visual Studio Code, is there a way that I can execute the code from the command line (like shown above), but simultaneously also put a breakpoint, e.g. at the <code>y = x</code> line in <code>test_function</code>, so that I can debug the script that I have called from the command line?</p> <p>Right now it is just executed and the breakpoint is ignored; basically it does <strong>not</strong> stop here: <a href="https://i.stack.imgur.com/7wG4h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7wG4h.png" alt="enter image description here" /></a></p>
<p>Not quite the answer to your question, but my web searching brought me here before I found what I wanted, which was to call the script with arguments and use the vscode debugger. This doesn't debug what you call in the terminal; instead it sets what is called when you start a debug session. If you've got a lot of different things in your folder it may be a hassle to maintain them all, but...</p> <p>If you go to your <code>launch.json</code> file in the <code>.vscode</code> directory, you'll get your debug configs. You can add a list item for <code>args</code> in there. Mine looks like this, which then calls whatever python file I'm debugging plus <code>-r asdf</code>:</p> <pre class="lang-json prettyprint-override"><code>{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    &quot;version&quot;: &quot;0.2.0&quot;,
    &quot;configurations&quot;: [
        {
            &quot;args&quot;: [&quot;-r&quot;, &quot;asdf&quot;],
            &quot;name&quot;: &quot;Python: Current File&quot;,
            &quot;type&quot;: &quot;python&quot;,
            &quot;request&quot;: &quot;launch&quot;,
            &quot;program&quot;: &quot;${file}&quot;,
            &quot;console&quot;: &quot;integratedTerminal&quot;,
            &quot;justMyCode&quot;: true
        }
    ]
}
</code></pre> <p>If you have multiple arguments, don't put a flag and its value in the same string; each token should be its own list element, since it is a list after all. Having multiple arguments would look like this:</p> <pre class="lang-json prettyprint-override"><code>...
&quot;args&quot;: [&quot;-r&quot;, &quot;asdf&quot;, &quot;-m&quot;, &quot;fdsa&quot;]
...
</code></pre>
python|debugging|visual-studio-code|argparse
2
1,905,372
69,798,041
Get the image url inside Javascript with Python and BeautifulSoup
<p>I am trying to get the product image from the page below, using Python and BeautifulSoup. The image is inside javascript. I am using lxml. I have created a simplified version of my code to focus on the image only.</p> <p>The image url I am after is <a href="https://lapa.co.za/pub/media/catalog/product/cache/image/700x700/e9c3970ab036de70892d86c6d221abfe/l/e/learn_to_read_l3_b05_tippie_fish_cover.jpg" rel="nofollow noreferrer">https://lapa.co.za/pub/media/catalog/product/cache/image/700x700/e9c3970ab036de70892d86c6d221abfe/l/e/learn_to_read_l3_b05_tippie_fish_cover.jpg</a></p> <pre><code>import json
from bs4 import BeautifulSoup
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (iPad; CPU OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148'
}

testlink = 'https://lapa.co.za/kinder-en-tienerboeke/leer-my-lees-vlak-1-grootboek-9-tippie-en-die-vis'
r = requests.get(testlink, headers=headers)

soup = BeautifulSoup(r.content, 'lxml')
title = soup.find('h1', class_='page-title').text.strip()
images = soup.find('div', class_='product-img-column')

# html_data = requests.get(testlink).text
# data = json.loads(re.search(r'window.INITIAL_REDUX_STATE=(\{.*?\});', html_data))

print(images)
</code></pre>
<p>The json is in the <code>&lt;script&gt;</code> tags. Just need to pull that out.</p> <pre><code>import json
from bs4 import BeautifulSoup
import requests
import re

headers = {
    'User-Agent': 'Mozilla/5.0 (iPad; CPU OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148'
}

testlink = 'https://lapa.co.za/kinder-en-tienerboeke/leer-my-lees-vlak-1-grootboek-9-tippie-en-die-vis'
r = requests.get(testlink, headers=headers)

soup = BeautifulSoup(r.content, 'lxml')
title = soup.find('h1', class_='page-title').text.strip()
images = soup.find('div', class_='product-img-column')
script = images.find('script', {'type':'text/x-magento-init'})

jsonStr = re.search(r'&lt;script type=\&quot;text/x-magento-init\&quot;&gt;(.*)&lt;/script&gt;', str(script), re.IGNORECASE | re.DOTALL).group(1)
data = json.loads(jsonStr)

image_data = data['[data-gallery-role=gallery-placeholder]']['mage/gallery/gallery']['data'][0]
image_url = image_data['full']
# OR
#image_url = image_data['img']

print(image_url)
</code></pre> <p><strong>Output:</strong></p> <pre><code>print(image_url)
https://lapa.co.za/pub/media/catalog/product/cache/image/e9c3970ab036de70892d86c6d221abfe/9/7/9780799377347_1.jpg
</code></pre>
python|web-scraping|beautifulsoup|lxml|screen-scraping
2
1,905,373
73,144,722
Finding a row in a 2d array in python if the value of the column is known
<p>I have a matrix in an excel sheet that I am reading into my script using Pandas. I convert it to an np matrix like so and come out with this as a result:</p> <pre><code>df = pd.read_excel(r'C:\Users\PycharmProjects\OLS_Script\ols1.xlsx')
matrix = np.matrix(matrix)
print(matrix)
</code></pre> <p><a href="https://i.stack.imgur.com/4c9p2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4c9p2.png" alt="enter image description here" /></a></p> <p>Now my question is whether there are some mechanics in python that I am not familiar with, as I come from C++. Essentially what I am trying to do with this matrix is, for each column, acquire every row that has a value in that column, along with the populated column indices of each such row.</p> <p>For instance, looking at column 0, I would need the entire row 0, as well as row 1, since it is the only other row with a 1 in column 0. So I would need the indices <strong>[1,2,3] in row 0, and [4,5,6,7] in row 1</strong>, excluding anything in column 0.</p> <p>Column 1 has a 1 in row 0 as well as row 2, so I would need to get those two rows and the corresponding populated column indices in each of those rows: <strong>[0,2,3] in row 0, [4,8,9,10,11] in row 2</strong>, excluding anything in column 1, and so on for the following columns.</p> <p>My original idea was to have a for loop go through each column, then another for loop inside to go through each row at the same column index to find where the next corresponding row is. I am not too familiar with python and the various &quot;shortcuts&quot; it has compared to something like C++, and I have seen a lot of functionality in python that might accomplish what I need with as few lines of code as possible, so if there is a shorter way, please let me know.</p>
<p>With an excerpt of your matrix as an example:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

matrix = [
    [1, 1, 1, 1, pd.NA, pd.NA, pd.NA, pd.NA, pd.NA, pd.NA, pd.NA, pd.NA],
    [1, pd.NA, pd.NA, pd.NA, 1, 1, 1, 1, pd.NA, pd.NA, pd.NA, pd.NA],
    [pd.NA, 1, pd.NA, pd.NA, 1, pd.NA, pd.NA, pd.NA, 1, 1, 1, 1],
]
df = pd.DataFrame(matrix)
</code></pre> <pre class="lang-py prettyprint-override"><code>      0     1     2     3     4     5     6     7     8     9    10    11
0     1     1     1     1  &lt;NA&gt;  &lt;NA&gt;  &lt;NA&gt;  &lt;NA&gt;  &lt;NA&gt;  &lt;NA&gt;  &lt;NA&gt;  &lt;NA&gt;
1     1  &lt;NA&gt;  &lt;NA&gt;  &lt;NA&gt;     1     1     1     1  &lt;NA&gt;  &lt;NA&gt;  &lt;NA&gt;  &lt;NA&gt;
2  &lt;NA&gt;     1  &lt;NA&gt;  &lt;NA&gt;     1  &lt;NA&gt;  &lt;NA&gt;  &lt;NA&gt;     1     1     1     1
</code></pre> <p>Here is one way to do it:</p> <pre class="lang-py prettyprint-override"><code>results = {
    i: df.dropna(subset=i).dropna(how=&quot;all&quot;, axis=1).drop(columns=i).columns.to_list()
    for i in range(df.shape[1])
}
</code></pre> <pre class="lang-py prettyprint-override"><code>print(results)
# Output
{
    0: [1, 2, 3, 4, 5, 6, 7],
    1: [0, 2, 3, 4, 8, 9, 10, 11],
    2: [0, 1, 3],
    3: [0, 1, 2],
    4: [0, 1, 5, 6, 7, 8, 9, 10, 11],
    5: [0, 4, 6, 7],
    6: [0, 4, 5, 7],
    7: [0, 4, 5, 6],
    8: [1, 4, 9, 10, 11],
    9: [1, 4, 8, 10, 11],
    10: [1, 4, 8, 9, 11],
    11: [1, 4, 8, 9, 10],
}
</code></pre>
python|pandas
1
1,905,374
50,168,712
How to stop a for loop inside of a while loop in Python
<p>I am producing a simple athlete race time data entry form. I need it to ask on every pass whether the user wants to continue: if so, it goes again; if not, it exits the while loop. If the user has not entered at least 4 (up to 8) pieces of data, it should produce an error instead of printing out the times. I believe the error is because, after going through the while loop the first time, it does not do another pass until it completes all 8 iterations of the for loop. How would I get around this problem? Please explain your code, and relate it to the context I have given.</p> <pre><code>import time

datasets = []
carry_on = True
while carry_on == True:
    for i in range(0, 8):
        print("Inputting Data for Lane", i)
        gender = str(input("Is the athlete male or female "))
        athlete = str(input("What is the athletes name "))
        finishTime = float(input("What was the finishing time "))
        dataset = [gender, athlete, finishTime]
        datasets.append(dataset)
        decision = input("Would you like to add another lane ")
        if decision == "yes":
            carry_on = True
        else:
            carry_on = False
    print("")
    if 3 &lt; i &gt; 9:
        print("{0:&lt;10}{1:&lt;10}{2:&lt;15}".format("Gender","Athlete","Finish time"))
        ds = sorted(datasets, key=lambda x:x[2], reverse=False)
        for s in ds:
            time.sleep(1)
            print("{0:&lt;10}{1:&lt;10}{2:&lt;15}".format(s[0], s[1], s[2]))
    else:
        print("You have not chosen enough lanes, please choose atleast 4")
</code></pre>
<p>First of all, LEARN THE BASICS.</p> <p>Try <code>break</code> in the for loop; I'm not sure the while is even required:</p> <pre><code>for i in range(0, 8):
    print("Inputting Data for Lane", i)
    gender = str(input("Is the athlete male or female "))
    athlete = str(input("What is the athletes name "))
    finishTime = float(input("What was the finishing time "))
    dataset = [gender, athlete, finishTime]
    datasets.append(dataset)
    decision = input("Would you like to add another lane ")
    if decision != "yes":
        break
</code></pre> <p>Going by your code and what you have asked:</p> <pre><code>import time

datasets = []
carry_on = True
while carry_on == True:
    for i in range(0, 8):
        print("Inputting Data for Lane", i)
        gender = str(input("Is the athlete male or female "))
        athlete = str(input("What is the athletes name "))
        finishTime = float(input("What was the finishing time "))
        dataset = [gender, athlete, finishTime]
        datasets.append(dataset)
        decision = input("Would you like to add another lane ")
        if decision == "yes":
            carry_on = True
        else:
            carry_on = False
            break
    print("")
    if 3 &lt; i &gt; 9:
        print("{0:&lt;10}{1:&lt;10}{2:&lt;15}".format("Gender","Athlete","Finish time"))
        ds = sorted(datasets, key=lambda x:x[2], reverse=False)
        for s in ds:
            time.sleep(1)
            print("{0:&lt;10}{1:&lt;10}{2:&lt;15}".format(s[0], s[1], s[2]))
    else:
        print("You have not chosen enough lanes, please choose atleast 4")
</code></pre>
python|python-3.x|for-loop|while-loop|boolean
2
1,905,375
50,154,804
Python append to array and for loop for it
<p>I am trying to append some links to a list and then loop over them (to visit each one).</p> <p>My code:</p> <pre><code>import requests
from requests_html import HTMLSession
import sys

links = []
link = "http://tvil.me"
pagedata = HTMLSession().get(link)
info = pagedata.html.find('#page-right', first=True)
sidra = info.find(".index-episode-caption")

for x in sidra:
    link = x.absolute_links
    links.append(link)

for i in links:
    print(i)
    ##page = HTMLSession().get(i) - Not working because of the response shown below
    ##print(page.xpath("//*[contains(@id, 'change-season-')]/a"))
</code></pre> <p>The response of <code>print(i)</code>:</p> <pre><code>{'http://www.tvil.me/view/374/2/6/v/מסע_בזמן_Timeless.html'}
{'http://www.tvil.me/view/212/3/22/v/לוציפר_Lucifer.html'}
{'http://www.tvil.me/view/3048/1/7/v/תחנה_19_Station_19.html'}
{'http://www.tvil.me/view/3039/1/10/v/המגדלים_הגבוהים_The_Looming_Tower.html'}
{'http://www.tvil.me/view/109/5/6/v/עמק_הסיליקון_Silicon_Valley.html'}
{'http://www.tvil.me/view/68/5/11/v/ילדה_אבודה_Lost_Girl.html'}
{'http://www.tvil.me/view/556/1/20/v/שלדון_הצעיר_Young_Sheldon.html'}
{'http://www.tvil.me/view/74/4/6/v/מפרשים_שחורים_Black_Sails.html'}
{'http://www.tvil.me/view/360/2/2/v/ווסטוורלד_Westworld.html'}
[Finished in 3.7s]
</code></pre> <p>Each value is printed wrapped in {' '}, so I can't open it as a link. What should I do?</p>
<p>Can you try this?</p> <p><code>{''}</code> means it's a <code>set</code>, and you only want the element inside the <code>set</code>; <code>pop</code> gives you that element:</p> <pre><code>for x in sidra:
    link = x.absolute_links.pop()
    links.append(link)
</code></pre>
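<p>A small aside (my addition, not from the original answer): if you'd rather not mutate the set, <code>next(iter(...))</code> reads one element without removing it:</p> <pre><code>link = next(iter(x.absolute_links))  # take one element, leave the set intact
</code></pre>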
python|python-3.x|python-requests|python-requests-html
3
1,905,376
64,093,357
Pandas apply sort_values on GroupBy object does not return a grouped DataFrame
<p>I don't understand why the following code is not working. I have the following dataframe:</p> <pre class="lang-py prettyprint-override"><code>ind = pd.MultiIndex.from_tuples(
    [(2, 9), (2, 0), (3, 15), (3, 8), (2, 28), (2, 15), (2, 10), (3, 9)],
    names=['A','B'])
values = [0.2719, 0.2938, 0.3281, 0.3310, 0.3323, 0.3640, 0.3647, 0.5218]
df = pd.DataFrame(data = values, index=ind, columns = ['values'])
</code></pre> <p><a href="https://i.stack.imgur.com/qHgsP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qHgsP.png" alt="enter image description here" /></a></p> <p>Applying a groupby sort_values doesn't do anything:</p> <pre class="lang-py prettyprint-override"><code>df.groupby('A').apply(lambda x: x.sort_values(by='values'))
</code></pre> <p><a href="https://i.stack.imgur.com/d7Ufj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d7Ufj.png" alt="enter image description here" /></a></p> <p>Note that the values are already <em>globally</em> sorted.</p> <p>Now when I just swap two rows, and thereby destroy the prior global sorting, it suddenly works:</p> <pre class="lang-py prettyprint-override"><code>df1 = df.iloc[np.r_[1,0,2:len(df)]]
df1.groupby('A').apply(lambda x: x.sort_values(by='values'))
</code></pre> <p><a href="https://i.stack.imgur.com/kUlFN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kUlFN.png" alt="enter image description here" /></a></p> <p>This is the result I would expect from the other code also.</p>
<p>It doesn't say a great deal about the <code>combine</code> part of the <code>split-apply-combine</code> in the docs:</p> <blockquote> <p>GroupBy will examine the results of the apply step and try to return a sensibly combined result.</p> </blockquote> <p>Since you're not changing the number of rows or their order in the first example, <code>apply</code> functions more like <code>transform</code>, which returns a &quot;like-indexed object&quot;.</p> <p>I think if what you want is a nested sort, you can just pass a list to <code>sort_values</code> directly, like so:</p> <pre><code>df.sort_values([&quot;A&quot;, &quot;values&quot;])
</code></pre> <pre><code>      values
A B         
2 9   0.2719
  0   0.2938
  28  0.3323
  15  0.3640
  10  0.3647
3 15  0.3281
  8   0.3310
  9   0.5218
</code></pre>
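<p>One more detail worth noting (my addition): if you do want the per-group <code>apply</code> approach, <code>group_keys=False</code> stops pandas from prepending the group label to the result's index, so the output keeps the original shape:</p> <pre><code>df.groupby('A', group_keys=False).apply(lambda g: g.sort_values(by='values'))
</code></pre>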
python|pandas|dataframe|sorting|pandas-groupby
1
1,905,377
53,252,597
How to set context variable of all Django generic views at once?
<p>I will have standard class-based views for CRUD operations that inherit from various generic views like ListView, DetailView and so on. I will be setting all of their <code>context_object_name</code> attributes to the same value. </p> <p>I was wondering if there is a more pythonic way to do it, so as not to repeat the operation many times in the code, but to be able to change that variable in one place if necessary.</p> <p>P.S. What comes to my mind is of course further inheritance, but maybe there is some more django-like way?</p>
<p>You can also use a mixin, instead of a middleware app:</p> <pre><code>class CommonContextMixin(object):
    def get_context_data(self, *args, **kwargs):
        context = super(CommonContextMixin, self).get_context_data(*args, **kwargs)
        context['foo'] = 'bar'
        return context
</code></pre> <p>Then use that mixin in your views. Note that the mixin goes <em>first</em> in the base class list, so that its <code>get_context_data</code> is found before the view's and can chain to it via <code>super()</code>:</p> <pre><code>class MyView(CommonContextMixin, TemplateView):
    """ This view now has the foo variable as part of its context. """
</code></pre> <p>Relevant Django docs: <a href="https://docs.djangoproject.com/en/2.1/topics/class-based-views/mixins/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/2.1/topics/class-based-views/mixins/</a></p>
python|django|django-class-based-views
1
1,905,378
53,014,382
How to generate random values from a given list and attach to a dictionary with replacement
<pre><code>a = ['1100001', '1100010', '1100011', '1100100', '1100101', '1100110', '1100111']
b = [51, 51, 52, 52]
c = {}
</code></pre> <p>In this program, generate random values by drawing from list a. In the dictionary c, assign the values of b as keys and the random numbers as values, with replacement.</p>
<p>You can <code>zip</code> together <code>b</code> with a <a href="https://docs.python.org/3/library/random.html#random.sample" rel="nofollow noreferrer"><code>random.sample</code></a> from <code>a</code> of size <code>len(b)</code>:</p> <pre><code>from random import sample

a = ['1100001', '1100010', '1100011', '1100100', '1100101', '1100110', '1100111']
b = [51, 52, 53]

c = dict(zip(b, sample(a, len(b))))
print(c)
# {51: '1100100', 52: '1100110', 53: '1100111'}
</code></pre>
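<p>One caveat to add (mine, not the answerer's): <code>random.sample</code> draws <em>without</em> replacement. If the draws genuinely may repeat, <code>random.choices</code> (Python 3.6+) samples with replacement:</p> <pre><code>from random import choices

c = dict(zip(b, choices(a, k=len(b))))
</code></pre>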
python-3.x
0
1,905,379
65,364,444
Pandas: Creating a pandas date-time series with different frequencies
<p>I need to create a pandas column that has a date range from 2015-12-01 to 2016-12-01, but with different time frequencies:</p> <ul> <li>From 01:00:00 to 07:00:00, a 1 hour frequency</li> <li>From 07:00:00 to 22:00:00, a 30 min frequency</li> <li>From 22:00:00 to 00:00:00, a 1 hour frequency</li> </ul> <p>The output for the first day should look like this; however, the objective is to do it for the whole date range:</p> <pre><code>1     2015-12-01 02:00:00
2     2015-12-01 03:00:00
3     2015-12-01 04:00:00
4     2015-12-01 05:00:00
5     2015-12-01 06:00:00
6     2015-12-01 07:00:00
7     2015-12-01 07:30:00
8     2015-12-01 08:00:00
9     2015-12-01 08:30:00
10    2015-12-01 09:00:00
11    2015-12-01 09:30:00
12    2015-12-01 10:00:00
13    2015-12-01 10:30:00
14    2015-12-01 11:00:00
15    2015-12-01 11:30:00
16    2015-12-01 12:00:00
17    2015-12-01 12:30:00
18    2015-12-01 13:00:00
19    2015-12-01 13:30:00
20    2015-12-01 14:00:00
21    2015-12-01 14:30:00
22    2015-12-01 15:00:00
23    2015-12-01 15:30:00
24    2015-12-01 16:00:00
25    2015-12-01 16:30:00
26    2015-12-01 17:00:00
27    2015-12-01 17:30:00
28    2015-12-01 18:00:00
29    2015-12-01 18:30:00
30    2015-12-01 19:00:00
31    2015-12-01 19:30:00
32    2015-12-01 20:00:00
33    2015-12-01 20:30:00
34    2015-12-01 21:00:00
35    2015-12-01 21:30:00
36    2015-12-01 22:00:00
37    2015-12-01 23:00:00
38    2015-12-02 00:00:00
</code></pre> <p>For this I used:</p> <pre><code>datetime_series_1 = pd.Series(pd.date_range(&quot;2015-12-01 01:00:00&quot;, periods=7, freq=&quot;h&quot;))
datetime_series_2 = pd.Series(pd.date_range(&quot;2015-12-01 07:30:00&quot;, periods=29, freq=&quot;30min&quot;))
datetime_series_3 = pd.Series(pd.date_range(&quot;2015-12-01 22:00:00&quot;, periods=3, freq=&quot;h&quot;))

datetime_series = pd.concat([datetime_series_1, datetime_series_2, datetime_series_3])
datetime_series.reset_index(inplace=True, drop=True)
print(datetime_series)
</code></pre> <p>However, I don't know how to make a for loop that can reproduce this over the whole date range from 2015-12-01 to 2016-12-01 mentioned above. Basically, I don't know how to tell the loop to change the date inside the string passed to the date_range method.</p> <p>Any help would be greatly appreciated.</p> <p>Thank you !</p>
<p>This should do the trick:</p> <pre><code># Your code
datetime_series_1 = pd.Series(pd.date_range(&quot;2015-12-01 01:00:00&quot;, periods=7, freq=&quot;h&quot;))
datetime_series_2 = pd.Series(pd.date_range(&quot;2015-12-01 07:30:00&quot;, periods=29, freq=&quot;30min&quot;))
datetime_series_3 = pd.Series(pd.date_range(&quot;2015-12-01 22:00:00&quot;, periods=3, freq=&quot;h&quot;))

datetime_series = pd.concat([datetime_series_1, datetime_series_2, datetime_series_3])
datetime_series.reset_index(inplace=True, drop=True)

# loop through the number of days and use a day delta, adding to a list
list_dates = [datetime_series]*366  # 2016 was a leap year :)
for i in range(0, 366):
    list_dates[i] = datetime_series + pd.Timedelta(&quot;{0} days&quot;.format(i))

# concat that list at the end
datetime_series = pd.concat(list_dates)
print(datetime_series)
</code></pre>
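<p>If you prefer, the same idea fits in a single expression (an equivalent sketch, my phrasing rather than the answerer's):</p> <pre><code>datetime_series = pd.concat(
    [datetime_series + pd.Timedelta(days=i) for i in range(366)],
    ignore_index=True)
</code></pre>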
python|pandas|date|datetime|date-range
1
1,905,380
65,386,932
Python: Recursive Function. How to return all subsets of targetsum
<p>My code is not showing the shortest subset, e.g. [7]; it is not considering all the subsets ([7], [3,4]) in order to return the shortest one. Can you explain why only one result is returned, and how I should modify the code to consider all subsets? Thanks.</p> <p>The image of the code I wanted to follow is below.</p> <p><a href="https://i.stack.imgur.com/5Xlaq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Xlaq.jpg" alt="enter image description here" /></a></p> <pre><code>def howsum(targetsum, numbers, combo=None):
    if combo == None:
        combo = list()
    if targetsum == 0:
        return [ ]
    if targetsum &lt; 0:
        return None
    shortcombo = None
    for number in numbers:
        remainder = targetsum - number
        combo = howsum(remainder, numbers, combo)
        if combo != None:
            combo.append(number)
            if shortcombo == None or len(shortcombo) &gt; len(combo):
                shortcombo = combo
            return shortcombo
    return shortcombo

print(howsum(7,[4,3,7]))
</code></pre>
<p>Wrote code that closely matches the original JavaScript.</p> <p>Although JavaScript names will work, I refactored function and variable names to agree with <a href="https://www.python.org/dev/peps/pep-0008/#function-and-variable-names" rel="nofollow noreferrer">Python style</a>, namely:</p> <ul> <li>Function names should be lowercase, with words separated by underscores as necessary to improve readability.</li> <li>Variable names follow the same convention as function names.</li> </ul> <p><strong>Code</strong></p> <pre><code>def best_sum(target_sum, numbers):
    if target_sum == 0:
        return []
    if target_sum &lt; 0:
        return None
    shortest_combination = None
    for num in numbers:
        remainder = target_sum - num
        remainder_combination = best_sum(remainder, numbers)
        if remainder_combination != None:
            combination = [*remainder_combination, num]  # Python * equivalent to JavaScript ...
            if shortest_combination == None or len(combination) &lt; len(shortest_combination):
                shortest_combination = combination
    return shortest_combination
</code></pre> <p><strong>Test</strong></p> <pre><code>print(best_sum(7, [3, 4, 7]))
# Output: [7]
</code></pre> <p><strong>Using Memoization (i.e. caching)</strong></p> <pre><code>def best_sum(target_sum, numbers, memo = None):
    if memo is None:
        memo = {0: []}
    if target_sum &lt; 0:
        return None
    if target_sum in memo:
        return memo[target_sum]
    shortest_combination = None
    for num in numbers:
        remainder = target_sum - num
        remainder_combination = best_sum(remainder, numbers, memo)
        if remainder_combination != None:
            combination = [*remainder_combination, num]  # Python * equivalent to JavaScript ...
            if shortest_combination == None or len(combination) &lt; len(shortest_combination):
                shortest_combination = combination
    memo[target_sum] = shortest_combination
    return memo[target_sum]

print(best_sum(7, [3, 4, 7]))
# Output: [7]

# Following is very slow on the non-memoized version
print(best_sum(100, [10, 1, 25]))
# Output: [25, 25, 25, 25]
</code></pre>
python|recursion
2
1,905,381
71,962,978
Is it impossible to create GUI in python without tkinter?
<p>I want to know: is it possible to create a GUI in python without using tkinter?</p>
<p>Tk isn't technically making the GUI; it's delegating to a C library, <strong>and that's exactly what you can do too.</strong> For simplicity, however, it's pretty much standard to use TkInter or some other framework (I recommend <a href="https://pysimplegui.readthedocs.io/en/latest/" rel="nofollow noreferrer">PySimpleGUI</a>) that takes care of the Tk interaction for you.</p>
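<p>For flavour, a tiny PySimpleGUI window looks roughly like this (a sketch of the library's usual pattern; check the current docs for details):</p> <pre><code>import PySimpleGUI as sg

layout = [[sg.Text("Hello!")],
          [sg.Button("OK")]]
window = sg.Window("Demo", layout)
event, values = window.read()   # blocks until a button press or window close
window.close()
</code></pre>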
python|user-interface|tkinter
0
1,905,382
10,759,353
Good results when running one by one, wrong when using a loop
<p>I have a 2D grid of ones and zeros. A cluster is defined as a non-diagonal set of neighboring ones. For example, if we look at a grid:</p> <pre><code>[[0 0 0 0 0]
 [1 1 1 1 1]
 [1 0 0 0 1]
 [0 1 0 0 1]
 [1 1 1 1 0]]
</code></pre> <p>One cluster would be the set of coordinates (actually I use lists for this, but it's not important):</p> <pre><code>c1=[[1, 0], [1, 1], [1, 2], [1, 3], [1, 4], [2, 1], [2, 4], [3, 4]]
</code></pre> <p>The other cluster in this grid is given by:</p> <pre><code>c2=[[3,1], [4, 0], [4, 1], [4, 2], [4, 3]]
</code></pre> <p>Now, I have made a method that, for a given starting coordinate (if its value is 1), returns the cluster to which that point belongs (for example, if I choose the [1,1] coordinate it would return c1).<br> For testing I'll choose the point <code>(1, 1)</code> and a small grid. This is the output when the result is good:</p> <pre><code>Number of recursions: 10
Length of cluster: 10
[[1 1 1 0 1]
 [1 1 0 1 1]
 [0 1 0 0 1]
 [1 1 1 0 0]
 [0 1 0 1 1]]
[[1 1 1 0 0]
 [1 1 0 0 0]
 [0 1 0 0 0]
 [1 1 1 0 0]
 [0 1 0 0 0]]
</code></pre> <p>I was trying to get some idea of how fast my algorithm is when the cluster size gets larger. If I run the program and then rerun it, and do that many times, it always gives a good result. If I use a loop, it starts giving wrong results. Here is one possible output test scenario:</p> <pre><code>Number of recursions: 10
Length of cluster: 10
[[1 1 1 0 1]
 [1 1 0 1 1]
 [0 1 0 0 1]
 [1 1 1 0 0]
 [0 1 0 1 1]]
[[1 1 1 0 0]
 [1 1 0 0 0]
 [0 1 0 0 0]
 [1 1 1 0 0]
 [0 1 0 0 0]]

Number of recursions: 8
Length of cluster: 8
[[0 1 1 1 0]
 [1 1 1 0 0]
 [1 0 0 0 0]
 [1 1 1 0 1]
 [1 1 0 0 0]]
[[0 0 0 0 0]   - the first one is always good, this one already has an error
 [1 1 0 0 0]
 [1 0 0 0 0]
 [1 1 1 0 0]
 [1 1 0 0 0]]

Number of recursions: 1
Length of cluster: 1
[[1 1 1 1 1]
 [0 1 0 1 0]
 [0 1 0 0 0]
 [0 1 0 0 0]
 [0 1 1 0 1]]
[[0 0 0 0 0]   - till end
 [0 1 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]]

Number of recursions: 1
Length of cluster: 1
[[1 1 1 1 1]
 [0 1 1 0 0]
 [1 0 1 1 1]
 [1 1 0 1 0]
 [0 1 1 1 0]]
[[0 0 0 0 0]
 [0 1 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 0]]
... till end
</code></pre> <p>I will give the code for the loop (it's no problem giving you all the code, but it's too big, and the error is probably due to something I do inside the loop):</p> <pre><code>import numpy as np
from time import time

def test(N, p, testTime, length):
    assert N&gt;0
    x=1
    y=1
    a=PercolationGrid(N) #this is a class that creates a grid
    a.useFixedProbability(p) #the probability that given point will be 1
    a.grid[x,y]=1 #I put the starting point as 1 manually
    cluster=Cluster(a)
    t0=time()
    cluster.getCluster(x,y) #this is what I'm testing how fast is it
    t1=time()
    stats=cluster.getStats() #get the length of cluster and some other data
    testTime.append(t1-t0)
    testTime.sort()
    length.append(stats[1]) #[1] is the length stat that interests me
    length.sort() #both sorts are so I can use plot later
    print a.getGrid() #show whole grid
    clusterGrid=np.zeros(N*N, dtype='int8').reshape(N, N) #create zero grid where I'll "put" the cluster of interest
    c1=cluster.getClusterCoordinates() #this is recursive method (if it has any importance)
    for xy in c1:
        k=xy[0]
        m=xy[1]
        clusterGrid[k, m]=1
    print clusterGrid
    del a, cluster, clusterGrid

testTime=[]
length=[]
p=0.59
N=35
np.set_printoptions(threshold='nan') #so the output doesn't shrink

for i in range(10):
    test(N, p, testTime, length)
</code></pre> <p>I assume that I do something wrong with freeing memory or something (if it's not some trivial error in the loop I can't see)? I use python 2.7.3 on 64bit Linux.</p> <p>EDIT: I'm aware that people here should review specific problems rather than whole programs, but I can't find what's happening. The only suggestion I have is that maybe I have some static variables, but it seems to me that that is not the case. So, if someone has the good will and energy, you can browse through the code and maybe you'll see something. I started using classes not long ago, so be prepared for a lot of bad stuff.</p> <pre><code>import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import time

class ProbabilityGrid(object):
    """
    This class gives 2D quadratic array (a grid) which is filled with float
    values from 0-1, which in many cases represent probabilities
    """
    def __init__(self, size=2, dataType='float16'):
        """initialization of a grid with 0. values"""
        assert size&gt;1
        assert dataType=='float64' or dataType=='float32' or dataType=='float16'
        self.n=size
        self.dataType=dataType
        self.grid=np.zeros((size, size), dtype=dataType)

    def getGrid(self):
        """returns a 2D probability array"""
        return self.grid

    def getSize(self):
        """returns a size of a 2D array"""
        return self.size

    def fillRandom(self):
        """fills the grid with uniformly random values from 0 to 1"""
        n=self.n
        self.grid=np.random.rand(n, n)

    def fixedProbabilities(self, p):
        """fills the grid with fixed value from 0 to 1"""
        assert p&lt;1.0
        self.grid=p*np.ones((self.n, self.n))

class PercolationGrid(object):
    """
    percolation quadratic grid filled with 1 and 0, int8 which represent a state.
    Percolation grid is closly connected to probabilies grid. ProbabilityGrid gives
    the starting probabilities will the [i,j] spot be filled or not. All functions
    change the PercolationGrid.grid when ProbabilityGrid.grid changes, so in a way
    their values are connected
    """
    def __init__(self, size=2, dataType='int8'):
        """
        initialization of PercolationGrid, sets uniformly 0 and 1 to grid
        """
        assert size&gt;1
        assert dataType=='int64' or dataType=='int32' or dataType=='int8'
        self.n=size
        self.dataType=dataType
        self.grid=np.zeros((size, size), dtype=dataType)
        self.pGrid=ProbabilityGrid(self.n)
        self.pGrid.fillRandom()
        self.useProbabilityGrid()

    #def fillRandom(self, min=0, max=1, distribution='uniform'):
    #    n=self.n
    #    self.grid=np.random.random_integers(min, max, n*n).reshape(n, n)

    def getGrid(self):
        """returns a 2D percolation array"""
        return self.grid

    def useProbabilityGrid(self): #use probability grid to get Percolation grid of 0s and 1es
        """
        this method fills the PercolationGrid.grid according to
        probabilities from Probability.grid
        """
        comparisonGrid=np.random.rand(self.n, self.n)
        self.grid=np.array(np.floor(self.pGrid.grid-comparisonGrid)+1, dtype=self.dataType)
        # Here I used a trick. To simulate whether 1 will apear with probability p,
        # we can use uniform random generator which returns values from 0 to 1. If
        # the value&lt;p then we get 1, if value&gt;p it's 0.
        # But instead looping over each element, it's much faster to make same sized
        # grid of random, uniform values from 0 to 1, calculate the difference, add 1
        # and use floor function which round everything larger than 1 to 1, and lower
        # to 0. Then value-p+1 will give 0 if value&lt;p, 1 if value&gt;p. The result is
        # converted to data type of percolation array.

    def useFixedProbability(self, p):
        """
        this method fills the PercolationGrid according to fixed probabilities of
        being filled, for example, a large grid with parameter p set to 0.33 should,
        aproximatly have one third of places filed with ones and 2/3 with 0
        """
        self.pGrid.fixedProbabilities(p)
        self.useProbabilityGrid()

    def probabilityCheck(self):
        """
        this method checks the number of ones vs number of elements, good for
        checking if the filling of a grid was close to probability we had in mind.
        Of course, the accuracy is larger as grid size grows. For smaller grid sizes
        you can still check the probability by running the test multiple times.
        """
        sum=self.grid.sum()
        print float(sum)/float(self.n*self.n)
        #this works because values can only be 0 or 1, so the sum/size gives
        #the ratio of ones vs size

    def setGrid(self, grid):
        shape=grid.shape
        i,j=shape[0], shape[1]
        assert i&gt;1 and j&gt;1
        if i!=j:
            print ("The grid needs to be NxN shape, N&gt;1")
        self.grid=grid

    def setProbabilities(self, grid):
        shape=grid.shape
        i,j=shape[0], shape[1]
        assert i&gt;1 and j&gt;1
        if i!=j:
            print ("The grid needs to be NxN shape, N&gt;1")
        self.pGrid.grid=grid
        self.useProbabilityGrid()

    def showPercolations(self):
        fig1=plt.figure()
        fig2=plt.figure()
        ax1=fig1.add_subplot(111)
        ax2=fig2.add_subplot(111)
        myColors=[(1.0, 1.0, 1.0, 1.0), (1.0, 0.0, 0.0, 1.0)]
        mycmap=mpl.colors.ListedColormap(myColors)
        subplt1=ax1.matshow(self.pGrid.grid, cmap='jet')
        cbar1=fig1.colorbar(subplt1)
        subplt2=ax2.matshow(self.grid, cmap=mycmap)
        cbar2=fig2.colorbar(subplt2, ticks=[0.25,0.75])
        cbar2.ax.set_yticklabels(['None', 'Percolated'], rotation='vertical')

class Cluster(object):
    """This is a class of percolation clusters"""
    def __init__(self, array):
        self.grid=array.getGrid()
        self.N=len(self.grid[0,])
        self.cluster={}
        self.numOfSteps=0

    #next 4 functions return True if field next to given field is 1 or False if it's 0
    def moveLeft(self, i, j):
        moveLeft=False
        assert i&lt;self.N
        assert j&lt;self.N
        if j&gt;0 and self.grid[i, j-1]==1:
            moveLeft=True
        return moveLeft

    def moveRight(self, i, j):
        moveRight=False
        assert i&lt;self.N
        assert j&lt;self.N
        if j&lt;N-1 and self.grid[i, j+1]==1:
            moveRight=True
        return moveRight

    def moveDown(self, i, j):
        moveDown=False
        assert i&lt;self.N
        assert j&lt;self.N
        if i&lt;N-1 and self.grid[i+1, j]==1:
            moveDown=True
        return moveDown

    def moveUp(self, i, j):
        moveUp=False
        assert i&lt;self.N
        assert j&lt;self.N
        if i&gt;0 and self.grid[i-1, j]==1:
            moveUp=True
        return moveUp

    def listOfOnes(self):
        """nested list of connected ones in each row"""
        outlist=[]
        for i in xrange(self.N):
            outlist.append([])
            helplist=[]
            for j in xrange(self.N):
                if self.grid[i, j]==0:
                    if (j&gt;0 and self.grid[i, j-1]==0) or (j==0 and self.grid[i, j]==0):
                        continue # condition needed because of edges
                    outlist[i].append(helplist)
                    helplist=[]
                    continue
                helplist.append((i, j))
                if self.grid[i, j]==1 and j==self.N-1:
                    outlist[i].append(helplist)
        return outlist

    def getCluster(self, i=0, j=0, moveD=[1, 1, 1, 1]): #(left, right, up, down)
        #moveD short for moveDirections, 1 means that it tries to move it to that side, 0 so it doesn't try
        self.numOfSteps=self.numOfSteps+1
        if self.grid[i, j]==1:
            self.cluster[(i, j)]=True
        else:
            print "the starting coordinate is not in any cluster"
            return
        if moveD[0]==1:
            try:
                #if it comes to same point from different directions we'd get an infinite recursion, checking if it already been on that point prevents that
                self.cluster[(i, j-1)]
                moveD[0]=0
            except:
                if self.moveLeft(i, j)==False: #check if 0 or 1 is left to (i, j)
                    moveD[0]=0
                else:
                    self.getCluster(i, j-1, [1, 0, 1, 1]) #right is 0, because we came from left
        if moveD[1]==1:
            try:
                self.cluster[(i, j+1)]
                moveD[1]=0
            except:
                if self.moveRight(i, j)==False:
                    moveD[1]=0
                else:
                    self.getCluster(i, j+1, [0, 1, 1, 1])
        if moveD[2]==1:
            try:
                self.cluster[(i-1, j)]
                moveD[2]=0
            except:
                if self.moveUp(i, j)==False:
                    moveD[2]=0
                else:
                    self.getCluster(i-1, j, [1, 1, 1, 0])
        if moveD[3]==1:
            try:
                self.cluster[(i+1, j)]
                moveD[3]=0
            except:
                if self.moveDown(i, j)==False:
                    moveD[3]=0
                else:
                    self.getCluster(i+1, j, [1, 1, 0, 1])
        if moveD==(0, 0, 0, 0):
            return

    def getClusterCoordinates(self):
        return self.cluster

    def getStats(self):
        print "Number of recursions:", self.numOfSteps
        print "Length of cluster:", len(self.cluster)
        return (self.numOfSteps, len(self.cluster))
</code></pre>
<p>Your error is coming from the getCluster method. Default argument values are evaluated only once, when the function is defined, so by writing <code>moveD=[1, 1, 1, 1]</code> you are essentially creating one shared list that persists between calls. This is causing the information from the previous executions to carry over.</p> <p><a href="http://www.deadlybloodyserious.com/2008/05/default-argument-blunders/" rel="nofollow">Here is a link to a blog post that shows an example of this.</a></p> <p>Below is a working version of the getCluster method that both fixes the default argument problem and removes the extraneous moveD assignments that manifested the problematic behavior.</p> <pre><code>def getCluster(self, i=0, j=0, moveD=None): #(left, right, up, down)
    #moveD short for moveDirections, 1 means that it tries to move it to that side, 0 so it doesn't try
    if moveD == None:
        moveD = [1, 1, 1, 1]
    self.numOfSteps=self.numOfSteps+1
    if self.grid[i, j]==1:
        self.cluster[(i, j)]=True
    else:
        print "the starting coordinate is not in any cluster"
        return
    if moveD[0]==1:
        try:
            #if it comes to same point from different directions we'd get an infinite recursion, checking if it already been on that point prevents that
            self.cluster[(i, j-1)]
        except:
            if self.moveLeft(i, j)==True: #check if 0 or 1 is left to (i, j)
                self.getCluster(i, j-1, [1, 0, 1, 1]) #right is 0, because we came from left
    if moveD[1]==1:
        try:
            self.cluster[(i, j+1)]
        except:
            if self.moveRight(i, j)==True:
                self.getCluster(i, j+1, [0, 1, 1, 1])
    if moveD[2]==1:
        try:
            self.cluster[(i-1, j)]
        except:
            if self.moveUp(i, j)==True:
                self.getCluster(i-1, j, [1, 1, 1, 0])
    if moveD[3]==1:
        try:
            self.cluster[(i+1, j)]
        except:
            if self.moveDown(i, j)==True:
                self.getCluster(i+1, j, [1, 1, 0, 1])
</code></pre>
python|for-loop
1
1,905,383
4,990,035
Convert Point Database to Body Shape
<p>I have a database containing body scan results in point format. For example:</p> <pre><code>point1=(x,y,z)
point2=(x2,y2,z2)
...
</code></pre> <p>I want to convert these points to a body shape.<br> And I want to do some processing on these points, for example calculating the neck diameter and some related calculations.<br> Any suggestion? (module, tutorial etc...)</p>
<p>You need some basic textbook on computational geometry. See this question, for example: <a href="https://stackoverflow.com/questions/3308266/computational-geometry">https://stackoverflow.com/questions/3308266/computational-geometry</a></p>
python|3d
1
1,905,384
5,006,911
Problem creating CFloat64 ENVI files with GDAL 1.6.1
<p>I'm trying to write ENVI CFloat64 files with GDAL:</p> <pre><code>import numpy
from osgeo import gdal
from osgeo.gdalconst import GDT_CFloat64

a = numpy.zeros((1000, 1000), dtype='complex64')
driver = gdal.GetDriverByName("ENVI")
outfile = driver.Create("test.bin", 1000, 1000, 1, GDT_CFloat64)
outfile.GetRasterBand(1).WriteArray(a, 0, 0)
outfile = None
</code></pre> <p>but I can't write the array to the band in <code>outfile.GetRasterBand(1).WriteArray(a, 0, 0)</code> because <code>outfile</code> is <code>None</code>; however, the empty file does get created. Any ideas what I am doing wrong?</p> <p>EDIT: I should specify that I can read and write ENVI Float32 files, so the driver is there. Only CFloat64 that I can't write...</p>
<p>In a nutshell, when <code>driver.Create(...)</code> or <code>gdal.Open(...)</code>, etc return <code>None</code>, it's gdal's way of raising an <code>IOError</code> or indicating that the given driver name is invalid. (Or potentially indicating that another sort of error occurred, but those two seem the most likely.)</p> <p>(I'll skip the rant about how much I dislike gdal's python bindings...)</p> <p>You're not clearly doing anything wrong (The example creates a .bin file with all zeros and a properly formatted .hdr file, as it should, on my machine.).</p> <p>Given that it creates an empty file, you appear to have permission to write to the file, so it's not an IO problem.</p> <p>This means that either:</p> <ol> <li>Your version of gdal doesn't support ENVI files (e.g. <code>gdal.GetDriverByName("something random")</code> will return <code>None</code> as well.)</li> <li>Gdal is encountering some sort of internal error when creating a driver for an ENVI dataset. </li> </ol> <p>Check the output of <code>gdalinfo --formats</code>, and make sure that gdal is compiled with support for ENVI files (I think it should be by default, though). </p> <p>If not, check to see if you can write a geotiff (or any other format) with all zero values. If nothing is working, you need to re-install gdal.</p> <p>Hope that gets you pointed in the right direction!</p>
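<p>One practical tip to add (mine, not part of the original answer): the Python bindings can be switched to raising real exceptions instead of returning <code>None</code>, which surfaces the underlying error message directly:</p> <pre><code>from osgeo import gdal
gdal.UseExceptions()  # Create()/Open() now raise RuntimeError with GDAL's own message

driver = gdal.GetDriverByName("ENVI")
outfile = driver.Create("test.bin", 1000, 1000, 1, gdal.GDT_CFloat64)
</code></pre>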
python|image-processing|numpy|gdal
1
1,905,385
62,741,776
Error in function: the output should be 30 but it isn't; where is the problem?
<p>There is an error in my function; the output should be 30, but it isn't. Where is the problem?</p> <pre><code>def sum(low, high):
    result
    for number in xrange(low, high):
        result = number
    return result

sum(4,8)
</code></pre>
<p>Update: changed to xrange, tested it on Python 2.7 and it works.</p> <hr /> <p>What is your Python version? Range doesn't include the last digit; that's why I increased it by 1.</p> <pre><code>def sum(low, high):
    result = 0
    for number in xrange(low, high+1):
        result += number
    return result

print(sum(4,8))  # 30
</code></pre>
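<p>Two side notes (my additions): naming the function <code>sum</code> shadows Python's built-in of the same name, and that built-in together with <code>xrange</code> can compute the total directly, so consider renaming your function:</p> <pre><code>total = sum(xrange(4, 8 + 1))  # 30, using the built-in sum
</code></pre>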
python
3
1,905,386
61,744,734
Validation Loss Increases every iteration
<p>Recently I have been trying to do multi-class classification. My datasets consist of 17 image categories. Previously I was using 3 conv layers and 2 hidden layers. It resulted in my model overfitting, with huge validation loss around 11.0++, and my validation accuracy was very low. So I decided to decrease the conv layers by 1 and the hidden layers by 1. I also removed dropout, and it still has the same problem with the validation still overfitting, even though my training accuracy and loss are getting better.</p> <p>Here is my code for the prepared datasets:</p> <pre><code>import cv2
import numpy as np
import os
import pickle
import random

CATEGORIES = ["apple_pie", "baklava", "caesar_salad", "donuts", "fried_calamari",
              "grilled_salmon", "hamburger", "ice_cream", "lasagna",
              "macaroni_and_cheese", "nachos", "omelette", "pizza", "risotto",
              "steak", "tiramisu", "waffles"]
DATALOC = "D:/Foods/Datasets"
IMAGE_SIZE = 50
data_training = []

def create_data_training():
    for category in CATEGORIES:
        path = os.path.join(DATALOC, category)
        class_num = CATEGORIES.index(category)
        for image in os.listdir(path):
            try:
                image_array = cv2.imread(os.path.join(path, image), cv2.IMREAD_GRAYSCALE)
                new_image_array = cv2.resize(image_array, (IMAGE_SIZE, IMAGE_SIZE))
                data_training.append([new_image_array, class_num])
            except Exception as exc:
                pass

create_data_training()
random.shuffle(data_training)

X = []
y = []
for features, label in data_training:
    X.append(features)
    y.append(label)

X = np.array(X).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)
y = np.array(y)

pickle_out = open("X.pickle", "wb")
pickle.dump(X, pickle_out)
pickle_out.close()

pickle_out = open("y.pickle", "wb")
pickle.dump(y, pickle_out)
pickle_out.close()

pickle_in = open("X.pickle", "rb")
X = pickle.load(pickle_in)
</code></pre> <p>Here is the code of my model:</p> <pre><code>import pickle
import tensorflow as tf
import time
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Activation, Conv2D, Dense, Dropout, Flatten, MaxPooling2D

NAME = "Foods-Model-{}".format(int(time.time()))
tensorboard = TensorBoard(log_dir='logs\{}'.format(NAME))

X = pickle.load(open("X.pickle", "rb"))
y = pickle.load(open("y.pickle", "rb"))

X = X/255.0

model = Sequential()

model.add(Conv2D(32, (3,3), input_shape=X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())

model.add(Dense(128))
model.add(Activation("relu"))

model.add(Dense(17))
model.add(Activation('softmax'))

model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam",
              metrics=['accuracy'])

model.fit(X, y, batch_size=16, epochs=20, validation_split=0.1, callbacks=[tensorboard])
</code></pre> <p>The result of the trained model:</p> <pre><code>Train on 7650 samples, validate on 850 samples
Epoch 1/20
7650/7650 [==============================] - 242s 32ms/sample - loss: 2.7826 - accuracy: 0.1024 - val_loss: 2.7018 - val_accuracy: 0.1329
Epoch 2/20
7650/7650 [==============================] - 241s 31ms/sample - loss: 2.5673 - accuracy: 0.1876 - val_loss: 2.5597 - val_accuracy: 0.2059
Epoch 3/20
7650/7650 [==============================] - 234s 31ms/sample - loss: 2.3529 - accuracy: 0.2617 - val_loss: 2.5329 - val_accuracy: 0.2153
Epoch 4/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 2.0707 - accuracy: 0.3510 - val_loss: 2.6628 - val_accuracy: 0.2059
Epoch 5/20
7650/7650 [==============================] - 231s 30ms/sample - loss: 1.6960 - accuracy: 0.4753 - val_loss: 2.8143 - val_accuracy: 0.2047
Epoch 6/20
7650/7650 [==============================] - 230s 30ms/sample - loss: 1.2336 - accuracy: 0.6247 - val_loss: 3.3130 - val_accuracy: 0.1929
Epoch 7/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 0.7738 - accuracy: 0.7715 - val_loss: 3.9758 - val_accuracy: 0.1776
Epoch 8/20
7650/7650 [==============================] - 231s 30ms/sample - loss: 0.4271 - accuracy: 0.8827 - val_loss: 4.7325 - val_accuracy: 0.1882
Epoch 9/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 0.2080 - accuracy: 0.9519 - val_loss: 5.7198 - val_accuracy: 0.1918
Epoch 10/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 0.1402 - accuracy: 0.9668 - val_loss: 6.0608 - val_accuracy: 0.1835
Epoch 11/20
7650/7650 [==============================] - 236s 31ms/sample - loss: 0.0724 - accuracy: 0.9872 - val_loss: 6.7468 - val_accuracy: 0.1753
Epoch 12/20
7650/7650 [==============================] - 232s 30ms/sample - loss: 0.0549 - accuracy: 0.9895 - val_loss: 7.4844 - val_accuracy: 0.1718
Epoch 13/20
7650/7650 [==============================] - 229s 30ms/sample - loss: 0.1541 - accuracy: 0.9591 - val_loss: 7.3335 - val_accuracy: 0.1553
Epoch 14/20
7650/7650 [==============================] - 231s 30ms/sample - loss: 0.0477 - accuracy: 0.9905 - val_loss: 7.8453 - val_accuracy: 0.1729
Epoch 15/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 0.0346 - accuracy: 0.9908 - val_loss: 8.1847 - val_accuracy: 0.1753
Epoch 16/20
7650/7650 [==============================] - 231s 30ms/sample - loss: 0.0657 - accuracy: 0.9833 - val_loss: 7.8582 - val_accuracy: 0.1624
Epoch 17/20
7650/7650 [==============================] - 233s 30ms/sample - loss: 0.0555 - accuracy: 0.9830 - val_loss: 8.2578 - val_accuracy: 0.1553
Epoch 18/20
7650/7650 [==============================] - 230s 30ms/sample - loss: 0.0423 - accuracy: 0.9892 - val_loss: 8.6970 - val_accuracy: 0.1694
Epoch 19/20
7650/7650 [==============================] - 236s 31ms/sample - loss: 0.0291 - accuracy: 0.9927 - val_loss: 8.5275 - val_accuracy: 0.1882
Epoch 20/20
7650/7650 [==============================] - 234s 31ms/sample - loss: 0.0443 - accuracy: 0.9873 - val_loss: 9.2703 - val_accuracy: 0.1812
</code></pre> <p>Thank you for your time. Any help and suggestions will be really appreciated.</p>
<p>Your training log suggests that the model starts over-fitting early.</p> <ol> <li>Get rid of the hidden dense layer completely and use global pooling (note the extra import for <code>GlobalAveragePooling2D</code>):</li> </ol> <pre><code>from tensorflow.keras.layers import GlobalAveragePooling2D model = Sequential() model.add(Conv2D(32,(3,3), input_shape = X.shape[1:])) model.add(Activation("relu")) model.add(Conv2D(64,(3,3))) model.add(Activation("relu")) model.add(Conv2D(128,(3,3))) model.add(Activation("relu")) model.add(GlobalAveragePooling2D()) model.add(Dense(17)) model.add(Activation('softmax')) model.summary() </code></pre> <ol start="2"> <li>Use <code>SpatialDropout2D</code> after conv layers.</li> </ol> <p>ref: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/SpatialDropout2D" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/SpatialDropout2D</a></p> <ol start="3"> <li><p>Use early stopping to get a balanced model; a sketch follows below.</p></li> <li><p>With one-hot encoded labels, <code>categorical_crossentropy</code> would be the matching loss; with the integer labels you have now, <code>sparse_categorical_crossentropy</code> is the equivalent correct choice.</p></li> </ol>
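<p>For point 3, a minimal sketch of early stopping in Keras, reusing the <code>fit</code> call from the question; the <code>patience</code> value is an assumption you should tune:</p> <pre><code>from tensorflow.keras.callbacks import EarlyStopping

# stop when val_loss has not improved for 3 epochs and roll back to the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

model.fit(X, y, batch_size=16, epochs=20, validation_split=0.1,
          callbacks=[tensorboard, early_stop])
</code></pre>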
tensorflow|image-processing|neural-network|classification|conv-neural-network
0
1,905,387
10,973,427
A more "pythonic" approach to "check for None and deal with it"
<p>I have a <code>list</code> of <code>dict</code> with keys <code>['name','content','summary',...]</code>. All the values are strings, but some values are <code>None</code>. I need to remove all the newlines in <code>content</code>, <code>summary</code> and some other keys. So, I do this:</p> <pre><code>... ... for item in item_list: name = item['name'] content = item['content'] if content is not None: content = content.replace('\n','') summary = item['summary'] if summary is not None: summary = summary.replace('\n','') ... ... ... ... </code></pre> <p>I somewhat feel that the <code>if x is not None: x = x.replace('\n','')</code> idiom is not very clean. Is there a more "pythonic" or better way to do it?</p> <p>Thanks.</p>
<p>The code feels unwieldy to you, and part of the reason is that you are repeating yourself. This is better:</p> <pre><code>def remove_newlines(text):
    if text is not None:
        return text.replace('\n', '')

for item in item_list:
    name = item['name']
    content = remove_newlines(item['content'])
    summary = remove_newlines(item['summary'])
</code></pre>
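<p>If you prefer a one-liner, you can also normalise <code>None</code> to an empty string. Note this sketch changes the result for <code>None</code> inputs from <code>None</code> to <code>''</code>, which may or may not suit you:</p> <pre><code>content = (item['content'] or '').replace('\n', '')
summary = (item['summary'] or '').replace('\n', '')
</code></pre>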
string|coding-style|python
7
1,905,388
70,553,370
how to clean and rearrange a dataframe with pairs of date and price columns into a df with common date index?
<p>I have a dataframe of price data that looks like the following: (with more than 10,000 columns)</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>Unamed: 0</th> <th>01973JAC3 corp</th> <th>Unamed: 2</th> <th>019754AA8 corp</th> <th>Unamed: 4</th> <th>01265RTJ7 corp</th> <th>Unamed: 6</th> <th>01988PAD0 corp</th> <th>Unamed: 8</th> <th>019736AB3 corp</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>2004-04-13</td> <td>101.1</td> <td>2008-06-16</td> <td>99.1</td> <td>2010-06-14</td> <td>110.0</td> <td>2008-06-18</td> <td>102.1</td> <td>NaT</td> <td>NaN</td> </tr> <tr> <td>2</td> <td>2004-04-14</td> <td>101.2</td> <td>2008-06-17</td> <td>100.4</td> <td>2010-07-05</td> <td>110.3</td> <td>2008-06-19</td> <td>102.6</td> <td>NaT</td> <td>NaN</td> </tr> <tr> <td>3</td> <td>2004-04-15</td> <td>101.6</td> <td>2008-06-18</td> <td>100.4</td> <td>2010-07-12</td> <td>109.6</td> <td>2008-06-20</td> <td>102.5</td> <td>NaT</td> <td>NaN</td> </tr> <tr> <td>4</td> <td>2004-04-16</td> <td>102.8</td> <td>2008-06-19</td> <td>100.9</td> <td>2010-07-19</td> <td>110.1</td> <td>2008-06-21</td> <td>102.6</td> <td>NaT</td> <td>NaN</td> </tr> <tr> <td>5</td> <td>2004-04-19</td> <td>103.0</td> <td>2008-06-20</td> <td>101.3</td> <td>2010-08-16</td> <td>110.3</td> <td>2008-06-22</td> <td>102.8</td> <td>NaT</td> <td>NaN</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>NaT</td> <td>NaN</td> </tr> <tr> <td>3431</td> <td>NaT</td> <td>NaN</td> <td>2021-12-30</td> <td>119.2</td> <td>NaT</td> <td>NaN</td> <td>NaT</td> <td>NaN</td> <td>NaT</td> <td>NaN</td> </tr> <tr> <td>3432</td> <td>NaT</td> <td>NaN</td> <td>2021-12-31</td> <td>119.4</td> <td>NaT</td> <td>NaN</td> <td>NaT</td> <td>NaN</td> <td>NaT</td> <td>NaN</td> </tr> </tbody> </table> </div> <p>(Those are 9-digit CUSIPs in the header. So every two columns represent date and closed price for a security.) I would like to</p> <ol> <li>find and get rid of empty pairs of date and price like &quot;Unamed: 8&quot; and&quot;019736AB3 corp&quot;</li> <li>then rearrange the dateframe to a panel of monthly close price as following:</li> </ol> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>01973JAC3</th> <th>019754AA8</th> <th>01265RTJ7</th> <th>01988PAD0</th> </tr> </thead> <tbody> <tr> <td>2004-04-30</td> <td>102.1</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>2004-05-31</td> <td>101.2</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>2021-12-30</td> <td>NaN</td> <td>119.2</td> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>2021-12-31</td> <td>NaN</td> <td>119.4</td> <td>NaN</td> <td>NaN</td> </tr> </tbody> </table> </div> <p>Edit: I wanna clarify my question.</p> <p>So my dataframe has more than 10,000 columns, which makes it impossible to just drop by column names or change their names one by one. The pairs of date and price start and end at different time and are of different length (, and of different frequency). I m looking for an efficient way to arrange therm into a less messy form. Thanks.</p> <p>Here is a sample of 30 columns. 
<a href="https://github.com/txd2x/datastore" rel="nofollow noreferrer">https://github.com/txd2x/datastore</a> file name: sample-question2022-01.xlsx</p> <p>I figured out: stacking and then reshaping.Thx for the help.</p> <pre><code>for i in np.arange(len(price.columns)/2): temp =DataFrame(columns = ['Date', 'ClosedPrice','CUSIP']) temp['Date'] = price.iloc[ 0:np.shape(price)[0]-1, int(2*i)] temp['ClosedPrice'] = price.iloc[0:np.shape(price)[0]-1, int(2*i+1)] temp['CUSIP'] =price.columns[int(i*2+1)][:9] # df = df.append(temp) #use for loop to stack all the column pairs df = df.dropna(axis=0, how = 'any') # drop nan rows df = df.pivot(index='Date', columns = 'CUSIP', values = 'ClosedPrice') #reshape dataframe to have Date as index and CUSIP and column headers df_monthly=df.resample('M').last() #finding last price of the month </code></pre>
<p>If you want to get rid of unneeded columns, use the following:</p> <p><code>df.drop(&quot;name_of_column&quot;, axis=1, inplace=True)</code></p> <p>If you want to drop empty rows, use:</p> <p><code>df.drop(df.index[row_number], inplace=True)</code></p> <p>If you want to rearrange the data using the timestamp/date, you need to convert it to a datetime object and then make it the index:</p> <pre><code>import datetime df.Date=pd.to_datetime(df.Date) df = df.set_index('Date') </code></pre> <p>And you probably want to change the column names before doing any of the above: <code>df.rename(columns={'first_column': 'first', 'second_column': 'second'}, inplace = True) </code></p> <p>Update 1: if you want to keep just some columns of those 10,000, let's say for example 10 or 7 columns, then use <code>df = df[[&quot;first_column&quot;,&quot;second_column&quot;, ....]]</code></p> <p>If you want to get rid of all empty columns, use <code>df.dropna(axis=1, how = 'all')</code>. The &quot;how&quot; keyword has two values: &quot;all&quot; to drop the whole row or column if it is full of NaN, and &quot;any&quot; to drop the whole row or column if it has at least one NaN.</p> <p>Update 2: now if you have got a lot of date columns and you just want to keep one of them, supposing that you have chosen a date column that has no NaN values, use the following code:</p> <pre><code>columns=df.columns.tolist() for column in columns: try: if(df[column].dtypes=='object'): df[column]=pd.to_datetime(df[column]) if(df[column].dtypes=='datetime64[ns]')&amp;(column!='Date'): df.drop(column,axis=1,inplace=True) except ValueError: pass </code></pre> <p>Rearrange the dataframe using months:</p> <pre><code>import datetime df.Date=pd.to_datetime(df.Date) df['Month']=df.Date.dt.month df['Year']=df.Date.dt.year df = df.set_index('Month') df.groupby([&quot;Year&quot;,&quot;Month&quot;]).mean() </code></pre> <p>Update 3: to combine all date columns while preserving the data, use the following code:</p> <pre><code>import pandas as pd import numpy as np df=pd.read_excel('sample_question2022-01.xlsx') columns=df.columns.tolist() for column in columns: if (df[column].isnull().sum()&gt;2300): df.drop(column,axis=1,inplace=True) columns=df.columns.tolist() import itertools count_date=itertools.count(1) count_price=itertools.count(1) for column in columns: if(df[column].dtypes=='datetime64[ns]'): df.rename(columns={column:f'date{next(count_date)}'},inplace=True) else: df.rename(columns={column:f'Price{next(count_price)}'},inplace=True) columns=df.columns.tolist() merged=df[[columns[0],columns[1]]].set_index('date1') k=2 for i in range(2,len(columns)-1,2): merged=pd.merge(merged,df[[columns[i],columns[i+1]]].set_index(f'date{k}'),how='outer',left_index=True,right_index=True) k+=1 </code></pre> <p>The only problem left is that it will throw a MemoryError:</p> <blockquote> <p>MemoryError: Unable to allocate 27.4 GiB for an array with shape (3677415706,) and data type int64</p> </blockquote>
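<p>Putting the pieces together, a compact sketch of the whole reshape. It assumes, as in the question, that <code>price</code> holds alternating date/price column pairs, that each price column header starts with the 9-digit CUSIP, and that the date columns are already parsed as datetimes:</p> <pre><code>import pandas as pd

pairs = []
for i in range(0, price.shape[1], 2):
    pair = price.iloc[:, [i, i + 1]].copy()
    pair.columns = ['Date', 'ClosedPrice']
    pair['CUSIP'] = price.columns[i + 1][:9]
    pairs.append(pair)

long_df = pd.concat(pairs).dropna()                 # stack all pairs, drop empty rows
panel = long_df.pivot(index='Date', columns='CUSIP', values='ClosedPrice')
monthly = panel.resample('M').last()                # last price of each month
</code></pre>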
python|pandas|dataframe
0
1,905,389
56,006,425
What is the best way to run python scripts in AWS?
<p>I have three Python scripts, <code>1.py</code>, <code>2.py</code>, and <code>3.py</code>, each having 3 runtime arguments to be passed.</p> <p>All three Python programs are independent of each other. All 3 may run sequentially in a batch, or it may happen that only two of them run, depending on some configuration.</p> <p>Manual approach:</p> <ol> <li>Create an EC2 instance, run the Python script, shut it down.</li> <li>Repeat the above step for the next Python script.</li> </ol> <p>The automated way would be to trigger the above process through Lambda and replicate it using some combination of services.</p> <p>What is the best way to implement this in AWS? </p>
<p>AWS Batch has a DAG scheduler; technically you could define job1, job2, job3 and tell AWS Batch to run them in that order. But I wouldn't recommend that route.</p> <p>For the above to work you would basically need to create 3 Docker images (image1, image2, image3) and then put these in ECR (Docker Hub can also work if not using the Fargate launch type).</p> <p>I don't think that makes sense unless each job is bulky and has its own runtime that's different from the others.</p> <p>Instead I would write a Python program that calls 1.py, 2.py and 3.py, put that in a Docker image, and run it as an AWS Batch job or just an ECS Fargate task.</p> <p>main.py:</p> <pre><code>import subprocess

exit_code = subprocess.call("python3 /path/to/1.py", shell=True)
# decide if you want to call 2.py and so on ...
# 1.py will see the same stdout, stderr as main.py
# with Batch and Fargate you can retrieve these from CloudWatch logs ...
</code></pre> <p>Now you have a Docker image that just needs to run somewhere. Fargate is fast to start up, a bit pricey, and has a 10GB max limit on temporary storage. AWS Batch is slow to start on a cold start, but can use spot instances in your account. You might need to make a custom AMI for AWS Batch to work, e.g. if you want more storage.</p> <p>Note: for anyone who wants to scream at <code>shell=True</code>: both main.py and 1.py came from the same codebase. It's a batch job, not an internet-facing API that takes input from a user request.</p>
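<p>Since each script takes 3 runtime arguments, the list form of <code>subprocess</code> also works and avoids <code>shell=True</code> entirely. A sketch, where the argument values are made up; in Batch/Fargate they could come from the container command or environment instead:</p> <pre><code>import subprocess

exit_code = subprocess.call(["python3", "/path/to/1.py", "arg_a", "arg_b", "arg_c"])
if exit_code == 0:
    # run the next script only if the previous one succeeded
    subprocess.call(["python3", "/path/to/2.py", "arg_d", "arg_e", "arg_f"])
</code></pre>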
python|amazon-web-services|aws-lambda|aws-step-functions|aws-batch
6
1,905,390
56,684,991
Check if string in list present in HTML using BeautifulSoup
<p>I'm using the following code to find text in my parsed HTML:</p> <pre><code>searched_word = "News" results = parsedHTML.body.find_all(string=re.compile('.*{0}.*'.format(searched_word)), recursive=True) if results: doStuff() </code></pre> <p>This works, but I'd like to use a list instead, e.g.:</p> <pre><code>searched_words = ["News", "Team"] </code></pre> <p>And if my parsed HTML has any of these string elements in its contents, it should return True along with which element was found in the HTML. I don't know how to accomplish this.</p>
<p>This might help.</p> <pre><code>searched_words = ["News", "Team"] pattern = re.compile("|".join(searched_words)) results = parsedHTML.body.find_all(string=pattern, recursive=True) if results: doStuff() </code></pre>
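<p>If any of the searched words could contain regex metacharacters, escaping them first is safer, and you can also recover which word was found in each matching string. A small sketch building on the same <code>parsedHTML</code> object from the question:</p> <pre><code>import re

searched_words = ["News", "Team"]
pattern = re.compile("|".join(re.escape(word) for word in searched_words))

results = parsedHTML.body.find_all(string=pattern, recursive=True)
for text in results:
    print(pattern.search(text).group(0))  # the word that matched
</code></pre>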
python|beautifulsoup
3
1,905,391
69,708,611
TypeError: can only concatenate str (not "NoneType") to str in voting bot
<p>I'm really a hobbyist Python programmer, so I don't know why it behaves like this. I think it should be working fine. Can someone explain to me why this is not working properly? I'm really lost. The script takes options as arguments: <code>-v</code> is the number of votes, <code>-s</code> is the survey URL id, and <code>-t</code> is the target box that you want to check, identified by its id from the oid. It is based very much on code that I found on the internet. To me it all looks good, I guess. Could the problem be with the option type strings? This is the exact output that the code gives me in the terminal:</p> <pre><code>[#] Flushing usedProxies.txt file... Traceback (most recent call last): File &quot;C:\Users\thefi\Desktop\Strawpoll-Voting-Bot-master\Main.py&quot;, line 76, in __init__ renewlist=self.renewProxyList(); File &quot;C:\Users\thefi\Desktop\Strawpoll-Voting-Bot-master\Main.py&quot;, line 257, in renewProxyList content = urllib.request.urlopen(url).read() File &quot;C:\Users\thefi\AppData\Local\Programs\Python\Python310\lib\urllib\request.py&quot;, line 216, in urlopen return opener.open(url, data, timeout) File &quot;C:\Users\thefi\AppData\Local\Programs\Python\Python310\lib\urllib\request.py&quot;, line 525, in open response = meth(req, response) File &quot;C:\Users\thefi\AppData\Local\Programs\Python\Python310\lib\urllib\request.py&quot;, line 634, in http_response response = self.parent.error( File &quot;C:\Users\thefi\AppData\Local\Programs\Python\Python310\lib\urllib\request.py&quot;, line 563, in error return self._call_chain(*args) File &quot;C:\Users\thefi\AppData\Local\Programs\Python\Python310\lib\urllib\request.py&quot;, line 496, in _call_chain result = func(*args) File &quot;C:\Users\thefi\AppData\Local\Programs\Python\Python310\lib\urllib\request.py&quot;, line 643, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\thefi\Desktop\Strawpoll-Voting-Bot-master\Main.py&quot;, line 266, in &lt;module&gt; Strawpoll_Multivote() File &quot;C:\Users\thefi\Desktop\Strawpoll-Voting-Bot-master\Main.py&quot;, line 151, in __init__ print(&quot;[!] &quot; + ex.strerror + &quot;: &quot; + ex.filename) </code></pre> <pre><code>try: from optparse import OptionParser import sys import os import re from bs4 import BeautifulSoup import urllib.request import requests from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC except ImportError as msg: print(&quot;[!] 
Library not installed: &quot; + str(msg)) exit() class Strawpoll_Multivote: # Initialization maxVotes = 1 voteFor = &quot;&quot; surveyId = &quot;&quot; domainEnd = &quot;com&quot; proxyListFile = &quot;proxies.txt&quot; saveStateFile = &quot;usedProxies.txt&quot; proxyTimeout = 10 # in seconds currentProxyPointer = 0 successfulVotes = 0 def __init__(self): try: # Parse Arguments parser = OptionParser() parser.add_option(&quot;-v&quot;, &quot;--votes&quot;, action=&quot;store&quot;, type=&quot;string&quot;, dest=&quot;votes&quot;,help=&quot;number of times to vote&quot;) parser.add_option(&quot;-s&quot;, &quot;--survey&quot;, action=&quot;store&quot;, type=&quot;string&quot;, dest=&quot;survey&quot;,help=&quot;url id of the survey&quot;) parser.add_option(&quot;-t&quot;, &quot;--target&quot;, action=&quot;store&quot;, type=&quot;string&quot;, dest=&quot;target&quot;, help=&quot;checkbox id to vote for&quot;) parser.add_option(&quot;-d&quot;, &quot;--domain&quot;, action=&quot;store&quot;, type=&quot;string&quot;, dest=&quot;domain&quot;, help=&quot;domain name end&quot;) parser.add_option(&quot;-f&quot;, &quot;--flush&quot;, action=&quot;store_true&quot;, dest=&quot;flush&quot;,help=&quot;Flushes the used proxy list&quot;) parser.add_option(&quot;-r&quot;, &quot;--renew&quot;, action=&quot;store_true&quot;, dest=&quot;renew&quot;,help=&quot;Renews the proxy list&quot;) (options, args) = parser.parse_args() if len(sys.argv) &gt; 2: if options.votes is None: print(&quot;[!] Times to vote not defined with: -v &quot;) exit(1) if options.survey is None: print(&quot;[!] Url id of the survey defined with: -s&quot;) exit(1) if options.target is None: print(&quot;[!] Target checkbox to vote for is not defined with: -t&quot;) exit(1) try: self.maxVotes = int(options.votes) except ValueError: print(&quot;[!] You incorrectly defined a non integer for -v&quot;) # Save arguments into global variable self.voteFor = options.target self.surveyId = options.survey # Flush usedProxies.txt if options.flush == True: print(&quot;[#] Flushing usedProxies.txt file...&quot;) os.remove(self.saveStateFile) open(self.saveStateFile, 'w+') # Alter domain if not None. if options.domain is not None: self.domainEnd = options.domain if options.renew == True: renewlist=self.renewProxyList(); os.remove(self.proxyListFile) with open(self.proxyListFile, &quot;a&quot;) as myfile: for i in renewlist: myfile.write(i) # Print help else: print(&quot;[!] 
Not enough arguments given&quot;) print() parser.print_help() exit() # Read proxy list file alreadyUsedProxy = False proxyList = open(self.proxyListFile).read().split('\n') proxyList2 = None # Check if saveState.xml exists and read file if os.path.isfile(self.saveStateFile): proxyList2 = open(self.saveStateFile).read().split('\n') # Print remaining proxies if proxyList2 is not None: print(&quot;[#] Number of proxies remaining in old list: &quot; + str(len(proxyList) - len(proxyList2))) print() else: print(&quot;[#] Number of proxies in new list: &quot; + str(len(proxyList))) print() # Go through proxy list for proxy in proxyList: # Check if max votes has been reached if self.successfulVotes &gt;= self.maxVotes: break # Increase number of used proxy integer self.currentProxyPointer += 1 # Read in saveState.xml if this proxy has already been used if proxyList2 is not None: for proxy2 in proxyList2: if proxy == proxy2: alreadyUsedProxy = True break # If it has been used print message and continue to next proxy if alreadyUsedProxy == True: print(&quot;[&quot;+ str(self.currentProxyPointer) +&quot;] Skipping proxy: &quot; + proxy) alreadyUsedProxy = False continue # Print current proxy information print(&quot;[&quot;+ str(self.currentProxyPointer) +&quot;] New proxy: &quot; + proxy) print(&quot;[#] Connecting... &quot;) # Connect to strawpoll and send vote # self.sendToWeb('http://' + proxy,'https://' + proxy) self.webdriverManipulation(proxy); # Write used proxy into saveState.xml self.writeUsedProxy(proxy) print() # Check if max votes has been reached if self.successfulVotes &gt;= self.maxVotes: print(&quot;[*] Finished voting: &quot; + str(self.successfulVotes) + ' times.') else: print(&quot;[*] Finished every proxy in the list.&quot;) exit() except IOError as ex: print(&quot;[!] &quot; + ex.strerror + &quot;: &quot; + ex.filename) except KeyboardInterrupt as ex: print(&quot;[#] Ending procedure...&quot;) print(&quot;[#] Programm aborted&quot;) exit() def writeUsedProxy(self, proxyIp): if os.path.isfile(self.saveStateFile): with open(self.saveStateFile, &quot;a&quot;) as myfile: myfile.write(proxyIp+&quot;\n&quot;) def getIp(self, httpProxy): proxyDictionary = {&quot;https&quot;: httpProxy} request = requests.get(&quot;https://api.ipify.org/&quot;, proxies=proxyDictionary) requestString = str(request.text) return requestString # Using selenium and chromedriver to run the voting process on the background def webdriverManipulation(self,Proxy): try: WINDOW_SIZE = &quot;1920,1080&quot; chrome_options = webdriver.ChromeOptions() chrome_options.add_argument(&quot;--headless&quot;) chrome_options.add_argument(&quot;--window-size=%s&quot; % WINDOW_SIZE) chrome_options.add_argument('--proxy-server=%s' % Proxy) prefs = {&quot;profile.managed_default_content_settings.images&quot;: 2} chrome_options.add_experimental_option(&quot;prefs&quot;, prefs) chrome = webdriver.Chrome(options=chrome_options) if self.domainEnd == &quot;me&quot;: chrome.get('https://www.strawpoll.' + self.domainEnd + '/' + self.surveyId) element = chrome.find_element_by_xpath('//*[@value=&quot;'+ self.voteFor +'&quot;]') webdriver.ActionChains(chrome).move_to_element(element).click(element).perform() submit_button = chrome.find_elements_by_xpath('//*[@type=&quot;submit&quot;]')[0] submit_button.click() else: chrome.get('https://strawpoll.' 
+ self.domainEnd + '/' + self.surveyId) element = chrome.find_element_by_xpath('//*[@name=&quot;'+ self.voteFor +'&quot;]') webdriver.ActionChains(chrome).move_to_element(element).click(element).perform() submit_button = chrome.find_elements_by_xpath('//*[@id=&quot;votebutton&quot;]')[0] submit_button.click() chrome.quit() print(&quot;[*] Successfully voted.&quot;) self.successfulVotes += 1 return True except Exception as exception: print(&quot;[!] Voting failed for the specific proxy.&quot;) chrome.quit() return False # Posting through requests (previous version) def sendToWeb(self,httpProxy, httpsProxy): try: headers = \ { 'Host': 'strawpoll.'+ self.domainEnd, 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0', 'Accept': '*/*', 'Accept-Language': 'en - us, en; q = 0.5', 'Accept-Encoding': 'gzip, deflate', 'Accept-Charset': 'ISO - 8859 - 1, utf - 8; = 0.7, *;q = 0.7', 'Referer': 'https://strawpoll.'+ self.domainEnd +'/' + self.surveyId, 'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8', 'X-Requested-With': 'XMLHttpRequest', 'Content-Length': '29', 'Cookie': 'lang=en', 'DNT': '1', 'Connection': 'close' } payload = {'pid': self.surveyId, 'oids': self.voteFor} proxyDictionary = {&quot;http&quot;: httpProxy,&quot;https&quot;: httpsProxy} # Connect to server r = requests.post('https://strawpoll.' + self.domainEnd + '/vote', data=payload, headers=headers) json = r.json() # Check if the vote was successful if(bool(json['success'])): print(&quot;[*] Successfully voted.&quot;) self.successfulVotes += 1 return True else: print(&quot;[!] Voting failed.&quot;) return False except requests.exceptions.Timeout: print(&quot;[!] Timeout&quot;) return False except requests.exceptions.ConnectionError: print(&quot;[!] Couldn't connect to proxy&quot;) return False except Exception as exception: print(str(exception)) return False # Renew Proxy List def renewProxyList(self): final_list=[] url = &quot;http://proxy-daily.com/&quot; content = urllib.request.urlopen(url).read() soup = BeautifulSoup(content,features=&quot;html5lib&quot;) center = soup.find_all(&quot;center&quot;)[0] div = center.findChildren(&quot;div&quot;, recursive=False)[0].getText(); children= div.splitlines() for child in children: final_list.append(child+&quot;\n&quot;) return (final_list) # Execute strawpoll_multivote Strawpoll_Multivote() </code></pre>
<p>The error is descriptive: one of the parts you are attempting to concatenate in your log message is a None object, and the &quot;+&quot; operator is not defined for it.</p> <p>However, concatenating strings with &quot;+&quot; in Python is usually just done by people learning Python who come from other languages. Other ways of interpolating data into strings are far easier to type and read.</p> <p>From Python 3.6+ the recommended way is interpolating with f-strings: strings with an f-prefix before the opening quote can resolve Python expressions placed inside braces within them.<br /> So, just replace your failing line with:</p> <pre><code> print(f&quot;[!] { ex.strerror }: { ex.filename}&quot;) </code></pre> <p>Unlike the <code>+</code> operator, string interpolation via f-strings, the <code>%</code> operator or the <code>.format</code> method will automatically cast its operands to string, and <code>None</code> will not error.</p>
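<p>A quick illustration of the difference, using only built-ins:</p> <pre><code>filename = None
# print("file: " + filename)          # TypeError: can only concatenate str (not "NoneType") to str
print(f"file: {filename}")            # file: None
print("file: %s" % filename)          # file: None
print("file: {}".format(filename))    # file: None
</code></pre>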
python-3.x
0
1,905,392
69,694,258
How to compute a map for ALL houses on the map to show the closest store
<p>The question asked for a map that shows the closest pizza store for every location on the map, marking each location with the number of the store it should order from, so the map will look like a Voronoi diagram.</p> <p>Here is the question.</p> <blockquote> <p>In a town called Regulaville, all the people living there have very strange habits. All the buildings have centers on a well-structured rectangular grid. The mayor of the town is at the most north-west corner of the town, and his house coordinates are <code>(0,0)</code>. Each building in town has coordinates <code>(i,j)</code> that indicate that it is <code>i</code> km south and <code>j</code> km east from the mayor’s house, where <code>i</code> and <code>j</code> are integers such that <code>0 &lt;= i &lt; r</code> and <code>0 &lt;= j &lt; c</code> for some integers <code>r</code> and <code>c</code>.</p> <p>A new brand of pizza came to town! They set up <code>n</code> stores in some of the buildings in town for some <code>n &lt;= 10</code>. We store the coordinates of the pizza stores in a list. For example, a list of <code>[[10,20],[30,20],[40,50]]</code> represents three stores of pizza with Store 0 at <code>(10,20)</code>, Store 1 at <code>(30,20)</code> and Store 2 at <code>(40,50)</code>. (Somehow the big boss of the pizza stores knows programming and he starts counting by 0 also.) There are no two pizza stores in the same location. All the people in Regulaville only eat pizza in their own homes by calling for delivery, and all the stores will deliver pizza by flying drones that will fly directly from the stores to the destination. In order to minimize the time and power used by the drones, the mayor ordered that every home must only order pizzas from the nearest store, unless there is more than one store with equal minimal distance to that home. For example, for the three pizza stores mentioned in the example above, the house at coordinates <code>(40,20)</code> will be closest to Store 1, at a distance of 10 km. Stores 0 and 2 are both farther away, at the same distance of 30 &gt; 10 km. Two utility functions <code>create_zero_matrix</code> and <code>m_tight_print</code> are provided for you.</p> <p>Write a function <code>pd_map(r,c,sites)</code> to compute a map for ALL houses in Regulaville to show the closest pizza store number to each house, where <code>r</code> and <code>c</code> are the height and width of the map respectively, and sites is a list of pizza store coordinates. The location of each house should be represented by the coordinates <code>(i,j)</code> such that <code>0 &lt;= i &lt; r</code> and <code>0 &lt;= j &lt; c</code>. You can assume the number of pizza stores will be less than or equal to 10 and their corresponding store numbers (i.e. labels) will be from 0 to 9. Below shows a sample usage of the function <code>pd_map()</code>.</p> <pre><code>def create_zero_matrix(n,m): return [[0 for i in range(m)] for j in range(n)] def m_tight_print(m): for i in range(len(m)): line = '' for j in range(len(m[0])): line += str(m[i][j]) print(line) </code></pre> </blockquote> <p>Here is my code. I want to first define a function that calculates the closest store for each position and returns the related store number, and then use this function on all of the positions on the map. 
However, the result is always <code>None</code>.</p> <pre><code>def pd_map(r,c,sites): map1 = create_zero_matrix(r,c) print (map1) for i in range(0, r): for j in range(0, c): elem = map1[i][j] def near_one(x,y,sites): res = [] for n in range(len(sites)): res.append(math.sqrt((x-sites[n][0])**2 + (y-sites[n][1])**2)) if res.count(min(res))==1: res.index(min(res)) else: return X map2 = [] for i in range(0, r): row = [] for j in range(0, c): row.append(near_one(i,j,sites)) map2.append(row) return map2 </code></pre>
<ol> <li>Your <code>near_one</code> function only reaches an explicit <code>return</code> when the minimal distance appears more than once, and even then it returns the undefined name <code>X</code>, which raises a <code>NameError</code>. When the minimum is unique, <code>res.index(min(res))</code> is computed but never returned, so the function falls off the end and implicitly returns <code>None</code>, which is why every cell of your map is <code>None</code>. Add the missing <code>return</code> and use a proper tie marker.</li> </ol>
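<p>A minimal sketch of the fixed helper; the <code>'X'</code> marker for ties is an assumption, pick whatever your assignment expects:</p> <pre><code>import math

def near_one(x, y, sites):
    res = [math.sqrt((x - s[0]) ** 2 + (y - s[1]) ** 2) for s in sites]
    if res.count(min(res)) == 1:
        return res.index(min(res))   # the return that was missing
    return 'X'                       # tie marker instead of the undefined X

def pd_map(r, c, sites):
    return [[near_one(i, j, sites) for j in range(c)] for i in range(r)]
</code></pre>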
python
0
1,905,393
69,830,721
Sort a dictionary in decreasing order and in increasing order when there is a same number
<p>I need to sort a dictionary in python in decreasing order, but when the dictionary has two times a number I need to sort those numbers in increasing order:</p> <p>Example:</p> <p>Dictionary:</p> <pre><code>t = {0: 8, 1: 4, 2: 4, 3: 0, 4: 3, 5: 6, 6: 8, 7: 8, 8: 1, 9: 3} </code></pre> <p>My code:</p> <pre><code>dict(sorted(t.items(), reverse=True, key=lambda item: item[1])) </code></pre> <p>Output:</p> <pre><code>{0: 8, 6: 8, 7: 8, 5: 6, 1: 4, 2: 4, 4: 3, 9: 3, 8: 1, 3: 0} ^^^^^^^^^^ </code></pre> <p>Output needed:</p> <pre><code>{0: 8, 6: 8, 7: 8, 5: 6, 2: 4, 1: 4, 4: 3, 9: 3, 8: 1, 3: 0} ^^^^^^^^^^ </code></pre> <p>How can I do that with the <code>lambda</code> function?</p> <p>Thanks in advance for any response</p>
<p>For getting the keys in decreasing order when the value is the same, just add the key to the lambda function:</p> <pre><code>key=lambda item: (item[1], item[0]) </code></pre> <p>As this is really just the <code>item</code> tuple reversed, you can also do:</p> <pre><code>key=lambda item: item[::-1] </code></pre>
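<p>Applied to the data from the question, a minimal usage sketch:</p> <pre><code>t = {0: 8, 1: 4, 2: 4, 3: 0, 4: 3, 5: 6, 6: 8, 7: 8, 8: 1, 9: 3}

# values descending; ties on the value are broken by the key,
# which is also ordered descending because of reverse=True
result = dict(sorted(t.items(), reverse=True, key=lambda item: (item[1], item[0])))
</code></pre>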
python|sorting|lambda
4
1,905,394
60,911,752
Unable to display the label after the button is clicked in Python using PyQt5
<p>The button is calling the <code>clickMethod</code> function, as it is displaying "You clicked PushButton" in the console, but it is not displaying the label.</p> <pre><code>import sys import self as self from PyQt5.QtCore import pyqtSlot from PyQt5.QtWidgets import QApplication, QWidget, QLabel, QLineEdit, QPushButton from PyQt5.uic.properties import QtWidgets if __name__ == "__main__": app = QApplication([]) w = QWidget() VideoUrl = QLabel(w) VideoUrl.setText('videoURL') VideoUrl.move(100, 40) InputText = QLineEdit(w) InputText.move(100, 60) Button = QPushButton(w) Button.setText('Download') Button.move(100, 100) def clickMethod(): print("You clicked PushButton") output = QLabel(w) output.setText("clicked") output.move(100, 70) Button.clicked.connect(clickMethod) w.resize(700, 500) w.show() sys.exit(app.exec_()) </code></pre>
<p>Try adding <code>output.show()</code> at the end of <code>clickMethod</code>. Widgets that are created after their parent window is already visible are hidden by default, so you have to show them explicitly.</p>
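<p>A sketch of the fixed handler:</p> <pre><code>def clickMethod():
    print("You clicked PushButton")
    output = QLabel(w)
    output.setText("clicked")
    output.move(100, 70)
    output.show()  # widgets added after w.show() must be shown explicitly
</code></pre>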
python|pyqt5
1
1,905,395
66,241,035
Sort audio signals into groups based on their feature similarity using Python
<p>I have split audio files consisting of all the English letters (A, B, C, D, etc.) into separate chunks of audio .wav files. I want to sort each letter into a group. For example, I want all the audio files of letter A grouped in one folder. So then I will have 26 folders, each consisting of different sounds of the same letter.</p> <p>I have searched for this, and I found some work done on k-means clustering, but I could not achieve what I need.</p>
<p>First of all, you need to convert the sounds into a representation suitable for further processing, i.e. feature vectors to which you can apply classification or clustering algorithms.</p> <p>For audio, the typical choice is features based on the spectrum. To process the sounds, <a href="https://librosa.org/doc/latest/index.html" rel="nofollow noreferrer">librosa</a> can be very helpful.</p> <p>Since the sounds have different durations and you probably want a fixed-size feature vector for each recording, you need a way to build a single feature vector on top of a series of frames. Here, different methods can be used, depending on the amount of data and the availability of labels. Assuming you have a limited amount of recordings and no labels, you can start by simply stacking several vectors together. Averaging is another possibility, but it destroys the temporal information (which can be OK in this case). Training some kind of RNN to learn a representation as its hidden state is the most powerful method.</p> <p>Take a look at this related answer: <a href="https://stackoverflow.com/questions/41047151/how-to-classify-continuous-audio/41057147#41057147">How to classify continuous audio</a></p>
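<p>A minimal feature-extraction sketch with librosa, averaging MFCC frames into one fixed-size vector per file; the folder layout and the cluster count are assumptions:</p> <pre><code>import glob
import librosa
import numpy as np
from sklearn.cluster import KMeans

features = []
files = sorted(glob.glob('chunks/*.wav'))   # hypothetical folder with the letter chunks
for path in files:
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    features.append(mfcc.mean(axis=1))      # average over time -&gt; fixed-size vector

labels = KMeans(n_clusters=26, random_state=0).fit_predict(np.array(features))
</code></pre>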
python|machine-learning|audio|cluster-analysis|speech-recognition
2
1,905,396
69,253,108
Get a tag list from POS tagging
<p>Currently, I am working on an NLP project, and after applying POS tagging, I have received the below output.</p> <blockquote> <pre><code>[[(ද්විපාර්ශවික, NNP), (එකඟතා, NNP), (ජන, JJ), (ජීවිත, NNJ), (සෞඛ්‍යය, NNC), (මනාව, RB)]] </code></pre> </blockquote> <p>For my work, I need to retrieve only the tags, like this:</p> <pre><code>&gt; pos_tag_list = [['NNP', 'NNP', 'JJ', 'NNJ', 'NNC', 'RB']] </code></pre>
<p>I think this could work.</p> <pre><code>a = [[('ද්විපාර්ශවික', 'NNP'), ('එකඟතා', 'NNP'), ('ජන', 'JJ'), ('ජීවිත', 'NNJ'), ('සෞඛ්‍යය', 'NNC'), ('මනාව', 'RB')]]

def foo(data):
    result = []
    if type(data) == tuple:
        return data[1]                 # a (word, tag) pair: keep only the tag
    if type(data) == list:
        for inner in data:
            result.append(foo(inner))  # recurse into nested lists
        return result

result = foo(a)
</code></pre>
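<p>Since the structure is fixed (a list of sentences, each a list of <code>(word, tag)</code> tuples), a nested list comprehension is a shorter equivalent:</p> <pre><code>pos_tag_list = [[tag for _, tag in sentence] for sentence in a]
# [['NNP', 'NNP', 'JJ', 'NNJ', 'NNC', 'RB']]
</code></pre>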
python|nlp|pos-tagger
1
1,905,397
72,495,842
Import conflict for Cython library when running unittests
<p>I'm working on a Python library written in C, Python and Cython. I wrote tests before for the same library when it was written only in Python, but now the library needs to be compiled before it can be imported.</p> <p>When I try to import the library in the unittest file after building it using <code>python3 setup.py install</code>, I encounter an <code>ImportError</code> which states that the Cython file cannot be imported.</p> <p>The reason is that the file it tries to import is the local one that has not been compiled yet. The library was compiled and installed, but the import system prefers the files under the project directory over the ones in <code>site-packages</code>.</p> <p>What can I do in such situations? I want to be able to run my unittests both locally and on a CI.</p> <p>Here's my project structure and where the errors occur:</p> <pre><code>lib-name: -lib-name/ module.pyx main.py (imports module.pyx) -tests/ test_lib_name.py (imports lib-name, raises ImportError because main.py can't import module.pyx) </code></pre> <p>Thanks!</p>
<p>Before importing, do this (so that the lib doesn't get loaded from the current directory):</p> <p><code>sys.path.remove('')</code></p> <p>After importing, do this (back to the default):</p> <p><code>sys.path.append('')</code></p>
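<p>A sketch of that idea inside a test file. Note that on recent Python versions the entry to remove may be the project directory's absolute path rather than <code>''</code>, so this hypothetical variant removes the project root explicitly; it assumes the test file lives one level below the root, as in the question's layout, and that the package imports as <code>lib_name</code>:</p> <pre><code>import os
import sys

# hypothetical layout: this file is tests/test_lib_name.py, one level below the root
project_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

if project_dir in sys.path:
    sys.path.remove(project_dir)   # force the installed, compiled copy to win

import lib_name                    # now resolved from site-packages

sys.path.append(project_dir)       # restore the default behaviour
</code></pre>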
python|unit-testing|cython
0
1,905,398
73,035,328
How can I detect irregular shapes and remove them from an image with OpenCV?
<p>I'm using the OpenCV library in Python and I have this issue: I have an image from which I previously removed a lot of noise, but there are still a lot of irregular shapes in it that I want to remove.</p> <p>For example, I'm using this image:</p> <p><a href="https://i.stack.imgur.com/a5mws.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a5mws.png" alt="start image" /></a></p> <p>To get the starting image I use this code:</p> <pre><code>import cv2 import pytesseract image = cv2.imread(&quot;Image.png&quot;) ## Heading ## gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7)) inverted_thresh = 255 - thresh dilate = cv2.dilate(inverted_thresh, kernel, iterations=3) cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: x, y, w, h = cv2.boundingRect(c) ROI = thresh[y:y + h, x:x + w] data = pytesseract.image_to_string(ROI, lang='eng', config='--psm 6').lower() sub = cv2.subtract(~gray, dilate) # In[4]: # Sobel Edge Detection sobelx = cv2.Sobel(src=sub, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5) # Sobel Edge Detection on the X axis sobely = cv2.Sobel(src=sub, ddepth=cv2.CV_64F, dx=0, dy=1, ksize=5) # Sobel Edge Detection on the Y axis sobelxy = cv2.Sobel(src=sub, ddepth=cv2.CV_64F, dx=1, dy=1, ksize=5) # Combined X and Y Sobel Edge Detection # In[8]: # Canny Edge Detection edges = cv2.Canny(image=sub, threshold1=45, threshold2=55) # Display Canny Edge Detection Image cv2.imshow('Canny Edge Detection', edges) </code></pre> <p>And I would like to get this result:</p> <p><a href="https://i.stack.imgur.com/GseaC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GseaC.png" alt="desired result" /></a></p> <p>How can I get this result?</p>
<p>If you know the minimum line length of the border, you can easily filter out the other elements.</p> <pre class="lang-py prettyprint-override"><code>import cv2 gray = cv2.imread(&quot;Image.png&quot;, cv2.IMREAD_GRAYSCALE) minLineWidth = 397 hKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (minLineWidth, 1)) vKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, minLineWidth)) hGray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, hKernel) vGray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, vKernel) gray = cv2.bitwise_or(hGray, vGray) cv2.imshow(&quot;gray&quot;, gray) cv2.waitKey() </code></pre> <p>Result image: <a href="https://i.stack.imgur.com/kp796.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kp796.png" alt="Result image" /></a></p> <p>If you don't know the minimum line length, you can use the <a href="https://docs.opencv.org/3.4/d9/db0/tutorial_hough_lines.html" rel="nofollow noreferrer">probabilistic Hough transform</a> to find all lines. Then you can filter the lines by angle and find the minimum repeated line length. After that you can apply the suggested code or just draw the filtered lines.</p> <blockquote> <p>P.S. First of all, try googling your problem. E.g.: <code>stackoverflow filter horizontal lines from image</code>. On the first link you can find good approaches to your problem.</p> </blockquote>
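<p>A sketch of the probabilistic Hough route when the minimum line length is unknown. The thresholds are assumptions to tune, and it expects a binary image with white foreground lines (e.g. the Canny output from the question):</p> <pre><code>import cv2
import numpy as np

edges = cv2.imread('Image.png', cv2.IMREAD_GRAYSCALE)  # assumed: white lines on black
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=5)

mask = np.zeros_like(edges)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # keep only near-horizontal and near-vertical segments
        if abs(x1 - x2) &lt; 3 or abs(y1 - y2) &lt; 3:
            cv2.line(mask, (x1, y1), (x2, y2), 255, 1)
</code></pre>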
python|opencv|image-processing|computer-vision|noise
0
1,905,399
58,954,636
Retrieving InChI keys from a list
<p>This was a dictionary that I had to transform into a list (for me to add something to each string), with each chemical entity having it's own InChI key. I want to extract all the InChI keys but can't seem to do it since the string 'standardInChIKey' isn't a key for me to call upon.</p> <p>Would anybody have an idea on how I could do this?</p> <p>I am a beginner in programming so sorry for any obvious mistakes.</p> <pre><code>my_list = [{'1': {'oscar': [{'md5Sum': '7b6b48af2461f51e889d5f42d0f3f3fa', 'chemicalData': {}}]}}, {'2': {'oscar': [{'md5Sum': 'd9734945a440de92cc1802c1084b4874', 'chemicalData': {'Aporphinium': {'name': 'Aporphinium', 'standardInChI': 'InChI=1S/C17H17N/c1-18-10-9-12-6-4-8-15-14-7-3-2-5-13(14)11-16(18)17(12)15/h2-8,16H,9-11H2,1H3/p+1', 'standardInChIKey': 'BZKUYNBAFQJRDM-UHFFFAOYSA-O'}, 'DMSO': {'name': 'DMSO', 'standardInChI': 'InChI=1S/C2H6OS/c1-4(2)3/h1-2H3', 'standardInChIKey': 'IAZDPXIOMUYVGZ-UHFFFAOYSA-N'}, 'hydroxide': {'name': 'hydroxide', 'standardInChI': 'InChI=1S/H2O/h1H2/p-1', 'standardInChIKey': 'XLYOFNOQVPJJNP-UHFFFAOYSA-M'}, 'picrate': {'name': 'picrate', 'standardInChI': 'InChI=1S/C6H3N3O7/c10-6-4(8(13)14)1-3(7(11)12)2-5(6)9(15)16/h1-2,10H/p-1', 'standardInChIKey': 'OXNIZHLAWKMVMX-UHFFFAOYSA-M'}, 'Isothebaine': {'name': 'Isothebaine', 'standardInChI': 'InChI=1S/C19H21NO3/c1-20-8-7-12-10-15(23-3)19(21)18-16(12)13(20)9-11-5-4-6-14(22-2)17(11)18/h4-6,10,13,21H,7-9H2,1-3H3/t13-/m0/s1', 'standardInChIKey': 'RQCOQZNIQLKGTN-ZDUSSCGKSA-N'}, 'EtOH': {'name': 'EtOH', 'standardInChI': 'InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3', 'standardInChIKey': 'LFQSCWFLJHTTHZ-UHFFFAOYSA-N'}, 'oxoaporphine': {'name': 'oxoaporphine', 'standardInChI': 'InChI=1S/C17H15NO/c1-18-10-16(19)14-8-4-7-13-12-6-3-2-5-11(12)9-15(18)17(13)14/h2-8,15H,9-10H2,1H3', 'standardInChIKey': 'QOJUUOXLHHMNMK-UHFFFAOYSA-N'}, 'ethanol': {'name': 'ethanol', 'standardInChI': 'InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3', 'standardInChIKey': 'LFQSCWFLJHTTHZ-UHFFFAOYSA-N'}, 'Aporphine': {'name': 'Aporphine', 'standardInChI': 'InChI=1S/C17H17N/c1-18-10-9-12-6-4-8-15-14-7-3-2-5-13(14)11-16(18)17(12)15/h2-8,16H,9-11H2,1H3', 'standardInChIKey': 'BZKUYNBAFQJRDM-UHFFFAOYSA-N'}, 'deuteriochloroform': {'name': 'deuteriochloroform', 'standardInChI': 'InChI=1S/CHCl3/c2-1(3)4/h1H/i1D', 'standardInChIKey': 'HEDRZPFGACZZDS-MICDWDOJSA-N'}, 'methoxyphenanthrene': {'name': 'methoxyphenanthrene', 'standardInChI': 'InChI=1S/C15H12O/c1-16-15-8-4-7-13-12-6-3-2-5-11(12)9-10-14(13)15/h2-10H,1H3', 'standardInChIKey': 'ONMKCYMMGBIVPT-UHFFFAOYSA-N'}, 'TETRAMETHOXYPHENANTHRENE': {'name': 'TETRAMETHOXYPHENANTHRENE', 'standardInChI': 'InChI=1S/C18H18O4/c1-19-15-13-10-9-11-7-5-6-8-12(11)14(13)16(20-2)18(22-4)17(15)21-3/h5-10H,1-4H3', 'standardInChIKey': 'PHIDHNJRDMYWKQ-UHFFFAOYSA-N'}, 'CH3OH': {'name': 'CH3OH', 'standardInChI': 'InChI=1S/CH4O/c1-2/h2H,1H3', 'standardInChIKey': 'OKKJLVBELUTLKV-UHFFFAOYSA-N'}, 'APORPHINE': {'name': 'APORPHINE', 'standardInChI': 'InChI=1S/C17H17N/c1-18-10-9-12-6-4-8-15-14-7-3-2-5-13(14)11-16(18)17(12)15/h2-8,16H,9-11H2,1H3', 'standardInChIKey': 'BZKUYNBAFQJRDM-UHFFFAOYSA-N'}, 'DIMETHOXYOXOAPORPHINE': {'name': 'DIMETHOXYOXOAPORPHINE', 'standardInChI': 'InChI=1S/C19H19NO3/c1-20-10-15(21)13-9-16(22-2)19(23-3)18-12-7-5-4-6-11(12)8-14(20)17(13)18/h4-7,9,14H,8,10H2,1-3H3', 'standardInChIKey': 'NOQPPVHNJNQIGF-UHFFFAOYSA-N'}, 'KBr': {'name': 'KBr', 'standardInChI': 'InChI=1S/BrH.K/h1H;/q;+1/p-1', 'standardInChIKey': 'IOLCXVTUBQKXJR-UHFFFAOYSA-M'}, 'aporphine': {'name': 'aporphine', 'standardInChI': 
'InChI=1S/C17H17N/c1-18-10-9-12-6-4-8-15-14-7-3-2-5-13(14)11-16(18)17(12)15/h2-8,16H,9-11H2,1H3', 'standardInChIKey': 'BZKUYNBAFQJRDM-UHFFFAOYSA-N'}, 'Actinodaphnine': {'name': 'Actinodaphnine', 'standardInChI': 'InChI=1S/C18H17NO4/c1-21-14-7-11-10(5-13(14)20)4-12-16-9(2-3-19-12)6-15-18(17(11)16)23-8-22-15/h5-7,12,19-20H,2-4,8H2,1H3/t12-/m0/s1', 'standardInChIKey': 'VYJUHRAQPIBWNV-LBPRGKRZSA-N'}}}]}} </code></pre> <p>If I do this:</p> <pre><code>my_list[1]['2'] </code></pre> <p>I get:</p> <pre><code>{'oscar': [{'md5Sum': 'b7bab051cbd99f75b61bd76f35c0e372', 'chemicalData': {'Aporphinium': {'name': 'Aporphinium', 'standardInChI': 'InChI=1S/C17H17N/c1-18-10-9-12-6-4-8-15-14-7-3-2-5-13(14)11-16(18)17(12)15/h2-8,16H,9-11H2,1H3/p+1', 'standardInChIKey': 'BZKUYNBAFQJRDM-UHFFFAOYSA-O'}, 'DMSO': {'name': 'DMSO', 'standardInChI': 'InChI=1S/C2H6OS/c1-4(2)3/h1-2H3', 'standardInChIKey': 'IAZDPXIOMUYVGZ-UHFFFAOYSA-N'}, 'hydroxide': {'name': 'hydroxide', 'standardInChI': 'InChI=1S/H2O/h1H2/p-1', 'standardInChIKey': 'XLYOFNOQVPJJNP-UHFFFAOYSA-M'}, 'picrate': {'name': 'picrate', 'standardInChI': 'InChI=1S/C6H3N3O7/c10-6-4(8(13)14)1-3(7(11)12)2-5(6)9(15)16/h1-2,10H/p-1', 'standardInChIKey': 'OXNIZHLAWKMVMX-UHFFFAOYSA-M'}, 'Isothebaine': {'name': 'Isothebaine', 'standardInChI': 'InChI=1S/C19H21NO3/c1-20-8-7-12-10-15(23-3)19(21)18-16(12)13(20)9-11-5-4-6-14(22-2)17(11)18/h4-6,10,13,21H,7-9H2,1-3H3/t13-/m0/s1', 'standardInChIKey': 'RQCOQZNIQLKGTN-ZDUSSCGKSA-N'}, 'EtOH': {'name': 'EtOH', 'standardInChI': 'InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3', 'standardInChIKey': 'LFQSCWFLJHTTHZ-UHFFFAOYSA-N'}, 'oxoaporphine': {'name': 'oxoaporphine', 'standardInChI': 'InChI=1S/C17H15NO/c1-18-10-16(19)14-8-4-7-13-12-6-3-2-5-11(12)9-15(18)17(13)14/h2-8,15H,9-10H2,1H3', 'standardInChIKey': 'QOJUUOXLHHMNMK-UHFFFAOYSA-N'}, 'ethanol': {'name': 'ethanol', 'standardInChI': 'InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3', 'standardInChIKey': 'LFQSCWFLJHTTHZ-UHFFFAOYSA-N'}, 'Aporphine': {'name': 'Aporphine', 'standardInChI': 'InChI=1S/C17H17N/c1-18-10-9-12-6-4-8-15-14-7-3-2-5-13(14)11-16(18)17(12)15/h2-8,16H,9-11H2,1H3', 'standardInChIKey': 'BZKUYNBAFQJRDM-UHFFFAOYSA-N'}, 'deuteriochloroform': {'name': 'deuteriochloroform', 'standardInChI': 'InChI=1S/CHCl3/c2-1(3)4/h1H/i1D', 'standardInChIKey': 'HEDRZPFGACZZDS-MICDWDOJSA-N'}, 'methoxyphenanthrene': {'name': 'methoxyphenanthrene', 'standardInChI': 'InChI=1S/C15H12O/c1-16-15-8-4-7-13-12-6-3-2-5-11(12)9-10-14(13)15/h2-10H,1H3', 'standardInChIKey': 'ONMKCYMMGBIVPT-UHFFFAOYSA-N'}, 'TETRAMETHOXYPHENANTHRENE': {'name': 'TETRAMETHOXYPHENANTHRENE', 'standardInChI': 'InChI=1S/C18H18O4/c1-19-15-13-10-9-11-7-5-6-8-12(11)14(13)16(20-2)18(22-4)17(15)21-3/h5-10H,1-4H3', 'standardInChIKey': 'PHIDHNJRDMYWKQ-UHFFFAOYSA-N'}, 'CH3OH': {'name': 'CH3OH', 'standardInChI': 'InChI=1S/CH4O/c1-2/h2H,1H3', 'standardInChIKey': 'OKKJLVBELUTLKV-UHFFFAOYSA-N'}, 'APORPHINE': {'name': 'APORPHINE', 'standardInChI': 'InChI=1S/C17H17N/c1-18-10-9-12-6-4-8-15-14-7-3-2-5-13(14)11-16(18)17(12)15/h2-8,16H,9-11H2,1H3', 'standardInChIKey': 'BZKUYNBAFQJRDM-UHFFFAOYSA-N'}, 'DIMETHOXYOXOAPORPHINE': {'name': 'DIMETHOXYOXOAPORPHINE', 'standardInChI': 'InChI=1S/C19H19NO3/c1-20-10-15(21)13-9-16(22-2)19(23-3)18-12-7-5-4-6-11(12)8-14(20)17(13)18/h4-7,9,14H,8,10H2,1-3H3', 'standardInChIKey': 'NOQPPVHNJNQIGF-UHFFFAOYSA-N'}, 'KBr': {'name': 'KBr', 'standardInChI': 'InChI=1S/BrH.K/h1H;/q;+1/p-1', 'standardInChIKey': 'IOLCXVTUBQKXJR-UHFFFAOYSA-M'}, 'aporphine': {'name': 'aporphine', 'standardInChI': 
'InChI=1S/C17H17N/c1-18-10-9-12-6-4-8-15-14-7-3-2-5-13(14)11-16(18)17(12)15/h2-8,16H,9-11H2,1H3', 'standardInChIKey': 'BZKUYNBAFQJRDM-UHFFFAOYSA-N'}, 'Actinodaphnine': {'name': 'Actinodaphnine', 'standardInChI': 'InChI=1S/C18H17NO4/c1-21-14-7-11-10(5-13(14)20)4-12-16-9(2-3-19-12)6-15-18(17(11)16)23-8-22-15/h5-7,12,19-20H,2-4,8H2,1H3/t12-/m0/s1', 'standardInChIKey': 'VYJUHRAQPIBWNV-LBPRGKRZSA-N'}}}]} </code></pre> <p>Thanks in advance!</p> <p>UPDATE: </p> <p>This is how I obtained the list:</p> <pre><code>my_list = [] for z in old_list: subprocess.call(['wget', '-O', 'journal.pdf', z]) os.system('oscarpdf2json journal.pdf &gt; journal.json') if os.path.isfile('journal.json'): with open('journal.json') as f: journal = json.load(f) os.remove('journal.pdf') os.remove('journal.json') my_list.append({z:{'oscar':journal}}) </code></pre>
<p>I'm a beginner too, but I think you can use this code to extract all the keys of each item in your dictionary, though the variable names may not be appropriate:</p> <pre><code>InChlkeys = []
dictionary = my_list[1]['2']
for i in dictionary.values():                # the {'oscar': [...]} wrapper
    for j in i:                              # each {'md5Sum': ..., 'chemicalData': {...}} dict
        for k in j["chemicalData"].values(): # each chemical entity
            InChlkeys.append(k['standardInChIKey'])
</code></pre>
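<p>If you want the keys from every record in <code>my_list</code> (not only <code>my_list[1]['2']</code>), the same idea wrapped in one more loop works. This sketch assumes every record follows the <code>{url: {'oscar': [{'chemicalData': {...}}]}}</code> shape shown above; records with an empty <code>chemicalData</code> simply contribute nothing:</p> <pre><code>all_keys = []
for record in my_list:
    for url_data in record.values():          # {'oscar': [...]}
        for entry in url_data['oscar']:       # each parsed journal entry
            for chem in entry['chemicalData'].values():
                all_keys.append(chem['standardInChIKey'])
</code></pre>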
python|list|dictionary
1