Dataset columns (name, dtype, min .. max):
Unnamed: 0    int64     0 .. 1.91M
id            int64     337 .. 73.8M
title         string    lengths 10 .. 150
question      string    lengths 21 .. 64.2k
answer        string    lengths 19 .. 59.4k
tags          string    lengths 5 .. 112
score         int64     -10 .. 17.3k
1,902,600
72,830,947
Plotly - Table not displaying properly until column moved / rerendered
<p>I am using Plotly to display data in a simple table. I've populated in some dummy data below.</p> <pre><code>import plotly.graph_objects as go data = [go.Table( columnorder = [1,2], header = dict( values = ['&lt;b&gt;'+'TENOR'+'&lt;/b&gt;'] + list(['a','b','c','d','e']), line_color = 'darkslategray', fill_color = 'royalblue', align = 'center', font = dict(color = 'white', size = 12), height = 20 ), cells = dict( values = [['row1','row2','row3'],[1,1,1],[2,2,2],[3,3,3],[4,4,4],[5,5,5]], line_color = 'darkslategray', fill = dict(color=['paleturquoise', 'white']), align = ['right', 'center'], font_size = 12, height = 20) )] fig = go.Figure(data = data) fig.show() </code></pre> <p>When I run this (Jupyter, but same result happens in a couple other IDEs too) it initially displays as:</p> <p><a href="https://i.stack.imgur.com/jrtwf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jrtwf.png" alt="enter image description here" /></a></p> <p>Then, as soon as I move a column / manipulate it in any way, it then redisplays (<strong>properly</strong>, except the column that I've moved) as:</p> <p><a href="https://i.stack.imgur.com/kbNsn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kbNsn.png" alt="enter image description here" /></a></p> <p>Any idea why it's not rendering correctly in the first place? I basically took this right from one of <a href="https://plotly.com/python/table/#tables-in-dash" rel="nofollow noreferrer">their table examples</a>...</p>
<p>Comment out your line <code>columnorder = [1,2],</code>.</p> <pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go

data = [go.Table(
    #columnorder = [1,2],
    header = dict(
        values = ['&lt;b&gt;'+'TENOR'+'&lt;/b&gt;'] + list(['a','b','c','d','e']),
        line_color = 'darkslategray',
        fill_color = 'royalblue',
        align = 'center',
        font = dict(color = 'white', size = 12),
        height = 20
    ),
    cells = dict(
        values = [['row1','row2','row3'],[1,1,1],[2,2,2],[3,3,3],[4,4,4],[5,5,5]],
        line_color = 'darkslategray',
        fill = dict(color=['paleturquoise', 'white']),
        align = ['right', 'center'],
        font_size = 12,
        height = 20)
)]
fig = go.Figure(data = data)
fig.show()
</code></pre> <p>Specifying <code>columnorder</code> tells the table to lay out only the listed columns, so with <code>[1,2]</code> the remaining columns are not drawn until a re-render forces them in.</p> <p><strong>Alternatively</strong>, change that line to <code>columnorder = [0,1,2,3,4,5],</code> so that every column (the row-label column is index 0) is listed.</p>
python|plotly|plotly-dash|plotly-python
1
1,902,601
59,395,242
Reading from many files in a loop and writing the read data from each file in another file in Python
<p>I am making a script to read several files and copy the information from them to another file.</p> <p>My initial script is supposed to read <code>.xlsx</code> files from a given directory, copy a certain part of a specific sheet in the file and paste it to another file. This is my code, which is working:</p> <pre class="lang-py prettyprint-override"><code>import xlsxwriter import pandas as pd import numpy data=pd.read_excel(r'C:\Users\bvi\Desktop\python script\EMC_review_template_v10.xlsx',sheet_name='Test_Summary',skiprows=5,nrows=44,usecols='A:O') df = pd.DataFrame(data) df2=df.to_excel('try.xlsx',sheet()) print(df2) </code></pre> <p>When I tried to form a loop the problems began: </p> <pre class="lang-py prettyprint-override"><code>import xlsxwriter import pandas as pd import numpy import pathlib import os path=(r"C:\Users\bvi\Desktop\files") files = [] ###r=root, d=directories, f = files for root, dirs, files in os.walk(path): for filename in files: length=len(files) for i in range(length): print(filename[i]) data = pd.read_excel(r'path\filename[i]',sheet_name='Test_Summary', skiprows=5,nrows=44, usecols='A:O') df = pd.DataFrame(data) df2=df.to_excel(r'C:\Users\bvi\Desktop\result\summary.xlsx',sheet_name='EMC_review1') print(df2, file) </code></pre> <p>I am struggling to create a loop which goes through the files in the directory. I would like to make the reading of files a function of the file name.</p>
<p>I suspect the problem comes from the double loop nested inside the <code>os.walk</code> loop, and from the path <code>r'path\filename[i]'</code>, which is a literal string rather than your variables. Also note that calling <code>df.to_excel(...)</code> with a file path inside the loop recreates <code>summary.xlsx</code> on every iteration, keeping only the last sheet; a single <code>ExcelWriter</code> avoids that. Try this:</p> <pre><code>import os

import pandas as pd

path = r"C:\Users\bvi\Desktop\files"

with pd.ExcelWriter(r'C:\Users\bvi\Desktop\result\summary.xlsx') as writer:
    for root, dirs, files in os.walk(path):
        for filename in files:
            print(filename)
            data = pd.read_excel(os.path.join(root, filename),
                                 sheet_name='Test_Summary',
                                 skiprows=5, nrows=44, usecols='A:O')
            data.to_excel(writer, sheet_name=filename)
</code></pre> <p>If all your filenames are unique, this will combine the 'Test_Summary' sheet from several xlsx files into one multi-sheet xlsx (one sheet per file). Note that Excel limits sheet names to 31 characters, so very long filenames would need truncating. Is that what you need?</p>
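One detail worth isolating from the question's loop: the raw string `r'path\filename[i]'` never substitutes the variables, which is a second reason the reads failed. A minimal stdlib sketch of the difference (paths are illustrative):

```python
import os

path = r"C:\Users\bvi\Desktop\files"
filename = "EMC_review_template_v10.xlsx"

# A raw string is still a literal: no variable substitution happens here.
literal = r'path\filename[i]'
print(literal)  # path\filename[i]

# Build the actual path from the loop variables instead:
real = os.path.join(path, filename)
print(real.endswith(filename))  # True
```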
python|xlsxwriter
0
1,902,602
59,422,722
How to fix various RuntimeWarnings of type "overflow encountered in..." when using Least-squares minimization in Scipy?
<p>I am using the <code>least_squares()</code> function from the <code>scipy.optimize</code> module to calibrate a Canopy structural dynamic model (CSDM). The calibrated model is then used to predict leaf area index (lai) based on thermal time (tt) data. I tried two variants: the first did not use the "loss" parameter of the least_squares() function, while the second set this parameter to produce a robust model. The first model did not fit the data as well as the second. However, with the second model I get these warnings:</p> <pre><code>res_robust = least_squares(fun, x0, loss='soft_l1', f_scale=0.1, args=(tt_train, lai_train))
__main__:2: RuntimeWarning: overflow encountered in power
__main__:2: RuntimeWarning: overflow encountered in exp
C:\Anaconda3\envs\geo\lib\site-packages\scipy\optimize\_lsq\least_squares.py:220: RuntimeWarning: overflow encountered in square
  z = (f / f_scale) ** 2
</code></pre> <p>Nevertheless, the resulting fit from the second model (in green) looks good when plotted. <a href="https://i.stack.imgur.com/E8XKD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E8XKD.png" alt="enter image description here"></a></p> <p>My dataset has more digits after the decimal point than I actually need. Is it possible that the warnings have something to do with that? Can I ignore the warnings if this precision is not needed? Or is the problem more serious, with the computations being compromised?</p> <p>Here is my code. I apologize for hard-coding a dataset that long. 
</p> <pre><code># Canopy structural dinamic model (CSDM) import numpy as np from scipy.optimize import least_squares import matplotlib.pyplot as plt tt_train = np.array([394.926, 629.43017, 681.39683, 921.36142, 979.08705, 1042.42455, 1109.76622, 1191.00372, 1348.94747, 1445.08913, 1631.68705, 1986.46622]) lai_train = np.array([0.35391, 0.77602, 0.78485, 1.11895, 3.12987, 3.21052, 4.85756, 5.1311, 6.22953, 7.33323, 7.38312, 3.86341]) # The CSDM formula (Duveiller et al. 2011. Retrieving wheat Green Area Index during the growing season...) # LAI = k * (1 / ((1 + Exp(-a * (tt - T0 - Ta))) ^ c) - Exp(b * (tt - T0 - Tb))) # initial estimates of parameters To = 50 # plant emergence (x[0]) Ta = 1000 # midgrowth (x[1]) Tb = 2000 # end of cenescence (x[2]) k = 6 # scaling factor (arox. max LAI) (x[3]) a = 0.01 # rate of growth (x[4]) b = 0.01 # rate of senescence (x[5]) c = 1 # parameter allowing some plasticity to the shape of the curv (x[6]) x0 = np.array([To, Ta, Tb, k, a, b, c]) def model(x, tt): return x[3] * (1 / ((1 + np.exp(-x[4] * (tt - x[0] - x[1]))) ** x[6]) - np.exp(x[5] * (tt - x[0] - x[2]))) #Define the function computing residuals for least-squares minimization def fun(x, tt, lai): return model(x, tt) - lai # calibrate two models # first is the simpler one - no error but poor fit res_lsq = least_squares(fun, x0, args=(tt_train, lai_train)) # then the robust model - gives overflow RuntimeWarning but the result seems well when plotted res_robust = least_squares(fun, x0, loss='soft_l1', f_scale=0.1, args=(tt_train, lai_train)) # termal time data for full season tt_test = np.array([11.79375, 22.98125, 34.47708333, 45.46875, 56.95416667, 69.475, 84.39583333, 98.66875, 107.0416667, 116.7875, 129.7458333, 141.04375, 152.9333333, 165.0791667, 180.425, 195.0395833, 209.71875, 224.3958333, 238.4020833, 252.1166667, 266.0625, 281.0541667, 295.6270833, 310.4291667, 322.2916667, 331.6375, 338.11875, 346.7729167, 358.5770833, 369.5375, 380.3135, 388.6364167, 394.926, 
401.8093333, 409.1926667, 418.4239167, 425.3176667, 430.351, 436.0093333, 443.4780833, 451.61975, 460.3905833, 468.851, 475.3926667, 484.2051667, 497.26975, 506.9989167, 513.2426667, 519.7780833, 525.5176667, 531.9343333, 539.2551667, 544.1426667, 549.8780833, 558.7655833, 565.49475, 568.7426667, 572.1030833, 575.55725, 578.0864167, 580.3155833, 583.0426667, 586.9280833, 592.651, 598.5155833, 602.5218333, 604.65725, 606.4968333, 610.776, 615.4135, 620.0718333, 627.8051667, 629.4301667, 629.776, 631.9176667, 635.83225, 643.8489167, 652.3989167, 662.1030833, 664.5718333, 666.3676667, 666.476, 666.476, 666.476, 666.476, 667.5801667, 673.551, 681.3968333, 683.98225, 690.0614167, 697.301, 702.6093333, 704.9905833, 706.326, 710.39475, 713.3218333, 718.3093333, 721.90725, 724.0989167, 726.5364167, 729.7905833, 732.7635, 736.4780833, 739.0676667, 742.4551667, 746.3114167, 746.5655833, 746.5655833, 746.5655833, 746.5655833, 746.5655833, 748.7864167, 751.9280833, 754.0614167, 754.0614167, 754.6801667, 754.6801667, 754.6801667, 754.6801667, 754.6801667, 754.6801667, 754.6801667, 760.0489167, 766.0593333, 771.8551667, 775.7968333, 782.6843333, 795.9135, 801.8239167, 804.5530833, 807.4280833, 808.3218333, 811.5530833, 816.31975, 817.4114167, 818.1405833, 820.3364167, 823.2301667, 825.2468333, 827.9676667, 831.8093333, 835.48225, 838.2280833, 840.1135, 840.8405833, 842.8385, 843.66975, 844.0385, 844.9551667, 844.9551667, 844.9551667, 844.9551667, 844.9551667, 844.9551667, 844.9551667, 845.8093333, 845.8093333, 845.8093333, 846.6739167, 849.451, 852.89475, 860.151, 867.6780833, 878.5593333, 889.301, 900.2155833, 911.3155833, 921.3614167, 931.2989167, 944.3405833, 947.4405833, 947.4405833, 947.4405833, 948.4426667, 948.4426667, 948.4426667, 948.4426667, 948.6078841, 949.8453841, 954.3495507, 960.8703841, 967.9453841, 979.0870507, 996.1058007, 1009.132884, 1019.322467, 1029.478717, 1042.424551, 1057.395384, 1069.930801, 1082.962051, 1095.920384, 1109.766217, 1124.191217, 
1140.557884, 1156.843301, 1173.526634, 1191.003717, 1206.739134, 1221.112051, 1233.226634, 1247.184967, 1263.372467, 1278.449551, 1293.843301, 1311.147467, 1329.207884, 1348.947467, 1368.468301, 1388.553717, 1407.682884, 1426.270384, 1445.089134, 1461.795384, 1481.364134, 1499.868301, 1518.032884, 1538.305801, 1559.057884, 1578.743301, 1596.380801, 1614.578717, 1631.687051, 1648.178717, 1665.380801, 1682.168301, 1699.143301, 1713.870384, 1731.584967, 1749.303717, 1766.366217, 1784.191217, 1801.568301, 1818.239134, 1835.541217, 1853.730801, 1872.880801, 1891.470384, 1910.580801, 1929.641217, 1948.868301, 1967.932884, 1986.466217, 2005.532884, 2024.989134]) # apply the two models on the full season data lai_lsq = model(res_lsq.x, tt_test) lai_robust = model(res_robust.x, tt_test) plt.plot(tt_train, lai_train, 'o', markersize=4, label='training data') plt.plot(tt_test, lai_lsq, label='fitted lsq model') plt.plot(tt_test, lai_robust, label='fitted robust model') plt.xlabel("tt") plt.ylabel("LAI") plt.legend(loc='upper left') plt.show() </code></pre>
<p><a href="https://i.stack.imgur.com/lLz8n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lLz8n.png" alt="enter image description here"></a></p> <p>According to the above, setting a small value for <code>f_scale</code> (represented as C in the formula) increases the value of the argument given to <code>rho</code>, which can generate the overflow. Increasing the value of <code>f_scale</code> gets rid of the warning, but it gives a less satisfying fit. Perhaps using a different loss function, such as <code>cauchy</code>, could help.</p>
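The behaviour behind those warnings can be reproduced in isolation (a sketch, not part of the original code): NumPy signals overflow but still returns `inf` rather than raising, which is why `least_squares` can keep iterating, and `np.errstate` silences the warning locally once the fit has been judged acceptable.

```python
import numpy as np

# np.exp overflows for large arguments: it warns and returns inf rather
# than raising, so downstream code can still run.
x = np.float64(1000.0)
with np.errstate(over='ignore'):  # suppress the RuntimeWarning locally
    y = np.exp(x)
print(np.isinf(y))  # True

# Rescaling the argument (e.g. working in kilo-degree-days) keeps the
# exponent in range and avoids the warning entirely.
print(np.exp(x / 1000.0))  # 2.718281828459045
```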
python|scipy
1
1,902,603
72,887,834
Hi, I'm trying to run a bash script using Python but I am getting a permission error when I try to run it with Python
<p>So long story short i am making an auto mater using python and bash so i can just run one python file and all my docker containers and self made scripts can run but every time i try to run a bash script i get permission 13 (bare in mind I did use the chmod u+x command)</p> <p>here is the code:</p> <pre><code>import os from colorama import Fore import time import subprocess while True: print(Fore.GREEN + &quot;Welcome\n&quot;) print(&quot;To Automater&quot;) options = print(Fore.WHITE + &quot;&quot;&quot;what would you like to do?\n1.Start Containers\n2.Stop Containers\n3.Run A script\n4.Add containers\n5.Exit\n&quot;&quot;&quot;) started = input(&quot;1-4:\n&quot;) if started == &quot;1&quot;: print(Fore.RED + &quot;Starting containers&quot;) #os.startfile(Start_Containers.sh) subprocess.call([&quot;./Start_Containers.sh&quot;]) </code></pre> <p>and here is the error i get:</p> <pre><code>PermissionError: [Errno 13] Permission denied: './Start_Containers.sh' Process finished with exit code 1 </code></pre> <p>here is all the code:</p> <pre><code>import os from colorama import Fore import time import subprocess while True: print(Fore.GREEN + &quot;Welcome\n&quot;) print(&quot;To Automater&quot;) options = print(Fore.WHITE + &quot;&quot;&quot;what would you like to do?\n1.Start Containers\n2.Stop Containers\n3.Run A script\n4.Add containers\n5.Exit\n&quot;&quot;&quot;) started = input(&quot;1-4:\n&quot;) if started == &quot;1&quot;: print(Fore.RED + &quot;Starting containers&quot;) #os.startfile(Start_Containers.sh) subprocess.call([&quot;./Start_Containers.sh&quot;]) if started == &quot;2&quot;: print(Fore.RED + &quot;Stoping containers&quot;) os.startfile(&quot;./Stop_Containers.sh&quot;) elif started == &quot;3&quot;: Scripts = [] for files in os.listdir(): Scripts.append(files) print(Scripts) elif started ==&quot;4&quot;: while True: add_container = input(&quot;Would you like to add a container [y/N]&quot;) if add_container.upper() == &quot;Y&quot;: 
print(&quot;Ok Lets add some&quot;) added_containers = input(&quot;ok lets add a container&quot;) elif add_container.upper() == &quot;N&quot;: break exit() elif started == &quot;5&quot;: print(Fore.GREEN + &quot;Thank you for using Automater&quot;) time.sleep(1) break exit() </code></pre>
<p>To run the bash script it looks like you need admin access. Instead of running the script with <code>python &lt;script_name&gt;.py</code>, run it with <code>sudo python &lt;script_name&gt;.py</code> on Linux.</p> <p>Strictly speaking, this is very, very bad practice. But if you are doing this just for fun (which it looks like you are), it's OK. But never, EVER do this on anything even remotely important. These are the forbidden techniques.</p>
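An alternative that avoids `sudo` altogether (a sketch using a throwaway script, not the asker's actual `Start_Containers.sh`): passing the script to `bash` as an argument means the file does not need the executable bit, which is the usual cause of `[Errno 13]`.

```python
import os
import subprocess
import tempfile

# Write a stand-in script (hypothetical; substitute your own path).
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("echo containers started\n")
    script = f.name

# No chmod u+x needed: bash reads the file as an argument.
result = subprocess.run(["bash", script], capture_output=True, text=True)
print(result.stdout.strip())  # containers started

os.unlink(script)
```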
python|bash
-2
1,902,604
62,316,220
Save the existing files without deleting existing data by splitting the DataFrame by column
<p>I have three excel files- A1/A2/A3 with some existing data in 'Sheet1'.</p> <pre><code>import glob from openpyxl import load_workbook from openpyxl.utils.dataframe import dataframe_to_rows import pandas as pd df=pd.DataFrame() for f in glob.glob(r'...\Excel\A*.xlsx'): info=pd.read_excel(f) df=df.append(info) </code></pre> <p>With the above code I have got the below DataFrame:</p> <pre><code>Sample Description ---------------------- A1 Auto A2 Manual A3 Fully-Automated </code></pre> <p>I want data of A1 to get pasted in A1 file, A2 data to get pasted in A2 file and A3 data to get pasted in A3 file in 'Sheet2' without deleting the existing 'Sheet1'.</p> <pre><code>Sample Description ---------------------- ````` this data should go in A1 File A1 Auto Sample Description ---------------------- ````` this data should go in A2 File A2 Manual Sample Description ---------------------- ````` this data should go in A3 File A3 Fully-Automated </code></pre> <p>I tried writing below code but only last row is getting pasted in all three excel files.</p> <pre><code> - File A1 *Sheet2* Sample Description ---------------------- A3 Fully-Automated - File A2 *Sheet2* Sample Description ---------------------- A3 Fully-Automated - File A3 *Sheet2* Sample Description ---------------------- A3 Fully-Automated for filename in glob.glob(r'...\Excel\A*.xlsx'): for name,data in group_df: book=load_workbook(filename) writer=pd.ExcelWriter(filename,engine='openpyxl') writer.book=book data.to_excel(writer,sheet_name='Sheet2') writer.save() writer.close() </code></pre> <p>I need to groupby the DataFrame by column 'Sample' and paste the splitted data back to respective files with New sheet as 'Sheet2' without deleting the existing 'Sheet1'.</p>
<p>It's hard to be sure without a minimal reproducible example, but this should lead you to the solution:</p> <pre><code>import glob

import pandas as pd
from openpyxl import load_workbook

# this makes things a bit easier
folder_path = r'...\Excel\\'

# that's ok
df = pd.DataFrame()
for f in glob.glob(folder_path + 'A*.xlsx'):
    info = pd.read_excel(f)
    df = df.append(info)

# iterate over the groupby:
# name is the label in the column 'Sample', group is a dataframe
for name, group in df.groupby('Sample'):
    # file path for the name of the group (A1, A2, ...)
    filename = folder_path + name + '.xlsx'
    # open the existing workbook so Sheet1 is preserved
    book = load_workbook(filename)
    writer = pd.ExcelWriter(filename, engine='openpyxl')
    writer.book = book
    # save the group to a new sheet
    group.to_excel(writer, sheet_name='Sheet2')
    writer.save()
    writer.close()
</code></pre>
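The key mechanism in the answer is iterating over the groupby object itself; a self-contained sketch of what `name` and `group` hold for the data in the question:

```python
import pandas as pd

# Rebuild the combined DataFrame from the question.
df = pd.DataFrame({'Sample': ['A1', 'A2', 'A3'],
                   'Description': ['Auto', 'Manual', 'Fully-Automated']})

# Each iteration yields one label from 'Sample' and its sub-DataFrame --
# the piece that gets written into the matching file's Sheet2.
for name, group in df.groupby('Sample'):
    print(name, group['Description'].tolist())
```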
python|pandas|pandas-groupby|openpyxl|glob
1
1,902,605
62,042,811
What is the correct way to import modules when I'm writing my own module in python?
<p>I've searched for similar questions, but what I've found doesn't work for me.<br> I'm writing the report of my analysis in a Jupyter notebook (let's say <code>main.ipynb</code>). I want to import an external <code>functions.py</code> file with some functions that I use to plot some results. To be precise, my working directory has the following structure:<br> -<code>main.ipynb</code><br> -<strong>utils</strong><br> ----<code>functions.py</code><br> ---- other files... </p> <p>The <code>functions.py</code> file is something like this:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

def myPlot():
    plt.figure()
    plt.plot([0,1],[0,1])
    plt.show()
....
</code></pre> <p>and the first cell of the notebook is this:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from utils.functions import *

myPlot()
</code></pre> <p>When I run the notebook I get this error: <code>NameError: name 'plt' is not defined</code>, although I have defined plt in both files (even if I think I really shouldn't need it in the <code>main.ipynb</code>).</p> <p>So, what is the correct way to import a package (<code>matplotlib.pyplot</code> in this case) in an external file? What am I doing wrong?</p>
<p>I've found the flaw in my code and I think it's worth sharing, so here I am.<br> Maybe for beginners using Jupyter Notebook (like me) this can be tricky to detect: once you've run the cell with the import statement, it doesn't matter if you edit your <code>functions.py</code> file and re-run that cell. The kernel has already imported a module with that exact name, so it won't pick up the difference even if you've made changes.</p> <p>The simplest solution I found is to restart the kernel every time you change the <code>functions.py</code> file.</p>
python|matplotlib|jupyter-notebook|python-import|python-module
0
1,902,606
62,124,346
Why can't I webscrape the table that I want?
<p>I am new to BeautifulSoup and I wanted to try out some web scraping. For my little project, I wanted to get the Golden State Warrior win rate from Wikipedia. I was planning to get the table that had that information and make it into a panda so I could graph it over the years. However, my code selects the Table Key table instead of the Seasons table. I know this is because they are the same type of table (wikitable), but I don't know how to solve this problem. I am sure that there is an easy explanation that I am missing. Can someone please explain how to fix my code and explain how I could choose which tables to web scrape in the future? Thanks!</p> <pre><code>c_data = "https://en.wikipedia.org/wiki/List_of_Golden_State_Warriors_seasons" #wikipedia page c_page = urllib.request.urlopen(c_data) c_soup = BeautifulSoup(c_page, "lxml") c_table=c_soup.find('table', class_='wikitable') #this is the problem c_year = [] c_rate = [] for row in c_table.findAll('tr'): #setup for dataframe cells=row.findAll('td') if len(cells)==13: c_year = c_year.append(cells[0].find(text=True)) c_rate = c_rate.append(cells[9].find(text=True)) print(c_year, c_rate) </code></pre>
<h2>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_html.html" rel="nofollow noreferrer"><code>pd.read_html</code></a> to get all the tables</h2> <ul> <li>This function returns a list of dataframes <ul> <li><code>tables[0]</code> through <code>tables[17]</code>, in this case</li> </ul></li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd # read tables tables = pd.read_html('https://en.wikipedia.org/wiki/List_of_Golden_State_Warriors_seasons') print(len(tables)) &gt;&gt;&gt; 18 tables[0] 0 1 0 AHC NBA All-Star Game Head Coach 1 AMVP All-Star Game Most Valuable Player 2 COY Coach of the Year 3 DPOY Defensive Player of the Year 4 Finish Final position in division standings 5 GB Games behind first-place team in division[b] 6 Italics Season in progress 7 Losses Number of regular season losses 8 EOY Executive of the Year 9 FMVP Finals Most Valuable Player 10 MVP Most Valuable Player 11 ROY Rookie of the Year 12 SIX Sixth Man of the Year 13 SPOR Sportsmanship Award 14 Wins Number of regular season wins # display all dataframes in tables for i, table in enumerate(tables): print(f'Table {i}') display(table) print('\n') </code></pre> <h2>Select specific table</h2> <pre class="lang-py prettyprint-override"><code>df_i_want = tables[x] # x is the specified table, 0 indexed # delete tables del(tables) </code></pre>
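The same call works on any HTML source, not just a URL; an offline sketch (assumes one of pandas's HTML parsers, e.g. `lxml`, is installed) showing why indexing the returned list matters when a page holds several tables:

```python
from io import StringIO

import pandas as pd

# Two tables in one document, like the Key table and the Seasons table on
# the Wikipedia page: read_html returns them all, in document order.
html = """
<table><tr><th>Key</th><th>Meaning</th></tr>
<tr><td>MVP</td><td>Most Valuable Player</td></tr></table>
<table><tr><th>Season</th><th>Wins</th></tr>
<tr><td>2015</td><td>73</td></tr></table>
"""
tables = pd.read_html(StringIO(html))
print(len(tables))                 # 2
print(tables[1].columns.tolist())  # ['Season', 'Wins']
```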
python|python-3.x|dataframe|beautifulsoup|wikipedia
2
1,902,607
62,175,915
Pyspark - ImportError: No module named
<p>I am working on a pyspark project below is my project directory structure.</p> <pre><code>project_dir/ src/ etl/ __init__.py etl_1.py spark.py config/ __init__.py utils/ __init__.py test/ test_etl_1.py setup.py README.md requirements.txt </code></pre> <p>When I run below unit test code I get </p> <pre><code>python test_etl_1.py Traceback (most recent call last): File "test_etl_1.py", line 1, in &lt;module&gt; from src.etl.spark import get_spark ImportError: No module named src.etl.spark </code></pre> <p>This is my unit test file:</p> <pre><code>from src.etl.spark import get_spark from src.etl.addcol import with_status class TestAppendCol(object): def test_with_status(self): source_data = [ ("p", "w", "pw@sample.com"), ("j", "b", "jb@sample.com") ] source_df = get_spark().createDataFrame( source_data, ["first_name", "last_name", "email"] ) actual_df = with_status(source_df) expected_data = [ ("p", "w", "pw@sample.com", "added"), ("j", "b", "jb@sample.com", "added") ] expected_df = get_spark().createDataFrame( expected_data, ["first_name", "last_name", "email", "status"] ) assert(expected_df.collect() == actual_df.collect()) </code></pre> <p>I need to run this file as pytest but it is not working due to Module error. Can you please help me on this error.</p>
<p>Your PYTHONPATH depends on where you are navigated. Given that you say that you run <code>python test_etl_1.py</code>, you must be in <code>~/project_dir/test/</code>. Therefore, it can't find <code>src</code>.</p> <p>If you would run <code>python -m unittest</code> from <code>~/project_dir/</code> it should work. If not, you can always try to fix/improve the installation of your package like shown <a href="https://stackoverflow.com/a/53677244/10093070">here</a>.</p>
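A common workaround when you must run the test file directly (a hypothetical snippet for the top of `test_etl_1.py`; the hard-coded path is illustrative, and in practice you would derive it from the test file's location): put the project root on `sys.path` before importing from `src`.

```python
import sys

# Hypothetical project root; compute it relative to the test file in
# real code rather than hard-coding it like this.
project_root = "/home/user/project_dir"

if project_root not in sys.path:
    sys.path.insert(0, project_root)  # now "from src.etl.spark import ..." resolves

print(sys.path[0])  # /home/user/project_dir
```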
python|apache-spark|pyspark|pytest
0
1,902,608
35,444,422
Making an IRC bot with a GUI. Difficulties with disconnecting on toggle-button toggle
<p>I'm just learning python as I need it, so hold on thight because this code gonna be messy!...</p> <p>So I worked with glade to create a gui for my client side twitch irc chat bot and created this toggle button in a toolbar:</p> <pre><code>&lt;object class="GtkToggleToolButton" id="tool_deploy_toggle"&gt; &lt;property name="use_action_appearance"&gt;False&lt;/property&gt; &lt;property name="visible"&gt;True&lt;/property&gt; &lt;property name="can_focus"&gt;False&lt;/property&gt; &lt;property name="label" translatable="yes"&gt;Connect&lt;/property&gt; &lt;property name="use_underline"&gt;True&lt;/property&gt; &lt;property name="stock_id"&gt;gtk-jump-to&lt;/property&gt; &lt;signal name="toggled" handler="on_tool_deploy_toggle_toggled" swapped="no"/&gt; &lt;/object&gt; </code></pre> <p>And I want this toggle button to open a socket and deploy the bot to the twitch irc chat when the button is toggled "down" (and also do some defining and loading stuff as you can see):</p> <pre><code>irc = botOpenSocket() joinRoom(irc) readbuffer = "" irc.send("CAP REQ :twitch.tv/membership\r\n") irc.send("CAP REQ :twitch.tv/commands\r\n") irc.send("CAP REQ :twitch.tv/tags\r\n") try: with file("commands.json","r") as commandsDatabase: commands = json.load(commandsDatabase) except IOError: commands = {} with file("commands.json","w") as commandsDatabase: json.dump(commands, commandsDatabase) globalcommands = {"spank": True} moderatorcommands = {"addcom": True, "delcom": True} stringspace = " " nothing = "" now = time.time() cooldown = lambda: time.time() &gt; now + 1 </code></pre> <p>Then I want it to stay looping this code(ignore the comments they are in portuguese)(also yes I know my code isn't the best, I'm just learning):</p> <pre><code>while True: readbuffer = readbuffer + irc.recv(1024) temp = string.split(readbuffer, "\n") readbuffer = temp.pop() for line in temp: 
###Essenciais###-------------------------------------------------------------------------------------------------------------------------------------------- #Mostra a linha que e dada pelo servidor de IRC (So pelo sim pelo nao).----------------------------------------------------------------------- print (line) #--------------------------------------------------------------------------------------------------------------------------------------------- #Impede que seja desconectado pelo servidor de IRC.------------------------------------------------------------------------------------------- if line.startswith('PING'): irc.send('PONG ' + line.split( ) [ 1 ] + '\r\n') print "PONGED BACK" break #--------------------------------------------------------------------------------------------------------------------------------------------- #Le a linha que e dada pelo servidor de IRC e devevole o utilizador, a menssagem e o canal. Volta se algum for nulo.-------------------------- user = getUser(line) message = getMessage(line) channel = getChannel(line) moderator = getModerator(line) if channel == None or user == None or message == None: break #--------------------------------------------------------------------------------------------------------------------------------------------- #Formata o texto e mostra mostra na consola.-------------------------------------------------------------------------------------------------- print channel + ": " + user + " &gt; " + message #--------------------------------------------------------------------------------------------------------------------------------------------- ###Essenciais END###---------------------------------------------------------------------------------------------------------------------------------------- if message == "!commands\r": globalcommandskeys = str(globalcommands.keys()).replace("[", "").replace("]", "") moderatorcommandskeys = str(moderatorcommands.keys()).replace("[", "").replace("]", "") 
channelcommandskeys = str(commands.keys()).replace("[", "").replace("]", "") sendMessage(irc, "Global commands: " + globalcommandskeys) if channelcommandskeys != "": sendMessage(irc, "Channel specific commands: " + channelcommandskeys ) if moderator == "1": sendMessage(irc, "Moderator commands: " + moderatorcommandskeys) break if message.startswith("!addcom ") and (moderator == "1" or user == channel): if message.count(" ") &gt;= 2: try: commandadd = command_add(message) answer = command_answer(message) except IndexError: sendMessage(irc, user + " the command is used this way !addcom !&lt;command_name&gt; &lt;command_answer&gt;") break if globalcommands.has_key(commandadd) or moderatorcommands.has_key(commandadd): sendMessage(irc, user + " you can't add the command " + '"!' + commandadd + '" !!!') break try: commands[commandadd] except KeyError: commands[commandadd] = answer sendMessage(irc, user + " the command !" + commandadd + " has been added!!!") with file("commands.json","w") as commandsDatabase: json.dump(commands, commandsDatabase) break sendMessage(irc, user + " the command you tried to add alredy exists!!!") break sendMessage(irc, user + " the command is used this way !addcom !&lt;command_name&gt; &lt;command_answer&gt;") break if message.startswith("!delcom ") and (moderator == "1" or user == channel): if message.count(" ") == 1: try: commanddel = command_del(message) except IndexError: sendMessage(irc, user + "the command is used this way !delcom !&lt;command_name&gt;") break if globalcommands.has_key(commanddel) or moderatorcommands.has_key(commanddel): sendMessage(irc, user + " you can't delete the command " + '"!' + commanddel + '" !!!') break try: commands[commanddel] except KeyError: sendMessage(irc, user + " the command you tried to delete doens't exist!!!") break del commands[commanddel] sendMessage(irc, user + " the command !" 
+ commanddel + " has been deleted!!!") with file("commands.json","w") as commandsDatabase: json.dump(commands, commandsDatabase) break sendMessage(irc, user + " the command is used this way !delcom !&lt;command_name&gt;") break if message.startswith("!"): if cooldown() == True: if message.count(" ") == 0: try: command = getCommand(message) except IndexError: break try: sendMessage(irc, commands[command]) now = time.time() cooldown = lambda: time.time() &gt; now + 10 except KeyError: break if message.count(" ") == 1: try: command = getCommandSpaced(message) target = getString(message) except IndexError: break try: replacing = commands[command] sendMessage(irc, replacing.replace("$target", target)) now = time.time() cooldown = lambda: time.time() &gt; now + 10 except KeyError: break break </code></pre> <p>And then finally when the button is toggled "up" I want to close the socket so the bot leaves the irc server:</p> <pre><code>irc.close() </code></pre> <p>I want all the above things to be able to be done without closing and reopening the script.</p> <p>So the problem is I can't do this. </p> <p>If I put into the main script(the one that connects the button signals from the GUI) it will break the gtk main loop and the GUI will crash.</p> <p>I've tried to use threads but I don't seem to understand them.</p>
<p>Status update I done more research on threads and got an example of a thread from another stackoverflow post and got it working!</p> <p>I created this thread(Note that after <code>joinRoom(irc, self)</code>the socket is set to nonblocking if the connection is sucessful otherwise it does a <code>loop.clear()</code> wich makes it never enter the bot main loop and just goes straight into <code>irc.close()</code>):</p> <pre><code>gobject.threads_init() class T(threading.Thread): loop = threading.Event() stop = False def start(self, *args): super(T, self).start() def run(self): while not self.stop: #Waits for button to be clicked.# self.loop.wait() #Bot Startup sequence.# deploy_button.set_label('Disconnect') irc = botOpenSocket() joinRoom(irc, self) readbuffer = "" irc.send("CAP REQ :twitch.tv/membership\r\n") irc.send("CAP REQ :twitch.tv/commands\r\n") irc.send("CAP REQ :twitch.tv/tags\r\n") try: with file("commands.json","r") as commandsDatabase: commands = json.load(commandsDatabase) except IOError: commands = {} with file("commands.json","w") as commandsDatabase: json.dump(commands, commandsDatabase) globalcommands = {"spank": True} moderatorcommands = {"addcom": True, "delcom": True} stringspace = " " nothing = "" now = time.time() cooldown = lambda: time.time() &gt; now + 1 #Keeps reading chat and awsering.# while self.loop.is_set(): try: readbuffer = readbuffer + irc.recv(1024) temp = string.split(readbuffer, "\n") readbuffer = temp.pop() except: pass else: for line in temp: ###Essenciais###-------------------------------------------------------------------------------------------------------------------------------------------- #Mostra a linha que e dada pelo servidor de IRC (So pelo sim pelo nao).----------------------------------------------------------------------- print (line) #--------------------------------------------------------------------------------------------------------------------------------------------- #Impede que seja desconectado pelo 
servidor de IRC.------------------------------------------------------------------------------------------- if line.startswith('PING'): irc.send('PONG ' + line.split( ) [ 1 ] + '\r\n') print "PONGED BACK" break #--------------------------------------------------------------------------------------------------------------------------------------------- #Le a linha que e dada pelo servidor de IRC e devevole o utilizador, a menssagem e o canal. Volta se algum for nulo.-------------------------- user = getUser(line) message = getMessage(line) channel = getChannel(line) moderator = getModerator(line) if channel == None or user == None or message == None: break #--------------------------------------------------------------------------------------------------------------------------------------------- #Formata o texto e mostra mostra na consola.-------------------------------------------------------------------------------------------------- print channel + ": " + user + " &gt; " + message #--------------------------------------------------------------------------------------------------------------------------------------------- ###Essenciais END###---------------------------------------------------------------------------------------------------------------------------------------- if message == "!commands\r": globalcommandskeys = str(globalcommands.keys()).replace("[", "").replace("]", "") moderatorcommandskeys = str(moderatorcommands.keys()).replace("[", "").replace("]", "") channelcommandskeys = str(commands.keys()).replace("[", "").replace("]", "") sendMessage(irc, "Global commands: " + globalcommandskeys) if channelcommandskeys != "": sendMessage(irc, "Channel specific commands: " + channelcommandskeys ) if moderator == "1": sendMessage(irc, "Moderator commands: " + moderatorcommandskeys) break if message.startswith("!addcom ") and (moderator == "1" or user == channel): if message.count(" ") &gt;= 2: try: commandadd = command_add(message) answer = 
command_answer(message) except IndexError: sendMessage(irc, user + " the command is used this way !addcom !&lt;command_name&gt; &lt;command_answer&gt;") break if globalcommands.has_key(commandadd) or moderatorcommands.has_key(commandadd): sendMessage(irc, user + " you can't add the command " + '"!' + commandadd + '" !!!') break try: commands[commandadd] except KeyError: commands[commandadd] = answer sendMessage(irc, user + " the command !" + commandadd + " has been added!!!") with file("commands.json","w") as commandsDatabase: json.dump(commands, commandsDatabase) break sendMessage(irc, user + " the command you tried to add alredy exists!!!") break sendMessage(irc, user + " the command is used this way !addcom !&lt;command_name&gt; &lt;command_answer&gt;") break if message.startswith("!delcom ") and (moderator == "1" or user == channel): if message.count(" ") == 1: try: commanddel = command_del(message) except IndexError: sendMessage(irc, user + "the command is used this way !delcom !&lt;command_name&gt;") break if globalcommands.has_key(commanddel) or moderatorcommands.has_key(commanddel): sendMessage(irc, user + " you can't delete the command " + '"!' + commanddel + '" !!!') break try: commands[commanddel] except KeyError: sendMessage(irc, user + " the command you tried to delete doens't exist!!!") break del commands[commanddel] sendMessage(irc, user + " the command !" 
+ commanddel + " has been deleted!!!") with file("commands.json","w") as commandsDatabase: json.dump(commands, commandsDatabase) break sendMessage(irc, user + " the command is used this way !delcom !&lt;command_name&gt;") break if message.startswith("!"): if cooldown() == True: if message.count(" ") == 0: try: command = getCommand(message) except IndexError: break try: sendMessage(irc, commands[command]) now = time.time() cooldown = lambda: time.time() &gt; now + 10 except KeyError: break if message.count(" ") == 1: try: command = getCommandSpaced(message) target = getString(message) except IndexError: break try: replacing = commands[command] sendMessage(irc, replacing.replace("$target", target)) now = time.time() cooldown = lambda: time.time() &gt; now + 10 except KeyError: break break #When button is clicked again do the shutdown sequence.# print "coming here" irc.close() deploy_button.set_label('Connect') #Waits for 0.1 seconds before going to the top again, just to be sure.# time.sleep(0.1) </code></pre> <p>And created this button(I changed to a normal button instead of a toggle because I tougth it looked better, Im pretty sure it would work with a toggle too):</p> <pre><code>&lt;object class="GtkToolButton" id="tool_deploy_button"&gt; &lt;property name="use_action_appearance"&gt;False&lt;/property&gt; &lt;property name="visible"&gt;True&lt;/property&gt; &lt;property name="can_focus"&gt;False&lt;/property&gt; &lt;property name="label" translatable="yes"&gt;Connect&lt;/property&gt; &lt;property name="use_underline"&gt;True&lt;/property&gt; &lt;property name="stock_id"&gt;gtk-jump-to&lt;/property&gt; &lt;signal name="clicked" handler="on_tool_deploy_button_clicked" swapped="no"/&gt; &lt;/object&gt; </code></pre> <p>And defined it:</p> <pre><code>#Defines builder and glade file.# builder = gtk.Builder() builder.add_from_file("GUI.glade") #Gets the main widow and shows it.# main_Window = builder.get_object("blasterbot_mainwindow") main_Window.show_all() #Gets some 
buttons.# deploy_button = builder.get_object("tool_deploy_button") #Starts the thread and the main loop.# thread = T() def bot_thread(*args): if not thread.is_alive(): thread.start() thread.loop.set() #deploy_button.set_label('Disconnect') - Old Sutff return if thread.loop.is_set(): thread.loop.clear() #deploy_button.set_label('Connect') - Old Sutff else: thread.loop.set() #deploy_button.set_label('Disconnect') - Old Sutff </code></pre> <p>And connected the handlers:</p> <pre><code>#Handler Connection.# handlers = { "on_blasterbot_mainwindow_destroy": gtk.main_quit, "on_tool_deploy_button_clicked": bot_thread } builder.connect_signals(handlers) #Stuff I know I need but don't know what is for.# gtk.main() </code></pre>
python|python-2.7|pygtk|irc|glade
0
1,902,609
58,949,275
How to store a response in JSON format using indent in Robot Framework?
<p>I am trying to store the response from an API in a JSON file with indentation, but when I try, writing it out in JSON format fails. Can anybody help me with this Robot Framework code?</p> <p>Robotcode.robot</p> <pre><code>${response} =    [{'id': u'a123', 'tags': [{'name': u'App', 'value': u'12378'}]}]
${req_json}    Json.Dumps    ${response}    indent=3
Create File    results//test.json    ${req_json}
</code></pre> <p>Error while running:</p> <p>TypeError: can't multiply sequence by non-int of type 'unicode'</p> <p>I expected something in an indented format, like:</p> <pre><code>[
   {
      "name": "a123",
      "tags": []
   },
   {
      "name": "Stack001",
      "tags": [
         {
            "name": "App",
            "value": "12378"
         }
      ]
   }
]
</code></pre> <p>How can I achieve this using Robot Framework?</p>
<p>I achieved this using Python.</p> <p>Python code:</p> <pre><code>import json

def writeJson(data, path):
    with open(path, "w") as write_file:
        json.dump(data, write_file, indent=3)
</code></pre> <p>Robot code:</p> <pre><code>writeJson    ${response}    results//test.json
</code></pre>
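<p>For context on the original TypeError: <code>json</code> needs <code>indent</code> to be an integer. Under Python 2, a unicode string such as <code>u'3'</code> (which is what keyword arguments passed from a Robot Framework keyword table typically arrive as, unless converted) makes the encoder attempt <code>' ' * u'3'</code> and fail with exactly the error shown in the question. A minimal sketch of the working call, with made-up sample data:</p>

```python
import json

# hypothetical sample data shaped like the question's response
data = [{"id": "a123", "tags": [{"name": "App", "value": "12378"}]}]

# indent must be an integer, not the string "3"
text = json.dumps(data, indent=3)
```

<p>The resulting string round-trips back to the same structure with <code>json.loads(text)</code>.</p>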
python|robotframework
1
1,902,610
58,656,473
How to fix KeyError: "None of [Index(['0', '3'], dtype='object')] are in the [columns]" in Pandas for headerless data
<p>I am trying to remove two columns from a headerless .tsv file</p> <pre><code>import pandas as pd

df_train = pd.read_csv("train_sample.tsv", sep="\t", header=None)
df_train = df_train[['0', '3']]
df_train.head()
</code></pre> <p>However, that gives me this error:</p> <pre><code>KeyError: "None of [Index(['0', '3'], dtype='object')] are in the [columns]"
</code></pre> <p>In some similar cases the problem was extra spaces or tabs in the column names, but unfortunately when I tried</p> <pre><code>for col in df_train.columns:
    print(col)
</code></pre> <p>there seem to be no extra characters.</p> <p>Also, when I tried some other tricks, it turned out that the column names are of type int instead of str. But when I try to select the columns by int I only get some index errors.</p> <p>EDIT: The index error was caused by a typo, so everything works as expected. This question should perhaps be deleted, as <code>df_train = df_train[['0', '3']]</code> actually worked as expected, but in my case a typo caused an index error that seemed relevant.</p>
<p>The columns are integers because <code>header=None</code>, so use <code>[0, 3]</code> instead of <code>['0', '3']</code>:</p> <pre><code>df_train = df_train[[0, 3]]
</code></pre>
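<p>A small self-contained demonstration (with made-up in-memory TSV data, since the original file is not available):</p>

```python
import pandas as pd
from io import StringIO

tsv = "a\t1\tx\t10\nb\t2\ty\t20\n"
df_train = pd.read_csv(StringIO(tsv), sep="\t", header=None)

# with header=None the labels are the integers 0..3, not the strings '0'..'3'
print(df_train.columns.tolist())   # [0, 1, 2, 3]

subset = df_train[[0, 3]]          # works
# df_train[['0', '3']] would raise the KeyError from the question
```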
pandas|dataframe
1
1,902,611
58,676,379
How to scrape with BeautifulSoup, waiting for elements in the page to finish loading before saving the soup element
<p>I'm trying to scrape data from <a href="https://www.exito.com/televisor-led-samsung-55-pulgadas-uhd-4k-smart-tv-serie-7-24449/p" rel="nofollow noreferrer">THIS WEBSITE</a>, which has 3 kinds of prices on some products (muted price, red price and black price). I observed that the red price changes before the page loads when the product has 3 prices.</p> <p>When I scrape the website I get just two prices. I think that if the code waits until the page fully loads, I will get all the prices.</p> <p>Here is my code:</p> <pre class="lang-py prettyprint-override"><code>import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://www.exito.com/televisor-led-samsung-55-pulgadas-uhd-4k-smart-tv-serie-7-24449/p'
req = requests.get(url)
soup = BeautifulSoup(req.text, "lxml")

# Muted Price
MutedPrice = soup.find_all("span",{'class':'exito-vtex-components-2-x-listPriceValue ph2 dib strike custom-list-price fw5 exito-vtex-component-precio-tachado'})[0].text
MutedPrice = pd.to_numeric(MutedPrice[2-len(MutedPrice):].replace('.',''))

# Red Price
RedPrice = soup.find_all("span",{'class':'exito-vtex-components-2-x-sellingPrice fw1 f3 custom-selling-price dib ph2 exito-vtex-component-precio-rojo'})[0].text
RedPrice = pd.to_numeric(RedPrice[2-len(RedPrice):].replace('.',''))

# Black Price
BlackPrice = soup.find_all("span",{'class':'exito-vtex-components-2-x-alliedPrice fw1 f3 custom-selling-price dib ph2 exito-vtex-component-precio-negro'})[0].text
BlackPrice = pd.to_numeric(BlackPrice[2-len(BlackPrice):].replace('.',''))

print('Muted Price:', MutedPrice)
print('Red Price:', RedPrice)
print('Black Price:', BlackPrice)
</code></pre> <p>Actual results: Muted Price: 3199900, Red Price: 1649868, Black Price: 0</p> <p>Expected results: Muted Price: 3199900, Red Price: 1550032, Black Price: 1649868</p>
<p>It might be that those values are rendered dynamically, i.e. the values might be populated by JavaScript in the page.</p> <p><code>requests.get()</code> simply returns the markup received from the server without any further client-side changes, so it's not fully about waiting.</p> <p>You could perhaps use <a href="https://sites.google.com/a/chromium.org/chromedriver/downloads" rel="nofollow noreferrer">Selenium Chrome Webdriver</a> to load the page URL and get the page source. (Or you can use the Firefox driver.)</p> <p>Go to <code>chrome://settings/help</code> to check your current Chrome version and download the driver for that version from <a href="https://sites.google.com/a/chromium.org/chromedriver/downloads" rel="nofollow noreferrer">here</a>. Make sure to either keep the driver file in your <code>PATH</code> or in the same folder where your Python script is.</p> <p>Try replacing the top 3 lines of your existing code with this:</p> <pre class="lang-py prettyprint-override"><code>from contextlib import closing
from selenium.webdriver import Chrome  # pip install selenium

url = 'https://www.exito.com/televisor-led-samsung-55-pulgadas-uhd-4k-smart-tv-serie-7-24449/p'

# use Chrome to get page with javascript generated content
with closing(Chrome(executable_path="./chromedriver")) as browser:
    browser.get(url)
    page_source = browser.page_source

soup = BeautifulSoup(page_source, "lxml")
</code></pre> <p>Outputs:</p> <pre><code>Muted Price: 3199900
Red Price: 1550032
Black Price: 1649868
</code></pre> <hr> <p>References:</p> <p><a href="https://stackoverflow.com/questions/8960288/get-page-generated-with-javascript-in-python">Get page generated with Javascript in Python</a></p> <p><a href="https://stackoverflow.com/questions/40555930/selenium-chromedriver-executable-needs-to-be-in-path/40556092">selenium - chromedriver executable needs to be in PATH</a></p>
python|python-3.x|selenium|web-scraping|beautifulsoup
2
1,902,612
48,998,371
SHA256 vs HMAC empty message
<p>Why do these functions return different values?</p> <pre><code>from hashlib import sha256
import hmac

seed = "seed".encode('utf-8')

def genSHA256():
    return sha256(seed).hexdigest()
    # 19b25856e1c150ca834cffc8b59b23adbd0ec0389e58eb22b3b64768098d002b

def genHMACsha256():
    return hmac.new(seed, b"", sha256).hexdigest()
    # 2ad1ced5a9ef8e90bce26c0ac9fae5af5e4b4442b2315ed58bf772a54e24fd50
</code></pre> <p>If I put an empty string in the message value of the HMAC one, why does it not return the same value as the simple SHA256? Are these two values related?</p>
<p>The seed is masked with <code>ipad</code> and <code>opad</code> when HMAC is used, and HMAC consists of two hash iterations, even when HMAC is used over an empty string.</p> <p>Basically, the only way to get the same results as HMAC from a hash function is to use it to build HMAC, as the HMAC definition only uses the hash as a secure primitive function.</p>
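<p>The two-pass construction can be reproduced by hand from the HMAC definition (RFC 2104): the key is zero-padded to the hash's block size, XORed with <code>ipad</code>/<code>opad</code>, and hashed twice. A sketch that rebuilds the question's HMAC value from plain <code>sha256</code>:</p>

```python
from hashlib import sha256
import hmac

key = b"seed"
# SHA-256 processes 64-byte blocks; a short key is zero-padded to that size
padded = key.ljust(64, b"\x00")
ipad_key = bytes(b ^ 0x36 for b in padded)
opad_key = bytes(b ^ 0x5C for b in padded)

# HMAC(K, m) = H((K xor opad) + H((K xor ipad) + m)) -- two hash passes,
# so even with an empty message it never equals a plain sha256(key) digest
inner = sha256(ipad_key + b"").digest()
manual = sha256(opad_key + inner).hexdigest()

assert manual == hmac.new(key, b"", sha256).hexdigest()
```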
python|hash|cryptography|sha256|hmac
1
1,902,613
70,829,034
How to compare two poly lines for equality?
<p>I have two poly lines (paths), each represented by an array of 2D points. I'd like to compute a distance or similarity score between the two. There may be a different number of points in each array. If you were to plot the poly lines and they were directly on top of each other, the distance should be zero.</p> <pre><code>import numpy as np

p1 = np.array([[0,0], [5,5], [9,9]])
p2 = np.array([[0,0], [3,3], [6,6], [9,9]])
p3 = np.array([[0,0], [3,4], [6,7], [9,9]])
p4 = np.array([[0,0], [0,9]])

polyline_dist(p1, p2)  # Should be 0 since the plots are identical
polyline_dist(p1, p3)  # Should be small since the plots are close
polyline_dist(p1, p4)  # Should be larger since the plots are much different
</code></pre> <p>I tried one approach where I calculated the distance from each point in array 1 to the line segments from array 2 and took the minimum distance, then took the average over all the points. This worked, but got very slow for longer arrays with hundreds of points.</p> <p>Any suggestions would be welcome!</p>
<p>You could try finding the area each plot casts onto the x and y axes, then comparing the intersection of that area. This is similar to calculus, where for two curves f(x) and g(x) you can find the area between them using the integral from the lower bound to the upper bound of (f(x) - g(x)) dx. If your lines don't have overlapping domains/codomains, you may need to add some penalty, and start at the maximum of p1[0] and p2[0] and end at the minimum of p1[-1] and p2[-1]. Polylines that are the same would have 0 distance between each other, and thus the area between them would be 0. You would check both the x and y axes for cases where polylines are heavily vertical.</p> <p>I wrote the following code, minus the <code>sample_at_point</code> part, due to it getting late. All that part needs to do is find the lower and upper bounding points of p in p1 for the given axis, then define a line equation and find the value of p on that line equation, and return it. I'll edit this answer tomorrow, but figured I would post what I have right now. This solution would work for the polylines you show above; however, it would fail if the polylines are not similar to basic curves (e.g. a circle, or any curve where f(x) could have two different values).</p> <pre><code>def polydist(p1, p2):
    x_area = area_between_squared(p1, p2, axis=0, value=1)
    y_area = area_between_squared(p1, p2, axis=1, value=0)
    return (x_area + y_area)**0.5

def sample_at_point(p1, p, axis, value):
    &quot;&quot;&quot;
    # There should be a fast way to do this,
    # Find the bounding points that p is between,
    # the bounding points are the max point &lt; p and the min point &gt; p
    # these points form a line,
    # find the value of p applied to that line formula.
    &quot;&quot;&quot;

def area_between_squared(p1, p2, axis, value):
    start = max(p1[0][axis], p2[0][axis])
    start_penalty = start - min(p1[0][axis], p2[0][axis])
    end = min(p1[-1][axis], p2[-1][axis])
    end_penalty = max(p1[-1][axis], p2[-1][axis]) - end

    # Getting late, may edit this later tomorrow with the code for this to
    # work but feel free to implement the following idea below:
    axis_points = [x[axis] for x in p1]
    axis_points.extend([x[axis] for x in p2])
    axis_points.sort()

    prev_p1_v = sample_at_point(p1, axis_points[0], axis, value)
    prev_p2_v = sample_at_point(p2, axis_points[0], axis, value)
    prev_axis_point = axis_points[0]
    axis_points.pop(0)  # remove the first item

    sum = 0
    for e in axis_points:
        p1_v = sample_at_point(p1, e, axis, value)
        p2_v = sample_at_point(p2, e, axis, value)
        point_c = p1_v - p2_v
        point_b = prev_p1_v - prev_p2_v
        area = 0.5 * (point_c + point_b) * (e - prev_axis_point)
        sum += area
        prev_p1_v = p1_v
        prev_p2_v = p2_v
        prev_axis_point = e

    # square the result value, add the penalties since this difference should be positive
    sum *= sum
    sum += start_penalty + end_penalty
    return sum
</code></pre>
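<p>The helper the answer leaves unimplemented (<code>sample_at_point</code>) can be sketched with linear interpolation. This is a hypothetical completion, not the answer author's code; it assumes each polyline's points are sorted (monotonic) along the chosen axis, matching the "basic curves" restriction stated above:</p>

```python
import numpy as np

def sample_at_point(poly, p, axis, value):
    # Linearly interpolate the polyline's `value` coordinate at position `p`
    # along `axis`; assumes the points are sorted along `axis`.
    xs = [pt[axis] for pt in poly]
    ys = [pt[value] for pt in poly]
    return float(np.interp(p, xs, ys))
```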
python|numpy|math|geometry
0
1,902,614
6,287,987
Callback inside inlineCallbacks function
<p>Let's say I have a function like this:</p> <pre><code>def display(this, that):
    print this, that
</code></pre> <p>and a class:</p> <pre><code>class Runner(object):
    def __init__(self, callback):
        self.callback = callback
        self.loop = twisted.internet.task.LoopingCall(repeat)
        self.loop.start(0)

    @defer.inlineCallbacks
    def repeat(self):
        this = yield do_this()
        that = yield do_that()
        if this and that:
            # now I want to call the callback function
            yield self.callback(this, that)  # makes sense?

runner = Runner(display)
reactor.run()
</code></pre> <p>Basically what I want to do is create a Runner class which will do some specific tasks, and every time it gets a result, it will call the given callback function. Instead of creating a new function which does a specific thing, I want to create a generic class which does only one thing. E.g.:</p> <pre><code>class TwitterReader(object):
    def __init__(self, callback):
        ...
        ...

    @defer.inlineCallbacks
    def get_messages(self):
        ...
        ...
        yield callback(messages)

class MessageFilter(object):
    def __init__(self):
        self.bad_messages = open('bad_messages.txt', 'w')
        self.twitter = TwitterReader(self.message_received)

    def message_received(messages):
        for message in messages:
            for bad_word in BAD_WORDS:
                if bad_word in message:
                    self.bad_messages.write(message)
                    break
</code></pre> <p>I'm new to twisted. So, I'm not sure if this is the right way to do it. Is it?</p> <p>Thanks</p>
<p>Your problem is that <code>callback</code> inside <code>repeat</code> should instead be <code>self.callback</code>.</p> <p>Other than that your example should work exactly as written.</p>
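<p>A minimal illustration of the fix, without Twisted (the Twisted-specific parts are stripped out, so this is only a sketch of the name-binding point):</p>

```python
class Runner(object):
    # The constructor argument is stored on the instance, so inside a method
    # it must be read as self.callback; the bare name `callback` would raise
    # a NameError there.
    def __init__(self, callback):
        self.callback = callback

    def repeat(self, this, that):
        return self.callback(this, that)
```

<p>Calling <code>Runner(display).repeat("this", "that")</code> then invokes the stored callback with both values.</p>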
python|callback|twisted|yield
2
1,902,615
67,993,241
IndexError: Boolean Index did not match indexed array
<p>I am trying to solve this issue in a specific way. Would love pointers on how to proceed.</p> <p>I have df1, which is:</p> <pre><code>import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Model': ['model1', 'model2', 'model3']})
</code></pre> <p>Then there is df2, which is:</p> <pre><code>model1 = pd.DataFrame({'Model' : ['model1', 'model1', 'model1'],
                       'Rule' : ['High', 'Low', 'High'],
                       'Name' : ['A', 'B', 'C']})
model2 = pd.DataFrame({'Model' : ['model2', 'model2', 'model2'],
                       'Rule' : ['Low', 'Low', 'High'],
                       'Name' : ['B', 'D', 'F']})
model3 = pd.DataFrame({'Model' : ['model3', 'model3', 'model3'],
                       'Rule' : ['High', 'High', 'High'],
                       'Name' : ['D', 'E', 'F']})
df2 = [model1, model2, model3]
</code></pre> <p>Then there is df3, which is:</p> <pre><code>df3 = pd.DataFrame({'Name' : ['A', 'B', 'C', 'D', 'E', 'F'],
                    'model1' : [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,],
                    'High1' : [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,],
                    'Low1' : [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,],
                    'model2' : [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,],
                    'High2' : [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,],
                    'Low2' : [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,],
                    'model3' : [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,],
                    'High3' : [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,],
                    'Low3' : [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,]})
</code></pre> <p>I want the output to look like this:</p> <pre><code>df3 = pd.DataFrame({'Name' : ['A', 'B', 'C', 'D', 'E', 'F'],
                    'model1' : ['Yes', 'Yes', 'Yes', np.nan, np.nan, np.nan,],
                    'High1' : [0, np.nan, 0, np.nan, np.nan, np.nan,],
                    'Low1' : [np.nan, 0, np.nan, np.nan, np.nan, np.nan,],
                    'model2' : [np.nan, 'Yes', np.nan, 'Yes', np.nan, 'Yes',],
                    'High2' : [np.nan, np.nan, np.nan, np.nan, np.nan, 0,],
                    'Low2' : [np.nan, 0, np.nan, 0, np.nan, np.nan,],
                    'model3' : [np.nan, np.nan, np.nan, 'Yes', 'Yes', 'Yes',],
                    'High3' : [np.nan, np.nan, np.nan, 0, 0, 0,],
                    'Low3' : [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,]})
</code></pre> <p>This is my code:</p> <pre><code>for model in df1['Model']:
    col_index = df3.columns.get_loc(model)
    df3.iloc[df3['Name'].isin(df2[model]['Name']), col_index] = 'Yes'
    df3.iloc[df3['Name'].isin(df2[model]['Name']) &amp; (df2[model]['Rule']=='High'), col_index+1] = 0
    df3.iloc[df3['Name'].isin(df2[model]['Name']) &amp; (df2[model]['Rule']=='Low'), col_index+2] = 0
</code></pre> <p>This gives me the following error:</p> <pre><code>IndexError: boolean index did not match indexed array along dimension 0; dimension is 389 but corresponding boolean dimension is 853
</code></pre> <p>I'm assuming this is caused by (df2[model]['Rule']=='High') where 'High' is a scalar.</p> <p>Note: I want the code to work through this using a for loop as shown in the code above, because it helps with additional stuff I'm doing.</p>
<p>I think you are just looking for <code>pivot</code></p> <pre><code>df3.pivot('Name', 'Property', 'Name').notnull()

Property   colA   colB   colC
Name
A          True   True  False
B         False  False   True
C          True  False   True
</code></pre>
python|pandas|for-loop|indexing|scalar
0
1,902,616
66,948,315
Pandas sql python
<p>How do I find rows with the same salary in an employee table using a pandas DataFrame? The dictionary below is named <code>employee</code>; create it as a DataFrame and find the rows in it that share a salary.</p> <pre><code>employee = {'Name': ['Bob', 'Steve', 'Mark', 'Lisa', 'Hans'],
            'Station': [1, 2, 3, 4, 5],
            'Salary': [2000, 1750, 2050, 2200, 2000]}
employee
</code></pre>
<p>You can group by the <code>Salary</code> column, then filter out the groups that contain only one row (i.e. salaries with no duplicate):</p> <pre class="lang-py prettyprint-override"><code>df.groupby('Salary').filter(lambda x: len(x) &gt; 1)
</code></pre>
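<p>Applied to the question's data, this keeps exactly the rows whose salary appears more than once:</p>

```python
import pandas as pd

employee = {'Name': ['Bob', 'Steve', 'Mark', 'Lisa', 'Hans'],
            'Station': [1, 2, 3, 4, 5],
            'Salary': [2000, 1750, 2050, 2200, 2000]}
df = pd.DataFrame(employee)

# keep only rows whose salary occurs more than once
same_salary = df.groupby('Salary').filter(lambda x: len(x) > 1)
print(same_salary)   # Bob and Hans both earn 2000
```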
python|pandas|dataframe
0
1,902,617
66,779,161
Need to delete every file that includes a specific word
<p>I am trying to make a button that deletes all files that include the word &quot;cast&quot;, because I have many files with that word included but all with a different prefix, e.g. &quot;(copy1)&quot;.</p> <pre><code>import tkinter as tk
import os

root = tk.Tk()
root.title(&quot;Matt le blanc terminator&quot;)
root.geometry('400x250')

def terminate():
    if os.path.exists('cast.*'):
        os.remove('cast.*')
    else:
        print(&quot;File does not exist&quot;)

TBtn = tk.Button(text=&quot;TERMINATE&quot;, fg=&quot;red&quot;, bg=&quot;black&quot;, command=terminate, width=100, height=50)
TBtn.pack()

root.mainloop()
</code></pre>
<p>Maybe it is simpler to list the directory and then check, file by file, whether the name contains &quot;cast&quot;:</p> <pre><code>import os

all_files = os.listdir()
for f in all_files:
    file_without_extension = os.path.splitext(f)[0]
    if &quot;cast&quot; in file_without_extension:
        os.remove(f)
</code></pre>
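<p>The same idea can be wrapped in a small reusable helper that takes the directory and keyword as parameters (the function name here is made up for illustration):</p>

```python
import os

def delete_matching(directory, keyword):
    """Remove every file in `directory` whose name (extension excluded)
    contains `keyword`; return the names that were removed."""
    removed = []
    for name in os.listdir(directory):
        stem = os.path.splitext(name)[0]
        if keyword in stem:
            os.remove(os.path.join(directory, name))
            removed.append(name)
    return removed
```

<p>Calling <code>delete_matching(".", "cast")</code> from the button handler would then delete <code>cast.txt</code>, <code>(copy1)cast.txt</code>, etc., while leaving other files alone.</p>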
python|directory|delete-file
0
1,902,618
66,934,104
json.decoder.JSONDecodeError: Expecting value: line 1890 column 29 (char 83535)
<p>I have this json file <a href="https://www.dropbox.com/s/yjza7cu136ye06e/configClear_v2.json?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/yjza7cu136ye06e/configClear_v2.json?dl=0</a></p> <p>I get the following error message:</p> <pre><code>Traceback (most recent call last):
  File &quot;test_02.py&quot;, line 8, in &lt;module&gt;
    data.append(json.load(f))
  File &quot;C:\Python\lib\json\__init__.py&quot;, line 293, in load
    return loads(fp.read(),
  File &quot;C:\Python\lib\json\__init__.py&quot;, line 357, in loads
    return _default_decoder.decode(s)
  File &quot;C:\Python\lib\json\decoder.py&quot;, line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File &quot;C:\Python\lib\json\decoder.py&quot;, line 355, in raw_decode
    raise JSONDecodeError(&quot;Expecting value&quot;, s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1890 column 29 (char 83535)
</code></pre> <p>test_02.py</p> <pre><code>import json

with open(&quot;configClear_v2.json&quot;, &quot;r&quot;, encoding='utf-8') as f:
    data = json.load(f)

print(data)
</code></pre>
<p>The JSON is invalid. The lines leading up to 1890 are these:</p> <pre><code>    {
        &quot;sequence&quot;: 90,
        &quot;ace-rule&quot;: {
            &quot;action&quot;: &quot;deny&quot;,
            &quot;protocol&quot;: &quot;ip&quot;,
            &quot;dst-host&quot;: &quot;217.105.224.25&quot;,
            &quot;any&quot;: [
                null
            ]
        }
    },
]
},
</code></pre> <p>1890 is the last <code>]</code>. The error is the <code>},</code> 2 lines before. You can't have a <code>,</code> after the last element of an array or object in JSON. This is allowed in a number of programming languages (Python, PHP, JavaScript, for example), but JSON is deliberately more restrictive in its syntax, since it's not intended for humans to write by hand.</p> <p>Remove the <code>,</code> after the <code>}</code> and the error will go away. Then fix whatever program is creating the JSON. You should always use a library function to create JSON, to ensure that it's properly formatted.</p>
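<p>A tiny reproduction of the failure mode, using a made-up fragment shaped like the file's offending lines:</p>

```python
import json

bad = '[{"sequence": 90},\n]'   # trailing comma before the closing bracket
good = '[{"sequence": 90}\n]'   # same content without the trailing comma

try:
    json.loads(bad)
    parsed_bad = True
except json.JSONDecodeError:
    parsed_bad = False          # the parser rejects the trailing comma

data = json.loads(good)
```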
python|json
2
1,902,619
63,772,597
Create a combination of "relative" and "grouped" chart in Python
<p>I need to create a combination &quot;relative&quot; and &quot;grouped&quot; chart in Plotly.</p> <p>I figured out how to create a stacked and grouped chart using this code:</p> <pre><code>from plotly import graph_objects as go
import plotly

pyplt = plotly.offline.plot

data = {
    &quot;Sports_19&quot;: [15, 23, 32, 10, 23, 22, 32, 24],
    &quot;Casual_19&quot;: [4, 12, 11, 14, 15, 12, 22, 14],
    &quot;Yoga_19&quot;: [4, 8, 18, 6, 12, 11, 10, 4],
    &quot;Sports_20&quot;: [11, 18, 18, 0, 20, 12, 12, 11],
    &quot;Casual_20&quot;: [20, 10, 9, 6, 10, 11, 17, 22],
    &quot;Yoga_20&quot;: [11, 18, 18, 0, 20, 12, 12, 11],
    &quot;labels&quot;: [&quot;January&quot;, &quot;February&quot;, &quot;March&quot;, &quot;April&quot;, &quot;May&quot;, 'June', 'July', &quot;August&quot;]
}

fig = go.Figure()
fig.add_trace(go.Bar(name=&quot;Sports&quot;, x=data[&quot;labels&quot;], y=data[&quot;Sports_19&quot;], offsetgroup=19, marker_color='lightsalmon', text=data[&quot;Sports_19&quot;], textposition='auto'))
fig.add_trace(go.Bar(name=&quot;Casual&quot;, x=data['labels'], y=data['Casual_19'], offsetgroup=19, base=data['Sports_19'], marker_color='crimson', text=data[&quot;Casual_19&quot;], textposition='auto'))
fig.add_trace(go.Bar(name=&quot;Yoga&quot;, x=data['labels'], y=data['Yoga_19'], marker_color='indianred', text=data[&quot;Yoga_19&quot;], textposition='auto', offsetgroup=19, base=[val1 + val2 for val1, val2 in zip(data[&quot;Sports_19&quot;], data[&quot;Casual_19&quot;])]))
fig.add_trace(go.Bar(name=&quot;Sports_20&quot;, x=data[&quot;labels&quot;], y=data[&quot;Sports_20&quot;], offsetgroup=20, marker_color='lightsalmon', showlegend=False, text=data[&quot;Sports_20&quot;], textposition='auto'))
fig.add_trace(go.Bar(name=&quot;Casual_20&quot;, x=data['labels'], y=data['Casual_20'], offsetgroup=20, base=data['Sports_20'], marker_color='crimson', showlegend=False, text=data[&quot;Casual_20&quot;], textposition='auto'))
fig.add_trace(go.Bar(name=&quot;Yoga_20&quot;, x=data['labels'], y=data['Yoga_20'], marker_color='indianred', text=data[&quot;Yoga_20&quot;], showlegend=False, textposition='auto', offsetgroup=20, base=[val1 + val2 for val1, val2 in zip(data[&quot;Sports_20&quot;], data[&quot;Casual_20&quot;])]))

fig.update_layout(title=&quot;2019 vs 2020 Sales by Category&quot;, yaxis_title=&quot;Sales amount in US$&quot;)
fig.show()
pyplt(fig, auto_open=True)
</code></pre> <p>Output is this: <a href="https://i.stack.imgur.com/kKmPD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kKmPD.jpg" alt="enter image description here" /></a></p> <p>Is there any way I can convert this graph to a combination of &quot;relative&quot; and &quot;grouped&quot;? Maybe not with Plotly, but with matplotlib or other tools?</p> <p>P.S. Here is an example of a &quot;relative&quot; graph (but it's not grouped): <a href="https://i.stack.imgur.com/9IQqm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9IQqm.png" alt="enter image description here" /></a></p>
<p>Probably the most straightforward way is to create two new dataframes <code>df_perc_19</code> and <code>df_perc_20</code> to store your data, normalized to relative percentages for each month in each year, rounding off to two digits using <code>.round(2)</code> since a long decimal will cause the default direction of the text to change - feel free to adjust this however you like.</p> <p>Then access the values in these new dataframes for your traces, and although it's ugly, you can get percentages to display for the <code>text</code> parameter using something like: <code>text=[str(x)+&quot;%&quot; for x in df_perc_19[&quot;Casual_19&quot;]]</code></p> <pre><code>import pandas as pd
import plotly
from plotly import graph_objects as go

# pyplt = plotly.offline.plot

data = {
    &quot;Sports_19&quot;: [15, 23, 32, 10, 23, 22, 32, 24],
    &quot;Casual_19&quot;: [4, 12, 11, 14, 15, 12, 22, 14],
    &quot;Yoga_19&quot;: [4, 8, 18, 6, 12, 11, 10, 4],
    &quot;Sports_20&quot;: [11, 18, 18, 0, 20, 12, 12, 11],
    &quot;Casual_20&quot;: [20, 10, 9, 6, 10, 11, 17, 22],
    &quot;Yoga_20&quot;: [11, 18, 18, 0, 20, 12, 12, 11],
    # &quot;labels&quot;: [&quot;January&quot;, &quot;February&quot;, &quot;March&quot;, &quot;April&quot;, &quot;May&quot;, 'June', 'July', &quot;August&quot;]
}
labels = [&quot;January&quot;, &quot;February&quot;, &quot;March&quot;, &quot;April&quot;, &quot;May&quot;, 'June', 'July', &quot;August&quot;]
df = pd.DataFrame(data=data, index=labels)

## normalize data for the months of 2019, and the months of 2020
df_perc_19 = df.apply(lambda x: 100*x[[&quot;Sports_19&quot;,&quot;Casual_19&quot;,&quot;Yoga_19&quot;]] / x[[&quot;Sports_19&quot;,&quot;Casual_19&quot;,&quot;Yoga_19&quot;]].sum(), axis=1).round(2)
df_perc_20 = df.apply(lambda x: 100*x[[&quot;Sports_20&quot;,&quot;Casual_20&quot;,&quot;Yoga_20&quot;]] / x[[&quot;Sports_20&quot;,&quot;Casual_20&quot;,&quot;Yoga_20&quot;]].sum(), axis=1).round(2)

fig = go.Figure()

## traces for 2019
fig.add_trace(go.Bar(name=&quot;Sports&quot;, x=labels, y=df_perc_19[&quot;Sports_19&quot;], offsetgroup=19, marker_color='lightsalmon', text=[str(x)+&quot;%&quot; for x in df_perc_19[&quot;Sports_19&quot;]], textposition='auto'))
fig.add_trace(go.Bar(name=&quot;Casual&quot;, x=labels, y=df_perc_19['Casual_19'], offsetgroup=19, base=df_perc_19['Sports_19'], marker_color='crimson', text=[str(x)+&quot;%&quot; for x in df_perc_19[&quot;Casual_19&quot;]], textposition='auto'))
fig.add_trace(go.Bar(name=&quot;Yoga&quot;, x=labels, y=df_perc_19['Yoga_19'], marker_color='indianred', text=[str(x)+&quot;%&quot; for x in df_perc_19[&quot;Yoga_19&quot;]], textposition='auto', offsetgroup=19, base=[val1 + val2 for val1, val2 in zip(df_perc_19[&quot;Sports_19&quot;], df_perc_19[&quot;Casual_19&quot;])]))

## traces for 2020
fig.add_trace(go.Bar(name=&quot;Sports_20&quot;, x=labels, y=df_perc_20[&quot;Sports_20&quot;], offsetgroup=20, marker_color='lightsalmon', showlegend=False, text=[str(x)+&quot;%&quot; for x in df_perc_20[&quot;Sports_20&quot;]], textposition='auto'))
fig.add_trace(go.Bar(name=&quot;Casual_20&quot;, x=labels, y=df_perc_20['Casual_20'], offsetgroup=20, base=df_perc_20['Sports_20'], marker_color='crimson', showlegend=False, text=[str(x)+&quot;%&quot; for x in df_perc_20[&quot;Casual_20&quot;]], textposition='auto'))
fig.add_trace(go.Bar(name=&quot;Yoga_20&quot;, x=labels, y=df_perc_20['Yoga_20'], marker_color='indianred', text=[str(x)+&quot;%&quot; for x in df_perc_20[&quot;Yoga_20&quot;]], showlegend=False, textposition='auto', offsetgroup=20, base=[val1 + val2 for val1, val2 in zip(df_perc_20[&quot;Sports_20&quot;], df_perc_20[&quot;Casual_20&quot;])]))

fig.update_layout(title=&quot;2019 vs 2020 Sales by Category&quot;, yaxis_title=&quot;Sales amount in US$ (percentage)&quot;)
fig.show()
# pyplt(fig, auto_open=True)
</code></pre> <p><a href="https://i.stack.imgur.com/utn5G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/utn5G.png" alt="enter image description here" /></a></p>
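<p>As a side note, the row-normalization step done with <code>apply</code> and a lambda above can also be written vectorized. A small sketch with a two-month subset of the question's numbers:</p>

```python
import pandas as pd

df = pd.DataFrame({"Sports_19": [15, 23],
                   "Casual_19": [4, 12],
                   "Yoga_19": [4, 8]})

# divide every row by its own sum, then scale to percentages
perc = df.div(df.sum(axis=1), axis=0).mul(100).round(2)
```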
python|matplotlib|plotly
2
1,902,620
66,635,723
Python: Optional parameter, wrong default value
<p>I have code like this example:</p> <pre><code>default_var = &quot;abc&quot; def set_default_var(): global default_var default_var = &quot;something different&quot; def ex_func(var1=&quot;&quot;, var2=&quot;&quot;, var3=default_var): print(var3) set_default_var() ex_func() &gt;&gt;&gt;abc </code></pre> <p>As I set <strong>var3</strong> to default_var in the parameter list, I expect it to have the value <strong>&quot;something different&quot;</strong> when I call the function without specifying <strong>var3</strong>. However, the print shows me <strong>&quot;abc&quot;</strong>. Even during debugging, the debugger shows me that default_var is set to <strong>&quot;something different&quot;</strong>, but <strong>var3</strong> is not. Is this a bug, or a very unexpected feature?</p> <p>Thank you!</p>
<p>The default value is evaluated and stored once when the function is defined. It is not re-checked each time the function is called. This will work as you expect:</p> <pre><code>default_var = &quot;abc&quot; def set_default_var(): global default_var default_var = &quot;something different&quot; def ex_func(var1=&quot;&quot;, var2=&quot;&quot;, var3=None): if var3 is None: var3 = default_var print(var3) set_default_var() ex_func() </code></pre>
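The evaluate-once rule is easy to demonstrate in isolation; this sketch mirrors the question's setup with the same names:

```python
default_var = "abc"

# The default expression is evaluated HERE, at definition time, so
# ex_func permanently captures the string "abc".
def ex_func(var3=default_var):
    return var3

default_var = "something different"  # rebinding the global changes nothing

print(ex_func())  # abc

# The None-sentinel pattern from the answer defers the lookup to call time:
def ex_func2(var3=None):
    if var3 is None:
        var3 = default_var
    return var3

print(ex_func2())  # something different
```

The sentinel version re-reads `default_var` on every call, which is exactly the behavior the question expected.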
python|python-3.x|function
1
1,902,621
72,234,953
On Python, How to open multiple command windows simultaneously (in cascade)
<p>I intend to monitor connectivity (by ping) of 3 devices on the network. So far I have tried the following scripts, but in some of them the commands run in a single window:</p> <pre><code>import os import multiprocessing def xxx(): while True: os.system('cmd /c &quot;ping 192.168.1.254 -t &quot; ') if __name__ == '__main__': jobs = [] for i in range(3): p = multiprocessing.Process(target=xxx) jobs.append(p) p.start() # _______________________________________________________ import threading import os def xxx(): os.system('cmd /c &quot;ping 192.168.1.254 -t &quot; ') def yyy(): os.system('cmd /c &quot;ping 127.1.1.0 -t &quot; ') def main(): server_thread = threading.Thread(target=xxx) client_thread = threading.Thread(target=yyy) server_thread.start() client_thread.start() #__________________________________________________ import os import multiprocessing def xxx(): os.system( 'cmd /c &quot;ping 192.168.1.254 -t &quot; ') def yyy(): os.system('cmd /c &quot;ping 127.1.1.0 -t &quot;') if __name__ == '__main__': jobs = [] p = multiprocessing.Process(target=xxx) jobs.append(p) p.start() q = multiprocessing.Process(target=yyy) jobs.append(q) q.start() </code></pre>
<p>Maybe just run another Python script, containing your ping logic, from your main script with:</p> <pre><code>import subprocess subprocess.call('python ping_script.py', shell=True) </code></pre> <p>But if you want some visual feedback on your processes, I would recommend using Jupyter notebooks.</p>
python|windows|cmd|command|instance
0
1,902,622
65,878,027
Invalid Operation with Decimal
<p>I'm using Beautiful Soup to extract information from a website and to get the price of an item. I used the code below to create a variable named prices to store the information. Then I created a loop to iterate through all of the items, and now I'm trying to compare it to another variable named price_thres to determine if it's less than or equal to the amount. Running this code prints a few of the correct values but it also prints</p> <pre class="lang-none prettyprint-override"><code> price_in_dec = Decimal(i.text) decimal.InvalidOperation: [&lt;class 'decimal.ConversionSyntax'&gt;] </code></pre> <pre><code>prices = soup.find_all(&quot;span&quot;, itemprop=&quot;price&quot;) for i in prices: price_in_dec = Decimal(i.text) if price_in_dec &lt;= price_thres: print(price_in_dec) </code></pre>
<p>You seem to have commas (<code>,</code>) in some of your prices.</p> <p>Try this:</p> <pre><code>import locale # Set to US locale locale.setlocale(locale.LC_ALL, 'USA') prices = soup.find_all(&quot;span&quot;, itemprop=&quot;price&quot;) for i in prices: price_in_dec = Decimal(locale.atof(i.text)) if price_in_dec &lt;= price_thres: print(price_in_dec) </code></pre> <p>By using the correct locale, you can parse the prices according to the way you write them in the US.</p> <p>Another option would be to simply remove commas using <code>i.text.replace(&quot;,&quot;, &quot;&quot;)</code>.</p>
python
1
1,902,623
50,903,448
modify a function of a class from another class
<p>In the pymodbus library, in <a href="https://github.com/riptideio/pymodbus/blob/master/pymodbus/server/sync.py" rel="nofollow noreferrer">server.sync</a>, <a href="https://docs.python.org/2/library/socketserver.html" rel="nofollow noreferrer">SocketServer.BaseRequestHandler</a> is used and defined as follows:</p> <pre><code>class ModbusBaseRequestHandler(socketserver.BaseRequestHandler): """ Implements the modbus server protocol This uses the socketserver.BaseRequestHandler to implement the client handler. """ running = False framer = None def setup(self): """ Callback for when a client connects """ _logger.debug("Client Connected [%s:%s]" % self.client_address) self.running = True self.framer = self.server.framer(self.server.decoder, client=None) self.server.threads.append(self) def finish(self): """ Callback for when a client disconnects """ _logger.debug("Client Disconnected [%s:%s]" % self.client_address) self.server.threads.remove(self) def execute(self, request): """ The callback to call with the resulting message :param request: The decoded request message """ try: context = self.server.context[request.unit_id] response = request.execute(context) except NoSuchSlaveException as ex: _logger.debug("requested slave does not exist: %s" % request.unit_id ) if self.server.ignore_missing_slaves: return # the client will simply timeout waiting for a response response = request.doException(merror.GatewayNoResponse) except Exception as ex: _logger.debug("Datastore unable to fulfill request: %s; %s", ex, traceback.format_exc() ) response = request.doException(merror.SlaveFailure) response.transaction_id = request.transaction_id response.unit_id = request.unit_id self.send(response) # ----------------------------------------------------------------------- # # Base class implementations # ----------------------------------------------------------------------- # def handle(self): """ Callback when we receive any data """ raise NotImplementedException("Method not implemented by derived class") def send(self, message): """ Send a request (string) to the network :param message: The unencoded modbus response """ raise NotImplementedException("Method not implemented by derived class") </code></pre> <p>setup() is called when a client is connected to the server, and finish() is called when a client is disconnected. I want to manipulate these methods (setup() and finish()) in another class, in another file which uses the library (pymodbus), and add some code to the setup and finish functions. I do not intend to modify the library, since it may cause strange behavior in specific situations.</p> <p>--- Edited --- To clarify, I want the setup function in the ModbusBaseRequestHandler class to work as before and remain untouched, but add something else to it; this modification should be done in my code, not in the library.</p>
<p>The simplest, and usually best, thing to do is to not manipulate the methods of <code>ModbusBaseRequestHandler</code>, but instead inherit from it and override those methods in your subclass, then just use the subclass wherever you would have used the base class:</p> <pre><code>class SoupedUpModbusBaseRequestHandler(ModbusBaseRequestHandler): def setup(self): # do different stuff # call super().setup() if you want # or call socketserver.BaseRequestHandler.setup() to skip over it # or call neither </code></pre> <p>Notice that a <code>class</code> statement is just a normal statement, and can go anywhere any other statement can, even in the middle of a method. So, even if you need to dynamically create the subclass because you won't know what you want <code>setup</code> to do until runtime, that's not a problem.</p> <hr> <p>If you actually need to monkeypatch the class, that isn't very hard—although it is easy to screw things up if you aren't careful.</p> <pre><code>def setup(self): # do different stuff ModbusBaseRequestHandler.setup = setup </code></pre> <p>If you want to be able to call the normal implementation, you have to stash it somewhere:</p> <pre><code>_setup = ModbusBaseRequestHandler.setup def setup(self): # do different stuff # call _setup whenever you want ModbusBaseRequestHandler.setup = setup </code></pre> <p>If you want to make sure you copy over the name, docstring, etc., you can use <code>functools.wraps</code>:</p> <pre><code>@functools.wraps(ModbusBaseRequestHandler.setup) def setup(self): # do different stuff ModbusBaseRequestHandler.setup = setup </code></pre> <p>Again, you can do this anywhere in your code, even in the middle of a method.</p> <hr> <p>If you need to monkeypatch one instance of <code>ModbusBaseRequestHandler</code> while leaving any other instances untouched, you can even do that. 
You just have to manually bind the method:</p> <pre><code>def setup(self): # do different stuff myModbusBaseRequestHandler.setup = setup.__get__(myModbusBaseRequestHandler) </code></pre> <p>If you want to call the original method, or <code>wraps</code> it, or do this in the middle of some other method, etc., it's otherwise basically the same as the last version.</p>
python|handler|socketserver|pymodbus
0
1,902,624
35,268,167
Low accuracy with change to TensorFlow Cifar10 example
<p>I am trying to modify the network structure provided by Cifar10 in TensorFlow. Typically, I added another convolution layer (conv12) after the first convolution layer (conv1). No matter how I set the filter (I tried all 1x1, 3x3, 5x5) and whether using weight decay or not, having a new layer will decrease the accuracy to below than 10%. This is equivalent to a random guess in Cifar10 since there are 10 classes.</p> <p>The code structure is as following, I don't modify any other part of the cifar except setting the size of input image to be 48x48 (instead of 24x24). I guess the input size should not matter.</p> <p>Note that the conv12 is a depthwise convolution layer because I want to add just a linear layer after the conv1 layer in order to minimize the change to the original code. Doing that I expected that the accuracy should be similar to the original version, but it decreases to around 10%. (I also tried a normal convolution layer but it didn't work also.)</p> <pre><code> with tf.variable_scope('conv1') as scope: kernel1 = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64], stddev=1e-4, wd=0.0) conv_1 = tf.nn.conv2d(images, kernel1, [1, 1, 1, 1], padding='SAME') biases1 = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0)) bias1 = tf.nn.bias_add(conv_1, biases1) conv1 = tf.nn.relu(bias1, name=scope.name) _activation_summary(conv1) with tf.variable_scope('conv12') as scope: kernel12 = _variable_with_weight_decay('weights', shape=[1, 1, 64, 1], stddev=1e-4, wd=0.0) #conv_12 = tf.nn.conv2d(conv1, kernel12, [1, 1, 1, 1], padding='SAME') conv_12 = tf.nn.depthwise_conv2d(conv1, kernel12, [1, 1, 1, 1], padding='SAME') biases12 = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0)) bias12 = tf.nn.bias_add(conv_12, biases12) conv12 = tf.nn.relu(bias12) _activation_summary(conv12) pool1 = tf.nn.max_pool(conv12, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1') ..... 
</code></pre> <p>Could someone please tell me what is wrong with the code?</p>
<p>Your second convolution:</p> <pre><code>kernel12 = _variable_with_weight_decay('weights', shape=[1, 1, 64, 1] </code></pre> <p>is taking the depth-64 output of the previous layer and squeezing it down to a depth-1 output. That doesn't seem like it will match with whichever code you have following this (if it's <code>conv2</code> from the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/cifar10/cifar10.py" rel="nofollow">cifar example from TensorFlow</a>, then it definitely isn't going to work well, because that one expects a depth-64 input.)</p> <p>Perhaps you really wanted <code>shape=[1, 1, 64, 64]</code>, which would simply add an extra "inception-style" 1x1 convolutional layer into your model?</p>
machine-learning|computer-vision|tensorflow|image-recognition|object-detection
0
1,902,625
34,958,340
How to parse through a nest of nested JSON?
<p>I am stuck using this API at my work, and having trouble parsing through the dataset because of its complexity.</p> <p>In the following JSON, the only values of any importance to me are "name" and the actual host name. I am trying to make a dictionary that consists of {"name":"host, host, host, host"}. If anyone knows how to parse through this or can point me in the right direction, that would be very much appreciated.</p> <pre><code>{ "hugeData":[ { "env1":[ { "sins":[ {"host": "ip-10-12-138-225.va1.b2c.test.com", "deployTime": "2015-07-23 11:54 AM", "sin": "0"}, {"host": "ip-10-12-129-193.va1.b2c.test.com", "deployTime": "2015-09-01 01:09 PM", "sin": "7"}, {"host": "ip-10-12-138-235.va1.b2c.test.com", "deployTime": "2015-07-23 11:54 AM", "sin": "9"}, {"host": "ip-10-12-138-250.va1.b2c.test.com", "deployTime": "2015-07-23 11:53 AM", "sin": "12"}, {"host": "ip-10-12-138-223.va1.b2c.test.com", "deployTime": "2015-07-23 11:53 AM", "sin": "14"}, {"host": "ip-10-12-138-237.va1.b2c.test.com", "deployTime": "2015-07-23 11:54 AM", "sin": "17"}, {"host": "ip-10-12-138-244.va1.b2c.test.com", "deployTime": "2015-07-23 11:53 AM", "sin": "18"} ], "status": "success", "buildTime": "2015-05-26T17:06:06" } ], "name": "apache-6" }, { "env1":[ { "sins":[ {"host": "ip-10-12-138-225.va1.b2c.test.com", "deployTime": "2015-12-16 05:23 PM", "sin": "0"}, {"host": "ip-10-12-129-193.va1.b2c.test.com", "deployTime": "2015-12-16 05:23 PM", "sin": "7"}, {"host": "ip-10-12-138-235.va1.b2c.test.com", "deployTime": "2015-12-16 05:23 PM", "sin": "9"}, {"host": "ip-10-12-138-250.va1.b2c.test.com", "deployTime": "2015-12-16 05:23 PM", "sin": "12"}, {"host": "ip-10-12-138-223.va1.b2c.test.com", "deployTime": "2015-12-16 05:23 PM", "sin": "14"}, {"host": "ip-10-12-138-237.va1.b2c.test.com", "deployTime": "2015-12-16 05:23 PM", "sin": "17"}, {"host": "ip-10-12-138-244.va1.b2c.test.com", "deployTime": "2015-12-16 05:23 PM", "sin": "18"}, {"host": "ip-10-12-138-248.va1.b2c.test.com", "deployTime": "2015-12-16 05:23 PM", "sin": "21"} ], "status": "success", "buildTime": "2015-12-16T17:07:44" } ], "name": "apache-5" }, { "env1":[ { "sins":[ {"host": "ip-10-12-138-234.va1.b2c.test.com", "deployTime": "2015-08-06 03:13 PM", "sin": "10"}, {"host": "ip-10-12-138-246.va1.b2c.test.com", "deployTime": "2015-08-06 03:15 PM", "sin": "20"}, {"host": "ip-10-12-138-216.va1.b2c.test.com", "deployTime": "2015-08-06 03:04 PM", "sin": "28"} ], "status": "success", "buildTime": "2013-02-21T15:41:59" } ], "name": "app-steel" } ] } </code></pre>
<p>Maybe you like the answer this time.</p> <p>(assuming the dict/JSON is assigned to a variable named test)</p> <pre><code>your_dict = {i['name']: ", ".join([host['host'] for host in i['env1'][0]['sins']]) for i in test["hugeData"]} </code></pre>
python|json|dictionary
0
1,902,626
26,738,676
Does scipy.integrate.ode.set_solout work?
<p>The <code>scipy.integrate.ode</code> interface to integration routines provides a method for stopping the integration if a constraint is violated at any step, <code>set_solout</code>. However, I cannot get this method to work, even in the simplest examples. Here's one attempt:</p> <pre><code>import numpy as np from scipy.integrate import ode def f(t, y): """Exponential decay.""" return -y def solout(t, y): if y[0] &lt; 0.5: return -1 else: return 0 y_initial = 1 t_initial = 0 r = ode(f).set_integrator('dopri5') # Integrator that supports solout r.set_initial_value(y_initial, t_initial) r.set_solout(solout) # Integrate until t = 5, but stop when solout constraint violated r.integrate(5) # The time when solout should have terminated integration: intersection_time = np.log(2) </code></pre> <p>The integration should have been stopped by solout when <code>t = log(2) = 0.693...</code>, but instead happily continues until <code>t = 5</code>, when <code>y = 0.007</code>.</p> <p>Is this a bug in <code>scipy</code>, or am I not using <code>set_solout</code> correctly?</p>
<p>It turns out you need to call <code>set_solout</code> <em>before</em> calling <code>set_initial_value</code>. (I figured this out by studying the <a href="https://github.com/scipy/scipy/blob/master/scipy/integrate/tests/test_integrate.py#L214" rel="nofollow"><code>set_solout</code> tests</a> in the <code>scipy</code> test suite.) So, reversing the order of the two calls in my question code produces the correct result.</p> <p>Even if this behavior is correct, it ought to be mentioned in the documentation for <code>set_solout</code>. I've posted <a href="https://github.com/scipy/scipy/issues/4118" rel="nofollow">an issue with SciPy on GitHub</a>.</p> <p><strong>UPDATE:</strong> This issue is fixed in SciPy 0.17.0; <code>set_solout</code> will work even if called after <code>set_initial_value</code>, and the question code will produce the correct result.</p>
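Concretely, the fixed version of the question's snippet looks like this (a minimal sketch; on SciPy 0.17.0 and later either call order works, as noted above):

```python
from scipy.integrate import ode

def f(t, y):
    """Exponential decay."""
    return -y

def solout(t, y):
    # Returning -1 tells the dopri5 integrator to stop.
    return -1 if y[0] < 0.5 else 0

r = ode(f).set_integrator('dopri5')
r.set_solout(solout)            # BEFORE set_initial_value (needed on old SciPy)
r.set_initial_value(1.0, 0.0)
r.integrate(5)

# Integration halts near t = log(2) ~ 0.693 instead of running to t = 5.
print(r.t, r.y[0])
```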
python|scipy|ode
10
1,902,627
26,616,643
Improving Javadoc regex
<p>I'm currently using this fragment in a Python script to detect Javadoc comments:</p> <pre><code># This regular expression matches Javadoc comments. pattern = r'/\*\*(?:[^*]|\*(?!/))*\*/' # Here's how it works: # /\*\* matches leading '/**' (have to escape '*' as metacharacters) # (?: starts a non-capturing group to match one comment character # [^*] matches any non-asterisk characters... # | or... # \* any asterisk... # (?!/) that's not followed by a slash (negative lookahead) # ) end non-capturing group # * matches any number of these non-terminal characters # \*/ matches the closing '*/' (again, have to escape '*') comments = re.findall(pattern, large_string_of_java_code) </code></pre> <p>This regex doesn't work perfectly. I'm okay with it not matching Unicode escape sequences (e.g., the comment <code>/** a */</code> can be written as <code>\u002f** a */</code>). The main problem that I have is that it will yield a false positive on a comment like this:</p> <pre class="lang-java prettyprint-override"><code>// line comment /** not actually a javadoc comment */ </code></pre> <p>and will probably break on comments like this:</p> <pre class="lang-java prettyprint-override"><code>// line comment /** unfinished "Javadoc comment" // regex engine is still searching for closing slash </code></pre> <p>I tried using a negative lookbehind for <code>^.$//</code>, but, according to the <a href="https://docs.python.org/2/library/re.html" rel="nofollow">Python docs</a>,</p> <blockquote> <p>…the contained pattern must only match strings of some fixed length.</p> </blockquote> <p>So that doesn't work.</p> <p>I also tried starting from the beginning of the line, something like this:</p> <pre><code>pattern = r'^(?:[^/]|/(?!/))*(the whole regex above)' </code></pre> <p>but I couldn't get this to work.</p> <hr> <p>Are regular expressions appropriate for this task? 
How can I get this to work?</p> <p>If regex isn't the right tool, I'm happy to use any lightweight-ish built-in Python 2 module.</p>
<p>If you want exactitude and you're working with Java code, you're likely better off integrating with javadoc (or doxygen). Maybe this would help: <a href="https://stackoverflow.com/questions/13024881/how-to-extract-javadoc-comments-from-the-source-files">How to extract JavaDoc comments from the source files</a></p> <p>If you don't need exactitude, you should be able to get regular expressions working well enough for most cases by proceeding by stages: maybe start by eliminating the confusing parts first (// and non-javadoc /* */ comments), then look for javadoc comments. But you also have to decide on an approach for dealing with Javadoc delimiters that happen to be embedded in strings... The problem is more about lexical analysis. Maybe that is enough for your application?</p>
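As a rough illustration of the staged approach with the built-in `re` module (a naive sketch: it does not handle non-Javadoc `/* ... */` block comments or comment delimiters embedded in string literals):

```python
import re

# Small hypothetical Java snippet containing a decoy inside a line comment.
java_source = (
    "// decoy: /** not actually a javadoc comment */\n"
    "/** A real Javadoc comment. */\n"
    "class Demo {\n"
    "    /** Another one,\n"
    "     * spanning two lines. */\n"
    "    int x;\n"
    "}\n"
)

# Stage 1: eliminate the confusing parts first -- strip // line comments
# so the Javadoc pattern cannot match inside them.
stripped = re.sub(r"//[^\n]*", "", java_source)

# Stage 2: the question's Javadoc pattern, applied to the cleaned source.
pattern = r"/\*\*(?:[^*]|\*(?!/))*\*/"
comments = re.findall(pattern, stripped)

print(len(comments))  # 2 -- the decoy inside the line comment is gone
```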
python|regex|javadoc
0
1,902,628
44,870,149
Keras: linear model unstable results
<p>I'm trying to apply a linear model to the MNIST dataset and it works, but it's 'unstable', i.e. each re-run of the code can produce drastically different results in terms of accuracy. I understand that NNs learn weights in a 'stochastic' way and maybe the solution converges to different local minima, but maybe there is some way to make it more 'stable'?</p> <p>Here is my model definition:</p> <pre><code>def get_model(): w=28 h=28 input_shape = (w*h,) model = Sequential() model.add(Dense(n_classes, activation='linear', input_shape=input_shape)) model.add(Activation('softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) print(model.summary()) return model </code></pre> <p><strong>Update:</strong> it seems adding regularization is a valid answer to the initial question of instability of the solution in terms of accuracy.</p> <pre><code>def get_model(): w=28 h=28 input_shape = (w*h,) model = Sequential() #v1 - Not valid, because applying softmax to one layer network is meaningless #model.add(Dense(n_classes, init='uniform', activation='linear', # kernel_regularizer=regularizers.l2(0.01), # activity_regularizer=regularizers.l1(0.01), # input_shape=input_shape)) #model.add(Dropout(0.2)) #model.add(Activation('softmax')) #v2 model.add(Dense(n_classes, init='uniform', activation='softmax', kernel_regularizer=regularizers.l2(0.01), activity_regularizer=regularizers.l1(0.01), input_shape=input_shape)) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) #print(model.summary()) return model </code></pre>
<p>L1/L2 regularization or dropout should help to stabilize the learning here.</p>
python|machine-learning|keras
2
1,902,629
64,844,139
Python 2.7 Getting error TypeError: unsupported operand type(s) for /: 'datetime.timedelta' and 'datetime.timedelta'
<p>I am trying to calculate the implied volatility of options pricing. Here I am trying to calculate the time to expiry using the code below, but it's giving me an error:</p> <pre><code>import sys import json import time from datetime import datetime, date, time, timedelta from py_vollib.black_scholes.implied_volatility import * currentTime = datetime(2020, 11, 15, 15, 30) expiryTime = datetime(2020, 11, 19, 15, 30) tl = expiryTime - currentTime print(tl) a = tl/timedelta(days = 1) t = tl/365 iv = implied_volatility(371.85, 28594.30, 28500, t, 0.1, 'P') #ltp, underlaying price, strike, time, risk free rate, type (C,P call or put) iv = round(iv*100, 2) print(iv) quit() </code></pre> <pre><code>Output 4 days, 0:00:00 Traceback (most recent call last): File &quot;option_chain_calculator.py&quot;, line 17, in &lt;module&gt; a = tl/timedelta(days = 1) TypeError: unsupported operand type(s) for /: 'datetime.timedelta' and 'datetime.timedelta' </code></pre> <p>Thanks in advance.</p>
<p>As far as I can see (I am used to Python3.+) you are trying to get the difference between 2 dates (expiry and current) in days (I am guessing that it is why you are dividing their difference by one day). To get that difference as an integer you could use (MWE):</p> <pre><code>from datetime import datetime, date, time, timedelta currentTime = datetime(2020, 11, 15, 15, 30) expiryTime = datetime(2020, 11, 19, 15, 30) tl = expiryTime - currentTime print(tl) a = tl.days print(a, type(a)) </code></pre> <p>Then you can use the accrual period in your function as a floating point: <code>float(tl.days) / 365</code>.</p>
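One caveat worth noting: `.days` truncates any partial day. If the expiry is not at the same time of day as the current timestamp, `total_seconds()` keeps the fraction. A sketch, using a hypothetical expiry 4 days and 6 hours out:

```python
from datetime import datetime

currentTime = datetime(2020, 11, 15, 15, 30)
expiryTime = datetime(2020, 11, 19, 21, 30)   # hypothetical: 4 days + 6 hours

tl = expiryTime - currentTime

print(tl.days)                         # 4  (the 6 hours are dropped)

days_exact = tl.total_seconds() / 86400
print(days_exact)                      # 4.25

t = days_exact / 365                   # year fraction for the pricing call
```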
python|datetime
-1
1,902,630
61,432,346
Can arbitrary configuration for tests be set in pytest.ini?
<p>I want to centralize configuration for my tests and it seems like pytest.ini would be the place, but I'm having trouble finding an example/feature for this.</p> <p>For example, I have a directory with files my tests might need called &quot;test_resources&quot;. This is my structure:</p> <pre><code>├── pytest.ini ├── myproject │ └── someFunctionsOne │ ├── one.py │ └── two.py └── tests ├── integration │ └── someFunctionsOneTestblah.py ├── test_resources │ ├── sample-data.json │ └── test-data-blahablahb.csv └── unit └── someFunctionsOne ├── one.py └── two.py </code></pre> <p>I want to set the path of &quot;test_resources&quot; in <code>pytest.ini</code>, so integration and unit tests know where to find that folder; I'd like to avoid hard-coding paths like this in my test files themselves.</p> <p>Is there a feature to set arbitrary config in <code>pytest.ini</code> and retrieve it from tests? I suspect I might have other config settings like this for my tests, and if all that lives in the <code>pytest.ini</code> file that makes things much clearer for other devs on this project: one place to go for all test configuration stuff. I already have a configuration file for my <em>application</em> which is loaded when it starts, but that's different. I want to isolate the unit/integration test config from my application config. <code>pytest.ini</code> seems like the best place because it's already there and used by the tests. This way I don't need to create another config file and roll my own mechanism for loading it for the tests.</p> <p>Also, I know there is nothing preventing me from using configparser or even loading and parsing <code>pytest.ini</code> myself, but if tests are already using it I was hoping there would be a built-in feature to read arbitrary key-value pairs from it or something like that.</p>
<p>You define custom keys in <code>pytest.ini</code> same way as you define custom command line arguments, only using the <a href="https://docs.pytest.org/en/latest/reference.html#_pytest.config.argparsing.Parser.addini" rel="nofollow noreferrer"><code>Parser.addini</code></a> method:</p> <pre><code># conftest.py def pytest_addoption(parser): parser.addini("mykey", help="help for my key", default="fizz") </code></pre> <p>(Note that <code>pytest_addoption</code> hook impls should be located in the top-level <code>conftest.py</code>).</p> <p>You now can define <code>mykey</code> in <code>pytest.ini</code>:</p> <pre><code>[pytest] mykey = buzz </code></pre> <p>Access <code>mykey</code> value in tests:</p> <pre><code>def test_spam(request): value = request.config.getini("mykey") assert value == "buzz" </code></pre>
python|pytest
2
1,902,631
56,159,087
How can I whitelist characters from a string in python 3?
<p>My question is quite simple: I am trying to strip any character that is not A-Z or 0-9 from a string.</p> <p>Basically this is the process I am trying to do:</p> <pre><code>whitelist=['a',...'z', '0',...'9'] name = '_abcd!?123' name.strip(whitelist) print(name) &gt;&gt;&gt; abcd123 </code></pre> <p>What's important to know is that I can't just print the valid characters in name. I need to actually use the variable in its changed state.</p>
<p>You can use <code>re.sub</code> and provide a pattern that exactly matches what you are trying to remove:</p> <pre><code>import re result = re.sub('[^a-zA-Z0-9]', '', '_abcd!?123') </code></pre> <p>Output:</p> <pre><code>'abcd123' </code></pre>
python|python-3.x|string|strip
7
1,902,632
18,399,067
How to replace the default navigation portlet with my template
<p>Can anyone tell me how to replace the navigation portlet with my own template? I did it like this:</p> <ol> <li><p>I created a new class for the portlet in my .py file, and it looks like below:</p> <pre><code>class navigation_address(Renderer): index = ViewPageTemplateFile('templates/portlet_address.pt') </code></pre></li> <li><p>I registered the portlet in overrides.zcml like this:</p> <pre><code>&lt;plone:portletRenderer portlet="plone.app.portlets.portlets.navigation.INavigationPortlet" class=".browser.createPictMenu.navigation_address" /&gt; </code></pre></li> </ol> <p>Thanks in advance.</p>
<p>With <code>plone:portletrenderer</code> you just have to specify the origin portlet like you did, the new template, and a layer (so it's only active on your Plone site if your custom package is installed).</p> <pre class="lang-xml prettyprint-override"><code>&lt;include package="plone.app.portlets" /&gt; &lt;plone:portletRenderer portlet="plone.app.portlets.portlets.navigation.INavigationPortlet" class=".my.module.MyRenderer" layer=".interfaces.IMyPackageLayer" /&gt; </code></pre> <pre class="lang-py prettyprint-override"><code>from plone.app.portlets.portlets.navigation import Renderer as NavigationRenderer class MyRenderer(NavigationRenderer): _template = ViewPageTemplateFile('template/my_navi_template.pt') </code></pre> <p><code>&lt;include package="plone.app.portlets" /&gt;</code> makes sure the portlets stuff is loaded.</p> <p>The browserlayer is registered with GenericSetup: place a browserlayer.xml in your profile:</p> <pre class="lang-xml prettyprint-override"><code>&lt;?xml version="1.0"?&gt; &lt;layers&gt; &lt;layer name="my.package.layer" interface="my.package.interfaces.IMyPackageLayer" /&gt; &lt;/layers&gt; </code></pre> <p>And the interface:</p> <pre class="lang-py prettyprint-override"><code>from zope.interface import Interface class IMyPackageLayer(Interface): """Request marker interface""" </code></pre>
python|plone
2
1,902,633
69,474,393
Is it possible to create a recursive dataclass using make_dataclass in python?
<p>Here is a simple example where I am trying to create a recursive Node definition which contains an optional child that is also a Node. The code compiles but when I try to access the type definitions I get <code>node</code> is not defined. Is it possible to get around this error?</p> <pre class="lang-py prettyprint-override"><code>import dataclasses import typing as t node_type = dataclasses.make_dataclass( &quot;node&quot;, [(&quot;child&quot;, t.Optional[&quot;node&quot;], dataclasses.field(default=None))] ) print(t.get_type_hints(node_type)) </code></pre> <p>Outputs</p> <pre class="lang-sh prettyprint-override"><code>NameError: name 'node' is not defined </code></pre> <p>I'm using python 3.9.2.</p>
<p>There are three problems here. They're solvable, but they may not be cleanly solvable in the kinds of situations where you would actually use <code>dataclasses.make_dataclass</code>.</p> <p>The first problem is that <code>typing.get_type_hints</code> is looking for a class named <code>'node'</code>, but you called the global variable <code>node_type</code>. The name you pass to <code>make_dataclass</code>, the name you use in the annotations, and the name you assign the dataclass to all have to be the same:</p> <pre><code>Node = dataclasses.make_dataclass( &quot;Node&quot;, [(&quot;child&quot;, t.Optional[&quot;Node&quot;], dataclasses.field(default=None))] ) </code></pre> <p>But that's still not going to be enough, because <code>typing.get_type_hints</code> isn't looking in the right namespace. That's the second problem.</p> <p>When you call <code>typing.get_type_hints</code> on a class, <code>typing.get_type_hints</code> will try to resolve string annotations by looking in the module where the class was defined. It determines that module by looking at the <code>__module__</code> entry in the class's <code>__dict__</code>. Because you've created your node class in a weird way that doesn't go through the normal <code>class</code> statement, the class's <code>__module__</code> isn't set up to refer to the right module. Instead, it's set to <code>'types'</code>.</p> <p>You can fix this by manually pre-setting <code>__module__</code> to the <code>__name__</code> of the current module:</p> <pre><code>Node = dataclasses.make_dataclass( &quot;Node&quot;, [(&quot;child&quot;, t.Optional[&quot;Node&quot;], dataclasses.field(default=None))], namespace={'__module__': __name__} ) </code></pre> <p>Then <code>typing.get_type_hints</code> will be able to resolve the string annotations.</p> <p>The meta-problem is, if you're using <code>dataclasses.make_dataclass</code> in practice, you probably don't know the class name. 
You're probably using it in a function, and/or inside a loop. <code>typing.get_type_hints</code> has to be able to find the class through a global variable matching the class name, but dynamic variable names are messy.</p> <p>You can take the simple approach of just setting a global with <code>globals()</code>:</p> <pre><code>globals()[your_dataclass.__name__] = your_dataclass </code></pre> <p>but that's dangerous. If two generated classes have the same name, the second will replace the first. If a generated class has the same name as something else in the global namespace, such as if you did <code>from some_dependency import Thing</code> and then generated a class named <code>Thing</code>, the generated class will stomp the existing global value.</p> <p>If you can guarantee those things won't happen, <code>globals()</code> might be fine. If you can't make such guarantees, you might need to do something like generate a new module for each generated class to live in, so they each get their own independent global namespace, or you might just accept and document the fact that <code>get_type_hints</code> won't work for your generated classes.</p>
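<p>Putting the pieces together, here is a minimal working sketch of the fix (assuming the dataclass is created at module top level, so the class name, the annotation string, and the global variable can all agree):</p>

```python
import dataclasses
import typing as t

# The name passed to make_dataclass, the name inside the annotation
# string, and the global variable must all be "Node" for
# typing.get_type_hints to resolve the forward reference.
Node = dataclasses.make_dataclass(
    "Node",
    [("child", t.Optional["Node"], dataclasses.field(default=None))],
    # Point __module__ at this module so get_type_hints searches the
    # right global namespace instead of the 'types' module.
    namespace={"__module__": __name__},
)

print(t.get_type_hints(Node))
print(Node(child=Node()))
```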
python|python-typing|python-dataclasses
1
1,902,634
69,365,323
Changing user-agent in chrome Not Working in selenium webdriver python
<pre><code> self.options.add_argument(&quot;--headless&quot;) self.options.add_argument('--no-sandbox') self.options.add_argument(&quot;disable-infobars&quot;) # self.options.add_argument('--start-maximized') # self.options.add_argument('--start-fullscreen') self.options.add_argument('--single-process') self.options.add_argument('--disable-dev-shm-usage') self.options.add_argument( '--disable-blink-features=AutomationControlled') self.options.add_experimental_option('useAutomationExtension', False) self.options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) self.options.add_argument(&quot;log-level=3&quot;) # self.options.add_argument(&quot;--incognito&quot;) self.options.add_argument( 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36' ) # Change chrome driver path accordingly self.driver = webdriver.Chrome( executable_path=&quot;B:\mydriver\Resources\chromedriver.exe&quot;, chrome_options=self.options, ) self.driver.set_window_size(1400, 920) self.waitdriver = WebDriverWait(self.driver, 10) self.locators = Locators() self.driver.get('https://www.facebook.com') self.driver.save_screenshot(ROOT_DIR + r&quot;\Temp\Debug\Error1.PNG&quot;) a = self.driver.execute_script(&quot;return navigator.userAgent&quot;) print(a) &gt;&gt;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/96.0.4643.0 Safari/537.36 </code></pre> <p><strong>It's not the same as what I provided. I tried changing to other user-agents, but the same problem occurs. I am using Chromium binaries.</strong></p>
<pre><code>self.options.add_argument('user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36') </code></pre> <p><strong>Solved by adding the <code>user-agent=</code> prefix before the user-agent string.</strong> <a href="https://github.com/tamimibrahim17/List-of-user-agents" rel="nofollow noreferrer">You can find many more user agents here; to rotate them, pick one at random.</a></p>
python|selenium|selenium-chromedriver|undetected-chromedriver
0
1,902,635
55,509,021
Issue in predict function of the trained model in Keras
<p>I was performing a classification problem on some set of images, where my number of classes are three. Now since I am performing CNN, so it has a convolution layer and Pooling layer and then a few dense layers; the model parameters are shown below:</p> <pre><code>def baseline_model(): model = Sequential() model.add(Conv2D(32, (5, 5), input_shape=(1, 100, 100), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(60, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model </code></pre> <p>The model runs perfectly and shows me the accuracy and validation error, etc,. as shown below:</p> <pre><code>model = baseline_model() model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=5, batch_size=20, verbose=1) scores = model.evaluate(X_test, y_test, verbose=0) print("CNN Error: %.2f%%" % (100-scores[1]*100)) </code></pre> <p>Which gives me output:</p> <pre><code>Train on 514 samples, validate on 129 samples Epoch 1/5 514/514 [==============================] - 23s 44ms/step - loss: 1.2731 - acc: 0.4202 - val_loss: 1.0349 - val_acc: 0.4419 Epoch 2/5 514/514 [==============================] - 18s 34ms/step - loss: 1.0172 - acc: 0.4416 - val_loss: 1.0292 - val_acc: 0.4884 Epoch 3/5 514/514 [==============================] - 17s 34ms/step - loss: 0.9368 - acc: 0.5817 - val_loss: 0.9915 - val_acc: 0.4806 Epoch 4/5 514/514 [==============================] - 18s 34ms/step - loss: 0.7367 - acc: 0.7101 - val_loss: 0.9973 - val_acc: 0.4961 Epoch 5/5 514/514 [==============================] - 17s 32ms/step - loss: 0.4587 - acc: 0.8521 - val_loss: 1.2328 - val_acc: 0.5039 CNN Error: 49.61% </code></pre> <p>The issue occurs in the prediction part. 
So for my test images, for which I need predictions, when I run <code>model.predict()</code>, it gives me this error:</p> <pre><code>TypeError: data type not understood </code></pre> <p>I can show the full error if required. And just to show, the shape of my training images and of the images I am finally predicting on:</p> <pre><code>X_train.shape (514, 1, 100, 100) final.shape (277, 1, 100, 100) </code></pre> <p>I have no idea what this error means or what the issue is. Even the data type of my image values is the same, <code>'float32'</code>. The shapes are the same and the data types are the same, so why does this error occur?</p>
<p>It is similar to <a href="https://stackoverflow.com/questions/54903852/predict-with-keras-fails-due-to-faulty-environment-setup">predict with Keras fails due to faulty environment setup</a>. I had the same issue with Anaconda and Python 3.7. I resolved it by switching to WPy-3670 with Python 3.6, which downgraded everything.</p>
python|tensorflow|keras|neural-network|conv-neural-network
1
1,902,636
57,590,285
Sort the key:value pairs wrt value
<p>A column in the test Excel file that I'm loading looks something like this: </p> <pre><code>Apple:3, Mango:2, Orange:2, Fig:5, Berry:1, Cherry:99 </code></pre> <p>This is in a single column.</p> <p>I am trying to do this using Python.</p> <p>There are 100 rows which contain records like this in a single column.</p> <p>I am trying to split it into different columns. I have tried splitting on ",", but I just am not able to get it right.</p> <p>Now I want the output of the sort to look like this:</p> <pre><code>Cherry:99,Fig:5,Apple:3,Mango:2,Orange:2,Berry:1 </code></pre>
<p>Using Regex with <code>sorted</code></p> <p><strong>Ex:</strong></p> <pre><code>import re data = "Apple:3, Mango:2, Orange:2, Fig:5, Berry:1, Cherry:99" print(", ".join(sorted(data.split(", "), key=lambda x: int(re.search(r"(\d+)", x).group(1)), reverse=True))) </code></pre> <p><strong>Output:</strong></p> <pre><code>Cherry:99, Fig:5, Apple:3, Mango:2, Orange:2, Berry:1 </code></pre> <ul> <li><code>int(re.search(r"(\d+)", x).group(1))</code> to find the integer in the string.</li> </ul> <hr> <p>For Pandas DF </p> <p><strong>Ex:</strong></p> <pre><code>import re import pandas as pd df = pd.DataFrame({"data": ["Apple:3, Mango:2, Orange:2, Fig:5, Berry:1, Cherry:99"]}) df["data"] = df["data"].apply(lambda z: ", ".join(sorted(z.split(", "), key=lambda x: int(re.search(r"(\d+)", x).group(1)), reverse=True))) print(df) </code></pre>
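<p>Since every item here has the fixed <code>name:number</code> shape, the same sort can also be done without a regex by splitting each item on <code>:</code> — a small variation on the answer above:</p>

```python
data = "Apple:3, Mango:2, Orange:2, Fig:5, Berry:1, Cherry:99"

# Sort descending by the integer after the colon; no regex needed
# because each item is exactly "name:number".
items = sorted(
    data.split(", "),
    key=lambda item: int(item.split(":")[1]),
    reverse=True,
)
result = ", ".join(items)
print(result)  # Cherry:99, Fig:5, Apple:3, Mango:2, Orange:2, Berry:1
```

Because Python's sort is stable, ties such as <code>Mango:2</code> and <code>Orange:2</code> keep their original order.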
python|key-value
1
1,902,637
59,243,684
Python multi-conditional if statement with action for each conditional
<p>I want to check multiple bools and do something if they are all true, but also do something for each that is true. Essentially combining the following into one:</p> <pre class="lang-py prettyprint-override"><code># I want to combine this: if funcA() and funcB() and funcC(): #do something #with these: if funcA(): #do A's something if funcB(): #do B's something if funcC(): #do C's something </code></pre> <p>All the functions above just return bools.</p> <p>I'm looking for a more efficient (line wise) way of doing this.</p>
<p>My solution was to put A, B, &amp; C's code inside the functions themselves:</p> <pre><code>def funcA(): if random.choice([True, False]): print("doing A's something") return True </code></pre> <p>Then I can call each function without using its return value, and do my conditional check afterwards. </p> <pre><code>funcA() funcB() funcC() if funcA() and funcB() and funcC(): print("do something if all true") </code></pre>
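<p>One caveat with the snippet above: it ends up calling each function twice (once standalone and once inside the <code>if</code>), so any side effects run twice, and with <code>random.choice</code> the two rounds of calls can even disagree. Here is a sketch, with stand-in functions, that evaluates each function exactly once:</p>

```python
# Stand-in functions for illustration; the real ones would do A/B/C's work.
def funcA():
    print("doing A's something")
    return True

def funcB():
    print("doing B's something")
    return True

def funcC():
    print("doing C's something")
    return False

# Each function runs exactly once; the saved results drive the combined check.
results = [funcA(), funcB(), funcC()]
if all(results):
    print("do something if all true")
```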
python|if-statement|conditional-statements
0
1,902,638
54,179,087
Easiest way to keep only last column after splitting text to columns
<p>So I'm using pandas and I have a dataframe where one column looks like</p> <ol> <li>abc/def/ghi/jkl</li> <li>mno/pqr/stu/vwx/yz</li> <li>123/23/24/24/24/53/523/23/111</li> </ol> <p>What I'm trying to do is split the text to columns (delimiter of /) and keep only the last column so that the data looks like:</p> <ol> <li>jkl</li> <li>yz</li> <li>111</li> </ol> <p>Is there any relatively simple way of doing this? Thanks in advance!</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>split</code></a> with indexing to select the last value of each list:</p> <pre><code>df['new'] = df['col'].str.split('/').str[-1] </code></pre> <p>If there are no missing values and performance is important, use a list comprehension:</p> <pre><code>df['new'] = [x.split('/')[-1] for x in df['col']] </code></pre> <hr> <pre><code>df['new'] = df['col'].str.split('/').str[-1] print (df) col new 0 abc/def/ghi/jkl jkl 1 mno/pqr/stu/vwx/yz yz 2 123/23/24/24/24/53/523/23/111 111 </code></pre>
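<p>For very long strings, splitting only at the last <code>/</code> avoids building the full list of parts. A plain-Python sketch of the same idea (pandas also exposes it as <code>Series.str.rsplit</code> with <code>n=1</code>):</p>

```python
col = ["abc/def/ghi/jkl", "mno/pqr/stu/vwx/yz", "123/23/24/24/24/53/523/23/111"]

# maxsplit=1 from the right: only the final piece is separated out.
last = [x.rsplit("/", 1)[-1] for x in col]
print(last)  # ['jkl', 'yz', '111']
```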
python|pandas
0
1,902,639
65,335,292
how to get value(python)
<pre><code>'window.__additionalDataLoaded(\'/p/CI3mtIABQDO/\',{&quot;graphql&quot;:{&quot;shortcode_media&quot;:{&quot;__typename&quot;:&quot;GraphImage&quot;,&quot;id&quot;:&quot;2465609547742773454&quot;,&quot;shortcode&quot;:&quot;CI3mtIABQDO&quot;,&quot;dimensions&quot;:{&quot;height&quot;:1316,&quot;width&quot;:1080},&quot;gating_info&quot;:null,&quot;fact_check_overall_rating&quot;:null,&quot;fact_check_information&quot;:null,&quot;sensitivity_friction_info&quot;:null,&quot;sharing_friction_info&quot;:{&quot;should_have_sharing_friction&quot;:false,&quot;bloks_app_url&quot;:null},&quot;media_overlay_info&quot;:null,&quot;media_preview&quot;:&quot;ACIqqXC5hJ/z1rKUEnjrWzOP9Gb2x/NapWDKjsz9lOB6n0B7f5FT0K6lqeJAhMeS4PIweP8APX/61ZjFifmOfeukURFPlOVPTjr7H0welYkZVUK9W7nGfT/69JMGVjjsKKsNHkn6+lFO4jQc7rZh14/+JqGCMQqOVDvgnIyw9h2A9See1SJMifKpH0HPtULSSMOVJX5iRt4xn1/maQy0bl43XDIQSchFGPQlgOT6/qKrNAVkyMYYZ3Doc+2eOe3FEbRh9w6N1GPu88Y+9/Lp3qS8RHVXICgenQZ7cevWgCqzEE9Dz1oqsTzweO1FMRIAR07U2WZnA3EkD/OPpViLv9DUHZv92kMfbuQQE6ueAexx79vSrTyOMqTz6A5Ax+A5/lVdeQD7f41IKLjsR+WKKloqSj//2Q==&quot;,&quot;display_url&quot;:&quot;https://scontent-gmp1-1.cdninstagram.com/v/t51.2885-15/e35/p1080x1080/131072573_200156958440171_7958560074248767851_n.jpg?_nc_ht=scontent-gmp1-1.cdninstagram.com\\u0026_nc_cat=1\\u0026_nc_ohc=yxooq3IfF44AX9mIzGL\\u0026tp=1\\u0026oh=a4ba1164d97a86464b2a0bbb4d7ce19c\\u0026oe=6002C769&quot;,&quot;display_resources&quot;:[{&quot;src&quot;:&quot;https://scontent-gmp1-1.cdninstagram.com/v/t51.2885-15/sh0.08/e35/p640x640/131072573_200156958440171_7958560074248767851_n.jpg?_nc_ht=scontent-gmp1-1.cdninstagram.com\\u0026_nc_cat=1\\u0026_nc_ohc=yxooq3IfF44AX9mIzGL\\u0026tp=1\\u0026oh=040a7b7d5fda5772ad0668d27ae2333b\\u0026oe=6003172E&quot;,&quot;config_width&quot;:640,&quot;config_height&quot;:780},{&quot;src&quot;:&quot;https://scontent-gmp1-1.cdninstagram.com/v/t51.2885-15/sh0.08/e35/p750x750/131072573_200156958440171_7958560074248767851_n.jpg?_nc_ht=scontent-gmp1-1.cdninstagram.com
\\u0026_nc_cat=1\\u0026_nc_ohc=yxooq3IfF44AX9mIzGL\\u0026tp=1\\u0026oh=84141972d27eba9381408204011b3109\\u0026oe=600594EA&quot;,&quot;config_width&quot;:750,&quot;config_height&quot;:914},{&quot;src&quot;:&quot;https://scontent-gmp1-1.cdninstagram.com/v/t51.2885-15/e35/p1080x1080/131072573_200156958440171_7958560074248767851_n.jpg?_nc_ht=scontent-gmp1-1.cdninstagram.com\\u0026_nc_cat=1\\u0026_nc_ohc=yxooq3IfF44AX9mIzGL\\u0026tp=1\\u0026oh=a4ba1164d97a86464b2a0bbb4d7ce19c\\u0026oe=6002C769&quot;,&quot;config_width&quot;:1080,&quot;config_height&quot;:1316}],&quot;accessibility_caption&quot;:&quot;Photo by Fashion Selection \\ud83e\\udd84\\ud83d\\udc95 on December 16, 2020. \\uc0ac\\uc9c4 \\uc124\\uba85\\uc774 \\uc5c6\\uc2b5\\ub2c8\\ub2e4..&quot;,&quot;is_video&quot;:false,&quot;tracking_token&quot;:&quot;eyJ2ZXJzaW9uIjo1LCJwYXlsb2FkIjp7ImlzX2FuYWx5dGljc190cmFja2VkIjp0cnVlLCJ1dWlkIjoiM2Y1NDUzZjJjNmFkNGZmM2FkZDEyZDNiMTBjYWMwNmEyNDY1NjA5NTQ3NzQyNzczNDU0Iiwic2VydmVyX3Rva2VuIjoiMTYwODE4MTQ2Mjg2NnwyNDY1NjA5NTQ3NzQyNzczNDU0fDQ0NzQ2NDA2ODk5fGI2ZmY3Y2Q5NTA5NjQ4Njk3ZTA5MzI0OWU0ZWU4OTU3ZDQ3N2EwZDU4YmZiYTJiNDVkYzIyYmM4NmFkOWU1NTEifSwic2lnbmF0dXJlIjoiIn0=&quot;,&quot;edge_media_to_tagged_user&quot;:{&quot;edges&quot;:[]},&quot;edge_media_to_caption&quot;:{&quot;edges&quot;:[{&quot;node&quot;:{&quot;text&quot;:&quot;Yes or No? \\ud83d\\ude0d&quot;}}]},&quot;caption_is_edited&quot;:false,&quot;has_ranked_comments&quot;:true,&quot;edge_media_to_parent_comment&quot;:{&quot;count&quot;:175,&quot;page_info&quot;.........(type=str) </code></pre> <p>In above string I want to find <code>count:175</code> in <code>edge_media_to_parent_comment:{count:175}</code> please help</p>
<p>This should get your work done without any regex. And yes, you were accessing the wrong JS object; you have to access <code>window._sharedData</code>:</p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver driver = webdriver.Chrome(&quot;Driver's path&quot;) URL = &quot;https://www.instagram.com/p/CI3mtIABQDO/&quot; driver.get(URL) data = driver.execute_script('return window._sharedData') #it will return that js object as a python dictionary stored in the data variable #You can access data using the access operator count = data['entry_data']['PostPage'][0]['graphql']['shortcode_media']['edge_media_to_parent_comment']['count'] print(count) </code></pre> <p>The above returned <code>182</code> to me.</p> <p>If you want to access the whole object, then you may use the code below; it will return the entire dict object inside the <code>edge_media_to_parent_comment</code> attribute:</p> <pre><code>count = data['entry_data']['PostPage'][0]['graphql']['shortcode_media']['edge_media_to_parent_comment'] </code></pre>
python|json|string|web-crawler
1
1,902,640
45,292,436
Python 3.6 error when running pip install gearman
<p>I have been trying to install gearman on python 3.6 but I'm getting this error:</p> <pre><code>$ pip install gearman Collecting gearman Using cached gearman-2.0.2.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "/tmp/pip-build-xmf1cqe7/gearman/setup.py", line 5, in &lt;module&gt; from gearman import __version__ as version File "/tmp/pip-build-xmf1cqe7/gearman/gearman/__init__.py", line 7, in &lt;module&gt; from gearman.admin_client import GearmanAdminClient File "/tmp/pip-build-xmf1cqe7/gearman/gearman/admin_client.py", line 4, in &lt;module&gt; from gearman import util File "/tmp/pip-build-xmf1cqe7/gearman/gearman/util.py", line 62 except select_lib.error, exc: ^ SyntaxError: invalid syntax ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-xmf1cqe7/gearman/ </code></pre> <p>I am using Ubuntu 16.04 LTS</p> <p>pip -V -> pip 9.0.1 from [my_project_folder]/venv/lib/python3.6/site-packages (python 3.6)</p> <p>python --version -> Python 3.6.2</p> <p>How do I fix that?</p>
<p>The gearman package does not support Python 3. Only python 2.4-2.7 are supported according to their <a href="https://github.com/YelpArchive/python-gearman/blob/master/setup.py" rel="nofollow noreferrer">setup.py</a>. There is an open <a href="https://github.com/YelpArchive/python-gearman/pull/82" rel="nofollow noreferrer">pull request</a> to add python 3 support but it has been untouched for a year. I believe that Yelp! may have stopped supporting this library. </p>
python|python-3.6|gearman
3
1,902,641
45,478,843
Python database dynamic visualization
<p>My Python script is simply gathering and storing some data with SQLite. I would like to visualize this data in the simplest possible way, for example by using a single HTML page. The database is dynamically updated, so I need the page to be dynamically updated as well, for example by using jQuery.</p> <p>Is it somehow possible to call a JS function from the Python script to update the layout? Is there any better but still simple way to do that? Maybe some other visualization tools? </p>
<p>If you don't have a lot of data, you could use <a href="http://jinja.pocoo.org/" rel="nofollow noreferrer">Jinja2</a> to generate the html for the site and regenerate it either on an interval or on every change.</p> <p>If you want it to look a bit nicer (with search and sorting), you can instead use a static HTML page with <a href="https://www.datatables.net/" rel="nofollow noreferrer">DataTables</a> <a href="https://datatables.net/examples/data_sources/ajax.html" rel="nofollow noreferrer">ajax loading</a> and generate the required JSON in your script. This would also allow you to make the site live-update the data (<code>dt.ajax.reload()</code>).</p> <p>If you have a lot of entries (more than a few hundred rows), DataTables <a href="https://www.datatables.net/examples/server_side/simple.html" rel="nofollow noreferrer">server-side processing</a> would be better. For the ajax endpoint, I'd recommend a <a href="http://flask.pocoo.org/" rel="nofollow noreferrer">Flask</a> app with the <a href="https://github.com/orf/datatables/" rel="nofollow noreferrer">datatables library</a>.</p>
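<p>For the DataTables ajax route, the "generate the required JSON" step is just a query plus <code>json.dumps</code>. A minimal standard-library sketch (the table and column names here are made up for illustration):</p>

```python
import json
import sqlite3

# Throwaway in-memory database standing in for the real SQLite file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("temp", 21.5), ("humidity", 40.0)],
)

# DataTables' default ajax source expects {"data": [[...], [...], ...]}.
rows = conn.execute("SELECT sensor, value FROM readings ORDER BY sensor").fetchall()
payload = json.dumps({"data": [list(r) for r in rows]})
print(payload)
```

The static page would then point its DataTables <code>ajax</code> option at wherever this payload is served or written, and call <code>dt.ajax.reload()</code> on an interval for live updates.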
jquery|python|html|visualization
1
1,902,642
28,534,684
If power is lost while a file is being read in read-only mode, can that file's data be lost?
<p>If power is lost while a file is being read in read-only mode, can that file's data be lost?</p> <p>Example in Python:</p> <pre><code>&gt;&gt;&gt; f = open("example.txt", "r") &gt;&gt;&gt; first_line = f.readline() &gt;&gt;&gt; second_line = f.readline() &gt;&gt;&gt; # Here the machine executing the above code unexpectedly powers off. </code></pre>
<blockquote> <p>If power is lost while a file is being read in read-only mode, can that file's data be lost?</p> </blockquote> <p>One would think that since you have the file open in read-only mode, the answer would be a solid "no". There's two scenarios that come to mind:</p> <h2>Hardware failure</h2> <p>In the case of a hard disk, the head must be above the platter to read the file. If the power dies, that could be just the last straw that causes the disk to just fail outright.</p> <h2>Access times</h2> <p><strong>File metadata</strong>. Even when opening a file read-only, "last access date" might still need to get updated, and <em>thus cause a write</em>. Whether this is true depends; consider:</p> <ul> <li>does the filesystem that the file exists on support a last access time?</li> <li>Is the filesystem configured to use it? (Linux, for example, has a <code>noatime</code> attribute that means access times are not updated)</li> <li>Is the filesystem read-only? (again, Linux is a good example here; you can mount an FS as read-only)</li> </ul> <p>If there is an access time that could be written, the next big question is <em>does the FS at hand journal metadata</em>? A "journal" is a data structure many FSs use to prevent corruption. If the answer is "no", then I'd say "yes, it is possible."</p> <p>Corrupting the file metadata, could, conceivably, render the data in the file itself corrupt. (More likely, the metadata that stores where on disk the file is located is likely near where the access time; this might cause that data to itself get corrupt. The file contents are probably fine, but the thing that says where they are is what got corrupt.)</p> <p>At the end of the day, if you need to protect against such things,</p> <ol> <li>Use a filesystem that journals metadata. (<code>ext3</code>, for example, can do this.) Do note that some FSs with journals <em>do not</em> journal metadata. (They journal only the main file data.) 
(Also note that some are configurable either way.)</li> <li>Always have a backup. The disk can always outright fail.</li> </ol>
python|file|file-io
3
1,902,643
56,968,669
Divide by maximum within a group in pandas dataframe
<p>I have the dataframe below:</p> <pre><code>cola colb a 10 a 12 a 30 b 20 b 25 </code></pre> <p>I would like to add a new column: for each group, find the maximum and then calculate </p> <p>newcol=(max(withingroupcola)-colb)/max(withingroupcola) within each group, like below:</p> <pre><code>cola colb newcol a 10 (30-10)/30 a 12 (30-12)/30 a 30 (30-30)/30 b 20 (25-20)/25 b 25 (25-25)/25 </code></pre> <p>and then sort within each group in descending order. How can I do that with a pandas dataframe? Please help. Thank you.</p> <p>Note: I am trying to scale; if there is a function for scaling, please let me know.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> to build a new <code>Series</code> of per-group maxima, then subtract with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.sub.html" rel="nofollow noreferrer"><code>Series.sub</code></a> and divide with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.div.html" rel="nofollow noreferrer"><code>Series.div</code></a>:</p> <pre><code>s = df.groupby('cola')['colb'].transform('max') df['new'] = s.sub(df['colb']).div(s) print (df) cola colb new 0 a 10 0.666667 1 a 12 0.600000 2 a 30 0.000000 3 b 20 0.200000 4 b 25 0.000000 </code></pre> <p>Another solution, slower:</p> <pre><code>df['new'] = df.groupby('cola')['colb'].apply(lambda x: (x.max()- x) / x.max()) </code></pre>
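<p>To sanity-check the numbers in the expected output above, here is the same per-group max scaling done in plain Python on the question's data:</p>

```python
# (cola, colb) pairs from the question.
rows = [("a", 10), ("a", 12), ("a", 30), ("b", 20), ("b", 25)]

# First pass: per-group maximum (what transform('max') broadcasts back).
group_max = {}
for group, value in rows:
    group_max[group] = max(group_max.get(group, value), value)

# Second pass: (group max - value) / group max for every row.
scaled = [(group_max[g] - v) / group_max[g] for g, v in rows]
print(scaled)
```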
pandas|dataframe|group-by|max
2
1,902,644
56,979,039
"The handshake operation timed out" with urllib, works with requests
<p>First of all, this is for a Gira Homeserver, which is a home automation server. It has Python 2.7, and I can't install external modules.</p> <p>But for testing and examples, I've been using both python 2.7.15 and python 3.6.8 (but have tried a few other version as well - same result)</p> <p>What I'm trying to do, is to read content from the webserver of my Philips Android TV. This works fine with browser, it works fine with Curl, and it works fine with Python requests. But it does not work with urllib2, which is what I need to use for this to work with my home automation system.</p> <p>The TV is providing json output on a https webpage requiring Digest auth.</p> <p>Urllib example (Python3, to be able to compare with requests):</p> <pre><code>import urllib.request import ssl url="https://192.168.3.100:1926/6/powerstate" username="6AJeu5Ffdm9dQnum" password="5a21386d952c2f1fbe66be2471d98c391bb918a6d2130cdf1b6deb2b87872eaa" ctx = ssl._create_unverified_context() auth = urllib.request.HTTPDigestAuthHandler(urllib.request.HTTPPasswordMgrWithDefaultRealm()) auth.add_password (None, url, username, password) opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx,debuglevel=1), auth) urllib.request.install_opener(opener) response = urllib.request.urlopen(url,None,10) </code></pre> <p>Requests example:</p> <pre><code>import requests import ssl url="https://192.168.3.100:1926/6/powerstate" username="6AJeu5Ffdm9dQnum" password="5a21386d952c2f1fbe66be2471d98c391bb918a6d2130cdf1b6deb2b87872eaa" from requests.auth import HTTPDigestAuth from requests.packages.urllib3.exceptions import InsecureRequestWarning requests.packages.urllib3.disable_warnings(InsecureRequestWarning) r = requests.get(url, auth=HTTPDigestAuth(username, password), timeout=10, verify=False) print(r.status_code) print(r.content) </code></pre> <p>The urllib example times out with following error:</p> <pre><code>stianj@buick:~$ python3 stack.py send: b'GET /6/powerstate 
HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: 192.168.3.100:1926\r\nUser-Agent: Python-urllib/3.6\r\nConnection: close\r\n\r\n' reply: 'HTTP/1.1 401 Unauthorized\r\n' header: Date: Wed, 10 Jul 2019 21:36:45 GMT+00:00 header: Accept-Ranges: bytes header: Server: Restlet-Framework/2.3.12 header: WWW-Authenticate: Digest realm="XTV", domain="/", nonce="MTU2Mjc5NDYwNTI3ODo1NTNlMTFhYzk5MjJjODQyMTYyZjAxZjRhYmYyYzNhMA==", algorithm=MD5, qop="auth" header: Content-Length: 424 header: Content-Type: text/html; charset=UTF-8 Traceback (most recent call last): File "/usr/lib/python3.6/urllib/request.py", line 1318, in do_open encode_chunked=req.has_header('Transfer-encoding')) File "/usr/lib/python3.6/http/client.py", line 1239, in request self._send_request(method, url, body, headers, encode_chunked) File "/usr/lib/python3.6/http/client.py", line 1285, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/usr/lib/python3.6/http/client.py", line 1234, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib/python3.6/http/client.py", line 1026, in _send_output self.send(msg) File "/usr/lib/python3.6/http/client.py", line 964, in send self.connect() File "/usr/lib/python3.6/http/client.py", line 1400, in connect server_hostname=server_hostname) File "/usr/lib/python3.6/ssl.py", line 407, in wrap_socket _context=self, _session=session) File "/usr/lib/python3.6/ssl.py", line 817, in __init__ self.do_handshake() File "/usr/lib/python3.6/ssl.py", line 1077, in do_handshake self._sslobj.do_handshake() File "/usr/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() socket.timeout: _ssl.c:835: The handshake operation timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "stack.py", line 15, in &lt;module&gt; response = urllib.request.urlopen(url,None,10) File "/usr/lib/python3.6/urllib/request.py", line 223, in urlopen return 
opener.open(url, data, timeout) File "/usr/lib/python3.6/urllib/request.py", line 532, in open response = meth(req, response) File "/usr/lib/python3.6/urllib/request.py", line 642, in http_response 'http', request, response, code, msg, hdrs) File "/usr/lib/python3.6/urllib/request.py", line 564, in error result = self._call_chain(*args) File "/usr/lib/python3.6/urllib/request.py", line 504, in _call_chain result = func(*args) File "/usr/lib/python3.6/urllib/request.py", line 1208, in http_error_401 host, req, headers) File "/usr/lib/python3.6/urllib/request.py", line 1089, in http_error_auth_reqed return self.retry_http_digest_auth(req, authreq) File "/usr/lib/python3.6/urllib/request.py", line 1103, in retry_http_digest_auth resp = self.parent.open(req, timeout=req.timeout) File "/usr/lib/python3.6/urllib/request.py", line 526, in open response = self._open(req, data) File "/usr/lib/python3.6/urllib/request.py", line 544, in _open '_open', req) File "/usr/lib/python3.6/urllib/request.py", line 504, in _call_chain result = func(*args) File "/usr/lib/python3.6/urllib/request.py", line 1361, in https_open context=self._context, check_hostname=self._check_hostname) File "/usr/lib/python3.6/urllib/request.py", line 1320, in do_open raise URLError(err) urllib.error.URLError: &lt;urlopen error _ssl.c:835: The handshake operation timed out&gt; </code></pre> <p>while the request example works just fine (shows output from webpage. Since I'm using the same Python version for both examples, I'd expect ssl parameters and ciphers etc. to be the same. What is <em>extremely</em> interesting, is that a POST works just fine with urllib. It's just the GET that times out.</p> <p>I know it's always recommended to use requests these days, but that's not an option for me. Would anyone have an explanation to the handshake error?</p> <p>Been banging my head at this for a few days...</p>
<p>I am not familiar with the task you are working on, but this kind of error is usually caused by a poor network connection. I encountered the same error while using youtube-dl to download videos, and solved it by simply switching to another internet connection. SSH tunnels also fail to work over a bad network.</p>
python|python-3.x|ssl|urllib2|digest-authentication
4
1,902,645
57,232,543
Jython CP720 is not supported in this JVM so it can't be used in python.console.encoding
<p>I want to start using Jython. I downloaded and installed Jython 2.5.2; I already have JDK 1.8 and Python 3.7 installed. After installing Jython, following this <a href="https://www.tutorialspoint.com/jython/jython_installation.htm" rel="nofollow noreferrer">tutorial</a>, and running this command:</p> <pre><code>C:\jython2.5.2\bin&gt;jython </code></pre> <p>I get this output in the CMD on a Windows 7 32-bit machine:</p> <blockquote> <p>C:\jython2.5.2\bin>jython Jython 2.5.2 (Release_2_5_2:7206, Mar 2 2011, 23:12:06) [Java HotSpot(TM) Client VM (Oracle Corporation)] on java1.8.0_161 Type "help", "copyright", "credits" or "license" for more information. cp720 is not a supported encoding on this JVM, so it can't be used in python.con sole.encoding.</p> </blockquote> <p>What to do? <strong>Edit:</strong> It was a Windows CMD encoding problem; the CMD doesn't accept <code>cp720</code>. How can I force the CMD to use <code>utf-8</code> instead for running Jython?</p>
<p>Based on <a href="https://stackoverflow.com/questions/30443537/how-do-i-fix-unsupportedcharsetexception-in-eclipse-kepler-luna-with-jython-pyde">How do I fix UnsupportedCharsetException in Eclipse Kepler/Luna with Jython/PyDev?</a> you'll need to pass <code>-Dpython.console.encoding=UTF-8</code> (or a different character set) on the command line:</p> <pre class="lang-none prettyprint-override"><code>jython -Dpython.console.encoding=UTF-8 </code></pre>
java|python|windows|cmd|jython
1
1,902,646
54,083,393
How to group elements effectively in a huge list by their first character in python
<p>I am following the answer to the following Stack Overflow question to accomplish my task: <a href="https://stackoverflow.com/questions/17876130/python-list-group-by-first-character">python list group by first character</a></p> <pre><code>import json from itertools import groupby #Load data with open('input.txt', 'r') as f: concepts = [] for concept in f: concepts.append(concept.strip()) print(len(concepts)) concepts_list = [list(g) for k, g in groupby(concepts, key=lambda x: x[0])] concepts_dict = {} for item in concepts_list: concepts_dict[item[0][0]] = item with open("concepts_preprocessed_dictionary.txt", "w") as fw: fw.write(json.dumps(concepts_dict)) </code></pre> <p>However, I am wondering why this code is not working when there is a huge number of concepts in the list (approximately 13,000,000 concepts). Surprisingly, the program executes in seconds, and when I check the dictionary it contains wrong results (in other words, the dictionary file is only 1KB in size and contains mostly one or two elements per grouped list).</p> <p>Unfortunately, I am not in a position to share my concept list as it raises some privacy issues. </p> <p>But I found a long word list on the following GitHub page: <a href="https://raw.githubusercontent.com/dwyl/english-words/master/words.txt" rel="nofollow noreferrer">https://raw.githubusercontent.com/dwyl/english-words/master/words.txt</a></p> <p>However, unlike the above-mentioned dataset, my current dataset is only alphabetically ordered by first character (i.e. as follows):</p> <p><strong>My dataset:</strong> Only the first letter is <code>m</code>; the remaining words are not alphabetically ordered</p> <ul> <li>methods </li> <li>machine learning </li> <li>mic</li> </ul> <p><strong>Dataset I have mentioned</strong>: Nicely ordered based on characters</p> <ul> <li>machine learning</li> <li>methods</li> <li>mic</li> </ul> <p>Please let me know if any further details are needed.</p>
<p>You don't really <em>need</em> to use <code>groupby</code> to do this.</p> <p>Consider your linked example:</p> <pre><code>list1=['hello','hope','hate','hack','bit','basket','code','come','chess'] </code></pre> <p>You can create the groups described with a native Python dict:</p> <pre><code>groups={} for word in list1: groups.setdefault(word[0],[]).append(word) &gt;&gt;&gt; groups {'h': ['hello', 'hope', 'hate', 'hack'], 'b': ['bit', 'basket'], 'c': ['code', 'come', 'chess']} </code></pre> <p>Or, with <code>defaultdict</code> if your prefer:</p> <pre><code>from collections import defaultdict groups=defaultdict(list) for word in list1: groups[word[0]].append(word) &gt;&gt;&gt; groups defaultdict(&lt;class 'list'&gt;, {'h': ['hello', 'hope', 'hate', 'hack'], 'b': ['bit', 'basket'], 'c': ['code', 'come', 'chess']}) </code></pre> <p>Both of these methods will work with completely unsorted data and gather the words based on the first letter. You are then free to use the values of that dict to make a list of lists if desired:</p> <pre><code>&gt;&gt;&gt; sorted(groups.values(), key=lambda s: s[0]) [['bit', 'basket'], ['code', 'come', 'chess'], ['hello', 'hope', 'hate', 'hack']] </code></pre> <p>Now if you <em>still</em> want to use <code>groupby</code> for some reason, you would likely do something like this:</p> <pre><code>groups={} for k,v in groupby(list1, key=lambda s: s[0]): groups.setdefault(k,[]).extend(v) </code></pre>
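<p>A concrete illustration of why the dict approach is more robust than <code>groupby</code> here: <code>groupby</code> only merges <em>consecutive</em> items with the same key, so any input that is not fully grouped by first letter yields fragmented groups, and the questioner's <code>concepts_dict[item[0][0]] = item</code> then silently overwrites all but the last fragment for each letter:</p>

```python
from itertools import groupby

# 'h' and 'b' words are interleaved, so each run becomes its own group
words = ['hello', 'bit', 'hope', 'basket']
grouped = [list(g) for k, g in groupby(words, key=lambda w: w[0])]
print(grouped)  # [['hello'], ['bit'], ['hope'], ['basket']]
```

<p>If the 13M-line file is even slightly out of order (or contains stray blank lines), this overwrite behaviour is one possible explanation for a tiny output dictionary with one or two elements per grouped list.</p>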
python
2
1,902,647
44,749,063
django channels on aws: daphne and workers running but websocket target unhealthy
<p>I have been following this article - <a href="https://blog.mangoforbreakfast.com/2017/02/13/django-channels-on-aws-elastic-beanstalk-using-an-alb/" rel="nofollow noreferrer">https://blog.mangoforbreakfast.com/2017/02/13/django-channels-on-aws-elastic-beanstalk-using-an-alb/</a> - to get my django-channels app working on AWS, but only non-websocket requests are getting handled.</p> <p>My channel layer setting is:</p> <pre><code> CHANNEL_LAYERS = { "default": { "BACKEND": "asgi_redis.RedisChannelLayer", "CONFIG": { "hosts": [os.environ.get('REDIS_URL', 'redis://localhost:6379')], }, "ROUTING": "malang.routing.channel_routing", }, } </code></pre> <p>I have two target groups, as mentioned in the article: one forwarding path / to port 80 and /ws/* to 5000.</p> <p>My supervisord.conf is:</p> <pre><code>[program:Daphne] environment=PATH="/opt/python/run/venv/bin" command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 malang.asgi:channel_layer directory=/opt/python/current/app autostart=true autorestart=true redirect_stderr=true user=root stdout_logfile=/tmp/daphne.out.log [program:Worker] environment=PATH="/opt/python/run/venv/bin" command= /opt/python/run/venv/bin/python manage.py runworker directory=/opt/python/current/app process_name=%(program_name)s_%(process_num)02d numprocs=4 autostart=true autorestart=true redirect_stderr=true stdout_logfile=/tmp/workers.out.log </code></pre> <p>When I check the result of supervisorctl status in the AWS logs, it shows them running fine. But I still get a 404 response for ws.</p> <p>Please help, and let me know if you want some more info.</p>
<p>It makes no sense to run a Redis backend locally on each instance; from the info you give, it doesn't look like you actually deployed one at all. Redis is a cache system that allows data sharing across different instances; architecturally it is closer to a DB than to a simple daemon thread. You should use an external Redis cache instead and refer to it in your Django conf. </p> <pre><code>CHANNEL_LAYERS = { "default": { "BACKEND": "asgi_redis.RedisChannelLayer", "ROUTING": "&lt;YOUR_APP&gt;.routing.application", "CONFIG": { "hosts": ["redis://"+REDIS_URL+":6379"], }, }, } </code></pre> <p>See the AWS ElastiCache service for that. </p>
python|django|amazon-web-services|django-channels|daphne
0
1,902,648
24,431,122
Using Django session key to use request sessions between views
<p>Is it a good idea to store Django's <code>_session_key</code> in another model as an identifier for a specific session?</p> <p>I am using the Django <code>_session_key</code> to identify a unique session inside a view, and then I am saving the <code>_session_key</code> in another object.</p> <pre><code>def myview(request): if request.method == "POST": myform = Myform(request.form) if myform.is_valid(): name = myform.cleaned_data['name'] title = myform.cleaned_data['title'] author_session = request.session._session_key # Creating a model object model1(name=name, title=title, author_session=author_session).save() return HttpResponseRedirect(reverse('myview2', kwargs={'name':model1.name})) else: # Some renders else: # Some other renders def myview2(request, name): obj1 = model1.objects.get(name=name) if request.session._session_key == obj1.author_session: # Some render else: # Some other render </code></pre> <p>Now, I am wondering whether it is a good idea to use <code>_session_key</code> as a unique identity for sessions between different views. Is there any other way to identify a unique session between views?</p> <p>P.S. - I have read that using <code>_session_key</code> is generally discouraged. </p> <p><strong>Also, please suggest how to write tests for the sessions between views.</strong> </p>
<p>No, this is entirely backwards. You should store the key of the model1 instance in the session in the first view, and get it out in the second.</p>
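<p>A rough sketch of what that reversal might look like (hypothetical names, untested; the point is that the session stores the object's primary key, not the other way round):</p>

```python
# Sketch only: Model1/MyForm stand in for the question's model and form.
def myview(request):
    if request.method == "POST":
        myform = MyForm(request.POST)
        if myform.is_valid():
            obj = Model1.objects.create(
                name=myform.cleaned_data['name'],
                title=myform.cleaned_data['title'],
            )
            # session -> object, instead of storing the session key on the object
            request.session['authored_pk'] = obj.pk
            return HttpResponseRedirect(reverse('myview2', kwargs={'name': obj.name}))
    ...

def myview2(request, name):
    obj = Model1.objects.get(name=name)
    if request.session.get('authored_pk') == obj.pk:
        ...  # render the author's view
    else:
        ...  # render the non-author view
```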
python|django|session|django-sessions
2
1,902,649
24,427,395
Search into excel file with python
<p>First of all, I'm a Python beginner, so I apologize for the trivial question :D I'm trying to search a *.xls file for a specific word using Python (v2.7).</p> <p>Short problem description/specification: 1. test.xls is the input file. 2. The target word is 2; I want to extract only the cells which contain exactly 2, not something that merely includes a 2 (e.g. cell value = 2 -> right! cell value = 2345 -> wrong!!)</p> <p>Below is the code:</p> <pre><code>book = open_workbook('test.xls',on_demand=True) item = 2 row=-1 n=0 for name in book.sheet_names(): if name.endswith('Traceability Matrix'): sheet = book.sheet_by_name(name) rowIndex = -1 for cell in sheet.col(1): # n=n+1 if item in cell.value: print "VAL ",cell.value print "ROW ",sheet.row(n) break if row != -1: cells = sheet.row(row) for cell in cells: print"&gt;&gt;", cell.value book.unload_sheet(name) </code></pre> <p>Now, my output is a list of rows which contain NOT only 2 (see the wrong case above, point 2); see the "print" results below:</p> <p>ROW [text:u'SRS5617\nSRS5618\nSRS5619\nSRS5620', text:u'RQ - 5282', empty:'', text:u'Function Plus', text:u'See Note ', empty:'', empty:'', empty:'', empty:'', empty:'', empty:'', text:u'Code inspection', text:u'(**), 2']</p> <p>Can someone help me? Any suggestions?</p> <p>Thanks!</p>
<p>Your problem is in these lines:</p> <pre><code> if item in cell.value: print "VAL ",cell.value print "ROW ",sheet.row(n) break </code></pre> <p>That checks whether a "2" <em>is in</em> <code>cell.value</code>, not whether <em>it is</em> 2.</p> <p>It could be changed to <code>if int(cell.value) == int(item):</code></p>
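<p>One caveat with <code>int(cell.value)</code>: xlrd returns numeric cells as floats and text cells as strings, and <code>int()</code> raises <code>ValueError</code> on a text cell such as <code>'(**), 2'</code>. A small hedged helper (plain Python, the name is invented here) that only matches an exact numeric value:</p>

```python
def is_exact_number(value, target):
    """True only when the cell's value equals the target number,
    not when it merely contains its digits."""
    try:
        return float(value) == float(target)
    except (TypeError, ValueError):
        return False

print(is_exact_number(2.0, 2))        # True  -- xlrd gives numbers back as floats
print(is_exact_number(2345, 2))       # False -- contains a 2 but is not 2
print(is_exact_number('(**), 2', 2))  # False -- text cell that mentions a 2
```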
python|excel|python-2.7
1
1,902,650
24,299,068
Find all matches between two strings with regex
<p>I am just starting to use regex for the first time and am trying to use it to parse some data from an HTML table. I am trying to grab everything between the <code>&lt;tr &gt;</code> and <code>&lt;/tr&gt;</code> tags, and then make a similar regex again to create a JSON array.</p> <p>I tried using this but it only is matching to the first group and not all of the rest.</p> <pre><code>&lt;tr &gt;(.*?)&lt;/tr&gt; </code></pre> <p>How do I make that find all matches between those tags?</p>
<p>Although using regex for this job is a bad idea (there are many ways for things to go wrong), your pattern is basically correct.</p> <p><strong>Returning All Matches with Python</strong></p> <p>The question then becomes about returning all matches or capture groups in Python. There are two basic ways:</p> <ol> <li>finditer</li> <li>findall</li> </ol> <p><strong>With finditer</strong></p> <pre><code>for match in regex.finditer(subject): print("The Overall Match: ", match.group(0)) print("Group 1: ", match.group(1)) </code></pre> <p><strong>With findall</strong></p> <p><code>findall</code> is a bit strange. When you have capture groups, to access both the capture groups and the overall match, you have to wrap your original regex in parentheses (so that the overall match is captured too). In your case, if you wanted to be able to access both the outside of the tags and the inside (which you captured with Group 1), your regex would become: <code>(&lt;tr &gt;(.*?)&lt;/tr&gt;)</code>. Then you do:</p> <pre><code>matches = regex.findall(subject) if len(matches)&gt;0: for match in matches: print ("The Overall Match: ",match[0]) print ("Group 1: ",match[1]) </code></pre>
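<p>A quick runnable check of the pattern with <code>findall</code>; note the <code>re.DOTALL</code> flag, since real table rows usually span multiple lines and <code>.</code> does not cross newlines by default:</p>

```python
import re

html = """<tr >row one</tr>
<tr >row
two</tr>"""

# DOTALL lets .*? match across the newline inside the second row
pattern = re.compile(r"<tr >(.*?)</tr>", re.DOTALL)
print(pattern.findall(html))  # ['row one', 'row\ntwo']
```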
python|regex
1
1,902,651
20,726,855
How to register filterChainProxy in CherryPy?
<p>I recently read 'Spring Python 1.1' by Greg L. Turnquist to learn the Spring framework on Python. While working through it, I tried a few examples introduced in the book and ran into trouble in chapter 8, "Case Study I - Integrating Spring Python with your Web Application."</p> <p>I get the error below, and I don't have a clue even after searching Google:</p> <pre><code>500 Internal Server Error The server encountered an unexpected condition which prevented it from fulfilling the request. Traceback (most recent call last): File "c:\Python27\lib\site-packages\cherrypy\_cprequest.py", line 633, in respond self.namespaces(self.config) File "c:\Python27\lib\site-packages\cherrypy\lib\reprconf.py", line 115, in __call__ exit(None, None, None) File "c:\Python27\lib\site-packages\cherrypy\_cptools.py", line 446, in __exit__ tool = getattr(self, name) AttributeError: 'Toolbox' object has no attribute 'filterChainProxy' Powered by CherryPy 3.2.4 </code></pre> <p>I figured out that CherryPy doesn't have the attribute 'filterChainProxy' in cherrypy.tools. However, the example code in main2.py below tries to use "tools.filterChainProxy.on", and that's the reason as far as I know.</p> <p>I cannot find any solution for this issue on Google. Why did the author write this code using tools.filterChainProxy in the first place? Was it supported back then and deprecated now? Or why else? 
I looked it up in the CherryPy documentation and still have no clue.</p> <p>Please help me with the code below (the relevant parts involve <code>filterChainProxy</code>): </p> <pre><code>import cherrypy import os from springpython.context import ApplicationContext from springpython.security.context import * from ctx2 import * if __name__ == '__main__': cherrypy.config.update({'server.socket_port': 8009}) ctx = ApplicationContext(SpringBankAppContext()) SecurityContextHolder.setStrategy(SecurityContextHolder.MODE_GLOBAL) SecurityContextHolder.getContext() conf = {"/": {"tools.sessions.on":True, "tools.filterChainProxy.on":True}} cherrypy.tree.mount( ctx.get_object("view"), '/', config=conf) cherrypy.engine.start() cherrypy.engine.block() </code></pre> <p></p> <pre><code>from springpython.config import PythonConfig, Object from springpython.security.providers import * from springpython.security.providers.dao import * from springpython.security.userdetails import * from springpython.security.vote import * from springpython.security.web import * from springpython.security.cherrypy3 import * from app2 import * class SpringBankAppContext(PythonConfig): def __init__(self): PythonConfig.__init__(self) @Object def view(self): view = SpringBankView() view.auth_provider = self.auth_provider() view.filter = self.auth_processing_filter() view.http_context_filter = self.httpSessionContextIntegrationFilter() return view @Object def filterChainProxy(self): return CP3FilterChainProxy(filterInvocationDefinitionSource = [ ("/login.*", ["httpSessionContextIntegrationFilter"]), ("/.*", ["httpSessionContextIntegrationFilter", "exception_translation_filter", "auth_processing_filter", "filter_security_interceptor"]) ]) @Object def httpSessionContextIntegrationFilter(self): filter = HttpSessionContextIntegrationFilter() filter.sessionStrategy = self.session_strategy() return filter @Object def session_strategy(self): return CP3SessionStrategy() @Object def 
exception_translation_filter(self): filter = ExceptionTranslationFilter() filter.authenticationEntryPoint = self.auth_filter_entry_pt() filter.accessDeniedHandler = self.accessDeniedHandler() return filter @Object def auth_filter_entry_pt(self): filter = AuthenticationProcessingFilterEntryPoint() filter.loginFormUrl = "/login" filter.redirectStrategy = self.redirectStrategy() return filter @Object def accessDeniedHandler(self): handler = SimpleAccessDeniedHandler() handler.errorPage = "/accessDenied" handler.redirectStrategy = self.redirectStrategy() return handler @Object def redirectStrategy(self): return CP3RedirectStrategy() @Object def auth_processing_filter(self): filter = AuthenticationProcessingFilter() filter.auth_manager = self.auth_manager() filter.alwaysReauthenticate = False return filter @Object def auth_manager(self): auth_manager = AuthenticationManager() auth_manager.auth_providers = [self.auth_provider()] return auth_manager @Object def auth_provider(self): provider = DaoAuthenticationProvider() provider.user_details_service = self.user_details_service() provider.password_encoder = PlaintextPasswordEncoder() return provider @Object def user_details_service(self): user_details_service = InMemoryUserDetailsService() user_details_service.user_dict = { "alice": ("alicespassword",["ROLE_CUSTOMER"], True), "bob": ("bobspassword", ["ROLE_MGR"], True), "carol": ("carolspassword", ["ROLE_SUPERVISOR"], True) } return user_details_service @Object def filter_security_interceptor(self): filter = FilterSecurityInterceptor() filter.auth_manager = self.auth_manager() filter.access_decision_mgr = self.access_decision_mgr() filter.sessionStrategy = self.session_strategy() filter.obj_def_source = [ ("/.*", ["ROLE_CUSTOMER", "ROLE_MGR", "ROLE_SUPERVISOR"]) ] return filter @Object def access_decision_mgr(self): access_decision_mgr = AffirmativeBased() access_decision_mgr.allow_if_all_abstain = False access_decision_mgr.access_decision_voters = [RoleVoter()] return 
access_decision_mgr </code></pre> <p></p> <pre><code>import cherrypy from springpython.security import * from springpython.security.providers import * from springpython.security.context import * class SpringBankView(object): def __init__(self): self.filter = None self.auth_provider = None self.http_context_filter = None @cherrypy.expose def index(self): return """ Welcome to SpringBank! &lt;p&gt; &lt;p&gt; &lt;a href="logout"&gt;Logout&lt;/a href&gt; """ @cherrypy.expose def login(self, from_page="/", login="", password="", error_msg=""): if login != "" and password != "": try: self.attempt_auth(login, password) raise cherrypy.HTTPRedirect(from_page) except AuthenticationException, e: raise cherrypy.HTTPRedirect( "?login=%s&amp;error_msg=Username/password failure" % login) return """ %s&lt;p&gt; &lt;form method="POST" action=""&gt; &lt;table&gt; &lt;tr&gt; &lt;td&gt;Login:&lt;/td&gt; &lt;td&gt;&lt;input type="text" name="login" value="%s"/&gt;&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Password:&lt;/td&gt; &lt;td&gt;&lt;input type="password" name="password"/&gt;&lt;/td&gt; &lt;/tr&gt; &lt;/table&gt; &lt;input type="hidden" name="from_page" value="%s"/&gt;&lt;br/&gt; &lt;input type="submit"/&gt; &lt;/form&gt; """ % (error_msg, login, from_page) def attempt_auth(self, username, password): token = UsernamePasswordAuthenticationToken(username, password) SecurityContextHolder.getContext().authentication = \ self.auth_provider.authenticate(token) self.http_context_filter.saveContext() @cherrypy.expose def logout(self): self.filter.logout() self.http_context_filter.saveContext() raise cherrypy.HTTPRedirect("/") </code></pre> <p>The full source code is found in the url: <a href="https://www.packtpub.com/sites/defau.../0660_Code.zip" rel="nofollow">https://www.packtpub.com/sites/defau.../0660_Code.zip</a></p> <p>Thanks</p>
<p>The <code>filterChainProxy</code> is not a CherryPy thing; it is defined in your second code block, and apparently it should then be assigned to <code>tools.filterChainProxy</code>, but for some reason it's not.</p>
python|spring|cherrypy
0
1,902,652
20,645,770
Log-In Automation
<p>I am trying to write a Python script that will automate logging in to a web-client. This is to automatically log-in to the web-client with a provided user name and password. Below is my Python code:</p> <pre><code>import httplib import urllib import urllib2 header = { 'Host' : 'localhost.localdomain', 'Connection' : 'keep-alive', 'Origin' : 'localhost.localdomain', #check what origin does 'User-Agent' : 'Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20131029 Firefox/17.0', 'Content-Type' : 'application/x-www-form-urlencoded', 'Accept' : 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Referer' : 'http://localhost.localdomain/mail/index.php/mail/auth/processlogin', 'Accept-Encoding' : 'gzip, deflate', 'Accept-Language' : 'en-US,en;q=0.5', 'Cookie' : 'atmail6=tdl3ckcf4oo88fsgvt5cetoc92' } content = { 'emailName' : 'pen.test.clin', 'emailDomain' : '', 'emailDomainDefault' : '', 'cssStyle' : 'original', 'email' : 'pen.test.clin', 'password' : 'aasdjk34', 'requestedServer' : '', 'MailType' : 'IMAP', 'Language' : '' } def runBruteForceTesting(): url="http://localhost.localdomain/mail/index.php/mail/auth/processlogin" for i in range (0,100): data = urllib.urlencode(content) request = urllib2.Request(url, data, header) response = urllib2.urlopen(url, request) print 'hi' print request, response runBruteForceTesting() </code></pre> <p>However: I am getting the following error:</p> <pre><code>Traceback (most recent call last): File "C:/Users/dheerajg/Desktop/python/log.py", line 39, in &lt;module&gt; runBruteForceTesting() File "C:/Users/dheerajg/Desktop/python/log.py", line 35, in runBruteForceTesting response = urllib2.urlopen(url, request) File "C:\Python27\lib\urllib2.py", line 127, in urlopen return _opener.open(url, data, timeout) File "C:\Python27\lib\urllib2.py", line 402, in open req = meth(req) File "C:\Python27\lib\urllib2.py", line 1123, in do_request_ 'Content-length', '%d' % len(data)) File "C:\Python27\lib\urllib2.py", line 229, in 
__getattr__ raise AttributeError, attr AttributeError: __len__ </code></pre>
<p>The <code>request</code> object that you received from <code>urllib2.Request</code> does not have a <code>__len__</code> method; in your context, that means you're calling <code>urllib2.urlopen</code> with a wrong second argument.</p> <p>Looking at the documentation, it says the argument needs to be a string:</p> <blockquote> <p>data may be a string specifying additional data to send to the server, or None if no such data is needed.</p> </blockquote> <p>So what about calling <code>urlopen</code> like this?</p> <pre><code>response = urllib2.urlopen(url, request.get_data()) </code></pre>
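<p>To see why the original call blew up, here is a small stand-in (not the real <code>urllib2.Request</code>, just a hypothetical minimal class with the same relevant property: no <code>__len__</code>). <code>urlopen</code> tries to compute <code>len(data)</code> for the Content-Length header, which works for the encoded string but not for a Request object. In Python 2 the idiomatic alternative is simply <code>urllib2.urlopen(request)</code>, since the Request already carries the data:</p>

```python
class Request(object):
    """Hypothetical stand-in for urllib2.Request -- defines no __len__."""
    def __init__(self, url, data):
        self.url, self.data = url, data

req = Request("http://example.com", "login=bob&password=x")

try:
    len(req)  # roughly what urlopen does to its `data` argument
    has_len = True
except TypeError:  # Python 2's urllib2 surfaced this as AttributeError: __len__
    has_len = False

print(has_len)        # False -- the Request object has no length
print(len(req.data))  # 20    -- the encoded string does
```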
python|http|automation
1
1,902,653
72,052,157
Why cv2.write saves black images?
<p><em><strong>hi folks, greetings</strong></em></p> <p>am using this code that I found on the web, to apply a wiener filter on an image, the code :</p> <pre><code>from scipy.signal.signaltools import deconvolve from skimage import color, data, restoration img = color.rgb2gray(img) from scipy.signal import convolve2d psf = np.ones((5, 5)) / 25 img = convolve2d(img, psf, 'same') img += 0.1 * img.std() * np.random.standard_normal(img.shape) deconvolved_img = restoration.wiener(img, psf, 1100) f, (plot1, plot2) = plt.subplots(1, 2) plot1.imshow(img) plot2.imshow(deconvolved_img) plt.show() cv2.imwrite(&quot;wiener result 2.jpeg&quot;,deconvolved_img) </code></pre> <p>the issue is when I plot the result using Matplotlib I get this :</p> <p><a href="https://i.stack.imgur.com/eEzOl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eEzOl.png" alt="enter image description here" /></a></p> <p><em><strong>but</strong></em> when I type <code>cv2.imwrite(&quot;wiener result 2.jpeg&quot;,deconvolved_img)</code> to save the image, I get this :</p> <p><a href="https://i.stack.imgur.com/9rXAz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9rXAz.jpg" alt="enter image description here" /></a></p> <p>why do I get a black image when I save it ??</p>
<p>There are two ways to save your images to a file:</p> <p><strong>Method 1: Using <code>matplotlib</code></strong></p> <p>Since you are using the <code>matplotlib</code> library to show the image (<code>plt.show()</code>), you can use the same library to save your plots as well, using <code>plt.savefig()</code>:</p> <pre><code>plot1.imshow(img) plot2.imshow(deconvolved_img) plt.savefig('path_to_save') # mention the path where you want to save the plots plt.show() </code></pre> <p><strong>Method 2: Using <code>OpenCV</code></strong></p> <p>You can also save your file using OpenCV.</p> <p>But prior to saving your image, a conversion is required. The image variable <code>deconvolved_img</code> is of <code>float</code> data type, with values ranging between [0 - 1]. Hence, when you save such an image directly, it is perceived as a black image.</p> <p>In OpenCV you can convert your image to <code>int</code> data type and scale the pixel intensities into the expected [0 - 255] range using the <code>cv2.normalize()</code> function:</p> <pre><code>result = cv2.normalize(deconvolved_img, dst=None, alpha=0, beta=255,norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U) cv2.imwrite('path_to_save', result) # mention the path where you want to save the result </code></pre>
python|opencv|image-processing|filter|scikit-image
2
1,902,654
36,040,029
How to organize imports in a Python class?
<p>Suppose I have a Python class ABC; I want to import some non-default modules into my project, but I'm not sure if the user who runs my code has them installed. To check, I've enclosed my imports in the class inside a try and catch block, as such:</p> <pre><code>class ABC: _canRun = True try: import XYZ except Exception: _canRun = False def test_function(self): if self._canRun: import XYZ #do stuff using XYZ module else: print("Cannot do stuff") return None </code></pre> <p>I feel like this is bad design for some reason. Is there a better pattern I can use?</p>
<p>Imports are usually placed at the start of the .py file: </p> <pre><code>try: import XYZ except ImportError: XYZ = None class ABC: def test_function(self): if XYZ is None: raise Exception("Cannot do stuff") </code></pre> <p>However, the try/except ImportError trick is usually done when you can pick an alternative:</p> <pre><code>try: import XYZ except ImportError: import ZZTop as XYZ # if ZZTop fails, all fails. And that's fine. class ABC: def test_function(self): XYZ.something() </code></pre> <p>Otherwise it is advisable to just fail as early as possible:</p> <pre><code>import XYZ class ABC: def test_function(self): XYZ.something() </code></pre>
python|class|import
1
1,902,655
35,886,405
python is not logging all content to file
<p>I want to log the script output to a file while still displaying the output to the screen. It works fine, except for some cases where not all the content is written to the file (one or two lines can be missed, if the output is long) Below is my code:</p> <pre><code>class Tee(object): def __init__(self, *files): self.files = files def write(self, obj): for f in self.files: f.write(obj) f.flush() write_log = open("log.txt", 'a', 0) sys.stdout = Tee(sys.stdout, write_log) sys.stderr = Tee(sys.stderr, write_log) </code></pre> <p>Tried all the following options at the end of the code, but the result is the same:</p> <pre><code>os.fsync(write_log.fileno()) write_log.flush() write_log.close() </code></pre>
<p>Try using the <code>with</code> statement or use <code>try-except</code> and explicitly close the file.</p>
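<p>A sketch of what "explicitly close the file" can look like with the question's <code>Tee</code> class: restore <code>sys.stdout</code> and close the log in a <code>finally</code> block, so buffered data reaches disk even if the script dies mid-run (a common cause of missing trailing lines):</p>

```python
import sys

class Tee(object):
    """Write to several file objects at once, flushing after every write."""
    def __init__(self, *files):
        self.files = files

    def write(self, obj):
        for f in self.files:
            f.write(obj)
            f.flush()

    def flush(self):  # some libraries call sys.stdout.flush() directly
        for f in self.files:
            f.flush()

log = open("log.txt", "a")
old_stdout = sys.stdout
sys.stdout = Tee(sys.stdout, log)
try:
    print("logged to screen and file")
finally:
    sys.stdout = old_stdout  # restore before closing
    log.close()              # explicit close flushes anything still buffered
```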
python
0
1,902,656
15,350,260
Python PIL library not working image.thumbnail(size, Image.ANTIALIAS)
<p>I'm trying to debug this script in Python; it starts with <code>from PIL import Image, ImageChops, ImageOps</code>.</p> <p>I've searched all over; the problem seems to be the <code>image.thumbnail(size, Image.ANTIALIAS)</code> line here. Anyone have any ideas? Thanks.</p> <pre><code>image = Image.open(f_in) print "got here" image.thumbnail(size, Image.ANTIALIAS) print "cannot get here" image_size = image.size if pad: thumb = image.crop( (0, 0, size[0], size[1]) ) offset_x = max( (size[0] - image_size[0]) / 2, 0 ) offset_y = max( (size[1] - image_size[1]) / 2, 0 ) thumb = ImageChops.offset(thumb, offset_x, offset_y) else: thumb = ImageOps.fit(image, size, Image.ANTIALIAS, (0.5, 0.5)) thumb.save(f_out) </code></pre> <p><strong>EDIT</strong>: Thanks for the quick answer, Mark. I figured it out.</p> <p>I had to:</p> <pre><code>pip uninstall PIL sudo apt-get install libjpeg8-dev pip install PIL </code></pre> <p>I didn't have libjpeg installed. Not sure why I didn't get an error.</p>
<p>If the program never gets to the line "cannot get here", then the problem is that <code>thumbnail</code> is throwing an exception. You didn't mention that in the question, though; it should have generated an error.</p> <p>PIL uses lazy image loading - in the <code>open</code> call it might open the file, but it doesn't actually try to read the whole thing in. If your file is corrupt or in the wrong format, it will fail once you try to do something with the image, as <code>thumbnail</code> is doing.</p>
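<p>One way to surface such failures at open time is to force PIL to decode the whole file immediately with <code>load()</code>. This is a hedged sketch (the helper name is invented), self-checked against an in-memory PNG; old PIL used <code>Image.ANTIALIAS</code> where current Pillow uses <code>Image.LANCZOS</code>:</p>

```python
from io import BytesIO
from PIL import Image

def open_image_strict(source):
    """Open an image and force a full decode right away, so a corrupt or
    mis-formatted file fails here instead of later inside thumbnail()."""
    image = Image.open(source)
    image.load()  # defeat PIL's lazy loading
    return image

# self-check with an in-memory PNG
buf = BytesIO()
Image.new("RGB", (64, 64), "red").save(buf, format="PNG")
buf.seek(0)
img = open_image_strict(buf)
img.thumbnail((16, 16))
print(img.size)  # (16, 16)
```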
python|python-imaging-library
1
1,902,657
15,312,889
Django, serving static files with default settings
<p>I'm writing an app using the Django Python web framework. I added my apps to INSTALLED_APPS in settings.py and my templates are served without problems. But, concerning the static files, I have a little problem with them. I wanted to use the default parameters in STATICFILES_FINDERS:</p> <pre><code>STATICFILES_FINDERS = ( 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', ) </code></pre> <p>When I want to include a CSS file, for example, in my template I just do (supposing '/' will be my static/applicationName in my application folder):</p> <pre><code>&lt;link rel='stylesheet' href="/css/style.css"&gt; </code></pre> <p>Am I doing it wrong, and if so, what's the good way to deal with static files?</p> <p>UPDATE: My base.html template:</p> <pre><code>{% load staticfiles %} &lt;!doctype html&gt; &lt;html&gt; &lt;head&gt; {% block head %} &lt;meta charset='utf-8'&gt; &lt;title&gt; {% block title %} {% endblock %} - Find Something &lt;/title&gt; &lt;link rel='stylesheet' href="{% static 'css/style.css' %}"&gt; {% endblock head %} &lt;/head&gt; &lt;body&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>And my inheriting template is:</p> <pre><code>{% extends "frontend/base.html" %} {% block title %} {{ title }} {% endblock %} {% block head %} {{ super() }} {% endblock %} </code></pre>
<p>Load the static template tags at the top of the template and build the URL with the <code>{% static %}</code> tag; in the inheriting template, use <code>{{ block.super }}</code> instead of <code>{{ super() }}</code>:</p> <pre><code>{% load staticfiles %} &lt;link rel='stylesheet' href="{% static 'css/style.css' %}"&gt; {% block head %} {{ block.super }} {% endblock %} </code></pre>
python|django
1
1,902,658
15,418,900
Why isn't this assignment statement one-way?
<p>This problem is very simple to appreciate, here is the program - </p> <pre><code>hisc = [1,2,3,4] print("\n", hisc) ohisc = hisc hisc.append(5) print("\nPreviously...", ohisc) print("\nAnd now...", hisc) input("\nETE") </code></pre> <p>When I run it ohisc gets the 5. Why does ohisc change? How can I stop it from changing? Apologies if this is something obvious.</p>
<p><a href="http://docs.python.org/3/reference/datamodel.html" rel="nofollow">Python variables are references</a>. As such, the assignment copies the <em>reference</em> rather than the content of the variable.</p> <p>In order to avoid this, all you have to do is create a <em>new</em> object:</p> <pre><code>ohisc = list(hisc) </code></pre> <p>This is using the <a href="http://docs.python.org/3/library/functions.html#list" rel="nofollow"><code>list</code> constructor</a> which creates a new list from a given iterable.</p> <p>Alternatively you can also assign from a slice (which creates a new object):</p> <pre><code>ohisc = hisc[:] </code></pre> <p><code>[]</code> is the <a href="http://docs.python.org/3/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange" rel="nofollow">general slice operator</a> which is used to extract a subset from a given collection. We simply leave out the start and end position (they default to the begin and end of the collection, respectively).</p>
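<p>For nested lists, note that both <code>list(hisc)</code> and <code>hisc[:]</code> are <em>shallow</em> copies: the outer list is new, but the inner lists are still shared. <code>copy.deepcopy</code> copies all the way down:</p>

```python
import copy

hisc = [[1, 2], [3, 4]]
shallow = list(hisc)        # new outer list, same inner lists
deep = copy.deepcopy(hisc)  # fully independent copy

hisc[0].append(99)
print(shallow[0])  # [1, 2, 99] -- still sees the change
print(deep[0])     # [1, 2]     -- unaffected
```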
python|python-3.x
5
1,902,659
29,741,388
ImportError for perl in python terminal
<p>I have Perl v5.10.1 installed on my CentOS 6 linux box. However, when I try to import perl in the python terminal, I get an ImportError.</p> <pre><code>&gt;&gt;&gt;import perl Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; ImportError: No module named perl </code></pre> <p>I had a similar issue earlier with Gnuplot, but that was resolved by simply installing another package. <a href="https://stackoverflow.com/questions/29429886/importerror-for-gnuplot-in-python-terminal/29430484#29430484">ImportError for Gnuplot in python terminal</a></p> <p>I cannot figure out what is wrong.</p> <p>Please help. Thanks.</p>
<p>First you have to download the pyperl package and then install it.<br> Download link: <a href="http://www.felix-schwarz.name/files/opensource/pyperl/" rel="nofollow">http://www.felix-schwarz.name/files/opensource/pyperl/</a></p> <ol> <li>Download the package</li> <li>Unzip it if it is zipped</li> <li>Open cmd and cd into the directory containing setup.py</li> <li>Type in <code>python setup.py install</code></li> </ol>
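<p>In a shell, the steps above might look like this (file names are hypothetical; adjust them to the archive you actually downloaded):</p>

```shell
# unpack the downloaded archive and install it
unzip pyperl-1.0.1.zip
cd pyperl-1.0.1
python setup.py install   # may need sudo for a system-wide install
```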
python|linux|perl|centos6|importerror
0
1,902,660
46,414,074
Python seems to incorrectly identify case-sensitive string using regex
<p>I'm checking for a case-sensitive string pattern using Python 2.7 and it seems to return an incorrect match. I've run the following tests:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; rex_str = "^((BOA_[0-9]{4}-[0-9]{1,3})(?:CO)?.(?i)pdf$)" &gt;&gt;&gt; not re.match(rex_str, 'BOA_1988-148.pdf') &gt;&gt;&gt; False &gt;&gt;&gt; not re.match(rex_str, 'BOA_1988-148.PDF') &gt;&gt;&gt; False &gt;&gt;&gt; not re.match(rex_str, 'BOA1988-148.pdf') &gt;&gt;&gt; True &gt;&gt;&gt; not re.match(rex_str, 'boa_1988-148.pdf') &gt;&gt;&gt; False </code></pre> <p>The first three tests are correct, but the final test, 'boa_1988-148.pdf' should return True because the pattern is supposed to treat the first 3 characters (BOA) as case-sensitive.</p> <p>I checked the expression with an online tester (<a href="https://regex101.com/" rel="nofollow noreferrer">https://regex101.com/</a>) and the pattern was correct, flagging the final as a no match because the 'boa' was lower case. Am I missing something or do you have to explicitly declare a group as case-sensitive using a case-sensitive mode like (?c)?</p>
<p>Flags do not apply to portions of a regex. You told the regex engine to match case insensitively:</p> <pre><code>(?i) </code></pre> <p>From the the <a href="https://docs.python.org/3/library/re.html#regular-expression-syntax" rel="nofollow noreferrer">syntax documentation</a>:</p> <blockquote> <pre><code>(?aiLmsux) </code></pre> <p>(One or more letters from the set <code>'a'</code>, <code>'i'</code>, <code>'L'</code>, <code>'m'</code>, <code>'s'</code>, <code>'u'</code>, <code>'x'</code>.) The group matches the empty string; the letters set the corresponding flags: <code>re.A</code> (ASCII-only matching), <code>re.I</code> (ignore case), <code>re.L</code> (locale dependent), <code>re.M</code> (multi-line), <code>re.S</code> (dot matches all), and <code>re.X</code> (verbose), <strong><em>for the entire regular expression</em></strong>. (The flags are described in Module Contents.) This is useful if you wish to include the flags as part of the regular expression, instead of passing a flag argument to the <code>re.compile()</code> function. Flags should be used first in the expression string.</p> </blockquote> <p>Emphasis mine, the flag applies to the <strong>whole pattern</strong>, not just a substring. If you need to match just <code>pdf</code> or <code>PDF</code>, use that in your pattern directly:</p> <pre><code>r"^((BOA_[0-9]{4}-[0-9]{1,3})(?:CO)?.(?:pdf|PDF)$)" </code></pre> <p>This matches either <code>.pdf</code> or <code>.PDF</code>. If you need to match any mix of uppercase and lowercase, use:</p> <pre><code>r"^((BOA_[0-9]{4}-[0-9]{1,3})(?:CO)?.[pP][dD][fF]$)" </code></pre>
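<p>A quick runnable check of the corrected pattern; note the literal dot is escaped here (<code>\.</code>), since the original bare <code>.</code> would also accept a name like <code>BOA_1988-148xpdf</code>:</p>

```python
import re

rex_str = r"^((BOA_[0-9]{4}-[0-9]{1,3})(?:CO)?\.(?:pdf|PDF)$)"
print(bool(re.match(rex_str, 'BOA_1988-148.pdf')))  # True
print(bool(re.match(rex_str, 'BOA_1988-148.PDF')))  # True
print(bool(re.match(rex_str, 'boa_1988-148.pdf')))  # False
```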
python|regex|python-2.7|case-sensitive
0
1,902,661
46,415,701
(Python) Help fixing a Hangman Game
<p>I really need help to fix this code. Basically, it is a hangman game whereby users can guess <em>letters</em> or <em>words</em>.</p> <p>If the user enters a letter, it will work like a regular hangman game. However, the final chance (Guess #6) must be a word. If the user used letter guesses and got it correct before Guess #6, user would not go through the word guess. </p> <p>The user can guess the word beforehand but if it is wrong, they will lose 2 chances. So if they guess the wrong word in the beginning, they will have 4 chances left but the last chance will still be the word guess.</p> <p>edit: The program will now only have 5 letter guesses whether right or wrong but the final guess will still be a word guess and we still have the optional early word guesses. The counter appears to be working now but I am unsure how to separate the words. (edit: I have realised that there is also a problem to append the words.)</p> <p>This is my current code. I changed the part when its (counter &lt; 5) in user guess. 
The part where (counter == 5) is like the old code to compare.</p> <pre><code>import random wordlist = 'artist breeze circle decent enroll filthy growth honest invest kernel letter narrow meteor policy pursue roster runway scheme ripple toddle wobbly zeroes'.upper().split() random.shuffle(wordlist) counter = 0 def draw_board(): #Display words here for i in secret_word: if i in correct: print(i, end=' ') else: print('_', end=' ') print("\n") print("*** MISSES ***") for i in incorrect: print(i, end=' ') print('\n*********************') def user_guess(): #For user to input guess global counter secret_word = wordlist.pop() while(counter &lt; 5): guess = input("Guess a letter or word\n: ").upper() if(len(guess) &gt; 1): guess_list = list(guess) if(guess_list == secret_list): correct.append(guess_list) else: counter = counter + 1 elif guess in secret_word: correct.append(guess) else: incorrect.append(guess) return counter if(counter == 5): wordguess = input("Enter your word\n: ").upper() if(guess == secret_word): correct.append(wordguess) else: counter = counter + 1 print(counter) return counter def check_win(): #Check if user has won or not global counter if(counter &gt; 5): return 'loss' for i in secret_word: if i not in correct: return 'no win' return 'win' #pop is used to retrieve a word from word list secret_word = wordlist.pop() secret_list = list(secret_word) correct = [] incorrect = [] print("DEBUG: %s" % secret_word) while True: draw_board() user_guess() counter = counter + 1 win_condition = check_win() if win_condition == 'loss': print("You lose!") break elif win_condition == 'win': print("You win!") </code></pre>
<p>There are at least 3 things that should be changed here to avoid extra loops.</p> <ol> <li>Add a break after <code>print("you win")</code></li> <li>Delete <code>counter = 0</code> in user_guess and replace <code>def user_guess():</code> with <code>def user_guess(counter):</code> </li> <li>Replace <code>user_guess()</code> with <code>counter = user_guess(counter)</code></li> </ol> <p>Additionally you should probably change the whiles in user_guess() to ifs.</p>
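A minimal sketch of points 2 and 3 — thread the counter through the function instead of using a global — with the real input handling stubbed out by a `guess_is_correct` flag (a stand-in for this illustration, not part of the original game):

```python
def user_guess(counter, guess_is_correct):
    # Take the current counter in as a parameter, bump it on a wrong
    # guess, and hand it back to the caller instead of mutating a global.
    if not guess_is_correct:
        counter += 1
    return counter

counter = 0
counter = user_guess(counter, guess_is_correct=False)  # wrong guess costs a life
counter = user_guess(counter, guess_is_correct=True)   # correct guess does not
```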
python|python-3.x
0
1,902,662
60,785,209
Replace a number behind word in a text file
<p>I seem to be a bit confused about how to use strings, ints, and floats. I'm trying to read a .txt file named TEST.txt. This contains:</p> <pre><code>Hoi Total: 350 </code></pre> <p>I want to add a value to the total number. I am trying this like so:</p> <pre><code># open the file with open("TEST.txt") as f: # read lines lines = f.readlines() # set a string only to read line 2 string = (lines[2]) # remove the characters from the string stringNub = string.replace("Total: ","") print (stringNub) min = 300 sum3 = int(stringNub) + int(min) print(sum3) # replace the string with open('TEST.txt','r') as file: filedata = file.read() filedata = filedata.replace(stringNub,sum3) with open('TEST.txt','w') as file: file.write(filedata) # replace the string with open('TEST.txt','r') as file: filedata = file.read() filedata = filedata.replace(string2,sum3) with open('TEST.txt','w') as file: file.write(filedata) </code></pre> <p>I was hoping that the code would write to the txt like so:</p> <pre><code>Hoi Total: 650 </code></pre> <p>Instead I ended up with this error: TypeError: replace() argument 2 must be str, not int</p> <p>But if I make my <strong>int</strong> a <strong>str</strong> the output will be <strong>350300</strong>.</p> <p>(I'm very much a beginner / hobbyist). I know the code probably doesn't look that pretty, but can anyone tell me what I'm doing wrong?</p>
<p>Here is one solution. Replace the number rewriting it in the correct file position.</p> <pre><code>with open('TEST.txt', 'r+') as f: text = f.read() i = text.index('Total: ') + 7 num = int(text[i:]) + 300 f.seek(i) f.write(str(num)) </code></pre>
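One caveat: `seek`/`write` overwrites bytes in place, so it only gives a clean result while the new total has at least as many digits as the old one. A hedged alternative that rewrites the whole file instead, demonstrated here against a throwaway copy of TEST.txt (`rpartition` stands in for the `index` arithmetic):

```python
import os
import tempfile

# Build a temporary TEST.txt to demonstrate on.
path = os.path.join(tempfile.mkdtemp(), "TEST.txt")
with open(path, "w") as f:
    f.write("Hoi\n\nTotal: 350")

# Read everything, bump the number after "Total: ", rewrite the file.
with open(path) as f:
    text = f.read()
head, sep, num = text.rpartition("Total: ")
with open(path, "w") as f:
    f.write(head + sep + str(int(num) + 300))

with open(path) as f:
    result = f.read()
```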
python|string|text
1
1,902,663
53,602,404
Computing row-wise correlation coefficients between two 2d arrays in Python
<p>I have two numpy arrays of identical size <code>M X T</code> (let's call them <code>A</code> and <code>B</code>). I'd like to compute the Pearson correlation coefficient across T between each pair of <em>the same row</em> m in A and B (so, <code>A[i,:]</code> and <code>B[i,:]</code>, then <code>A[j,:]</code> and <code>B[j,:]</code>; but never <code>A[i,:]</code> and <code>B[j,:]</code>, for example).</p> <p>I'm expecting my output to be either a one-dimensional array with shape <code>(M,)</code> or a two-dimensional array with shape <code>(M,1)</code>. </p> <p>The arrays are quite large (on the order of 1-2 million rows), so I'm looking for a vectorized solution that will let me avoid a for-loop. Apologies if this has already been answered, but it seems like many of the code snippets in previous answers (e.g., <a href="https://stackoverflow.com/questions/30143417/computing-the-correlation-coefficient-between-two-multi-dimensional-arrays">this one</a>) are designed to give the full <code>M X M</code> correlation matrix -- i.e., correlation coefficients between all possible pairs of rows, rather than just index-matched rows; what I am looking for is basically just the diagonal of this matrix, but it feels wasteful to calculate the whole thing if all I need is the diagonal -- and in fact it's throwing memory errors when I try to do that anyway.... </p> <p>What's the fastest way to implement this? Thanks very much in advance.</p>
<p>I think I'd just use a list-comprehension and a module for calculating the coefficient:</p> <pre><code>from scipy.stats.stats import pearsonr import numpy as np M = 10 T = 4 A = np.random.rand(M*T).reshape((M, T)) B = np.random.rand(M*T).reshape((M, T)) diag_pear_coef = [pearsonr(A[i, :], B[i, :])[0] for i in range(M)] </code></pre> <p>Does that work for you? Note that <code>pearsonr</code> returns more than just the correlation coefficient, hence the <code>[0]</code> indexing.<br> Good luck!</p>
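Given the question's emphasis on millions of rows, here is a loop-free sketch of the same diagonal in plain NumPy — under the assumption that the data has no NaNs and no constant rows, so the denominator is never zero:

```python
import numpy as np

def rowwise_pearson(A, B):
    # Demean each row, then take the normalized dot product of matching
    # rows -- the diagonal of the full correlation matrix, without ever
    # building the M x M matrix.
    Am = A - A.mean(axis=1, keepdims=True)
    Bm = B - B.mean(axis=1, keepdims=True)
    num = (Am * Bm).sum(axis=1)
    den = np.sqrt((Am ** 2).sum(axis=1) * (Bm ** 2).sum(axis=1))
    return num / den

rng = np.random.default_rng(0)
A, B = rng.random((5, 8)), rng.random((5, 8))
r = rowwise_pearson(A, B)  # shape (M,)
```

This only ever allocates a few M-length and M x T arrays, so it stays within memory for the sizes described.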
python|arrays|numpy|correlation
1
1,902,664
46,001,090
Detect space between text (OpenCV, Python)
<p>I have the following code (which is in fact just 1 part of 4 needed to run all the project I am working on..):</p> <pre><code>#python classify.py --model models/svm.cpickle --image images/image.png from __future__ import print_function from sklearn.externals import joblib from hog import HOG import dataset import argparse import mahotas import cv2 ap = argparse.ArgumentParser() ap.add_argument("-m", "--model", required = True, help = "path to where the model will be stored") ap.add_argument("-i", "--image", required = True, help = "path to the image file") args = vars(ap.parse_args()) model = joblib.load(args["model"]) hog = HOG(orientations = 18, pixelsPerCell = (10, 10), cellsPerBlock = (1, 1), transform = True) image = cv2.imread(args["image"]) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) blurred = cv2.GaussianBlur(gray, (5, 5), 0) edged = cv2.Canny(blurred, 30, 150) (_, cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE) cnts = sorted([(c, cv2.boundingRect(c)[0]) for c in cnts], key = lambda x: x[1]) for (c, _) in cnts: (x, y, w, h) = cv2.boundingRect(c) if w &gt;= 7 and h &gt;= 20: roi = gray[y:y + h, x:x + w] thresh = roi.copy() T = mahotas.thresholding.otsu(roi) thresh[thresh &gt; T] = 255 thresh = cv2.bitwise_not(thresh) thresh = dataset.deskew(thresh, 20) thresh = dataset.center_extent(thresh, (20, 20)) cv2.imshow("thresh", thresh) hist = hog.describe(thresh) digit = model.predict([hist])[0] print("I think that number is: {}".format(digit)) cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1) cv2.putText(image, str(digit), (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2) cv2.imshow("image", image) cv2.waitKey(0) </code></pre> <p>This code is detecting and recognizing handwriten digits from images. 
Here is an example:</p> <p><a href="https://i.stack.imgur.com/ayw7v.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ayw7v.jpg" alt="Image"></a></p> <p>Let's say I don't care about the accuracy recognition. </p> <p>My problem is the following: as you can see, the program take all the numbers he can <em>see</em> and print them in console. From console I can save them in a text file if I want BUT I can't tell the program that there is a space between the numbers.</p> <p><a href="https://i.stack.imgur.com/kDXwP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kDXwP.png" alt="Image2"></a></p> <p>What I want is that, if I print the numbers in a text file, they should be separated as in the image (sorry but it's a bit hard to explain..). The numbers should not be (even in console) printed all together but, where there is blank space, printed a blank area also.</p> <p>Take a look at the firs image. After the first 10 digits, there is a blank space in image which there isn't in console. </p> <p>Anyway, here is a link to full code. There are 4 <code>.py</code> files and 3 folders. To execute, open a CMD in the folder and paste the command <code>python classify.py --model models/svm.cpickle --image images/image.png</code> where <code>image.png</code> is the name of one file in images folder.</p> <p><a href="https://drive.google.com/open?id=0B7zqe2yJoD0sQlFUbWZReFNBQ0E" rel="nofollow noreferrer">Full Code</a></p> <p>Thanks in advance. In my opinion all this work would have to be done using neural networks but I want to try it first this way. I'm pretty new to this.</p>
<p>This is a starter solution.</p> <p>I don't have anything in Python for the time being, but it shouldn't be hard to convert: the OpenCV function calls are similar, and I've linked them below.</p> <hr> <p><strong>TLDR;</strong></p> <p>Find the centre of your boundingRects, then find the distance between them. If one rect is a certain threshold away, you may <em>assume</em> it to be a space.</p> <hr> <p>First, find the centres of your bounding rectangles</p> <pre><code>vector&lt;Point2f&gt; centres; for(size_t index = 0; index &lt; contours.size(); ++index) { Moments moment = moments(contours[index]); centres.push_back(Point2f(static_cast&lt;float&gt;(moment.m10/moment.m00), static_cast&lt;float&gt;(moment.m01/moment.m00))); } </code></pre> <p><em>(Optional but recommended)</em></p> <p>You can draw the centres to have a visual understanding of them.</p> <pre><code>for(size_t index = 0; index &lt; centres.size(); ++index) { Scalar colour = Scalar(255, 255, 0); circle(frame, centres[index], 2, colour, 2); } </code></pre> <p>With this, just iterate through them confirming that the distance to the next one is within a <em>reasonable</em> threshold</p> <pre><code>// stop one short of the end so centres[index + 1] stays in bounds for(size_t index = 0; index &lt; centres.size() - 1; ++index) { // this is just a sample value. Tweak it around to see which value actually makes sense double distance = 0.5; Point2f current = centres[index]; Point2f nextPoint = centres[index + 1]; // norm calculates the euclidean distance between two points if(norm(nextPoint - current) &gt;= distance) { // TODO: This is a potential space?? 
} } </code></pre> <p>You can read more about <a href="http://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html" rel="nofollow noreferrer">moments</a>, <a href="http://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#double%20norm(InputArray%20src1,%20int%20normType,%20InputArray%20mask)" rel="nofollow noreferrer">norm</a> and <a href="http://docs.opencv.org/3.1.0/dc/da5/tutorial_py_drawing_functions.html" rel="nofollow noreferrer">circle drawing</a> calls in Python.</p> <p>Happy coding, Cheers mate :)</p>
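The same gap test sketched in Python, on plain `(x, width)` bounding boxes rather than centres so it runs without OpenCV; in the real pipeline the boxes would come from `cv2.boundingRect`, and the threshold — a multiple of the median digit width here — is an assumption to tune:

```python
def insert_spaces(digits, boxes, gap_factor=1.0):
    """digits: recognized characters; boxes: matching (x, width) pairs,
    sorted left to right. A space is emitted wherever the horizontal gap
    between consecutive boxes exceeds gap_factor * median digit width."""
    widths = sorted(w for _, w in boxes)
    median_w = widths[len(widths) // 2]
    out = [digits[0]]
    for i in range(1, len(digits)):
        prev_x, prev_w = boxes[i - 1]
        x, _ = boxes[i]
        if x - (prev_x + prev_w) > gap_factor * median_w:
            out.append(" ")
        out.append(digits[i])
    return "".join(out)

# Four 10px-wide digits; the jump from x=22 to x=50 reads as a space.
line = insert_spaces("1234", [(0, 10), (12, 10), (50, 10), (62, 10)])
```

Using the gap between box edges (rather than centre-to-centre distance) keeps the test independent of digit width.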
python|opencv|handwriting-recognition
1
1,902,665
33,232,830
newline and dash not working correctly in jinja
<p>How could I generate the expected output ? Thanks</p> <h1>jinja template</h1> <pre><code>{%- for field in fields -%} - name: {{field}} type: string {%- endfor -%} </code></pre> <h1>output</h1> <pre><code>- name: operating revenue type: string- name: gross operating profit type: string- </code></pre> <h1>expected output</h1> <pre><code>- name: operating revenue type: string - name: gross operating profit type: string </code></pre> <h1>code</h1> <pre><code>from jinja2 import Template fields = ["operating revenue", "gross operating profit", "EBITDA", "operating profit after depreciation", "EBIT", "date"] template_file = open('./fields_template.jinja2').read() template = Template(template_file) html_rendered = template.render(fields=fields) print(html_rendered) </code></pre>
<p>The <code>-</code> removes all whitespace between <em>that side</em> of the Jinja tag and the first character. You are using <code>-</code> on the 'inside' of the tags, so whitespace is removed up to the <code>-</code> character and after the word <code>string</code>, joining up the two. Remove one or the other.</p> <p>You could remove the extra newlines at the start and end of your text for example, and remove the <code>-</code> from the inside side of the opening tag:</p> <pre><code>{%- for field in fields %} - name: {{field}} type: string {%- endfor -%} </code></pre> <p>Demo:</p> <pre><code>&gt;&gt;&gt; from jinja2 import Template &gt;&gt;&gt; fields = ["operating revenue", "gross operating profit", "EBITDA", "operating profit after depreciation", "EBIT", "date"] &gt;&gt;&gt; template_file = '''\ ... {%- for field in fields %} ... - ... name: {{field}} ... type: string ... {%- endfor -%} ... ''' &gt;&gt;&gt; template = Template(template_file) &gt;&gt;&gt; html_rendered = template.render(fields=fields) &gt;&gt;&gt; print(html_rendered) - name: operating revenue type: string - name: gross operating profit type: string - name: EBITDA type: string - name: operating profit after depreciation type: string - name: EBIT type: string - name: date type: string </code></pre>
python|python-2.7|yaml|jinja2|removing-whitespace
11
1,902,666
33,485,294
Mezzanine - Can't load css and js in Heroku
<p>I'm having some problems hosting a simple website I've created in Heroku. The website was created using Mezzanine and uses whitenoise and gunicorn. The problem is: I'm getting 404 error in some static resources, like css and js. You can see the problems at <a href="http://blrg-advogados.herokuapp.com" rel="nofollow noreferrer">http://blrg-advogados.herokuapp.com</a>.</p> <p>This is the Procfile content:</p> <pre><code>web: python manage.py collectstatic --noinput; gunicorn --workers=4 site_advogados.wsgi 0.0.0.0:$PORT </code></pre> <p><strong>wsgi.py</strong></p> <pre><code>import os os.environ.setdefault("DJANGO_SETTINGS_MODULE", "site_advogados.settings") from django.core.wsgi import get_wsgi_application from whitenoise.django import DjangoWhiteNoise application = get_wsgi_application() application = DjangoWhiteNoise(application) </code></pre> <p>and here is some part of <strong>settings.py</strong>:</p> <pre><code>ALLOWED_HOSTS = ['*'] DEBUG = False PROJECT_APP_PATH = os.path.dirname(os.path.abspath(__file__)) PROJECT_APP = os.path.basename(PROJECT_APP_PATH) PROJECT_ROOT = BASE_DIR = os.path.dirname(PROJECT_APP_PATH) CACHE_MIDDLEWARE_KEY_PREFIX = PROJECT_APP STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage' STATIC_ROOT = 'staticfiles' STATIC_URL = '/static/' STATICFILES_DIRS = ( os.path.join(BASE_DIR, 'static'), ) MEDIA_URL = STATIC_URL + "media/" MEDIA_ROOT = os.path.join(PROJECT_ROOT, *MEDIA_URL.strip("/").split("/")) ROOT_URLCONF = "%s.urls" % PROJECT_APP TEMPLATE_DIRS = (os.path.join(PROJECT_ROOT, "templates"),) </code></pre> <p><strong>urls.py</strong> is like this:</p> <pre><code>from __future__ import unicode_literals from django.conf.urls import patterns, include, url from django.conf.urls.i18n import i18n_patterns from django.contrib import admin from mezzanine.core.views import direct_to_template from mezzanine.conf import settings from views import contato admin.autodiscover() urlpatterns = i18n_patterns("", ("^admin/", 
include(admin.site.urls)), ) if settings.USE_MODELTRANSLATION: urlpatterns += patterns('', url('^i18n/$', 'django.views.i18n.set_language', name='set_language'), ) urlpatterns += patterns('', url("^$", direct_to_template, {"template": "index.html"}, name="home"), url(r'^contato/$', contato, name='contato'), ("^", include("mezzanine.urls")), ) handler404 = "mezzanine.core.views.page_not_found" handler500 = "mezzanine.core.views.server_error" </code></pre> <p>Log:</p> <pre><code>2015-12-27T12:44:56.109833+00:00 app[web.1]: Traceback (most recent call last): 2015-12-27T12:44:56.109850+00:00 app[web.1]: self.handle_request(listener, req, client, addr) 2015-12-27T12:44:56.109851+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 171, in handle_request 2015-12-27T12:44:56.109852+00:00 app[web.1]: respiter = self.wsgi(environ, resp.start_response) 2015-12-27T12:44:56.109853+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/whitenoise/base.py", line 119, in __call__ 2015-12-27T12:44:56.109854+00:00 app[web.1]: return self.application(environ, start_response) 2015-12-27T12:44:56.109855+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 189, in __call__ 2015-12-27T12:44:56.109855+00:00 app[web.1]: response = self.get_response(request) 2015-12-27T12:44:56.109857+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py", line 175, in get_response 2015-12-27T12:44:56.109858+00:00 app[web.1]: response = self.get_exception_response(request, resolver, 404) 2015-12-27T12:44:56.109858+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py", line 90, in get_exception_response 2015-12-27T12:44:56.109861+00:00 app[web.1]: return callback(request, **param_dict) 2015-12-27T12:44:56.109863+00:00 app[web.1]: File 
"/app/.heroku/python/lib/python2.7/site-packages/mezzanine/core/views.py", line 222, in server_error 2015-12-27T12:44:56.109861+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/decorators.py", line 110, in _wrapped_view 2015-12-27T12:44:56.109859+00:00 app[web.1]: response = self.handle_uncaught_exception(request, resolver, sys.exc_info()) 2015-12-27T12:44:56.109862+00:00 app[web.1]: response = view_func(request, *args, **kwargs) 2015-12-27T12:44:56.109860+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py", line 268, in handle_uncaught_exception 2015-12-27T12:44:56.109864+00:00 app[web.1]: return HttpResponseServerError(t.render(context)) 2015-12-27T12:44:56.109864+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/backends/django.py", line 74, in render 2015-12-27T12:44:56.109865+00:00 app[web.1]: return self.template.render(context) 2015-12-27T12:44:56.109866+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 209, in render 2015-12-27T12:44:56.109866+00:00 app[web.1]: return self._render(context) 2015-12-27T12:44:56.109867+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 201, in _render 2015-12-27T12:44:56.109868+00:00 app[web.1]: return self.nodelist.render(context) 2015-12-27T12:44:56.109869+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render 2015-12-27T12:44:56.109870+00:00 app[web.1]: bit = self.render_node(node, context) 2015-12-27T12:44:56.109870+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node 2015-12-27T12:44:56.109871+00:00 app[web.1]: return node.render(context) 2015-12-27T12:44:56.109872+00:00 app[web.1]: return compiled_parent._render(context) 2015-12-27T12:44:56.109872+00:00 app[web.1]: 
File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py", line 135, in render 2015-12-27T12:44:56.109874+00:00 app[web.1]: return self.nodelist.render(context) 2015-12-27T12:44:56.109873+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 201, in _render 2015-12-27T12:44:56.109875+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render 2015-12-27T12:44:56.109875+00:00 app[web.1]: bit = self.render_node(node, context) 2015-12-27T12:44:56.109878+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/templatetags/static.py", line 105, in render 2015-12-27T12:44:56.109876+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node 2015-12-27T12:44:56.109877+00:00 app[web.1]: return node.render(context) 2015-12-27T12:44:56.109878+00:00 app[web.1]: url = self.url(context) 2015-12-27T12:44:56.109879+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/templatetags/staticfiles.py", line 16, in url 2015-12-27T12:44:56.109880+00:00 app[web.1]: return static(path) 2015-12-27T12:44:56.109882+00:00 app[web.1]: return staticfiles_storage.url(path) 2015-12-27T12:44:56.109881+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/templatetags/staticfiles.py", line 9, in static 2015-12-27T12:44:56.109884+00:00 app[web.1]: hashed_name = self.stored_name(clean_name) 2015-12-27T12:44:56.109883+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/storage.py", line 131, in url 2015-12-27T12:44:56.109884+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/storage.py", line 280, in stored_name 2015-12-27T12:44:56.109885+00:00 app[web.1]: cache_name = 
self.clean_name(self.hashed_name(name)) 2015-12-27T12:44:56.109886+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/storage.py", line 94, in hashed_name 2015-12-27T12:44:56.109886+00:00 app[web.1]: (clean_name, self)) 2015-12-27T12:44:56.109887+00:00 app[web.1]: ValueError: The file 'img/favicon.ico' could not be found with &lt;whitenoise.django.GzipManifestStaticFilesStorage object at 0x7f6dc4a1e2d0&gt;. 2015-12-27T12:44:56.329261+00:00 app[web.1]: [2015-12-27 12:44:56 +0000] [15] [ERROR] Error handling request 2015-12-27T12:44:56.329266+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 130, in handle 2015-12-27T12:44:56.329265+00:00 app[web.1]: Traceback (most recent call last): 2015-12-27T12:44:56.329268+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 171, in handle_request 2015-12-27T12:44:56.329267+00:00 app[web.1]: self.handle_request(listener, req, client, addr) 2015-12-27T12:44:56.329270+00:00 app[web.1]: respiter = self.wsgi(environ, resp.start_response) 2015-12-27T12:44:56.329272+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/whitenoise/base.py", line 119, in __call__ 2015-12-27T12:44:56.329287+00:00 app[web.1]: return self.application(environ, start_response) 2015-12-27T12:44:56.329288+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 189, in __call__ 2015-12-27T12:44:56.329288+00:00 app[web.1]: response = self.get_response(request) 2015-12-27T12:44:56.329289+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py", line 175, in get_response 2015-12-27T12:44:56.329290+00:00 app[web.1]: response = self.get_exception_response(request, resolver, 404) 2015-12-27T12:44:56.329290+00:00 app[web.1]: File 
"/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py", line 90, in get_exception_response 2015-12-27T12:44:56.329292+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py", line 268, in handle_uncaught_exception 2015-12-27T12:44:56.329291+00:00 app[web.1]: response = self.handle_uncaught_exception(request, resolver, sys.exc_info()) 2015-12-27T12:44:56.329292+00:00 app[web.1]: return callback(request, **param_dict) 2015-12-27T12:44:56.329293+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/decorators.py", line 110, in _wrapped_view 2015-12-27T12:44:56.329293+00:00 app[web.1]: response = view_func(request, *args, **kwargs) 2015-12-27T12:44:56.329294+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/mezzanine/core/views.py", line 222, in server_error 2015-12-27T12:44:56.329295+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/backends/django.py", line 74, in render 2015-12-27T12:44:56.329294+00:00 app[web.1]: return HttpResponseServerError(t.render(context)) 2015-12-27T12:44:56.329296+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 209, in render 2015-12-27T12:44:56.329295+00:00 app[web.1]: return self.template.render(context) 2015-12-27T12:44:56.329297+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 201, in _render 2015-12-27T12:44:56.329296+00:00 app[web.1]: return self._render(context) 2015-12-27T12:44:56.329297+00:00 app[web.1]: return self.nodelist.render(context) 2015-12-27T12:44:56.329297+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render 2015-12-27T12:44:56.329298+00:00 app[web.1]: bit = self.render_node(node, context) 2015-12-27T12:44:56.329298+00:00 app[web.1]: File 
"/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node 2015-12-27T12:44:56.329299+00:00 app[web.1]: return node.render(context) 2015-12-27T12:44:56.329299+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/loader_tags.py", line 135, in render 2015-12-27T12:44:56.329300+00:00 app[web.1]: return compiled_parent._render(context) 2015-12-27T12:44:56.329306+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 201, in _render 2015-12-27T12:44:56.329307+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 903, in render 2015-12-27T12:44:56.329307+00:00 app[web.1]: return self.nodelist.render(context) 2015-12-27T12:44:56.329308+00:00 app[web.1]: return node.render(context) 2015-12-27T12:44:56.329307+00:00 app[web.1]: bit = self.render_node(node, context) 2015-12-27T12:44:56.329308+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/template/base.py", line 917, in render_node 2015-12-27T12:44:56.329310+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/templatetags/staticfiles.py", line 16, in url 2015-12-27T12:44:56.329309+00:00 app[web.1]: url = self.url(context) 2015-12-27T12:44:56.329313+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/storage.py", line 94, in hashed_name 2015-12-27T12:44:56.329308+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/templatetags/static.py", line 105, in render 2015-12-27T12:44:56.329314+00:00 app[web.1]: ValueError: The file 'img/favicon.ico' could not be found with &lt;whitenoise.django.GzipManifestStaticFilesStorage object at 0x7f6dc4a1e2d0&gt;. 
2015-12-27T12:44:56.329310+00:00 app[web.1]: return static(path) 2015-12-27T12:44:56.329313+00:00 app[web.1]: cache_name = self.clean_name(self.hashed_name(name)) 2015-12-27T12:44:56.329312+00:00 app[web.1]: hashed_name = self.stored_name(clean_name) 2015-12-27T12:44:56.329312+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/storage.py", line 280, in stored_name 2015-12-27T12:44:56.329314+00:00 app[web.1]: (clean_name, self)) 2015-12-27T12:44:56.329311+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/storage.py", line 131, in url 2015-12-27T12:44:56.329311+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/templatetags/staticfiles.py", line 9, in static 2015-12-27T12:44:56.329311+00:00 app[web.1]: return staticfiles_storage.url(path) 2015-12-27T12:44:56.330945+00:00 heroku[router]: at=info method=GET path="/favicon.ico/" host=blrg-advogados.herokuapp.com request_id=3c54ce79-8686-42a9-a335-f217abb8d6f2 fwd="177.36.203.24" dyno=web.1 connect=2ms service=31ms status=500 bytes=244 </code></pre> <p>My project folder is like this:</p> <p><a href="https://i.stack.imgur.com/qoZEQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qoZEQ.png" alt="Project layout"></a></p> <p>and the output of collect static is this:</p> <pre><code>Running python manage.py collectstatic --noinput on blrg-advogados... 
up, run.4811 Traceback (most recent call last): File "manage.py", line 14, in &lt;module&gt; execute_from_command_line(sys.argv) File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 351, in execute_from_command_line utility.execute() File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 343, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/base.py", line 394, in run_from_argv self.execute(*args, **cmd_options) File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/base.py", line 445, in execute output = self.handle(*args, **options) File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 168, in handle collected = self.collect() File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 98, in collect for path, storage in finder.list(self.ignore_patterns): File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/finders.py", line 112, in list for path in utils.get_files(storage, ignore_patterns): File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/staticfiles/utils.py", line 28, in get_files directories, files = storage.listdir(location) File "/app/.heroku/python/lib/python2.7/site-packages/django/core/files/storage.py", line 300, in listdir for entry in os.listdir(path): OSError: [Errno 2] No such file or directory: '/app/site_advogados/static' </code></pre> <p>as you can imagine, when I set <strong>DEBUG = True</strong> it works correctly, but as I want test the production environment I need use <strong>DEBUG = False</strong>. What I'm doing wrong?</p>
<p>You should not need to run <code>collectstatic</code> every time you start your web dyno. What is the output of <code>heroku run python manage.py collectstatic --noinput</code>?</p> <p>Looking through <a href="https://devcenter.heroku.com/articles/django-assets" rel="nofollow noreferrer">the heroku docs</a>, it is not clear if you have performed this step:</p> <blockquote> <p>Django won’t automatically create the target directory that collectstatic uses, so we recommend adding a dummy file to your repository, as seen here.</p> </blockquote>
python|django|heroku|gunicorn|mezzanine
3
1,902,667
12,824,745
how to run a method from time a to time b every t minutes
<p>Since I'm new to Python, I need some advice from experienced people. What is the best way to run a Python method from time A to time B every T minutes using only core Python libraries?</p> <p>To be more specific:</p> <p>I need a single-threaded app which will monitor the timestamps of a pair of files to make sure that the difference in file creation time is always greater than 0. I need to run this monitor only from 9 to 6, every 2 minutes. I will take a look at the schedule and time libraries...</p>
<p>You could: </p> <ol> <li><p>Use cron (on *nix) or Windows Task Scheduler to run your script at a desired time.</p> <p>It will make your solution both simpler and more robust.</p> <p>Or </p></li> <li><p>Run your script as a daemon and subscribe to file system events to monitor your files.</p> <p>You could use pyinotify and the like, depending on your OS. It provides the fastest reaction to changes.</p></li> </ol> <p>Solutions based on the time, threading, and sched modules are more complex, harder to implement, and less reliable.</p>
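That said, if you do want to stay inside a single Python process with only the standard library, a minimal sketch of the "9 to 6, every 2 minutes" loop could look like this (the window hours, the interval, and the `check_files` body are placeholder assumptions, not part of the answer above):

```python
import datetime
import sched
import time

START_HOUR, END_HOUR = 9, 18   # assumed "9 to 6" window
INTERVAL = 120                 # every 2 minutes, in seconds


def in_window(now, start_hour=START_HOUR, end_hour=END_HOUR):
    """Return True if `now` falls inside the monitoring window."""
    return start_hour <= now.hour < end_hour


def check_files():
    # placeholder: compare the two files' timestamps here
    print("checking file timestamps")


def run_monitor():
    scheduler = sched.scheduler(time.time, time.sleep)

    def tick():
        if in_window(datetime.datetime.now()):
            check_files()
        scheduler.enter(INTERVAL, 1, tick)  # reschedule ourselves

    scheduler.enter(0, 1, tick)
    scheduler.run()  # blocks; call run_monitor() to start
```

Note that this keeps rescheduling around the clock and merely skips the check outside the window; cron remains the more robust option, as the answer says.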
python|timer|schedule
1
1,902,668
24,494,726
Testing csrf token in django
<p>I'm looking to test if csrf tokens are working in my django site. The issue is that csrf_token returns a token value rather than the custom value of 'csrftoken'. Is there a way to set the value of the csrf for testing? This is the code that I am working with:</p> <pre><code>token = 'csrftoken'
client = Client(enforce_csrf_checks=True)
client.login(username='user', password='pass')
client.get("/my/web/page/")
csrf_token = client.cookies[token].value
assertEqual(token, csrf_token)
</code></pre>
<p>Is there a particular reason you're testing something that <a href="https://github.com/django/django/blob/master/tests/csrf_tests/tests.py" rel="nofollow">Django's own tests already cover</a> in a fuller way? </p> <p>Or, put another way, is there something specific/non-standard that you're doing with the CSRF token that means you need to test it? </p> <p>If you're just using it as per the docs, save yourself some time and put the effort into testing your own code, not Django's </p>
python|django|csrf
4
1,902,669
40,185,437
No module named 'numpy': Visual Studio Code
<p>I'm trying to set up Visual Studio Code for Python development.</p> <p>To begin with, I've installed: </p> <ol> <li>Anaconda Python</li> <li>Visual Studio Code</li> </ol> <p>and in a new file I have the following code</p> <pre><code>import numpy as np
import pandas as pd
from pandas import Series, DataFrame
</code></pre> <p>upon hitting Ctrl+Shift+B I get the following error</p> <pre><code>import numpy as np
</code></pre> <p>ImportError: No module named 'numpy'</p> <p>Also, is there a Python interactive window in VS Code? How do I open it?</p>
<p>Changing the Python environment in VS Code helped me. By default, Visual Studio Code uses the original Python environment, which requires numpy to be installed. If you have Anaconda Python installed (numpy comes with it), you can switch from the original Python environment to the Anaconda Python environment in Visual Studio Code. This can be done from the command palette <code>Ctrl+Shift+P</code> in Visual Studio Code.</p> <p>Check <a href="https://code.visualstudio.com/docs/python/environments#_select-and-activate-an-environment" rel="noreferrer">this link</a> for how to switch from the original Python to the Anaconda Python environment, specifically:</p> <p><a href="https://i.stack.imgur.com/NsIAT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NsIAT.png" alt="Snippet from VSCode instructions" /></a> <a href="https://i.stack.imgur.com/DRNSG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DRNSG.png" alt="enter image description here" /></a></p>
python|pandas|numpy|visual-studio-code
37
1,902,670
58,823,374
How to get all tweets containing a certain username or name
<p>I am writing code using the tweepy library to collect all tweets containing a specific user id. For this example let's say I want to find all tweets related to <a href="https://twitter.com/_austrian" rel="nofollow noreferrer">Austrian Airlines</a>.</p> <p>What I would do to achieve such a goal (assuming I have access to the twitter API) is something like this:</p> <pre><code>import pandas as pd
import numpy as np
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
from tweepy import API
from tweepy import Cursor

auth = OAuthHandler(twitter_credentials['CONSUMER_KEY'], twitter_credentials['CONSUMER_SECRET'])
auth.set_access_token(twitter_credentials['ACCESS_TOKEN'], twitter_credentials['ACCESS_TOKEN_SECRET'])
api = API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)

# Search word/hashtag value
HashValue = '_austrian'

# search start date value. the search will start from this date to the current date.
StartDate = "2019-11-11"  # yyyy-mm-dd

for tweet in Cursor(api.search, q=HashValue, count=1, lang="en", since=StartDate, tweet_mode='extended').items():
    print(tweet.created_at, tweet.full_text)
</code></pre> <p>However this approach doesn't seem to return what I expect. I just get a series of tweets where the word austrian is mentioned.</p> <p>What should I do in order to get just the tweets containing _austrian?</p>
<p>What I would do is use this package instead: <a href="https://github.com/Mottl/GetOldTweets3" rel="nofollow noreferrer">GetOldTweets3</a></p> <p>I used the following code.</p> <pre><code>tweetCriteria = got.manager.TweetCriteria().setQuerySearch('@_austrian')\
                                           .setSince("2019-11-11")\
                                           .setMaxTweets(10)
tweet = got.manager.TweetManager.getTweets(tweetCriteria)
</code></pre> <p>Currently, it's set to look for all tweets that contain '_austrian' from a given date, and the code limits the search to 10 tweets. Adjust according to your needs.</p> <p>To loop through the results you'll need a for loop.</p> <pre><code>for item in tweet:
    print(item.username, item.text)
</code></pre> <p>Sample Output</p> <pre><code>HofmannAviation In the 1980s I joined a #tyrolean Airways Dash 7 pilot training flight to Courchevel in the French Alps. In the Cockpit also Armin Kogler @_austrian @AHoensbroech @Flugthier @AlexInAir @_ABierwirth_ #dash7 @courchevel @BBD_Aircraft @GabyAttersee @AgueraMartin @GuillaumeFaurypic.twitter.com/NULpX4WSkA
</code></pre> <p>You can read more on the GitHub page about how you can control the searches. You can get more than the usernames and the content using this package.</p>
python|api|web-scraping|twitter|tweepy
1
1,902,671
51,622,185
Quick way to convert string to date in python
<p>Is there a quick way to convert a string to a datetime object in Python WITHOUT having to specify the format?</p> <p>I know I can do:</p> <pre><code>datetime.strptime('Sun 10 May 2015 13:54:36 -0700', '%a %d %b %Y %H:%M:%S %z')
</code></pre> <p>But since this is the default format, is there some way to get it to autoparse? Seems like I should be able to just create a new object from this without having to specify the format, since it is the default system format.</p>
<p>Use the <code>parser.parse</code> method from the <a href="https://dateutil.readthedocs.io/en/stable/" rel="nofollow noreferrer"><code>dateutil</code></a> module. You can install it with <code>pip install python-dateutil</code>.</p> <pre><code>&gt;&gt;&gt; from dateutil import parser
&gt;&gt;&gt; parser.parse('Sun 10 May 2015 13:54:36 -0700')
datetime.datetime(2015, 5, 10, 13, 54, 36, tzinfo=tzoffset(None, -25200))
</code></pre>
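As a side note, for this particular RFC-2822-style date string the standard library can parse it without any third-party package. Unlike `dateutil`'s general-purpose parser, this only works for this one date format:

```python
from email.utils import parsedate_to_datetime

# Works for RFC 2822 dates like the one in the question,
# with or without the day-of-week prefix.
dt = parsedate_to_datetime('Sun 10 May 2015 13:54:36 -0700')
print(dt)  # 2015-05-10 13:54:36-07:00
```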
python
1
1,902,672
51,636,537
Google Finance error: invalid literal
<p>I was trying to work on a personal project (stock market predictions) for school, when Google started acting up again...</p> <p>I realize that Google Finance has been complete garbage this past year, but it still seemed to be working somewhat up until this morning. I got an error the first time I ran the code even though it worked fine yesterday.</p> <p>So I tried just running a sample code from the actual library page: <a href="https://pypi.org/project/googlefinance.client/" rel="nofollow noreferrer">https://pypi.org/project/googlefinance.client/</a></p> <pre><code>!pip install googlefinance.client

from googlefinance.client import get_price_data, get_prices_data, get_prices_time_data

# Dow Jones
param = {
    'q': ".DJI",      # Stock symbol (ex: "AAPL")
    'i': "86400",     # Interval size in seconds ("86400" = 1 day intervals)
    'x': "INDEXDJX",  # Stock exchange symbol on which stock is traded (ex: "NASD")
    'p': "1Y"         # Period (Ex: "1Y" = 1 year)
}
# get price data (return pandas dataframe)
df = get_price_data(param)
print(df)

params = [
    # Dow Jones
    {
        'q': ".DJI",
        'x': "INDEXDJX",
    },
    # NYSE COMPOSITE (DJ)
    {
        'q': "NYA",
        'x': "INDEXNYSEGIS",
    },
    # S&amp;P 500
    {
        'q': ".INX",
        'x': "INDEXSP",
    }
]
period = "1Y"
# get open, high, low, close, volume data (return pandas dataframe)
df = get_prices_data(params, period)
print(df)
</code></pre> <p>and still got</p> <pre><code>---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
&lt;ipython-input-2-df3429694fd0&gt; in &lt;module&gt;()
      9 }
     10 # get price data (return pandas dataframe)
---&gt; 11 df = get_price_data(param)
     12 print(df)
     13

/usr/local/lib/python3.6/dist-packages/googlefinance/client.py in get_price_data(query)
     13     cols = price.split(",")
     14     if cols[0][0] == 'a':
---&gt; 15         basetime = int(cols[0][1:])
     16         index.append(datetime.fromtimestamp(basetime))
     17         data.append([float(cols[4]), float(cols[2]), float(cols[3]), float(cols[1]), int(cols[5])])

ValueError: invalid literal for int() with base 10: 'nd&amp;nbsp;...&lt;/span&gt;&lt;br&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class="g"&gt;&lt;h3 class="r"&gt;&lt;a href="/url?q=https://en.wikipedia.org/wiki/DJI_(company)&amp;amp;sa=U&amp;amp;ved=0ahUKEwiB-e_gjMzcAhUpwlkKHTTUC74QFghGMAw&amp;amp;usg=AOvVaw1ugw
</code></pre> <p>Has anyone run into this before and know what's wrong or how to fix it?</p> <p>Or, on a separate note, does anyone know of a good alternative to Google Finance?</p>
<p>It was a problem with the example code. If you go to the <a href="https://github.com/pdevty/googlefinance-client-python" rel="nofollow noreferrer">GitHub Homepage</a>, you'll get the latest version, even the small updates.</p> <p>I slightly modified <code>client.py</code> and had no problems with the output.</p> <pre><code>#!/usr/bin/env python
# coding: utf-8
import requests
from datetime import datetime
import pandas as pd


def get_price_data(query):
    r = requests.get(
        "https://finance.google.com/finance/getprices", params=query)
    lines = r.text.splitlines()
    data = []
    index = []
    basetime = 0
    for price in lines:
        cols = price.split(",")
        if cols[0][0] == 'a':
            basetime = int(cols[0][1:])
            index.append(datetime.fromtimestamp(basetime))
            data.append([float(cols[4]), float(cols[2]), float(
                cols[3]), float(cols[1]), int(cols[5])])
        elif cols[0][0].isdigit():
            date = basetime + (int(cols[0]) * int(query['i']))
            index.append(datetime.fromtimestamp(date))
            data.append([float(cols[4]), float(cols[2]), float(
                cols[3]), float(cols[1]), int(cols[5])])
    return pd.DataFrame(data, index=index,
                        columns=['Open', 'High', 'Low', 'Close', 'Volume'])


def get_closing_data(queries, period):
    closing_data = []
    for query in queries:
        query['i'] = 86400
        query['p'] = period
        r = requests.get(
            "https://finance.google.com/finance/getprices", params=query)
        lines = r.text.splitlines()
        data = []
        index = []
        basetime = 0
        for price in lines:
            cols = price.split(",")
            if cols[0][0] == 'a':
                basetime = int(cols[0][1:])
                date = basetime
                data.append(float(cols[1]))
                index.append(datetime.fromtimestamp(date).date())
            elif cols[0][0].isdigit():
                date = basetime + (int(cols[0]) * int(query['i']))
                data.append(float(cols[1]))
                index.append(datetime.fromtimestamp(date).date())
        s = pd.Series(data, index=index, name=query['q'])
        closing_data.append(s[~s.index.duplicated(keep='last')])
    return pd.concat(closing_data, axis=1)


def get_open_close_data(queries, period):
    open_close_data = pd.DataFrame()
    for query in queries:
        query['i'] = 86400
        query['p'] = period
        r = requests.get(
            "https://finance.google.com/finance/getprices", params=query)
        lines = r.text.splitlines()
        data = []
        index = []
        basetime = 0
        for price in lines:
            cols = price.split(",")
            if cols[0][0] == 'a':
                basetime = int(cols[0][1:])
                date = basetime
                data.append([float(cols[4]), float(cols[1])])
                index.append(datetime.fromtimestamp(date).date())
            elif cols[0][0].isdigit():
                date = basetime + (int(cols[0]) * int(query['i']))
                data.append([float(cols[4]), float(cols[1])])
                index.append(datetime.fromtimestamp(date).date())
        df = pd.DataFrame(data, index=index, columns=[
            query['q'] + '_Open', query['q'] + '_Close'])
        open_close_data = pd.concat(
            [open_close_data, df[~df.index.duplicated(keep='last')]], axis=1)
    return open_close_data


def get_prices_data(queries, period):
    prices_data = pd.DataFrame()
    for query in queries:
        query['i'] = 86400
        query['p'] = period
        r = requests.get(
            "https://finance.google.com/finance/getprices", params=query)
        lines = r.text.splitlines()
        data = []
        index = []
        basetime = 0
        for price in lines:
            cols = price.split(",")
            if cols[0][0] == 'a':
                basetime = int(cols[0][1:])
                date = basetime
                data.append([float(cols[4]), float(cols[2]), float(
                    cols[3]), float(cols[1]), int(cols[5])])
                index.append(datetime.fromtimestamp(date).date())
            elif cols[0][0].isdigit():
                date = basetime + (int(cols[0]) * int(query['i']))
                data.append([float(cols[4]), float(cols[2]), float(
                    cols[3]), float(cols[1]), int(cols[5])])
                index.append(datetime.fromtimestamp(date).date())
        df = pd.DataFrame(data, index=index, columns=[
            query['q'] + '_Open', query['q'] + '_High',
            query['q'] + '_Low', query['q'] + '_Close',
            query['q'] + '_Volume'])
        prices_data = pd.concat(
            [prices_data, df[~df.index.duplicated(keep='last')]], axis=1)
    return prices_data


def get_prices_time_data(queries, period, interval):
    prices_time_data = pd.DataFrame()
    for query in queries:
        query['i'] = interval
        query['p'] = period
        r = requests.get(
            "https://finance.google.com/finance/getprices", params=query)
        lines = r.text.splitlines()
        data = []
        index = []
        basetime = 0
        for price in lines:
            cols = price.split(",")
            if cols[0][0] == 'a':
                basetime = int(cols[0][1:])
                date = basetime
                data.append([float(cols[4]), float(cols[2]), float(
                    cols[3]), float(cols[1]), int(cols[5])])
                index.append(datetime.fromtimestamp(date))
            elif cols[0][0].isdigit():
                date = basetime + (int(cols[0]) * int(query['i']))
                data.append([float(cols[4]), float(cols[2]), float(
                    cols[3]), float(cols[1]), int(cols[5])])
                index.append(datetime.fromtimestamp(date))
        df = pd.DataFrame(data, index=index, columns=[
            query['q'] + '_Open', query['q'] + '_High',
            query['q'] + '_Low', query['q'] + '_Close',
            query['q'] + '_Volume'])
        prices_time_data = pd.concat(
            [prices_time_data, df[~df.index.duplicated(keep='last')]], axis=1)
    return prices_time_data
</code></pre> <p><strong><em>Snippet</em></strong></p> <pre><code>params = {
    'q': ".DJI",      # Stock symbol (ex: "AAPL")
    'i': "86400",     # Interval size in seconds ("86400" = 1 day intervals)
    'x': "INDEXDJX",  # Stock exchange symbol on which stock is traded (ex: "NASD")
    'p': "1Y"         # Period (Ex: "1Y" = 1 year)
}

df = get_price_data(params)
print(df)
</code></pre> <p><strong><em>Output</em></strong></p> <blockquote> <p>Volume Open High ... Close<br> 328405532 2017-08-01 15:00:00 21961.42 21990.96 ... 21963.92<br> 328405532 2017-08-02 15:00:00 22004.36 22036.10 ... 22016.24<br> 336824836 2017-08-03 15:00:00 22007.58 22044.85 ... 22026.10<br> 278731064 2017-08-04 15:00:00 22058.39 22092.81 ... 22092.81<br> 253635270 2017-08-07 15:00:00 22100.20 22121.15 ... 22118.42<br> 213012378 2017-08-08 15:00:00 22095.14 22179.11 ... 22085.34 </p> </blockquote>
python|google-finance|stockquotes
1
1,902,673
51,936,297
Trouble to read an excel file with pandas
<p>I'm trying to read an Excel file with pandas (+50000 rows), and it gives me the same error in all cases. The code:</p> <pre><code>strfile='C:\\Users\\...\\excel_files\\excelfile_01.xls'
</code></pre> <p>Try 01:</p> <pre><code>import pandas as pd
data = pd.read_excel(strfile, low_memory=False)
</code></pre> <p>Try 02:</p> <pre><code>import pandas as pd
data = pd.read_excel(strfile, encoding='utf-16-le', low_memory=False)
</code></pre> <p>Try 03:</p> <pre><code>import pandas as pd
data = pd.read_excel(strfile, encoding='sys.getfilesystemencoding()', low_memory=False)
</code></pre> <p>Try 04:</p> <pre><code>import pandas as pd
data = pd.read_excel(strfile, encoding='latin-1', low_memory=False)
</code></pre> <p>The error in all cases:</p> <pre><code>UnicodeDecodeError: 'utf-16-le' codec can't decode bytes in position 146-147: unexpected end of data
</code></pre> <p>Any help/tip will be greatly appreciated. Thanks in advance.</p>
<p>Posting my previous comment as an answer:</p> <p>Try saving your legacy <code>.xls</code> file in the modern <code>.xlsx</code> format and send it to <code>pd.read_excel()</code></p>
python|excel|pandas
2
1,902,674
36,273,488
python: defaultdict not working with string formatting
<p>This is easy-peasy:</p> <pre><code>'foo {bar}'.format(**{'bar': 0})
</code></pre> <p>This doesn't work, yielding a <code>KeyError</code>:</p> <pre><code>from collections import defaultdict

d = defaultdict(int)
'foo {bar}'.format(**d)
</code></pre> <p>Is there a way to accommodate a <code>defaultdict</code> in string formatting?</p>
<p><code>**</code> unpacking produces a <code>dict</code>, which is why this isn't working. If you're running Python 3.2 or higher though, you can pass the <code>defaultdict</code> without unpacking to <a href="https://docs.python.org/3/library/stdtypes.html#str.format_map"><code>str.format_map</code></a> which exists for precisely the purpose of passing non-<code>dict</code> mapping types:</p> <pre><code>'foo {bar}'.format_map(d) </code></pre> <p><strong>Edit</strong>: Apparently, in Python 3.5 at least, <code>'foo {bar}'.format(**d)</code> actually does work with a <code>defaultdict(int)</code>, and <code>d</code> is modified (after the formatting, the <code>repr</code> is <code>defaultdict(&lt;class 'int'&gt;, {'bar': 0})</code>), so it looks like in modern Python, <code>format_map</code> may not be necessary for subclasses of <code>dict</code>. Interesting.</p>
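A quick runnable check of the `format_map` behaviour described above. Note that the lookup also inserts the missing key into the defaultdict, which is a side effect worth knowing about:

```python
from collections import defaultdict

d = defaultdict(int)
# format_map passes the mapping through unchanged, so the missing
# key goes through defaultdict's __missing__ and gets the default 0.
print('foo {bar}'.format_map(d))  # foo 0
# The failed lookup created the key with the default value:
print(dict(d))  # {'bar': 0}
```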
python|dictionary|string-formatting|default-value
6
1,902,675
19,425,857
env: python\r: No such file or directory
<p>My Python script <code>beak</code> contains the following shebang:</p> <pre><code>#!/usr/bin/env python </code></pre> <p>When I run the script <code>$ ./beak</code>, I get</p> <pre><code>env: python\r: No such file or directory </code></pre> <p>I previously pulled this script from a repository. What could be the reason for this?</p>
<p>Open the file in <strong><code>vim</code></strong> or <strong><code>vi</code></strong>, and administer the following command:</p> <pre><code>:set ff=unix </code></pre> <p>Save and exit:</p> <pre><code>:wq </code></pre> <p>Done!</p> <h2>Explanation</h2> <p><code>ff</code> stands for <em><a href="http://vim.wikia.com/wiki/File_format" rel="noreferrer">file format</a></em>, and can accept the values of <code>unix</code> (<code>\n</code>), <code>dos</code> (<code>\r\n</code>) and <code>mac</code> (<code>\r</code>) <em>(only meant to be used on pre-intel macs, on modern macs use <strong><code>unix</code></strong>)</em>.</p> <p>To read more about the <code>ff</code> command:</p> <pre><code>:help ff </code></pre> <p><code>:wq</code> stands for <strong>W</strong>rite and <strong>Q</strong>uit, a faster equivalent is <kbd>Shift</kbd>+<kbd>zz</kbd> (i.e. hold down <em>Shift</em> then press <code>z</code> twice).</p> <p>Both commands must be used in <a href="http://www.radford.edu/%7Emhtay/CPSC120/VIM_Editor_Commands.htm" rel="noreferrer">command mode</a>.</p> <h2>Usage on multiple files</h2> <p>It is not necessary to actually open the file in vim. 
The modification can be made directly from the command line:</p> <pre><code>vi +':wq ++ff=unix' file_with_dos_linebreaks.py
</code></pre> <p>To process multiple <code>*.py</code> files (in <code>bash</code>):</p> <pre class="lang-bash prettyprint-override"><code>for file in *.py ; do
    vi +':w ++ff=unix' +':q' &quot;${file}&quot;
done
</code></pre> <blockquote> <p><sup> <em>offtopic</em>: if by chance you are stuck in vim and need to exit, <a href="https://github.com/hakluke/how-to-exit-vim" rel="noreferrer">here</a> are some easy ways.</sup></p> </blockquote> <h2>Removing the <a href="https://en.wikipedia.org/wiki/Byte_order_mark" rel="noreferrer">BOM</a> mark</h2> <p>Sometimes even after setting unix line endings you might still get an error running the file, especially if the file is executable and has a <a href="https://en.wikipedia.org/wiki/Shebang_(Unix)" rel="noreferrer">shebang</a>. The script might have a BOM marker (such as <code>0xEFBBBF</code> or other) which makes the shebang invalid and causes the shell to complain. In these cases <code>python myscript.py</code> will work fine (since python <em>can</em> handle the BOM) but <code>./myscript.py</code> will fail even with the execution bit set because your shell (sh, bash, zsh, etc) <em>can't</em> handle the BOM mark. (It's usually windows editors such as <em>Notepad</em> which create files with a BOM mark.)</p> <p>The BOM can be removed by opening the file in <code>vim</code> and administering the following command:</p> <pre class="lang-sh prettyprint-override"><code>:set nobomb
</code></pre>
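If you'd rather not touch vim at all, the same CRLF/CR-to-LF conversion can be done with a few lines of Python (a sketch, not from the answer above; it rewrites the file in place, so keep a backup of anything important):

```python
import sys


def to_unix_line_endings(path):
    """Rewrite `path` in place, converting \r\n (dos) and \r (old mac) to \n."""
    with open(path, 'rb') as f:
        content = f.read()
    # Order matters: collapse \r\n first so lone \r handling
    # doesn't double up newlines.
    content = content.replace(b'\r\n', b'\n').replace(b'\r', b'\n')
    with open(path, 'wb') as f:
        f.write(content)


if __name__ == '__main__' and len(sys.argv) > 1:
    to_unix_line_endings(sys.argv[1])
```

Saved as, say, `fix_endings.py` (a made-up name), it would be run as `python fix_endings.py beak`.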
python|macos|osx-mountain-lion|shebang|env
94
1,902,676
21,998,143
Is there an all(map(...)) optimization in Python?
<p>I'd like to use <code>all(map(func, iterables))</code>, because it is clear, but I'm very interested in whether this approach is optimized. For example, if any calculated result of <code>map()</code> is not <code>True</code>, mapping should stop.</p> <p>Example from my project:</p> <pre><code>for item in candidate_menu:
    if not item.is_max_meals_amount_ok(daily_menus):
        return False
return True
</code></pre> <p>I prefer to use a functional-like style:</p> <pre><code>all(map(operator.methodcaller('is_max_meals_amount_ok', daily_menus), candidate_menu))
</code></pre> <p>Is there any optimization for <code>all(map(...))</code> or <code>any(map(...))</code> in Python?</p> <p>Edit: Python 2.7 on board.</p>
<p>In addition to <code>all(func(i) for i in iterable)</code> suggested by others, you can also use <code>itertools.imap</code> - <code>all(imap(func, iterable))</code>. Compared to <code>map</code>, <code>imap</code> is an iterator thus if <code>all</code> stops consuming from it, it won't advance any further.</p>
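To see the short-circuiting for yourself, you can count how many times the predicate actually runs (Python 3 shown here, where `map` is already lazy like Python 2's `imap`):

```python
calls = []


def is_positive(x):
    calls.append(x)
    return x > 0


# all() stops consuming the lazy map at the first falsy result,
# so is_positive is never called for 4 and 5.
result = all(map(is_positive, [1, 2, -3, 4, 5]))
print(result, calls)  # False [1, 2, -3]
```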
python|python-2.7|optimization|map
8
1,902,677
16,828,414
Django-How to combine two object set & remove the common objects in these two sets
<p>I am now trying to implement a search function in django using the filter functions. After looking through the functions, I couldn't find how to combine two different object sets and remove the common objects in the two sets.</p> <pre><code>set1= book.objects.filter(name='Python')
set2= book.objects.filter(author_name='Mona')
</code></pre> <p>Is there any function that can be called to do so?</p> <p>Thanks so much</p>
<p>You can try this using <code>exclude()</code> for objects in the other set.</p> <pre><code>set1= book.objects.filter(name='Python')
set2= book.objects.filter(author_name='Mona')
non_common = set1.exclude(id__in=[o.id for o in set2])
</code></pre>
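As a plain-Python analogy (not Django code) for what the queryset operations above compute: `exclude` behaves like a set difference, while "removing the common objects" from the combined sets would be a symmetric difference:

```python
set1 = {1, 2, 3, 4}
set2 = {3, 4, 5}

difference = set1 - set2  # like set1.exclude(id__in=ids_of_set2)
symmetric = set1 ^ set2   # everything from both sets except the common objects
print(difference)  # {1, 2}
print(symmetric)   # {1, 2, 5}
```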
python|django|filter
1
1,902,678
53,582,455
Regex to get each section
<p>Below is the content (indentation reconstructed):</p> <pre><code>router eigrp 1
  redistribute ospf 14 route-map test2
  redistribute static
  vrf Automation
    autonomous-system 52
  exec-timeout 5 default
router ospf 14
  router-id 1.1.1.1
  area 0.0.0.25 nssa
  redistribute static route-map test1
  redistribute eigrp 10 route-map test2
  area 0.0.0.0 range 10.10.10.0/24
  area 0.0.0.0 authentication message-digest
  area 0.0.0.25 authentication message-digest
  log-adjacency-changes
  maximum-paths 8
  auto-cost reference-bandwidth 10 Gbps
</code></pre> <p>I need help with a regex to capture the first section (starting with <code>router eigrp</code>) together with everything under it indented by either 2 or 4 spaces, and the same for the second section starting with <code>router ospf</code>. greedy=yes. Thank you!</p>
<p>You could use the following regex combined with the <a href="https://docs.python.org/3/library/re.html#re.MULTILINE" rel="nofollow noreferrer">multiline</a> flag:</p> <p><a href="https://regex101.com/r/d2kXyZ/1" rel="nofollow noreferrer"><code>^router (?:.*)(?:\n .*)*</code></a></p>
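A quick way to check the pattern is to run it against a small sample config in Python (the config below is a shortened, hypothetical stand-in for the one in the question):

```python
import re

config = (
    "router eigrp 1\n"
    "  redistribute ospf 14 route-map test2\n"
    "  redistribute static\n"
    "router ospf 14\n"
    "  router-id 1.1.1.1\n"
    "  area 0.0.0.25 nssa\n"
)

# '^router ' (with re.MULTILINE) anchors each section start;
# '(?:\n .*)*' greedily consumes every following line that
# begins with at least one space, i.e. the indented lines.
sections = re.findall(r"^router (?:.*)(?:\n .*)*", config, re.MULTILINE)
for s in sections:
    print(s)
    print("---")
```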
python|regex|ansible
0
1,902,679
9,327,527
Flask Tutorial - "Attribute Error"
<p>I am a newbie - just reached the end of Learn Python the Hard Way and am now trying my hand at Flask. I started with the official tutorial at the Flask website, but am getting stuck at <a href="http://flask.pocoo.org/docs/tutorial/dbinit/#tutorial-dbinit" rel="nofollow">this step</a>: </p> <pre><code>from __future__ import with_statement
import sqlite3
from flask import Flask, request, session, g, redirect, url_for, abort, render_template, flash
from contextlib import closing

DATABASE = 'tmp/flaskr.db'
DEBUG = True
SECRET_KEY = 'development key'
USERNAME = 'admin'
PASSWORD = 'default'

app = Flask(__name__)
app.config.from_object(__name__)
app.config.from_envvar('FLASKR_SETTINGS', silent=True)

def connect_db():
    return sqlite3.connect(app.config['DATABASE'])

def init_db():
    with closing(connect_db()) as db:
        with app.open.resource('schema.sql') as f:
            db.cursor.executescript(f.read())
        db.commit()

if __name__ == '__main__':
    app.run()
</code></pre> <p>At the Python shell, if I do</p> <pre><code>from flaskr import init_db
init_db()
</code></pre> <p>I see: <code>Attribute Error: Flask object has no attribute 'open' (in the line containing app.open.resource in init_db)</code>. How do I fix this?</p>
<p>Your code:</p> <pre><code>app.open.resource </code></pre> <p>The example code:</p> <pre><code>app.open_resource </code></pre> <p>Can you spot the difference? In case you can't, the example code uses an underscore where you use a dot. A Flask object has no attribute called <code>open</code>, but it does have a method called <a href="http://flask.pocoo.org/docs/api/#flask.Flask.open_resource" rel="nofollow"><code>open_resource</code></a>.</p>
python|flask
2
1,902,680
9,316,666
django request.user.is_authenticated is always true?
<p>Can anyone tell me why in the following code I get redirected to yahoo.com instead of google.com?</p> <p>urls</p> <pre><code>urlpatterns = patterns('',
    (r'^$', initialRequest,))
</code></pre> <p>view</p> <pre><code>def initialRequest(request):
    if request.user.is_authenticated:
        return HttpResponseRedirect('http://yahoo.com')
    else:
        return HttpResponseRedirect('http://google.com')
</code></pre>
<p>Shouldn't it be <code>request.user.is_authenticated()</code> i.e. with brackets as it's a function?</p> <h3>For Django 1.10 +</h3> <p><code>is_authenticated</code> is now an attribute (although it is being kept backwards compatible for now).</p>
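A plain-Python sketch of why the brackets matter on older Django versions: without the call you get a bound-method object, and any method object is truthy, so the authenticated branch always wins (the class and names here are made up for illustration):

```python
class FakeUser:
    def is_authenticated(self):
        return False


user = FakeUser()

# The bug: the method object itself is always truthy ...
print(bool(user.is_authenticated))  # True
# ... while actually calling it gives the real answer.
print(user.is_authenticated())      # False
```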
python|django
50
1,902,681
52,500,722
Selective shifting of dataframe columns
<p>I have a dataframe with a number of columns and would selectively like to lag all bar one column (named target) by a specified number of steps.</p> <pre><code>def shift_target(df, target, lag):
    df[df.columns != target].shift(lag)
    df = df.dropna()
    return df
</code></pre> <p>The above does not seem to work. Can I specify not to shift one column?</p> <p>eg</p> <p><a href="https://i.stack.imgur.com/XNSAZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XNSAZ.png" alt="enter image description here"></a></p>
<p>You can construct a new dataframe. This is usually more efficient than manipulating an existing dataframe via Pandas methods. For example:</p> <pre><code>n = 4
res = df.iloc[:-n, 1:].copy()
res.index = df.index[n:]
res['Target'] = df['Target'].iloc[n:].values
</code></pre>
python|pandas|dataframe
0
1,902,682
47,659,289
Static attributes in Cheetah
<p>I'm working on a project using Cheetah. I'd like to create a class like the following one:</p> <pre><code>from Cheetah.Template import Template

class TemplateObject(Template):
    className = "Default ClassName"

    def header(self):
        return "My Class name is {}".format(type(self).className)
</code></pre> <p>and then I'd like to be able, in one of my templates, to do something like:</p> <pre><code>#extends TemplateObject
#staticarg className = "CustomClassName"  ## Something to declare a static member ?
$self.header()
</code></pre> <p>Is it possible to do this?</p> <p>Thank you</p>
<p>You're looking for <a href="http://cheetahtemplate.org/users_guide/inheritanceEtc.html#attr" rel="nofollow noreferrer">#attr</a>.</p> <p>Example:</p> <pre><code>#attr className = "CustomClassName" </code></pre>
python|python-3.x|cheetah
0
1,902,683
47,817,209
django context processor form not showing errors when submitting
<p>I'm using a <em>context processor</em> to render a form in the base template of my project, and the form seems to work OK, except it doesn't show any errors for required fields being blank, etc. The page is simply reloaded even if fields are not filled in.</p> <p>I used this approach in another project before, and it worked just fine, but now I can't really figure out what happened and why it is like so.</p> <p>Here is my <em>forms.py</em>:</p> <pre><code>from django import forms

class VersionSelectorForm(forms.Form):
    mode = forms.ChoiceField(widget=forms.RadioSelect(),
                             choices=(('live', 'Live'), ('history', 'History')),
                             initial='live', required=True, help_text='Required')
    date = forms.DateField(widget=forms.TextInput(attrs={'class': 'datepicker'}),
                           required=True, help_text='required')

    def clean(self):
        cleaned_data = super(VersionSelectorForm, self).clean()
        mode = cleaned_data.get('mode')
        date = cleaned_data.get('date')
        if mode == 'history' and not date:
            msg = 'Date should be picked if \'History\' mode selected'
            self.add_error('date', msg)
</code></pre> <p><em>view.py</em>:</p> <pre><code>from django.shortcuts import redirect

from .forms import VersionSelectorForm

def select_version(request):
    if request.method == "POST":
        form = VersionSelectorForm(request.POST)
        if form.is_valid():
            print('I am valid')
            mode = form.cleaned_data["mode"]
            date = form.cleaned_data["date"]
            if mode == "History":
                request.session['selected_date'] = date
            else:
                request.session['selected_date'] = None
    else:
        form = VersionSelectorForm()
    return redirect(request.META['HTTP_REFERER'])
</code></pre> <p><em>context_processors.py</em>:</p> <pre><code>from .forms import VersionSelectorForm

def VersionSelectorFormGlobal(request):
    return {'version_selector_form': VersionSelectorForm()}
</code></pre> <p><em>urls.py</em>:</p> <pre><code>from django.contrib import admin

from diagspecgen import views

urlpatterns = [
    url(r'^admin/', include(admin.site.urls)),
    url(r'^select_version/$', views.select_version, name='select_version'),
]
</code></pre> <p><em>snippet from base.html</em>:</p> <pre><code>&lt;section&gt;&lt;div&gt;
    &lt;form method="post" action="{% url 'select_version'%}"&gt;
        {% csrf_token %}
        {{ version_selector_form.as_p }}
        &lt;button type="submit"&gt;Submit&lt;/button&gt;
    &lt;/form&gt;
&lt;/div&gt;&lt;/section&gt;
</code></pre> <p>and of course I have added <code>'diagspecgen.context_processors.VersionSelectorFormGlobal'</code> to the <em>context_processors</em> list in <em>settings.py</em>.</p> <p>Looking forward to any help, and thanks in advance for that.</p>
<p>You should not use a context processor to render your form, but instead pass it to the django shortcut <code>render</code> function.</p> <p>You could do something like this:</p> <pre><code>def select_version(request):
    if request.method == "POST":
        form = VersionSelectorForm(request.POST)
        if form.is_valid():
            print('I am valid')
            mode = form.cleaned_data["mode"]
            date = form.cleaned_data["date"]
            if mode == "History":
                request.session['selected_date'] = date
            else:
                request.session['selected_date'] = None
    else:
        form = VersionSelectorForm()
    return render(request, 'template.html', {'form': form})
</code></pre> <p>Don't forget the import <code>from django.shortcuts import render</code>.</p> <p>Link to the doc: <a href="https://docs.djangoproject.com/en/2.0/topics/forms/#the-view" rel="nofollow noreferrer">https://docs.djangoproject.com/en/2.0/topics/forms/#the-view</a></p>
python|django|forms
0
1,902,684
37,236,834
in a method getting: AttributeError <classname> instance has no attribute <name>
<p>I have a button on my main window and want to display an information in a label when the mouse enters it. When the mouse leaves the button the label should change to empty. It should work like a status bar but split into a right and a left side. </p> <p>I also tried to change the label according to Bryan Oakley's post via <code>labelname.configure</code> but it didn't work for me so I still use the <code>StringVar</code> version:</p> <pre><code># -*- coding: utf-8 -*- import Tkinter as Tk, os class C_MasterGui(): def __init__(self): self.root = Tk.Tk() self.root.title("Main window") self.root.geometry("900x600+500+300") #C_Menubar(self.root) C_Buttonbar(self.root) C_Workframe(self.root) statusbar = C_Statusbar(self.root) statusbar.create_statusbar(self.root) def open_gui(self): self.root.mainloop() def close_gui(self): pass class C_Buttonbar(): def __init__(self,p_parent): ICONPATH = os.getcwd() + "/graphics/" ICONEXIT = Tk.PhotoImage(file=ICONPATH + "application-exit-5.png") self.updatestatus = C_Statusbar(p_parent) # Connect to statusbar frIconToolbar = Tk.Frame(p_parent, height=30, relief="flat", bd=2) frIconToolbar.pack(fill="x") butExit = Tk.Button(frIconToolbar, image=ICONEXIT, width=30, relief="groove") butExit.bind("&lt;Enter&gt;", self.iconExit_in) butExit.bind("&lt;Leave&gt;", self.icon_out) butExit.pack(side="left", ipadx=4, ipady=2, padx=4) def iconExit_in(self, arg): self.updatestatus.set_statusbar("your name","Programm beenden") def icon_out(self, arg): self.updatestatus.set_statusbar("your name","") class C_Workframe(): def __init__(self,p_parent): self.workframe = Tk.Frame(p_parent) self.workframe.pack(fill="both", expand=1) class C_Statusbar(): def __init__(self,p_parent): self.parent = p_parent def create_statusbar(self,p_parent): # Initialize Texts for left and right statusbar self.player_data = Tk.StringVar(p_parent) self.action_text = Tk.StringVar(p_parent) self.set_statusbar("current user: none","") statusbar_left = Tk.Label(p_parent, 
textvariable=self.player_data, relief="groove", bd=2, font="Arial, 10", anchor="w") statusbar_left.pack(side="left", expand="yes",fill="x") statusbar_right = Tk.Label(p_parent, textvariable=self.action_text, relief="groove", bd=2, font="Arial, 10", anchor="e") statusbar_right.pack(side="right", expand="yes", fill="x") def set_statusbar(self,p_playerdata, p_actiontext): print "Statusbar left:", p_playerdata, "Statusbar right:", p_actiontext self.player_data.set(p_playerdata) self.action_text.set(p_actiontext) # # if __name__ == '__main__': myInstance = C_MasterGui() myInstance.open_gui() </code></pre> <p>When the mouse enters or leaves the button method <code>set_statusbar</code> throws this error:</p> <pre><code>Exception in Tkinter callback Traceback (most recent call last): File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1489, in __call__ return self.func(*args) File "/home/gonzo/Spaces/Dokumente_remote/Python_Programmierprojekte/GolfTracker/test_statusbar.py", line 49, in iconExit_in self.updatestatus.set_statusbar("your name","Programm beenden") File "/home/gonzo/Spaces/Dokumente_remote/Python_Programmierprojekte/GolfTracker/test_statusbar.py", line 87, in set_statusbar self.player_data.set(p_playerdata) AttributeError: C_Statusbar instance has no attribute 'player_data' </code></pre> <p>Can anyone help me with this error?</p>
<p>You need to create your status bar first:</p> <pre><code>class C_Statusbar(): def __init__(self,p_parent): self.parent = p_parent self.create_statusbar(p_parent) </code></pre> <p>and remove <code>statusbar.create_statusbar(self.root)</code> here:</p> <pre><code>class C_MasterGui(): def __init__(self): self.root = Tk.Tk() self.root.title("Main window") self.root.geometry("900x600+500+300") #C_Menubar(self.root) C_Buttonbar(self.root) C_Workframe(self.root) statusbar = C_Statusbar(self.root) # called in `C_Statusbar.__init__)` already #statusbar.create_statusbar(self.root) </code></pre>
python|python-2.7|oop|tkinter
0
1,902,685
66,028,877
netmiko script connexion router
<p>When I run my <em>Python</em> script, I get errors on the <em>paramiko library</em>...</p> <p>But the result of the script is not understanding.Can anyone help with this issue..!?</p> <p>Here is the <strong>Error</strong>:</p> <pre><code>**Traceback (most recent call last): File &quot;/usr/local/lib/python3.5/dist-packages/paramiko/transport.py&quot;, line 2044, in _check_banner buf = self.packetizer.readline(timeout) File &quot;/usr/local/lib/python3.5/dist-packages/paramiko/packet.py&quot;, line 353, in readline buf += self._read_timeout(timeout) File &quot;/usr/local/lib/python3.5/dist-packages/paramiko/packet.py&quot;, line 540, in _read_timeout x = self.__socket.recv(128) ConnectionResetError: [Errno 104] Connection reset by peer </code></pre> <p>During handling of the above exception, I got another exception</p> <p>Here is the <strong>new Exception</strong>:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/lib/python3.5/dist-packages/paramiko/transport.py&quot;, line 1893, in run self._check_banner() File &quot;/usr/local/lib/python3.5/dist-packages/paramiko/transport.py&quot;, line 2049, in _check_banner 'Error reading SSH protocol banner' + str(e) paramiko.ssh_exception.SSHException: Error reading SSH protocol banner[Errno 104] Connection reset by peer Script run time = 00:02:34** </code></pre> <p><strong>Here is the script</strong>:</p> <pre><code>#!/usr/bin/python3.6 #import ipaddress import csv import os from os import listdir from os.path import isfile,join,expanduser import getpass import threading import logging import re import time from netmiko import ConnectHandler from netmiko.ssh_exception import NetMikoTimeoutException from netmiko.ssh_exception import NetMikoAuthenticationException #----------------------------------------------------------- def get_wd(): wd = os.path.expanduser('temp/') if not os.path.exists(wd): os.makedirs(wd) return wd #----------------------------------------------------------- def 
del_temp_files(): list_temp_dir = os.listdir(wd) ext = (&quot;.json&quot;,&quot;.csv&quot;,&quot;.txt&quot;,&quot;.log&quot;) for item in list_temp_dir: if item.endswith(ext): os.remove(os.path.join(wd, item)) #---------------------------------------------------------------------- def ssh_connection(ip, ref, username, password): try: return ConnectHandler(device_type='cisco_ios',ip=ip,username=username,password=password) except Exception as error: logger.error('. %&amp;%&amp;%&amp;%&amp;%&amp; {} {} \t {}'.format(ref, ip, error)) with open (&quot;{}conn_error.txt&quot;.format(wd), &quot;a&quot;) as efile: efile.write('{} {} \n'.format(ref, ip)) #--------------------envoie de commande et stocker le resultat dans un fichier---------------------------------------------------- def get_worker(ip, ref, device): try: result = device.send_command(&quot;show run | inc username&quot;) if &quot;cisco&quot; in result: usert=&quot;yes&quot; else: usert=&quot;no&quot; with open (&quot;{}result.csv&quot;.format(wd), &quot;a&quot;) as file1: file1.write('{} {}\n'.format(ref,usert)) except Exception as error: logger.error(&quot;. 
Get Error {} {} \t {}&quot;.format(ref, ip, error)) #-----------------------connection aux équipements-------------------------------------------------------- def main(ip, ref, username, password): device = ssh_connection(ip, ref, username, password) if device == None: sema.release() return output = get_worker(ip, ref, device) device.disconnect() sema.release() if __name__ == '__main__': wd = get_wd() del_temp_files() threads = [] max_threads = 20 sema = threading.BoundedSemaphore(value=max_threads) user = &quot;cisco&quot; passwd = &quot;cisco&quot; start_time = time.time() logger = logging.getLogger(&quot;LOG&quot;) handler = logging.FileHandler(&quot;{}main.log&quot;.format(wd)) logger.setLevel(logging.DEBUG) logger.addHandler(handler) #-------------------------------------------------------------------- with open(&quot;/home/net/inventaire_routes/cisco.csv.save&quot;) as fh: devices = csv.reader(fh,delimiter=';') for host in devices: sema.acquire() ip = host[1] ref = host[0] thread = threading.Thread(target=main, args=(ip, ref, user, passwd)) threads.append(thread) thread.start() elapsed_time = time.time() - start_time print(&quot;Script run time = &quot; + time.strftime(&quot;%H:%M:%S&quot;, time.gmtime(elapsed_time))) #---------------------------------------------------------------------------------- </code></pre> <p>How can I resolve this issue..!??</p>
<p>An alternative that runs the command with <em>paramiko</em> directly:</p> <pre><code>ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect('192.16.16.2', username='admin', password='cisco') stdin, stdout, stderr = ssh.exec_command(&quot;show run | inc username&quot;) output = stdout.readlines() print('\n'.join(output)) ssh.close() </code></pre> <p>This prints the expected line:</p> <pre><code>username admin privilege 15 password 7 00071A150754 </code></pre> <p>but at interpreter shutdown paramiko still reports an ignored exception from its <code>__del__</code> cleanup:</p> <pre><code>Exception ignored in: &lt;object repr() failed&gt; Traceback (most recent call last): File &quot;/usr/local/lib/python3.6/site-packages/paramiko/file.py&quot;, line 66, in __del__ File &quot;/usr/local/lib/python3.6/site-packages/paramiko/channel.py&quot;, line 1392, in close File &quot;/usr/local/lib/python3.6/site-packages/paramiko/channel.py&quot;, line 991, in shutdown_write File &quot;/usr/local/lib/python3.6/site-packages/paramiko/channel.py&quot;, line 963, in shutdown File &quot;/usr/local/lib/python3.6/site-packages/paramiko/channel.py&quot;, line 1246, in _send_eof File &quot;/usr/local/lib/python3.6/site-packages/paramiko/message.py&quot;, line 232, in add_int TypeError: 'NoneType' object is not callable </code></pre>
python|paramiko|cisco|netmiko
-1
1,902,686
72,596,285
Variable value in while True
<pre><code>while True: lenreplay = len(mail.reply()) print(lenreplay) if len(mail.reply()) &gt; lenreplay: print('done') print(mail.reply()[0]) </code></pre> <p>At some point the length of <code>mail.reply()</code> will no longer be what it was before. I want the <code>if</code> to work. Please note that <code>if</code> here in my code doesn't always work. It only works if the computing process was between <code>lenreplay = len(mail.reply())</code> and <code>if len(mail.reply()) &gt; lenreplay:</code>. I want for it to always work.</p> <p><code>mail.reply()</code> is a group of emails inside a list. If I sent an email that list will get a new length, which means it will become bigger than it was before.</p> <p>Any suggestions?</p>
<p>Is <code>len(mail.reply())</code> checking the current number of replies to some mail? And you want this loop to continue until an additional reply has been received? Why not put <code>lenreplay = len(mail.reply())</code> outside the loop and add a small delay (like <code>time.sleep(1)</code>) to avoid constantly checking?</p> <p>I.e.:</p> <pre><code>from time import sleep number_of_replies = len(mail.reply()) while True: print(number_of_replies) sleep(1) # maybe wait a second here, to avoid spamming the call if len(mail.reply()) &gt; number_of_replies: print('done') print(mail.reply()[0]) break </code></pre> <p>I renamed the variable, since its previous name didn't make that much sense. Also, I added <code>break</code> to exit the loop; if you didn't have <code>break</code> because you want this to run forever, this would work:</p> <pre><code>from time import sleep while True: number_of_replies = len(mail.reply()) while True: print(number_of_replies) sleep(1) if len(mail.reply()) &gt; number_of_replies: print('done') print(mail.reply()[0]) break </code></pre>
python
1
1,902,687
39,864,595
How to pass a python variable as a sqlite column name in a for-loop
<p>It looks like similar questions have been asked but I didn't find an answer.</p> <p>The cities in the cities list have corresponding column names in a database. I'm trying to find a specific city's pressure with pyowm, then insert that value into the appropriate column.</p> <p>The error is that there is not a column named "city". I can see what the problem is but do not know how to fix it. Any help is greatly appreciated!</p> <pre><code>import pyowm import sqlite3 conn = sqlite3.connect(r"C:\Users\Hanley Smith\Desktop\machinelearning\pressure_table.db") cursor = conn.cursor() owm = pyowm.OWM('eb68e3b0c908251771e67882d7a8ddff') cities = ["tokyo", "jakarta"] for city in cities: weather = owm.weather_at_place(city).get_weather() pressure = weather.get_pressure()['press'] cursor.execute("INSERT INTO PRESSURE (city) values (?)", (pressure,)) conn.commit() </code></pre>
<p>concatenate the variable name with the string like this.</p> <pre><code>cursor.execute("INSERT INTO PRESSURE (" + city + ") values (?)", (pressure,)) </code></pre> <p>or a much cleaner way with <code>%s</code></p> <pre><code>cursor.execute("INSERT INTO PRESSURE (%s) values (?)" % (city), (pressure,)) </code></pre>
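A caveat worth adding alongside the answer above (my note, not part of the original answer): SQL parameter placeholders (`?`) cannot stand in for identifiers such as column names, which is why string interpolation is needed here at all — but interpolating an unchecked string into SQL invites injection. A minimal sketch that validates the city against a whitelist first, using an in-memory database with hypothetical `tokyo`/`jakarta` columns:

```python
import sqlite3

# Whitelist of column names we know exist in the table.
ALLOWED_COLUMNS = {"tokyo", "jakarta"}

def insert_pressure(cursor, city, pressure):
    # Identifiers can't be bound with "?", so validate before interpolating.
    if city not in ALLOWED_COLUMNS:
        raise ValueError("unknown column: %r" % city)
    cursor.execute("INSERT INTO PRESSURE (%s) VALUES (?)" % city, (pressure,))

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE PRESSURE (tokyo REAL, jakarta REAL)")
insert_pressure(cur, "tokyo", 1013.2)
conn.commit()
cur.execute("SELECT tokyo FROM PRESSURE")
print(cur.fetchone()[0])  # -> 1013.2
```

The pressure value itself still goes through a `?` placeholder; only the vetted column name is interpolated.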
python|sqlite
3
1,902,688
40,659,648
I can't figure out how to feed a Placeholder by data from matlab
<p>I am trying to implement a simple feed forward network. However, I can't figure out how to feed a Placeholder by data from matlab. This example:</p> <pre><code>import tensorflow as tf import numpy as np import scipy.io as scio import math # # create data train_input=scio.loadmat('/Users/liutianyuan/Desktop/image_restore/data/input_for_tensor.mat') train_output=scio.loadmat('/Users/liutianyuan/Desktop/image_restore/data/output_for_tensor.mat') x_data=np.float32(train_input['input_for_tensor']) y_data=np.float32(train_output['output_for_tensor']) print x_data.shape print y_data.shape ## create tensorflow structure start ### def add_layer(inputs, in_size, out_size, activation_function=None): Weights = tf.Variable(tf.random_uniform([in_size,out_size], -4.0*math.sqrt(6.0/(in_size+out_size)), 4.0*math.sqrt(6.0/(in_size+out_size)))) biases = tf.Variable(tf.zeros([1, out_size])) Wx_plus_b = tf.matmul(inputs, Weights) + biases if activation_function is None: outputs = Wx_plus_b else: outputs = activation_function(Wx_plus_b) return outputs xs = tf.placeholder(tf.float32, [None, 256]) ys = tf.placeholder(tf.float32, [None, 1024]) y= add_layer(xs, 256, 1024, activation_function=None) loss = tf.reduce_mean(tf.square(y - ys)) optimizer = tf.train.GradientDescentOptimizer(0.1) train = optimizer.minimize(loss) init = tf.initialize_all_variables() ### create tensorflow structure end ### sess = tf.Session() sess.run(init) for step in range(201): sess.run(train) if step % 20 == 0: print(step, sess.run(loss,feed_dict={xs: x_data, ys: y_data})) </code></pre> <p><strong>Gives me the following error:</strong></p> <pre><code>/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Users/liutianyuan/PycharmProjects/untitled1/easycode.py (1, 256) (1, 1024) Traceback (most recent call last): File "/Users/liutianyuan/PycharmProjects/untitled1/easycode.py", line 46, in &lt;module&gt; sess.run(train) File 
"/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 340, in run run_metadata_ptr) File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 564, in _run feed_dict_string, options, run_metadata) File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 637, in _do_run target_list, options, run_metadata) File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 659, in _do_call e.code) tensorflow.python.framework.errors.InvalidArgumentError: **You must feed a value for placeholder tensor 'Placeholder' with dtype float** [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] Caused by op u'Placeholder', defined at: File "/Users/liutianyuan/PycharmProjects/untitled1/easycode.py", line 30, in &lt;module&gt; xs = tf.placeholder(tf.float32, [None, 256]) File "/Library/Python/2.7/site-packages/tensorflow/python/ops/array_ops.py", line 762, in placeholder name=name) File "/Library/Python/2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 976, in _placeholder name=name) File "/Library/Python/2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op op_def=op_def) File "/Library/Python/2.7/site-packages/tensorflow/python/framework/ops.py", line 2154, in create_op original_op=self._default_original_op, op_def=op_def) File "/Library/Python/2.7/site-packages/tensorflow/python/framework/ops.py", line 1154, in __init__ self._traceback = _extract_stack() </code></pre> <p>I have checked both the type and shape of x_data and y_data, and they seem to be correct, so I have no idea where it goes wrong.</p>
<p>Thanks to mrry, the problem has been solved. It turns out that I was missing the <code>feed_dict</code> argument in <code>sess.run(train)</code>. The correct program is:</p> <pre><code>for step in range(201): sess.run(train,feed_dict={xs: x_data, ys: y_data}) if step % 20 == 0: print(step, sess.run(loss,feed_dict={xs: x_data, ys: y_data})) </code></pre>
python|machine-learning|neural-network|tensorflow
0
1,902,689
40,738,001
IndexError: list index out of range(Python)
<p>Please find my code :</p> <p># # Parse a new post. #</p> <pre><code> def parse_new_post(self,response,review,created_at,data): data.update({ 'cool_count':self.set_int(review.css('a[rel=cool]').css('span[class=count]::text').extract()), 'created_at':self.set_date(review.css('meta[itemprop=datePublished]::attr(content)').extract()[0]), 'elite':len(review.css('.is-elite')) == 1, 'funny_count':self.set_int(review.css('a[rel=funny]').css('span[class=count]::text').extract()), 'owner_comment_text':self.set_text(review.css('span[class=js-content-toggleable\ hidden]::text').extract()).replace("\n"," "), #'rating':review.css('div[itemprop=reviewRating]').css('div').css('i::attr(title)').re('(\\d\.\\d)'), 'rating':review.css('div[itemprop=reviewRating]').css('meta').css('::attr(content)').re('(\\d\.\\d)')[0].encode('utf-8'), #'review_id':review.css('div::attr(data-review-id)').extract()[0].encode('utf-8'), #'review_id':review.xpath('.//div[contains(@class,"review review--with-sidebar")]/@data-review-id').extract(), 'review_text':self.set_text(review.css('p[itemprop=description]::text').extract()).replace("\n"," "), 'total_friends':self.set_int(review.css('li[class=friend-count]').css('b::text').extract()), #'total_friends':int(review.xpath('.//li[contains(@class,"friend-count")]/span/b/text()').extract()[0].strip()), #'total_reviews':int(review.xpath('.//li[contains(@class,"review-count")]/span/b/text()').extract()[0].strip()), #'total_friends':int(review.xpath('.//li[contains(@class,"friend-count")]/b/text()').extract()[0].strip()), #'total_reviews':int(review.xpath('.//li[contains(@class,"review-count")]/b/text()').extract()[0].strip()), 'total_reviews':self.set_int(review.css('li[class=review-count]').css('b::text').extract()), 'user_id':review.css('div[class*=photo-box]').css('a::attr(href)').extract(), 'useful_count':self.set_int(review.css('a[rel=useful]').css('span[class=count]::text').extract()), 
#'user_location':review.css('li[class=user-location]').css('b::text').extract()[0].encode('utf-8'), 'user_location':review.xpath('.//li[@class="user-location responsive-hidden-small"]/b/text()').extract(), 'username':review.css('meta[itemprop=author]::attr(content)').extract()[0].encode('utf-8'), 'review_id':review.xpath('.//div[contains(@class,"review review--with-sidebar")]/@data-review-id').extract()[0].encode('utf-8'), }) </code></pre> <p>When I am crawling the website I am getting error below:</p> <pre><code>2016-11-22 01:27:52 [scrapy] ERROR: Spider error processing &lt;GET https://www.yelp.com/biz/lexus-of-glendale-glendale?utm_campaign=yelp_api&amp;utm_medium=api_v2_phone_search&amp;utm_source=HPtU-ro8MXX3MOY_DQkP6A?sort_by=date_desc&gt; (referer: None) Traceback (most recent call last): File "/usr/lib64/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback yield next(it) File "/usr/lib64/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 28, in process_spider_output for x in result: File "/usr/lib64/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 22, in &lt;genexpr&gt; return (_set_referer(r) for r in result or ()) File "/usr/lib64/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in &lt;genexpr&gt; return (r for r in result or () if _filter(r)) File "/usr/lib64/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 54, in &lt;genexpr&gt; return (r for r in result or () if _filter(r)) File "/c360/apps/c360nextgen/src/crawlers/yelp_new/yelp_new/spiders/lexus_posts.py", line 85, in parse yield self.check_for_new_post(response,review,created_at,data) File "/c360/apps/c360nextgen/src/crawlers/yelp_new/yelp_new/spiders/lexus_posts.py", line 95, in check_for_new_post return self.parse_new_post(response,review,created_at,data) File "/c360/apps/c360nextgen/src/crawlers/yelp_new/yelp_new/spiders/lexus_posts.py", line 123, in parse_new_post 
'review_id':review.xpath('.//div[contains(@class,"review review--with-sidebar")]/@data-review-id').extract()[0].encode('utf-8'), IndexError: list index out of range </code></pre>
<p>The <code>IndexError: list index out of range</code> simply means that you are trying to access an index in the list that doesn't exist.</p> <p>Here is an example:</p> <pre><code>list = [1, 2] print(list[4]) </code></pre> <p>Notice that there is no fifth item in the list, therefore it results in: <code>IndexError: list index out of range</code>.</p> <p>In your case, the XPath query returns an empty list, and you then try to take its first element with <code>[0]</code> (index 0 is the first item, but the list contains no items at all).</p>
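A defensive pattern for exactly this situation (my sketch, not part of the original answer): when a query such as `.extract()` may come back empty, guard the `[0]` access and fall back to a default instead of crashing:

```python
def first_or_default(items, default=""):
    # Guard the [0] access: only index when the list is non-empty.
    return items[0] if items else default

extracted = []  # e.g. what .extract() returns when the XPath matches nothing
print(first_or_default(extracted, "n/a"))    # -> n/a
print(first_or_default(["rev-123"], "n/a"))  # -> rev-123
```

Applied to the spider, each `...extract()[0]` would become `first_or_default(...extract())`, so a review missing one field no longer aborts the whole parse.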
python|xpath
0
1,902,690
40,367,992
Is there a letter at this position on the list of lists?
<p>I'm trying to write a function that determines if a letter is present at a certain row and column on a list of lists. </p> <pre><code>#Input: lst2 = [['.', 'W', 'W', 'E', 'E'], ['.', '.', '.', '.', 'f'], ['A', 'A', 'A', '.', 'f']] #Output: is_space_occupied(0, 1, lst2) should return True because 'W' is present on that spot while is_space_occupied(1, 1, lst2) should return False because '.' is present on that spot and not a letter. </code></pre> <p>This is the code I have so far:</p> <pre><code>def letter_on_spot(row,col,lst): A = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' for row in lst: for col in lst: if col == A: return True else: return False </code></pre> <p>Edit: I'm getting return outside function as an error for return True and am not sure if my function works correctly</p>
<p>There are two issues here. </p> <p>First, you're looping unnecessarily; you provide the indices for the list so you only need to check if the value with those indices is contained inside the string <code>A</code>. No need to traverse through every element just to check one; index the list <code>lst</code>.</p> <p>Second, <code>col == A</code> will always fail (unless <code>col = 'ABCD..yz'</code>). You compare their values when you should be checking if <code>A</code> contains <code>col</code> with the <code>in</code> operator.</p> <p>In short, you could change your function to:</p> <pre><code>def letter_on_spot(row,col,lst): A = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' return lst[row][col] in A </code></pre> <p>and get the wanted result -- <code>True</code>/<code>False</code> based on the contents of a given index.</p> <p>Ideally, some error checking should be performed in order not to allow indices that result in <code>IndexError</code>s; play around with the lists' <code>len</code> for that, something like this:</p> <pre><code>def letter_on_spot(row,col,lst): A = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' if row &gt;= len(lst) or col &gt;= len(lst[row]): return False return lst[row][col] in A </code></pre> <p>where, before trying to access <code>lst</code>, you check if the bounds are acceptable by testing against the length of the list <code>lst</code> and the length of the sub-list <code>lst[row]</code>.</p>
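As a side note (my addition, not from the answer above): the hand-written alphabet string can also be replaced by the built-in `str.isalpha()`, with the caveat that it accepts non-ASCII letters too. A sketch under the assumption that every cell holds a single-character string:

```python
def letter_on_spot(row, col, lst):
    # Bounds check first, then ask the string itself whether it is a letter.
    if row >= len(lst) or col >= len(lst[row]):
        return False
    return lst[row][col].isalpha()

grid = [['.', 'W', 'W', 'E', 'E'],
        ['.', '.', '.', '.', 'f'],
        ['A', 'A', 'A', '.', 'f']]
print(letter_on_spot(0, 1, grid))  # -> True  ('W' is a letter)
print(letter_on_spot(1, 1, grid))  # -> False ('.' is not)
```

If only ASCII letters should count, stick with the explicit `in A` membership test from the answer.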
python|list|python-3.x|position|letter
1
1,902,691
9,869,524
How to convert list of intable strings to int
<p>In Python, I want to convert a list of strings:</p> <pre><code>l = ['sam','1','dad','21'] </code></pre> <p>and convert the integers to integer types like this:</p> <pre><code>t = ['sam',1,'dad',21] </code></pre> <p>I tried:</p> <pre><code>t = [map(int, x) for x in l] </code></pre> <p>but it is showing an error.</p> <p>How could I convert all <em>intable</em> strings in a list to int, leaving other elements as strings?</p> <p>My list might be multi-dimensional. A method which works for a generic list would be preferable:</p> <p><code>l=[['aa','2'],['bb','3']]</code></p>
<p>I'd use a custom function:</p> <pre><code>def try_int(x): try: return int(x) except ValueError: return x </code></pre> <p>Example: </p> <pre><code>&gt;&gt;&gt; [try_int(x) for x in ['sam', '1', 'dad', '21']] ['sam', 1, 'dad', 21] </code></pre> <hr> <p><strong>Edit:</strong> If you need to apply the above to a list of lists, <strong>why didn't you convert those strings to int while building the nested list?</strong></p> <p>Anyway, if you need to, it's just a matter of choosing how to iterate over such a nested list and apply the method above.</p> <p>One way of doing that might be:</p> <pre><code>&gt;&gt;&gt; list_of_lists = [['aa', '2'], ['bb', '3']] &gt;&gt;&gt; [[try_int(x) for x in lst] for lst in list_of_lists] [['aa', 2], ['bb', 3]] </code></pre> <p>You can obviously reassign that to <code>list_of_lists</code>:</p> <pre><code>&gt;&gt;&gt; list_of_lists = [[try_int(x) for x in lst] for lst in list_of_lists] </code></pre>
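Since the question asks for a method that works on a generic, possibly nested list, here is a recursive variant of the same idea (my sketch, building on the `try_int` helper above):

```python
def try_int(x):
    try:
        return int(x)
    except (TypeError, ValueError):
        return x

def convert(obj):
    # Recurse into lists; convert leaves that look like integers.
    if isinstance(obj, list):
        return [convert(item) for item in obj]
    return try_int(obj)

print(convert(['sam', '1', 'dad', '21']))   # -> ['sam', 1, 'dad', 21]
print(convert([['aa', '2'], ['bb', '3']]))  # -> [['aa', 2], [['bb', 3]] with 'bb' kept and 3 converted
```

This handles any nesting depth, since `convert` calls itself for every sub-list it encounters.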
python|string|list|int
12
1,902,692
9,876,616
Which spam corpus I can use in NLTK?
<p>My question is fairly related to <a href="https://stackoverflow.com/questions/5248100/using-document-length-in-the-naive-bayes-classifier-of-nltk-python">this one</a>, but I decided to open another question thread. I hope it is fine. </p> <p>I am building a spam filter using the NLTK in Python as well, but I've just started.</p> <p>I am wondering which spam corpus I can use and how to import it? I have not found any 'built-in in the NLTK' spam corpora (<a href="http://nltk.googlecode.com/svn/trunk/nltk_data/index.xml" rel="nofollow noreferrer">here</a>).</p> <p>Thank you in advance.</p>
<p>This <a href="http://www.cs.ucf.edu/courses/cap5636/fall2011/nltk.pdf">presentation</a> uses the <a href="http://www.aueb.gr/users/ion/data/enron-spam/">enron-spam dataset</a> (200,000+ emails).</p> <blockquote> <p>The training and testing sets come from a dataset of 200,000+ Enron emails which contain both “spam” and “ham” emails</p> </blockquote>
python|nltk|spam-prevention|corpus
9
1,902,693
26,377,427
django / python collections issue
<p>I have some code that I can not get my head around. The more I look at it the more confused I become.</p> <p>There are two date values and a language code that is passed into a js function. Then there is a django collection (I think!) that interacts with the django language tag to assign the correct values.</p> <p>I thought I had set this up correctly, but the code is not working and I cannot see the reason for it as my experience is not good enough to see where I have gone wrong.</p> <p>The error occurs when I try to call the names.month (as shown on the last line), so I think I have made an error in the name_map code or in the assignment of the variables of lc and LANGUAGE_CODES.</p> <p>The passed in values are:</p> <p>date1: 10/2000;</p> <p>date2: 12/2004;</p> <p>dynamic_language_code: de;</p> <p><strong>Any suggestions would be great.</strong> </p> <pre><code>function dateCalculation(date1, date2, dynamic_language_code) { //this function will accept two dates (format: mm/yyyy) and calculate the difference between the 2 dates and display the difference as x months or x years, x months. var a = date1; var b = date2; var lc = dynamic_language_code; var LANGUAGE_CODES = 'ar, zh-CN, zh-TW, en-GB, en, fr, fr-CA, de, it, pl, pt, pt-BR, ru, es-419, es'; var name_map = { {% for lc in LANGUAGE_CODES %} {{ lc }}: { month: "{% language lc %}{% trans 'month' %}{% endlanguage %}", months: "{% language lc %}{% trans 'months' %}{% endlanguage %}", year: "{% language lc %}{% trans 'year' %}{% endlanguage %}", years: "{% language lc %}{% trans 'years' %}{% endlanguage %}" } {% if not forloop.last %},{% endif %} {% endfor %} } names = name_map[lc]; if(names === undefined) { names = name_map['en']; } .... time_span = total_months + " " + names.month; .... </code></pre>
<p>You confused server-side code with client-side code.</p> <p>For example, the server-side code:</p> <pre><code>from django.shortcuts import render_to_response def name_map_handler(request, **kwargs): """ some code to handle the ajax request """ return render_to_response('the_template_you_want_to_use.html', {'LANGUAGE_CODES': ['zh-hans', 'de']}) </code></pre> <p>Only the server-side variables that you decide to pass into the template context can be used in the Django template, just like <code>LANGUAGE_CODES</code> in my example.</p>
javascript|python|django|dictionary|python-collections
1
1,902,694
1,792,918
Weird MySQL Python mod_wsgi Can't connect to MySQL server on 'localhost' (49) problem
<p>There have been similar questions on StackOverflow about this, but I haven't found quite the same situation. This is on a OS X Leopard machine using MySQL</p> <p>Some starting information:</p> <pre><code>MySQL Server version 5.1.30 Apache/2.2.13 (Unix) Python 2.5.1 mod_wsgi 3 </code></pre> <p>mysqladmin also has skip-networking listed as OFF</p> <p>I am able to connect to mysql from the python command line. But when I try to do it through mod_wsgi using code that is copy and pasted or via Django I receive the generic connection refusal</p> <pre><code>OperationalError: (2003, "Can't connect to MySQL server on 'localhost' (49)") </code></pre> <p>I've looked at the mysql manual and tried its troubleshooting tips such as</p> <pre><code>telnet localhost 3306 </code></pre> <p>and I <strong>do</strong> get a connection.</p> <p>I am <strong>not</strong> trying to connect as root, either.</p> <p>Any ideas on what else I could check? Thanks in advance!</p>
<p>I came across this error and it was due to an SELinux denial. /usr/bin/httpd didn't have permission to connect to port 3306. I corrected the issue with:</p> <pre><code>setsebool httpd_can_network_connect_db on </code></pre> <p>Seems to work great and should be more secure than just disabling SELinux. As Avinash Meetoo points out below, you can use:</p> <pre><code>setsebool -P httpd_can_network_connect_db </code></pre> <p>To make the selinux change persist across reboots.</p>
python|mysql|django|mod-wsgi|mysql-error-2003
13
1,902,695
28,006,564
Using Cython correctly in sample code with numpy
<p>I was wondering if I'm missing something when using Cython with Numpy because I haven't seen much of an improvement. I wrote this code as an example.</p> <p>Naive version:</p> <pre><code>import numpy as np from skimage.util import view_as_windows it = 16 arr = np.arange(1000*1000, dtype=np.float64).reshape(1000,1000) windows = view_as_windows(arr, (it, it), it) container = np.zeros((windows.shape[0], windows.shape[1])) def test(windows): for i in range(windows.shape[0]): for j in range(windows.shape[1]): container[i,j] = np.mean(windows[i,j]) return container %%timeit test(windows) 1 loops, best of 3: 131 ms per loop </code></pre> <p>Cythonized version:</p> <pre><code>%%cython --annotate import numpy as np cimport numpy as np from skimage.util import view_as_windows import cython cdef int step = 16 arr = np.arange(1000*1000, dtype=np.float64).reshape(1000,1000) windows = view_as_windows(arr, (step, step), step) @cython.boundscheck(False) def cython_test(np.ndarray[np.float64_t, ndim=4] windows): cdef np.ndarray[np.float64_t, ndim=2] container = np.zeros((windows.shape[0], windows.shape[1]),dtype=np.float64) cdef int i, j I = windows.shape[0] J = windows.shape[1] for i in range(I): for j in range(J): container[i,j] = np.mean(windows[i,j]) return container %timeit cython_test(windows) 10 loops, best of 3: 126 ms per loop </code></pre> <p>As you can see, there is a very modest improvement, so maybe I'm doing something wrong. By the way, the annotation that Cython produces the following:</p> <p><img src="https://i.stack.imgur.com/YVQgV.png" alt="enter image description here"></p> <p>As you can see, the numpy lines have a yellow background even after including the efficient indexing syntax <code>np.ndarray[DTYPE_t, ndim=2]</code>. 
Why?</p> <p>By the way, in my view the ideal outcome is being able to use most numpy functions but still get some reasonable improvement after taking advantage of the efficient indexing syntax or maybe memory views as in HYRY's answer.</p> <p><strong>UPDATE</strong></p> <p>It seems I'm not doing anything wrong in the code I posted above and that the yellow background in some lines is normal, so I was left wondering the following: in which situations can I get a benefit from typing <code>cdef np.ndarray[np.float64_t, ndim=2]</code> in front of numpy arrays? I suppose there are specific instances where this is helpful, otherwise there wouldn't be much purpose in doing it.</p>
<p>You need to implement the <code>mean()</code> function yourself to speedup the code, this is because the overhead of calling a numpy function is very high.</p> <pre><code>@cython.boundscheck(False) @cython.wraparound(False) def cython_test(double[:, :, :, :] windows): cdef double[:, ::1] container cdef int i, j, k, l cdef int n0, n1, n2, n3 cdef double inv_n cdef double s n0, n1, n2, n3 = windows.base.shape container = np.zeros((n0, n1)) inv_n = 1.0 / (n2 * n3) for i in range(n0): for j in range(n1): s = 0 for k in range(n2): for l in range(n3): s += windows[i, j, k, l] container[i,j] = s * inv_n return container.base </code></pre> <p>Here is the <code>%timeit</code> results:</p> <ul> <li><code>python_test(windows)</code>: 63.7 ms</li> <li><code>cython_test(windows)</code>: 1.24 ms</li> <li><code>np.mean(windows, axis=(2, 3))</code>: 2.66 ms</li> </ul>
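The speedup comes from replacing the per-window `np.mean` call with an explicit reduction inside the loop nest. The same arithmetic can be checked in miniature with pure Python, where the 4-D nesting mirrors the `windows` array above (the values here are made up so the result is easy to verify by hand):

```python
# A tiny stand-in for the (n0, n1, n2, n3) windows array: one row of two
# 2x2 windows. For each window, the sum of its elements divided by n2*n3
# is the window mean -- exactly what the four nested loops compute.
windows = [[[[1.0, 2.0], [3.0, 4.0]],
            [[5.0, 6.0], [7.0, 8.0]]]]  # shape (1, 2, 2, 2)

container = [[sum(sum(row) for row in w) / 4.0 for w in block]
             for block in windows]
print(container)  # [[2.5, 6.5]]
```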
python|numpy|cython
3
1,902,696
14,048,531
Tastypie: include computed field from a related model?
<p>I've looked through Tastypie's documentation and searched for a while, but can't seem to find an answer to this.</p> <p>Let's say that we've got two models: <code>Student</code> and <code>Assignment</code>, with a one-to-many relationship between them. The <code>Assignment</code> model includes an <code>assignment_date</code> field. Basically, I'd like to build an API using Tastypie that returns <code>Student</code> objects <strong>sorted by most recent assignment date</strong>. Whether the sorting is done on the server or on the client side doesn't matter - but wherever the sorting is done, the <code>assignment_date</code> is needed to sort by.</p> <p>Idea #1: just return the assignments along with the students.</p> <pre><code>class StudentResource(ModelResource): assignments = fields.OneToManyField( AssignmentResource, 'assignments', full=True) class Meta: queryset = models.Student.objects.all() resource_name = 'student' </code></pre> <p>Unfortunately, each student may have tens or hundreds of assignments, so this is bloated and unnecessary.</p> <p>Idea #2: augment the data during the dehydrate cycle.</p> <pre><code>class StudentResource(ModelResource): class Meta: queryset = models.Student.objects.all() resource_name = 'student' def dehydrate(self, bundle): bundle.data['last_assignment_date'] = (models.Assignment.objects .filter(student=bundle.data['id']) .order_by('-assignment_date')[0].assignment_date) return bundle </code></pre> <p>This is not ideal, since it'll be performing a separate database roundtrip for each student record. It's also not very declarative, nor elegant.</p> <p>So, is there a good way to get this kind of functionality with Tastypie? Or is there a better way to do what I'm trying to achieve?</p>
<p>You can sort a ModelResource by a field name. Check out this part of the documentation <a href="http://django-tastypie.readthedocs.org/en/latest/resources.html#ordering" rel="nofollow">http://django-tastypie.readthedocs.org/en/latest/resources.html#ordering</a></p> <p>You could also set this ordering by default in the Model: <a href="https://docs.djangoproject.com/en/dev/ref/models/options/#ordering" rel="nofollow">https://docs.djangoproject.com/en/dev/ref/models/options/#ordering</a></p>
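Whichever layer ends up doing the sort, the underlying operation - order students by the most recent of their assignment dates - can be sketched in plain Python. The names and dates below are purely illustrative, not Tastypie or Django API:

```python
from datetime import date

# Hypothetical data: each student name maps to that student's assignment dates.
assignments = {
    'ann': [date(2012, 1, 5), date(2012, 3, 1)],
    'bob': [date(2012, 2, 9)],
}

# Sort student names by their latest assignment date, newest first.
ordered = sorted(assignments, key=lambda s: max(assignments[s]), reverse=True)
print(ordered)  # ['ann', 'bob']
```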
python|django|tastypie
1
1,902,697
13,828,531
Problems in Python getting multiple selections from Tkinter listbox
<p>This is the same problem I posed earlier today and which a couple of you tried to help me with, but I can't get it to work. All I want to do is to populate "ichose" with the multiple selections I make when I click on the listbox.</p> <pre><code>import Tkinter as tk from Tkinter import * global ichose class App(tk.Frame): def __init__(self, master): tk.Frame.__init__(self,master) self.master=master self.grid() self.ichose = () self.l = Listbox(self, height=10, selectmode=EXTENDED) # Selectmode can be SINGLE, BROWSE, MULTIPLE or EXTENDED. Default BROWSE self.l.grid(column=0, row=0, sticky=(N,W,E,S)) self.l.bind("Double-Button-1", self.entered) s = Scrollbar(self, orient=VERTICAL, command=self.l.yview) s.grid(column=0, row=0, sticky=(N,S,E)) self.l['yscrollcommand'] = s.set for i in range(1,101): self.l.insert('end', 'Line %d of 100' % i) def entered(self, event): self.ichose = self.selection_get() self.ichose = ('hello') root=tk.Tk() root.title('Listbox Problem') root.geometry('200x200') app=App(root) root.mainloop() print app.ichose </code></pre> <p>Whatever I do, "ichose" comes out as an empty tuple (). It's clear that the function "entered" is never called because I never see the test string 'hello'.</p> <p>I also don't know what the various options are as in "Double-Button-", "&lt;>" etc. Where can I find a list and explanation of each one?</p> <p>If somebody could please just modify my program so the "print ichose" works, I'd be really grateful. You can see from my program that I don't really know what I'm doing but am keen to learn. Thank you.</p>
<p>I've finally found the answer to my own question. This is REALLY useful if you want to capture multiple responses from a listbox. I've commented a lot. Hope it helps!</p> <pre><code>import Tkinter as tk from Tkinter import * class App(tk.Frame): def __init__(self, master): tk.Frame.__init__(self,master) self.master=master self.grid() self.ichose = [] self.l = Listbox(self, height=10, selectmode=MULTIPLE) # Selectmode can be SINGLE, BROWSE, MULTIPLE or EXTENDED. Default BROWSE self.l.grid(column=0, row=0, sticky=(N,W,E,S)) s = Scrollbar(self, orient=VERTICAL, command=self.l.yview) s.grid(column=0, row=0, sticky=(N,S,E)) self.l['yscrollcommand'] = s.set for i in range(1,101): self.l.insert('end', 'Line %d of 100' % i) # Create Textbox that will display selected items from list self.selected_list = Text(self,width=20,height=10,wrap=WORD) self.selected_list.grid(row=12, column=0, sticky=W) # Now execute the poll() function to capture selected list items self.ichose = self.poll() def poll(self): items =[] self.ichose = [] # Set up an automatically recurring event that repeats after 200 millisecs self.selected_list.after(200, self.poll) # curselection retrieves the selected items as a tuple of strings. These # strings are the list indexes ('0' to whatever) of the items selected. # map applies the function specified in the 1st parameter to every item # from the 2nd parameter and returns a list of the results. So "items" # is now a list of integers items = map(int,self.l.curselection()) # For however many values there are in "items": for i in range(len(items)): # Use each number as an index and get from the listbox the actual # text strings corresponding to each index, and append each to # the list "ichose". self.ichose.append(self.l.get(items[i])) # Write ichose to the textbox to display it. 
self.update_list() return self.ichose def update_list(self): # Text widget indexes start at '1.0' (line 1, character 0) self.selected_list.delete('1.0', END) self.selected_list.insert('1.0', self.ichose) root=tk.Tk() root.title('Listbox Multi-Capture') root.geometry('200x340') app=App(root) root.mainloop() print app.ichose </code></pre>
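One detail of the `poll` method worth isolating: `curselection()` returns the selected indexes as a tuple of strings, so they must be converted to integers before being passed to `get()`. That conversion needs no GUI to demonstrate (the tuple below is a hypothetical return value, not a live widget call):

```python
# What Listbox.curselection() might return with items 0, 3 and 7 selected.
selection = ('0', '3', '7')

# Same conversion as map(int, self.l.curselection()) in poll(), as a list.
indexes = [int(i) for i in selection]
print(indexes)  # [0, 3, 7]
```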
python|listbox|get|ltk
2
1,902,698
54,656,022
Am I training with gpu?
<p>I'm training a neural model with Keras, with TensorFlow as the backend. The log file starts with the following message:</p> <pre><code>nohup: ignoring input 2019-02-12 17:44:29.414526: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA 2019-02-12 17:44:30.191565: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties: name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335 pciBusID: 0000:65:00.0 totalMemory: 7.93GiB freeMemory: 7.81GiB 2019-02-12 17:44:30.191601: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0 2019-02-12 17:44:30.409790: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-02-12 17:44:30.409828: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0 2019-02-12 17:44:30.409834: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N 2019-02-12 17:44:30.410015: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7535 MB memory) -&gt; physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:65:00.0, compute capability: 6.1) </code></pre> <p>Does this mean that the training is performed on the GPU? 
</p> <p>I would say yes, but when I execute <code>nvtop</code>, I see that all the GPU memory is used while 0% of the GPU compute capacity is used (see yellow screenshot below):</p> <p><a href="https://i.stack.imgur.com/Q0S1X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q0S1X.png" alt="see screenshot"></a></p> <p>Also, when I type <code>htop</code> in the command line, I see that one CPU is fully used (see black screenshot below):</p> <p><a href="https://i.stack.imgur.com/qlO1t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qlO1t.png" alt="see screenshot"></a></p> <p>How come the GPU memory is used, while the computation runs on the CPU instead of the GPU?</p>
<p>I think that you have compiled TensorFlow (or installed an already-compiled package) with CUDA support, but not with support for all the instructions available on your CPU (your CPU supports the AVX2, AVX512F and FMA instructions that TensorFlow could use).</p> <p>This means that TensorFlow will work fine (with full GPU support), but you can't use your processor at full capacity.</p> <p>Try comparing the time (GPU vs CPU) with this example: <a href="https://stackoverflow.com/a/54661896/10418812">https://stackoverflow.com/a/54661896/10418812</a></p>
python-3.x|tensorflow|keras|gpu
0
1,902,699
34,497,134
Understanding Django Pagination: need piece of documentation code explained
<p>I am working with Django's built-in <em>Pagination</em> and figured out how to functionally use it via the Django Documentation on <a href="https://docs.djangoproject.com/en/1.9/topics/pagination/" rel="nofollow">Pagination</a>. Despite the fact that I am able to make it work with my app, there is a part of the example logic displayed in the documentation's example that I find important to understand well before moving on. They don't explain it that well and I am not finding any other questions on StackOverflow (or the internet) that address it. </p> <p><strong><em>The view file example given for Django Pagination...</em></strong> <br><em>This example assumes class 'Contact' has already been imported.</em></p> <pre><code>from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger from django.shortcuts import render def listing(request): contact_list = Contact.objects.all() paginator = Paginator(contact_list, 25) # Show 25 contacts per page page = request.GET.get('page') try: contacts = paginator.page(page) except PageNotAnInteger: # If page is not an integer, deliver first page. contacts = paginator.page(1) except EmptyPage: # If page is out of range (e.g. 9999), deliver last page of results. contacts = paginator.page(paginator.num_pages) return render(request, 'list.html', {'contacts': contacts}) </code></pre> <p><strong><em>The portion of this code that I would like explained (found in the view file)...</em></strong></p> <pre><code>page = request.GET.get('page') </code></pre> <p>What is the significance of <code>('page')</code>? I can't figure it out.</p> <p><strong><em>Here is the template file (in case it helps with understanding)...</em></strong><br></p> <pre><code>{% for contact in contacts %} {# Each "contact" is a Contact model object. #} {{ contact.full_name|upper }}&lt;br /&gt; ... 
{% endfor %} &lt;div class="pagination"&gt; &lt;span class="step-links"&gt; {% if contacts.has_previous %} &lt;a href="?page={{ contacts.previous_page_number }}"&gt;previous&lt;/a&gt; {% endif %} &lt;span class="current"&gt; Page {{ contacts.number }} of {{ contacts.paginator.num_pages }}. &lt;/span&gt; {% if contacts.has_next %} &lt;a href="?page={{ contacts.next_page_number }}"&gt;next&lt;/a&gt; {% endif %} &lt;/span&gt; &lt;/div&gt; </code></pre>
<p>In the links, you are using <code>page</code> as the variable to store the page number, for example:</p> <pre><code>&lt;a href="?page={{ contacts.previous_page_number }}"&gt; </code></pre> <p>In the rendered template, this will be something like:</p> <pre><code>&lt;a href="?page=5"&gt; </code></pre> <p>When the user clicks on this link, then <code>page=5</code> will be included in the GET parameters for the request. The page number is then fetched in the view with:</p> <pre><code>page = request.GET.get('page') </code></pre> <p>There is nothing special about the chosen variable name <code>page</code>. The important thing is that the string used in <code>request.GET.get()</code> is the same as that used in the template.</p>
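The same lookup can be reproduced outside Django with the standard library, which makes it clear that `'page'` is nothing more than a query-string key (the URL below is illustrative):

```python
from urllib.parse import urlparse, parse_qs

# A request URL as produced by the template's href="?page=5" link.
url = '/contacts/?page=5'

# parse_qs maps each key to a list of values; taking the first element
# mirrors what request.GET.get('page') returns in the view.
params = parse_qs(urlparse(url).query)
page = params.get('page', ['1'])[0]
print(page)  # '5'
```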
python|django|pagination
2