Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, 10 to 150 chars) | question (string, 21 to 64.2k chars) | answer (string, 19 to 59.4k chars) | tags (string, 5 to 112 chars) | score (int64, -10 to 17.3k)
---|---|---|---|---|---|---
1,905,400 | 31,491,583 |
How can I get randomized grid search to be more verbose? (seems stopped, but can't diagnose)
|
<p>I'm running a relatively large job involving a randomized grid search on a dataset, which (even with a small n_iter_search) already takes a long time. </p>
<p>I'm running it on a 64-core machine, and for about 2 hours it kept 2000 threads active working on the first folds. It then stopped reporting to stdout completely. Its last report was:
<code>[Parallel(n_jobs=-1)]: Done 4 out of 60 | elapsed: 84.7min remaining: 1185.8min</code> </p>
<p>I've noticed in htop that almost all cores are at 0%, which would not happen when training random forests. There is no feedback or errors from the program; if it weren't for htop I would assume it was still training. This has happened before, so it is a recurring problem. The machine is perfectly responsive and the process seems alive. </p>
<p>I already have verbose = 10. Any thoughts on how I can diagnose what is going on inside the RandomizedSearchCV? </p>
<p>The grid search I'm doing:</p>
<blockquote>
<p>rfc = RandomForestClassifier(n_jobs=-1) </p>
<p>param_grid = {
'n_estimators': sp_randint(100, 5000),
'max_features' : ['auto', None],
'min_samples_split' : sp_randint(2, 6)
} </p>
<p>n_iter_search = 20</p>
<p>CV_rfc = RandomizedSearchCV(estimator=rfc, param_distributions=param_grid, n_iter = n_iter_search, verbose = 10,n_jobs = -1)</p>
</blockquote>
|
<p>As a first step, setting the <code>verbose</code> parameter on the <code>RandomForestClassifier</code> as well would let you see whether the search is really stuck: it displays progress while fitting the trees (<code>building tree 88 out of 100</code> ...). </p>
<p>I don't really know why your search got stuck, but removing the search on <code>n_estimators</code> would let you cover the entire remaining parameter space (2 <code>max_features</code> values × 4 <code>min_samples_split</code> values = 8 combinations) with an exhaustive grid search.</p>
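<p>For illustration, a minimal sketch combining both suggestions (the tiny synthetic dataset and small <code>n_estimators</code> are placeholders, not the asker's setup):</p>

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=100, n_features=8, random_state=0)

# verbose on the forest itself prints per-tree progress ("building tree ..."),
# so a stalled search stays visible even when the outer search stops reporting
rfc = RandomForestClassifier(n_estimators=10, verbose=1)

param_grid = {
    'max_features': ['sqrt', None],     # 'auto' is deprecated in newer sklearn
    'min_samples_split': [2, 3, 4, 5],  # the values sp_randint(2, 6) draws from
}

# 2 x 4 = 8 combinations, so an exhaustive GridSearchCV is feasible
search = GridSearchCV(rfc, param_grid, verbose=10)
search.fit(X, y)
print(search.best_params_)
```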
|
python|scikit-learn|random-forest|cross-validation|grid-search
| 3 |
1,905,401 | 15,923,422 |
How to read the pdf file in a line by line string in Python/Django?
|
<p>I am dealing with text and PDF files of <code>5KB</code> or less. If the file is a text file, I get the file from the form and read the required input into a string to summarize:</p>
<pre><code> file = file.readlines()
file = ''.join(file)
result = summarize(file, num_sentences)
</code></pre>
<p>It's easily done but for pdf file it turns out it's not that easy. Is there a way to get the sentences of pdf file as a string like I did with my txt file in Python/Django?</p>
|
<p>I don't think it's possible to read PDFs the way you are doing it with txt files; you need to convert the PDF to text first (see <a href="https://stackoverflow.com/questions/25665/python-module-for-converting-pdf-to-text">Python module for converting PDF to text</a>) and then process it.
You can also use this recipe to convert PDF to txt easily: <a href="http://code.activestate.com/recipes/511465-pure-python-pdf-to-text-converter/" rel="nofollow noreferrer">http://code.activestate.com/recipes/511465-pure-python-pdf-to-text-converter/</a></p>
|
python|django|pdf|file-io|readlines
| 3 |
1,905,402 | 15,556,302 |
Python write as a file, unknown format
|
<p>So here's a small snippet of a file opened in Python, but I'm not sure what format this is. It doesn't look like binary. How do I write this back out into a file?</p>
<pre><code>hDuwHkAbG9hZGVyX21jAEAAiQYJHgBpAEAQAHBAAIkGCR4AaQBAEACsQACJBgkeAGkAQBAA5EAAiQYJHgBpAEAQARxAAIkGCR4AaQBAEAFUQACJBgkeAGkAQBABkEAAiQYJHgBpAEAQAchAAIkGCR4AaQBAEAIAQACJBgkeAGkAQBACOEAAiQYJHgBpAEAQAnBAAIkGCR4AaQBAEAKsQACJBgkeAGkAQ
</code></pre>
|
<p>It's a fragment of a <a href="http://en.wikipedia.org/wiki/Base64" rel="nofollow">base-64</a> encoded binary file of some sort. Unfortunately, it hasn't been cut at a byte boundary; however, when I insert a letter at the front, the decoded version looks like this:</p>
<pre><code>รรฎรy๏ฟฝloader_mc๏ฟฝ@๏ฟฝ ๏ฟฝi๏ฟฝ@๏ฟฝp@๏ฟฝ ๏ฟฝi๏ฟฝ@๏ฟฝยฌ@๏ฟฝ ๏ฟฝi๏ฟฝ@๏ฟฝรค@๏ฟฝ ๏ฟฝi๏ฟฝ@@๏ฟฝ ๏ฟฝi๏ฟฝ@T@๏ฟฝ ๏ฟฝi๏ฟฝ@@๏ฟฝ ๏ฟฝi๏ฟฝ@ร@๏ฟฝ ๏ฟฝi๏ฟฝ@๏ฟฝ@๏ฟฝ ๏ฟฝi๏ฟฝ@8@๏ฟฝ ๏ฟฝi๏ฟฝ@p@๏ฟฝ ๏ฟฝi๏ฟฝ@ยฌ@๏ฟฝ ๏ฟฝi๏ฟฝ
</code></pre>
<p>You can see some clear meaningful ASCII in there. However, what the binary data represents is anyone's guess.</p>
<p>If you just need to decode it into the above binary format, use the <a href="http://docs.python.org/2/library/base64.html" rel="nofollow"><code>base64</code></a> module.</p>
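<p>A quick sketch of the realignment trick with the standard library (the prefix character and the truncation are guesses to restore 4-character alignment, since the fragment was not cut on a boundary):</p>

```python
import base64

snippet = "hDuwHkAbG9hZGVyX21jAEAAiQYJHgBpAEAQAHBAAIkGCR4AaQBAEACsQACJBgkeAGkAQ"
aligned = "A" + snippet                              # shift by one 6-bit unit
aligned = aligned[:len(aligned) - len(aligned) % 4]  # drop the ragged tail
raw = base64.b64decode(aligned)
print(b"loader_mc" in raw)  # -> True
```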
|
python|file|output
| 1 |
1,905,403 | 15,667,750 |
Coloring a tab in openpyxl
|
<p>We have a situation where we want to color the tabs for the worksheets using openpyxl. Is there a way to do this within the library? Or, has anyone found a way to do this external to the library (i.e. by extension or something similar)?</p>
|
<p>You can color the tabs with openpyxl by assigning an RRGGBB color code to the <code>sheet_properties.tabColor</code> property:</p>
<pre><code>from openpyxl import Workbook
wb = Workbook()
ws = wb.create_sheet('My_Color_Title')
ws.sheet_properties.tabColor = 'FFFF00'
wb.save('My_book_with_Yellow_Tab.xlsx')
</code></pre>
<p><a href="https://i.stack.imgur.com/NtZxf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NtZxf.png" alt="enter image description here"></a></p>
|
python|openpyxl
| 10 |
1,905,404 | 49,117,565 |
How Do I Write Google Sheets Data to Microsoft Excel?
|
<p>Primer: I'm <em>extremely</em> new to Python. </p>
<p>I am working on taking some Google Sheets data, and creating .xlsx sheets with it. With the code below, I am able to get Python to read the data and put it into arrays. However, I can't seem to figure out how to get openpyxl to write it to a document successfully. I'm assuming it has something to do with trying to write an array rather than iterating over rows that appear.</p>
<p>Any help/advice you all could provide would be greatly appreciated.</p>
<p>I get the following error when trying to run it:</p>
<pre><code>Traceback (most recent call last):
File "quickstart.py", line 94, in <module>
main()
File "quickstart.py", line 90, in main
ws.append([values])
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\worksheet\worksheet.py", line 763, in append
cell = Cell(self, row=row_idx, col_idx=col_idx, value=content)
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\cell\cell.py", line 115, in __init__
self.value = value
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\cell\cell.py", line 299, in value
self._bind_value(value)
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\cell\cell.py", line 206, in _bind_value
raise ValueError("Cannot convert {0!r} to Excel".format(value))
ValueError: Cannot convert [['Student Name', 'Gender', 'Class Level', 'Home State', 'Major', 'Extracurricular Activity'], ['Alexandra', 'Female', '4. Senior', 'CA', 'English', 'Drama Club'], ['Andrew', 'Male', '1. Freshman', 'SD', 'Math', 'Lacrosse'], ['Anna', 'Female', '1. Freshman', 'NC', 'English', 'Basketball'], ['Becky', 'Female', '2. Sophomore', 'SD', 'Art', 'Baseball'], ['Benjamin', 'Male', '4. Senior', 'WI', 'English', 'Basketball'], ['Carl', 'Male', '3. Junior', 'MD', 'Art', 'Debate'], ['Carrie', 'Female', '3. Junior', 'NE', 'English', 'Track & Field'], ['Dorothy', 'Female', '4. Senior', 'MD', 'Math', 'Lacrosse'], ['Dylan', 'Male', '1. Freshman', 'MA', 'Math', 'Baseball'], ['Edward', 'Male', '3. Junior', 'FL', 'English', 'Drama Club'], ['Ellen', 'Female', '1. Freshman', 'WI', 'Physics', 'Drama Club'], ['Fiona', 'Female', '1. Freshman', 'MA', 'Art', 'Debate'], ['John', 'Male', '3. Junior', 'CA', 'Physics', 'Basketball'], ['Jonathan', 'Male', '2. Sophomore', 'SC', 'Math', 'Debate'], ['Joseph', 'Male', '1. Freshman', 'AK', 'English', 'Drama Club'], ['Josephine', 'Female', '1. Freshman', 'NY', 'Math', 'Debate'], ['Karen', 'Female', '2. Sophomore', 'NH', 'English', 'Basketball'], ['Kevin', 'Male', '2. Sophomore', 'NE', 'Physics', 'Drama Club'], ['Lisa', 'Female', '3. Junior', 'SC', 'Art', 'Lacrosse'], ['Mary', 'Female', '2. Sophomore', 'AK',
'Physics', 'Track & Field'], ['Maureen', 'Female', '1. Freshman', 'CA', 'Physics', 'Basketball'], ['Nick', 'Male', '4. Senior', 'NY', 'Art', 'Baseball'], ['Olivia', 'Female', '4. Senior', 'NC', 'Physics', 'Track & Field'], ['Pamela', 'Female', '3. Junior', 'RI', 'Math', 'Baseball'], ['Patrick', 'Male', '1. Freshman', 'NY', 'Art', 'Lacrosse'], ['Robert', 'Male', '1. Freshman', 'CA', 'English', 'Track & Field'], ['Sean', 'Male', '1. Freshman', 'NH', 'Physics', 'Track & Field'], ['Stacy', 'Female', '1. Freshman', 'NY', 'Math', 'Baseball'], ['Thomas', 'Male', '2. Sophomore', 'RI', 'Art', 'Lacrosse'], ['Will', 'Male', '4. Senior', 'FL', 'Math', 'Debate']] to Excel
</code></pre>
<p>Here's the code I have so far:</p>
<pre><code>from __future__ import print_function
import httplib2
import oauth2client
import os
import googleapiclient
import openpyxl
from apiclient import discovery
from oauth2client import client
from oauth2client import tools
from oauth2client.file import Storage
from googleapiclient.discovery import build
from openpyxl import Workbook
""" This is the code to get raw data from a specific Google Sheet"""
try:
import argparse
flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
flags = None
# If modifying these scopes, delete your previously saved credentials
# at ~/.credentials/sheets.googleapis.com-python-quickstart.json
SCOPES = 'https://www.googleapis.com/auth/spreadsheets'
CLIENT_SECRET_FILE = 'client_secret_noemail.json'
APPLICATION_NAME = 'Google Sheets API Python'
def get_credentials():
"""Gets valid user credentials from storage.
If nothing has been stored, or if the stored credentials are invalid,
the OAuth2 flow is completed to obtain the new credentials.
Returns:
Credentials, the obtained credential.
"""
home_dir = os.path.expanduser('~')
credential_dir = os.path.join(home_dir, '.credentials')
if not os.path.exists(credential_dir):
os.makedirs(credential_dir)
credential_path = os.path.join(credential_dir,
'sheets.googleapis.com-python-quickstart.json')
store = Storage(credential_path)
credentials = store.get()
if not credentials or credentials.invalid:
flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
flow.user_agent = APPLICATION_NAME
if flags:
credentials = tools.run_flow(flow, store, flags)
else: # Needed only for compatibility with Python 2.6
credentials = tools.run_flow(flow, store)
print('Storing credentials to ' + credential_path)
return credentials
def main():
"""Shows basic usage of the Sheets API.
Creates a Sheets API service object and prints the names and majors of
students in a sample spreadsheet:
https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit
"""
credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
discoveryUrl = ('https://sheets.googleapis.com/$discovery/rest?'
'version=v4')
service = build('sheets', 'v4', http=http,
discoveryServiceUrl=discoveryUrl)
spreadsheetId = '1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms'
rangeName = 'Class Data!A:F'
result = service.spreadsheets().values().get(
spreadsheetId=spreadsheetId, range=rangeName).execute()
values = result.get('values', [])
if not values:
print('No data found.')
else:
wb = Workbook()
ws = wb.active
# Add new row at bottom
ws.append([values])
wb.save("users.xlsx") # Write to disk
if __name__ == '__main__':
main()
</code></pre>
<p>Update: when trying to do ws.append(values), I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "quickstart.py", line 94, in <module>
main()
File "quickstart.py", line 90, in main
ws.append(values)
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\worksheet\worksheet.py", line 763, in append
cell = Cell(self, row=row_idx, col_idx=col_idx, value=content)
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\cell\cell.py", line 115, in __init__
self.value = value
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\cell\cell.py", line 299, in value
self._bind_value(value)
File "C:\ProgramData\Anaconda3\lib\site-packages\openpyxl\cell\cell.py", line 206, in _bind_value
raise ValueError("Cannot convert {0!r} to Excel".format(value))
ValueError: Cannot convert ['Student Name', 'Gender', 'Class Level', 'Home State', 'Major', 'Extracurricular Activity'] to Excel
</code></pre>
<p><strong>Update with Pandas Code:</strong></p>
<p>When leaving the index flag as true (unedited), this is the layout the data has:</p>
<pre><code>"""
Using pandas and dataframes. For writing large amounts of data, this is probably the best way.
"""
df=DataFrame(data=values)
df.to_excel('Pandas.xlsx')
</code></pre>
<p><a href="https://i.stack.imgur.com/JxNcx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JxNcx.png" alt="Index Flag True"></a></p>
<p>When marking the index flag as false, this is how the data looks:</p>
<pre><code>"""
Using pandas and dataframes. For writing large amounts of data, this is probably the best way.
"""
df=DataFrame(data=values)
df.to_excel('Pandas.xlsx', index=False)
</code></pre>
<p><a href="https://i.stack.imgur.com/x4jiD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x4jiD.png" alt="Index Flag False"></a></p>
<p><strong>Update 2: Solved solution!</strong>
The code below from Zyd and Ben.T completely solved my issues.</p>
<pre><code>"""
Using pandas and dataframes. For writing large amounts of data, this is probably the best way.
"""
df=DataFrame(data=values)
df.to_excel('Pandas.xlsx', header=False, index=False)
</code></pre>
<p><a href="https://i.stack.imgur.com/dcTwG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dcTwG.png" alt="No Indexes"></a></p>
<p>Thank you all for the help!</p>
|
<p>It seems that the variable called <code>values</code> is a list containing a list for each row of your google spreadsheet.
I tried:</p>
<pre><code>import openpyxl
# to recreate a list of list
list_input = [['one','two'],['second row','with',14]]
wb = openpyxl.Workbook()
ws = wb.active
ws.append(list_input)
wb.save("test.xlsx")
</code></pre>
<p>and I got the same error as in your update.</p>
<p>By replacing the line <code>ws.append(list_input)</code> by</p>
<pre><code>for row in list_input:
ws.append(row)
</code></pre>
<p>My code worked, so I would replace your line <code>ws.append([values])</code> by</p>
<pre><code>for row in values:
ws.append(row)
</code></pre>
<p>Hope this is of some help to you.</p>
|
python|excel|google-sheets
| 2 |
1,905,405 | 70,885,791 |
How to prevent re-assignment of a class method outside the class?
|
<p>I want some of my class methods to not be re-assignable outside (or both outside and inside) of the class. Since functions are objects in Python too, is it possible to accomplish this? Here is the method:</p>
<pre><code>def x(self):
"""Return x, the horizontal distance between the shape and the leftmost
of the screen, in pixels.
"""
return self.__x
</code></pre>
<p>Now outside of the class, it is easy to mess with this method;</p>
<pre><code>shape = Shape(10, 20)
shape.x = 30
print(shape.x) # Prints 30
print(shape.x()) # TypeError: 'int' object is not callable
</code></pre>
<p>Is it possible to prevent this kind of assignment of class methods?</p>
|
<p>Make <code>x</code> a property with no setter, not a method.</p>
<pre><code>class Shape:
...
@property
def x(self):
return self.__x
</code></pre>
<p>Attempts to assign to <code>x</code> will result in an exception:</p>
<pre><code>>>> shape.x = 30
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
</code></pre>
<p>If you really want something that must be called, have the property return that instead of the value the function will return.</p>
<pre><code>class Shape:
@property
def x(self):
return lambda: self.__x
</code></pre>
|
python|python-3.x|class|oop|methods
| 2 |
1,905,406 | 70,852,651 |
Pandas rolling up column values based upon max value in column when aggregating
|
<p>I have a problem regarding my pandas data frame, which contains row-level data for users, such as which group they belong to, their country, their type, and the total number of impressions from that user. An example slice of my df:</p>
<pre><code>โโโโโโโโโโโโฌโโโโโโโโโโโโฌโโโโโโโโโโโโโโฌโโโโโโโโโโโฌโโโโโโโโโโโโโโโโโโโ
โ userID โ userGroup โ userCountry โ userType โ totalImpressions โ
โโโโโโโโโโโโผโโโโโโโโโโโโผโโโโโโโโโโโโโโผโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโค
โ Xn2r5u8x โ one โ USA โ premium โ 7454 โ
โ fTjWnZr4 โ two โ USA โ basic โ 657 โ
โ MbQeThWm โ two โ USA โ standard โ 578 โ
โ Xp2s5v8y โ two โ JP โ core โ 34509 โ
โ gUkXn2r5 โ three โ JP โ core โ 43 โ
โ NcRfUjXn โ four โ UK โ premium โ 85656 โ
โ WmZq4t7w โ four โ USA โ core โ 3456 โ
โ eThVmYq3 โ four โ JP โ standard โ 7689 โ
โ KbPeShVk โ four โ UK โ standard โ 92834 โ
โโโโโโโโโโโโดโโโโโโโโโโโโดโโโโโโโโโโโโโโดโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโ
</code></pre>
<p>As you can see, the info is 1 row per user, where users in a group can have different countries, types, and numbers of total impressions.</p>
<p>What I would like to do is roll this data up to the <code>userGroup</code> level, getting rid of the <code>userID</code>, keeping the <code>userCountry</code> and <code>userType</code> of the user with the highest number of <code>totalImpressions</code> , and summing up the <code>totalImpression</code> for all users in that group. This should result in a data frame like:</p>
<pre><code>โโโโโโโโโโโโโฌโโโโโโโโโโโโโโโฌโโโโโโโโโโโโฌโโโโโโโโโโโโโโโโโโโโโโโโ
โ userGroup โ groupCountry โ groupType โ groupTotalImpressions โ
โโโโโโโโโโโโโผโโโโโโโโโโโโโโโผโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโค
โ one โ USA โ premium โ 7454 โ
โ two โ JP โ core โ 35744 โ
โ three โ JP โ core โ 43 โ
โ four โ UK โ standard โ 189635 โ
โโโโโโโโโโโโโดโโโโโโโโโโโโโโโดโโโโโโโโโโโโดโโโโโโโโโโโโโโโโโโโโโโโโ
</code></pre>
<p>As you can see <code>groupCountry</code> and <code>groupType</code> are coming from the user within that group with the highest <code>totalImpression</code> rather than the first row value in a group.</p>
<p>Is this something possible in pandas? I know I could aggregate using <code>pd.groupby</code>, but from there I am not sure how to select the country/type of the top user by totalImpressions. Any help would be greatly appreciated!</p>
<p>To generate my example data :</p>
<pre><code>import pandas as pd
lst = [['Xn2r5u8x', 'one', 'USA', 'premium', 7454],
['fTjWnZr4', 'two', 'USA', 'basic', 657],
['MbQeThWm', 'two', 'USA', 'standard', 578],
['Xp2s5v8y', 'two', 'JP', 'core', 34509],
['gUkXn2r5', 'three', 'JP', 'core', 43],
['NcRfUjXn', 'four', 'UK', 'premium', 85656],
['WmZq4t7w', 'four', 'USA', 'core', 3456],
['eThVmYq3', 'four', 'JP', 'standard', 7689],
['KbPeShVk', 'four', 'UK', 'standard', 92834]]
df = pd.DataFrame(lst, columns=['userID', 'userGroup', 'userCountry', 'userType', 'totalImpressions'])
</code></pre>
|
<p>If you sort your dataframe in ascending order by the <code>totalImpressions</code> column, you just have to keep the last row of each group and sum the impressions.</p>
<p>Use <code>groupby.agg</code>:</p>
<pre><code>out = df.sort_values('totalImpressions').groupby('userGroup', as_index=False) \
.agg({'userGroup': 'last', 'userCountry': 'last',
'userType': 'last', 'totalImpressions': 'sum'})
print(out)
# Output
userGroup userCountry userType totalImpressions
0 four UK standard 189635
1 one USA premium 7454
2 three JP core 43
3 two JP core 35744
</code></pre>
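<p>An alternative sketch that avoids the full sort: locate the top user per group with <code>idxmax</code> and attach the group sums (using the same <code>df</code> built in the question):</p>

```python
import pandas as pd

lst = [['Xn2r5u8x', 'one', 'USA', 'premium', 7454],
       ['fTjWnZr4', 'two', 'USA', 'basic', 657],
       ['MbQeThWm', 'two', 'USA', 'standard', 578],
       ['Xp2s5v8y', 'two', 'JP', 'core', 34509],
       ['gUkXn2r5', 'three', 'JP', 'core', 43],
       ['NcRfUjXn', 'four', 'UK', 'premium', 85656],
       ['WmZq4t7w', 'four', 'USA', 'core', 3456],
       ['eThVmYq3', 'four', 'JP', 'standard', 7689],
       ['KbPeShVk', 'four', 'UK', 'standard', 92834]]
df = pd.DataFrame(lst, columns=['userID', 'userGroup', 'userCountry',
                                'userType', 'totalImpressions'])

# row of the user with the highest totalImpressions in each group
top = df.loc[df.groupby('userGroup')['totalImpressions'].idxmax()]
# replace that user's impressions with the group total and drop the userID
sums = df.groupby('userGroup')['totalImpressions'].sum()
out = (top.drop(columns='userID')
          .set_index('userGroup')
          .assign(totalImpressions=sums)
          .reset_index())
print(out)
```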
|
python|pandas
| 1 |
1,905,407 | 2,494,141 |
Evaluating for loops in python, containing an array with an embedded for loop
|
<p>I was looking at the following code in python:</p>
<pre><code>for ob in [ob for ob in context.scene.objects if ob.is_visible()]:
pass
</code></pre>
<p>Obviously, it's a for each loop, saying for each object in foo array. However, I'm having a bit of trouble reading the array. If it just had:</p>
<pre><code>[for ob in context.scene.objects if ob.is_visible()]
</code></pre>
<p>that would make some sense; granted, the reuse of the same name <code>ob</code> seems odd to me, but it looks readable enough: the list consists of every element produced by that loop. However, the <code>ob</code> before the for loop makes little sense to me. Is there some syntax here that isn't in the documentation I've seen?</p>
<p>Thank you</p>
|
<p>That is the syntax for <a href="http://docs.python.org/tutorial/datastructures.html#list-comprehensions" rel="nofollow noreferrer">list comprehension</a>: the expression before the <code>for</code> is what gets put into the resulting list, once for each item that passes the <code>if</code> filter.</p>
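<p>As a sketch of how the comprehension expands into an equivalent plain loop (plain integers stand in for the scene objects):</p>

```python
objs = [1, 2, 0, 3]

# the expression before `for` is what lands in the list,
# evaluated for each item that passes the `if` filter
squares = [x * x for x in objs if x]

# equivalent spelled-out loop
result = []
for x in objs:
    if x:
        result.append(x * x)

print(squares)            # -> [1, 4, 9]
print(squares == result)  # -> True
```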
|
python|arrays|list|foreach
| 1 |
1,905,408 | 6,063,669 |
How can I take a specific line from a big file which has many similar lines, using Python?
|
<p>I have different files with the same name in different directories. In these files there are lines which are almost equal; I would like to take out only the last such line (there are more lines after it) and write it to another file.</p>
<p>So far what I have done:</p>
<pre><code>#!/usr/bin/env python
import os
def cd_grep():
for file in os.listdir("."):
if os.path.isfile(file):
for line in open("graph.txt"):
if " 4.49" in line:
line_list=[line]
g = open('comparation','a')
g.write ("%s" % (line[0:4]))
g.close()
os.chdir('4.294')
cd_grep()
os.chdir(os.pardir)
os.chdir('4.394')
cd_grep()
os.chdir(os.pardir)
os.chdir('4.494')
cd_grep()
os.chdir(os.pardir)
os.chdir('4.594')
cd_grep()
os.chdir(os.pardir)
os.chdir('4.694')
cd_grep()
</code></pre>
<p>I've created a list because I am gonna take only a specific information of the whole line.</p>
<p>It turns out this procedure only works for small files, and only if the last line of the file contains the term I'm searching for.
For big files, the output file ends up containing this message instead of the line I was hoping to get:<br>
Voluntary context switches: 3403</p>
<p>Any idea or suggestion will be very appreciate.</p>
|
<p>Not sure about the error you are receiving (after your last edit). </p>
<p>I have tried to rewrite the code a bit, hope it gives you a result similar to what you need (WARNING: not tested). </p>
<pre><code>import os

with open('comparation', 'a') as write_file:
for path, dirs, files in os.walk(os.getcwd()):
for filename in [f for f in files if f == "graph.txt"]:
filepath = os.path.abspath(os.path.join(path, filename))
with open(filepath) as f:
for line in f:
if " 4.49" in line:
last = line
write_file.write("File: %s, Line: %s\n" % (filepath, last[0:4]))
</code></pre>
|
python|text|readline
| 2 |
1,905,409 | 67,975,328 |
SLURM and Python multiprocessing pool on a cluster
|
<p>I am trying to run a simple parallel program on a SLURM cluster (4x raspberry Pi 3) but I have no success. I have been reading about it, but I just cannot get it to work. The problem is as follows:</p>
<p>I have a Python program named <strong>remove_duplicates_in_scraped_data.py</strong>. This program is executed on a single node (node=1xraspberry pi) and inside the program there is a multiprocessing loop section that looks something like:</p>
<pre><code>pool = multiprocessing.Pool()
input_iter= product(FeaturesArray_1, FeaturesArray_2, repeat=1)
results = pool.starmap(refact_featureMatch, input_iter)
</code></pre>
<p>The idea is that when it hits that part of the program it should distribute the calculations, one thread per element in the iterator and combine the results in the end.
So, the program <strong>remove_duplicates_in_scraped_data.py</strong> runs once (not multiple times) and it spawns different threads during the pool calculation.</p>
<p>On a single machine (without using SLURM) it works just fine, and for the particular case of a Raspberry Pi, it spawns 4 threads, does the calculations, saves them in results and continues the program as a single thread.</p>
<p>I would like to exploit all the 16 threads of the SLURM cluster but I cannot seem to get it to work. And I am confident that the cluster has been configured correctly, since it can run all the multiprocessing examples (e.g. calculate the digits of pi) using SLURM in all 16 threads of the cluster.</p>
<p>Now, looking at the SLURM configuration with <code>sinfo -N -l</code> we have:</p>
<pre><code>NODELIST NODES PARTITION STATE CPUS S:C:T MEMORY TMP_DISK WEIGHT AVAIL_FE REASON
node01 1 picluster* idle 4 4:1:1 1 0 1 (null) none
node02 1 picluster* idle 4 4:1:1 1 0 1 (null) none
node03 1 picluster* idle 4 4:1:1 1 0 1 (null) none
node04 1 picluster* idle 4 4:1:1 1 0 1 (null) none
</code></pre>
<p>Each cluster reports 4 sockets, 1 Core and 1 Thread and as far as SLURM is concerned 4 CPUs.</p>
<p>I wish to exploit all 16 CPUs, and if I run my program as:</p>
<pre><code>srun -N 4 -n 16 python3 remove_duplicates_in_scraped_data.py
</code></pre>
<p>It will just run 4 copies of the main program on each node, resulting in 16 processes. But this is not what I want. I want a single instance of the program, which then spawns the 16 threads across the cluster. At least we know that with <code>srun -N 4 -n 16</code> the cluster works.</p>
<p>So, I tried instead changing the program as follows:</p>
<pre><code>
#!/usr/bin/python3
#SBATCH -p picluster
#SBATCH --nodes=4
#SBATCH --ntasks=16
#SBATCH --cpus-per-task=1
#SBATCH --ntasks-per-node=4
#SBATCH --ntasks-per-socket=1
#SBATCH --sockets-per-node=4
sys.path.append(os.getcwd())
...
...
...
pool = multiprocessing.Pool()
input_iter= product(FeaturesArray_1, FeaturesArray_2, repeat=1)
results = pool.starmap(refact_featureMatch, input_iter)
...
...
</code></pre>
<p>and executing it with</p>
<pre><code>sbatch remove_duplicates_in_scraped_data.py
</code></pre>
<p>The slurm job is created successfully and I see that all nodes have been allocated on the cluster</p>
<pre><code>PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
picluster* up infinite 4 alloc node[01-04]
</code></pre>
<p>The program starts running as a single thread on node01 but when it hits the parallel part it only spawns 4 threads on node01 and nothing on all the other nodes.</p>
<p>I tried different combination of settings, even tried to run it via a script</p>
<pre><code>#!/bin/bash
#SBATCH -p picluster
#SBATCH --nodes=4
#SBATCH --ntasks=16
#SBATCH --cpus-per-task=1
#SBATCH --ntasks-per-node=4
#SBATCH --ntasks-per-socket=1
#SBATCH --ntasks-per-core=1
#SBATCH --sockets-per-node=4
python3 remove_duplicates_in_scraped_data.py
</code></pre>
<p>but I just cannot get it to spawn on the other nodes.</p>
<p>Can you please help me?
Is it even possible to do this? i.e. use python's multiprocessing pool on different nodes of a cluster?
If not, what other options do I have?
The cluster also has dask configured. Would that be able to work better?</p>
<p>Please help as I am really stuck with this.</p>
<p>Thanks</p>
|
<p>So instead I ran Dask with the SLURM cluster, and the Python script seems to parallelise well. This required the least amount of code changes. The multiprocessing pool code above was changed to:</p>
<pre><code>from dask.distributed import Client
from dask_jobqueue import SLURMCluster

cluster = SLURMCluster( header_skip=['--mem'],
queue='picluster',
cores=4,
memory='1GB'
)
cluster.scale(cores=16) #the number of nodes to request
dask_client = Client(cluster)
lazy_results=[]
for pair in input_iter:
res = dask_client.submit(refact_featureMatch, pair[0], pair[1])
lazy_results.append(res)
results = dask_client.gather(lazy_results)
</code></pre>
<p>There might be of course better ways of doing this via DASK. I am open to suggestions :)</p>
|
python-3.x|raspberry-pi|multiprocessing|slurm
| 1 |
1,905,410 | 30,289,891 |
Recursive function with same arguments
|
<p>I'm creating a text-based game in Python and have hit a road block. I have a function that checks if user input contains certain words: if it does, it returns the user input; otherwise it re-asks for the input. If you write something that doesn't contain one of the words, it will re-call the function. </p>
<pre><code>def contains_words(prompt, words):
user_input = raw_input(prompt).strip().lower()
if user_input == "instructions":
print
instructions()
print
contains_words(prompt, words)
elif user_input == "i" or user_input == "inventory":
if len(inventory) == 0:
print
print "There is nothing in your inventory."
print
contains_words(prompt, words)
else:
print "Your inventory contains: " + inventory
contains_words(prompt, words)
else:
if user_input in words:
return user_input
else:
print
print "I did not understand your answer, consider rephrasing."
contains_words(prompt , words)
</code></pre>
<p>Here is me calling it:</p>
<pre><code>pizza = contains_words("Do you like pizza?", ["yes", "no"])
</code></pre>
<p>Within this function you can bring up the instructions or your inventory, and then it will re-call the function. Everything works normally if you include one of the words in your answer the first time you are asked. The problem happens when you enter something incorrect, bring up the inventory, or bring up the instructions: the function then returns nothing instead of the user input. Why is this happening? Is it because the function restarts, so the parameters equal None? </p>
|
<p>When you recurse, you need to return the result of the recursive call; otherwise the outer call falls off the end of the function and implicitly returns <code>None</code>.</p>
<pre><code>return contains_words(prompt , words)
</code></pre>
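<p>A stripped-down sketch of the same bug and fix (the <code>ask</code> function and its simulated inputs are made up for illustration):</p>

```python
def ask(valid, answers):
    """Simulate re-prompting: `answers` stands in for successive user inputs."""
    a = answers.pop(0)
    if a in valid:
        return a
    # Without `return` here, the recursive call's result would be discarded
    # and the outer call would implicitly return None.
    return ask(valid, answers)

print(ask(["yes", "no"], ["maybe", "instructions", "yes"]))  # -> yes
```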
|
python|user-input|text-based
| 1 |
1,905,411 | 67,092,524 |
Display HTML list from nested python dictionary
|
<p>I got the following Python dictionary:</p>
<pre class="lang-py prettyprint-override"><code>d = {'DataType': 'XY', 'DataReportGrp': {'NoSides_1': {'Side': '2', 'Reports': {'NoReportIDs_1': {'ReportID': '250001'}, 'NoReportIDs_2': {'ReportID': '250002'}}}}}
</code></pre>
<p>I want to transform it into an HTML list similar to this:</p>
<pre class="lang-html prettyprint-override"><code> <li> DataType </li>
<li> DataReportGrp </li>
<ul>
<li> NoSides_1 </li>
<ul>
<li> Side </li>
<li> Reports </li>
<ul>
<li> NoReportIDs_1 </li>
<ul>
<li> ReportID </li>
</ul>
<li> NoReportIDs_2 </li>
<ul>
<li> ReportID </li>
</ul>
</ul>
</ul>
</ul>
</code></pre>
<p>I tried to do this with the code below. Unfortunately, I do not know what logic to implement to close the <code></ul></code> group at the right time, and the dictionary might get more nested levels in the future:</p>
<pre class="lang-py prettyprint-override"><code> def nested_groups(levels):
groups = list()
def going_through(d):
if isinstance(levels, dict):
for kd, kv in d.items():
if isinstance(kv, dict):
groups.append(kd)
print(f" <li> --- {kd} </li>")
print(" <ul>")
going_through(kv)
else:
print(f" <li> --- {kd} </li>")
going_through(levels)
print("</ul>" * len(groups))
</code></pre>
<p>This results into:</p>
<pre class="lang-html prettyprint-override"><code>
<li> --- DataType </li>
<li> --- DataReportGrp </li>
<ul>
<li> --- NoSides_1 </li>
<ul>
<li> --- Side </li>
<li> --- Reports </li>
<ul>
<li> --- NoReportIDs_1 </li>
<ul>
<li> --- ReportID </li>
<li> --- NoReportIDs_2 </li>
<ul>
<li> --- ReportID </li>
</ul></ul></ul></ul></ul>
</code></pre>
<p>Some help would be appreciated.</p>
|
<p>You can use a recursive generator function:</p>
<pre><code>d = {'DataType': 'XY', 'DataReportGrp': {'NoSides_1': {'Side': '2', 'Reports': {'NoReportIDs_1': {'ReportID': '250001'}, 'NoReportIDs_2': {'ReportID': '250002'}}}}}
def to_html(d, c = 0):
for a, b in d.items():
yield '{}<li>{}</li>'.format(' '*c, a)
if isinstance(b, dict):
yield '{}<ul>\n{}\n{}</ul>'.format(' '*c, "\n".join(to_html(b, c + 1)), ' '*c)
print('\n'.join(to_html(d)))
</code></pre>
<p>Output:</p>
<pre><code><li>DataType</li>
<li>DataReportGrp</li>
<ul>
<li>NoSides_1</li>
<ul>
<li>Side</li>
<li>Reports</li>
<ul>
<li>NoReportIDs_1</li>
<ul>
<li>ReportID</li>
</ul>
<li>NoReportIDs_2</li>
<ul>
<li>ReportID</li>
</ul>
</ul>
</ul>
</ul>
</code></pre>
|
python|html
| 1 |
1,905,412 | 64,019,993 |
Google Cloud SQL - Python Flask connection issue
|
<p>I'm trying to make a Flask app work with Google Cloud SQL following this tutorial:</p>
<p><a href="https://www.smashingmagazine.com/2020/08/api-flask-google-cloudsql-app-engine/" rel="nofollow noreferrer">https://www.smashingmagazine.com/2020/08/api-flask-google-cloudsql-app-engine/</a></p>
<p>Everything is going well except for the MySQL connection, which triggers this <code>UnboundLocalError: local variable 'conn' referenced before assignment</code> error that seems more to do with python than the frameworks in the tutorial. Any thoughts?</p>
<p>Here's the code that is causing the problem:</p>
<pre><code>def open_connection():
unix_socket = '/cloudsql/{}'.format(db_connection_name)
try:
if os.environ.get('GAE_ENV') == 'standard':
conn = pymysql.connect(user=db_user, password=db_password,
unix_socket=unix_socket, db=db_name,
cursorclass=pymysql.cursors.DictCursor
)
except pymysql.MySQLError as e:
print(e)
return conn
def get_songs():
conn = open_connection()
with conn.cursor() as cursor:
result = cursor.execute('SELECT * FROM songs;')
songs = cursor.fetchall()
if result > 0:
got_songs = jsonify(songs)
else:
got_songs = 'No Songs in DB'
conn.close()
return got_songs
</code></pre>
<p>Error message:</p>
<pre><code> return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/timpeterson/python/flask-app/api/main.py", line 16, in songs
return get_songs()
File "/Users/timpeterson/python/flask-app/api/db.py", line 26, in get_songs
conn = open_connection()
File "/Users/timpeterson/python/flask-app/api/db.py", line 23, in open_connection
return conn
UnboundLocalError: local variable 'conn' referenced before assignment
</code></pre>
|
<p>According to the <a href="https://cloud.google.com/appengine/docs/standard/python/config/appref" rel="nofollow noreferrer">official documentation</a>:</p>
<pre><code>Optional. You can define environment variables in your app.yaml file to make them available to your app.
Environment variables that are prefixed with GAE are reserved for system use and not allowed in the app.yaml file.
These variables will be available in the os.environ dictionary:
env_variables:
DJANGO_SETTINGS_MODULE: "myapp.settings"
</code></pre>
<p>I checked the tutorial <code>app.yaml</code> file and the variable <code>GAE_ENV</code> was not set.</p>
<pre><code>#app.yaml
runtime: python37
env_variables:
CLOUD_SQL_USERNAME: YOUR-DB-USERNAME
CLOUD_SQL_PASSWORD: YOUR-DB-PASSWORD
CLOUD_SQL_DATABASE_NAME: YOUR-DB-NAME
CLOUD_SQL_CONNECTION_NAME: YOUR-CONN-NAME
</code></pre>
<p>Therefore I believe your <code>if</code> condition is false and <code>conn</code> is referenced before assignment.</p>
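<p>As an illustration of the fix, here is a minimal, library-free sketch of a safer pattern (the <code>connect</code> callable and the flag are hypothetical stand-ins for <code>pymysql.connect</code> and the <code>GAE_ENV</code> check): bind <code>conn</code> before the branch and fail loudly when no connection was made, instead of returning an unbound name.</p>

```python
def open_connection(connect, in_gae_standard):
    """Return a connection, or raise instead of hitting UnboundLocalError."""
    conn = None  # always bound, even when the branch below is skipped
    if in_gae_standard:
        conn = connect()
    if conn is None:
        raise RuntimeError(
            "No DB connection made: GAE_ENV is not 'standard' (or connect failed)"
        )
    return conn

# Demo with a stand-in connect callable:
print(open_connection(lambda: "fake-conn", True))  # prints: fake-conn
```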
|
python|google-app-engine|flask|global|google-cloud-sql
| 2 |
1,905,413 | 50,684,663 |
Returning an alternate column in pandas dataframe
|
<p>I have two dataframes. Dataframe A is an experimental dataframe which contains a list of things which have been used (along with date etc). Dataframe B is a reference dataframe. Dataframe A and B have matching index numbers. I want to update Dataframe A with extra information from Dataframe B, where the index numbers match. </p>
<p>For example </p>
<p>dfA</p>
<pre><code>REF
ABC
DEF
DEF
XYZ
</code></pre>
<p>dfB </p>
<pre><code>REF VALUE
ABC 1.23
DEF 2.22
XYZ 3.33
</code></pre>
<p>In reality the reference dataframe is much larger than the experimental dataframe. I would like to create a new column in dataframe A with the value from dataframe B based on matching references. I have tried 'is in' and where but the mis matched lengths of the series creating an error.
I have tried using merge but as dataframe A has repetitions of the reference value the merged dataframe has too many rows.
Is there an effective way to do this without creating a new series or column for each reference?</p>
|
<p>Using <strong><code>map</code></strong> with <strong><code>set_index</code></strong></p>
<pre><code>df1['res'] = df1.REF.map(df2.set_index('REF')['VALUE'])
REF res
0 ABC 1.23
1 DEF 2.22
2 DEF 2.22
3 XYZ 3.33
</code></pre>
|
python|pandas|dataframe|merge
| 1 |
1,905,414 | 51,017,395 |
not able to send mail using python
|
<p>I'm using the following method to send mail from Python using SMTP.
I have tried many times to run this code, which I believe is right, but it always shows me the same error. Is there anyone who can help me with this?</p>
<pre><code>import smtplib
TO = 'xxx@gmail.com'
SUBJECT = 'TEST MAIL'
TEXT = 'Here is a message from python.'
gms = 'xxxx@gmail.com'
gm = 'www'
server = smtplib.SMTP('smtp.gmail.com', 587)
server.ehlo()
server.starttls()
server.login(gms, gm)
BODY = '\r\n'.join(['To: %s' % TO,
'From: %s' % gms,
'Subject: %s' % SUBJECT,
'', TEXT])
try:
server.sendmail(gms, [TO], BODY)
print('email sent')
finally:
print('Error sending mail')
server.quit()
</code></pre>
<p>And the errors are:-</p>
<pre><code> Traceback (most recent call last):
File "E:/projects/Intern/Mail/Maill.py", line 10, in <module>
server = smtplib.SMTP('smtp.gmail.com', 587)
File "C:\Users\KD\AppData\Local\Programs\Python\Python36\lib\smtplib.py", line 251, in __init__
(code, msg) = self.connect(host, port)
File "C:\Users\KD\AppData\Local\Programs\Python\Python36\lib\smtplib.py", line 336, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "C:\Users\KD\AppData\Local\Programs\Python\Python36\lib\smtplib.py", line 307, in _get_socket
self.source_address)
File "C:\Users\KD\AppData\Local\Programs\Python\Python36\lib\socket.py", line 724, in create_connection
raise err
File "C:\Users\KD\AppData\Local\Programs\Python\Python36\lib\socket.py", line 713, in create_connection
sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
</code></pre>
|
<p>This is most likely a local issue with your firewall or ISP, which is probably blocking outgoing SMTP connections. There is no other reason why smtp.gmail.com should timeout - it should always be online. Even if the credentials were invalid, you usually get a smart error code from the gmail server.</p>
<p>Check your antivirus software or firewall. Or try using a Web/HTTP API for sending email, instead of SMTP.</p>
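<p>As a quick way to check the firewall theory from the machine itself, a small stdlib helper (the function name here is my own) can tell you whether a TCP connection to the SMTP port succeeds at all:</p>

```python
import socket

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If this prints False, outgoing SMTP is most likely blocked by a firewall/ISP:
# print(port_open("smtp.gmail.com", 587))
```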
|
python-3.x|smtp
| 3 |
1,905,415 | 50,895,070 |
How to call a python function in PySpark?
|
<p>I have multiple files (CSV and XML) and I want to apply some filters.
I defined a function doing all those filters, and I want to know how I can call it so that it applies to my CSV file?<br>
PS: The type of my dataframe is: pyspark.sql.dataframe.DataFrame<br>
Thanks in advance</p>
|
<p>For example, if you read in your first CSV files as <code>df1 = spark.read.csv(..)</code> and your second CSV file as <code>df2 = spark.read.csv(..)</code></p>
<p>Wrap up all the multiple <code>pyspark.sql.dataframe.DataFrame</code> that came from CSV files alone into a list..</p>
<p><code>csvList = [df1, df2, ...]</code></p>
<p>and then,</p>
<pre><code>for i in csvList:
YourFilterOperation(i)
</code></pre>
<p>Basically, for every <code>i</code> which is <code>pyspark.sql.dataframe.DataFrame</code> that came from a CSV file stored in <code>csvList</code>, it should iterate one by one, go inside the loop and perform whatever filter operation that you've written.</p>
<p>Since you haven't provided any reproducible code, I can't see if this works on my Mac.</p>
|
python|pyspark
| 0 |
1,905,416 | 3,977,007 |
Buttons in vizard
|
<p>I am developing a GUI in Vizard in which i am using radio buttons to select correct options, but I have a problem that I cannot solve.</p>
<p>In every group of radio buttons the 1st button always appears to be selected, but is actually not selected. The only way to select it is by choosing another button and then choosing the 1st button again. </p>
<p>Does anyone know how to solve this? I would like that in the beginning none of the buttons are selected.</p>
|
<p>I don't know Vizard, but radio buttons probably have a method or another way of deselecting them, similar to the <code>radiobutton.deselect()</code> method of Tkinter. Have you looked at the documentation?</p>
<p>Also, someone has done this trick; maybe you should try it:
Create another radio button in that group, let it be selected by default, and make it invisible:</p>
<pre><code>QuestionBox = vizinfo.add("") #Add the vizinfo object
#Create an invisible Radio button to be the default selected one, so that one of the visible ones must be chosen by the user
invisibleRadio = QuestionBox.add(viz.RADIO, 0, "")
invisibleRadio.visible(0) #invisible
</code></pre>
<p>source: <a href="http://forum.worldviz.com/showthread.php?t=1611" rel="nofollow">http://forum.worldviz.com/showthread.php?t=1611</a></p>
|
python|radio-button|vizard
| 2 |
1,905,417 | 56,848,013 |
Pythonic way of repeating until a return value is not None
|
<p>I have a list of URLs, where some of the URLs might be invalid. I want to gather an image from one randomly chosen URL. The <code>get_image_from_url</code> function returns <code>None</code> if the URL or the image is invalid. Therefore, I want to repeat the line <code>get_image_from_url(random.choice(urls))</code> until it does not return <code>None</code>. Because <code>urls</code> might be a large list I do not want to filter out invalid urls for the complete list before. I did the following:</p>
<pre><code>image = None
while not image:
image = ImageNetUtilities.get_image_from_url(random.choice(urls))
return image
</code></pre>
<p>I wonder if there is any better (more pythonic) way of achieving my goal. I'm using Python3. </p>
<p><strong>EDIT</strong></p>
<p>As suggested in the comments I should remove any url I already selected. I replaced</p>
<pre><code>image = ImageNetUtilities.get_image_from_url(random.choice(urls))
</code></pre>
<p>with </p>
<pre><code>image = ImageNetUtilities.get_image_from_url(urls.pop(random.randrange(len(urls))))
</code></pre>
|
<p>A version that 1) avoids an endless loop if <em>all</em> URLs are invalid, 2) tests each URL only once and 3) uses cool Python features would be:</p>
<pre><code>image = next(filter(None, map(ImageNetUtilities.get_image_from_url, random.sample(urls, k=len(urls)))))
</code></pre>
<p>Instead of <code>random.sample(urls, k=len(urls))</code> you could also <code>random.shuffle</code> your list beforehand.</p>
<p>To pick this apart:</p>
<ul>
<li><a href="https://docs.python.org/3/library/random.html#random.sample" rel="nofollow noreferrer"><code>random.sample(urls, k=len(urls))</code></a> produces a randomly shuffled version of your list</li>
<li><a href="https://docs.python.org/3/library/functions.html#map" rel="nofollow noreferrer"><code>map(ImageNetUtilities.get_image_from_url, ...)</code></a> produces a generator which successively applies <code>ImageNetUtilities.get_image_from_url</code> to each URL</li>
<li><a href="https://docs.python.org/3/library/functions.html#filter" rel="nofollow noreferrer"><code>filter(None, ...)</code></a> removes all <code>None</code> values from the <code>map</code> generator</li>
<li><a href="https://docs.python.org/3/library/functions.html#next" rel="nofollow noreferrer"><code>next(...)</code></a> iterates this generator and returns the first value, or raises a <code>StopIteration</code> exception if no URLs are valid</li>
</ul>
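<p>A self-contained illustration of the one-liner, with a stand-in for <code>get_image_from_url</code> that treats only <code>.jpg</code> URLs as valid (the real function would download the image or return <code>None</code>):</p>

```python
import random

def get_image_from_url(url):
    # stand-in: pretend only .jpg URLs yield a valid image
    return url.upper() if url.endswith(".jpg") else None

urls = ["a.txt", "b.jpg", "c.png", "d.jpg"]
image = next(filter(None, map(get_image_from_url, random.sample(urls, k=len(urls)))))
print(image)  # "B.JPG" or "D.JPG" depending on the shuffle; invalid URLs are skipped
```

<p>Because <code>map</code> and <code>filter</code> are lazy, URLs after the first valid one are never fetched.</p>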
|
python|python-3.x
| 2 |
1,905,418 | 64,997,823 |
How to get stable plot using matplotlib
|
<p>I'm trying to plot 3D poses of a hand using matplotlib. For every frame of the video (that is, whenever the data changes), the plot size (X, Y, Z) also changes with respect to the object's (hand pose's) size and position. For more detail, below are two screenshots of consecutive frames; in the screenshots we can see that the x, y and z axes change as the hand pose changes.</p>
<p><a href="https://i.stack.imgur.com/rZbEE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rZbEE.png" alt="Image_1" /></a> <a href="https://i.stack.imgur.com/zeQ0V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zeQ0V.png" alt="Image_2" /></a></p>
<p>My question is: how do I get a stable plot whose size will not change even when the object's position changes?</p>
<p>As a reference, below is another image that shows a stable plot like the one I'm trying to get. As we can see, the input frames and object size change, but the plot stays the same size. It does not matter if the object size changes within the plot.</p>
<p><a href="https://i.stack.imgur.com/qRtip.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qRtip.gif" alt="Result_Image" /></a></p>
<p>Below is my plotting code. Kindly take it just as plotting sample, I'm sharing only plotting code because problem is in plotting code.</p>
<pre><code>fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.view_init(elev=20, azim=75)
ax.set_xlabel('X Axis')
ax.set_ylabel('Y Axis')
ax.set_zlabel('Z Axis')
rgb_dict = get_keypoint_rgb(skeleton)
for i in range(len(skeleton)): # here is skeleton, just ignore it
joint_name = skeleton[i]['name']
pid = skeleton[i]['parent_id']
parent_joint_name = skeleton[pid]['name']
x = np.array([kps_3d[i, 0], kps_3d[pid, 0]]) # kps_3d is pose data
y = np.array([kps_3d[i, 1], kps_3d[pid, 1]])
z = np.array([kps_3d[i, 2], kps_3d[pid, 2]])
ax.plot(x, z, -y, c = np.array(rgb_dict[parent_joint_name])/255., linewidth = line_width)
ax.scatter(kps_3d[i, 0], kps_3d[i, 2], -kps_3d[i, 1], c = np.array(rgb_dict[joint_name]).reshape(1, 3)/255., marker='o')
ax.scatter(kps_3d[pid, 0], kps_3d[pid, 2], -kps_3d[pid, 1], c = np.array(rgb_dict[parent_joint_name]).reshape(1, 3)/255., marker='o')
</code></pre>
<p>As <code>ax.scatter</code> <a href="https://matplotlib.org/3.3.3/api/_as_gen/matplotlib.axes.Axes.scatter.html" rel="nofollow noreferrer">docs</a> show that it is used to get <em>a scatter plot of y vs. x with varying marker size and/or color</em>.</p>
<p>Which alternative function I can use to get stable plots?</p>
<p>Looking for some valuable suggestions.</p>
|
<p>Try setting the plot limits with these lines of code:</p>
<pre><code>ax.set_xlim3d(-0.1, 0.1)
ax.set_ylim3d(-0.1, 0.1)
ax.set_zlim3d(-0.1, 0.1)
</code></pre>
<p>Adjust the numbers as needed. You may also need to adjust viewing angle:</p>
<pre><code>ax.view_init(elev=28., azim=55)
</code></pre>
|
python|matplotlib
| 1 |
1,905,419 | 64,775,577 |
Python 3.9 check if input is INT or FLOAT
|
<p>I am gradually learning Python, using version 3.9.</p>
<p>I want to check if an <code>input()</code> is an INT or FLOAT. I have the following script, but whatever I enter, the first IF always runs.</p>
<pre><code>i = input("Please enter a value: ")
if not isinstance(i, int) or not isinstance(i, float):
print("THis is NOT an Integer or FLOAT")
elif isinstance(i, int) or isinstance(i, float):
print("THis is an Integer or FLOAT")
</code></pre>
<p>Could someone explain what I am doing wrong, please?</p>
|
<p><code>input()</code> always returns a <code>str</code>, so <code>isinstance(i, int)</code> and <code>isinstance(i, float)</code> are always <code>False</code>, which is why the first branch always runs. Instead, get the input and convert it to <code>float</code>; if that raises <code>ValueError</code>, then it's not a float/int. Then use the <a href="https://docs.python.org/3/library/stdtypes.html#float.is_integer" rel="nofollow noreferrer"><code>is_integer()</code></a> method of <code>float</code> to determine whether it's an int or a float.</p>
<p>Test cases</p>
<pre><code>-22.0 -> INT
22.1 -> FLOAT
ab -> STRING
-22 -> INT
</code></pre>
<p>Code:</p>
<pre><code>i = input("Please enter a value: ")
try:
if float(i).is_integer():
print("integer")
else:
print("float")
except ValueError:
print("not integer or float")
</code></pre>
|
python|python-3.x
| 4 |
1,905,420 | 61,553,486 |
WSGIServer backend with React - Handling long running script
|
<p>My Python WSGI server calls a separate python script that can take ~5 minutes to process. What is the best way for me to handle this? At the moment, the web browser waits and returns once the script is successful. </p>
<p>I'm having trouble finding examples of how to accept the request, run a long script, and then tell React 5 minutes later, "hey it's done!" - instead of having chrome stalled in a loading state.</p>
<p>Thanks for your advice</p>
|
<p>You can use <code>rq</code> or <code>celery</code> as a worker. <code>rq</code> is pretty simple and minimal, whereas <code>celery</code> is full-featured but a bit overwhelming. Here is how it will work:</p>
<pre><code>def execute_background_task(param):
    ...  # your long-running task

def request(param):
    # enqueue instead of running inline, e.g. with rq:
    #   queue.enqueue(execute_background_task, param)
    # or with celery:
    #   execute_background_task.delay(param)
    return "Your request is being processed"
</code></pre>
<p>So <code>execute_background_task</code> will be executed by rq or celery in the background. Then, from the frontend, you ask the server: 'Hey rq/celery, did you finish the task?'. So how do you know whether it finished or not?</p>
<p><strong>Solution:</strong> You will get a job ID for each task that is executed in the background. You can query with that ID whether it has finished or not.</p>
<p>I think you should check some wonderful documentation to play with it. Here are some cool links that I followed:</p>
<p><a href="https://realpython.com/flask-by-example-implementing-a-redis-task-queue/" rel="nofollow noreferrer">https://realpython.com/flask-by-example-implementing-a-redis-task-queue/</a></p>
<p><a href="https://flask-rq2.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://flask-rq2.readthedocs.io/en/latest/</a></p>
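<p>The job-ID pattern above can also be sketched in-process with only the standard library (all names here are hypothetical; rq/celery do the same thing across processes with Redis as the broker):</p>

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)
jobs = {}  # job_id -> Future

def submit(task, *args):
    """Start the task in the background and hand a job ID back immediately."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = executor.submit(task, *args)
    return job_id

def status(job_id):
    """What the frontend polls: 'did you finish the task?'"""
    fut = jobs[job_id]
    return {"done": fut.done(), "result": fut.result() if fut.done() else None}

job = submit(lambda x: x * 2, 21)  # returns instantly, even for a 5-minute task
jobs[job].result()                 # demo only: wait here; a real frontend polls status()
print(status(job))                 # {'done': True, 'result': 42}
```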
|
python|reactjs|flask|wsgi
| 0 |
1,905,421 | 57,739,335 |
Simple-Salesforce query error cannot find field
|
<p>I am using Simple-Salesforce to query records via .query_all, but when I include a recently created custom field, I receive the <code>No such column</code> error.</p>
<p>An example of the query that creates the error is below, with <code>Problem_Field__c</code> as a stand-in for my field.</p>
<pre><code>s.query_all('SELECT ID, Name, Problem_Field__c FROM Custom_Object___c')
</code></pre>
<p>I have already reviewed the field-level security of this field and do have access to it. </p>
<p>As additional information, my login to the sandbox in which I am using this custom field is below:</p>
<pre><code>s = simple_salesforce.Salesforce(username='myUsername.TestDomain',
password='myPassword',
organizationId='mySandboxOrgId',
security_token='',
domain='test')
</code></pre>
<p>The problem field is a lookup field to the <code>Contact</code> object.</p>
|
<p>A lookup is a relationship between two objects. When you use a relationship in a query and the query is navigating the relationship in a child-to-parent direction (Contact = parent, your custom object = child), you must use the relationship names. Custom relationships are named with an <code>__r</code> rather than <code>__c</code> suffix (<a href="https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_relationships_and_custom_objects.htm" rel="nofollow noreferrer">docs</a>). The relationship name is typically the same as the API name of the lookup on Lookup definition screen but with suffix replaced. Your query should be</p>
<pre><code>s.query_all('SELECT ID, Name, MyRelationship__r.Some_Contact_Field FROM Custom_Object___c')
</code></pre>
<p>To know the relationship name for sure, you can <a href="https://help.salesforce.com/articleView?id=000315335&type=1&mode=1" rel="nofollow noreferrer">take a look at the object schema</a>.</p>
|
python|python-3.x|salesforce|soql|simple-salesforce
| 0 |
1,905,422 | 57,790,962 |
Automatically detecting clusters in a 2d array/heatmap
|
<p>I have run object detection on a video file and summed the seconds each pixel is activated to find the amount of time an object is shown in this area which gives me a 2d array of time values. Since these objects are in the same position of the video most of the time it leads to some areas of the screen having much higher activation than others. Now I would like to find a way to automatically detect "clusters" without knowing the number of clusters beforehand. I have considered using something like k-means but also read a little about finding local maximums, but I can't quite figure out how to put all this together or which method is the best to go with. Also, the objects vary in size, so I'm not sure I can go with the local maximum method?</p>
<p>The final result would be a list of ids and maximum time value for each cluster.</p>
<pre><code>[[3, 3, 3, 0, 0, 0, 0, 0, 0]
[3, 3, 3, 0, 0, 0, 2, 2, 2]
[3, 3, 3, 0, 0, 0, 2, 2, 2]
[0, 0, 0, 0, 0, 0, 2, 2, 2]]
</code></pre>
<p>From this example array I would end out with a list:</p>
<pre><code>id | Seconds
1 | 3
2 | 2
</code></pre>
<p>I haven't tried much since I have no clue where to start, and any recommendations of methods with code examples, or links to where I can find them, would be greatly appreciated! :)</p>
|
<p>You could look at different methods for clustering in: <a href="https://scikit-learn.org/stable/modules/clustering.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/clustering.html</a></p>
<p>If you do not know the number of clusters beforehand you might want to use a different algorithm than K-means (one which is not dependent on the number of clusters). I would suggest reading about dbscan and hdbscan for this task. Good luck :)</p>
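<p>If the activated areas are contiguous blobs of non-zero pixels (as in the example array), you may not even need a clustering library: a plain connected-component pass (a BFS flood fill with 4-connectivity) already yields an id and the maximum time value per cluster. This is a stand-alone sketch of that alternative, not dbscan itself:</p>

```python
from collections import deque

def find_clusters(grid):
    """Label connected non-zero regions; return {cluster_id: max_value}."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    result = {}
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                peak, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:  # flood-fill one cluster, tracking its max value
                    y, x = queue.popleft()
                    peak = max(peak, grid[y][x])
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                result[len(result) + 1] = peak
    return result

grid = [[3, 3, 3, 0, 0, 0, 0, 0, 0],
        [3, 3, 3, 0, 0, 0, 2, 2, 2],
        [3, 3, 3, 0, 0, 0, 2, 2, 2],
        [0, 0, 0, 0, 0, 0, 2, 2, 2]]
print(find_clusters(grid))  # {1: 3, 2: 2}
```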
|
python|machine-learning|computer-vision
| 1 |
1,905,423 | 57,958,039 |
I have 2 pieces of code in Python that find prime numbers. Why does one produce results much faster than the other?
|
<p>I have written a program to find prime numbers, using as many techniques as I can. However, when searching the Internet (on GeekforGeek), I found one that is quite similar to mine (same algorithm ideas) but produces the same result much faster. I wonder what the difference is between the two.</p>
<p>We both reduce the work by 1) only checking odd numbers, 2) using only odd divisors, and 3) only letting the divisor go up to the square root of the number.</p>
<pre><code>#my code
import time
import math
start = time.time()
upperlimit = 1000000
counter = 1
number = 3
while number<upperlimit: #loop to check number
shittyvalue = 0
division = 3
while math.sqrt(number) >= division: # conditional loop
if number % division == 0:
shittyvalue = 1 #for giving the annoucement on whether this number is a prime
break
division = division + 2
if shittyvalue == 0:
counter = counter + 1
number = number + 2
print ("There are ",counter, " prime numbers")
end = time.time()
print ("Found in ",end-start, " seconds")
</code></pre>
<pre><code>#GeekforGeek code's
# Python Program to find prime numbers in a range
import math
import time
def is_prime(n):
if n <= 1:
return False
if n == 2:
return True
if n > 2 and n % 2 == 0:
return False
max_div = math.floor(math.sqrt(n))
for i in range(3, 1 + max_div, 2):
if n % i == 0:
return False
return True
# Driver function
t0 = time.time()
c = 0 #for counting
for n in range(1,1000000):
x = is_prime(n)
c += x
print("Total prime numbers in range :", c)
t1 = time.time()
print("Time required :", t1 - t0)
</code></pre>
<p>The result shown:</p>
<p>Mine: There are 78498 prime numbers
Found in 17.29092025756836 seconds</p>
<p>GeekforGeek's: Total prime numbers in range : 78498
Time required : 3.9572863578796387</p>
|
<p>You can take <code>math.sqrt(number)</code> out of the while loop. It is a heavy operation when n is large.</p>
<p>Also, for loops are faster than while loops in Python:
<a href="https://stackoverflow.com/questions/869229/why-is-looping-over-range-in-python-faster-than-using-a-while-loop">Why is looping over range() in Python faster than using a while loop?</a></p>
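<p>A sketch of the first point: computing the bound once per candidate with <code>math.isqrt</code> (available since Python 3.8) instead of calling <code>math.sqrt</code> on every pass through the inner loop:</p>

```python
import math

def count_primes(limit):
    """Count primes below `limit` (limit > 2) by odd-divisor trial division."""
    count = 1  # 2 is prime
    for n in range(3, limit, 2):
        max_div = math.isqrt(n)  # hoisted: computed once per n, not per division
        is_prime = True
        d = 3
        while d <= max_div:
            if n % d == 0:
                is_prime = False
                break
            d += 2
        if is_prime:
            count += 1
    return count

print(count_primes(100))   # 25
print(count_primes(1000))  # 168
```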
|
python|performance|primes
| 2 |
1,905,424 | 71,486,678 |
How do I extract data from a dynamically populated site?
|
<p>I want to extract domain data from <a href="https://www.whois.com/whois/" rel="nofollow noreferrer">https://www.whois.com/whois/</a>. Using this site, for example, to get information for the domain named tinymail.com I want to use <a href="https://www.whois.com/whois/tinymail.com" rel="nofollow noreferrer">https://www.whois.com/whois/tinymail.com</a>. If I open it in a browser first, then soup gives credible data; otherwise no domain data is received (I guess it is something like the site putting data in a cache). I do not want to use the selenium method (as it will increase the time required). I have tried inspecting the networking option in inspect element but saw only two updates, none of them showing any data.</p>
|
<p>You can use <code>requests</code> to get the data:</p>
<p>This retrieves data from the website in the question.</p>
<pre class="lang-py prettyprint-override"><code>import requests
url = 'https://www.whois.com/whois/'
r = requests.get(url)
if r.status_code==200:
# page works
print(r.text)
else:
print('no website')
</code></pre>
<p>Here is a link for more: <a href="https://docs.python-requests.org/en/latest/" rel="nofollow noreferrer">https://docs.python-requests.org/en/latest/</a></p>
<p>Also, you can sign up for an <code>API</code> key to get specific data. This might be free for limited data requests.</p>
|
python|php|python-3.x|web-scraping
| 0 |
1,905,425 | 69,398,683 |
Extract street network from a raster image
|
<p>I have a 512x512 image of a street grid:</p>
<p><a href="https://i.stack.imgur.com/1mYBD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1mYBD.png" alt="Street grid" /></a></p>
<p>I'd like to extract polylines for each of the streets in this image (large blue dots = intersections, small blue dots = points along polylines):</p>
<p><a href="https://i.stack.imgur.com/5k3nD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5k3nD.png" alt="Street grid with polylines" /></a></p>
<p>I've tried a few techniques! One idea was to start with <a href="https://scikit-image.org/docs/dev/api/skimage.morphology.html#skeletonize" rel="nofollow noreferrer"><code>skeletonize</code></a> to compress the streets down to 1px wide lines:</p>
<pre class="lang-py prettyprint-override"><code>from skimage import morphology
morphology.skeletonize(streets_data))
</code></pre>
<p><a href="https://i.stack.imgur.com/m8PfI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m8PfI.png" alt="Skeletonized streets" /></a></p>
<p>Unfortunately this has some gaps that break the connectivity of the street network; I'm not entirely sure why, but my guess is that this is because some of the streets are 1px narrower in some places and 1px wider in others. <em><strong>(update: the gaps aren't real; they're entirely artifacts of how I was displaying the skeleton. See <a href="https://stackoverflow.com/questions/8056458/display-image-with-a-zoom-1-with-matplotlib-imshow-how-to/69453359#comment122759502_8057182">this comment</a> for the sad tale. The skeleton is well-connected.)</strong></em></p>
<p>I can patch these using a <code>binary_dilation</code>, at the cost of making the streets somewhat variable width again:</p>
<pre class="lang-py prettyprint-override"><code>out = morphology.skeletonize(streets_data)
out = morphology.binary_dilation(out, morphology.selem.disk(1))
</code></pre>
<p><a href="https://i.stack.imgur.com/mDch0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mDch0.png" alt="re-connected streets" /></a></p>
<p>With a re-connected grid, I can run the Hough transform to find line segments:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi / 180 # angular resolution in radians of the Hough grid
threshold = 8 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 10 # minimum number of pixels making up a line
max_line_gap = 2 # maximum gap in pixels between connectable line segments
# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(
out, rho, theta, threshold, np.array([]),
min_line_length, max_line_gap
)
line_image = streets_data.copy()
for i, line in enumerate(lines):
for x1,y1,x2,y2 in line:
cv2.line(line_image,(x1,y1),(x2,y2), 2, 1)
</code></pre>
<p>This produces a whole jumble of overlapping line segments, along with some gaps (look at the T intersection on the right side):</p>
<p><a href="https://i.stack.imgur.com/gxldE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gxldE.png" alt="Results of Hough" /></a></p>
<p>At this point I could try to de-dupe overlapping line segments, but it's not really clear to me that this is a path towards a solution, especially given that gap.</p>
<p>Are there more direct methods available to get at the network of polylines I'm looking for? In particular, what are some methods for:</p>
<ol>
<li>Finding the intersections (both four-way and T intersections).</li>
<li>Shrinking the streets to all be 1px wide, allowing that there may be some variable width.</li>
<li>Finding the polylines between intersections.</li>
</ol>
|
<p>If you want to improve your "skeletonization", you could try the following algorithm to obtain the "1-px wide streets":</p>
<pre class="lang-py prettyprint-override"><code>import imageio
import numpy as np
from matplotlib import pyplot as plt
from scipy.ndimage import distance_transform_edt
from skimage.segmentation import watershed
# read image
image_rgb = imageio.imread('1mYBD.png')
# convert to binary
image_bin = np.max(image_rgb, axis=2) > 0
# compute the distance transform (only > 0)
distance = distance_transform_edt(image_bin)
# segment the image into "cells" (i.e. the reciprocal of the network)
cells = watershed(distance)
# compute the image gradients
grad_v = np.pad(cells[1:, :] - cells[:-1, :], ((0, 1), (0, 0)))
grad_h = np.pad(cells[:, 1:] - cells[:, :-1], ((0, 0), (0, 1)))
# given that the cells have a constant value,
# only the edges will have non-zero gradient
edges = (abs(grad_v) > 0) + (abs(grad_h) > 0)
# extract points into (x, y) coordinate pairs
pos_v, pos_h = np.nonzero(edges)
# display points on top of image
plt.imshow(image_bin, cmap='gray_r')
plt.scatter(pos_h, pos_v, 1, np.arange(pos_h.size), cmap='Spectral')
</code></pre>
<p><a href="https://i.stack.imgur.com/11Wqn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/11Wqn.png" alt="output" /></a></p>
<p>The algorithm works on the "blocks" rather than the "streets", take a look into the <code>cells</code> image:</p>
<p><a href="https://i.stack.imgur.com/RYAZs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RYAZs.png" alt="cells" /></a></p>
|
python|opencv|scikit-image|hough-transform
| 4 |
1,905,426 | 69,479,205 |
OSError: [Errno 98] Address already in use with Gunicorn (Dash app deployment)
|
<p>I am creating a simple Dash app with gunicorn and nginx.</p>
<p>I am deploying it with docker-compose; one container for Dash + Gunicorn and one container for Nginx.</p>
<p>My project has the following structure:</p>
<pre><code>.
├── app_rwg
│   ├── src (with some sub packages)
│   ├── Dockerfile
│   ├── setup.py
│   ├── requirements.txt
│   └── run.py
├── nginx
│   ├── conf
│   └── Dockerfile
└── docker-compose.yml
</code></pre>
<p>However, I am getting the following errors:</p>
<pre><code>rwg_app_1 | [2021-10-07 09:39:09 +0000] [13] [ERROR] Exception in worker process
rwg_app_1 | Traceback (most recent call last):
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
rwg_app_1 | worker.init_process()
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 134, in init_process
rwg_app_1 | self.load_wsgi()
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
rwg_app_1 | self.wsgi = self.app.wsgi()
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
rwg_app_1 | self.callable = self.load()
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
rwg_app_1 | return self.load_wsgiapp()
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
rwg_app_1 | return util.import_app(self.app_uri)
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 359, in import_app
rwg_app_1 | mod = importlib.import_module(module)
rwg_app_1 | File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
rwg_app_1 | return _bootstrap._gcd_import(name[level:], package, level)
rwg_app_1 | File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
rwg_app_1 | File "<frozen importlib._bootstrap>", line 983, in _find_and_load
rwg_app_1 | File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
rwg_app_1 | File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
rwg_app_1 | File "<frozen importlib._bootstrap_external>", line 728, in exec_module
rwg_app_1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
rwg_app_1 | File "/code/run.py", line 7, in <module>
rwg_app_1 | application.run_server()
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/dash/dash.py", line 2033, in run_server
rwg_app_1 | self.server.run(host=host, port=port, debug=debug, **flask_run_options)
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 920, in run
rwg_app_1 | run_simple(t.cast(str, host), port, self, **options)
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/werkzeug/serving.py", line 1010, in run_simple
rwg_app_1 | inner()
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/werkzeug/serving.py", line 959, in inner
rwg_app_1 | fd=fd,
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/werkzeug/serving.py", line 783, in make_server
rwg_app_1 | host, port, app, request_handler, passthrough_errors, ssl_context, fd=fd
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/werkzeug/serving.py", line 688, in __init__
rwg_app_1 | super().__init__(server_address, handler) # type: ignore
rwg_app_1 | File "/usr/local/lib/python3.7/socketserver.py", line 452, in __init__
rwg_app_1 | self.server_bind()
rwg_app_1 | File "/usr/local/lib/python3.7/http/server.py", line 137, in server_bind
rwg_app_1 | socketserver.TCPServer.server_bind(self)
rwg_app_1 | File "/usr/local/lib/python3.7/socketserver.py", line 466, in server_bind
rwg_app_1 | self.socket.bind(self.server_address)
rwg_app_1 | OSError: [Errno 98] Address already in use
rwg_app_1 | [2021-10-07 09:39:09 +0000] [13] [INFO] Worker exiting (pid: 13)
rwg_app_1 | Traceback (most recent call last):
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 209, in run
rwg_app_1 | self.sleep()
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 357, in sleep
rwg_app_1 | ready = select.select([self.PIPE[0]], [], [], 1.0)
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
rwg_app_1 | self.reap_workers()
rwg_app_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 525, in reap_workers
rwg_app_1 | raise HaltServer(reason, self.WORKER_BOOT_ERROR)
rwg_app_1 | gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
</code></pre>
<p>My <code>run.py</code>:</p>
<pre><code>from rwg_app.dash_app import create_app
application = create_app()
application.run_server()
</code></pre>
<p>My Dockerfile for Dash + Gunicorn:</p>
<pre><code>FROM python:3.7
RUN mkdir /code
WORKDIR /code
COPY .. .
RUN pip install -r requirements.txt
RUN pip install . --use-feature=in-tree-build
EXPOSE 8050
CMD gunicorn --bind 0.0.0.0:8050 -w 3 run:application --log-file -
</code></pre>
<p>My <code>conf</code> for Nginx:</p>
<pre><code>server {
listen 80;
server_name localhost;
location / {
proxy_pass http://rwg_app:8050;
}
}
</code></pre>
<p>The Dockerfile within my Nginx folder:</p>
<pre><code>FROM nginx:1.19.2-alpine
COPY conf /etc/nginx/conf.d/default.conf
</code></pre>
<p>My final docker-compose file:</p>
<pre><code>version: "3.7"
services:
rwg_app:
build: app_rwg
restart: always
ports:
- 8050:8050
networks:
- rwg_network
nginx:
build: nginx
restart: always
ports:
- 80:80
networks:
- rwg_network
depends_on:
- rwg_app
networks:
rwg_network:
</code></pre>
<hr />
<p>I already tried:
<code>kill -9 $(ps -A | grep python | awk '{print $1}')</code>
But without success...</p>
<p>I am out of options. Please help :D</p>
|
<p>So, before starting your Docker app, have you tried using <code>netstat -a</code> to check if there is anything running on that port? See more command arguments <a href="https://openport.net/netstat/" rel="nofollow noreferrer">here</a> or just run <code>man netstat</code>. Hope this will help you find what's running.</p>
<p>If there's nothing running on that port, you should review your configs to see whether the Docker compose up will attempt to start 2 processes on the same port (or start something twice).</p>
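<p>The same port check can also be done from Python (handy inside a container, where <code>netstat</code> may not be installed). A minimal sketch that probes a port by trying to bind it; the host/port values are just examples:</p>

```python
import socket

def port_in_use(port, host="0.0.0.0"):
    """Return True if something is already bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
        except OSError:
            # EADDRINUSE (or a permission error): treat the port as taken
            return True
        return False
```

<p>For example, <code>port_in_use(8050)</code> before starting Gunicorn tells you whether another process already holds the app's port.</p>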
|
python|docker|nginx|flask|gunicorn
| 0 |
1,905,427 | 55,457,277 |
Select activity from a specific date using SQL
|
<p>I want to look up the number of questions asked on a specific day in the Stack Overflow question-and-answer dataset.
How many questions were asked on 2018-11-11?</p>
<pre><code>how = """SELECT
EXTRACT(DAY FROM DATE '2018-11-11') AS Day,
EXTRACT(MONTH FROM DATE '2018-11-11') AS Month,
EXTRACT(YEAR FROM DATE '2018-11-11') AS Year,
COUNT(*) AS Number_of_Questions,
ROUND(100 * SUM(IF(answer_count > 0, 1, 0)) / COUNT(*), 1) AS Percent_Questions_with_Answers
FROM
`bigquery-public-data.stackoverflow.posts_questions`
GROUP BY
Day
HAVING
Day > 0 AND day < 12
ORDER BY
Day;
"""
how = stackOverflow.query_to_pandas_safe(how)
how.head(12)
</code></pre>
<p>The code I use retrieves all questions asked in the whole dataset instead of only those from the date I have selected. If I try to filter with @@ I get an error.</p>
|
<p>Wouldn't the query look like this?</p>
<pre><code>SELECT COUNT(*) AS Number_of_Questions
FROM `bigquery-public-data.stackoverflow.posts_questions`
WHERE DATE = DATE('2018-11-11');
</code></pre>
<p>EDIT:</p>
<p>I see this is a public data set. Assuming you mean the creation date, then:</p>
<pre><code>SELECT count(*)
FROM `bigquery-public-data.stackoverflow.posts_questions` pq
WHERE creation_date >= TIMESTAMP('2018-11-11') and
creation_date < TIMESTAMP('2018-11-12') ;
</code></pre>
<p>This code is tested and works when I run it.</p>
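<p>The half-open interval pattern (<code>&gt;= start</code> and <code>&lt; next day</code>) used above is worth keeping in mind because it catches every timestamp on the day, including times after midnight. A local sketch of the same idea with pandas on made-up timestamps:</p>

```python
import pandas as pd

# Hypothetical stand-in for the creation_date column of posts_questions
df = pd.DataFrame({"creation_date": pd.to_datetime(
    ["2018-11-10 23:59:00", "2018-11-11 00:00:00",
     "2018-11-11 13:30:00", "2018-11-12 00:00:00"])})

start = pd.Timestamp("2018-11-11")
end = start + pd.Timedelta(days=1)
# keep rows with start <= creation_date < end
on_that_day = df[(df["creation_date"] >= start) & (df["creation_date"] < end)]
```

<p>Only the two rows stamped on 2018-11-11 survive; midnight of the next day is excluded.</p>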
|
python|sql|filter|google-bigquery|jupyter-notebook
| 4 |
1,905,428 | 55,431,038 |
Reshape data in python
|
<p>The dataset looks like this:-</p>
<pre><code>Source Jan_values Feb_values Mar_values
ABC 100 200 300
XYZ 200 300 400
</code></pre>
<p>i want to reshape the dataset which should look like this:</p>
<pre><code>Source Month values
ABC Jan 100
ABC Feb 200
ABC Mar 300
XYZ Jan 200
XYZ Feb 300
XYZ Mar 400
df = df.stack()
</code></pre>
|
<p>Use <code>df.melt</code> and sort the values by column <code>source</code> and <code>Month</code> columns</p>
<pre><code>df = pd.DataFrame({'source':['ABC','XYZ'], 'Jan_values':[100,200], 'Feb_values':[200,300], 'Mar_values':[300,400]})
df.columns = [c.replace("_values","") for c in df.columns]
df = df.melt(id_vars=['source'], var_name='Month')
# to sort by month namea
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
"Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
df['Month'] = pd.Categorical(df['Month'], categories=months, ordered=True)
print(df.sort_values(by=['source','Month']))
</code></pre>
<p>Output:</p>
<pre><code> source Month value
2 ABC Jan 100
0 ABC Feb 200
4 ABC Mar 300
3 XYZ Jan 200
1 XYZ Feb 300
5 XYZ Mar 400
</code></pre>
|
python|python-3.x|pandas
| 3 |
1,905,429 | 55,565,396 |
Question about df conditional selection using column.isnull() & column.str.len() > n
|
<p>I'm wondering why the below conditional selection isn't working. I would expect indices 0 and 3 to be selected but this is returning nothing. Wondering if I'm missing something obvious. </p>
<pre><code>In [5]: a = {'A':['this', 'is', 'an', 'example'], 'B':[None, None, None, None],
...: 'C':['some', 'more', 'example', 'data']}
In [6]: df = pd.DataFrame(a)
In [7]: df
Out[7]:
A B C
0 this None some
1 is None more
2 an None example
3 example None data
</code></pre>
<p>This returns 2 rows:</p>
<pre><code>In [8]: df.loc[(df['A'].str.len() > 3)]
Out[8]:
A B C
0 this None some
3 example None data
</code></pre>
<p>And this returns all rows:</p>
<pre><code>In [9]: df.loc[(df['B'].isnull())]
Out[9]:
A B C
0 this None some
1 is None more
2 an None example
3 example None data
</code></pre>
<p>So I would expect this to return indices 0 and 3 but it doesn't return any rows</p>
<pre><code>In [10]: df.loc[(df['B'].isnull() & df['A'].str.len() > 3)]
Out[10]:
Empty DataFrame
Columns: [A, B, C]
Index: []
</code></pre>
<p>Any help would be appreciated. </p>
<p>Thanks!</p>
|
<p>You need to use separate brackets:</p>
<pre><code>df.loc[(df['B'].isnull()) & (df['A'].str.len() > 3)]
A B C
0 this None some
3 example None data
</code></pre>
<p>This is due to the <a href="https://docs.python.org/3/reference/expressions.html#operator-precedence" rel="nofollow noreferrer">Operator precedence</a>. In your code, <code>df['B'].isnull() & df['A'].str.len()</code> gets evaluated first, yielding:</p>
<pre><code>0 False
1 False
2 False
3 True
dtype: bool
</code></pre>
<p>Then the remaining comparison <code>>3</code> is applied, yielding:</p>
<pre><code>0 False
1 False
2 False
3 False
dtype: bool
</code></pre>
<p>And thus the original line returns no rows, instead of desired indices.</p>
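<p>The same precedence effect can be reproduced with plain integers, which makes it easy to see why the parentheses matter:</p>

```python
# & binds tighter than comparison operators, so without parentheses
# the & runs first and the comparison is applied to its result.
unparenthesized = 1 & 2 > 0    # parsed as (1 & 2) > 0  ->  0 > 0  ->  False
parenthesized = 1 & (2 > 0)    # 1 & True  ->  1
```

<p>With pandas Series the operands happen to be boolean masks, but the parsing rule is identical, which is why each condition needs its own parentheses.</p>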
|
python|pandas
| 1 |
1,905,430 | 57,523,362 |
Alternatives of deprecated scipy imresize() function?
|
<p>I used to use the scipy resize function in order to downscale an image. But as this function is deprecated in the newest version of scipy, I am looking for an alternative. PIL seems to be promising, but how can I use that for 3d images? (600,800,3) to (300,400,3)</p>
<p>I looked into numpy.resize and skimage but, especially for skimage, I am not sure whether it works exactly like scipy's imresize() did.</p>
|
<p><a href="https://stackoverflow.com/questions/31639813/resizing-rgb-image-with-cv2-numpy-and-python-2-7">Here</a> is one way of resizing colour images with OpenCV.</p>
<pre><code>import numpy as np
import cv2
image = cv2.imread('image.png')
cv2.imshow("Original", image)
"""
The ratio is r. The new image will
have a height of 50 pixels. To determine the ratio of the new
height to the old height, we divide 50 by the old height.
"""
r = 50.0 / image.shape[0]
dim = (int(image.shape[1] * r), 50)
resized = cv2.resize(image, dim, interpolation = cv2.INTER_AREA)
cv2.imshow("Resized (Height) ", resized)
cv2.waitKey(0)
</code></pre>
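<p>If you only need an integer-factor downscale and want to avoid an extra dependency, a plain NumPy block average also turns the question's (600, 800, 3) image into (300, 400, 3). A minimal sketch:</p>

```python
import numpy as np

def block_downscale2(img):
    """2x box downsample of an (H, W, C) image: average each 2x2 block per channel."""
    h, w, c = img.shape
    img = img[: h // 2 * 2, : w // 2 * 2]  # drop an odd trailing row/column if any
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

small = block_downscale2(np.zeros((600, 800, 3)))
```

<p>This only handles whole-number factors; for arbitrary target sizes, stick with <code>cv2.resize</code> or PIL's <code>Image.resize</code>.</p>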
|
python|image|resize
| 1 |
1,905,431 | 42,203,062 |
Scrapy request.meta is updated incorrectly
|
<p>I'm trying to log crawled paths to the <code>meta</code> attribute:</p>
<pre><code>import scrapy
from scrapy.linkextractors import LinkExtractor
class ExampleSpider(scrapy.Spider):
name = "example"
allowed_domains = ["www.iana.org"]
start_urls = ['http://www.iana.org/']
request_path_css = dict(
main_menu = r'#home-panel-domains > h2',
domain_names = r'#main_right > p',
)
def links(self, response, restrict_css=None):
lex = LinkExtractor(
allow_domains=self.allowed_domains,
restrict_css=restrict_css)
return lex.extract_links(response)
def requests(self, response, css, cb, append=True):
links = [link for link in self.links(response, css)]
for link in links:
request = scrapy.Request(
url=link.url,
callback=cb)
if append:
request.meta['req_path'] = response.meta['req_path']
request.meta['req_path'].append(dict(txt=link.text, url=link.url))
else:
request.meta['req_path'] = [dict(txt=link.text, url=link.url)]
yield request
def parse(self, response):
#self.logger.warn('## Request path: %s', response.meta['req_path'])
css = self.request_path_css['main_menu']
return self.requests(response, css, self.domain_names, False)
def domain_names(self, response):
#self.logger.warn('## Request path: %s', response.meta['req_path'])
css = self.request_path_css['domain_names']
return self.requests(response, css, self.domain_names_parser)
def domain_names_parser(self, response):
self.logger.warn('## Request path: %s', response.meta['req_path'])
</code></pre>
<p>Output:</p>
<pre><code>$ scrapy crawl -L WARN example
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
2017-02-13 11:06:38 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
</code></pre>
<p>This is not what I expected, as I would like to have just the last url in <code>response.meta['req_path'][1]</code>, however <em>all urls from the last page</em> somehow find their way to the list.</p>
<p>In other words, the expected output is such as:</p>
<pre><code>[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}]
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}]
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}]
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}]
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}]
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
</code></pre>
|
<p>After your second request, when you parse <a href="http://www.iana.org/domains" rel="nofollow noreferrer">http://www.iana.org/domains</a> and call <code>self.requests()</code> with <code>append=True</code> (because it's the default), this line:</p>
<pre><code>request.meta['req_path'] = response.meta['req_path']
</code></pre>
<p>does <em>not copy</em> the list. Instead, it gets <em>a reference to the original list</em>. You then append (to the original list!) with the next line:</p>
<pre><code>request.meta['req_path'].append(dict(txt=link.text, url=link.url))
</code></pre>
<p>On the next loop iteration, you again get a reference to the very same original list (that now already has two entries), and append to it again, and so forth.</p>
<p>What you want to do is create a new list for every request. You can do this for example by adding <code>.copy()</code> to the first line:</p>
<pre><code>request.meta['req_path'] = response.meta['req_path'].copy()
</code></pre>
<p>or you could save a line by doing this:</p>
<pre><code>request.meta['req_path'] = response.meta['req_path'] + [dict(txt=link.text, url=link.url)]
</code></pre>
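<p>The aliasing behaviour behind this bug is easy to demonstrate with a bare list, independent of Scrapy:</p>

```python
path = [{"txt": "Domain Names", "url": "http://www.iana.org/domains"}]

alias = path                      # a second reference to the SAME list object
alias.append({"txt": "The DNS Root Zone"})
# path has grown too, because alias and path are one list

fresh = path.copy()               # an independent shallow copy
fresh.append({"txt": ".INT"})
# path is unaffected by appends to fresh
```

<p>This is exactly why every request needs its own copy of <code>req_path</code> rather than a shared reference.</p>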
|
python|web-scraping|scrapy|scrapy-spider
| 1 |
1,905,432 | 59,107,765 |
Pandas groupby on empty DataFrame results in no columns
|
<p>I have an empty DataFrame with columns but no rows.
I want to apply groupby on it but it results in no columns.
How can I apply groupby and keep the columns?</p>
<pre><code>df = pd.DataFrame(data={'a':[], 'b': []})
df = df.groupby('a').apply(lambda g: g).reset_index(drop=True)
</code></pre>
<p>output:</p>
<pre><code>Empty DataFrame
Columns: []
Index: []
</code></pre>
<p>df.index</p>
<pre><code>Float64Index([], dtype='float64', name='a')
</code></pre>
|
<p>For pandas <strong>1.1.0</strong> please use</p>
<pre><code>df.groupby('a', as_index=False).aggregate(lambda g: g)
</code></pre>
<p>output:</p>
<pre><code>Empty DataFrame
Columns: [a, b]
Index: []
</code></pre>
|
python|pandas|apply
| 1 |
1,905,433 | 53,924,741 |
BeautifulSoup: Parse JavaScript dynamic content
|
<p>I am developing a python web <strong>scraper</strong> with BeautifulSoup that parses "product listings" from <a href="https://ligamagic.com.br/?view=cards%2Fsearch&card=Hapatra%2C+Vizier+of+Poisons" rel="nofollow noreferrer">this</a> website and extracts some information for each product listing (i.e., price, vendor, etc.).
I am able to extract most of this information except one piece (i.e., the product quantity), which seems to be <em>hidden</em> from the <em>raw</em> HTML. Looking at the webpage through my browser, what I see is (unid = units):</p>
<pre><code>product_name 1 unid $10.00
</code></pre>
<p>but the html for that doesn't show any integer value that I can extract. It shows this html text:</p>
<pre><code><div class="e-col5 e-col5-offmktplace ">
<div class="kWlJn zYaQqZ gQvJw">&nbsp;</div>
<div class="imgnum-unid"> unid</div>
</div>
</code></pre>
<p><strong>My question is</strong> how do I get this <em>hidden</em> content of <code>e-col5</code> which stores the product quantity?</p>
<pre><code>import re
import requests
from bs4 import BeautifulSoup
page = requests.get("https://ligamagic.com.br/?view=cards%2Fsearch&card=Hapatra%2C+Vizier+of+Poisons")
soup = BeautifulSoup(page.content, 'html.parser')
vendor = soup.find_all('div', class_="estoque-linha", mp="2")
print(vendor[1].find(class_='e-col1').find('img')['title'])
print(vendor[1].find(class_='e-col2').find_all(class_='ed-simb')[1].string)
print(vendor[1].find(class_='e-col5'))
</code></pre>
<p><strong>EDIT:</strong> <em>Hidden content</em> stands for JavaScript dynamically updated content in this case.</p>
|
<p>The <code>unid</code> is saved in a JS array:</p>
<pre><code>vetFiltro[0]=["e3724364",0,1,....];
</code></pre>
<p>The <code>1</code> is the unid; you can extract it with a regex:</p>
<pre><code># e-col5
unitID = vendor[1].get('id').replace('line_', '') # line_e3724364 => e3724364
regEx = r'"%s",\d,(\d+)' % unitID
unit = re.search(regEx, page.text).group(1)
print(unit + ' unids')
</code></pre>
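<p>A self-contained version of the regex step, with the JS snippet hard-coded the way the answer assumes it appears in the page source (the trailing values in the array are invented for illustration):</p>

```python
import re

# Assumed shape of the JS array in the page; not fetched from the live site
page_text = 'vetFiltro[0]=["e3724364",0,1,2979];'
unit_id = "e3724364"

match = re.search(r'"%s",\d,(\d+)' % unit_id, page_text)
unit = match.group(1)
```

<p>The capture group grabs the digits after the second comma, which is where the quantity sits in the array.</p>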
|
javascript|python|beautifulsoup
| 2 |
1,905,434 | 58,333,980 |
Copying installed packages to new Linux distribution
|
<p>I have been using Python on a Linux desktop for some time and I have installed many packages through pip (some as superuser and others with --user option). Now I am shifting to a newer Linux distribution installed on another partition of same computer. </p>
<p>Is it possible to copy some folder (where pip/python has all installed packages) from older Linux partition to newer Linux partition? This way I will save not only time but also internet bandwidth (for which there is increasing awareness). Thanks for your help.</p>
|
<p>Technically yes, but depending on the changes you've made and where you're coming from and going to, it will likely take less time to just reinstall them all from a saved package list. Note that the plain <code>pip list</code> table is not a valid requirements file; use <code>pip freeze</code> (or <code>pip list --format=freeze</code>) for anything you intend to feed back to <code>pip install -r</code>:</p>
<pre><code>pip list > piplist          #human-readable table (NOT usable with 'pip install -r')
pip freeze > pipfreeze      #requirements format, with pinned versions
pip install -r pipfreeze    #reinstall the saved list (eg: on the other system)
</code></pre>
<p>If you're shifting to a more recent version of the same distro, you should've just upgraded instead.</p>
<p>If your distro breaks every time you upgrade, you should find another distro.</p>
|
python|linux|package
| 0 |
1,905,435 | 28,774,550 |
Python relative import of an importable module not working
|
<p>I need to use the function MyFormatIO, which is a part of the neo library. I can successfully import neo and neo.io BUT I cannot use the MyFormatIO function. <code>import neo.io</code> doesn't spit out any errors but <code>from neo.io import MyFormatIO</code> returns <code>NameError: name 'MyFormatIO' is not defined</code>. How can this be if MyFormatIO is a part of neo.io? I am running python2.7 on CentOS.</p>
|
<p>MyFormatIO is not a class in neo.io.</p>
<p><a href="http://pythonhosted.org/neo/io.html#module-neo.io" rel="nofollow">http://pythonhosted.org/neo/io.html#module-neo.io</a></p>
<blockquote>
<p>One format = one class</p>
<p>The basic syntax is as follows. If you want to load a file format that
is implemented in a generic MyFormatIO class:</p>
<blockquote>
<blockquote>
<blockquote>
<p>from neo.io import MyFormatIO
reader = MyFormatIO(filename = "myfile.dat")</p>
</blockquote>
</blockquote>
</blockquote>
<p>you can replace MyFormatIO by any implemented class, see List of
implemented formats</p>
</blockquote>
<p>You have to replace 'MyFormatIO' with a class from this list:
<a href="http://pythonhosted.org/neo/io.html#list-of-io" rel="nofollow">http://pythonhosted.org/neo/io.html#list-of-io</a></p>
<p>A quick way to check this kind of thing in the interpreter is with dir.</p>
<pre><code>import neo.io
dir(neo.io)
</code></pre>
<p>Those are the items that you can import or use from neo.io</p>
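<p>The same <code>dir()</code> check works on any module; using a stdlib module here since neo may not be installed everywhere:</p>

```python
import json  # any importable module works for this kind of check

names = dir(json)
# "loads" is a real attribute of json; "MyFormatIO" is not, so
# `from json import MyFormatIO` would fail with an ImportError.
```

<p>Listing the module's attributes first saves you from guessing which names are actually importable.</p>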
|
python|python-2.7|relative-import
| 1 |
1,905,436 | 28,414,960 |
Best way to get user to input items of different types into a list
|
<p>What's the best way in Python to prompt a user to input items to an empty list and ensure the entries are evaluated to correct data types?</p>
<p>For example user enters following mix of <code>int</code>, <code>float</code>, <code>str</code> and <code>list</code> item values:</p>
<pre><code>24
190.45
'steve'
'steve smith'
['4A', '7B']
</code></pre>
<p><code>new_list</code> becomes <code>[24, 190.45, 'steve', 'steve smith', ['4A', '7B']]</code></p>
<p>I have tried two methods, each with significant issues.</p>
<p><strong>Method 1</strong> - asking the user to input a space-delimited line of list items, using <code>eval()</code> to evaluate and store data types correctly, and using <code>str.split()</code> to split the string into component items with <code>' '</code> as the delimiter:</p>
<pre><code># user enters values separated by spaces
input_list = [eval(l) for l in(raw_input('Enter the items for your list separated by spaces: ').split())]
# check by printing individual items with data type and also complete list
for item in input_list:
print item, type(item)
print input_list
</code></pre>
<p>However, I understand using <code>eval()</code> is not good from a security perspective. Also using <code>' '</code> delimiter for the split means I can't enter a string item like <code>'steve smith'</code>. However, I don't want to ask the user to input something ugly like comma delimiters, etc..</p>
<p><strong>Method 2</strong> - using a <code>while</code> loop with <code>break</code>, asking the user to input each list item:</p>
<pre><code>input_list = []
while True:
item = eval(raw_input('Enter new list item (or <Enter> to quit): '))
if item:
input_list.append(item)
else:
break
</code></pre>
<p>Again, this uses <code>eval()</code> which I believe should be avoided. Also hitting <kbd>Enter</kbd> to break throws an EOF parsing error, I'm guessing because <code>eval()</code> can't evaluate it.</p>
<p>Is there a better way to do this?</p>
|
<p>Method two is clearly superior, but needs a slight tweak to avoid the error you're seeing:</p>
<pre><code>from ast import literal_eval
def get_input_list():
input_list = []
while True:
item = raw_input('Enter new list item (or <Enter> to quit): ')
if not item:
break
input_list.append(literal_eval(item))
return input_list
</code></pre>
<p>Note the fact that the input is <em>only evaluated once it's known to be non-empty</em>, and the use of <a href="https://docs.python.org/2/library/ast.html#ast.literal_eval" rel="nofollow"><code>ast.literal_eval</code></a>, which is safer than <code>eval</code>, although more limited:</p>
<blockquote>
<p>The string or node provided may only consist of the following Python literal structures: strings, numbers, tuples, lists, dicts, booleans, and <code>None</code>.</p>
</blockquote>
<p>In use:</p>
<pre><code>>>> get_input_list()
Enter new list item (or <Enter> to quit): 24
Enter new list item (or <Enter> to quit): 190.45
Enter new list item (or <Enter> to quit): 'steve'
Enter new list item (or <Enter> to quit):
[24, 190.45, 'steve']
</code></pre>
<p>You could also add error handling, in case the user inputs a malformed string or something that can't be evaluated (e.g. <code>'foo</code> or <code>bar</code>):</p>
<pre><code>try:
input_list.append(literal_eval(item))
except (SyntaxError, ValueError):
print "Input not understood. Please try again."
</code></pre>
|
python|list|user-input
| 2 |
1,905,437 | 14,677,401 |
Scipy Bodgy curve fit
|
<p>I have simple survey data collected in the field using an uncalibrated compass. Realising the problem in the field, bearings were taken comparing a good compass to the uncalibrated one, and the differences were recorded for 11 bearings. The plot shows the difference being pretty close to a sine function.
I wish to fit a polynomial (degree 3) to this resulting function, to use for correcting the survey data collected with the uncalibrated compass. My curve-fitting program produces a poorly fitting curve. Can anyone see what is wrong?</p>
<pre><code>import numpy as np
import scipy
import pylab
correctCompass=\
np.array([134.4,112.6,069.7,051.1,352.5,314.6,218.3,258.2,237.8,186.5,153.7])
errorCompass=\
np.array([131.6,108.9,065.6,047.0,349.8,314.0,284.6,262.7,243.4,189.8,153.2])
# sort compass values
for i in range(0,11):
for j in range(i+1,11):
if correctCompass[i] > correctCompass[j]:
tmp=correctCompass[j]
correctCompass[j]=correctCompass[i]
correctCompass[i]=tmp
tmp=errorCompass[j]
errorCompass[j]=errorCompass[i]
errorCompass[i]=tmp
diff = correctCompass - errorCompass + 15.0
height=diff.max() + 16.0
polycoeffs = scipy.polyfit(correctCompass, diff, 3)
# fit the data with a polynomial
yfit = scipy.polyval(polycoeffs,correctCompass)
pylab.plot(correctCompass, diff, 'k.')
pylab.plot(correctCompass, yfit, 'r-')
pylab.axis([0,360,-10.0,height])
pylab.show()
</code></pre>
|
<p><code>polyfit</code> works fine; the problem is one negative point in <code>diff</code> that brings down the fit but is not shown in your plot, since you set the lowest value of the y-axis to -10:</p>
<pre><code>diff = array([ 19.1, 19.1, 18.7, 17.8, 15.5, 11.7, -51.3, 9.4, 10.5, 15.6, 17.7])
</code></pre>
<p>If you comment <code>pylab.axis([0,360,-10.0,height])</code> you'll see the "problem"</p>
<p>Besides, you can make your code more readable by replacing the two nested for loops with the following three lines:</p>
<pre><code>sort = np.argsort(correctCompass)
correctCompass = correctCompass[sort]
errorCompass = errorCompass[sort]
</code></pre>
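<p>The <code>argsort</code> replacement keeps the two arrays aligned: one array supplies the sort order, and the same index array reorders the other. A tiny self-contained illustration:</p>

```python
import numpy as np

a = np.array([3.0, 1.0, 2.0])
b = np.array([30.0, 10.0, 20.0])  # paired values that must stay aligned with a

order = np.argsort(a)   # indices that would sort a
a_sorted = a[order]
b_sorted = b[order]     # b reordered by a's sort order, pairs preserved
```

<p>This is exactly what the three-line snippet does for <code>correctCompass</code> and <code>errorCompass</code>.</p>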
|
python|curve-fitting
| 2 |
1,905,438 | 14,639,053 |
Django: Ordered list of model instances from different models?
|
<p>Users can upload three different types of content onto our site: image, video, audio. Here are the models for each type:</p>
<pre><code>class ImageItem(models.Model):
user = models.ForeignKey(User)
upload_date = models.DateTimeField(auto_now_add=True)
image = models.ImageField(upload_to=img_get_file_path)
title = models.CharFiled(max_length=1000,
blank=True)
class VideoItem(models.Model):
user = models.ForeignKey(User)
upload_date = models.DateTimeField(auto_now_add=True)
video = models.FileField(upload_to=vid_get_file_path)
title = models.CharFiled(max_length=1000,
blank=True)
class AudioItem(models.Model):
user = models.ForeignKey(User)
upload_date = models.DateTimeField(auto_now_add=True)
audio = models.FileField(upload_to=aud_get_file_path)
title = models.CharFiled(max_length=1000,
blank=True)
</code></pre>
<p>I have a page called <code>library.html</code>, which renders all the items that a user has uploaded, in order from <em>most recently uploaded</em> to oldest uploads (it displays the <code>title</code> and <code>upload_date</code> of each instance, and puts a little icon on the left symbolizing what kind of item it is).</p>
<p>Assuming it requires three separate queries, how can I merge the three querysets? How can I make sure they are in order from most recently uploaded?</p>
|
<p>As an alternative, you can use <a href="https://docs.djangoproject.com/en/dev/topics/db/models/#multi-table-inheritance" rel="nofollow">multi-table inheritance</a> and factor common attributes into a superclass model. Then you just <code>order_by</code> upload date on the superclass model. The third-party app <a href="https://bitbucket.org/carljm/django-model-utils/src" rel="nofollow">django-model-utils</a> provides a custom manager called <code>InheritanceManager</code> that lets you automatically downcast to the subclass models in your query.</p>
<pre><code>from model_utils.managers import InheritanceManager
class MediaItem(models.Model):
objects = InheritanceManager()
user = models.ForeignKey(User)
upload_date = models.DateTimeField(auto_now_add=True)
title = models.CharField(max_length=1000,
blank=True)
class ImageItem(MediaItem):
image = models.ImageField(upload_to=img_get_file_path)
class VideoItem(MediaItem):
video = models.FileField(upload_to=vid_get_file_path)
class AudioItem(MediaItem):
audio = models.FileField(upload_to=aud_get_file_path)
</code></pre>
<p>Then, your query is just:</p>
<pre><code>MediaItem.objects.all().order_by('upload_date').select_subclasses()
</code></pre>
<p>This way, you get what you want with one just query (with 3 joins). Plus, your data is better normalized, and it's fairly simple to support all sorts more complicated queries, as well as pagination.</p>
<p>Even if you don't go for that, I'd still use <a href="https://docs.djangoproject.com/en/dev/topics/db/models/#abstract-base-classes" rel="nofollow">abstract base class inheritance</a> to conceptually normalize your data model, even though you don't get the database-side and ORM benefits.</p>
|
python|django|sorting|django-models|django-queryset
| 5 |
1,905,439 | 68,543,514 |
Custom input to a Python program from a Java program
|
<p>I am trying to make a full-fledged Java program that runs a python program. The python program is as follows:</p>
<pre><code>print('Enter two numbers')
a = int(input())
b = int(input())
c = a + b
print(c)
</code></pre>
<p>If I execute this code, the terminal looks something like this:</p>
<pre><code>Enter two numbers
5
3
8
</code></pre>
<p>Now, I want the same output when executing this code from Java. Here is my Java code:</p>
<pre><code>import java.io.*;
class RunPython {
public static void main(String[] args) throws IOException {
String program = "print('Enter two numbers')\na = int(input())\nb = int(input())\nc = a + b\nprint(a)\nprint(b)\nprint(c)";
FileWriter fileWriter = new FileWriter("testjava.py");
BufferedWriter bufferedWriter = new BufferedWriter(fileWriter);
bufferedWriter.write(program);
bufferedWriter.close();
Process process = Runtime.getRuntime().exec("python testjava.py");
InputStreamReader inputStreamReader = new InputStreamReader(process.getInputStream());
BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
OutputStreamWriter outputStreamWriter = new OutputStreamWriter(process.getOutputStream());
InputStreamReader read = new InputStreamReader(System.in);
BufferedReader in = new BufferedReader(read);
String output;
while (process.isAlive()) {
while (!bufferedReader.ready());
System.out.println(bufferedReader.ready());
while (!(output = bufferedReader.readLine()).isEmpty()) {
System.out.println(output);
}
bufferedReader.close();
if (process.isAlive()) {
outputStreamWriter.write(in.readLine());
}
}
}
}
</code></pre>
<p>But while running this program, only the first line is displayed and the first input is taken. After that, the program does not respond.
What mistake am I making? And what is the solution?</p>
|
<p>Dealing with input and output to another process is a bit messy; you can read
a good answer on how to do it <a href="https://stackoverflow.com/q/3643939/5047819">here</a></p>
<p>So applying those answers to your code could be something like this:</p>
<pre class="lang-java prettyprint-override"><code>import java.io.*;
import java.util.Scanner;
class RunPython {
public static void main(String[] args) throws IOException {
// String program = "print('Enter two numbers')\na = int(input())\nprint(a)\nb = int(input())\nprint(b)\nc = a + b\nprint(c)";
// If you are using Java 15 or newer you can write code blocks
String program = """
print('Enter first number')
a = int(input())
print(a)
print('Enter second number')
b = int(input())
print(b)
c = a + b
print(c)
""";
FileWriter fileWriter = new FileWriter("testjava.py");
BufferedWriter bufferedWriter = new BufferedWriter(fileWriter);
bufferedWriter.write(program);
bufferedWriter.close();
Process process =
new ProcessBuilder("python", "testjava.py")
.redirectErrorStream(true)
.start();
Scanner scan = new Scanner(System.in);
BufferedReader pythonOutput = new BufferedReader(new InputStreamReader(process.getInputStream()));
BufferedWriter pythonInput = new BufferedWriter(new OutputStreamWriter(process.getOutputStream()));
Thread thread = new Thread(() -> {
String input;
while (process.isAlive() && (input = scan.nextLine()) != null) {
try {
pythonInput.write(input);
pythonInput.newLine();
pythonInput.flush();
} catch (IOException e) {
System.out.println(e.getLocalizedMessage());
process.destroy();
System.out.println("Python program terminated.");
}
}
});
thread.start();
String output;
while (process.isAlive() && (output = pythonOutput.readLine()) != null) {
System.out.println(output);
}
pythonOutput.close();
pythonInput.close();
}
}
</code></pre>
<p>Note that <code>pythonInput</code> is a <code>BufferedWriter</code> on the Java side, and conversely
<code>pythonOutput</code> is a <code>BufferedReader</code>.</p>
|
java|python|process
| 3 |
1,905,440 | 54,085,133 |
How to dynamically provide function argument defaults from config files?
|
<p>How to have argument defaults dynamically set by a configuration file created by the user?</p>
<p>The configuration file consists of function paths with their arguments declared. Being able to import the function will allow me to get the function name and its module path.</p>
<p>Does anyone have any idea how to import a function given the number of a line inside the function and the python file containing the function (including functions inside classes)?</p>
<hr>
<p>EDIT (Added the below use case as requested by the comments)</p>
<p>I want to retrieve the function that defaults its argument to <code>configured_arg()</code> in the examples below:</p>
<pre class="lang-python3 prettyprint-override"><code>class SomeClass:
def __init__(kwarg=configured_arg()):
...
def some_function(kwarg=configured_arg()):
...
def another_function(kwarg=configured_arg()):
...
def configured_arg():
# NOTE: `inspect.stack` returns the caller as "<module>"
# NOTE: `inspect.stack` returns the `lineno` and file path of the call
caller = get_caller() # Unsure of how to implement
arg_key = get_arg_key() # Unsure of how to implement
return fetch_configured_value(caller, arg_key) # Fetch default arg from global config
</code></pre>
<p>Before posting on SO, I tried to use <code>inspect.stack</code>. The caller (e.g. <code>some_function</code>) does not appear in the stack of <code>configurable_arg</code>. However, <code>inspect.stack</code> has the <code>lineno</code> the default arg was declared. This experience inspired the current question.</p>
|
<p>At the time the function <code>configured_args</code> is called, the function for which it sets default arguments is not yet defined.</p>
<p>You need to change the default arguments <em>after</em> the function definition. One way to do that is to use a decorator.</p>
<p>You can then use the following attributes of a function to find its location across your modules.</p>
<ul>
<li><code>__module__</code>: the name of the module where the function is defined;</li>
<li><code>__name__</code>: the name of the function;</li>
<li><a href="https://www.python.org/dev/peps/pep-3155/" rel="nofollow noreferrer"><code>__qualname__</code></a>: the full path of the function in the module. By example a function <code>f</code> defined inside a class <code>A</code> will have <code>__qualname__</code> set to <code>A.f</code>.</li>
</ul>
<h2>Code</h2>
<pre><code>from functools import wraps
def get_kwargs(qualname, module):
# Get the default kwargs from configs
return {'kwarg': ...}
def configure_args(f):
qualname = f.__qualname__
module = f.__module__
default_kwargs = get_kwargs(qualname, module)
@wraps(f)
def wrapper(*args, **kwargs):
        new_kwargs = {**default_kwargs, **kwargs}
return f(*args, **new_kwargs)
return wrapper
@configure_args
def my_func(*args, **kwargs):
...
</code></pre>
<p>The function <code>my_func</code> now has a default keyword argument <code>kwarg</code>.</p>
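<p>To see the merging behavior end to end, here is a self-contained sketch where <code>get_kwargs</code> is stubbed to return a fixed default (the real version would read your configuration files):</p>

```python
from functools import wraps

def get_kwargs(qualname, module):
    # Stub: a real implementation would look up (module, qualname)
    # in the user's configuration files.
    return {'kwarg': 42}

def configure_args(f):
    default_kwargs = get_kwargs(f.__qualname__, f.__module__)

    @wraps(f)
    def wrapper(*args, **kwargs):
        # Explicitly passed keywords override the configured defaults.
        new_kwargs = {**default_kwargs, **kwargs}
        return f(*args, **new_kwargs)
    return wrapper

@configure_args
def my_func(kwarg=None):
    return kwarg

print(my_func())         # configured default: 42
print(my_func(kwarg=7))  # explicit value wins: 7
```

<p>Because the decorator runs at definition time, it sees the finished function object and can read <code>__qualname__</code> and <code>__module__</code> to locate the right config entry.</p>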
|
python|python-3.x|abstract-syntax-tree|inspect
| 1 |
1,905,441 | 25,545,795 |
reStructuredText lists with custom enumerator
|
<p>Consider the item "2-3)" in the "Complexity" list on <a href="http://en.cppreference.com/w/cpp/container/vector/vector" rel="nofollow">http://en.cppreference.com/w/cpp/container/vector/vector</a>. How could I achieve the same thing in reStructuredText/Sphinx? </p>
<p>The closest I got was writing </p>
<pre><code>| 1) ...
| 2-3) ...
| 4) ...
</code></pre>
<p>but this destroys the list formatting (enumerators and split lines are not properly indented). Another option would be to use tables, but I don't want a table formatting. </p>
|
<p>Found the following workaround. Define your list as</p>
<pre><code>.. rst-class:: custom-enumeration
==== ===
\1) ...
2-3) ...
\4) ...
==== ===
</code></pre>
<p>and add the following snippet to your CSS file.</p>
<pre><code>/* Define enumerations with custom enumerators */
table.custom-enumeration td {
border: 0px none;
}
table.custom-enumeration td:first-child {
padding-right: 5px;
text-align: right;
}
</code></pre>
|
python-sphinx|restructuredtext
| 0 |
1,905,442 | 25,434,512 |
Modifying Local Within Function Changes Global Passed to Function in Python
|
<p>I am trying to understand why the global copy of an object passed to a function is modified by the function. I understand that if I assign an object <code>mylist</code> to a second variable name, it does not make a copy and modifying the second object changes the first because they are one in the same. For example:</p>
<pre><code>mylist = []
s = mylist
s.append(2)
print(mylist)
[2]
</code></pre>
<p>However, I did not think this occurred within a function unless made explicit with <code>global varname</code>. As this <a href="https://stackoverflow.com/a/10588342/2327821">answer</a> puts it, "If you want to simply access a global variable you just use its name. However to change its value you need to use the global keyword." This is how the function <code>mult</code> below behaves (<a href="https://stackoverflow.com/a/25212138/2327821">though I am not modifying x but assigning a new value to it</a>). However, when I pass a global variable to either <code>app</code> or <code>noreturn</code>, in both instances the global variable is modified by the function without a declaration that I want to modify the global variable with a global keyword.</p>
<pre><code>import pandas as pd
def mult(x):
x = x * x
return x
def app(mylist):
mylist.append(4)
return mylist
def noreturn(df):
df['indexcol'] = list(df.index)
df = pd.DataFrame({"A": [10,20,30,40,50], "B": [20, 30, 10, 40, 50], "C": [32, 234, 23, 23, 42523]})
print(df)
A B C
0 10 20 32
1 20 30 234
2 30 10 23
3 40 40 23
4 50 50 42523
noreturn(df)
print(df)
A B C indexcol
0 10 20 32 0
1 20 30 234 1
2 30 10 23 2
3 40 40 23 3
4 50 50 42523 4
x = 3
mylist = []
y = mult(x)
newlist = app(mylist)
print(x, y)
(3, 9)
print(mylist, newlist)
([4], [4])
</code></pre>
<p>If I want a function that does not modify global variables do I need to use <code>copy.deepcopy</code> on every variable passed to the function?</p>
|
<p>Your quote says:</p>
<blockquote>
<p>If you want to simply access a global variable you just use its name. However to change its value you need to use the global keyword.</p>
</blockquote>
<p>I would modify that to:</p>
<blockquote>
<p>If you want to simply access the object a global variable refers to, you just use its name. However to change what object it refers to, you need to use the global keyword.</p>
</blockquote>
<p>Accessing a global object <em>can</em> include changing things about the object (such as appending a new object to an existing global list). However, if your function tried to do:</p>
<pre><code>def app(mylist):
mylist = mylist + [4]
return mylist
</code></pre>
<p>which attempts to change the global <code>mylist</code> reference (by creating a new object and assigning the result to <code>mylist</code>), then you would need to use the <code>global</code> keyword.</p>
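<p>A minimal side-by-side demonstration of the two cases:</p>

```python
def mutate(lst):
    lst.append(4)       # changes the object the caller also references

def rebind(lst):
    lst = lst + [4]     # builds a new list; only the local name changes

a = []
mutate(a)
b = []
rebind(b)
print(a, b)  # [4] []
```

<p>After the calls, <code>a</code> has been mutated in place, while <code>b</code> is unchanged because <code>rebind</code> only rebound its local name.</p>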
|
python|pandas
| 1 |
1,905,443 | 44,600,651 |
Django form validation failing - pytz and choices
|
<p>I am trying to let the user set which timezone they are in, however form validation <code>.is_valid()</code> is failing and cannot figure out why. </p>
<ul>
<li>The timezone value for a user is stored in a Profile model.</li>
<li>Using <code>ChoiceField</code> and <code>pytz.common_timezones</code> to fill the form field</li>
</ul>
<p>This would appear to be quite simple to do; the only thing that's different from my usual way is the use of a <code>ChoiceField</code>, with the data filling the combo/select box coming from <code>pytz</code>.</p>
<p>I may switch to <a href="https://github.com/mfogel/django-timezone-field" rel="nofollow noreferrer">django-timezone-field</a> to solve this, but I would like to understand why it is failing. I have included all relevant (I think) code below. Any suggestions?</p>
<p><strong>models.py</strong></p>
<pre><code>class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
bio = models.TextField(max_length=500, blank=True)
location = models.CharField(max_length=30, blank=True)
birth_date = models.DateField(null=True, blank=True)
timezone = models.CharField(
max_length=255,
blank=True,
)
</code></pre>
<p><strong>forms.py</strong></p>
<pre><code>class ProfileEditForm(forms.Form):
profile_timezone = forms.ChoiceField(choices=[(x, x) for x in pytz.common_timezones])
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>@login_required
def userprofile_edit(request):
if request.method == "POST":
profile_edit_form = ProfileEditForm()
if profile_edit_form.is_valid():
cd = profile_edit_form.cleaned_data
user = User.objects.get(id=request.user.id)
user.profile.timezone = cd['timezone']
user.profile.save()
messages.success(request, "Profile updated successfully", fail_silently=True)
return redirect('coremgr:userprofile', request.user.id)
else:
messages.error(request, "Error occured. Contact your administrator", fail_silently=True)
print "error: form not valid"
else:
profile_edit_form = ProfileEditForm()
context = {
'profile_edit_form': profile_edit_form,
}
return render(request, 'apps/coremgr/userprofile_edit.html', context)
</code></pre>
<p><strong>template</strong></p>
<pre><code><form name="formprofile" method="POST" action="">
{% csrf_token %}
<p id="profile_timezone" class="form-inline">
{{ profile_edit_form.profile_timezone.errors }}
Timezone:
{{ profile_edit_form.profile_timezone }}
</p>
<button id="id_btn_profile_edit_save" type="submit" class="btn btn-default" tabindex=7>Save</button>
</form>
</code></pre>
|
<p>I believe you would need to pass in request.POST when initializing the form in your if block.</p>
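<p>For illustration, the relevant part of the view would become something like the following sketch. Note also that the form field is named <code>profile_timezone</code>, so the cleaned-data key must match that name, not <code>timezone</code>:</p>
<pre><code>if request.method == "POST":
    profile_edit_form = ProfileEditForm(request.POST)  # bind the submitted data
    if profile_edit_form.is_valid():
        cd = profile_edit_form.cleaned_data
        user.profile.timezone = cd['profile_timezone']
</code></pre>
<p>An unbound form (one created with no data) is never valid, which is why the view always falls through to the error branch.</p>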
|
python|django|django-forms|pytz|django-timezone
| 0 |
1,905,444 | 24,167,661 |
Using List For Conditional Statements
|
<p>I'm attempting to run through a column in my Python data file and only want to keep the lines of data that have values of 5, 6, 7, 8, or 9 in a certain column. </p>
<pre><code>var = [5, 6, 7, 8, 9]
import glob
import numpy as np
filname = glob.glob(''+fildir+'*')
for k in filname:
data = np.genfromtxt(k,skip_header=6,usecols=(2,3,4,5,8,9,10,11))
if data[:,1] not in var:
continue
</code></pre>
<p>"fildir" is just the directory where all of my files are at. data[:,1] have values that range from 1-15 and like I said, I just want to keep lines that have values 5-9. When I run this code I get:</p>
<pre><code> ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>Any helpful hints?</p>
|
<p>Your first problem is that you are trying to evaluate the boolean value of a NumPy array.</p>
<pre><code>if data[:,1] not in var:
</code></pre>
<p>In your example, <code>data[:,1]</code> is a collection of all the second column values from your filed read as floats. So, disregarding your <code>usecols=</code> for the moment, a file that contains</p>
<pre><code>1 2 3 4 5 6 7 8
9 10 11 12 13 14 15 16
</code></pre>
<p>will in your example produce</p>
<pre><code>>> data[:,1]
array([2., 10.])
</code></pre>
<p>which is not what you want to check anyway. Why exactly the error occurs is explained in somewhat more depth <a href="https://pandas.pydata.org/pandas-docs/stable/gotchas.html" rel="nofollow">here</a>.</p>
<p>If all you want to do is store a list of all rows that have a value from the <code>var</code> list in their second column I'd suggest a simple approach.</p>
<pre><code>from glob import glob
import numpy as np
var = [5, 6, 7, 8, 9]
filname = glob('fildir/*')
# get the desired rows from all files in folder
# use int as dtype because float is default
# I'm not an expert on numpy so I'll .tolist() the arrays
data = [np.genfromtxt(k, dtype=int, skip_header=6).tolist() for k in filname]
# flatten the list to have all the rows file agnostic
data = [x for sl in data for x in sl]
# filter the data and return all the desired rows
filtered_data = list(filter(lambda x: x[1] in var, data))  # list() so it's reusable under Python 3
</code></pre>
<p>There is probably a more numPythonic way to do this depending on the data structure you want to keep your rows in, but this one is very simple.</p>
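<p>One such numPythonic variant: <code>np.isin</code> builds a boolean row mask over the second column directly, so no Python-level loop is needed (a sketch with toy data):</p>

```python
import numpy as np

var = [5, 6, 7, 8, 9]
data = np.array([[1, 5, 0],
                 [2, 3, 0],
                 [3, 9, 0]])

mask = np.isin(data[:, 1], var)  # True where column 1 is one of the kept values
kept = data[mask]
print(kept)  # only the rows with 5 and 9 in column 1
```

<p>The same mask works on the arrays returned by <code>np.genfromtxt</code>, before any conversion to lists.</p>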
|
python|numpy
| 0 |
1,905,445 | 20,789,113 |
wx.StatusBar - Why does my status bar add a gray box covering my panel?
|
<p>Why does the following code add a gray box over most of the wxPanel?</p>
<pre><code>statusBar = wx.StatusBar(frame)
</code></pre>
<p>However, nothing like that happens for other widgets like these:</p>
<pre><code>textbox = wx.TextCtrl(panel, size = (300,20), pos = (10,30))
button = wx.Button(panel, size = (300,20), pos = (10,60), label = "Go")
</code></pre>
|
<p>You have to set the wxStatusBar to the frame like this:</p>
<pre><code>frame.SetStatusBar(statusBar)
</code></pre>
<p>The status bar widget requires 2 lines of code to get started while most others just take 1 line of code.</p>
|
wxpython
| 0 |
1,905,446 | 72,116,685 |
None of the decimal rounding methods I know worked for my current need (converting string from a fractional odds to decimal odds)
|
<p>The values in the <code>a</code> list are the math division values that come from the API.</p>
<p>To convert a string into a calculation, I use <code>eval()</code>.</p>
<p>The values in <code>correct_list</code> are the final results of this division add <code>+1</code>.<br />
The API indicates this are the exact results of this calculation to work with its data.</p>
<p>But I'm having difficulties when these divisions have several decimal, each model I tried to format so that the limit of decimals was 2, the created list doesn't match perfectly with the correct result:</p>
<pre><code>import math
import decimal
decimal.getcontext().rounding = decimal.ROUND_CEILING
def main():
a = ['11/10', '29/20', '5/4', '1/2', '10/11', '2/1', '4/7', '11/8', '13/10', '1/8', '4/11', '5/4', '163/100', '11/20', '17/20', '23/20', '6/4', '1/1', '8/5']
b = []
correct_list = ['2.10', '2.45', '2.25', '1.50', '1.91', '3.00', '1.57', '2.38', '2.30', '1.13', '1.36', '2.25', '2.63', '1.55', '1.85', '2.15', '2.50', '2.00', '2.60']
for c in a:
b.append(f'{round(eval(c)+1, ndigits=2)}')
print(b)
if b == correct_list:
print('Ok. Amazing Match!')
if __name__ == '__main__':
main()
</code></pre>
<p>These were all my attempts:</p>
<pre><code>b.append(f'{eval(c)+1:.2f}')
['2.10', '2.45', '2.25', '1.50', '1.91', '3.00', '1.57', '2.38', '2.30', '1.12', '1.36', '2.25', '2.63', '1.55', '1.85', '2.15', '2.50', '2.00', '2.60']
</code></pre>
<pre><code>b.append(f'{math.ceil((eval(c)+1) * 100.0) / 100.0:.2f}')
['2.10', '2.46', '2.25', '1.50', '1.91', '3.00', '1.58', '2.38', '2.30', '1.13', '1.37', '2.25', '2.63', '1.55', '1.85', '2.15', '2.50', '2.00', '2.60']
</code></pre>
<pre><code>b.append(f'{float(round(decimal.Decimal(str(eval(c)+1)), ndigits=2)):.2f}')
['2.10', '2.45', '2.25', '1.50', '1.91', '3.00', '1.58', '2.38', '2.30', '1.13', '1.37', '2.25', '2.63', '1.55', '1.85', '2.15', '2.50', '2.00', '2.60']
</code></pre>
<pre><code>b.append(f'{round(eval(c)+1, ndigits=2):.2f}')
['2.10', '2.45', '2.25', '1.50', '1.91', '3.00', '1.57', '2.38', '2.30', '1.12', '1.36', '2.25', '2.63', '1.55', '1.85', '2.15', '2.50', '2.00', '2.60']
</code></pre>
<p>None delivered the results that perfectly match the API calculation model.</p>
|
<p>The only case that the normal <code>round()</code> function doesn't handle the way you want is that it rounds <code>0.005</code> down to <code>0.00</code> instead of up to <code>0.01</code>. So if you're willing to be cheesy, just add 1.001 instead of 1, and then use normal rounding.</p>
<pre><code>a = ['11/10', '29/20', '5/4', '1/2', '10/11', '2/1', '4/7', '11/8', '13/10', '1/8', '4/11', '5/4', '163/100', '11/20', '17/20', '23/20', '6/4', '1/1', '8/5']
correct_list = ['2.10', '2.45', '2.25', '1.50', '1.91', '3.00', '1.57', '2.38', '2.30', '1.13', '1.36', '2.25', '2.63', '1.55', '1.85', '2.15', '2.50', '2.00', '2.60']
b = [f'{float.__truediv__(*map(float, c.split("/"))) + 1.001:.2f}' for c in a]
if b == correct_list:
print('Ok. Amazing Match!') # succeeds
</code></pre>
<p>If you want to use the <code>decimal</code> module to do it in a non-cheesy way, you need to do the math with <code>Decimal</code> objects, and the rounding method you want is <code>ROUND_HALF_UP</code>.</p>
<pre><code>import decimal
decimal.getcontext().rounding = decimal.ROUND_HALF_UP
a = ['11/10', '29/20', '5/4', '1/2', '10/11', '2/1', '4/7', '11/8', '13/10', '1/8', '4/11', '5/4', '163/100', '11/20', '17/20', '23/20', '6/4', '1/1', '8/5']
correct_list = ['2.10', '2.45', '2.25', '1.50', '1.91', '3.00', '1.57', '2.38', '2.30', '1.13', '1.36', '2.25', '2.63', '1.55', '1.85', '2.15', '2.50', '2.00', '2.60']
b = []
for c in a:
n, d = map(int, c.split("/"))
result = decimal.Decimal(1) + decimal.Decimal(n) / decimal.Decimal(d)
b.append(f'{result:.2f}')
if b == correct_list:
print('Ok. Amazing Match!') # succeeds
</code></pre>
|
python|math|decimal|rounding|eval
| 1 |
1,905,447 | 72,012,089 |
How to use to_date in panda for yyyy-mm-dd format to extract month name?
|
<pre><code>df["month"]=pd.to_datetime(df['date'],format="%y-%m-%d").dt.month_name()
df.set_index('date', inplace=True)
</code></pre>
<p>I used this code to extract the month name from the date series in my CSV file. All the dates have the format yyyy-mm-dd, so I used <code>%y-%m-%d</code> to extract the month name from the date, but I'm getting a key error. Can you tell me where I made the mistake?</p>
<p>Errors:
<a href="https://i.stack.imgur.com/DZs2i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DZs2i.png" alt="0" /></a></p>
<p><a href="https://i.stack.imgur.com/BoZZh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BoZZh.png" alt="1" /></a></p>
|
<p>You need to use the capital <code>Y</code> (four-digit year), not the lowercase <code>y</code>:</p>
<pre><code>df["month"]=pd.to_datetime(df['date'],format="%Y-%m-%d").dt.month_name()
df.set_index('date', inplace=True)
</code></pre>
<p>Output:</p>
<pre><code>                month
date
2022-02-01   February
2022-09-10  September
</code></pre>
|
python|pandas|csv
| 3 |
1,905,448 | 49,560,618 |
Is it possible to delete all zero columns and rows of a matrix in sympy?
|
<p>I have a matrix like this:</p>
<p><a href="https://i.stack.imgur.com/ZecE6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZecE6.png" alt="enter image description here"></a></p>
<p>And i would like to remove the zero rows and columns, is there a fast way to do it in sympy? Thanks.</p>
|
<p>The methods <code>row_del</code> and <code>col_del</code> allow one to delete one row or column at a time. It looks like you may have several of those, so doing this one step at a time might be slow. </p>
<p>I would probably use advanced indexing, passing lists of row and column numbers that are to be <em>kept</em>. </p>
<pre><code>A = Matrix([[0, 0, 0, 0], [0, 1, 2, 3], [0, a, b, c], [0, 7, 8, 9]])
m, n = A.shape
rows = [i for i in range(m) if any(A[i, j] != 0 for j in range(n))]
cols = [j for j in range(n) if any(A[i, j] != 0 for i in range(m))]
A = A[rows, cols]
</code></pre>
<p>Alternatively, if you don't mind using NumPy, this can be a little shorter. </p>
<pre><code>import numpy as np
A = Matrix([[0, 0, 0, 0], [0, 1, 2, 3], [0, a, b, c], [0, 7, 8, 9]])
B = np.array(A)
A = Matrix(B[np.ix_(np.any(B != 0, axis=1), np.any(B != 0, axis=0))])
</code></pre>
|
python|sympy
| 2 |
1,905,449 | 49,578,944 |
pandas: looping through several dataframes for efficiency
|
<p>I'm just learning pandas, working in a Jupyter Notebook. I have 10 datasets containing daily temperatures from different years. However, the dates are split across separate columns for month, day and year, with the temperature in another column. The original data have spaces in the column names. For example, the output of <code>df1.info()</code> is the following:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 17301 entries, 0 to 17300
Data columns (total 4 columns):
agno 17301 non-null int64
mes 17301 non-null int64
dia 17301 non-null int64
valor 17301 non-null float64
dtypes: float64(1), int64(3)
memory usage: 540.7 KB
</code></pre>
<p>I did write the following code, which performs the job, after checking different solutions in this community; I wonder about a more efficient and pandorable way to do it.</p>
<pre><code>list_df = [df1,df2,df3,df4,df5,df6,df7,df8,df9,df10]
for i, df in enumerate(list_df):
df = list_df[i]
df = df.rename(columns=lambda x: x.strip())
df['mes'] = df['mes'].astype(str)
df['dia'] = df['dia'].astype(str)
df['combined']=df['agno'].astype(str) + "-" + df["mes"] + "-" + df["dia"]
df["date"]= pd.to_datetime(df["combined"])
df = df[['date','valor']]
list_df[i] = df
i = i + 1
</code></pre>
<p>I would appreciate your help. Thanks in advance.</p>
|
<p>I think it can be done with <code>concat</code>. This creates a multi-index data frame, so you can make those adjustments once on the combined dataframe, then <code>groupby</code> the <code>key</code> (level=0 here) and split the single dataframes back out into the new list you want.</p>
<pre><code>s=pd.concat(list_df,keys=list(range(len(list_df))))
s = s.rename(columns=lambda x: x.strip())
s['mes'] = s['mes'].astype(str)
s['dia'] = s['dia'].astype(str)
s['combined']=s['agno'].astype(str) + "-" + s["mes"] + "-" + s["dia"]
s["date"]= pd.to_datetime(s["combined"])
s = s[['date','valor']]
list_df=[d for _,d in s.groupby(level=0)]
</code></pre>
|
python|pandas|for-loop
| 3 |
1,905,450 | 49,720,942 |
How to display in which position a number was entered by input?
|
<p>I have a list and once I sorted it to find the "biggest value", <strong>I need to display in which position this number was entered</strong>.</p>
<p>Any idea how to do it ?</p>
<p>I made this code to receive and sort the list but I have no idea how to receive the initial position of the "data".</p>
<hr>
<pre><code>liste = []
while len(liste) != 5:
liste.append (int(input("enter number :\n")))
listeSorted = sorted(liste)
print("the biggest number is : ", listeSorted[-1])
</code></pre>
|
<p>Instead of finding the biggest value of the list by sorting it, use the <code>max</code> function:</p>
<pre><code>largest_val = max(liste)
</code></pre>
<p>To find the first occurrence of a value in a list, use the <code>.index</code> method.</p>
<pre><code>position = liste.index(largest_val)
</code></pre>
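<p>Putting the two together with some toy values:</p>

```python
liste = [3, 9, 2, 9, 5]
largest_val = max(liste)
position = liste.index(largest_val)  # .index returns the first occurrence
print(largest_val, position)  # 9 1
```

<p>If the biggest value was entered more than once, <code>.index</code> reports where it was entered first.</p>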
|
python|python-3.x|list
| 6 |
1,905,451 | 62,802,195 |
Cloning a git repo from git bash using python script from command prompt failing. Why?
|
<p>The git bash closes abruptly without cloning the repo. I'm unable to understand what's wrong here.</p>
<pre class="lang-py prettyprint-override"><code>import os
import subprocess
parent_dir = r'C:\Users\user\Documents'
dir_name = 'Git_temp'
dir_path = os.path.join(parent_dir, dir_name)
os.mkdir(dir_path)
print("{0} created under {1}".format(dir_name, parent_dir))
os.chdir(dir_path)
print("Current Working Directory : {0}".format(os.getcwd()))
git_file = "C:\Program Files\Git\git-bash.exe"
git_clone_cmd = "git clone https://github.com/patankar-saransh/test_repo.git"
subprocess.Popen([git_file, git_clone_cmd])
</code></pre>
|
<p>If you do not want to use <a href="https://github.com/gitpython-developers/GitPython" rel="nofollow noreferrer">GitPython</a>, then simply make sure <code>git.exe</code> is in your <code>%PATH%</code>.</p>
<p>Then your call would be:</p>
<pre><code>import subprocess
process = subprocess.Popen(["git", "clone", "https://..."], stdout=subprocess.PIPE)
output = process.communicate()[0]
</code></pre>
<p>As <a href="https://stackoverflow.com/a/15315617/6309">seen here</a>, with Python 2.7+, use <a href="https://docs.python.org/3/library/subprocess.html#subprocess.check_output" rel="nofollow noreferrer">check_output</a></p>
<pre><code>import subprocess
output = subprocess.check_output(["git", "clone", "https://..."])
</code></pre>
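<p>The same <code>check_output</code> pattern can be tried without git or a network; here the Python interpreter itself stands in for the external command (a runnable sketch):</p>

```python
import subprocess
import sys

# Stand-in for ["git", "clone", url]: any argv list works the same way,
# as long as the executable is on the PATH (or given as an absolute path).
output = subprocess.check_output([sys.executable, "-c", "print('cloned')"])
print(output.decode().strip())  # cloned
```

<p><code>check_output</code> also raises <code>CalledProcessError</code> if the command exits non-zero, which is handy for detecting a failed clone.</p>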
|
python|git|python-2.7|github
| 1 |
1,905,452 | 62,674,247 |
python postgresql TypeError: 'NoneType' object is not iterable for loop
|
<p>I have a PostgreSQL database.</p>
<p>#p1 = {613309872640, 135117629824, 880993344004, 158175822853, 1073752495240, 216760058760, 447123298313, 912909533578, 762965475336, 306930570381, 972176171275, 6619746319, 343850096144, 764400555409, 546205930000, 528687305491, 404493333779, 77590592280, 855904184346, 936056534938, 661500784670, 141923831071, 562200894625, 567019890466, 832033035811, 72309584292, 364867195430, 190122917676, 144896917811, 580869171253, 1627505081, 627702833596, 644839868222, 172977438016, 1031295913668, 463277927492, 238807296070, 500185745223, 780490132936, 947594928202, 888064501323, 992713674444, 453125481684, 502841610453, 81933315933, 94767975264, 103246266465, 275027812832, 276760003301, 420374924396, 263072530544, 203425700337, 334072272115, 917262638965, 989106489206, 598614815351, 328241935477, 554909676025}</p>
<p>I use this code:</p>
<pre><code>connection = psycopg2.connect(user = "...",
password = "...",
host = "...",
port = "...",
database = "...")
c = connection.cursor()
.......
match_ids = set()
for chunk in p[1]:
for id in c.execute("SELECT S.val1 FROM table1 as F JOIN table2 AS S ON F.val2_id=S.val2_id WHERE F.val3=%s and F.val4=%s", (p[0], chunk)):
match_ids.add(id[0])
print(match_ids)
</code></pre>
<p>I can use this code with sqlite,<br />
but I get an error when using postgresql.</p>
<p>Error: TypeError: 'NoneType' object is not iterable</p>
<pre><code>runcell(0, 'C:/Users/....../test1.py')
{613309872640, 135117629824, 880993344004, 158175822853, 1073752495240, 216760058760, 447123298313, 912909533578, 762965475336, 306930570381, 972176171275, 6619746319, 343850096144, 764400555409, 546205930000, 528687305491, 404493333779, 77590592280, 855904184346, 936056534938, 661500784670, 141923831071, 562200894625, 567019890466, 832033035811, 72309584292, 364867195430, 190122917676, 144896917811, 580869171253, 1627505081, 627702833596, 644839868222, 172977438016, 1031295913668, 463277927492, 238807296070, 500185745223, 780490132936, 947594928202, 888064501323, 992713674444, 453125481684, 502841610453, 81933315933, 94767975264, 103246266465, 275027812832, 276760003301, 420374924396, 263072530544, 203425700337, 334072272115, 917262638965, 989106489206, 598614815351, 328241935477, 554909676025}
Traceback (most recent call last):
File "C:\Users\.....\test1.py", line 208, in <module>
for id in c.execute("SELECT S.val1 FROM table1 as F JOIN table2 AS S ON F.val2_id=S.val2_id WHERE F.val3=%s and F.val4=%s", (p[0], chunk)):
TypeError: 'NoneType' object is not iterable
</code></pre>
|
<p>The actual issue is that, unlike sqlite3's cursor, psycopg2's <code>cursor.execute()</code> always returns <code>None</code>, so iterating over its return value fails. Call <code>execute</code> first, then iterate over the cursor itself (or call <code>fetchall()</code>):</p>
<pre><code>for chunk in p[1]:
    c.execute("SELECT S.val1 FROM table1 as F JOIN table2 AS S ON F.val2_id=S.val2_id WHERE F.val3=%s and F.val4=%s", (p[0], chunk))
    for id in c:
        match_ids.add(id[0])
print(match_ids)
</code></pre>
|
python|postgresql
| 1 |
1,905,453 | 53,445,599 |
Plot subplots using seaborn pairplot
|
<p>If I draw the plot using the following code, it works and I can see all the subplots in a single row. I can specifically break the number of cols into three or two and show them. But I have 30 columns and I wanted to use a loop mechanism so that they are plotted in a grid of say 4x4 sub-plots</p>
<pre><code>regressionCols = ['col_a', 'col_b', 'col_c', 'col_d', 'col_e']
sns.pairplot(numerical_df, x_vars=regressionCols, y_vars='price',height=4, aspect=1, kind='scatter')
plt.show()
</code></pre>
<p>The code using loop is below. However, I don't see anything rendered.</p>
<pre><code> nr_rows = 4
nr_cols = 4
li_cat_cols = list(regressionCols)
fig, axs = plt.subplots(nr_rows, nr_cols, figsize=(nr_cols*4,nr_rows*4), squeeze=False)
for r in range(0, nr_rows):
for c in range(0,nr_cols):
i = r*nr_cols+c
if i < len(li_cat_cols):
sns.set(style="darkgrid")
bp=sns.pairplot(numerical_df, x_vars=li_cat_cols[i], y_vars='price',height=4, aspect=1, kind='scatter')
bp.set(xlabel=li_cat_cols[i], ylabel='Price')
plt.tight_layout()
plt.show()
</code></pre>
<p>Not sure what I am missing. </p>
|
<p>I think you didn't connect each of the subplot axes you created to the scatter plots generated in the loop; <code>sns.pairplot</code> always creates its own figure and ignores your <code>axs</code>.</p>
<p>Maybe this solution with pandas' built-in plots could work for you.
For example:</p>
<p>1. Let's simply define an empty pandas DataFrame.</p>
<pre><code>numerical_df = pd.DataFrame([])
</code></pre>
<p>2. Create some random features and price depending on them:</p>
<pre><code>numerical_df['A'] = np.random.randn(100)
numerical_df['B'] = np.random.randn(100)*10
numerical_df['C'] = np.random.randn(100)*-10
numerical_df['D'] = np.random.randn(100)*2
numerical_df['E'] = 20*(np.random.randn(100)**2)
numerical_df['F'] = np.random.randn(100)
numerical_df['price'] = 2*numerical_df['A'] +0.5*numerical_df['B'] - 9*numerical_df['C'] + numerical_df['E'] + numerical_df['D']
</code></pre>
<p>3. Define number of rows and columns. Create a subplots space with nr_rows and nr_cols. </p>
<pre><code>nr_rows = 2
nr_cols = 4
fig, axes = plt.subplots(nrows=nr_rows, ncols=nr_cols, figsize=(15, 8))
</code></pre>
<p>4. Enumerate each feature in dataframe and plot a scatterplot with price:</p>
<pre><code>for idx, feature in enumerate(numerical_df.columns[:-1]):
numerical_df.plot(feature, "price", subplots=True,kind="scatter",ax=axes[idx // 4,idx % 4])
</code></pre>
<p>where <code>axes[idx // 4, idx % 4]</code> defines the location of each scatterplot in a matrix you create in (3.)</p>
<p>So, we got a matrix plot:</p>
<p><a href="https://i.stack.imgur.com/xzCeX.png" rel="nofollow noreferrer">Scatterplot matrix</a></p>
|
python|pandas|matplotlib|seaborn
| 1 |
1,905,454 | 53,511,651 |
jQuery not recognized in extended html python flask
|
<p>Not sure what is going on here. It seems that jQuery is not "extended" from a base.html file to an external one.</p>
<p>I have:</p>
<pre><code>#base.html
<!doctype html>
<html class="no-js" lang="en">
<head>
...
... # css imports
</head>
<body>
# some fixed stuff
{% block body %}
# some stuff specific to base.html
{% endblock %}
# js imports at the end of the body
<script src="static/js/jquery-3.1.0.min.js"></script>
... # various other js
</body>
</html>
</code></pre>
<p>Then I have another html file:</p>
<pre><code>#test.html
{% extends "base.html" %}
{% block body %}
# some stuff
<script type="text/javascript">
$(document).ready( function () {
alert("foo")
} );
</script>
{% endblock %}
</code></pre>
<p>Now, I don't get any alert. However, if I use plain javascript it works as expected.</p>
<p>I noticed that if I import again jQuery in test.html, jQuery works fine. But what's the point of extending then? </p>
<p>There must be something that I'm missing. By the way, this seems to happen only with jQuery, all other javascript libraries seem to be extended fine.</p>
|
<p>It's really very simple: when the following code runs, jQuery must already be loaded.</p>
<pre><code><script type="text/javascript">
$(document).ready( function () {
alert("foo")
} );
</script>
</code></pre>
<p>However, jQuery is loaded AFTER these commands run, whereas it needs to be loaded BEFORE them. Move the jQuery import (<code>jquery-3.1.0.min.js</code>) above the <code>{% block body %}</code> tag in <code>base.html</code> (or into the <code>head</code>), so every child template's scripts run after jQuery is available.</p>
<p>It's like trying to drink water from a glass before pouring any water into it: you need to fill the glass first.</p>
|
javascript|python|jquery|html|flask
| 1 |
1,905,455 | 45,879,793 |
where is the Django's migration listing (migrate -l)
|
<p>I've just upgraded to Django 1.11.4
and I cannot use <code>python manage.py migrate -l</code></p>
<p>There is not listing option any more. Why? What is the replacement for?</p>
<p>That listing was nice way to verify what is not migrated yet.</p>
<hr>
<p>From latest Django:</p>
<pre><code>usage: manage.py migrate [-h] [--version] [-v {0,1,2,3}] [--settings SETTINGS]
[--pythonpath PYTHONPATH] [--traceback] [--no-color]
[--noinput] [--database DATABASE] [--fake]
[--fake-initial] [--run-syncdb]
[app_label] [migration_name]
Updates database schema. Manages both apps with migrations and those without.
positional arguments:
app_label App label of an application to synchronize the state.
migration_name Database state will be brought to the state after that
migration. Use the name "zero" to unapply all
migrations.
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v {0,1,2,3}, --verbosity {0,1,2,3}
Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
--settings SETTINGS The Python path to a settings module, e.g.
"myproject.settings.main". If this isn't provided, the
DJANGO_SETTINGS_MODULE environment variable will be
used.
--pythonpath PYTHONPATH
A directory to add to the Python path, e.g.
"/home/djangoprojects/myproject".
--traceback Raise on CommandError exceptions
--no-color Don't colorize the command output.
--noinput, --no-input
Tells Django to NOT prompt the user for input of any
kind.
--database DATABASE Nominates a database to synchronize. Defaults to the
"default" database.
--fake Mark migrations as run without actually running them.
--fake-initial Detect if tables already exist and fake-apply initial
migrations if so. Make sure that the current database
schema matches your initial migration before using
this flag. Django will only check for an existing
table name.
--run-syncdb Creates tables for apps without migrations
</code></pre>
|
<p>Django 1.8 introduced <code>showmigrations</code> command, which you can use instead (<a href="https://docs.djangoproject.com/en/1.11/ref/django-admin/#showmigrations" rel="nofollow noreferrer">details</a>).</p>
<p><code>migrate --list</code> was deprecated since 1.8 and was removed in 1.10. </p>
|
python|django
| 5 |
1,905,456 | 55,047,915 |
Running remote script on remote interpreter in PyCharm
|
<p>I have a python script on a remote server. I would like to run it on the remote server itself. However, PyCharm is not installed on the remote server and I need PyCharm to debug the code. </p>
<p>I have PyCharm on my local computer and I would like to run the script (which is on remote server) on the remote server using PyCharm on my local machine. I am aware of Deployment tools and remote server host in PyCharm. I do not know how to make them work together. </p>
<p>Having the script on my local computer is not really an option for me as there are huge data files involved with the code and I have very limited storage on my laptop.</p>
|
<p>I'm not sure if it's absolutely necessary to use Pycharm only for debugging on your remote server but if not, you can use pdb module for debugging too: <a href="https://docs.python.org/2/library/pdb.html" rel="nofollow noreferrer">https://docs.python.org/2/library/pdb.html</a></p>
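<p>A minimal sketch of how pdb is typically dropped into a script (the function here is made up for illustration; it also works fine over a plain SSH session):</p>

```python
import pdb  # standard-library debugger

def buggy(x):  # hypothetical function standing in for the remote script's code
    y = x * 2
    # Uncomment the next line to pause here, then inspect with 'p y',
    # step with 'n', and resume with 'c':
    # pdb.set_trace()
    return y + 1

print(buggy(3))  # 7
```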
|
python|ssh|pycharm|remote-server
| 0 |
1,905,457 | 54,899,010 |
How can I figure out which arbitrary number occurs twice in a list of integers from input? (Python)
|
<p>Say I'm receiving a list of arbitrary numbers from input, like </p>
<pre><code>[1,2,3,4,5,6,7,8,8,9,10]
</code></pre>
<p>My code doesn't know what numbers these are going to be before it receives the list, and I want to return the number that appears twice automatically. How do I go about doing so?</p>
<p>Thank you.</p>
|
<p>You can use <code>Counter</code> from the standard library's <code>collections</code> module (it works in both Python 2 and 3):</p>
<pre><code>from collections import Counter
lst=[1,2,3,4,5,6,7,8,8,9,10]
items=[k for k,v in Counter(lst).items() if v==2]
print(items)
</code></pre>
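<p>If you know in advance that exactly one value occurs twice (as in the example), <code>most_common</code> gives it to you directly; note this sketch assumes a single duplicate:</p>

```python
from collections import Counter

lst = [1, 2, 3, 4, 5, 6, 7, 8, 8, 9, 10]
# most_common(1) returns a list with the single (value, count) pair
# that has the highest count
value, count = Counter(lst).most_common(1)[0]
print(value, count)  # 8 2
```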
|
python
| 1 |
1,905,458 | 54,759,052 |
I'm calculating the average of scores in a dictionary, but it returns error
|
<p>I'm calculating the average of scores in a dictionary, but it returns an error and I don't know exactly what's the error:</p>
<p><a href="https://i.stack.imgur.com/jXUZ2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jXUZ2.png" alt="enter image description here"></a></p>
|
<p>The issue is above the <code>else</code> statement, there is an extra <code>[</code> it should be</p>
<pre><code>dic[i[0]].append(int(i[1]))
</code></pre>
|
python
| -1 |
1,905,459 | 73,703,632 |
How to write a .cfg file with ":" instead of "=" in python
|
<p>I'm using python3 ConfigParser, my code is</p>
<pre><code>conf = configparser.ConfigParser()
cfg_file = open('./config.cfg', 'w')
conf.set(None, 'a', '0')
conf.write(cfg_file)
cfg_file.close()
</code></pre>
<p>I expected the result to be <code>a : 0</code> in the config.cfg file. However, I got <code>a = 0</code>. Is there a way to replace "=" with ":"?</p>
|
<p>Use the delimiters parameters to <a href="https://docs.python.org/3/library/configparser.html#configparser.ConfigParser" rel="nofollow noreferrer"><code>ConfigParser</code></a>:</p>
<blockquote>
<p>When `delimiters' is given, it will be used as the set of substrings
that divide keys from values.</p>
</blockquote>
<pre><code>conf = configparser.ConfigParser(delimiters=':')
</code></pre>
<p>Result:</p>
<pre><code>[DEFAULT]
a : 0
</code></pre>
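<p>A self-contained sketch of the whole round trip, writing to an in-memory buffer instead of a file and using a falsy section name to target <code>[DEFAULT]</code> as in the question:</p>

```python
import configparser
import io

conf = configparser.ConfigParser(delimiters=(':',))
conf.set(None, 'a', '0')   # a falsy section name writes into [DEFAULT]

buf = io.StringIO()        # in-memory stand-in for open('./config.cfg', 'w')
conf.write(buf)
print(buf.getvalue())      # contains "a : 0" under [DEFAULT]
```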
|
python|string|config
| 2 |
1,905,460 | 12,788,859 |
Call function when wxPython Notebook is changed?
|
<p>I've got a Python script nearly identical to that on <a href="http://wiki.wxpython.org/Simple%20wx.Notebook%20Example" rel="nofollow">the wxPy Wiki</a>, and I'd like to run a function in (no indentation) every time the tab is changed. I've tried just putting <code>theFunction()</code> after the <code>wx.Frame.__init__...</code> line in the <code>def __init__</code> in each of hte three classes used by the Notebook. When I do this, the window closes immediately after closing, with no error in the terminal. What am I doing wrong? Here's what I have for each class used by the notebook:</p>
<pre><code>class Textures(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
self.textureslist = wx.ListBox(choices = ['One', 'Two', 'Three'], id = -1, name = 'textureslist', parent = self, pos = (6, 0), size = (382, 150))
getDivVars()
</code></pre>
<p>EDIT: The reason it was quitting was related to accidentally using App, not PySimpleApp. However, the list still doesn't update (which getDivVars() does) each time the tab is changed. I still need to do this.</p>
|
<p>To have a function called whenever the notebook page changes, bind the <code>EVT_NOTEBOOK_PAGE_CHANGED</code> event to a function or method. Taking the 'Simple wx.Notebook Example` as a starting point, you'd add a line</p>
<pre><code> nb.Bind(wx.EVT_NOTEBOOK_PAGE_CHANGED, self.some_method)
</code></pre>
<p>or, if you want to call a function outside of a class instead of a method,</p>
<pre><code> nb.Bind(wx.EVT_NOTEBOOK_PAGE_CHANGED, some_function)
</code></pre>
<p><code>some_function</code> would need to take a single parameter, and <code>some_method</code> would need to take a single parameter in addition to <code>self</code>.</p>
|
python|wxpython
| 1 |
1,905,461 | 38,199,429 |
Why the order of listed files in a directory change between Windows and Ubuntu?
|
<p>I have noticed that the order in which Python 3.4 lists files in a directory varies depending on which OS an script is executed. I have a script that list files in a directory using the 'os' package. The order of the list varies depending if I run the script in Windows 10 or Ubuntu 14.04 LTS. For Windows 10 the order seems to be the creation date of the files, but I do not know on which basis Ubuntu lists files. Any idea on why this happens and how to avoid it? </p>
<p>My script does this:</p>
<pre><code>import os
my_path = 'my/directory/'
files = os.listdir(my_path)
</code></pre>
<p>Windows 10 result:</p>
<pre><code>['my_file_2014', 'my_file_2015', 'my_file_2016']
</code></pre>
<p>Ubuntu 14.04 result:</p>
<pre><code>['my_file_2014', 'my_file_2016', 'my_file_2015']
</code></pre>
|
<p>Straight from <code>os.listdir</code> <a href="https://docs.python.org/3.4/library/os.html" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>The list is in arbitrary order, and does not include the special entries '.' and '..' even if they are present in the directory.</p>
</blockquote>
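<p>So to avoid the OS-dependent ordering, sort the list yourself; for example (recreating the question's files in a temporary directory so this runs anywhere):</p>

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as my_path:
    # create the files in a deliberately scrambled order
    for name in ('my_file_2015', 'my_file_2014', 'my_file_2016'):
        open(os.path.join(my_path, name), 'w').close()
    files = sorted(os.listdir(my_path))   # lexicographic, identical on every OS
    print(files)  # ['my_file_2014', 'my_file_2015', 'my_file_2016']
```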
|
python|windows|ubuntu-14.04
| 2 |
1,905,462 | 31,163,537 |
Extracting css links using Beautiful Soup
|
<p>I'm new to Beautiful Soup and I'd like to extract the CSS and JS links of a website using it. So far, I've succeeded but with a small flaw.</p>
<pre><code>from bs4 import BeautifulSoup
import urllib.request
url="http://www.something.com"
page = urllib.request.urlopen(url)
soup = BeautifulSoup(page.read())
for link in soup.find_all('link'): #Lists out css links
print(link.get('href'))
</code></pre>
<p>On using the snippet above, I'm able to get all the links to the css files. However, I also get other links like favicon. I'm kind of new to BeautifulSoup and I'd like to know if there's any way I can filter it out to only the stylesheets.</p>
<p>Also, for extracting the JS, if I run a simple find_all on the 'script' tag, I get the JS links as well as any JS that's written directly within script tags, in a very untidy manner. If I run a similar loop as my CSS one,</p>
<pre><code>for link in soup.find_all('script'): #Lists out all JS links
print(link.get('src'))
</code></pre>
<p>I get the links without the direct JS written in the file within script tags. I'm pretty sure there's a better way to extract it, just that I'm a little confused. Have had a look at the href extraction link here, didn't help me too much.</p>
<p>I'm trying to make the code generic for all or most websites that I try it with so while this has worked for sites that I've used so far, some sites would use 'link' for things other than just the css links. So if you have a more generic logic or method I could use to retrieve css links / JSS links and code of a website, I'd greatly appreciate it!</p>
<p>Thanks!</p>
|
<p>You can pass <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all" rel="nofollow">extra parameters</a> to <code>find_all</code> to further filter your query.</p>
<p>Try:</p>
<pre><code>soup.find_all('link', rel="stylesheet")
soup.find_all('script', src=re.compile(".*"))
</code></pre>
|
python|css|beautifulsoup
| 2 |
1,905,463 | 40,313,389 |
IP Address Validation Using RegEx in Python Script MySQL Query
|
<p>I am attempting to extract IP Address from a MySQL database using REGEX ina a SELECT statement. When i run the Query in the MySQL console it returns the expected results:</p>
<pre><code>SELECT * FROM database.table WHERE field3 REGEXP '^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$';
</code></pre>
<p>When I run the same query through a python script it returns results that are not IP Addresses.</p>
<pre><code>query = ("SELECT * FROM database.table WHERE field3 REGEXP '^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$';")
</code></pre>
<p>Where am I going wrong?</p>
|
<p>Marc B. nailed it. I'll quote his comment.</p>
<blockquote>
<p>\. probably. one of those backslashes will be parsed away by python, leaving only . when it arrives at mysql, which then gets parsed as "this is a dot, not a single-char wildcard". \. directly in mysql will be parsed as backslash followed by dot – Marc B</p>
</blockquote>
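<p>To see the double parsing Marc B describes without a live database, count how many backslashes survive Python's own string parsing (an illustrative sketch):</p>

```python
# In a normal string literal Python collapses \\ into a single backslash,
# so MySQL's string parser then eats it and the regex sees a bare "." wildcard.
normal = "WHERE field3 REGEXP '\\.'"
# A raw string keeps both backslashes, so MySQL ends up with \. (a literal dot).
raw = r"WHERE field3 REGEXP '\\.'"

print(normal.count('\\'))  # 1
print(raw.count('\\'))     # 2
```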
|
python|mysql
| 0 |
1,905,464 | 40,267,183 |
Using F2Py with OpenACC gives import error in Python
|
<p>I am writing a simple test code to see how I could wrap a fortran code containing openacc regions and call from python. Here's the code.</p>
<pre><code>module test
use iso_c_binding, only: sp => C_FLOAT, dp => C_DOUBLE, i8 => C_INT
implicit none
contains
subroutine add (a, b, n, c)
integer(kind=i8), intent(in) :: n
real(kind=dp), intent(in) :: a(n)
real(kind=dp), intent(in) :: b(n)
real(kind=dp), intent(out) :: c(n)
integer(kind=i8) :: i
!$acc enter data create(a, b, c)
do i = 1, n
c(i) = a(i) + b(i)
end do
!$acc exit data delete(a, b, c)
end subroutine add
subroutine mult (a, b, c)
real(kind=dp), intent(in) :: a
real(kind=dp), intent(in) :: b
real(kind=dp), intent(out) :: c
c = a * b
end subroutine mult
end module test
</code></pre>
<p>Now, if I don't use openacc, it works fine and I can use both add and mult from python. But after I put the openacc region, f2py compiles it fine, but when I try to import into python, I get the following error</p>
<pre><code>ImportError: /home/vikram/Experiments/Experiments/fortran_python/hello.cpython-35m-x86_64-linux-gnu.so: undefined symbol: GOACC_enter_exit_data
</code></pre>
<p>This seems to tell me that Python needs to know how to find GOACC_enter_exit_data, I see that GOACC_enter_exit_data is in libgomp.so.1. How do I tell python its path. </p>
|
<p>I decided to check out what the executables that I directly create by compiling link to. So doing</p>
<pre><code>ldd a.out
</code></pre>
<p>gives me</p>
<pre><code>linux-vdso.so.1 => (0x00007ffed24a0000)
libcublas.so.7.5 => /usr/local/cuda/lib64/libcublas.so.7.5 (0x00007f04c2d45000)
libcudart.so.7.5 => /usr/local/cuda/lib64/libcudart.so.7.5 (0x00007f04c2ae7000)
libgfortran.so.3 => /home//Experiments/Nvidia/OpenACC/OLCFHack15/gcc6/install/lib64/libgfortran.so.3 (0x00007f04c27c1000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f04c24bb000)
libgomp.so.1 => /home//Experiments/Nvidia/OpenACC/OLCFHack15/gcc6/install/lib64/libgomp.so.1 (0x00007f04c228d000)
libgcc_s.so.1 => /home//Experiments/Nvidia/OpenACC/OLCFHack15/gcc6/install/lib64/libgcc_s.so.1 (0x00007f04c2077000)
libquadmath.so.0 => /home//Experiments/Nvidia/OpenACC/OLCFHack15/gcc6/install/lib64/libquadmath.so.0 (0x00007f04c1e38000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f04c1c1a000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f04c1855000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f04c164d000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f04c1449000)
libstdc++.so.6 => /home//Experiments/Nvidia/OpenACC/OLCFHack15/gcc6/install/lib64/libstdc++.so.6 (0x00007f04c10c9000)
/lib64/ld-linux-x86-64.so.2 (0x00007f04c4624000)
</code></pre>
<p>whereas the module created by f2py using</p>
<pre><code>f2py -c -m --f90flags='-fopenacc -foffload=nvptx-none -foffload=-O3 -O3 - fPIC' hello hello.f90
</code></pre>
<p>gives me</p>
<pre><code>linux-vdso.so.1 => (0x00007ffeeef63000)
libpython3.5m.so.1.0 => not found
libgfortran.so.3 => /home//Experiments/Nvidia/OpenACC/OLCFHack15/gcc6/install/lib64/libgfortran.so.3 (0x00007f841918f000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f8418e89000)
libgcc_s.so.1 => /home//Experiments/Nvidia/OpenACC/OLCFHack15/gcc6/install/lib64/libgcc_s.so.1 (0x00007f8418c73000)
libquadmath.so.0 => /home//Experiments/Nvidia/OpenACC/OLCFHack15/gcc6/install/lib64/libquadmath.so.0 (0x00007f8418a34000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f841866f000)
/lib64/ld-linux-x86-64.so.2 (0x00007f84196be000)
</code></pre>
<p>Clearly libgomp was being linked in the executable but not in the f2py created object. So I modified the f2py command to</p>
<pre><code>f2py -c -m --f90flags='-fopenacc -foffload=nvptx-none -foffload=-O3 -O3 -fPIC' hello hello.f90 -L/usr/local/cuda/lib64 -lcublas -lcudart -lgomp
</code></pre>
<p>And now it compiles and I can import into python without getting that error.</p>
|
python|fortran|f2py|openacc
| 1 |
1,905,465 | 29,159,423 |
Python, -1 return value
|
<p>So guys, I have a funky error going on and I'm wondering if you could help me out.
I have a function that's supposed to find the way to give change using the least coins.</p>
<pre><code>def change_counter(cost, paid):
changefor = paid - cost
</code></pre>
<p>and i have a a variable, changefor which is basically how much money you need to make change for.
Then I declared some variables, penny = 0 and such that track how many of each coin are being given out.
Then I made a while loop, that cycles until changefor hits zero. </p>
<pre><code>while(changefor > 0):
if changefor >= 100:
changefor - 100
++hundreddollar
elif,etc
</code></pre>
<p>The rest is really self explanatory, and just printing the amounts of each coin used, etc. Does anyone see what could be causing a problem? I get a return value of -1, and nothing from the while loop seems to be doing anything.</p>
|
<p><strong>Because you're computing things but not storing them anywhere</strong></p>
<p>First, thank you for teaching me something new. </p>
<pre><code>++x
</code></pre>
<p><em>does</em> compile, and also <em>does</em> nothing. Look here, <a href="https://stackoverflow.com/questions/3654830/why-are-there-no-and-operators-in-python">Why are there no ++ and -- operators in Python?</a>, but there are no increment/decrement operators in python. Similarly, </p>
<pre><code>changefor - 100
</code></pre>
<p>Doesn't do anything. Sure, it evaluates <code>changefor</code> and subtracts 100, but then it doesn't store that value anywhere. Pitfall of a dynamic language.</p>
<pre><code>changefor = changefor - 100
</code></pre>
<p>or</p>
<pre><code>changefor -= 100
</code></pre>
<p>will do you better</p>
<p>Edit: I was about to ask why the decrement syntax is accepted and found that this person <a href="https://stackoverflow.com/questions/1485841/behaviour-of-increment-and-decrement-operators-in-python">Behaviour of increment and decrement operators in Python</a> already ran into this oddity. </p>
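<p>Putting the two fixes together, a minimal working version of the loop might look like this (tracking only the hundred-dollar bills for brevity):</p>

```python
def change_counter(cost, paid):
    changefor = paid - cost
    hundreddollar = 0
    while changefor >= 100:
        changefor -= 100      # store the result back into changefor
        hundreddollar += 1    # Python has no ++, use += 1 instead
    return hundreddollar, changefor

print(change_counter(50, 350))  # (3, 0): three hundreds, nothing left over
```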
|
python
| 2 |
1,905,466 | 52,141,147 |
Python function that takes a positive integer n and returns the sum of the squares of all the positive integers smaller than n
|
<p>What I think:</p>
<pre><code>def sum_square(n):
result = 0
if n > 0:
i = iter(n)
for i in n:
result += i * i
return result
elif n <= 0:
raise ValueError("n should be positive")
print(sum_square(4))
</code></pre>
<p>However, the terminal displays that int object is not iterable.
What's wrong with my answer? Could you do the modification based on my thought?</p>
|
<h2>Closed form</h2>
<p>First of all, know that the sum of squares has a <a href="http://www.wolframalpha.com/input/?i=sum(n%5E2)" rel="nofollow noreferrer">closed form</a>. Here is the formula shifted to sum up to <code>n - 1</code>.</p>
<pre><code>def sum_square(n):
if n < 0:
raise ValueError('n must be positive')
return n*(n-1)*(2*n-1)//6
</code></pre>
<p>Actually, all sums of powers have <a href="https://en.wikipedia.org/wiki/Faulhaber%27s_formula#Examples" rel="nofollow noreferrer">a known closed form</a>.</p>
<h2>About your code</h2>
<p>You cannot call <code>iter(n)</code> on an integer, you probably meant <code>range(n)</code>.</p>
<pre><code>def sum_square(n):
result = 0
if n > 0:
for i in range(n):
result += i * i
return result
elif n <= 0:
raise ValueError("n should be positive")
</code></pre>
<p>Although the above could be simplified using <code>sum</code>.</p>
<pre><code>def sum_square(n):
return sum(x**2 for x in range(n))
</code></pre>
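<p>A quick sanity check that the closed form and the loop-based sum agree:</p>

```python
def closed_form(n):
    # sum of squares of the positive integers smaller than n
    return n * (n - 1) * (2 * n - 1) // 6

def brute_force(n):
    return sum(x ** 2 for x in range(n))

assert all(closed_form(n) == brute_force(n) for n in range(1, 100))
print(closed_form(4))  # 14, i.e. 1 + 4 + 9
```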
|
python
| 4 |
1,905,467 | 52,140,425 |
Collect cells in pandas df that are listed in another pandas df (with same index)
|
<p>Consider the following example (the two elements of interest are <code>final_df</code> and <code>pivot_df</code>. The rest of the code is just to construct these two df's):</p>
<pre><code>import numpy
import pandas
numpy.random.seed(0)
input_df = pandas.concat([pandas.Series(numpy.round_(numpy.random.random_sample(10,), 2)),
pandas.Series(numpy.random.randint(0, 2, 10))], axis = 1)
input_df.columns = ['key', 'val']
pivot_df = input_df.pivot(columns = 'key', values = 'val')\
.fillna(method = 'pad')\
.cumsum()
index_df = pivot_df.notnull()\
.multiply(pivot_df.columns, axis = 1)\
.replace({0.0: numpy.nan})\
.values
final_df = numpy.delete(numpy.partition(index_df, 3, axis = 1),
numpy.s_[3:index_df.shape[1]], axis = 1)
final_df.sort(axis = 1)
final_df = pandas.DataFrame(final_df)
</code></pre>
<p><code>final_df</code> contains as many rows as <code>pivot_df</code>. I want to use these two to construct a third df: <code>bingo_df</code>.</p>
<p><code>bingo_df</code> should have the same dimensions as <code>final_df</code>. Then, the cells of <code>bingo_df</code> should contain:</p>
<ul>
<li>Whenever the entry <code>(row = i, col = j)</code> of <code>final_df</code> is <code>numpy.nan</code>,
the entry <code>(i,j)</code> of <code>bingo_df</code> should be <code>numpy.nan</code> as well.</li>
<li>Otherwise, [whenever the entry <code>(i, j)</code> of <code>final_df</code> is not <code>numpy.nan</code>] the entry <code>(i,j)</code> of <code>bingo_df</code> should be the value at cell <code>[i, final_df[i, j].value]</code> of <code>pivot_df</code> (in fact <code>final_df[i, j].value</code> is either the name of a column of <code>pivot_df</code> or <code>numpy.nan</code>)</li>
</ul>
<h1>Expected ouput:</h1>
<p>so the first row of <code>final_df</code> is</p>
<p><code>0.55, nan, nan</code>.</p>
<p>So I'm expecting the first row of <code>bingo_df</code> to be:</p>
<p><code>0.0, nan, nan</code></p>
<p>because the value in cell <code>(row = 0, col = 0.55)</code> of <code>pivot_df</code> is <code>0</code>
(and the two subsequent <code>numpy.nan</code> in the first row of <code>final_df</code> should also be <code>numpy.nan</code> in <code>bingo_df</code>)</p>
<p>so the second row of <code>final_df</code> is</p>
<p><code>0.55, 0.72, nan</code></p>
<p>So I'm expecting the second row of <code>bingo_df</code> to be:</p>
<p><code>0.0, 1.0, nan</code></p>
<p>because the value in cell <code>(row = 1, col = 0.55)</code> of <code>pivot_df</code> is <code>0.0</code>
and the value in cell <code>(row = 1, col = 0.72)</code> of <code>pivot_df</code> is <code>1.0</code></p>
|
<p>IIUC <code>lookup</code></p>
<pre><code>s=final_df.stack()
pd.Series(pivot_df.lookup(s.index.get_level_values(0),s),index=s.index).unstack()
Out[87]:
0 1 2
0 0.0 NaN NaN
1 0.0 1.0 NaN
2 0.0 1.0 2.0
3 0.0 0.0 2.0
4 0.0 0.0 0.0
5 0.0 0.0 0.0
6 0.0 1.0 0.0
7 0.0 2.0 0.0
8 0.0 3.0 0.0
9 0.0 0.0 4.0
</code></pre>
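<p>One caveat: <code>DataFrame.lookup</code> was deprecated in pandas 1.2 and removed in 2.0, so on recent versions the same row/column lookup can be done with <code>get_indexer</code> plus NumPy indexing. A sketch with small made-up frames standing in for <code>pivot_df</code> and <code>final_df</code>:</p>

```python
import numpy as np
import pandas as pd

# Tiny stand-ins just to show the indexing pattern
pivot_df = pd.DataFrame([[0.0, 1.0], [2.0, 3.0]], columns=[0.55, 0.72])
final_df = pd.DataFrame([[0.55, np.nan], [0.55, 0.72]])

s = final_df.stack()  # stack() drops the NaNs, keeping (row, column) labels
rows = pivot_df.index.get_indexer(s.index.get_level_values(0))
cols = pivot_df.columns.get_indexer(s)
bingo_df = pd.Series(pivot_df.to_numpy()[rows, cols], index=s.index).unstack()
print(bingo_df)
```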
|
python|pandas
| 3 |
1,905,468 | 52,142,155 |
Have the boolean value of var follow into another file
|
<p>I have two files in my text-based game. The variable that is being assigned will be keep_note.</p>
<p>If you enter "Take" the boolean value True is assigned to the have_note-001
but in the next file when if have_note_001 == True I get an error saying
have_note-001 is undefined</p>
<pre><code>If keep_note is == "Take":
have_note_001 = True
</code></pre>
<p>Then in the next file I want that True value to travel over to the next file.</p>
<pre><code>If have_note_001 == True: print("This Value Is True")
keep_paper = input("Do you want to Leave the piece of paper or Take it? > ")
if keep_paper == "Take":
have_note_01 = True
if have_note_01 == True:
print("You have chosen to keep the piece of paper")
print("You leave the house with the note(" + note_001 + ")")
</code></pre>
<p>This is my next file</p>
<pre><code>from intros.intro_001 import have_note_001
if have_note_01 == True:
print("True")
elif have_note_01 == False:
print("False")
</code></pre>
<p>In the second file, the import itself is working:
I am importing <code>have_note_001</code>. It just isn't transferring the value <code>True</code> over; it doesn't seem to remember the value assigned in the first file when the second file reads it.</p>
<p>How can I have the value assigned to a variable carry over to another file when imported?</p>
|
<p>I'm not sure what you are asking for is in your best interest. The values stored in variables are already carried over by default when you import the file they come from. However, this type of sporadic architecture is not really considered good practice. Let me give you some feedback on your program. First, let's give it some input validation: </p>
<pre><code># start off setting keep_paper to nothing
keep_paper = ''
# As long as the player does not enter 'take' or 'leave' we are going to
# keep asking them to enter a proper response.
while keep_paper not in ['take', 'leave']:
# here we are going to "try" and ask the player for his choice
try:
# here we are getting input from the user with input(...)
# then converting it into a string with str(...)
# then converting it to lowercase with .lower()
# all together str(input(...)).lower()
keep_paper = str(input("Do you want to Leave the piece of paper or Take it? > ")).lower()
# if the player entered an invalid response such as "53" we will go back
# to the beginning and ask for another response.
except ValueError:
print("Sorry, I didn't understand that.")
# ask the user to provide valid input
continue
if have_note_01 == True:
print("True")
elif have_note_01 == False:
print("False")
</code></pre>
<p>Now let's address the main topic of your question. Having the value assigned to a variable carry over on imports. As I've already mentioned, this is generally not something that you want, which is why most Python programs have code including: </p>
<pre><code>if __name__ == "__main__":
# do xyz....
</code></pre>
<p>This ensures that <code>xyz</code> is only run if the file is being ran, and will not run if the file is imported. </p>
<p>For good measure, I recommend you checkout: <a href="https://github.com/phillipjohnson/text-adventure-tut/tree/master/adventuretutorial" rel="nofollow noreferrer">https://github.com/phillipjohnson/text-adventure-tut/tree/master/adventuretutorial</a>, reading over the code in this project will give you a better idea at how you might want to tackle your own project. (The basics of functions, classes and inheritance) </p>
|
python|python-3.x|variables|pycharm|text-based
| 1 |
1,905,469 | 51,755,825 |
Create encrypted Zip file from multiple files on Python
|
<p>I want to create an encrypted zip file which inlcudes 3 files. </p>
<p>For now I have this solution which allows me to create an encrypted zip with only one file. </p>
<pre><code>loc_7z = r"C:\Program Files\7-Zip\7z.exe"
archive_command = r'"{}" a "{}" "{}"'.format(loc_7z, 'file.zip', 'file_name.xls')
subprocess.call(archive_command, shell=True)
</code></pre>
<p>When I try to pass a list of files as the third parameter it crashes. </p>
<p>What format should the list of files have? I'm open to try different approaches as well. </p>
<p>Thank you. </p>
|
<p>The easy answer is to not use <code>shell=True</code>, and instead create a list of arguments. Then you can just stick the list of filenames onto that list of arguments. And you don't have to worry about quoting spaces or other funky characters in a way that works even if the filenames have quotes in them, etc.</p>
<pre><code>import subprocess

def makezip(*files):
    loc_7z = r"C:\Program Files\7-Zip\7z.exe"
    archive_command = [loc_7z, 'a', 'file.zip', *files]
    subprocess.call(archive_command)
makezip('file_name.xls')
makezip('file name.xls')
makezip('file1.xls', 'file2.xls', 'file3.xls')
makezip(*big_old_list_of_files)
</code></pre>
<hr>
<p>If you really must use <code>shell=True</code> for some reason, then you can't just turn the list of files into a string, you have to add each string to the end. Sort of like this:</p>
<pre><code>fileargs = ' '.join('"{}"'.format(file) for file in files)
archive_command = r'"{}" a "{}" {}'.format(loc_7z, 'file.zip', fileargs)
</code></pre>
<p>That's already pretty horribleโand making it work with proper quoting rules for Windows is even more horrible. Just let <code>subprocess</code> do the work for you.</p>
|
python|encryption|zip
| 1 |
1,905,470 | 59,706,099 |
Django DEBUG=False getting server 500 error
|
<p>I'm getting server error(500) after I set DEBUG to False, both in local and after deploying to Heroku.
Everything is Ok when DEBUG mode is True.</p>
<p>Thanks for help</p>
|
<p>I think you need to configure your logging and mail correctly so you see what exactly is happening.</p>
<p><a href="https://docs.djangoproject.com/en/3.0/topics/email/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/3.0/topics/email/</a></p>
<blockquote>
<p>Mail is sent using the SMTP host and port specified in the EMAIL_HOST and EMAIL_PORT settings. The EMAIL_HOST_USER and EMAIL_HOST_PASSWORD settings, if set, are used to authenticate to the SMTP server, and the EMAIL_USE_TLS and EMAIL_USE_SSL settings control whether a secure connection is used.</p>
</blockquote>
<p><a href="https://docs.djangoproject.com/en/3.0/topics/logging/#django.utils.log.RequireDebugFalse" rel="nofollow noreferrer">https://docs.djangoproject.com/en/3.0/topics/logging/#django.utils.log.RequireDebugFalse</a></p>
<pre><code>'filters': {
    'require_debug_false': {
        '()': 'django.utils.log.RequireDebugFalse',
    }
},
'handlers': {
    'mail_admins': {
        'level': 'ERROR',
        'filters': ['require_debug_false'],
        'class': 'django.utils.log.AdminEmailHandler'
    }
},
</code></pre>
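<p>Concretely, a minimal sketch of the relevant settings might look like the following; every value here is a placeholder you must replace with your own:</p>

```python
# Hypothetical settings.py fragment -- all values below are placeholders.
DEBUG = False
ALLOWED_HOSTS = ['your-app.herokuapp.com']

ADMINS = [('You', 'you@example.com')]   # unhandled-error tracebacks are mailed to ADMINS
EMAIL_HOST = 'smtp.example.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'you@example.com'
EMAIL_HOST_PASSWORD = 'secret'
EMAIL_USE_TLS = True
```

<p>With mail configured, the error emails will contain the traceback that the 500 page hides when <code>DEBUG=False</code>.</p>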
|
python|django
| 1 |
1,905,471 | 18,942,217 |
Pythonic way of doing so?
|
<p>I wish to have the following simple algorithm implemented in Python.
Here is the pseudo-code for it.</p>
<pre><code>for elem in myList:
    if only one elem satisfies myCondition:
        returns it
    if more than one satisfies myCondition:
        randomly return one out of them
    if none satisfies myCondition:
        randomly return any one
</code></pre>
<p>I surely can implement it in a C-style snippet. But I am here looking for <strong>the most Pythonic way of doing so</strong>.</p>
|
<pre><code>return random.choice(filter(myCondition, myList) or myList)
</code></pre>
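<p>One caveat: in Python 3, <code>filter()</code> returns an iterator, which is always truthy and cannot be indexed, so wrap it in <code>list()</code> first. A small sketch with an example condition:</p>

```python
import random

# Python 3 version: materialize the filter result before the truthiness test.
my_list = [1, 2, 3, 4, 5]
my_condition = lambda x: x % 2 == 0   # example condition: even numbers

matches = list(filter(my_condition, my_list))   # [2, 4]
result = random.choice(matches or my_list)      # falls back to my_list when empty
```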
|
python
| 10 |
1,905,472 | 67,488,964 |
My function that has to return the 10 most frequent words while excluding words doesn't work when it seems like it should
|
<p>Python beginner here. I made this function to find the 10 most frequent words in a dictionary called "Counts".
The thing is, I have to exclude all the items from the englprep, englconj, englpronouns and specialwords lists from the "Counts" dictionary, and then get the top 10 most frequent words returned as a dictionary. Basically I have to get the "getMostFrequent()" function to take the "Counts" dictionary and the specified lists of "no-no" words as an input to output a new dictionary containing the 10 most frequent words.</p>
<p>I have tried for hours but I can't for the life of me get this to work.
The expected output should be somewhere along the lines of <code>{'river': 755, 'party': 527, 'water': 472, ...}</code>, but I just get:</p>
<pre><code>{'the': 16517, 'of': 8550, 'and': 6390, 'to': 5471, 'a': 3508,
 'in': 3298, 'was': 2371, 'on': 2094, 'that': 1893, 'he': 1557}
</code></pre>
<p>which contains words that I specified not to be included :/
Would really appreciate some help or maybe even a possible solution. Thanks in advance to anyone willing to help.</p>
<p>PS! I use python 3.8</p>
<pre><code>def countWords():
    Counts = {}
    for x in wordList:
        if not x in Counts:
            Counts[x] = wordList.count(x)
    return Counts

def getMostFrequent():
    exclWordList = tuple(englConj), tuple(englPrep), tuple(englPronouns), tuple(specialWords)
    topNumber = 10
    topFreqWords = dict(sorted(Counts.items(), key=lambda x: x[1], reverse=True)[:topNumber])
    new_dict = {}
    for key, value in topFreqWords.items():
        for index in exclWordList:
            for y in index:
                if value is not y:
                    new_dict[key] = value
    topFreqWords = new_dict
    return topFreqWords

if __name__ == "__main__":
    Counts = countWords()
    englPrep = ['about', 'beside', 'near', 'to', 'above', 'between', 'of',
                'towards', 'across', 'beyond', 'off', 'under', 'after', 'by',
                'on', 'underneath', 'against', 'despite', 'onto', 'unlike',
                'along', 'down', 'opposite', 'until', 'among', 'during', 'out',
                'up', 'around', 'except', 'outside', 'along', 'as', 'for',
                'over', 'via', 'at', 'from', 'past', 'with', 'before', 'in',
                'round', 'within', 'behind', 'inside', 'since', 'without',
                'below', 'into', 'than', 'beneath', 'like', 'through']
    englConj = ['for', 'and', 'nor', 'but', 'or', 'yet', 'so']
    englPronouns = ['you', 'he', 'she', 'him', 'her', 'his', 'hers', 'yours']
    specialWords = ['the']

    topFreqWords = getMostFrequent()
</code></pre>
|
<p>In your code you take top 10 most frequent words including stopwords. You need to remove stopwords from Counts before sorting dict by value.</p>
<pre><code>def getMostFrequent(Counts, englPrep, englConj, englPronouns, specialWords):
    exclWordList = set(englConj + englPrep + englPronouns + specialWords)
    popitems = exclWordList.intersection(Counts.keys())
    for i in popitems:
        Counts.pop(i)
    topNumber = 10
    topFreqWords = dict(sorted(Counts.items(), key=lambda x: x[1], reverse=True)[:topNumber])
    return topFreqWords
</code></pre>
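<p>A quick self-contained check of the remove-then-sort idea (the counts below are made up):</p>

```python
# Standalone sketch of the approach: drop excluded words, then take the top N.
def get_most_frequent(counts, excluded, top_number=10):
    counts = dict(counts)  # work on a copy so the caller's dict survives
    for word in set(excluded).intersection(counts):
        counts.pop(word)
    return dict(sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:top_number])

sample = {'the': 16517, 'of': 8550, 'river': 755, 'party': 527, 'water': 472}
top = get_most_frequent(sample, ['the', 'of', 'and', 'he'])
# top == {'river': 755, 'party': 527, 'water': 472}
```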
|
python|python-3.x|dictionary
| 0 |
1,905,473 | 19,437,207 |
Python: parsing structured text to CSV format
|
<p>I want to convert plain structured text files to the CSV format using Python.</p>
<p>The input looks like this</p>
<pre><code>[-------- 1 -------]
Version: 2
Stream: 5
Account: A
[...]
[------- 2 --------]
Version: 3
Stream: 6
Account: B
[...]
</code></pre>
<p>The output is supposed to look like this:</p>
<pre><code>Version; Stream; Account; [...]
2; 5; A; [...]
3; 6; B; [...]
</code></pre>
<p>I.e. the input is structured text records delimited by <code>[----<sequence number>----]</code> and containing <code><key>: <values></code>-pairs and the ouput should be CSV containing one record per line.</p>
<p>I am able to retrive the <code><key>: <values></code>-pairs into CSV format via</p>
<pre><code>colonseperated = re.compile(' *(.+) *: *(.+) *')
fixedfields = re.compile('(\d{3} \w{7}) +(.*)')
</code></pre>
<p>-- but I have trouble to recognize beginning and end of the structured text records and with the re-writing as CSV line-records. Furthermore I would like to be able to separate different type of records, i.e. distinguish between - say - <code>Version: 2</code> and <code>Version: 3</code> type of records.</p>
|
<p>Reading the list is not that hard:</p>
<pre><code>def read_records(iterable):
    record = {}
    for line in iterable:
        if line.startswith('[------'):
            # new record, yield previous
            if record:
                yield record
            record = {}
            continue
        key, value = line.strip().split(':', 1)
        record[key.strip()] = value.strip()
    # file done, yield last record
    if record:
        yield record
</code></pre>
<p>This produces dictionaries from your input file.</p>
<p>From this you can produce CSV output using the <code>csv</code> module, specifically the <a href="http://docs.python.org/2/library/csv.html#csv.DictWriter" rel="nofollow"><code>csv.DictWriter()</code> class</a>:</p>
<pre><code># List *all* possible keys, in the order the output file should list them
headers = ('Version', 'Stream', 'Account', ...)

with open(inputfile) as infile, open(outputfile, 'wb') as outfile:
    records = read_records(infile)
    writer = csv.DictWriter(outfile, headers, delimiter=';')
    writer.writeheader()
    # and write
    writer.writerows(records)
</code></pre>
<p>Any header keys missing from a record will leave that column empty for that record. Any <em>extra</em> headers you missed will raise an exception; either add those to the <code>headers</code> tuple, or set the <code>extrasaction</code> keyword to the <code>DictWriter()</code> constructor to <code>'ignore'</code>.</p>
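<p>Putting the two pieces together on an in-memory sample (here <code>io.StringIO</code> stands in for the real input and output files):</p>

```python
import csv
import io

def read_records(iterable):
    record = {}
    for line in iterable:
        if line.startswith('[------'):
            if record:
                yield record
            record = {}
            continue
        key, value = line.strip().split(':', 1)
        record[key.strip()] = value.strip()
    if record:
        yield record

sample = io.StringIO(
    "[-------- 1 -------]\n"
    "Version: 2\n"
    "Stream: 5\n"
    "[------- 2 --------]\n"
    "Version: 3\n"
    "Stream: 6\n"
)
out = io.StringIO()
writer = csv.DictWriter(out, ('Version', 'Stream'), delimiter=';')
writer.writeheader()
writer.writerows(read_records(sample))
# out.getvalue() now holds:
# Version;Stream
# 2;5
# 3;6
```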
|
python|csv|text
| 1 |
1,905,474 | 13,584,900 |
What is PyOpenGL's "context specific data"?
|
<p>PyOpenGL docs say:</p>
<blockquote>
<p>Because of the way OpenGL and ctypes handle, for instance, pointers, to array data, it is often necessary to ensure that a Python data-structure is retained (i.e. not garbage collected). This is done by storing the data in an array of data-values that are indexed by a context-specific key. The functions to provide this functionality are provided by the OpenGL.contextdata module.</p>
</blockquote>
<p>When exactly is this the case?</p>
<p>One situation I've got in my mind is client-side vertex arrays back from OpenGL 1, but they have been replaced by buffer objects for years. A client side array isn't required any more after a buffer object is filled (= right after <code>glBufferData</code> returns, I pressume).</p>
<p>Are there any scenarios I'm missing?</p>
|
<blockquote>
<p>Are there any scenarios I'm missing?</p>
</blockquote>
<p>Buffer mappings obtained through <code>glMapBuffer</code>.</p>
|
python|opengl|ctypes|pyopengl
| 1 |
1,905,475 | 43,900,218 |
Exclude more than one condition with regex Scrapy
|
<p>I'm new to Scrapy.</p>
<p>I want to exclude two elements in the same item. Below I'm excluding "SKU:", and I want to handle "<strong>sku</strong>" as well, but I didn't find the way.</p>
<pre><code>'SKU': ready.xpath(SKU).re_first(r'SKU:\s*(.*)'), # Limpia SKU:
</code></pre>
<p>Any suggestions? Thanks so much.</p>
|
<p>Not sure exactly what you want, but it looks like you need a regex that matches both "SKU" and "sku". In <code>re_first</code> you can use a compiled Python regular expression rather than a string, so it could be done like this:</p>
<pre><code>import re
re_sku = re.compile(r'sku:*\s*(.+)', re.IGNORECASE)
...
'SKU': ready.xpath(SKU).re_first(re_sku),
</code></pre>
|
python|python-2.7|web-scraping|scrapy
| 2 |
1,905,476 | 43,590,397 |
TextBlob, totally inaccurate
|
<p>Looking at responses from a recent survey we did. I do not think this respondent is all that happy. Here, TextBlob would have me believe his sentiment has reached a positive ceiling. If I remove the word 'best' from the string sentiment score turns to '0'. </p>
<p>Would you help to re-instill my trust in TextBlob? What am I doing wrong in this very simple application? </p>
<pre><code>a = "Follow on rounds for the best prospects. Some choke to death now."
b = TextBlob(a)
print b.sentiment
</code></pre>
<p><em>Sentiment(polarity=1.0, subjectivity=0.3)</em></p>
<p>Thanks, </p>
|
<p>You need to understand that a machine, even after learning a few things, is not a human. The statement "Follow on rounds for the best prospects. Some choke to death now." is a bit confusing even for a human to assign a sentiment to, as there seems to be little or no relation between the first and second sentences.</p>
<p>Also, you may see many other genuine cases where the polarity is the opposite of something very obvious. If you need to deal with many such cases, you can use the following code, which may improve your results drastically.</p>
<pre><code>from textblob import TextBlob
from textblob.sentiments import NaiveBayesAnalyzer
a = "Follow on rounds for the best prospects. Some choke to death now."
b = TextBlob(a, analyzer=NaiveBayesAnalyzer())
print(b.sentiment)
</code></pre>
<p>For your example (which I personally believe is not a good one and confusing even for humans), here is the result:<br />
<code>Sentiment(classification='pos', p_pos=0.5730186699265399, p_neg=0.42698133007345906)</code></p>
<p>It is still positive, but you can see the difference between the pos and neg scores. For me it has been successful most of the time with relevant & meaningful sentences.</p>
<p>For explanation of what was changed in code, see below:</p>
<blockquote>
<p>The textblob.sentiments module contains two sentiment analysis implementations, PatternAnalyzer (based on the pattern library) and NaiveBayesAnalyzer (an NLTK classifier trained on a movie reviews corpus).</p>
<p>The default implementation is PatternAnalyzer, but you can override the analyzer by passing another implementation into a TextBlob's constructor.</p>
</blockquote>
|
python|python-2.7|nltk|textblob
| 9 |
1,905,477 | 54,548,402 |
How can I fully utilise the GPU when training my model in FloyHub?
|
<p>I am training the model below on FloydHub using Jupyter Notebook, but whenever I train the model it takes a lot of time (1 minute). The stats below the notebook display that only 2% of the GPU is utilised.
I have tried running the command <code>torch.cuda.is_available()</code> and it returns <code>True</code>.</p>
<pre><code>import torch
from torch import nn,optim
import torch.nn.functional as F
from torchvision import datasets,transforms

transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5)),
                                ])

trainset = datasets.MNIST('~/.pytorch/MNIST_data/',download=True,train=True,transform=transform)
trainloader = torch.utils.data.DataLoader(trainset,batch_size=64,shuffle=True)

testset = datasets.MNIST('~/.pytorch/MNIST_data/',download=True,train=False,transform=transform)
testloader = torch.utils.data.DataLoader(testset,batch_size=64,shuffle=True)

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(784,256).cuda()
        self.output = nn.Linear(256,10).cuda()
        self.dropout = nn.Dropout(p=0.2).cuda()

    def forward(self,x):
        x = x.view(x.shape[0],-1).cuda()
        x = self.hidden(x).cuda()
        x = torch.sigmoid(x).cuda()
        x = self.dropout(x).cuda()
        x = self.output(x).cuda()
        x = F.log_softmax(x,dim=1).cuda()
        return x.cuda()

model = Classifier()
model.cuda()
criterion = nn.NLLLoss().cuda()
optimizer = optim.SGD(model.parameters(),lr=0.5)

epochs = 30
training_losses = []
test_losses = []

for e in range(epochs):
    train_loss = 0
    test_loss = 0
    accuracy = 0
    for images,labels in trainloader:
        optimizer.zero_grad()
        output = model(images)
        labels = labels.cuda()
        loss = criterion(output,labels)
        loss.backward()
        optimizer.step()
        train_loss+=loss.item()
    with torch.no_grad():
        # set the model to testing mode
        model.eval()
        for images,labels in testloader:
            output = model(images)
            labels = labels.cuda()
            test_loss+=criterion(output,labels)
            ps = torch.exp(output)
            # get the class with the highest probability
            _,top_class = ps.topk(1,dim=1)
            equals = top_class == labels.view(*top_class.shape)
            accuracy+=torch.mean(equals.type(torch.FloatTensor))
    model.train()
    training_losses.append(train_loss/len(trainloader))
    test_losses.append(test_loss/len(testloader))
    if((e+1)%5 == 0):
        print(f"Epoch:{e+1}\n",
              f"Training Loss:{train_loss/len(trainloader)}\n",
              f"Test Loss:{test_loss/len(testloader)}\n",
              f"Test Accuracy:{(accuracy/len(testloader)*100)}\n\n")
</code></pre>
|
<p>Three suggestions, based on MNIST being a small dataset:<br>
- pre-load your data: i.e. don't use the standard <code>dataloader</code>; pre-load everything with <code>.to(cuda)</code> once and iterate over that.<br>
- increase your batch size.<br>
- don't use MLPs (linear layers); try CNNs instead.</p>
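<p>A rough sketch of the first two suggestions; the shapes mimic MNIST but the data here is synthetic and the sizes are illustrative:</p>

```python
import torch

# Pre-load the whole (small) dataset onto the target device once, then slice
# device-resident batches instead of paying DataLoader/transfer cost each step.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
images = torch.randn(1024, 1, 28, 28, device=device)   # stand-in for MNIST images
labels = torch.randint(0, 10, (1024,), device=device)  # stand-in for MNIST labels

batch_size = 512  # much larger than 64; keeps the GPU busier per step
batches = [(images[i:i + batch_size], labels[i:i + batch_size])
           for i in range(0, images.shape[0], batch_size)]
```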
|
machine-learning|deep-learning|nvidia|pytorch
| 0 |
1,905,478 | 54,499,193 |
Python - Scraping web page for information that only appears after scrolling
|
<p>I'm trying to scrape <a href="https://en.arguman.org/fallacies" rel="nofollow noreferrer">this web page</a> for the arguments that are in each of the headers.</p>
<p>What I've tried to do is scroll all the way to the bottom of the page so all the arguments are revealed (it doesn't take that long to reach the bottom of the page) and then extract the html code from there.</p>
<p>Here's what I've done. I got the scrolling code from <a href="https://stackoverflow.com/questions/20986631/how-can-i-scroll-a-web-page-using-selenium-webdriver-in-python">here</a> by the way.</p>
<pre><code>SCROLL_PAUSE_TIME = 0.5

#launch url
url = 'https://en.arguman.org/fallacies'

#create chrome session
driver = webdriver.Chrome()
driver.implicitly_wait(30)
driver.get(url)

#get scroll height
last_height = driver.execute_script("return document.body.scrollHeight")

while True:
    # Scroll down to bottom
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    # Wait to load page
    time.sleep(SCROLL_PAUSE_TIME)
    # Calculate new scroll height and compare with last scroll height
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height

http = urllib3.PoolManager()
response = http.request('GET', url)
soup = BeautifulSoup(response.data, 'html.parser')

claims_h2 = soup('h2')
claims = []
for c in claims_h2:
    claims.append(c.get_text())

for c in claims:
    print(c)
</code></pre>
<p>This is what I get, which are all the arguments you would see without scrolling and having more added to the page.</p>
<pre><code>Plants should have the right to vote.
Plants should have the right to vote.
Plants should have the right to vote.
Postmortem organ donation should be opt-out
Jimmy Kimmel should not bring up inaction on gun policy (now)
A monarchy is the best form of government
A monarchy is the best form of government
El lenguaje inclusivo es innecesario
Society suffers the most when dealing with people having mental disorders
Illegally downloading copyrighted music and other files is morally wrong.
</code></pre>
<p>If you look and scroll all the way to the bottom of the page you'll see these arguments as well as many others.</p>
<p>Basically, my code doesn't seem to parse the updated html code.</p>
|
<p>It doesn't make sense to open the site with Selenium, do all the scrolling, and then make the request again with <code>urllib</code>. The two processes are completely separate and unrelated.</p>
<p>Instead, when the scrolling is complete, pass <code>driver.page_source</code> to <code>BeautifulSoup</code> and extract the content from there:</p>
<pre><code>import time

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()
driver.implicitly_wait(30)

try:
    SCROLL_PAUSE_TIME = 0.5
    driver.get("https://en.arguman.org/fallacies")
    last_height = driver.execute_script("return document.body.scrollHeight")
    while True:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(SCROLL_PAUSE_TIME)
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break
        last_height = new_height
    soup = BeautifulSoup(driver.page_source, "html.parser")
    for c in soup("h2"):
        print(c.get_text())
finally:
    driver.quit()
</code></pre>
<p>Result:</p>
<pre>
Plants should have the right to vote.
Plants should have the right to vote.
Plants should have the right to vote.
Postmortem organ donation should be opt-out
Jimmy Kimmel should not bring up inaction on gun policy (now)
A monarchy is the best form of government
A monarchy is the best form of government
El lenguaje inclusivo es innecesario
Society suffers the most when dealing with people having mental disorders
Illegally downloading copyrighted music and other files is morally wrong.
Semi-colons are pointless in Javascript
You can't measure how good a programming language is.
You can't measure how good a programming language is.
Semi-colons are pointless in Javascript
Semi-colons are pointless in Javascript
Semi-colons are pointless in Javascript
...
</pre>
|
python|selenium|web-scraping|beautifulsoup
| 4 |
1,905,479 | 54,599,459 |
ReGex Parsing Underscore Character
|
<p>I have this sentence that I need to parse. The sentence is the following below:</p>
<pre><code>I'm sure _I_ shan't be able! I shall be a great deal too far off to trouble myself
</code></pre>
<p>The outcome that I need below is :</p>
<pre><code>I'm sure I shan't be able! I shall be a great deal too far off to trouble myself
</code></pre>
<p>What would the expression look like?</p>
<p>The current expression I am using is </p>
<pre><code>r'(?!_)\w+'
</code></pre>
<p>but the outcome I get is:</p>
<pre><code>I'm sure I_ shan't be able! I shall be a great deal too far off to trouble myself
</code></pre>
<p>Any suggestions would be appreciated thank you!</p>
|
<p>I guess you want something like this.</p>
<pre><code>text = "I'm sure _I_ shan't be able! I shall be a great deal too far off to trouble myself"
print(text.replace('_',''))
#expected output:
#I'm sure I shan't be able! I shall be a great deal too far off to trouble myself
</code></pre>
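<p>If a regular expression is specifically required (for instance to strip only the paired underscores that mark a word, rather than every underscore), <code>re.sub</code> with a capture group is one option:</p>

```python
import re

text = "I'm sure _I_ shan't be able! I shall be a great deal too far off to trouble myself"
# Replace _word_ with word; lone underscores elsewhere are left untouched.
cleaned = re.sub(r'_(\w+)_', r'\1', text)
# cleaned == "I'm sure I shan't be able! I shall be a great deal too far off to trouble myself"
```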
|
python|regex|python-3.x
| 0 |
1,905,480 | 9,156,490 |
why is models.ForeignKey advantageous?
|
<p>In my models I have a <code>Concert</code> class and a <code>Venue</code> class. Each venue has multiple concerts. I have been linking the Concert class to a Venue with a simple </p>
<p><code>venue = models.IntegerField(max_length = 10)</code></p>
<p>...containing the venue object's primary key. A colleague suggested we use <code>venue = models.ForeignKey(Venue)</code> instead. While this also works, I wonder if it's worth the switch because I have been able to parse out all the concerts for a venue by simply using the venue's ID in <code>Concert.objects.filter(venue=4)</code> the same way I could do this with a <code>ForeignKey</code>: <code>Venue_instance.Concert_set.all()</code>. I've never had any problems using my method.</p>
<p>The way I see it, using the <code>IntegerField</code> and <code>objects.filter()</code> is just as much of a "ManyToOne" relationship as a <code>ForeignKey</code>, so I want to know where I'm wrong. Why are <code>ForeignKeys</code> advantageous? Are they faster? Is it better database design? Cleaner code?</p>
|
<p>I would say that the most practical benefit of a foreign key is the ability to query across relationships automatically. Django generates the JOINs automatically.</p>
<p>The automatic reverse relation helpers are great too as you mentioned.</p>
<p>Here are some examples that would be more complicated with only an integer relationship.</p>
<pre><code>concerts = Concert.objects.filter(...)
concerts.order_by('venue__attribute') # ordering beyond PK.
concerts.filter(venue__name='foo') # filter by a value across the relationship
concerts.values_list('venue__name') # get just venue names
concerts.values('venue__city').annotate() # get unique values across the venue
concerts.filter(venue__more__relationships='foo')
Venue.objects.filter(concert__name='Coachella') # reverse lookups work too
# with an integer field for Concert.venue, you'd have to do something like...
Venue.objects.filter(id__in=Concert.objects.filter(name='Coachella'))
</code></pre>
<p>As others have pointed out... database integrity is useful, cascading deletes (customizable of course), and <em>facepalm</em> it just occurred to me that the django admin and forms framework work amazingly with foreign keys. </p>
<pre><code>class ConcertInline(admin.TabularInline):
    model = Concert

class VenueAdmin(admin.ModelAdmin):
    inlines = [ConcertInline]

# that was quick!
</code></pre>
<p>I'm sure there are more examples of django features handling foreign keys.</p>
|
python|django
| 3 |
1,905,481 | 55,392,456 |
Underline abruptly appears and deletes the code. How can I prevent this?
|
<p>Sometimes when I'm editing python code in Jupyter Notebook, an underline abruptly appears. </p>
<p><a href="https://i.stack.imgur.com/BasWq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BasWq.png" alt="enter image description here"></a></p>
<p>When I try to edit the code anyway, the underlined code is deleted as soon as I click it or try to select part of the code with the Shift key or by dragging the mouse. I tried Ctrl+A to select and copy the whole code, but Ctrl+A deleted the whole code.</p>
<p>Actually I don't know what is happening. I don't know exactly how I created the underline or how I deleted the underlined code, hence, I cannot prevent it from happening.</p>
<p>I'm new to Python and Jupyter and trying to use it.
But this has already happened 3-4 times. Please help me.</p>
|
<p>I had the same issue today and didn't know why; that's the reason I found this post.
However, I was able to recover with the following method:</p>
<p>1. Hold Ctrl+Z (not just a single press) and the deleted content will come back.</p>
<p>2. Once everything deleted is back, save the notebook and reopen it.</p>
|
python|jupyter-notebook
| 2 |
1,905,482 | 55,162,668 |
Calculate similarity between list of words
|
<p>I want to calculate the similarity between two list of words, for example :</p>
<p><code>['email','user','this','email','address','customer']</code></p>
<p>is similar to this list:</p>
<p><code>['email','mail','address','netmail']</code></p>
<p>I want the first pair to have a higher similarity score than a list such as
<code>['address','ip','network']</code>, even though <strong><code>address</code></strong> exists in that list.</p>
|
<p>Since you haven't really been able to demonstrate a concrete expected output, here is my best shot:</p>
<pre><code>list_A = ['email','user','this','email','address','customer']
list_B = ['email','mail','address','netmail']
</code></pre>
<p>In the above two list, we will find the cosine similarity between each element of the list with the rest. i.e. <code>email</code> from <code>list_B</code> with every element in <code>list_A</code>:</p>
<pre><code>def word2vec(word):
    from collections import Counter
    from math import sqrt

    # count the characters in word
    cw = Counter(word)
    # precomputes a set of the different characters
    sw = set(cw)
    # precomputes the "length" of the word vector
    lw = sqrt(sum(c*c for c in cw.values()))

    # return a tuple
    return cw, sw, lw

def cosdis(v1, v2):
    # which characters are common to the two words?
    common = v1[1].intersection(v2[1])
    # by definition of cosine distance we have
    return sum(v1[0][ch]*v2[0][ch] for ch in common)/v1[2]/v2[2]

list_A = ['email','user','this','email','address','customer']
list_B = ['email','mail','address','netmail']

threshold = 0.80  # if needed

for key in list_A:
    for word in list_B:
        try:
            # print(key)
            # print(word)
            res = cosdis(word2vec(word), word2vec(key))
            # print(res)
            print("The cosine similarity between : {} and : {} is: {}".format(word, key, res*100))
            # if res > threshold:
            #     print("Found a word with cosine distance > 80 : {} with original word: {}".format(word, key))
        except IndexError:
            pass
</code></pre>
<p><strong>OUTPUT</strong>:</p>
<pre><code>The cosine similarity between : email and : email is: 100.0
The cosine similarity between : mail and : email is: 89.44271909999159
The cosine similarity between : address and : email is: 26.967994498529684
The cosine similarity between : netmail and : email is: 84.51542547285166
The cosine similarity between : email and : user is: 22.360679774997898
The cosine similarity between : mail and : user is: 0.0
The cosine similarity between : address and : user is: 60.30226891555272
The cosine similarity between : netmail and : user is: 18.89822365046136
The cosine similarity between : email and : this is: 22.360679774997898
The cosine similarity between : mail and : this is: 25.0
The cosine similarity between : address and : this is: 30.15113445777636
The cosine similarity between : netmail and : this is: 37.79644730092272
The cosine similarity between : email and : email is: 100.0
The cosine similarity between : mail and : email is: 89.44271909999159
The cosine similarity between : address and : email is: 26.967994498529684
The cosine similarity between : netmail and : email is: 84.51542547285166
The cosine similarity between : email and : address is: 26.967994498529684
The cosine similarity between : mail and : address is: 15.07556722888818
The cosine similarity between : address and : address is: 100.0
The cosine similarity between : netmail and : address is: 22.79211529192759
The cosine similarity between : email and : customer is: 31.62277660168379
The cosine similarity between : mail and : customer is: 17.677669529663685
The cosine similarity between : address and : customer is: 42.640143271122085
The cosine similarity between : netmail and : customer is: 40.08918628686365
</code></pre>
<blockquote>
<p>Note: I have also commented the <code>threshold</code> part in the code, in case
you only want the words if their similarity exceeds a certain
threshold i.e. 80%</p>
</blockquote>
<p><strong>EDIT</strong>:</p>
<p><strong>OP</strong>: <em>but what i want exactly to do in not the comparaison word by word but, list by list</em></p>
<p>Using <code>Counter</code> and <code>math</code>:</p>
<pre><code>from collections import Counter
import math

counterA = Counter(list_A)
counterB = Counter(list_B)

def counter_cosine_similarity(c1, c2):
    terms = set(c1).union(c2)
    dotprod = sum(c1.get(k, 0) * c2.get(k, 0) for k in terms)
    magA = math.sqrt(sum(c1.get(k, 0)**2 for k in terms))
    magB = math.sqrt(sum(c2.get(k, 0)**2 for k in terms))
    return dotprod / (magA * magB)

print(counter_cosine_similarity(counterA, counterB) * 100)
</code></pre>
<p><strong>OUTPUT</strong>:</p>
<pre><code>53.03300858899106
</code></pre>
|
python|data-mining|text-mining|similarity
| 12 |
1,905,483 | 52,838,903 |
Using input to determine which function to call in Python
|
<p>I'm supposed to create a function called menu, that will display the menu, check if the input is valid, and if it's not, ask the user to enter the input again. If the input is correct, then it returns the value.</p>
<p>I know that the argument has to be an int, but how do I make it take the user's input and then call the corresponding function? This is what I have so far.</p>
<pre><code>def menu(choice):
    print("===============================================")
    print("Welcome to Students' Result System")
    print("===============================================")
    print("1: Display modules average scores")
    print("2: Display modules top scorer")
    print("0: Exit")
    choice=int(input("Enter choice:"))
    while (choice=="" or choice!=0 or choice!=1 or choice!=2):
        print("Invalid choice, please enter again")
        choice=int(input("Enter choice:"))
    return choice
    if choice ==1:
        display_modules_average_scores()
    elif choice ==2:
        display_modules_top_scorer()
    elif choice==0:
        print("Goodbye")

def display_modules_average_scores():
    print("average")

def display_modules_top_scorer():
    print ("top")

def main():
    menu(choice)

main()
</code></pre>
|
<p>You have several errors in the code you've presented:</p>
<ol>
<li><p>Python is a whitespace-sensitive language. You need to indent your code so that the code relevant to the function is contained at a higher indent level than the <code>def</code> declaration it's associated with.</p>
</li>
<li><p>You have defined <code>choice</code> as an input to the <code>menu</code> function, and are calling <code>menu()</code> using the <code>main()</code> method. However, <code>choice</code> is not a defined variable at that point, and is not defined in your code until you accept the input from the user. Thus, the code will fail when attempting to use this call.</p>
<p>In the way you have this code currently set up, you can simply remove <code>choice</code> as an input to <code>menu()</code> in both the definition as well as the invocation, and the menu will appear as expected.</p>
</li>
<li><p>You also seem to have a misunderstanding when it comes to creating conditional statements. Your conditionals, joined by an <code>or</code>, will not be evaluated after the first condition that returns <code>true</code>. Thus, a menu choice of <code>1</code> will make the conditional <code>choice!=0</code> <code>TRUE</code>, and will always enter the <code>Invalid choice</code> state.</p>
<p>If you're interested in validating the user's input, you might want to consider chaining the numeric comparisons (after the blank string checking) using the <code>and</code> keyword:</p>
<pre><code>while (choice=="" or (choice!=0 and choice!=1 and choice!=2)):
</code></pre>
</li>
</ol>
<p>With all these issues resolved, your code would look more like the following:</p>
<pre><code>def menu():
    print("===============================================")
    print("Welcome to Students' Result System")
    print("===============================================")
    print("1: Display modules average scores")
    print("2: Display modules top scorer")
    print("0: Exit")
    choice=int(input("Enter choice:"))
    while (choice=="" or (choice!=0 and choice!=1 and choice!=2)):
        print("Invalid choice, please enter again")
        choice=int(input("Enter choice:"))
    if choice ==1:
        display_modules_average_scores()
    elif choice ==2:
        display_modules_top_scorer()
    elif choice==0:
        print("Goodbye")
    return choice

def display_modules_average_scores():
    print("average")

def display_modules_top_scorer():
    print ("top")

def main():
    menu()

main()
</code></pre>
<p><a href="https://repl.it/repls/PlumpSoggyTwintext" rel="nofollow noreferrer">Repl.it</a></p>
|
python
| 0 |
1,905,484 | 47,652,885 |
How to get a number of same character and their position in a string in Python?
|
<p>I have a string, list and start position:</p>
<pre><code>s = "NNDRJGLDFDNJASNJBSA82NNNNNDHDWUEB3J4JJX"
l = [0, ""]
start = 0
</code></pre>
<p>Now I want to extract all the N's and their positions in the string. What I have tried so far is:</p>
<pre><code>for i in range(len(s)):
    if s[i] == "N":
        l[0] = i+start
        l[1] = s[i]
</code></pre>
<p>But I only get the last "N" character from the string. Any suggestions?</p>
|
<p>You can use a list comprehension combined with <a href="https://docs.python.org/3.6/library/functions.html#enumerate" rel="nofollow noreferrer"><code>enumerate()</code></a> to get the indices of each target character:</p>
<pre><code>s = "NNDRJGLDFDNJASNJBSA82NNNNNDHDWUEB3J4JJX"
positions = [i for i,c in enumerate(s) if c == 'N']
>>> positions
[0, 1, 10, 14, 21, 22, 23, 24, 25]
</code></pre>
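<p>If you also want to keep the matched character alongside its position (the <code>l = [0, ""]</code> structure in the question suggests storing both), the same comprehension can emit <code>(index, char)</code> pairs. A small sketch:</p>

```python
s = "NNDRJGLDFDNJASNJBSA82NNNNNDHDWUEB3J4JJX"

# Collect (position, character) pairs for every 'N' in the string
pairs = [(i, c) for i, c in enumerate(s) if c == 'N']
print(pairs[:3])  # [(0, 'N'), (1, 'N'), (10, 'N')]
```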
|
python|string|extract
| 3 |
1,905,485 | 16,564,609 |
How can change or override sorl-thumbnail cache path and add image with absolute path?
|
<p>I have and mediaservice website and my file storage is in other folder from media folder and I want to cache thumbnails in the other folder, How can I change sorl-thumbnail cache folder and How can I give image with absolute path to make thumbnail ?</p>
|
<p>Take a look at <a href="https://github.com/sorl/sorl-thumbnail/blob/master/sorl/thumbnail/base.py" rel="noreferrer">the class responsible</a> for naming thumbnails produced by sorl-thumbnail.</p>
<p>You could subclass it and use your custom class as your thumbnail backend:</p>
<pre><code># in your settings.py:
THUMBNAIL_BACKEND = 'path.to.MyThumbnailBackend'

# some module, in one of your apps:
from sorl.thumbnail.base import ThumbnailBackend, EXTENSIONS
from sorl.thumbnail.conf import settings
from sorl.thumbnail.helpers import tokey, serialize

class MyThumbnailBackend(ThumbnailBackend):
    def _get_thumbnail_filename(self, source, geometry_string, options):
        """
        Computes the destination filename.
        """
        key = tokey(source.key, geometry_string, serialize(options))
        # make some subdirs
        path = '%s/%s/%s' % (key[:2], key[2:4], key)
        return '%s%s.%s' % (settings.THUMBNAIL_PREFIX, path,
                            EXTENSIONS[options['format']])
</code></pre>
<p>The previous snippet was the original code of <code>_get_thumbnail_filename</code>. You can tweak this code to generate the name at your convenience.</p>
|
python|django|sorl-thumbnail
| 6 |
1,905,486 | 31,799,450 |
Create a window on user display from System account
|
<p>I have a pywin32 application that I am running from a schtask that triggers at system startup. The task runs from the SYSTEM account so that it will run on any account logged in.</p>
<p>The application runs as expected (upon system startup) and reads/writes to disk, however the application's window will not show up on any user's account after login, even though the window is not created until a user logs in.</p>
<pre><code>import os

r = os.popen('quser console')
u = r.read()
if u:  # (variables previously initialized)
    self.hwnd = CreateWindow(mywinclass, "MyApp", style, \
                             0, 0, win32con.CW_USEDEFAULT, win32con.CW_USEDEFAULT, \
                             0, 0, hinst, None)
</code></pre>
<p>The window displays fine when running the app from a logged in user's console, but no window when initiated from the schtask.</p>
<p>My log indicates self.hwnd is a normal handle and CreateWindow shows no errors from GetLastError().</p>
<p>The task indicates no GDI objects in task manager when running from SYSTEM account, but of course shows objects when running from a logged in users console.</p>
<p>Is it possible to create a window from the SYSTEM account for a logged in user? How would I do this so that the app will run (either on startup or on logon trigger) for all users, but elevated privileges (so it will display for non-admin, but not allow him to delete the task)?</p>
|
<p>Okay, I now realize the security feature of session 0 isolation will not allow me to do what I want (create a UI window for a user from a system account), for good reasons. <a href="http://www.codeproject.com/Questions/270936/Start-Process-in-Session-from-a-Windows-Servic" rel="nofollow">This Q/A</a> helped me to understand the concept better.</p>
<p>I believe my options are to create a service or app with no UI that saves data to a readonly file, then another app that reads the data for the user on his UI. The service will run automatic with elevated privileges and can't be killed by any user (except admin).</p>
<p>The other option is to create an app as before, use schtasks on startup with SYSTEM account that does the same.</p>
<p>I think either option will need a separate app (run with user account) that just reads the data created by the higher privilege service/app, allowing the user to interface and take actions allowed by him.</p>
|
python|windows|py2exe|pywin32|windows-task-scheduler
| 1 |
1,905,487 | 38,700,271 |
(Python) Converting image time series to ROS bag
|
<p>I have a set of images with metadata including time stamps and odometry data from a data set. I want to convert this time series of images into a rosbag so I can easily plug it out of the code I'll write and switch in real time data after testing. By the tutorial, the process of programmatically saving data to a rosbag seems to work like:</p>
<pre><code>import rosbag
from std_msgs.msg import Int32, String

bag = rosbag.Bag('test.bag', 'w')
try:
    str = String()
    str.data = 'foo'

    i = Int32()
    i.data = 42

    bag.write('chatter', str)
    bag.write('numbers', i)
finally:
    bag.close()
</code></pre>
<p>For images, the data type is different but the process is the same. However, when I have data that is supposed to be associated with each other, e.g. each image should be paired with a timestamp and an odometry recording. This sample code seems to write "chatter" and "numbers" disjointly. What would be the way to publish data so that for each image, it would be paired with other pieces of data automatically?</p>
<p>In other words, I need help with the commented lines in the following code:</p>
<pre><code>import rosbag
import cv2
from std_msgs.msg import Int32, String
from sensor_msgs.msg import Image

bag = rosbag.Bag('test.bag', 'w')

for data in data_list:
    (imname, timestamp, speed) = data
    ts_str = String()
    ts_str.data = timestamp
    s = Float32()
    s.data = speed
    image = cv2.imread(imname)
    # convert cv2 image to rosbag format
    # write image, ts_str and s to the rosbag jointly

bag.close()
</code></pre>
|
<p>Many of the standard message types like <a href="http://docs.ros.org/api/sensor_msgs/html/msg/Image.html" rel="nofollow">sensor_msgs/Image</a> or <a href="http://docs.ros.org/jade/api/nav_msgs/html/msg/Odometry.html" rel="nofollow">nav_msgs/Odometry</a> have an attribute <a href="http://docs.ros.org/api/std_msgs/html/msg/Header.html" rel="nofollow"><code>header</code></a> which contains an attribute <code>stamp</code> which is meant to be used for timestamps.</p>
<p>When creating the bag file, simply set the same timestamp in the headers of messages that belong together and then write them to separate topics:</p>
<pre><code>from sensor_msgs.msg import Image
from nav_msgs.msg import Odometry
...
img_msg = Image()
odo_msg = Odometry()
img_msg.header.stamp = timestamp
odo_msg.header.stamp = timestamp
# set other values of messages...
bag.write("image", img_msg)
bag.write("odometry", odo_msg)
</code></pre>
<p>Later when subscribing for the messages, you can then match messages from the different topics based on their timestamp.</p>
<p>Depending on how important exact synchronization of image and odometry is for you application, it might be enough to just use the last message you got on each topic and assume they fit together.
In case you need something more precise, there is a <a href="http://wiki.ros.org/message_filters#ApproximateTime_Policy" rel="nofollow">message_filter</a> which takes care of synchronizing two topics and provides the messages of both topics together in one callback.</p>
|
python|opencv|ros
| 1 |
1,905,488 | 26,345,972 |
How do I upgrade the SQLite version used by Python's SQLite3 module on Mac?
|
<p>I would like to use the SQLite version 3.8 with Python, but the SQLite3 module is using an out of date version. I've installed SQLite version 3.8.4.3 on my Mac, but sqlite3.sqlite_version still returns 3.7.13.</p>
<p>I've done quite a bit of searching on SO and elsewhere, but can't seem to find a definitive answer.</p>
<p>Thanks!</p>
|
<p>From your comments, your problem is that your pre-installed sqlite 3.7 comes higher on your path than your third-party 3.8. This means that when you build <code>pysqlite2</code>, by default, it will find and use that 3.7, so it's not doing you any good. And you probably don't want to change around your whole path just to deal with this.</p>
<p>But that's fine, as long as the 3.8 is found first at build time, it doesn't matter what comes first at runtime; the path to 3.8 will be baked into the module. There are a number of ways to do this, but the simplest is something like this:</p>
<pre><code>$ brew install sqlite3
$ sudo -s
# LDFLAGS=-L/usr/local/opt/sqlite/lib CPPFLAGS=-I/usr/local/opt/sqlite/include pip2.7 install pysqlite
# ^D
$ python
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.7.13'
>>> import pysqlite2.dbapi2
>>> pysqlite2.dbapi2.sqlite_version
'3.8.6'
</code></pre>
<p>The <code>LDFLAGS</code> and <code>CPPFLAGS</code> variables came from the output of the <code>brew install sqlite3</code> step. If you've installed <code>sqlite3</code> in some other way, you'll need to get the appropriate values, possibly <code>/usr/local/lib</code> and <code>/usr/local/include</code>, but if not, search for <code>libsqlite3.dylib</code> and <code>sqlite3.h</code>.</p>
<p>Note that if you follow <em>exactly</em> these steps, you'll get a non-fat version of <code>libsqlite3</code>, meaning that <code>pysqlite2</code> won't work in 32-bit mode. I doubt that's an issue for you, but if it is, you can just install it <code>--universal</code>, or use a different installer instead of Homebrew.</p>
|
python|macos|sqlite
| 6 |
1,905,489 | 60,045,322 |
Capitalise and Sort in Python
|
<p>I have created a small function that sorts the rows in a CSV file alphabetically. However, it sorts capitalised entries separately from uncapitalised ones. Is there any way to capitalise all of the entries and then sort them?</p>
<pre><code>import csv

def CSV_alphabetisch():
    try:
        reader = csv.reader(open("G.csv"), delimiter=";")
        sortedlist = sorted(reader)
        with open("G.csv", "w") as new:
            writer = csv.writer(new, delimiter=";")
            for n in sortedlist:
                writer.writerows([n])
    except IndexError:
        print("Index Error")

CSV_alphabetisch()
</code></pre>
<p>I have tried using the .capitalise() function, but I unfortunately get an error message. I would appreciate any help.</p>
<p>Thank you </p>
|
<p>Do you mean: <code>'string'.capitalize()</code>? (with a 'z')</p>
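<p>If the goal is a case-insensitive sort rather than permanently capitalising the data, passing a <code>key</code> function to <code>sorted</code> is usually cleaner. A sketch, assuming each row's first column is the sort key:</p>

```python
rows = [["banana"], ["Apple"], ["cherry"]]

# Sort case-insensitively on the first column without modifying the data
sorted_rows = sorted(rows, key=lambda row: row[0].lower())
print(sorted_rows)  # [['Apple'], ['banana'], ['cherry']]
```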
|
python|csv|sorting|alphabetic
| 1 |
1,905,490 | 28,355,069 |
How do I install OpenCV for Python 3.4?
|
<p>I use Python 3.4 through Anaconda distribution. They don't seem to have Python 3.4 bindings for OpenCV. I tried to use Cmake from the source, but to no avail.
Could anybody please help me to install OpenCV for Python 3.4.x?</p>
|
<pre><code>conda install -c menpo opencv3
</code></pre>
<p>Does the trick. However you have to have anaconda installed.</p>
|
python|opencv
| 8 |
1,905,491 | 43,987,072 |
Write output of a while loop to multiple text files
|
<p>I have two issues: the while loop is finishing at 1.1 not 1 and how can I save a text file for each value of alpha_min as the way I wrote the code, only the last message of alpha_min is being saved in the text file? </p>
<pre><code>alpha_min = 0
alpha_max = 1

while (alpha_min < alpha_max):
    alpha_min += 0.1

    #Length of message
    length_msg = (alpha_min * n)
    len_msg = int(length_msg)
    print(alpha_min)

    #Generates random messages, 1D vectors consisting of 1s and 0s for different values of alpha
    msg = np.random.randint(2, size= len_msg)
    print(msg)

    #Save messages in text format representing each bit as 0 or 1 on a separate line
    msg_text_file = open("msg_file.txt", "w") # Path of Data File
    msg_text_file.write("\n".join(map(lambda x: str(x), msg)))
    msg_text_file.close()
</code></pre>
|
<p>Only the last message survives because the same filename is reused and overwritten on every iteration; give each alpha value its own file (or open a single file in append mode before the loop). Separately, the loop ends at 1.1 rather than 1 because repeatedly adding 0.1 accumulates floating-point error: after ten additions <code>alpha_min</code> is slightly below 1, so the test passes one extra time. Rounding after each increment fixes both the overshoot and the unwieldy filenames:</p>
<pre><code>alpha_min = 0
alpha_max = 1

while alpha_min < alpha_max:
    alpha_min = round(alpha_min + 0.1, 1)  # avoid floating-point drift

    #Length of message
    len_msg = int(alpha_min * n)
    print(alpha_min)

    #Generates random messages, 1D vectors consisting of 1s and 0s for different values of alpha
    msg = np.random.randint(2, size=len_msg)
    print(msg)

    #Save messages in text format, one file per alpha value (e.g. msg_file_0.1.txt)
    with open("msg_file_{}.txt".format(alpha_min), "w") as msg_text_file:
        msg_text_file.write("\n".join(str(x) for x in msg))
</code></pre>
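<p>As for why the loop finishes at 1.1 rather than 1: ten additions of 0.1 do not sum to exactly 1.0 in binary floating point, so the <code>alpha_min < alpha_max</code> test passes one extra time. A quick demonstration:</p>

```python
total = 0.0
for _ in range(10):
    total += 0.1

print(total)        # 0.9999999999999999, not 1.0
print(total < 1.0)  # True, which is why the loop runs once more than expected
```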
|
python|printing|while-loop|text-files
| 1 |
1,905,492 | 13,941,545 |
Python re.compile between two html tags
|
<p>This should be quite straightforward but I can't quite twig it. I want to get the name from this html string:</p>
<pre><code> soup = </ul>
Brian
<p class="f">
</code></pre>
<p>I've tried:</p>
<pre><code>namePattern = re.compile(r'(?<=</ul>)(.*?)(?<=<p)')
rev.reviewerName = re.findall(namePattern, str(soup))
</code></pre>
<p>and</p>
<pre><code>namePattern = re.compile(r'</ul>(.*?)<p')
</code></pre>
<p>Can you tell me how to do it? Thanks.</p>
|
<p>By default, <code>.</code> doesn't match newlines. You need to specify <a href="http://docs.python.org/2/library/re.html#re.S" rel="nofollow"><code>re.DOTALL</code></a> as the second argument to <a href="http://docs.python.org/2/library/re.html#re.compile" rel="nofollow"><code>re.compile()</code></a>.</p>
<p>Note that this will include the newlines as part of your capture group. If you don't want that, you can explicitly match them with <code>\s*</code>:</p>
<pre><code>In [5]: re.findall(r'</ul>\s*(.*?)\s*<p', s)
Out[5]: ['Brian']
</code></pre>
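<p>For completeness, the <code>re.DOTALL</code> approach looks like this; the capture then includes the surrounding newlines, which is why the <code>\s*</code> variant above is usually preferable here:</p>

```python
import re

s = '</ul>\nBrian\n<p class="f">'

# With re.DOTALL, '.' matches newlines too, so the lazy group can span lines
matches = re.findall(r'</ul>(.*?)<p', s, re.DOTALL)
print(matches)             # ['\nBrian\n']
print(matches[0].strip())  # 'Brian'
```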
|
python|regex|beautifulsoup
| 3 |
1,905,493 | 41,900,930 |
Python XML Compatible String
|
<p>I am writing an XML file using lxml and am having issues with control characters. I am reading text from a file to assign to an element that contains control characters. When I run the script I receive this error:</p>
<pre><code>ValueError: All strings must be XML compatible: Unicode or ASCII, no NULL bytes or control characters
</code></pre>
<p>So I wrote a small function to replace the control characters with a '?', when I look at the generated XML it appears that the control characters are new lines 0x0A. With this knowledge I wrote a function to encode there control characters :</p>
<pre><code>def encodeXMLText(text):
    text = text.replace("&", "&amp;")
    text = text.replace("\"", "&quot;")
    text = text.replace("'", "&apos;")
    text = text.replace("<", "&lt;")
    text = text.replace(">", "&gt;")
    text = text.replace("\n", "&#xA;")
    text = text.replace("\r", "&#xD;")
    return text
</code></pre>
<p>This still returns the same error as before. I want to preserve the new lines so simply stripping them isn't a valid option for me. No idea what I am doing wrong at this point. I am looking for a way to do this with lxml, similar to this:</p>
<pre><code> ruleTitle = ET.SubElement(rule,'title')
ruleTitle.text = encodeXMLText(titleText)
</code></pre>
<p>The other questions I have read either don't use lxml or don't address new line (/n) and line feed (/r) characters as control characters</p>
|
<p>I printed out the string to see which characters were causing the issue and noticed the bytes <code>\xe2\x80\x99</code> (a UTF-8 encoded right single quote) in the text. So the issue was the encoding; changing the code to the following fixed it:</p>
<pre><code>ruleTitle = ET.SubElement(rule,'title')
ruleTitle.text = titleText.decode('UTF-8')
</code></pre>
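<p>If the input can also contain genuine control characters (the source of the original <code>ValueError</code>), note that XML 1.0 simply forbids most C0 control characters, so escaping them as numeric references does not help; only tab, newline, and carriage return are allowed. A sketch of a filter that strips the disallowed ones:</p>

```python
import re

# XML 1.0 disallows C0 controls other than tab (\x09), LF (\x0a), CR (\x0d)
_illegal = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f]')

def strip_illegal_xml_chars(text):
    """Remove control characters that an XML serializer will reject."""
    return _illegal.sub('', text)

print(strip_illegal_xml_chars('ok\x01\x02 line\nnext'))  # 'ok line\nnext'
```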
|
python|xml|python-2.7|lxml|control-characters
| 0 |
1,905,494 | 47,460,168 |
Pandas data frame removing rows with 'nan' by column name
|
<p>After reading an Excel file via pandas <code>read_excel</code>, I end up with rows that contain the string <code>'nan'</code>. I tried to drop them using all the available methods discussed here, but it doesn't seem to work:</p>
<p>Here are the attempts: </p>
<p><code>df.dropna(subset=['A'], inplace=True)</code></p>
<p>I thought this would work; it reduced the number of rows in the data frame, but without removing the rows that have <code>'nan'</code>.</p>
<p><code>df = df[df.A.str.match('nan') == False]</code></p>
|
<p>We can <code>replace</code> the <code>'nan'</code> strings with real missing values first, then use <code>dropna</code>. Note that <code>replace</code> returns a copy, so assign the result back rather than calling <code>dropna</code> with <code>inplace=True</code> on that copy:</p>
<pre><code>df = df.replace({'A': {'nan': np.nan}}).dropna(subset=['A'])
</code></pre>
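<p>A minimal sketch of the fix on a toy frame (the column name is from the question; the data is made up):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['x', 'nan', 'y'], 'B': [1, 2, 3]})

# Turn the literal string 'nan' into a real NaN, then drop those rows
df = df.replace({'A': {'nan': np.nan}}).dropna(subset=['A'])
print(df['A'].tolist())  # ['x', 'y']
```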
|
python|pandas
| 1 |
1,905,495 | 58,561,475 |
How can I get more than one digit using parenthesis in regular expressions
|
<p>I was trying to extract values from HTML using urllib and regular expressions in Python 3, and when I ran this code, it only gave me one of the digits of the number instead of both, even though I added a "+" sign, meaning one or more times. What's wrong here?</p>
<pre><code>import re
import urllib.error, urllib.parse, urllib.request
from bs4 import BeautifulSoup

finalnums = []
sumn = 0

urlfile = urllib.request.urlopen("http://py4e-data.dr-chuck.net/comments_42.html")
html = urlfile.read()
soup = BeautifulSoup(html, "html.parser")
spantags = soup("span")

for span in spantags:
    span = span.decode()
    numlist = re.findall(".+([0-9].*)<", span)
    print(numlist)
    finalnums.extend(numlist)

for anum in finalnums:
    sumn = sumn + int(anum)

print("Sum = ", sumn)
</code></pre>
<p>This is an example of the string I'm trying to extract the number from:</p>
<pre><code> <span class="comments">54</span>
</code></pre>
|
<p>Use <code>numlist=re.findall("\d+",span)</code> to search for all contiguous runs of digit characters. Your original pattern returned only one digit because the greedy <code>.+</code> consumes as much as possible, leaving just a single character for the capture group.</p>
<p><code>\d</code> is a character class that's equivalent to <code>[0-9]</code>, so it would also work if you did <code>numlist=re.findall("[0-9]+",span)</code></p>
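<p>A quick comparison of the two patterns on the sample string from the question:</p>

```python
import re

span = '<span class="comments">54</span>'

# The greedy .+ consumes as much as possible, leaving only the last digit
print(re.findall(r'.+([0-9].*)<', span))  # ['4']

# \d+ grabs each full run of digits
print(re.findall(r'\d+', span))           # ['54']
```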
|
python|regex|web-scraping
| 0 |
1,905,496 | 33,603,442 |
Python Tkinter Entry. I can't type Korean in to the Entry field
|
<p>I am making a p2p chat program in Python 3 using Tkinter. I can paste Korean text into the Entry widget and send to the other user and it works.</p>
<p>However, I can't 'type' Korean into the widget directly.</p>
<p>Why is this happening? </p>
<p>I am using Mac OS X Yosemite.</p>
|
<p>As mentioned by @mohit-bhasi, upgrading my python version to 3.8 which has tkinter 8.6 in it solved the problem. I can now type Korean directly into the widgets.</p>
<p>The only caveat is that I need to press the right arrow once when I finish typing to make the last letter appear; otherwise, the last letter is not recognized.</p>
|
python|unicode|tkinter|tkinter-entry
| 0 |
1,905,497 | 47,020,074 |
pandas getting boolean values for a new column in terms of existing columns for each row
|
<p>I want to get boolean values for a new column based on existing columns for each row, a sample <code>dataframe</code> is, </p>
<pre><code>key doc_no_list amount date doc_no
a1 [1,2] 1.0 2017-10-01 1
a2 [2,1] 1.0 2017-10-01 2
a3 [3] 2.0 2017-10-02 3
a4 [4,5] 3.0 2017-10-03 4
a5 [5,4] 3.0 2017-10-04 5
a6 [2,6] 4.0 2017-10-05 2
a7 [6,2] 4.0 2017-10-05 6
</code></pre>
<p>for rows with keys <code>a1</code> and <code>a2</code>, their <code>doc_no</code> (not unique) are put in a list <code>[1,2]</code> or <code>[2,1]</code> (this list has been keeping unique, i.e. no duplicate <code>doc_no</code>), since they have the same <code>amount</code> value.</p>
<p>Now, for <code>doc_no_list</code> values whose sizes > 1, I want to check if the rows corresponding to each <code>doc_no</code> in each <code>doc_no_list</code> have the same <code>date</code> and <code>amount</code> values, if they do, put <code>True</code> in a new column <code>same_date</code>. So a result <code>dataframe</code> should look like,</p>
<pre><code>key doc_no_list amount date doc_no same_date
a1 [1,2] 1.0 2017-10-01 1 True
a2 [2,1] 1.0 2017-10-01 2 True
a3 [3] 2.0 2017-10-02 3 nan
a4 [4,5] 3.0 2017-10-03 4 False
a5 [5,4] 3.0 2017-10-04 5 False
a6 [2,6] 4.0 2017-10-05 2 True
a7 [6,2] 4.0 2017-10-05 6 True
</code></pre>
<p>I am wondering whats the best way to do this.</p>
|
<p>Sort each <code>doc_no_list</code> and convert it to a string, so that lists with the same elements in any order produce the same grouping key; then, within each group, use <code>duplicated</code> with <code>keep=False</code> to flag rows sharing the same <code>amount</code> and <code>date</code>:</p>
<pre><code>df['same_date'] = df.groupby(df['doc_no_list'].apply(lambda x: str(sorted(x)))).apply(lambda x: x.duplicated(['amount', 'date'], keep=False)).reset_index(level=0, drop=True)
df
Out[1246]:
key doc_no_list amount date doc_no same_date
0 a1 [1, 2] 1 10/1/2017 1 True
1 a2 [2, 1] 1 10/1/2017 2 True
2 a3 [3] 2 10/2/2017 3 False
3 a4 [4, 5] 3 10/3/2017 4 False
4 a5 [5, 4] 3 10/4/2017 5 False
5 a6 [2, 6] 4 10/5/2017 2 True
6 a7 [6, 2] 4 10/5/2017 6 True
</code></pre>
|
python|python-3.x|pandas|dataframe
| 1 |
1,905,498 | 46,859,990 |
How to use requests to login to this website
|
<p>I'm trying to automate some tasks with python, and webscraping. but first, I need to login to a website I have an account on. </p>
<p>I've seen several examples on stack overflow, but for some reason, this website won't let me login using requests. Can anyone tell me what I'm doing wrong?</p>
<p>The webpage:
<a href="https://www.americanbulls.com/Signin.aspx?lang=en" rel="nofollow noreferrer">https://www.americanbulls.com/Signin.aspx?lang=en</a></p>
<p>the form variables:
ctl00$MainContent$uEmail
ctl00$MainContent$uPassword</p>
<p>Is it that the variable names have '$' in them?</p>
<p>Any help would be greatly appreciated.</p>
<pre><code>import sys
print(sys.path)
sys.path.append('C:\program files\python36\lib\site-packages\pip\_vendor')
import requests
import sys
import time
EMAIL = '<my_email>'
PASSWORD = '<my_password>'
URL = 'https://www.americanbulls.com/Signin.aspx?lang=en'
# Start a session so we can have persistant cookies
session = requests.session()
#This is the form data that the page sends when logging in
login_data = {
'ctl00$MainContent$uEmail': EMAIL,
'ctl00$MainContent$uPassword': PASSWORD
}
# Authenticate
r = session.post(URL, data=login_data, timeout=15, verify=True)
# Try accessing a page that requires you to be logged in
r = session.get('https://www.americanbulls.com/members/SignalPage.aspx?lang=en&Ticker=SQ')
print(r.url)
</code></pre>
|
<p>I submitted the form using test@test.test as the email and test as the password, and when I inspected the request in the Network tab of Chrome DevTools, it showed that I had submitted the following form data:</p>
<pre><code>ctl00$ScriptManager1:ctl00$MainContent$UpdatePanel|ctl00$MainContent$btnSubmit
__LASTFOCUS:
__EVENTTARGET:
__EVENTARGUMENT:
__VIEWSTATE:/wEPDwULLTE5MzMzODAyNzIPZBYCZg9kFgICAQ9kFgICAw9kFgICBQ9kFhICAQ8WAh4FY2xhc3MFFmhlYWRlcmNvbnRhaW5lcl9zYWZhcmkWCgIBDzwrAAkCAA8WAh4OXyFVc2VWaWV3U3RhdGVnZAYPZBAWAmYCARYCPCsADAEAFgYeC05hdmlnYXRlVXJsBRVSZWdpc3Rlci5hc3B4P2xhbmc9ZW4eBFRleHQFCFJlZ2lzdGVyHgdUb29sVGlwBTFSZWdpc3RlciBub3cgdG8gZ2V0IGFjY2VzcyB0byBleGNsdXNpdmUgZmVhdHVyZXMhPCsADAEAFgYfAwUHU2lnbiBJbh8CBRNTaWduaW4uYXNweD9sYW5nPWVuHghTZWxlY3RlZGdkZAIDDw8WBB8CBRREZWZhdWx0LmFzcHg/bGFuZz1lbh4ISW1hZ2VVcmwFGH4vaW1nL2FtZXJpY2FuYnVsbHMxLmdpZmRkAgcPZBYCAgEPPCsACQIADxYCHwFnZAYPZBAWAWYWATwrAAwCABYCHwMFB0VuZ2xpc2gBD2QQFghmAgECAgIDAgQCBQIGAgcWCDwrAAwCABYGHwMFB0VuZ2xpc2gfAgUUL1NpZ25pbi5hc3B4P2xhbmc9ZW4fBWcCFCsAAhYCHgNVcmwFEn4vaW1nL2VuaWNvbjAxLnBuZ2Q8KwAMAgAWBB8DBQdEZXV0c2NoHwIFFC9TaWduaW4uYXNweD9sYW5nPWRlAhQrAAIWAh8HBRJ+L2ltZy9kZWljb24wMS5wbmdkPCsADAIAFgQfAwUG5Lit5paHHwIFFC9TaWduaW4uYXNweD9sYW5nPXpoAhQrAAIWAh8HBRJ+L2ltZy96aGljb24wMS5wbmdkPCsADAIAFgQfAwUJRnJhbsOnYWlzHwIFFC9TaWduaW4uYXNweD9sYW5nPWZyAhQrAAIWAh8HBRJ+L2ltZy9mcmljb24wMS5wbmdkPCsADAIAFgQfAwUIVMO8cmvDp2UfAgUUL1NpZ25pbi5hc3B4P2xhbmc9dHICFCsAAhYCHwcFEn4vaW1nL3RyaWNvbjAxLnBuZ2Q8KwAMAgAWBB8DBQlJbmRvbmVzaWEfAgUUL1NpZ25pbi5hc3B4P2xhbmc9aWQCFCsAAhYCHwcFEn4vaW1nL2lkaWNvbjAxLnBuZ2Q8KwAMAgAWBB8DBQhFc3Bhw7FvbB8CBRQvU2lnbmluLmFzcHg/bGFuZz1lcwIUKwACFgIfBwUSfi9pbWcvZXNpY29uMDEucG5nZDwrAAwCABYEHwMFCEl0YWxpYW5vHwIFFC9TaWduaW4uYXNweD9sYW5nPWl0AhQrAAIWAh8HBRJ+L2ltZy9pdGljb24wMS5wbmdkZGRkAgkPZBYCAgEPPCsABAEADxYEHgVWYWx1ZQULTGFzdCBVcGRhdGUeB1Zpc2libGVoZGQCCw9kFgICAQ88KwAEAQAPFgIfCWhkZAIDDxYCHwAFGG1haW5tZW51Y29udGFpbmVyX3NhZmFyaRYGAgEPPCsACQIADxYCHwFnZAYPZBAWDWYCAQICAgMCBAIFAgYCBwIIAgkCCgILAgwWDTwrAAwBABYEHwIFFERlZmF1bHQuYXNweD9sYW5nPWVuHwMFBEhPTUU8KwAMAQAWBB8DBQRBTUVYHwIFKVNpZ25hbExpc3QuYXNweD9sYW5nPWVuJk1hcmtldFN5bWJvbD1BTUVYPCsADAEAFgQfAwUETllTRR8CBSlTaWduYWxMaXN0LmFzcHg/bGFuZz1lbiZNYXJrZXRTeW1ib2w9TllTRTwrAAwBABYEHwMFBk5BU0RBUR8CBStTaWduYWxMaXN0LmFzcHg/bGFuZz1lbiZNYXJrZXRTeW1ib2w9TkFTREFRPCsADAEAFgQfAwUIT1RDIFBJTksfAgUpU2lnbmFsTGlzdC5hc3B4P2xhbmc9ZW4mTWFya2V0U3ltYm9sPVBJTks8KwAM
AQAWBB8DBQlQUkVGRVJSRUQfAgUuU2lnbmFsTGlzdC5hc3B4P2xhbmc9ZW4mTWFya2V0U3ltYm9sPVBSRUZFUlJFRDwrAAwBABYEHwMFCFdBUlJBTlRTHwIFLVNpZ25hbExpc3QuYXNweD9sYW5nPWVuJk1hcmtldFN5bWJvbD1XQVJSQU5UUzwrAAwBABYEHwMFB0lOREVYRVMfAgUcSW5kZXhTaWduYWxMaXN0LmFzcHg/bGFuZz1lbjwrAAwCABYEHwMFAmZ4HwIFGVNpZ25hbExpc3RGWC5hc3B4P2xhbmc9ZW4KPCsADgEAFgYeCUZvcmVDb2xvcgpgHgtGb250X0l0YWxpY2ceBF8hU0IChCA8KwAMAQAWAh8JaDwrAAwBABYCHwloPCsADAEAFgIfCWg8KwAMAQAWAh8JaGRkAgMPFCsABA8WBB8IBRRTdXBwb3J0LmFzcHg/bGFuZz1lbh8JaGRkZDwrAAUBABYCHwMFBEhlbHBkAgUPZBYCAgMPPCsABAEADxYCHwgFJmh0dHBzOi8vd3d3LnR3aXR0ZXIuY29tL2FtZXJpY2FuX0J1bGxzZGQCBQ8WAh8ABRdzdWJtZW51Y29udGFpbmVyX3NhZmFyaRYKAgEPPCsACQIADxYCHwFnZAYPZBAWAWYWATwrAAwBABYEHwIFFVJlZ2lzdGVyLmFzcHg/bGFuZz1lbh8DBTFSZWdpc3RlciBub3cgdG8gZ2V0IGFjY2VzcyB0byBleGNsdXNpdmUgZmVhdHVyZXMhZGQCAw88KwAJAgAPFgQfAWcfCWhkBg9kEBYBZhYBPCsADAEAFgQfAgUTU2lnbmluLmFzcHg/bGFuZz1lbh8DBQdTaWduIEluZGQCBQ88KwAJAgAPFgIfAWdkBg9kEBYBZhYBPCsADAEAFgQfAgUfTWVtYmVyc2hpcEJlbmVmaXRzLmFzcHg/bGFuZz1lbh8DBRNNZW1iZXJzaGlwIEJlbmVmaXRzZGQCBw88KwAJAQAPFgQfAWcfCWhkZAILDzwrAAYBAzwrAAgBABYCHghOdWxsVGV4dAUMRW50ZXIgU3ltYm9sZAIHDxYCHwAFEGNvbnRhaW5lcl9zYWZhcmkWAgIBD2QWAgIBD2QWAgIBD2QWAgIDD2QWAmYPZBYcAgEPPCsABAEADxYCHwgFB1NpZ24gSW5kZAIDDzwrAAQBAA8WAh8IBQlOZXcgVXNlcj9kZAIFDxQrAAQPFgIfCAUVUmVnaXN0ZXIuYXNweD9sYW5nPWVuZGRkPCsABQEAFgIfAwUIUmVnaXN0ZXJkAgcPPCsABAEADxYCHwhlZGQCCQ88KwAEAQAPFgIfCAUFRW1haWxkZAINDw8WAh4MRXJyb3JNZXNzYWdlBQ1JbnZhbGlkIGVtYWlsZGQCDw8PFgIfDgUNSW52YWxpZCBlbWFpbGRkAhEPPCsABAEADxYCHwgFCFBhc3N3b3JkZGQCFQ8PFgIfDgUUUGFzc3dvcmQgaXMgcmVxdWlyZWRkZAIXDzwrAAQBAA8WAh8IBQtSZW1lbWJlciBNZWRkAhsPDxYCHwMFB1NpZ24gSW5kZAIdDw8WAh8DBQZDYW5jZWxkZAIfDzwrAAQBAA8WAh8IBShJZiB5b3UgY2Fubm90IHJlYWNoIHlvdXIgYWNjb3VudCwgcGxlYXNlZGQCIQ8UKwAEDxYCHwgFGVNlbmRQYXNzd29yZC5hc3B4P2xhbmc9ZW5kZGQ8KwAFAQAWAh8DBQtjbGljayBoZXJlLmQCCQ8WAh8ABRB3aGl0ZWJhbnRfc2FmYXJpZAILDxYCHwAFG3N1cHBvcnRtZW51Y29udGFpbmVyX3NhZmFyaRYCAgEPPCsACQIADxYCHwFnZAYPZBAWBmYCAQICAgMCBAIFFgY8KwAMAQAWBB8DBQhBYm91dCBVcx8CBRRBYm91dFVzLmFzcHg/bGFuZz1lbjwrAAwBABYEHwMFB1N1cHBvcnQfAgUUU3Vw
cG9ydC5hc3B4P2xhbmc9ZW48KwAMAQAWBB8DBQdQcml2YWN5HwIFFFByaXZhY3kuYXNweD9sYW5nPWVuPCsADAEAFgQfAwUDVE9THwIFEFRvcy5hc3B4P2xhbmc9ZW48KwAMAQAWBB8DBRNNZW1iZXJzaGlwIEJlbmVmaXRzHwIFH01lbWJlcnNoaXBCZW5lZml0cy5hc3B4P2xhbmc9ZW48KwAMAQAWBB8DBQ9JbXBvcnRhbnQgTGlua3MfAgUbSW1wb3J0YW50TGlua3MuYXNweD9sYW5nPWVuZGQCDQ8WAh8ABRdmb290ZXJjb250YWluZXIxX3NhZmFyaRYCAgEPPCsACQIADxYCHwFnZAYPZBAWCGYCAQICAgMCBAIFAgYCBxYIPCsADAEAFgYfAwUHRW5nbGlzaB8FZx8CBRZ+Ly9TaWduaW4uYXNweD9sYW5nPWVuPCsADAEAFgQfAwUHRGV1dHNjaB8CBRZ+Ly9TaWduaW4uYXNweD9sYW5nPWRlPCsADAEAFgQfAwUG5Lit5paHHwIFFn4vL1NpZ25pbi5hc3B4P2xhbmc9emg8KwAMAQAWBB8DBQlGcmFuw6dhaXMfAgUWfi8vU2lnbmluLmFzcHg/bGFuZz1mcjwrAAwBABYEHwMFCFTDvHJrw6dlHwIFFn4vL1NpZ25pbi5hc3B4P2xhbmc9dHI8KwAMAQAWBB8DBQlJbmRvbmVzaWEfAgUWfi8vU2lnbmluLmFzcHg/bGFuZz1pZDwrAAwBABYEHwMFCEVzcGHDsW9sHwIFFn4vL1NpZ25pbi5hc3B4P2xhbmc9ZXM8KwAMAQAWBB8DBQhJdGFsaWFubx8CBRZ+Ly9TaWduaW4uYXNweD9sYW5nPWl0ZGQCDw8WAh8ABRdmb290ZXJjb250YWluZXIzX3NhZmFyaRYOAgEPFgIeCWlubmVyaHRtbAUMRGlzY2xhaW1lcnM6ZAIDDxYCHw8FjwVBbWVyaWNhbmJ1bGxzLmNvbSBMTEMgaXMgbm90IHJlZ2lzdGVyZWQgYXMgYW4gaW52ZXN0bWVudCBhZHZpc2VyIHdpdGggdGhlIFUuUy4gU2VjdXJpdGllcyBhbmQgRXhjaGFuZ2UgQ29tbWlzc2lvbi4gIFJhdGhlciwgQW1lcmljYW5idWxscy5jb20gTExDIHJlbGllcyB1cG9uIHRoZSDigJxwdWJsaXNoZXLigJlzIGV4Y2x1c2lvbuKAnSBmcm9tIHRoZSBkZWZpbml0aW9uIG9mIGludmVzdG1lbnQgYWR2aXNlciBhcyBwcm92aWRlZCB1bmRlciBTZWN0aW9uIDIwMihhKSgxMSkgb2YgdGhlIEludmVzdG1lbnQgQWR2aXNlcnMgQWN0IG9mIDE5NDAgYW5kIGNvcnJlc3BvbmRpbmcgc3RhdGUgc2VjdXJpdGllcyBsYXdzLiBBcyBzdWNoLCBBbWVyaWNhbmJ1bGxzLmNvbSBMTEMgZG9lcyBub3Qgb2ZmZXIgb3IgcHJvdmlkZSBwZXJzb25hbGl6ZWQgaW52ZXN0bWVudCBhZHZpY2UuIFRoaXMgc2l0ZSBhbmQgYWxsIG90aGVycyBvd25lZCBhbmQgb3BlcmF0ZWQgYnkgQW1lcmljYW5idWxscy5jb20gTExDIGFyZSBib25hIGZpZGUgcHVibGljYXRpb25zIG9mIGdlbmVyYWwgYW5kIHJlZ3VsYXIgY2lyY3VsYXRpb24gb2ZmZXJpbmcgaW1wZXJzb25hbCBpbnZlc3RtZW50LXJlbGF0ZWQgYWR2aWNlIHRvIG1lbWJlciBhbmQgL29yIHByb3NwZWN0aXZlIG1lbWJlcnMuZAIFDxYCHw8FrAJBbWVyaWNhbmJ1bGxzLmNvbSBpcyBhbiBpbmRlcGVuZGVudCB3ZWJzaXRlLiBBbWVyaWNhbmJ1bGxzLmNvbSBMTEMgZG9lcyBub3QgcmVjZWl2ZSBjb21wZW5zYXRp
b24gYnkgYW55IGRpcmVjdCBvciBpbmRpcmVjdCBtZWFucyBmcm9tIHRoZSBzdG9ja3MsIHNlY3VyaXRpZXMgYW5kIG90aGVyIGluc3RpdHV0aW9ucyBvciBhbnkgdW5kZXJ3cml0ZXJzIG9yIGRlYWxlcnMgYXNzb2NpYXRlZCB3aXRoIHRoZSBicm9hZGVyIG5hdGlvbmFsIG9yIGludGVybmF0aW9uYWwgZm9yZXgsIGNvbW1vZGl0eSBhbmQgc3RvY2sgbWFya2V0cy5kAgcPFgIfDwX3CFRoZXJlZm9yZSwgQW1lcmljYW5idWxscy5jb20gYW5kIEFtZXJpY2FuYnVsbHMuY29tIExMQyBpcyBleGVtcHQgZnJvbSB0aGUgZGVmaW5pdGlvbiBvZiDigJxpbnZlc3RtZW50IGFkdmlzZXLigJ0gYXMgcHJvdmlkZWQgdW5kZXIgU2VjdGlvbiAyMDIoYSkgKDExKSBvZiB0aGUgSW52ZXN0bWVudCBBZHZpc2VycyBBY3Qgb2YgMTk0MCBhbmQgY29ycmVzcG9uZGluZyBzdGF0ZSBzZWN1cml0aWVzIGxhd3MsIGFuZCBoZW5jZSByZWdpc3RyYXRpb24gYXMgc3VjaCBpcyBub3QgcmVxdWlyZWQuIFdlIGFyZSBub3QgYSByZWdpc3RlcmVkIGJyb2tlci1kZWFsZXIuIE1hdGVyaWFsIHByb3ZpZGVkIGJ5IEFtZXJpY2FuYnVsbHMuY29tIExMQyBpcyBmb3IgaW5mb3JtYXRpb25hbCBwdXJwb3NlcyBvbmx5LCBhbmQgdGhhdCBubyBtZW50aW9uIG9mIGEgcGFydGljdWxhciBzZWN1cml0eSBpbiBhbnkgb2Ygb3VyIG1hdGVyaWFscyBjb25zdGl0dXRlcyBhIHJlY29tbWVuZGF0aW9uIHRvIGJ1eSwgc2VsbCwgb3IgaG9sZCB0aGF0IG9yIGFueSBvdGhlciBzZWN1cml0eSwgb3IgdGhhdCBhbnkgcGFydGljdWxhciBzZWN1cml0eSwgcG9ydGZvbGlvIG9mIHNlY3VyaXRpZXMsIHRyYW5zYWN0aW9uIG9yIGludmVzdG1lbnQgc3RyYXRlZ3kgaXMgc3VpdGFibGUgZm9yIGFueSBzcGVjaWZpYyBwZXJzb24uIFRvIHRoZSBleHRlbnQgdGhhdCBhbnkgb2YgdGhlIGluZm9ybWF0aW9uIG9idGFpbmVkIGZyb20gQW1lcmljYW5idWxscy5jb20gTExDIG1heSBiZSBkZWVtZWQgdG8gYmUgaW52ZXN0bWVudCBvcGluaW9uLCBzdWNoIGluZm9ybWF0aW9uIGlzIGltcGVyc29uYWwgYW5kIG5vdCB0YWlsb3JlZCB0byB0aGUgaW52ZXN0bWVudCBuZWVkcyBvZiBhbnkgc3BlY2lmaWMgcGVyc29uLiBBbWVyaWNhbmJ1bGxzLmNvbSBMTEMgZG9lcyBub3QgcHJvbWlzZSwgZ3VhcmFudGVlIG9yIGltcGx5IHZlcmJhbGx5IG9yIGluIHdyaXRpbmcgdGhhdCBhbnkgaW5mb3JtYXRpb24gcHJvdmlkZWQgdGhyb3VnaCBvdXIgd2Vic2l0ZXMsIGNvbW1lbnRhcmllcywgb3IgcmVwb3J0cywgaW4gYW55IHByaW50ZWQgbWF0ZXJpYWwsIG9yIGRpc3BsYXllZCBvbiBhbnkgb2Ygb3VyIHdlYnNpdGVzLCB3aWxsIHJlc3VsdCBpbiBhIHByb2ZpdCBvciBsb3NzLmQCCQ8WAh8PBeMGR292ZXJubWVudCByZWd1bGF0aW9ucyByZXF1aXJlIGRpc2Nsb3N1cmUgb2YgdGhlIGZhY3QgdGhhdCB3aGlsZSB0aGVzZSBtZXRob2RzIG1heSBoYXZlIHdvcmtlZCBpbiB0aGUgcGFzdCwgcGFzdCByZXN1bHRzIGFyZSBub3Qg
bmVjZXNzYXJpbHkgaW5kaWNhdGl2ZSBvZiBmdXR1cmUgcmVzdWx0cy4gV2hpbGUgdGhlcmUgaXMgYSBwb3RlbnRpYWwgZm9yIHByb2ZpdHMgdGhlcmUgaXMgYWxzbyBhIHJpc2sgb2YgbG9zcy4gVGhlcmUgaXMgc3Vic3RhbnRpYWwgcmlzayBpbiBzZWN1cml0eSB0cmFkaW5nLiBMb3NzZXMgaW5jdXJyZWQgaW4gY29ubmVjdGlvbiB3aXRoIHRyYWRpbmcgc3RvY2tzIG9yIGZ1dHVyZXMgY29udHJhY3RzIGNhbiBiZSBzaWduaWZpY2FudC4gWW91IHNob3VsZCB0aGVyZWZvcmUgY2FyZWZ1bGx5IGNvbnNpZGVyIHdoZXRoZXIgc3VjaCB0cmFkaW5nIGlzIHN1aXRhYmxlIGZvciB5b3UgaW4gdGhlIGxpZ2h0IG9mIHlvdXIgZmluYW5jaWFsIGNvbmRpdGlvbiBzaW5jZSBhbGwgc3BlY3VsYXRpdmUgdHJhZGluZyBpcyBpbmhlcmVudGx5IHJpc2t5IGFuZCBzaG91bGQgb25seSBiZSB1bmRlcnRha2VuIGJ5IGluZGl2aWR1YWxzIHdpdGggYWRlcXVhdGUgcmlzayBjYXBpdGFsLiBOZWl0aGVyIEFtZXJpY2FuYnVsbHMuY29tIExMQywgbm9yIEFtZXJpY2FuYnVsbHMuY29tIG1ha2VzIGFueSBjbGFpbXMgd2hhdHNvZXZlciByZWdhcmRpbmcgcGFzdCBvciBmdXR1cmUgcGVyZm9ybWFuY2UuIEFsbCBleGFtcGxlcywgY2hhcnRzLCBoaXN0b3JpZXMsIHRhYmxlcywgY29tbWVudGFyaWVzLCBvciByZWNvbW1lbmRhdGlvbnMgYXJlIGZvciBlZHVjYXRpb25hbCBvciBpbmZvcm1hdGlvbmFsIHB1cnBvc2VzIG9ubHkuZAILDxYCHw8F3wZEaXNwbGF5ZWQgaW5mb3JtYXRpb24gaXMgYmFzZWQgb24gd2lkZWx5LWFjY2VwdGVkIG1ldGhvZHMgb2YgdGVjaG5pY2FsIGFuYWx5c2lzIGJhc2VkIG9uIGNhbmRsZXN0aWNrIHBhdHRlcm5zLiBBbGwgaW5mb3JtYXRpb24gaXMgZnJvbSBzb3VyY2VzIGRlZW1lZCB0byBiZSByZWxpYWJsZSwgYnV0IHRoZXJlIGlzIG5vIGd1YXJhbnRlZSB0byB0aGUgYWNjdXJhY3kuIExvbmctdGVybSBpbnZlc3RtZW50IHN1Y2Nlc3MgcmVsaWVzIG9uIHJlY29nbml6aW5nIHByb2JhYmlsaXRpZXMgaW4gcHJpY2UgYWN0aW9uIGZvciBwb3NzaWJsZSBmdXR1cmUgb3V0Y29tZXMsIHJhdGhlciB0aGFuIGFic29sdXRlIGNlcnRhaW50eSDigJMgcmlzayBtYW5hZ2VtZW50IGlzIGNyaXRpY2FsIGZvciBzdWNjZXNzLiBFcnJvciBhbmQgdW5jZXJ0YWludHkgYXJlIHBhcnQgb2YgYW55IGZvcm0gb2YgbWFya2V0IGFuYWx5c2lzLiBQYXN0IHBlcmZvcm1hbmNlIGlzIG5vIGd1YXJhbnRlZSBvZiBmdXR1cmUgcGVyZm9ybWFuY2UuIEludmVzdG1lbnQvIHRyYWRpbmcgY2FycmllcyBzaWduaWZpY2FudCByaXNrIG9mIGxvc3MgYW5kIHlvdSBzaG91bGQgY29uc3VsdCB5b3VyIGZpbmFuY2lhbCBwcm9mZXNzaW9uYWwgYmVmb3JlIGludmVzdGluZyBvciB0cmFkaW5nLiBZb3VyIGZpbmFuY2lhbCBhZHZpc2VyIGNhbiBnaXZlIHlvdSBzcGVjaWZpYyBmaW5hbmNpYWwgYWR2aWNlIHRoYXQgaXMgYXBwcm9wcmlhdGUgdG8geW91ciBuZWVkcywgcmlzay10
b2xlcmFuY2UsIGFuZCBmaW5hbmNpYWwgcG9zaXRpb24uIEFueSB0cmFkZXMgb3IgaGVkZ2VzIHlvdSBtYWtlIGFyZSB0YWtlbiBhdCB5b3VyIG93biByaXNrIGZvciB5b3VyIG93biBhY2NvdW50LmQCDQ8WAh8PBdsBWW91IGFncmVlIHRoYXQgQW1lcmljYW5idWxscy5jb20gYW5kIEFtZXJpY2FuYnVsbHMuY29tIExMQyBpdHMgcGFyZW50IGNvbXBhbnksIHN1YnNpZGlhcmllcywgYWZmaWxpYXRlcywgb2ZmaWNlcnMgYW5kIGVtcGxveWVlcyBzaGFsbCBub3QgYmUgbGlhYmxlIGZvciBhbnkgZGlyZWN0LCBpbmRpcmVjdCwgaW5jaWRlbnRhbCwgc3BlY2lhbCBvciBjb25zZXF1ZW50aWFsIGRhbWFnZXMuZAIRDxYCHwAFHGJvdHRvbWJhbm5lcmNvbnRhaW5lcl9zYWZhcmlkGAEFHl9fQ29udHJvbHNSZXF1aXJlUG9zdEJhY2tLZXlfXxYIBQ9jdGwwMCRMb2dpbk1lbnUFC2N0bDAwJG1NYWluBQ5jdGwwMCRNYWluTWVudQUWY3RsMDAkRnJlZVJlZ2lzdGVyTWVudQUYY3RsMDAkTWVtYmVyc2hpcEJlbmVmaXRzBRJjdGwwMCRTZWFyY2hCdXR0b24FEWN0bDAwJFN1cHBvcnRNZW51BRNjdGwwMCRMYW5ndWFnZXNNZW51NlBIALTovVw6LJEOuDXyhCTS4+M=
__VIEWSTATEGENERATOR:ECDA716A
__EVENTVALIDATION:/wEdAAVswH4c0JxRe30eXDiX0bhcXr7XOgipC8DNcjKl0sbO7fwNII+YQgXfxmh/KZz6Myr4IcjYoaGuA6R78NuEHgsNQX9+ScDGDIM47zqhQCjs5Ynd+DEUmo0/Xv9Oy6tQgLO7ip/G
ctl00$mMain:{&quot;selectedItemIndexPath&quot;:&quot;0i0&quot;,&quot;checkedState&quot;:&quot;&quot;}
ctl00$MainMenu:{&quot;selectedItemIndexPath&quot;:&quot;&quot;,&quot;checkedState&quot;:&quot;&quot;}
ctl00$FreeRegisterMenu:{&quot;selectedItemIndexPath&quot;:&quot;&quot;,&quot;checkedState&quot;:&quot;&quot;}
ctl00$MembershipBenefits:{&quot;selectedItemIndexPath&quot;:&quot;&quot;,&quot;checkedState&quot;:&quot;&quot;}
ctl00$SearchBox$State:{&quot;rawValue&quot;:&quot;&quot;,&quot;validationState&quot;:&quot;&quot;}
ctl00$SearchBox:Enter Symbol
ctl00$MainContent$uEmail:test@test.test
ctl00$MainContent$uPassword:test
ctl00$MainContent$ASPxCheckBox1:I
ctl00$SupportMenu:{&quot;selectedItemIndexPath&quot;:&quot;&quot;,&quot;checkedState&quot;:&quot;&quot;}
ctl00$LanguagesMenu:{&quot;selectedItemIndexPath&quot;:&quot;&quot;,&quot;checkedState&quot;:&quot;&quot;}
DXScript:1_304,1_185,1_298,1_211,1_221,1_188,1_182,1_290,1_296,1_279,1_198,1_209,1_217,1_201
DXCss:1_40,1_50,1_53,1_51,1_4,1_16,1_13,0_4617,0_4621,1_14,1_17,Styles/Site.css,img/favicon.ico,https://adservice.google.com/adsid/integrator.js?domain=www.americanbulls.com,https://securepubads.g.doubleclick.net/static/3p_cookie.html
__ASYNCPOST:true
ctl00$MainContent$btnSubmit:Sign In
</code></pre>
<p>Your code looks fine. The script is most likely failing because you're not submitting everything that the browser would normally submit. You can either continue down the path you are on, submit all of the extra form data, and hope you don't have to deal with a CSRF token (a CSRF token is a randomly generated string that you're required to send back), or you can do as Sidharth Shah suggested and use <a href="https://en.wikipedia.org/wiki/Selenium_(software)" rel="nofollow noreferrer">Selenium</a>.</p>
<p>There is a Firefox extension for Selenium that will record your mouse and keyboard actions and, when you are done, export the result as Python code. That code depends on the Selenium library and a Selenium Chrome/Firefox/IE driver. When you run it, a new browser window opens up, controlled by the Selenium driver and your Python code. It's pretty cool: you're basically writing Python code that controls a browser window. You will have to modify the generated code a little to read the data from the page and do something with it once you're logged in, but the code for opening the browser window, navigating to the login page, filling in your login credentials, submitting the form, and navigating to other pages afterwards will all be written for you.</p>
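<p>If you do stay with <code>requests</code>, here is a minimal sketch of the first path. It harvests every hidden <code>&lt;input&gt;</code> on the login page (including <code>__VIEWSTATE</code> and <code>__EVENTVALIDATION</code>) so they can be merged into the POST payload. The HTML fragment below is made up for illustration; in practice you would feed the parser the body of a GET request to the real login page, using the same <code>requests.Session</code> for the GET and the POST so cookies are carried over:</p>

```python
from html.parser import HTMLParser

class HiddenFieldParser(HTMLParser):
    """Collects the name/value pair of every <input type="hidden"> on a page."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        attr = dict(attrs)
        if attr.get("type") == "hidden" and "name" in attr:
            self.fields[attr["name"]] = attr.get("value", "")

def hidden_fields(html):
    """Return a dict of all hidden form fields found in the given HTML."""
    parser = HiddenFieldParser()
    parser.feed(html)
    return parser.fields

# Made-up page fragment; in practice this would be session.get(login_url).text
sample = """<form>
  <input type="hidden" name="__VIEWSTATE" value="abc123" />
  <input type="hidden" name="__EVENTVALIDATION" value="def456" />
  <input type="text" name="ctl00$MainContent$uEmail" />
</form>"""

payload = hidden_fields(sample)
# Merge in the visible fields before session.post(login_url, data=payload)
payload["ctl00$MainContent$uEmail"] = "test@test.test"
payload["ctl00$MainContent$uPassword"] = "test"
print(sorted(payload))
```

<p>This still won't be enough if the server ties the hidden tokens to per-session state beyond cookies, which is exactly when Selenium becomes the easier option.</p>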
|
python|forms|python-requests
| 0 |
1,905,499 | 47,083,067 |
python openpyxl find certain cell then return the next nonempty cell
|
<p>I am using openpyxl to work with this Excel sheet. Once I find the cell that contains "Mandatory Field", I want to keep looking down that column to find the first nonempty value.</p>
<pre><code> for row in ws.iter_rows():
for cell in row[0:4]:
        if cell.value == 'Mandatory Field':
            print(cell.value)
</code></pre>
<p>This is what I have so far. I do not know how to tell it: now that you have found the cell with "Mandatory Field", return the value of the first nonempty cell below it. I am looking through 5 columns because I need to do this twice. </p>
|
<p>If you need to do this twice within a range of cells you should use a "sentinel" flag.</p>
<pre><code>sentinel = False
for row in ws.iter_rows(max_col=5):
for cell in row:
if cell.value == "Mandatory Field":
sentinel = True
        if sentinel:
print(cell.offset(row=1).value)
sentinel = False
</code></pre>
<p>Note that in this example you could avoid the sentinel entirely thanks to the <code>offset()</code> method, but I'm including it for illustration.</p>
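<p>Since the first nonempty value may be more than one row below the match, the single <code>offset(row=1)</code> call can be generalised to a small downward scan. This is a sketch: the helper is shown on a plain list of column values so it is easy to test, but in openpyxl you would step through <code>cell.offset(row=k).value</code> for k = 1, 2, ... in exactly the same way:</p>

```python
def first_nonempty_below(column_values, match_index):
    """Return the first value after match_index that is neither None
    nor an empty string, or None if the rest of the column is empty."""
    for value in column_values[match_index + 1:]:
        if value not in (None, ""):
            return value
    return None

# openpyxl equivalent: loop k = 1, 2, ... over cell.offset(row=k).value
column = ["Mandatory Field", None, "", "first real value", "later"]
print(first_nonempty_below(column, 0))  # -> first real value
```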
|
python|openpyxl
| 0 |